
“I can’t rely on LLMs because they’re not deterministic. How can I be sure what I’ll get?” — I bet you’ve thought that yourself. I did for sure.
It’s a fair point. LLMs can be unpredictable: the same input can produce different outputs. But here’s the thing: you aren’t deterministic either, and neither am I. A cup of coffee can dramatically improve our “inference.” A bad night’s sleep can tank it. We’re so unpredictable that we have a term for when our variability wreaks havoc: human error.
And when human error happens, we’ve learned not to simply blame the person. Instead, we examine the environment: why didn’t the system prevent this? We expect systems to withstand human errors, or better yet, to prevent them from happening in the first place.
LLMs are made in our likeness. They’re trained on the code, books, and articles we’ve written. So why do we blame them for human errors? Sorry, I meant machine errors.
[Read More]