Machine Errors and Non-Deterministic Humans

A friendly robot holding onto a railing, illustrating the concept of designing systems that support variable components

“I can’t rely on LLMs because they’re not deterministic. How can I be sure what I’ll get?” — I bet you’ve thought that yourself. I did for sure.

It’s a fair point. LLMs can be unpredictable: the same input can produce different outputs. But here’s the thing: you and I aren’t deterministic either. A cup of coffee can dramatically improve our “inference.” A bad night’s sleep can tank it. We’re so unpredictable that we have a term for when our variability wreaks havoc: human error.

And when human error happens, we’ve learned not to simply blame the person. Instead, we examine the environment. Why didn’t the system prevent this? We expect systems to withstand human errors, or better yet, prevent them from happening in the first place.

LLMs are made in our likeness. They’re trained on the code, books, and articles we’ve written. So why do we blame them for human errors? Sorry, I meant machine errors.

[Read More]

The Missing Link in Vibe Coding: Feedback Loops

Vibe coding, or as some prefer to call it, “hands-off” software engineering, is having its moment. AI agents write code, implement features, and even spin up entire projects from a prompt. The results are impressive for demos. For production systems, not so much.

The knee-jerk reaction is to blame the models. That was my reaction too. But the problem isn’t the models. It’s what we’re not giving them.

Production-grade code is iterative. No one writes it right on the first attempt. Our profession is built around this fact: code reviews, refactoring, pair programming, testing. Domain-Driven Design makes iterative modeling a first-class concern, because getting the model right requires continuous learning and refinement. We iterate until the code is good enough. Yet when AI doesn’t nail it on the first try, we use it as proof that AI isn’t there yet.

I was in that camp. Then something changed my mind.

[Read More]

AI Doesn't Fix Your Real Bottleneck

An assembly line where an AI robot speeds up code production, creating a massive pile of blocks in front of an overwhelmed human operator with a warning alarm — illustrating how accelerating code generation creates a bottleneck at human comprehension.

Every other post on my feed celebrates how AI lets us write code faster: whole apps built in a matter of hours, 99% AI-generated codebases, hundred-fold productivity gains, and on and on. But does writing code faster actually make us more productive?

A Quick Detour Through a Factory

The Theory of Constraints says that every system’s throughput is limited by a single constraint: its bottleneck. What makes a system more effective? Improving the bottleneck. What makes it less effective? Improving anything else.

I know, that second part is counterintuitive. Here’s the thing: if you speed up a non-bottleneck, you don’t improve the system; you produce more work-in-progress that piles up in front of the bottleneck! More inventory. More cost. More waste. The system becomes more expensive to operate, not more productive.
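The effect is easy to see in numbers. Here is a toy sketch (with hypothetical rates, not data from the post) of a two-stage pipeline where generation feeds review, and review is the bottleneck: speeding up generation leaves throughput untouched and only grows the pile of work-in-progress.

```python
# Toy model: a two-stage pipeline where stage 1 (generation) feeds
# stage 2 (review), and review is the bottleneck. All rates are
# hypothetical, chosen only to illustrate the principle.

def simulate(gen_rate, review_rate, hours):
    """Return (items shipped, work-in-progress) after `hours` hours."""
    generated = gen_rate * hours
    shipped = min(generated, review_rate * hours)  # review caps throughput
    wip = generated - shipped                      # piles up before review
    return shipped, wip

# Review handles 2 items/hour in both scenarios.
print(simulate(gen_rate=3, review_rate=2, hours=8))   # → (16, 8)
print(simulate(gen_rate=30, review_rate=2, hours=8))  # → (16, 224)
```

Ten times the generation speed, identical output, twenty-eight times the inventory sitting in front of the reviewer.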

This is a well-established principle in manufacturing, and software manufacturing is no exception.

[Read More]

With AI, everything is so complicated... and this is great news!


Lately I’ve been spending more and more time researching AI models and their effects on software engineering and architecture, so it’s time to share my findings.

First, let me explain what I mean by “complicated.” I’m a huge fan of the Cynefin framework. If you are not familiar with it, here is the gist.

Cynefin

Say you need to decide on something. For example, let’s assume you need to change the behavior of a software application and are contemplating how to do it.

Cynefin is a tool for guiding the decision-making process in different kinds of situations. It says that first you need to understand what kind of situation you are in, and once you do, picking the optimal course of action is much easier. For that, the framework identifies four basic situations, which it calls domains:

[Read More]