In a control system, an open loop is a red flag.

No sensor reading the result. No setpoint to compare against. No correction signal. The actuator fires and the system keeps moving — with no idea whether anything changed.

Engineers call it blind control. It's only acceptable when the relationship between input and output is so stable, so well-understood, that feedback would be redundant. A conveyor at fixed speed. A valve that opens to the same position every time.

In any system where conditions vary — and they always vary — open-loop control isn't a simplification. It's a design flaw.

Most people are open-loop systems.

I've spent 16 years watching this play out at scale. Organizations that couldn't tell you whether the work they were doing was producing the result they intended. Teams running the same retrospective process every quarter, generating observations, filing them, and arriving at the next quarter with the same problems. Not because they lacked capability. The loop was never closed — between what the system produced and what the people running it actually did differently.

The feedback existed. The correction never arrived.

I've done the same thing. For longer than I'd like to admit.

The industrial control loop has four parts. It's useful to know them precisely before translating them.

Sensor. It reads the current state of the process. Continuously. Without opinion.

Setpoint. The target state — what the process is supposed to produce.

Error signal. The gap between what is and what should be. The controller calculates this and decides what to do.

Actuator. The correction. It changes something in the process, and the sensor reads the result.

Close the loop, and the system self-corrects. Open the loop, and the system drifts. Not dramatically. Gradually. In a direction no one chose.
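The four parts fit in a dozen lines of code. Here is a minimal sketch: a hypothetical heater holding a temperature, where the gains, the numbers, and the cold-night disturbance are all my own illustration, not any particular plant. What matters is the shape. When conditions change, the closed loop corrects; the open loop quietly settles somewhere else.

```python
# A closed control loop in miniature: sensor -> error signal -> actuator.
# Every name and number here is illustrative, not taken from a real plant.

SETPOINT = 74.5   # target temperature, degrees C: the explicit setpoint

def run(steps: int, closed_loop: bool) -> float:
    temperature = SETPOINT      # start on target; drift has to earn its way in
    heater_output = 2.725       # actuator command tuned for the original conditions
    integral = 0.0              # accumulated error, so the correction removes offset

    for step in range(steps):
        ambient = 20.0 if step < steps // 2 else 5.0   # conditions vary: a cold night
        if closed_loop:
            error = SETPOINT - temperature             # error signal: is vs. should-be
            integral += error
            heater_output = 0.5 * error + 0.05 * integral  # PI correction
        # the actuator fires, heat leaks to ambient, the sensor reads the result
        temperature += heater_output - 0.05 * (temperature - ambient)

    return temperature

print(f"closed loop: {run(400, True):.1f} C")    # holds ~74.5 through the change
print(f"open loop:   {run(400, False):.1f} C")   # settles near 59.5, and stays there
```

One design choice worth noting: the correction uses both the current error and its accumulated history (a PI controller), because a proportional term alone leaves a standing offset between what is and what should be.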

Here is what I've come to understand: the most common constraint in any knowledge work system isn't the tools, the process, or the team.

It's the person running it.

Not because they're incompetent. Because they're unexamined.

The sensor isn't reading. Which means there's no error signal. Which means the actuator is firing based on assumption, habit, or inertia — not data. The system keeps producing. It just can't correct.

Metacognition is the sensor. It reads your current state — your assumptions, your confidence calibration, your actual output versus your intended one. Without it, you're operating blind. You might be performing well, or you might be repeating the same structural mistake for the third time this quarter. You won't know until the downstream signal arrives — late, noisy, and already compounded by everything that ran in the interval.

The setpoint is harder to name. In a control system it's explicit: 74.5°C, ±0.3. In a professional context, it's the question most people never stop long enough to answer: what, exactly, am I trying to produce — and what claim does my work advance?

If the setpoint is vague, the error signal is meaningless. You can't calculate a gap if you don't know what you're aiming at. Performing without a setpoint isn't underperformance. It's a different problem entirely — and one that no tool will solve.

Closing the loop on yourself is not a productivity practice. It's an engineering requirement.

It means building the sensor — the discipline of reading your actual state instead of assuming it. It means committing to a setpoint specific enough that you can calculate the gap. It means treating self-examination not as indulgence but as the error signal it actually is. And it means being willing to act on the correction even when the correction is uncomfortable.

Most systems don't drift because the actuator failed.

They drift because no one closed the loop.

That's true for the system you manage. And for the person who built it.
