There is a seductive story about automation. Models improve, software gets cheaper, and more work simply falls to machines. At the level of raw capability, this story is true. At the level of real organizations, it is incomplete to the point of being misleading.
Most automation projects do not fail because the model cannot perform the task. They fail because the task has never been specified in a form stable enough to delegate. Teams say they want to automate intake, follow-up, reporting, approvals, or documentation. But what they often mean is that they want relief from a workflow that exists mostly as tacit adaptation in the heads of a few reliable people.
Humans are unusually good at compensating for organizational ambiguity. They know which fields can be ignored, which client deserves an exception, when a policy is real versus ceremonial, and who actually needs to be looped in despite what the org chart says. Automation surfaces all of this hidden work. It forces the organization to answer an uncomfortable question: what exactly do you do here?
Automation does not remove organizational ambiguity. It prices it.
Automation does not fix ambiguity
AI can now summarize, classify, route, draft, extract, transform, and answer with startling fluency. That creates the impression that the main bottleneck has been lifted. Often it has. But a lifted bottleneck downstream does not solve confusion upstream.
A tool can accelerate a workflow. It cannot decide what your organization means by done. It cannot resolve an unspoken disagreement between teams about who owns an exception. It cannot infer whether a form field is essential, decorative, or the residue of some forgotten compliance anxiety.
This is why automation projects often look excellent in a demo and brittle in production. The model appears competent. The process turns out to be unstable. Inputs arrive in inconsistent formats. Edge cases are handled differently by different staff. Handoffs depend on personal relationships rather than rules. Success is judged after the fact, socially, rather than by an explicit standard.
Hidden variation is the real workload
The happy path is rarely the real work. The real work lives in variation: incomplete information, contradictory requests, partial approvals, messy data, urgent exceptions, and the small judgments people make to keep a system moving. Organizations often mistake the existence of a documented procedure for the existence of an actual process. They are not the same thing.
This is especially obvious in healthcare and social services, where two cases that look identical on paper can diverge immediately under real conditions. One patient answers the phone, another does not. One referral is complete, another is missing a lab result. One family needs translation support, another needs transportation, another needs three follow-ups because life is collapsing around them. The formal workflow may be stable; the operational reality often is not.
Peter Pronovost's work on ICU checklists is instructive here. The checklist mattered not because checklists are magical, but because a critical procedure became explicit, shared, and auditable. The publisher's summary describes the resulting reforms as reducing hospital-acquired infection rates by 70% (Pronovost & Vohr, 2011). The lesson is broader than healthcare: many improvements that look like intelligence are really improvements in coordination.
Clarity is an operational artifact
Organizations often talk about alignment as if it were a mood. It is closer to an artifact. Clarity has to exist somewhere durable: in a decision memo, an escalation rule, an A3, a checklist, a definition-of-done, a service-level expectation, or a foundational document that outlives the meeting where it was improvised.
John Shook's Managing to Learn is still useful because the A3 process forces a team to compress a problem, current condition, target state, analysis, countermeasures, and ownership into a single shared frame (Shook, 2008). This is not bureaucratic theater. It is a way of converting diffuse concern into an object that can be reasoned about.
Claire Hughes Johnson makes a related point at company scale in Scaling People, where foundational documents and lightweight operating systems are presented as the infrastructure of a growing organization (Hughes Johnson, 2023). The common idea is simple: ambiguity should not live only in conversations. If the process matters, the understanding of that process has to become portable.
Will Larson's An Elegant Puzzle approaches the same issue from engineering management. Healthy organizations are not just collections of talented individuals; they are systems in which responsibilities, interfaces, and tradeoffs become legible enough to coordinate at scale (Larson, 2019). Automation inherits the same requirement. Before you hand work to software, you have to know what the work is.
A test for automation-ready work
If a team wants to know whether a workflow is ready for automation, the useful question is not “Can a model do this?” It is “Have we made this process explicit enough that delegation is even possible?” A practical filter looks something like this:
1. A named trigger
What event starts the workflow? An uploaded document, a submitted form, a changed status, a scheduled date, a patient message? If the trigger is fuzzy, the automation will either misfire or never fire at all.
2. Stable inputs
What information must be present for the work to proceed? If staff regularly compensate for missing, contradictory, or poorly formatted inputs, then the automation problem is downstream of an intake problem.
3. An explicit definition of done
What counts as successful completion? A sent message, an updated record, a resolved task, a human review, a scheduled appointment? If success can only be recognized socially, after the fact, then the task is not yet specified.
4. A single accountable owner
When the automation fails, who owns the failure? Not who touched it last, but who is responsible for the system continuing to work. Shared ownership is often another name for invisible abandonment.
5. An escalation path
What happens when the workflow encounters an exception? Good automation is rarely full replacement. More often it is a routing system that handles the obvious cases and escalates the ambiguous ones cleanly.
6. A measurable quality signal
How will you know whether the automation is helping? Turnaround time, error rate, completion rate, staff time saved, handoff quality, or some domain-specific outcome. If nothing is measured, the team will end up arguing from anecdotes.
If a workflow only works because a good employee knows when to ignore the documented process, the process is not ready to automate.
What follows from this
The practical implication is not anti-automation. It is more demanding than that. Organizations should automate aggressively where the work is recurrent, the trigger is clear, the output is measurable, and the exception path is known. But they should stop pretending that software can rescue a workflow whose meaning is still negotiated in hallway conversations, Slack messages, and personal memory.
This is why the best automation work often begins with a document rather than a model. Map the current state. Identify the exceptions. Name the owner. Define done. Decide what gets escalated. Then automate the boring middle. Only after the workflow becomes legible does model quality become the interesting question.
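"Automate the boring middle, escalate the ambiguous" can be sketched as a confidence-threshold router. Everything here is a stand-in: the classifier, the threshold, and the case fields are assumptions, and a real system would substitute its own model, rules, and escalation queue.

```python
def route(case: dict, classify, threshold: float = 0.9):
    """Handle confident cases automatically; escalate the ambiguous ones.

    `classify` is any callable returning (label, confidence); here it is
    a stand-in for whatever model or rule set the team actually uses.
    """
    label, confidence = classify(case)
    if confidence >= threshold:
        return ("automated", label)   # the boring middle
    return ("escalated", label)       # ambiguous: routed to a human

# Toy classifier: a referral with all required fields is routed with
# high confidence; anything missing fields is flagged low-confidence.
def toy_classifier(case: dict):
    required = {"patient_id", "provider", "reason"}
    missing = required - case.keys()
    if not missing:
        return ("complete", 1.0)
    return ("incomplete", 0.4)

print(route({"patient_id": 1, "provider": "Dr. A", "reason": "labs"}, toy_classifier))
# → ('automated', 'complete')
print(route({"patient_id": 2}, toy_classifier))
# → ('escalated', 'incomplete')
```

The design choice worth noticing is that the escalation path is part of the router's contract, not an afterthought: ambiguity is priced explicitly, as a branch, instead of being absorbed invisibly by whoever notices the failure.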
Cheap cognition changes what is possible. It does not eliminate the organizational work of making a process coherent. In practice, the machine is often the first participant that insists the organization explain itself clearly. That is not a limitation of automation. It is one of its most useful properties.