Issue #27 — The Speed of Waste

“We implemented AI. We’re processing 10x more requests. But somehow… our ROI is negative.”

If this sounds familiar, you’ve stumbled upon one of the great ironies of enterprise technology: the better the tool, the faster it makes you fail. I call this the Speed of Waste—the phenomenon where AI makes your organisation do the wrong thing faster, at greater scale, with compounding errors.

The statistics are brutal, though by now you’ve likely seen them quoted so often on LinkedIn that they’ve lost their capacity to shock. 42% of enterprises deployed AI with zero ROI (Constellation Research, 2025). 95% of generative AI pilots at enterprises are failing (MIT, 2025). 88% of AI proofs-of-concept never reach production (IDC, 2025). And the trend is worsening: 42% of companies scrapped most of their AI initiatives in 2025, up from 17% in 2024 (S&P Global).

The conventional diagnosis blames technology: models aren’t good enough, data isn’t clean enough, infrastructure isn’t ready. There’s truth to this—I wrote about data quality last week. But it misses something rather important. Data quality is often a symptom. The deeper question is: why is the data bad in the first place? And the answer, more often than not, is that the process generating the data is broken. Inconsistent inputs, undocumented exceptions, manual workarounds—these process failures produce the dirty data that then fails in AI systems. The rot starts earlier than most people care to admit.

The Jet Engine on a Horse Cart

Here’s a thought experiment: attach a jet engine to a horse cart. You don’t get a faster cart. You get a spectacular crash, possibly involving fire and a rather surprised horse.

This is precisely what happens when AI is deployed on top of inefficient workflows. The underlying process—with its redundant approvals, missing quality gates, undefined handoffs, and accumulated workarounds—doesn’t get fixed. It gets accelerated. And acceleration, when you’re heading in the wrong direction, is not progress.

Consider what automation actually does to a broken process:

  • Faster execution of unnecessary steps — You’re now wasting time at machine speed

  • Scaled propagation of errors — One mistake becomes a thousand mistakes before anyone notices

  • Compounded technical debt — Workarounds become embedded in production systems, then calcified

  • The illusion of productivity — “We’re processing more!” without creating more value

There’s something almost comic about this. We spend millions on AI to do faster exactly what we shouldn’t be doing at all.

This creates a recognisable pattern: organisations implement AI one use case at a time, each team solving their immediate problem, until they wake up with a fragmented landscape of siloed solutions that don’t talk to each other. Wasted resources, minimal impact.

A nuance is warranted here. “One use case at a time” is actually the correct execution strategy—multi-year “big bang” AI transformations fail precisely because the technology evolves too quickly for waterfall planning. The problem isn’t incremental implementation. The problem is incremental implementation without a coherent strategic framework. Each use case should be governed by the same principles, the same data standards, and the same success metrics. Fix processes one at a time, yes—but with the bigger architecture in mind.

The Compounding Error Problem

Here’s where the mathematics become unforgiving.

Research from Patronus AI quantified something that should terrify anyone running multi-step AI workflows. An AI agent with a mere 1% error rate per step reaches a 63% probability of error by the 100th step. At the token level, an LLM with 1% error per token cascades to an 87% probability of error by the 200th token.

Applied to real enterprise workflows: if your AI achieves 95% accuracy on each individual task—impressive by any measure—stringing together 20 sequential tasks leaves you with only a 35% chance that everything works correctly. In other words, two-thirds of the time, something has gone wrong. And you may not know where.
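
The arithmetic behind these figures is worth verifying yourself. Here is a minimal sketch, assuming each step fails independently with a constant per-step error rate (real agent errors correlate, but independence is the model the quoted figures use):

```python
def p_any_failure(per_step_success: float, steps: int) -> float:
    """Probability that at least one of `steps` independent steps
    fails, given a constant per-step success rate."""
    return 1.0 - per_step_success ** steps

# 1% error per step over 100 steps: ~63% chance something fails
print(f"{p_any_failure(0.99, 100):.0%}")  # 63%

# 1% error per token over 200 tokens: ~87% chance of a bad token
print(f"{p_any_failure(0.99, 200):.0%}")  # 87%

# 95% per-task accuracy over 20 chained tasks: only ~0.358
# probability that every task succeeds (the ~35% quoted above)
print(f"{0.95 ** 20:.3f}")  # 0.358
```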

Now layer this on top of a process that was already flawed. If the human workflow contained three redundant approval steps, ambiguous handoff criteria, and two undocumented exception paths, the AI faithfully learns and replicates all of it. Every inefficiency becomes a new error vector. Every workaround becomes a new failure mode.

This is the old “garbage in, garbage out” problem—except in automated systems, it’s “garbage in, garbage everywhere, permanently”. Biased training data doesn’t just carry over existing bias; it amplifies and exaggerates those patterns. When the underlying process itself is biased or inefficient, AI scales that dysfunction systematically.

The Evidence: Process Maturity Predicts Success

If technology isn’t the differentiator, what is?

Research across multiple consulting firms consistently shows that organisations with mature, well-documented processes sustain AI projects far longer than those without. The defining characteristic of successful AI deployments isn’t better data science—it’s better process discipline. The boring stuff, in other words.

Accenture’s research reinforces this finding. Only 16% of organisations reached what they call “Reinvention-Ready” status—where processes have been modernised end-to-end before AI deployment. Those that did achieved 2.5x higher revenue growth and 3.3x greater success at scaling AI use cases.

The primary differentiator? 87% of “Reinvention-Ready” companies excel at Methods & Processes, compared to only 47% of “Insights-driven” organisations. Process discipline emerged as the key factor—not data science capability. Which is rather inconvenient for anyone selling AI as a silver bullet.

The Engineer’s Fix: Process Mapping Before Prompt Engineering

The solution is unglamorous but effective: fix the workflow before you automate it.

Thomas Davenport, a pioneer of Business Process Reengineering, updated his framework for the AI era. His guidance is prescriptive:

  1. Establish process ownership — Clear accountability, end-to-end

  2. Map out the existing process — What actually happens, not what’s documented

  3. Establish performance measures — Define success before automation

  4. Redesign the process — Eliminate waste, redundancy, and exceptions

  5. Only then evaluate technology enablers

His key insight: “Layering AI on top of existing processes produces better results than attempting to redesign entire workflows around AI.” This is counterintuitive—surely AI should enable radical redesign? But the reality is that radical redesign introduces radical risk, and most organisations have neither the appetite nor the capability to absorb it.
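
Steps 2 and 4 are the ones most teams skip, so it is worth showing how little is needed to start. Below is a minimal sketch of a process map as a data structure; the `Step` fields and the waste heuristic are illustrative choices of mine, not Davenport's notation:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    owner: str                 # step 1: clear accountability
    adds_value: bool           # does this step actually change the output?
    exception_paths: int = 0   # undocumented branches observed in practice

@dataclass
class ProcessMap:
    steps: list[Step] = field(default_factory=list)

    def waste_report(self) -> list[str]:
        """Steps to eliminate or redesign before evaluating any AI enabler."""
        return [s.name for s in self.steps
                if not s.adds_value or s.exception_paths > 0]

invoice = ProcessMap([
    Step("receive invoice", "AP clerk", adds_value=True),
    Step("second approval", "team lead", adds_value=False),  # redundant
    Step("manual re-keying", "AP clerk", adds_value=False, exception_paths=2),
    Step("payment run", "finance", adds_value=True),
])
print(invoice.waste_report())  # ['second approval', 'manual re-keying']
```

If the waste report isn't empty, automating the process simply executes that waste at machine speed.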

Andrew Ng has made a similar argument through his “data-centric AI” campaign: for most enterprise projects, off-the-shelf models are good enough. The bottleneck isn’t the algorithm—it’s the process of preparing, cleaning, and structuring the data that feeds it.
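
In practice, that bottleneck shows up as unglamorous gates in the data pipeline. A hypothetical sketch, with column names and thresholds invented purely for illustration:

```python
import pandas as pd

def quality_gate(df: pd.DataFrame) -> pd.DataFrame:
    """Drop records that would feed the model garbage.
    Placeholder rules: the point is that the gate exists and runs
    before the model ever sees a record, not the specific thresholds."""
    bad = (
        df["customer_id"].isna()
        | (df["amount"] < 0)
        | (df["updated_at"] < pd.Timestamp.now() - pd.Timedelta(days=365))
    )
    print(f"Gate passed: {(~bad).sum()}/{len(df)} records")
    return df[~bad]
```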

The model isn’t the bottleneck. The process is. Almost always.

When Imperfect AI Works
#

I should acknowledge the legitimate counterargument.

In certain contexts, an 80% AI solution that makes experts 5x more productive is preferable to waiting for perfect process redesign. Generative design platforms demonstrate this: customers tolerate 80-85% complete designs because the tool solves a genuine talent shortage. It’s imperfect, but it’s better than nothing—which was the previous alternative.

But this works only when three conditions are met:

  1. A genuine capacity constraint exists (solving scarcity, not optimising cost)

  2. Users tolerate imperfection while the system learns

  3. Learning loops are built into the workflow

This does not apply to back-office automation, compliance workflows, or operational processes where the underlying process itself is broken. Capacity expansion is different from efficiency improvement. The former can tolerate imperfection; the latter cannot.

Which brings me to a general observation: the AI vendors are selling you implementations. The consultants are selling you transformation programmes. But the unglamorous truth is that you probably don’t need either until you’ve mapped and fixed your workflows first. That’s cheaper, lower-risk, and—ironically—often delivers more value than the AI itself. Not that anyone has much incentive to tell you this.

The Briefing

The 6% Club

McKinsey’s latest State of AI survey (November 2025, ~2,000 respondents) puts a number on what we’ve been discussing: 88% of organisations now use AI regularly, but nearly two-thirds remain stuck in pilot phase. Only 6% qualify as “AI high performers”—defined as achieving 5%+ EBIT impact with significant attributed value.

The differentiator? High performers are nearly three times more likely to fundamentally redesign workflows than their peers. Not better models. Not bigger budgets. Workflow redesign.

McKinsey’s finding deserves to be quoted directly: “Intentional redesigning of workflows has one of the strongest contributions to achieving meaningful business impact of all the factors tested.”

This is as close to a controlled experiment as we’re likely to get. The 94% seeing limited or no enterprise-level impact are, by and large, automating existing processes rather than fixing them first. The 6% did the unglamorous work.

The Chat Phase Is Over

Meanwhile, OpenAI’s enterprise usage data (December 2025) reveals an interesting shift. While ChatGPT Enterprise messages grew 8x year-over-year, API reasoning tokens grew 320x. The implication: serious enterprises are moving from treating AI as a “consultant you chat with” to treating it as an “engine embedded in workflows.”

The top 5% of enterprise users—OpenAI calls them “Frontier” workers—send 6x more messages than median users and use coding tools 17x more frequently. The gap isn’t access. Everyone has access. The gap is integration depth and workflow maturity.

Perhaps most telling: 25% of enterprises still haven’t turned on basic connectors to give AI access to company data. A quarter of paying customers are using enterprise AI as an expensive autocomplete.

The “easy wins” of generic chatbots are gone. What remains is the hard work of process engineering—which, conveniently, is what we’ve been discussing.

This Week’s Question

Before your next AI initiative review, ask your project team this:

“Can you show me the process map that was optimised BEFORE we started automating?”

If the answer is “we went straight to the AI solution”—you’ve found your ROI leak.

The model isn’t the problem. The process is.

Until next time, build with foresight.

Krzysztof