
Issue #40 — Forty Issues Later


Dear Reader,

I started this newsletter in mid-2025 because I believed AI governance was an important problem that most organisations were ignoring. I had a subject I cared about and no one around me who wanted to discuss it at the level of detail I thought it deserved.

Forty issues later, two things have changed. The first: more people care about this than I expected. The second: the problem is much larger than I thought when I started writing.

I came in through governance. What I found was that governance is one layer of a five-layer problem many enterprises have not named yet: how to get AI from experiment to production. The gap is not between “AI strategy” and “AI governance.” It is between the demo that impressed the board and the system that runs every day without someone babysitting it.

The feedback from readers kept pointing at the same place. Not “we need better policies” but “we have five pilots, none in production, and the board is asking what happened to the budget.” Not a governance problem. A production problem. Governance is part of it, but only part.

This issue is different from the usual format. No research section, no regulatory deep-dive. Instead: what emerged from forty issues that I did not plan to write about, and where it led.

Three patterns I did not set out to find

1. Many companies ask the wrong first question

The question I hear most often from leadership teams is: “Where can we use AI?”

It is the wrong question. The useful question is: “What is our most expensive broken process?”

The first question is technology-first. It produces a list of thirty use cases, none prioritised, each with its own vendor pitch. The second is pain-first. It produces a sequence: start with what is easy to fix and yields concrete gains, work outward.

This pattern appeared in every industry I covered. In banking (#36), the most common gap was not missing AI models — it was missing inventories of the AI already running in production. In telco (#37), operators had fifty use cases identified and no logic for sequencing them. In pharma (#38), the failures (Watson for Oncology, $62 million, zero patients treated) shared a root cause with the successes (Insilico, target-to-Phase-2a in five years): not model quality, but problem scoping. In public sector (#39), the Dutch Toeslagenaffaire ran for seven years because nobody asked what would happen to the families the model flagged.

The technology worked in every case. The question that preceded it determined the outcome.

2. Governance theatre is the default

Boards write AI principles. Teams rubber-stamp model outputs. Employees use tools nobody approved. At every layer, the same architectural flaw: an impressive surface with nothing structural behind it. Or, worse, companies simply close their eyes and pretend no governance is necessary, or genuinely do not know that it is.

I kept reaching for the same term — governance theatre — because nothing else described it accurately. Issue after issue, different sectors, different regulatory regimes, different maturity levels: the pattern repeated. A 40-page governance policy that the monitoring system does not enforce. A “human-in-the-loop” requirement met by someone who signs off outputs without reading them because they have forty other tasks. A Shadow AI inventory that does not exist because nobody was asked to build one.

The gap between what organisations claim to govern and what they actually govern is not a communication problem. It is structural. And it is invisible from the boardroom, which is precisely why it persists.

3. The problem keeps expanding

I started writing about governance. By Issue #16 it had become governance architecture, controls-as-code instead of controls-as-PowerPoint. By #29 I was into business cases and ROI. The Production OS series (#31-35) ended up specifying five layers at once: strategy, governance, process redesign, technical architecture, and the operating model that connects them.

The scope expanded because it had to. Every conversation with a reader or a client hit the same wall: governance alone does not get AI into production. A company can have a compliant risk framework and still have zero AI systems generating value. The missing piece is never just one layer — it is the connection between them. A business case built on assumptions the architecture cannot deliver. A governance policy the gateway does not enforce. A process redesign that nobody mapped to the existing stack.

I looked back at the forty issues and realised the newsletter itself went through the same evolution. It started by explaining one layer. It ended by specifying the system.

The tipping point

Over time, a different kind of question started arriving from readers. These were not about specific EU regulations or risk management frameworks; those would be typical newsletter questions. They were about implementation: “Can you help us build this?” or “We have the same problem — can we talk?”

I did not plan to write a trilogy on Shadow AI. In #24 I described unsanctioned tool usage. In #28 the problem escalated to unsanctioned code production — employees building systems with AI tools outside any governance framework. In #32 the architectural solution appeared: an AI gateway that enforces policy at the infrastructure level. Three issues, written months apart, that added up to diagnosis, escalation, and treatment.
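The gateway idea from that trilogy can be made concrete with a small sketch: every AI call passes through one chokepoint that checks an explicit, machine-readable policy before the request goes out, so the policy document and the enforcement mechanism are the same artifact. The names and policy fields below are illustrative assumptions, not the actual architecture from Issue #32.

```python
from dataclasses import dataclass

# Policy as data, not as a PDF: which models are sanctioned,
# and which data classifications may leave the perimeter.
# (Both lists are hypothetical examples.)
POLICY = {
    "approved_models": {"gpt-4o", "claude-sonnet"},
    "allowed_data": {"public", "internal"},
}

@dataclass
class AIRequest:
    model: str
    data_classification: str  # e.g. "public", "internal", "confidential"

def gateway(request: AIRequest) -> tuple[bool, str]:
    """Allow or block a request; the denial reason doubles as an audit trail."""
    if request.model not in POLICY["approved_models"]:
        return False, f"model '{request.model}' is not sanctioned"
    if request.data_classification not in POLICY["allowed_data"]:
        return False, f"data class '{request.data_classification}' may not leave the perimeter"
    return True, "allowed"

# An unsanctioned tool is blocked at the infrastructure level,
# regardless of what any written policy says.
allowed, reason = gateway(AIRequest("shadow-tool", "public"))
print(allowed, reason)
```

The point of the sketch is the single chokepoint: Shadow AI exists precisely where calls can bypass it, which is why the fix in #32 was architectural rather than documentary.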

The frameworks I wrote for the newsletter started appearing in my consulting conversations. The Shadow AI Protocol. The Production Readiness Checklist. The Business Case Validation Canvas. What I had written as analysis, clients were treating as tools.

That is the moment it stopped being just a newsletter. Not because I decided to build something — because readers started building with it.

What next?

The pattern was clear enough: readers wanted more than reading. They wanted someone to walk through the implementation with them. Not just governance. The full path from experiment to production.

I did not plan to build a practice around this. The demand arrived before the business plan.

The name came from a comparison with where the enterprise AI market is right now. A quintant is an eighteenth-century navigation instrument, far less known than the sextant: an arc of one fifth of a circle, used by sailors to fix their position on open water when the seas were uncharted and the only reliable references were the stars. It did not steer the ship. It told you where you were, so you could decide where to go next.

That is what I kept doing in consulting conversations. Not steering, measuring. Where is AI already running? Where is the exposure? The value was never just in telling companies what AI could theoretically do — that is what technology vendors do. It was in helping them see where they actually are, where the shallows are and where the storms are forming.

Quintant works with organisations stuck between “we have AI pilots” and “we can prove AI generates value.” The five layers the newsletter mapped out are now also available as advisory projects.

If you have been reading this newsletter and recognising your own organisation in these patterns — the wrong first question, the governance that exists on paper, the pilots that never reach production — that recognition is the starting point. There is a diagnostic tool at quintant.ai: fifteen minutes, no commitment, a report showing where the gaps are. Fix your position first, then navigate.

The newsletter continues. The weekly research teaches me something new each time, and writing each issue is the best way I know to organise my thinking on a topic. Quintant is there for those who want to move from thinking to building.

If your organisation is deploying AI and nobody has asked “what happens when this reaches production” — who in your leadership team should be asking that question?


Stay balanced,

Krzysztof Goworek