Dear reader,
In our last issue, I argued that the “blast radius” of enterprise AI extends far beyond the company walls. This week, as we conclude our initial ten-part exploration, we address the most persistent challenge of all: how does one maintain control in a state of perpetual, accelerating change? If the AI landscape is a permanent storm, a leader’s role is not to predict the weather, but to cultivate a resilient garden—one that can withstand any storm and continue to bear fruit.
The Briefing#
A new report from MIT’s NANDA Initiative, “The GenAI Divide,” details the state of enterprise AI adoption. The study reveals a gap between experimentation and value. While over 80% of organisations have piloted tools like ChatGPT and Copilot, 95% are failing to see any measurable return at the profit-and-loss level. The report confirms these tools enhance personal productivity but do not, on their own, translate to company-level gains. The reasons cited (resistance to new tools, quality concerns, poor user experience, and lack of sponsorship) point to a failure of strategy, not just technology.
The report’s core insight is that the 5% of companies extracting value are not just buying tools; they are building “learning-capable systems” integrated with their unique workflows. They treat AI implementation as a systemic change, not a software update. This requires rewiring processes and investing in data readiness.
This points to a disconnect. Enterprise users, the report finds, like consumer-grade tools such as ChatGPT for their flexibility and immediate utility. Yet these same users are overwhelmingly sceptical of custom or vendor-pitched AI tools, describing them as “brittle” or “science projects.” This is a classic user experience problem. The consumer tools feel empowering, while the enterprise tools often feel restrictive and clunky.
In my opinion, this stems from what most vendors and consultants have been doing: closing their eyes and throwing LLMs at business processes, hoping they will “do their magic” just as they seem to in many consumer applications. The difference is that consumer use cases are much alike across the user base, so a model trained on large foundational data can handle 99% of requests reasonably well. Enterprise processes, by contrast, are more complicated and individual, so a general model trained on Internet data will not yield results nearly as good as it does for simple, one-step individual requests.
All this has led to a surge in “Shadow AI.” Employees at over 90% of companies report regularly using personal AI tools for work tasks, even though only 40% of those companies have an official LLM subscription. The workforce isn’t waiting for a top-down solution; they are using their own tools to solve their own problems. For a leader, this means your most sensitive corporate data is likely being pasted into a consumer-grade tool with a questionable privacy policy, creating a large, ungoverned risk.
A Quick Word on “Hallucinations”#
Before we proceed, a moment on terminology. We often hear that LLMs “hallucinate.” I believe this is a misleading term. It comes from psychiatry, where it refers to sensory perceptions (seeing, hearing, smelling, tasting, or feeling something) that occur without any external stimulus yet feel real to the person experiencing them. LLMs do not hallucinate; they generate statistically probable sequences of words based on their training data. They have no concept of “truth.” The term “hallucination” frames the problem as a correctable glitch in an otherwise thinking machine. It is more accurate to say that the machine is operating exactly as designed, and we are the ones who are mistaken (are we the ones who actually hallucinate?) to expect it to possess a human-like understanding it does not have.
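To make “statistically probable sequences of words” concrete, here is a deliberately toy sketch in Python. Everything in it is invented for illustration: the tiny vocabulary, the made-up probabilities, and the last-word-only conditioning bear no resemblance to how a real LLM is built, but the mechanism is the one described above: sample the next word in proportion to how likely it is, with no notion of truth anywhere in the loop.

```python
import random

# Toy next-word probability table, invented purely for illustration.
# A real LLM learns billions of such conditional probabilities from text;
# this sketch hard-codes a handful to show the mechanism.
NEXT_WORD = {
    "the":       {"report": 0.5, "capital": 0.3, "moon": 0.2},
    "capital":   {"of": 1.0},
    "of":        {"france": 0.6, "australia": 0.4},
    "france":    {"is": 1.0},
    "australia": {"is": 1.0},
    "is":        {"paris": 0.5, "sydney": 0.3, "canberra": 0.2},
}

def generate(prompt: str, max_words: int = 6) -> str:
    words = prompt.lower().split()
    for _ in range(max_words):
        dist = NEXT_WORD.get(words[-1])
        if dist is None:
            break
        # Sample the next word in proportion to its probability.
        # Nothing here checks facts: "paris" can follow
        # "the capital of australia is" simply because it is probable text.
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the capital of australia"))
```

Run it a few times and it will produce “the capital of australia is paris” as readily as “…is canberra”; nothing in the procedure distinguishes the true completion from the merely probable one. Scale the table up enormously and condition on whole contexts rather than a single word, and you get something much closer to the behaviour we keep mislabelling as hallucination.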
A Recap of the Journey So Far#
Over the past nine weeks, we have built a foundational argument. We began with Pragmatism, establishing that most AI projects fail not because of flawed models, but because they are built on poor data and on the belief that models can perform some kind of magic trick, running business processes however complex they get. We then moved to Trust & Control, arguing that true governance is not found in policy documents (“governance theatre”) but in the engineering reality of auditable, automated controls: “Governance-as-Code.” Finally, we explored the Human-Centric dimension, reframing the narrative from one of replacement to one of augmentation and expanding responsibility. These three pillars (Pragmatism, Control, and Human-Centricity) are not separate concepts. They are the three legs of the stool upon which a stable AI equilibrium rests. Without all three, any strategy will collapse. A leader’s job is to be the one person in the room who can hold all three ideas in their head at once.
The Challenge of Constant Change#
The central difficulty of AI governance is that you are trying to build a stable structure on constantly shifting ground, governing a technology that is changing month-by-month. The models get more powerful, the regulations evolve, and societal expectations keep changing. Complexity is always increasing. A governance framework designed for the AI of 2024 is already obsolete. Attempting to create a single, static rulebook is therefore a futile exercise. The psychological trap here is the desire for certainty. Leaders are paid to provide clear answers. But in the world of AI, the only certainty is uncertainty. The winning strategy is not to build a fortress that can withstand a predicted storm, but to build a ship that can navigate any weather.
Building an Adaptive Framework#
So, how do we build this resilient, future-proof governance ship? The key is to shift from building rules to building an adaptive system. This system has three core components:
1. A Living Model Inventory: Your inventory of AI systems cannot be a spreadsheet updated once a year. It must be a dynamic, real-time dashboard connected directly to your development pipelines. It should automatically flag new models, track their performance, and monitor for “model drift.”
2. Principle-Based “Guardrails,” Not Prescriptive Rules: Instead of a 500-page rulebook that tries to account for every eventuality, define a set of clear, non-negotiable principles (e.g., “No AI system will make a final, un-reviewed decision on a customer’s access to a fundamental service.”). Then, empower your teams to innovate within those guardrails, using their judgment to apply the principle to new situations. A short code sketch after this list illustrates how such a principle, together with the living inventory above, might be expressed as an automated check.
3. A Rapid-Response “Triage” Team: Create a small, cross-functional team (e.g., from Legal, Engineering, and a business unit) that can be convened at short notice to assess a novel AI use case or an unexpected model behaviour. Their job is not to be a slow-moving committee, but to make a fast, pragmatic decision based on the established principles.
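To make the first two components less abstract, here is a minimal sketch in plain Python. The field names, thresholds, and example models are entirely hypothetical; the point is only the shape of the idea: the inventory is live data a program can inspect, the principles are executable checks rather than clauses in a PDF, and anything that fails a check is routed to the triage team from component three.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical, simplified structures: the field names, thresholds, and example
# models below are illustrative assumptions, not a reference to any real system.

@dataclass
class ModelRecord:
    name: str
    owner: str
    last_reviewed: date
    drift_score: float           # e.g. a drift metric fed in from monitoring
    makes_final_decisions: bool  # acts without a human review step?
    affects_fundamental_service: bool

DRIFT_THRESHOLD = 0.2  # assumed alert level; tune to whatever drift metric you track

def check_guardrails(model: ModelRecord) -> list[str]:
    """Return the list of principle violations for one inventory entry."""
    violations = []
    # Principle: no final, un-reviewed decision on access to a fundamental service.
    if model.makes_final_decisions and model.affects_fundamental_service:
        violations.append("final automated decision on a fundamental service")
    # Living inventory: drift is flagged as it happens, not at an annual review.
    if model.drift_score > DRIFT_THRESHOLD:
        violations.append(f"drift score {model.drift_score:.2f} above threshold")
    return violations

inventory = [
    ModelRecord("credit-limit-advisor", "risk-team", date(2025, 3, 1), 0.31, True, True),
    ModelRecord("email-summariser", "ops-team", date(2025, 6, 12), 0.05, False, False),
]

for model in inventory:
    for issue in check_guardrails(model):
        print(f"[escalate to triage team] {model.name}: {issue}")
```

In a real organisation these checks would run inside the development pipeline and monitoring stack rather than as a standalone script, but the division of labour is the same: machines enforce the guardrails continuously, and humans are pulled in only when a principle is genuinely at stake.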

The Leader’s Role#
In this world of constant change, what is the ultimate, enduring role of the leader? It is to be the head gardener of the organisation’s human-AI ecosystem. A gardener does not command the plants to grow. Instead, they focus on creating the conditions for healthy growth. They enrich the soil (data quality), pull the weeds (kill bad projects early), ensure there is enough sunlight (provide clear strategic direction), and build strong trellises (the adaptive governance framework) to support the plants as they climb. They remain pragmatic, refusing to believe any snake-oil promise that the plants can be made to grow at ten times their usual pace. This is a continuous, patient, and human-centric task. It requires critical thinking to assess the health of the system, ethical stewardship to ensure it grows in a beneficial direction, and a focus on the long-term health of the garden, not just the size of a single season’s harvest. It is a journey of learning and adaptation, not a destination.
This, ultimately, is the “AI Equilibrium.” It is not a static state to be achieved, but a dynamic balance to be maintained.
All the best,
Krzysztof
