
#2 AI's Reality Check


Dear Reader,

After setting the stage last week, it’s time for a dose of engineering reality. The initial, breathless excitement for Generative AI is now colliding with the unglamorous work of making it function inside a real business. For many leaders, the journey feels less like a rocket launch and more like trying to start a stubborn lawnmower on a damp November morning.

This isn’t cynicism. It’s the necessary pragmatism required to see where true value lies. It’s about avoiding what postwar anthropologists came to call “cargo cult” thinking: mimicking the rituals of success without understanding the underlying principles. An AI can produce a 30-page report that looks like the product of deep thought; it can generate code that appears functional. We must not confuse the convincing facsimile of work with the work itself. This doesn’t render these tools useless, but it does mean we must govern them well—and resist the powerful temptation of our own mental laziness. After all, if an AI can sound intelligent, it’s very easy for us to stop questioning whether it actually is.

The Briefing

A paradox is defining the current enterprise AI landscape: a widening chasm between the revolutionary proclamations of technology vendors and the sobering reality of implementation.

Two dominant narratives are shaping the market. First, there are the bold claims of a “digital labour revolution.” Salesforce’s CEO, for instance, suggests AI now performs “30% to 50% of the work” in key departments. Deconstructing this figure, however, reveals it is not a single, auditable metric but a marketing narrative built by aggregating specific, task-level automations. And the revolution is being proclaimed alongside significant layoffs, which suggests the primary strategy is to use internal AI success as a powerful case study for selling Salesforce’s platforms to other enterprises.

Second, there is the grand architectural vision. ServiceNow is pursuing a future where it becomes the “AI operating system” for business, a central control tower for the chaos of disparate AI tools. It’s a cohesive and strategically sound vision, but one that relies on a level of organisational and data maturity that few companies actually possess. It’s like selling a state-of-the-art flight control system to someone who hasn’t yet built an aeroplane. Thinking that AI agents will soon replace existing business processes and workflows, designed and optimised by people, is not unlike the e-commerce, cloud, or Big Data hypes of years past. All of those technologies profoundly changed business and technology, but the reality always ends up hybrid. E-commerce has not completely replaced brick-and-mortar shops; cloud is a great solution until workloads really scale and the huge invoices start arriving; Big Data made great promises, but complex data management architectures and poor data quality ended up limiting its practical potential.

Another counterpoint comes from the experience of Klarna. After boasting that its AI chatbot had replaced 700 human agents, the company had to publicly reverse course. The CEO admitted that an excessive focus on cost-cutting led to “lower quality” service. Klarna is now rehiring humans to handle complex interactions, having learned a crucial lesson: “AI gives us speed. People give us empathy.” People generally prefer talking to people, not bots; this is especially true for the growing share of the silver generation in the overall population.

This isn’t an isolated incident. With some reports suggesting that 42% of businesses are now scrapping the majority of their AI initiatives, the takeaway for leaders is clear: ignore the marketing noise. The smarter path is to measure results carefully and learn from the expensive public mistakes of others.

The “1% Problem”: Why Generic AI Isn’t Your Enterprise Silver Bullet

This reality check leads us to a crucial point about data. The large language models making headlines are trained on the public internet. Impressive, certainly. But that ocean of information—that digital soup of Wikipedia articles, Reddit arguments, and forgotten blogs—often represents only 1% of the data truly relevant to your business.

The real gold, the 99%, lies in your proprietary data: your customer transaction histories, your internal risk models, your supply chain logistics, your private market intelligence. A generic model has no understanding of your company’s unique context. It doesn’t know that “Project Nightingale” is a top-secret R&D initiative, not a bird-watching club.

What is even more challenging, and scarier still: it cannot grasp the subtle, unwritten rules that govern your most valuable client relationships.

The market hype suggests you can simply “plug in” these models. This is dangerously misleading. Imagine a wealth management division using a generic AI to craft financial advice for high-net-worth clients. The AI could generate perfectly fluent, grammatically correct advice based on public financial information. But it would be utterly blind to the client’s specific, off-the-record risk tolerance, their complex family trust structures, or their whispered intention to sell a business in two years. The advice wouldn’t just be generic; it would be actively harmful, a form of automated malpractice.

Without being deeply and securely integrated with your unique data, a generic AI’s value is limited. It is an incredibly expensive way to get a slightly better search engine, one that hallucinates with unnerving confidence. And contrary to what many tech bros and managers wanted and expected, decision-making is not based on data alone. There is also intuition and experience, built over many years of making decisions and seeing their results — our own, protein-based version of reinforcement learning. Models are far from acquiring the capabilities humans have.
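To make “deeply and securely integrated” concrete, here is a minimal sketch of the difference between plugging a model in and grounding it. Everything in it is hypothetical — the `ClientContext` record, the `_PRIVATE_STORE`, the prompt shape — and it is an illustration, not a reference implementation. The point is simply that the model becomes useful only once your proprietary context is retrieved through governed channels and placed in front of it.

```python
from dataclasses import dataclass

# Hypothetical proprietary record: part of the "99%" a public model never sees.
@dataclass
class ClientContext:
    name: str
    risk_tolerance: str           # agreed off the record, in person
    trust_structure: str          # complex family trust details
    liquidity_event: str | None   # e.g. a planned business sale

# Toy stand-in for an internal, access-controlled client store.
_PRIVATE_STORE = {
    "client-042": ClientContext(
        name="A. Example",
        risk_tolerance="conservative, despite high net worth",
        trust_structure="assets held in a discretionary family trust",
        liquidity_event="intends to sell their business within two years",
    ),
}

def build_grounded_prompt(client_id: str, question: str) -> str:
    """Retrieve proprietary context and put it in front of the model.

    Skip this step and a generic model answers from public data alone:
    fluent, grammatical, and potentially a form of automated malpractice.
    """
    ctx = _PRIVATE_STORE[client_id]  # in reality: governed, audited retrieval
    return (
        "You are advising one specific client. Ground every recommendation "
        "in the context below; never fall back on generic assumptions.\n"
        f"- Risk tolerance: {ctx.risk_tolerance}\n"
        f"- Structures: {ctx.trust_structure}\n"
        f"- Known plans: {ctx.liquidity_event or 'none on record'}\n\n"
        f"Question: {question}"
    )

print(build_grounded_prompt("client-042", "How should we rebalance the portfolio?"))
# The model call itself (omitted here) adds value only *after* this grounding.
```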

Unlocking Your Data Safely: The Governance Imperative

Here we are at the critical point. If the real value of AI is tied to your data, then enabling access is paramount. But this is a double-edged sword.

  • The Opportunity: AI can analyse your data to find new efficiencies, personalise customer experiences, and accelerate innovation.

  • The Risk: Unleashing AI on your core data without proper governance is an invitation for “expensive problems”: regulatory fines, biases hidden in your data amplified at scale, even outright leaks of the data itself, all of which ultimately destroy customer trust.

Think of it this way: you wouldn’t give a brilliant but unknown intern the keys to your entire corporate server room on their first day. You’d give them supervised access to specific files. Yet, many organisations are so eager to “do AI” that they are rushing to connect powerful, third-party models to their most sensitive data with little more than a hopeful smile.
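The intern analogy maps directly onto an engineering pattern: deny by default, allow by explicit policy, and gate every document before it reaches a third-party model. The sketch below is illustrative only, with invented names (`AccessPolicy`, `fetch_for_model`); a real deployment would plug into your existing entitlement and audit systems.

```python
class AccessDenied(Exception):
    """Raised when a document must not reach the model."""

class AccessPolicy:
    """Deny-by-default allowlist for an AI connector: the 'supervised intern'."""

    def __init__(self, allowed_collections: set[str], deny_tags: set[str]):
        self.allowed_collections = allowed_collections
        self.deny_tags = deny_tags  # e.g. {"pii", "board-only", "m&a"}

    def check(self, collection: str, tags: set[str]) -> None:
        if collection not in self.allowed_collections:
            raise AccessDenied(f"collection '{collection}' is not allowlisted")
        blocked = tags & self.deny_tags
        if blocked:
            raise AccessDenied(f"blocked by sensitivity tags: {sorted(blocked)}")

def fetch_for_model(policy: AccessPolicy, collection: str,
                    doc_id: str, tags: set[str]) -> str:
    """Gate every retrieval before content is handed to a third-party model."""
    policy.check(collection, tags)  # in production: also log the decision
    return f"<contents of {collection}/{doc_id}>"  # placeholder retrieval

# The intern may read the product FAQ, never the server room.
policy = AccessPolicy(allowed_collections={"product-faq"},
                      deny_tags={"pii", "board-only"})
print(fetch_for_model(policy, "product-faq", "returns-policy", set()))
# fetch_for_model(policy, "finance", "q3-forecast", set())  # raises AccessDenied
```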

The failure is rarely the AI model itself; it’s the readiness of the data ecosystem it must drink from. It’s like inviting a world-class chef to cook in a kitchen with no ingredients, rusty pans, and a faulty oven. The result will be disappointing, and it won’t be the chef’s fault. A poorly governed data environment doesn’t just limit an AI’s potential; it actively poisons it, turning a powerful tool into a vector for chaos. It can launder old biases into new, automated decisions, giving them a dangerous veneer of objective, technological authority.
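In kitchen terms, an “ingredient check” can be surprisingly cheap. The sketch below quarantines records that fail basic quality rules before they feed a model or a training set. The rules and field names are invented for illustration; the real ones belong to your data owners and regulators, not to a newsletter.

```python
from datetime import datetime, timedelta

def quality_issues(record: dict) -> list[str]:
    """Return reasons to quarantine a record; an empty list means clean."""
    issues = []
    if not record.get("customer_id"):
        issues.append("missing customer_id")
    if record.get("email", "").count("@") != 1:
        issues.append("malformed email")
    updated = record.get("last_updated")
    if not isinstance(updated, datetime) or (
            datetime.now() - updated > timedelta(days=730)):
        issues.append("stale or undated record")
    return issues

def split_batch(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Quarantine bad records *before* they reach a model, not after."""
    clean, quarantined = [], []
    for record in records:
        (quarantined if quality_issues(record) else clean).append(record)
    return clean, quarantined

clean, quarantined = split_batch([
    {"customer_id": "c-1", "email": "a@example.com",
     "last_updated": datetime.now()},
    {"customer_id": "", "email": "not-an-email"},  # silently poisons models
])
print(f"{len(clean)} clean, {len(quarantined)} quarantined")
```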

Questions for Your Leadership Team

As you navigate this landscape, here are four pragmatic questions to put to your team. They are designed to cut through the hype and focus on the engineering and operational realities that truly matter.

  1. Are we mistaking a calculator for a colleague? Are our expectations for AI autonomy grounded in the reality of today’s technology, or do we need to design more robust human-in-the-loop systems to prevent costly, nonsensical errors?

  2. Is our data strategy a strategy, or a wishlist? What is our specific, funded, and accountable plan to ensure the quality, security, and ethical sourcing of the proprietary data that will fuel our most critical AI initiatives?

  3. Can we survive a “governance audit” tomorrow? If a regulator asked why our AI made a specific decision about a customer, could we show them the auditable, technical controls and data lineage, or would we just have to shrug and point to a policy document? (A minimal sketch of such an audit record follows this list.)

  4. Are we outsourcing our thinking? How do we create a culture where AI is used as a tool to augment and challenge our thinking, rather than a crutch that allows our critical faculties to atrophy?
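To ground question 3: “auditable” means every consequential model output is stored with enough lineage to reconstruct it later. The record shape below is a hypothetical minimum, not a compliance standard; your legal and risk teams define the real one.

```python
import hashlib
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable AI decision: what answered, from which data, who signed off."""
    customer_ref: str           # pseudonymous reference, never raw PII
    model_id: str               # exact model name and version that answered
    prompt_hash: str            # hash of the full prompt actually sent
    data_sources: list[str]     # lineage: which datasets fed the answer
    decision: str               # what the system did or recommended
    human_reviewer: str | None  # who approved it, if anyone
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def record_decision(prompt: str, **fields) -> DecisionRecord:
    """Hash the prompt and persist the record (here: just print it)."""
    rec = DecisionRecord(
        prompt_hash=hashlib.sha256(prompt.encode()).hexdigest(), **fields)
    # In production this goes to append-only, tamper-evident storage.
    print(json.dumps(asdict(rec), indent=2))
    return rec

record_decision(
    "...full prompt text as sent to the model...",
    customer_ref="cust-7f3a",
    model_id="internal-credit-assist v1.4",
    data_sources=["crm.accounts@2025-06-01", "risk.scores@2025-05-28"],
    decision="flagged application for manual review",
    human_reviewer="analyst-219",
)
```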

Towards a Pragmatic Equilibrium

This reality check isn’t cause for pessimism. It is a call for the strategic diligence and engineering rigour that separates sustainable success from expensive failure. The path to a true AI Equilibrium lies in respecting the technology’s limits while meticulously governing your most valuable asset: your data.

In our next issue, we’ll dip a toe into what the AI Act really considers an “AI System.” The answer may surprise you.

Until then, lead with foresight.

Krzysztof