
Issue #22 — Simulating Your Business


The enterprise conversation on AI has been dominated by language and images. That is only part of the story. There is another, quieter but profound shift under way: a new class of AI is moving from research labs into core operations, simulating your factories, your financial portfolios, and your supply chains.

These are ‘Digital Twins,’ ‘Simulated Operations,’ and the ‘Enterprise Metaverse.’ They are not futuristic toys, but operational tools. And for leaders in regulated sectors, they represent new possibilities, but also an entirely new class of physical and financial liability.

The “Reality Drift” Failure

Consider the ‘Sim-to-Real’ Gap. A European utility pilots a digital twin to manage its wind turbines. The simulation, fed by real-time sensor data, optimises blade pitch for maximum efficiency. It works perfectly for six months.

Then, a minor sensor on a key turbine begins to fail, reporting slightly incorrect vibration data—’noise’ that the system was trained to ignore. The simulation, now blind to the growing physical stress, continues to push the turbine. The digital twin is perfectly healthy; the real turbine is not. The result is a catastrophic, multi-million-euro blade failure.

This is the core risk: not that the model is wrong, but that it is right about a reality that no longer exists. The simulation and the physical asset silently diverge. When the model’s output is no longer a suggestion but an instruction—to a turbine, a network switch, or a trading bot—this ‘reality drift’ becomes the source of systemic failure.
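
One way to catch this failure early is to treat the sim-to-real gap as a monitored metric rather than an assumption. Below is a minimal sketch of such a drift monitor; the window size, tolerance, and sensor values are illustrative assumptions, not values from any real deployment.

```python
from collections import deque

class DriftMonitor:
    """Tracks the gap between a twin's predictions and independent measurements."""

    def __init__(self, window: int = 100, tolerance: float = 0.02):
        # Rolling window of relative errors; the 2% tolerance is an assumed example.
        self.residuals: deque[float] = deque(maxlen=window)
        self.tolerance = tolerance

    def observe(self, predicted: float, measured: float) -> None:
        if measured != 0:
            self.residuals.append(abs(predicted - measured) / abs(measured))

    def drifting(self) -> bool:
        # A sustained gap over the full window, not a single noisy reading,
        # is what signals reality drift.
        if len(self.residuals) < self.residuals.maxlen:
            return False
        return sum(self.residuals) / len(self.residuals) > self.tolerance

monitor = DriftMonitor()
monitor.observe(predicted=0.41, measured=0.47)  # twin output vs. physical sensor
if monitor.drifting():
    print("Sim-to-real gap exceeded tolerance: quarantine twin output")
```

The essential design choice: the `measured` signal must come from a sensor path the twin does not itself consume, otherwise a failing sensor blinds the monitor and the twin at the same time.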

The Briefing

Pragmatism from the Top: Palantir CEO Warns on AI ROI Palantir CEO Alex Karp has issued a warning that many current AI investments “may not create enough value to justify the cost.” In a counter-narrative to the market hype, Karp noted that AI is not a “magic wand” and that its value is only realised through difficult, operational integration. This aligns with my core view: leaders must move beyond “AI theatre” and focus on projects with a clear, defensible business case. It is hard not to notice, though, that Karp may also be positioning Palantir against the other AI market players, who look exposed to a coming bubble burst. Palantir, he argues, is a company deeply embedded in clients’ key data, delivers actual value, and is therefore not overvalued. I’d beg to differ: Palantir’s P/E is sky-high even when we take the actual value provided into account.

New Threat Vector: AI Models as Tools for State Espionage Anthropic reports it has disrupted several covert campaigns by state-linked actors, naming China as the probable source. These groups used Claude models to perform reconnaissance, write exploits, harvest credentials, exfiltrate data, and document operations, handling 80–90% of the work with only a handful of human decision points. The attackers bypassed safeguards by decomposing tasks, masking intent, and framing activity as defensive testing. This confirms that AI is now a standard tool for state-level adversaries. For enterprise leaders, this has two implications: first, it hardens the case for robust internal “Acceptable Use Policies” to prevent misuse; second, it confirms that your AI vendor’s security and monitoring practices are now a critical component of your own supply-chain security.

IT Leaders Warn of “Data Infrastructure Gap” for AI A new Salesforce report, “The Future of Data Analytics,” quantifies the gap between AI ambition and operational reality. In a survey of 1,000 IT leaders, 80% state that data analytics are “critically important” for AI success, yet 75% warn that their existing data infrastructure is “not ready” to support the demands of modern AI. The primary blockers cited are persistent data silos (55%), poor data quality (48%), and a lack of skilled analytics talent (42%). To address this, 70% of IT leaders plan to “significantly increase” investment in their data stack over the next 18 months.

The New Operational Toolkit: A Leader’s Taxonomy

The leap to spatial computing is not about 3D models. It is about models that are alive—continuously learning, adapting, and interacting with the real world. For leaders, it is essential to distinguish between these concepts.

  1. What is a Digital Twin?
    A digital twin is a living simulation of a physical asset, a process, or even a person. It is continuously fed by real-time data from sensors. Think of it as the difference between an architectural blueprint and a 24/7, data-rich video feed of the finished building, showing its structural stress, energy use, and human footfall.

  2. What is a “Simulated Operation”?
    This is what you do with a digital twin. You run ‘what-if’ scenarios on reality itself, without the real-world cost or risk. A bank can simulate a 2008-level market crash on its current, live portfolio. A utility can simulate a cascading grid failure after a storm hits. A telco can test its 5G network’s resilience against a novel cyber-attack.

  3. What is “Spatial Computing” (the “Enterprise Metaverse”)?
    This is how humans interact with the simulation. It is not about virtual-reality games; it is about a team of engineers in Warsaw ‘walking through’ a virtual factory in Singapore to solve a maintenance problem. It is about a bank’s risk committee visualising a portfolio’s risk exposure as a 3D map, not a spreadsheet. A minimal code sketch of these concepts follows.
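
To make the first two concepts concrete, here is a minimal sketch; the turbine attributes, the toy stress formula, and all numbers are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class TurbineTwin:
    """A digital twin: live state, continuously refreshed from sensor feeds."""
    blade_pitch_deg: float = 0.0
    vibration_mm_s: float = 0.0

    def ingest(self, reading: dict) -> None:
        # In production this would be a validated streaming feed, not a dict.
        self.blade_pitch_deg = reading["pitch"]
        self.vibration_mm_s = reading["vibration"]

def simulate_gust(twin: TurbineTwin, gust_m_s: float) -> float:
    """A 'simulated operation': a what-if scenario run on a copy of the twin.

    The stress model is a toy placeholder; a real twin would use a physics
    model calibrated to the asset.
    """
    hypothetical = TurbineTwin(twin.blade_pitch_deg, twin.vibration_mm_s)
    return hypothetical.vibration_mm_s * (1 + gust_m_s / 25.0)

twin = TurbineTwin()
twin.ingest({"pitch": 4.2, "vibration": 0.6})
print(f"Projected vibration in a 20 m/s gust: {simulate_gust(twin, 20.0):.2f} mm/s")
```

Spatial computing, in these terms, is the interface layer: the same twin object rendered so that engineers can walk through it rather than query it.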

Real-World Applications: Where Digital Twins Deliver Value

Governance here is complex because the technology is new and transformative. Leaders are adopting it anyway because it prevents physical and financial problems at scale.

  • Finance: The Resilient Portfolio
    Leading financial institutions are building digital twins of their entire balance sheets. This allows them to simulate, in real time, the impact of sudden, high-severity events—an interest rate shock, a geopolitical crisis, or a counterparty collapse. They move from reactive damage control to proactive resilience testing.

  • Industry & Utilities: The Zero-Failure Asset
    In manufacturing and energy, digital twins are the engine of ‘predictive maintenance.’ By simulating an asset’s entire lifecycle, companies can predict a failure months in advance, scheduling maintenance with surgical precision. This moves the goal from ‘fast recovery’ to ‘zero unplanned downtime.’

  • Telecommunications: The Self-Healing Network
    Telcos use digital twins to model their entire 5G network. They can simulate new cyber-attacks to harden defences or model network traffic during a major event. The twin allows them to optimise and secure the real network before a single customer is affected.

When the Simulation Becomes the Liability

The governance challenge begins at the exact moment a simulation is used for an operational decision. In regulated industries, this line is crossed instantly. The regulatory map is complex and creates a non-negotiable need for technical controls.

  • EU AI Act: This is the primary driver. It classifies any digital twin that ‘materially influences’ a safety-critical decision, financial outcome, or essential infrastructure as a high-risk system. It is a legal designation that mandates robust risk management, provable data governance, technical documentation, and immutable record-keeping.

  • DORA & NIS2: For finance (DORA) and critical infrastructure (NIS2), these regulations pull digital twins into the core cybersecurity and operational resilience audit. The twin is no longer an “IT project”; it is part of the essential infrastructure, and its failure is treated as an operational incident.

  • GDPR & Data Sovereignty: A digital twin replicates data. If a twin of a German factory is hosted on a US cloud, it triggers severe data sovereignty rules. Replicating data across jurisdictions without explicit controls is a direct path to GDPR penalties.

  • ISO/IEC 42001 & NIST AI RMF: Auditors will use established frameworks like ISO/IEC 42001 (for AI management systems) and the NIST AI Risk Management Framework to define “good.” These frameworks demand evidence of trustworthiness, continuous monitoring, and lifecycle risk assessment.

This regulatory pressure, combined with complex technology, creates a new, acute set of risks:

  • Risk 1: Strategic Miscalculation (The “Sim-to-Real” Gap)
    This is the “reality drift” failure. The model and the physical asset both degrade, but in different ways and at different rates. Over-reliance on a simulation that has silently diverged from ground truth can lead to a catastrophic strategic miscalculation.

  • Risk 2: Data Poisoning
    This is an adversarial risk. An adversary or disgruntled insider injects false telemetry—corrupted sensor data—to “poison” the twin’s view of reality. The simulation is subtly undermined, leading to flawed operational decisions that serve the attacker’s goals.

  • Risk 3: Auditability Gaps
    This is the consequence of poor engineering. After a failure, you have no logs. You cannot prove to a regulator why an autonomous agent in the simulation made a specific decision.

  • Risk 4: Autonomous Agent Failure
    This is when an agent within the simulation, given a broad goal like “maximise efficiency,” pursues an emergent path that is operationally brilliant but violates safety, compliance, or ethical boundaries. Without hard constraints, the agent’s “solution” becomes a new liability.
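
Risk 4 is the most tractable to engineer against: every agent proposal should pass a hard-constraint check before it can touch the real asset. A minimal sketch follows; the limit names and values are hypothetical, standing in for an asset’s certified operating envelope.

```python
class ConstraintViolation(Exception):
    pass

# Hypothetical hard limits. In a real system these come from safety cases
# and regulatory requirements, never from the optimisation objective itself.
HARD_LIMITS = {
    "blade_pitch_deg": (0.0, 25.0),
    "rotor_rpm": (0.0, 18.0),
}

def guard(action: dict) -> dict:
    """Reject any agent-proposed action that leaves the certified envelope."""
    for key, value in action.items():
        if key not in HARD_LIMITS:
            # Fail closed: an action with no certified limit is never applied.
            raise ConstraintViolation(f"no certified limit for {key}")
        low, high = HARD_LIMITS[key]
        if not low <= value <= high:
            raise ConstraintViolation(f"{key}={value} outside [{low}, {high}]")
    return action

# An efficiency-maximising agent proposes an 'operationally brilliant' setting.
try:
    guard({"blade_pitch_deg": 31.0})
except ConstraintViolation as err:
    print(f"Blocked and logged: {err}")
```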

Management Framework: Governance-as-Code

True, defensible governance is an automated, auditable system. If your data fidelity principle cannot automatically fail a simulation build when a data feed becomes corrupt, your principle does not exist.

A defensible toolkit requires these four automated controls (a minimal end-to-end sketch follows the list):

  1. At Ingest:
    The system must automatically validate data provenance. A data stream from an unverified or time-lagged sensor must be rejected or trigger an alert. The build pipeline must fail.

  2. At Simulation:
    The system must continuously score data and model fidelity. The ‘sim-to-real gap’ must be a quantified metric. If that metric exceeds a defined threshold (e.g., >2% variance), the simulation is automatically flagged as unreliable for decision-making.

  3. At Decision:
    Every autonomous decision made within the simulation by an AI agent must be logged with its context, inputs, and outputs. This is the new audit trail for regulators. Without it, you cannot prove why a decision was made.

  4. At Action:
    Automated rollback triggers are mandatory. If a post-deployment action (e.g., a model-driven adjustment) deviates from expected outcomes, the system must revert to its last known safe state.
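
Here is a minimal sketch of how these four controls compose into a single pipeline. The thresholds, field names, and file path are assumptions for illustration, not a reference implementation.

```python
import json
import time

FIDELITY_THRESHOLD = 0.02  # the example >2% variance threshold from control 2

def ingest(reading: dict) -> dict:
    """Control 1: fail the build on unverified or stale telemetry."""
    if not reading.get("provenance_verified"):
        raise RuntimeError("unverified sensor: fail the build, do not warn")
    if time.time() - reading["timestamp"] > 60:  # assumed 60 s staleness bound
        raise RuntimeError("time-lagged telemetry: reject or alert")
    return reading

def fidelity_gate(sim_value: float, real_value: float) -> bool:
    """Control 2: quantify the sim-to-real gap and gate decisions on it."""
    gap = abs(sim_value - real_value) / max(abs(real_value), 1e-9)
    return gap <= FIDELITY_THRESHOLD  # False = unreliable for decision-making

def log_decision(agent: str, context: dict, output: dict) -> None:
    """Control 3: append-only decision log; the audit trail for regulators."""
    with open("decision_log.jsonl", "a") as log:
        log.write(json.dumps({"ts": time.time(), "agent": agent,
                              "context": context, "output": output}) + "\n")

def act(setpoint: float, expected: float, apply_action, revert_to_safe_state):
    """Control 4: apply an action, then roll back if reality deviates."""
    observed = apply_action(setpoint)
    if abs(observed - expected) / max(abs(expected), 1e-9) > FIDELITY_THRESHOLD:
        revert_to_safe_state()  # automated rollback to last known safe state
```

The point of the sketch is structural: each control is a function that can fail the pipeline, not a paragraph in a policy document.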

Questions for Your Leadership Team

  1. On Risk Mapping: Have we mapped our ‘simulated operations’? Where are we using static models versus living digital twins that influence real-world decisions?

  2. On Assurance: What is our measured ‘sim-to-real gap’ for our most critical model? If we do not have a number, why not?

  3. On Controls: Can we stop ‘governance theatre’? Ask your team to show you the automated control, not the policy document. How exactly does our system detect and stop a data fidelity breach?

  4. On Auditability: If our digital twin makes an autonomous decision that leads to a failure, can we produce an immutable log to show a regulator why it made that decision? Can we prove it wasn’t negligent?

Conclusion

Governing spatial computing is not only about managing software; it is about governing a new, hybrid form of reality. The leaders who thrive will be those who treat this as a rigorous engineering discipline, not a policy exercise.

In this new world, the simulation is the business. The organisations that build robust, automated controls will own the future. Those who rely on documents are building on foundations of sand.

Until next time, build with foresight.

Krzysztof