When a new technology becomes boring, the initial chatter about existential risk and revolution slowly fades, and the work of applying it to enterprise scenarios intensifies. For Artificial Intelligence, that work results in billions of API calls. We are connecting our core business processes to third-party AI models at a tremendous rate, treating them as just another utility.
While we were worrying about sentient machines, the real, present-day liability became the connection itself. The risk is not a Hollywood plot; it is something far more familiar and corrosive: operational instability and classic security failures, amplified to a far greater degree. Your service level agreement (SLA) with your AI vendor guarantees uptime, not consistent behaviour. It insures you against a server fire, not against the model subtly changing its mind and breaking a critical process without a single system alert. Governing this new supply chain is not a matter of writing acceptable use policies; it is an engineering problem.
The Briefing#
Recent developments in the field of Artificial Intelligence indicate a broadening of its impact beyond technology and into the core of commercial practices, workforce structure, and consumer protection. Events over the past two weeks highlight this shift, with significant new regulatory actions in Europe and the United States, alongside new research into AI’s effect on the labour market. These developments signal a new phase of AI integration, where second-order consequences are now demanding attention.
EU Data Act Comes into Force, Targeting Vendor Lock-In
In Europe, the EU’s Data Act began its phased implementation on 12 September 2025, introducing significant changes to the digital marketplace.1 The legislation is designed to create a fairer data economy by granting businesses and individuals new rights over their data. Key provisions include the right for users to switch seamlessly between cloud and software-as-a-service providers, with the Act mandating a gradual elimination of switching fees. The Act also allows customers to terminate contracts with just two months’ notice, a move intended to increase competition and flexibility. A crucial component of the legislation is the new right for users to access and port operational data generated by connected Internet of Things (IoT) devices. This impacts manufacturers and service providers who previously held exclusive control over this information, compelling them to make it available to the user.
Research Shows AI’s Impact on Entry-Level Employment
New research is providing a clearer picture of AI’s tangible impact on the workforce, particularly at the entry level. A recent Stanford University study revealed that since the debut of ChatGPT in late 2022, employment for workers aged 22 to 25 in occupations highly exposed to AI has fallen by 13% relative to less exposed fields. The technology is automating foundational “grunt work” tasks such as summarizing reports and debugging code, which have traditionally been the training ground for junior professionals.2 This trend has raised concerns among researchers and workplace experts about the risk of “skill atrophy,” where a generation of workers may not develop the deep, foundational expertise that comes from manual problem-solving. Experts warn this could prevent junior workers from acquiring the nuanced judgment required for future senior leadership roles.
US Regulators Launch Inquiry into Psychological Harm from AI Chatbots
In the United States, regulators are opening a new front in AI oversight that moves beyond data privacy into the realm of psychological impact. On 11 September, the Federal Trade Commission (FTC) launched a formal inquiry into seven major technology companies, including Alphabet, Meta, and OpenAI, regarding their AI “companion” chatbots. The investigation focuses specifically on the potential for psychological and emotional harm to children and teens.3 The FTC is seeking detailed information from the companies on how they design chatbot personalities, test for negative behavioural impacts, and mitigate the risks associated with their persuasive capabilities. This action signals a significant expansion of regulatory interest, establishing a precedent for examining the psychological consequences of human-AI interaction.
The Stability Mirage#
The problem with using third-party AI is that your vendor’s definition of ‘improvement’ is not the same as yours. An enterprise requires predictable, stable behaviour from a tool integrated into a production workflow. An AI provider, on the other hand, is in a race to improve benchmarks, reduce costs, and patch safety flaws. These goals are often in direct conflict.
One of the symptoms of this disconnect is known as “model drift.” It is the phenomenon where a model’s behaviour changes over time, even if you are calling the same versioned API. Research from Stanford University gave this a sharp edge when it tracked OpenAI’s models over a few months. GPT-4’s accuracy on a set of maths problems fell drastically. During the same period, the supposedly less capable GPT-3.5 got significantly better at the same task. The vendor’s update was, for a specific user, a significant downgrade. This is not an isolated incident. Developers have reported entire projects being abandoned after a vendor update made a model “almost useless” for a task it previously handled with ease. The vendor’s release notes will speak of new features and safety updates. They will not mention that the nuance you relied on for a compliance parser has been trained away.
The consequence is a silent failure mode. An AI tool used for financial data extraction might not crash; it might simply start hallucinating numbers with greater confidence. A marketing copy generator might not stop working; it might just lose the tone of voice that matched your brand. Because the system does not throw an error, these degradations can go undetected for months, quietly corrupting data and leading to flawed business intelligence. The vendor’s SLA guarantees the API will answer the phone; it offers no assurance whatsoever about what it will say.
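One pragmatic defence against silent drift is a standing evaluation harness: a fixed set of prompts with known-good answers, replayed against the vendor's API on a schedule, with an alert when accuracy drops below an agreed floor. A minimal sketch, assuming a hypothetical call_model wrapper around whichever vendor API you actually use:

```python
# Minimal drift monitor: replay a fixed golden set against the vendor's model
# and alert when accuracy falls below an agreed floor. `call_model` is a
# hypothetical wrapper around the real vendor API.
from dataclasses import dataclass

@dataclass
class GoldenCase:
    prompt: str
    expected: str  # normalised expected answer

GOLDEN_SET = [
    GoldenCase("Extract the invoice total from: 'Total due: 1,200.00 EUR'", "1200.00"),
    GoldenCase("Is 17077 a prime number? Answer yes or no.", "yes"),
]

ACCURACY_FLOOR = 0.95  # agreed minimum before someone gets paged

def call_model(prompt: str) -> str:
    """Placeholder for the real vendor call (OpenAI, Anthropic, etc.)."""
    raise NotImplementedError

def normalise(text: str) -> str:
    return text.strip().lower().rstrip(".")

def run_drift_check() -> float:
    passed = sum(
        normalise(call_model(case.prompt)) == normalise(case.expected)
        for case in GOLDEN_SET
    )
    accuracy = passed / len(GOLDEN_SET)
    if accuracy < ACCURACY_FLOOR:
        # Hook this into real alerting (PagerDuty, Slack, email) in practice.
        print(f"ALERT: golden-set accuracy {accuracy:.0%} below floor {ACCURACY_FLOOR:.0%}")
    return accuracy
```

Run on a schedule, this turns a vendor's silent behaviour change into a visible, dated alert rather than a surprise discovered months later in the data.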
The Security Illusion#
The most probable way an AI system will cause a major security incident has little to do with sophisticated AI-specific attacks. The path of least resistance for an adversary is, as ever, the simplest one. They will not waste time crafting elaborate prompts to trick a model; they will find an API key left in a public code repository. Recent history provides a catalogue of these mundane failures. An exposed API key gave outsiders access to xAI’s private models for two months. Dropbox suffered a breach when an attacker accessed API keys in a production environment via a compromised service account. These are not new problems. But the prize at the end is now far greater. An API key for a simple weather service is a nuisance; an API key for a model connected to your customer database is a catastrophe.
Compounding this is the well-intentioned insider. The most famous example involved Samsung employees who, in an effort to be more productive, pasted confidential source code and internal meeting notes into ChatGPT to have them fixed or summarised. They were not malicious; they were simply using a powerful tool to do their job. Without technical guardrails, human error is inevitable. A policy document stating that employees should not leak secrets is a comforting piece of theatre, but it is not a control.
This reframes the security challenge. The focus must shift from the exotic to the pragmatic. The Open Worldwide Application Security Project (OWASP) now maintains a top ten list of risks for large language model applications. The most critical threats are not abstract, but tangible business risks.
| Selected OWASP LLM Risk | Description for Executives |
|---|---|
| LLM01: Prompt Injection | Tricking the AI into performing an unintended action, such as a customer service bot issuing unauthorised discounts. |
| LLM06: Sensitive Information Disclosure | The model accidentally revealing confidential data from its training set or the current conversation. |
| LLM08: Excessive Agency | The AI is granted too much power, allowing it to take damaging actions in other systems (e.g., deleting files, sending emails). |
| LLM04: Model Denial of Service | Overwhelming the model with resource-intensive requests, causing service degradation and high costs for legitimate users. |
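None of these risks requires exotic defences. For excessive agency in particular, a blunt but effective control is to let the model request actions while executing only those on an explicit allow-list. A minimal sketch, with hypothetical tool names, of what that check can look like:

```python
# Guard against excessive agency: the model may *request* an action, but only
# actions on an explicit allow-list, handled by human-written code, are executed.
ALLOWED_TOOLS = {
    "lookup_order_status",    # read-only, safe
    "create_support_ticket",  # writes, but low blast radius
}
# Deliberately absent: "issue_refund", "delete_record", "send_email".

def execute_tool_request(tool_name: str, arguments: dict) -> str:
    if tool_name not in ALLOWED_TOOLS:
        # Refuse and log rather than trusting the model's judgement.
        return f"Refused: '{tool_name}' is not an approved action."
    handlers = {
        "lookup_order_status": lambda args: f"Status for {args.get('order_id')}: shipped",
        "create_support_ticket": lambda args: f"Ticket created: {args.get('summary')}",
    }
    return handlers[tool_name](arguments)
```

The point is architectural: the model proposes, deterministic code disposes, and anything with a large blast radius simply never appears on the list.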
The Control Imperative: The AI Gateway#
Decentralised adoption of AI creates hundreds of unmonitored, ungoverned connections to the outside world. The way to manage this in an enterprise is to centralise access through a single, architectural chokepoint: an internal AI Gateway. The concept is simple — instead of allowing every developer and application to connect directly to third-party vendors, all traffic is forced through one, and only one, managed pipeline. This gateway is not a single product, but an architectural pattern that acts as a stable abstraction layer between your internal systems and the volatile external AI market.
Its strategic purpose is threefold:
1. Vendor Agnosticism: It provides a unified interface. If a vendor’s model degrades or becomes too expensive, you can swap it for a competitor’s without rewriting every application that uses it (a minimal sketch follows this list).
2. Centralised Observability: It gives you a single place to log every request, monitor every cost, and audit every interaction. Operational blindness is replaced with a single pane of glass.
3. Technical Enforcement: It transforms abstract policies into automated, auditable controls. It is the place where you enforce budgets, redact sensitive data, and block threats before they reach the outside world.
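To make the first point concrete, here is a minimal sketch of the pattern, with hypothetical provider classes standing in for real vendor SDKs; applications depend only on the gateway's interface, so swapping or failing over between vendors becomes a configuration change rather than a rewrite:

```python
# Minimal vendor-agnostic gateway interface: applications call the gateway,
# never a vendor SDK directly. The provider classes are hypothetical stand-ins.
from abc import ABC, abstractmethod

class ModelProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorAProvider(ModelProvider):
    def complete(self, prompt: str) -> str:
        # A real implementation would wrap vendor A's SDK call here.
        return "response from vendor A"

class VendorBProvider(ModelProvider):
    def complete(self, prompt: str) -> str:
        # A real implementation would wrap vendor B's SDK call here.
        return "response from vendor B"

class AIGateway:
    def __init__(self, primary: ModelProvider, fallback: ModelProvider):
        self.primary = primary
        self.fallback = fallback

    def complete(self, prompt: str, user: str) -> str:
        # This is the single chokepoint: audit-log the user and prompt,
        # redact sensitive data, and enforce budgets before forwarding.
        try:
            return self.primary.complete(prompt)
        except Exception:
            # Automated failover: degrade to the secondary vendor, not to an outage.
            return self.fallback.complete(prompt)

# Swapping vendors is a configuration change, not an application rewrite.
gateway = AIGateway(primary=VendorAProvider(), fallback=VendorBProvider())
print(gateway.complete("Summarise this contract clause.", user="finance-team"))
```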
Implementing a gateway involves a classic build-versus-buy decision. One can use open-source tools, cloud-native services from providers like Azure or AWS, or dedicated commercial platforms. The choice is secondary — the primary goal is simply to have one.
A Framework for Control: The Gateway Checklist
An effective AI Gateway is not passive plumbing. It is an active control plane that enforces rules. Regardless of the implementation, it must provide the following capabilities. This is not a feature list; it is a baseline for defensible governance:
- Immutable Audit Log: It must capture a complete record of every transaction: the prompt, the response, the token count, the latency, and the user who made the call. This is non-negotiable for compliance and debugging.
- Automated Data Redaction: It must be able to scan outbound prompts for sensitive information (personally identifiable information, financial data, internal project names) and strip it out before it leaves your network; a minimal sketch follows this checklist.
- Credential Vault: It must manage all third-party API keys centrally, abstracting them away from developers. Keys should never be stored in application code.
- Cost Controls: It must enforce hard spending caps and token-based rate limits on a per-user, per-team, or per-project basis. This prevents a bug or an attack from turning into a multi-million-pound bill.
- Semantic Caching: To reduce cost and latency, it should cache responses to common prompts, avoiding redundant API calls to the vendor.
- Automated Failover: If a primary model provider suffers an outage or severe performance degradation, the gateway must automatically re-route traffic to a secondary model to ensure business continuity.
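As an illustration of the redaction requirement above, here is a minimal sketch using simple regular expressions; the patterns are deliberately simplified assumptions, and a production gateway would lean on proper PII-detection tooling:

```python
# Minimal outbound redaction pass: scrub obvious sensitive patterns from a
# prompt before it leaves the network. The patterns are simplified
# illustrations, not production-grade PII detection.
import re

REDACTION_RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,19}\b"), "[REDACTED_CARD]"),
    (re.compile(r"\bProject\s+[A-Z][a-z]+\b"), "[REDACTED_PROJECT]"),  # internal codenames
]

def redact(prompt: str) -> str:
    for pattern, replacement in REDACTION_RULES:
        prompt = pattern.sub(replacement, prompt)
    return prompt

# The gateway applies this to every outbound prompt.
print(redact("Summarise Project Falcon status and email jane.doe@example.com"))
# -> "Summarise [REDACTED_PROJECT] status and email [REDACTED_EMAIL]"
```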
Conclusion#
The defining challenge of this phase of AI adoption is not about mastering the technology itself, but about mastering its integration. The risks are not in the model, but in the connection. Liability now flows through the API. An organisation that allows ungoverned, direct connections to third-party AI vendors is exposing itself to unacceptable operational and security risks. Hope is not a strategy, and a policy document is not a control. The only defensible position is to implement a technical architecture that re-asserts control. By forcing all traffic through a single, intelligent gateway, you transform a chaotic supply chain into a managed, stable, and auditable internal service. This is the necessary, pragmatic engineering required to build on an unstable foundation.
Until next time, build with foresight.
Krzysztof
