Issue #42 — Cross-Sector Patterns: What Regulated Industries Teach the Rest

Dear Reader,

In Issues #36 through #39, we examined AI deployment in four regulated sectors: banking, telecoms, pharmaceuticals, and the public sector. Each has its own regulator, its own acronyms, its own version of institutional caution. KNF in Warsaw. EBA in Paris. The FDA in Silver Spring. A Dutch tax authority answering to no one for seven years.

Each sector has its own governance problems shaped by the pressure of its particular regulator. But beneath those differences lies a shared layer: the same five failures appear in every sector, almost word for word, regardless of the regulatory regime above them.

This issue maps the shared patterns. If you operate outside banking, telecoms, pharma, and the public sector, the findings still apply — with one difference: you have slightly more time to build governance before regulation forces the change.

Five structural failures

1. You cannot govern what you have not counted.

In April 2025, a survey by the Polish Banking Association’s FTB working group found that only one bank in five had even a basic list of its AI systems, let alone a complete, risk-assessed inventory. Poland’s public sector has no AI system register. No Polish pharmaceutical company has published a system inventory or classification against the EU AI Act. In telecoms, most operators have not completed a formal Annex III classification exercise, despite deploying AI agents in network operations, customer service, and fraud detection simultaneously.

Without an inventory, there is no governance. You cannot classify risk in systems you have not catalogued. You cannot assign oversight to processes you have not mapped. The EU AI Act’s August 2026 compliance deadline requires a functioning inventory as a precondition. Across four sectors and hundreds of organisations, the precondition is not met.
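To make the precondition concrete: below is a minimal sketch, in Python, of the record an inventory needs before classification or oversight can even begin. The field names and risk classes are illustrative assumptions, not terms lifted from the AI Act.

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"                   # Annex III use cases
    LIMITED = "limited"
    MINIMAL = "minimal"
    UNCLASSIFIED = "unclassified"   # the common real-world state

@dataclass
class AISystemRecord:
    name: str
    business_owner: str              # a named individual, not a department
    vendor: str | None               # None for systems built in-house
    purpose: str
    risk_class: RiskClass = RiskClass.UNCLASSIFIED
    annex_iii_assessed: bool = False
    human_oversight_role: str | None = None

def governance_gaps(inventory: list[AISystemRecord]) -> list[str]:
    """List the systems that would block a compliance claim in August 2026."""
    return [
        s.name for s in inventory
        if s.risk_class is RiskClass.UNCLASSIFIED or not s.annex_iii_assessed
    ]
```

An inventory in this shape answers the first leadership question at the end of this issue in minutes, not 48 hours.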

2. Governance arrives after deployment, not before.

ING assesses more than 100 risk factors before any generative AI system reaches production. That is the exception. The norm, in every sector examined, is deployment first and governance under pressure. The Dutch toeslagen algorithm ran for seven years before a court intervened. Poland’s own STIR system freezes bank accounts on classified criteria with no published audit. Telecoms operators have deployed AI agents into network operations (47% have reached operational autonomy for specific AIOps use cases) while only 21% report adequate governance for autonomous agents.

The pattern is consistent: the cost of building governance before deployment is visible and immediate. The cost of not building it is invisible until the system fails publicly. Organisations systematically choose the cheaper option today and the more expensive one later.

3. Human oversight exists on paper and nowhere else.

Every sector describes oversight differently, but the problem is the same. Article 26 of the AI Act requires “meaningful” human oversight of all high-risk AI systems, regardless of sector. In banking, a team that reviews flagged cases once a week does not satisfy it. In pharma, a clinician who has access to override an AI recommendation but never exercises that access is not providing oversight. They are providing liability cover. In telecoms, a retention agent who calls a customer flagged by a churn model is not a governance mechanism unless that agent has documented authority and real ability to override the model’s recommendation. In the public sector, STIR freezes bank accounts for 72 hours without notifying the account holder, and the algorithm’s decision criteria are classified as state secrets by design. NIK has not conducted a single audit of the system.

One verification question: when was the last time anyone in the oversight process actually overrode a system decision? If the answer is “never,” the oversight is fiction.
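The question is mechanically checkable wherever decisions are logged. A sketch, assuming a hypothetical decision log in which each entry records both the model’s recommendation and the final outcome:

```python
from datetime import datetime

# Hypothetical log entries; the field names are assumptions.
decision_log = [
    {"system": "credit-scoring", "timestamp": datetime(2025, 11, 3),
     "model_decision": "reject", "final_decision": "reject",
     "reviewer": "j.kowalski"},
    # ... in practice thousands of rows, often identical in both columns
]

def last_override(log: list[dict]) -> datetime | None:
    """Return when a reviewer last changed a model decision, or None."""
    overrides = [entry["timestamp"] for entry in log
                 if entry["final_decision"] != entry["model_decision"]]
    return max(overrides, default=None)

if last_override(decision_log) is None:
    print("No override on record: the oversight exists on paper only.")
```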

4. The system cannot explain its own decisions.

Three of the four sectors have live exposure to GDPR Article 22, the prohibition on solely automated decisions with significant effects on individuals. This is not an EU AI Act obligation arriving in August 2026; it binds today. Banking credit scoring models produce accept/reject decisions that affect individuals. Telecoms churn scores trigger differential treatment (better offers for high-value customers, degraded service for predicted churners) without a documented human decision point. Public sector algorithms determine benefit eligibility and tax compliance assessments. Pharmaceutical AI influences clinical pathways too, but sits just outside the article’s reach so long as a clinician nominally remains in the loop.

In each case, the affected individual has a legal right to an explanation. In each case, the organisation’s ability to provide one ranges from limited to non-existent. The exposure is not theoretical.
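The structural fix is to capture the explanation at decision time, because it cannot be reconstructed afterwards. A minimal sketch, with assumed field names, of a decision record that can honour an Article 22 request:

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str                  # e.g. "reject", "flag", "downgrade"
    reason_codes: list[str]       # human-readable factors, stored at decision time
    model_version: str
    human_review_available: bool  # the right to obtain human intervention

def explain(decision: AutomatedDecision) -> str:
    """Assemble the explanation a data subject is entitled to request."""
    if not decision.reason_codes:
        raise ValueError("No reason codes were stored; the explanation "
                         "right cannot be honoured after the fact.")
    return (f"Outcome '{decision.outcome}' (model {decision.model_version}) "
            f"was driven by: {', '.join(decision.reason_codes)}.")
```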

5. The vendor deployed, and the organisation assumed compliance transferred with the invoice.

Across sectors, a recurring pattern: a third-party AI system is procured, deployed, and operated, and the deploying organisation assumes that compliance responsibility sits with the vendor. It does not. Under the AI Act, the deployer carries its own obligations regardless of what the vendor contract says. Under DORA, LLM API calls from banking systems constitute ICT third-party dependencies that must appear in the institution’s third-party register. Watson for Oncology was deployed globally on training data that no hospital had independently audited. Telecoms operators buy point solutions from vendors who sell use cases without accountability for the portfolio.

The question “Does your AI vendor appear in your compliance register?” has an uncomfortable answer in most organisations: nobody has checked.
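Checking is trivial once both lists exist, which is precisely why its absence is telling. A sketch with hypothetical vendor names, cross-referencing the AI inventory against the third-party register DORA requires:

```python
# Hypothetical data: vendors behind deployed AI systems vs. vendors
# already listed in the institution's third-party (ICT) register.
deployed_ai_vendors = {"llm-api-provider", "churn-model-vendor", "aiops-platform"}
third_party_register = {"aiops-platform", "core-banking-vendor"}

unregistered = deployed_ai_vendors - third_party_register
if unregistered:
    print("AI vendors missing from the compliance register:",
          ", ".join(sorted(unregistered)))
```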

Where sectors diverge

The five failure patterns are shared. The consequences are not.

The exit asymmetry. If a bank’s credit model treats you unfairly, you try another bank. If a telecoms provider degrades your service, you switch. In pharmaceuticals, the treating physician can override the algorithm’s recommendation. But if a government algorithm freezes your bank account — as STIR does in Poland — there is no competing tax authority to appeal to. The Dutch toeslagen scandal affected 26,000 families, led to EUR 30,000 per family in compensation, and brought down the cabinet. Australian Robodebt issued 469,000 debt letters, may have contributed to suicides, and cost AUD 1.56 billion in settlement. The UK’s A-level algorithm downgraded 40% of grades and was reversed within four days under political pressure.

Algorithmic opacity plus a subject with no alternative — that is what turns governance failures from compliance problems into political crises. The risk is categorically different when the citizen has nowhere else to go.

Deployment order is sector-dependent. Telecoms has the clearest logic: start with AIOps because it generates the structured telemetry data that feeds every subsequent use case — churn prediction, customer service, network planning. The data flywheel only works if use cases are connected. In pharmaceuticals, regulation dictates starting with manufacturing, where the burden is lowest and data quality highest, not clinical decision support. In banking, there is no natural order, but inventory must come first. The public sector had no choice in sequencing — systems are already deployed, so governance is built in reverse.

Opacity is a design choice in one sector, a failure mode in the rest. In banking, vendor LLMs add opacity through third-party dependency, addressable with procurement controls. In pharma, opaque models produce unexplainable clinical recommendations, a system failure. In telecoms, siloed data systems create accidental opacity. In the public sector, STIR’s algorithm is classified as a state secret by policy. Only in government is opacity a deliberate governance choice rather than an engineering problem.

Why non-regulated industries should care now

In every sector we examined, the organisations that built governance voluntarily did so well before the regulatory requirement. ING, Nordea, and BBVA had AI governance infrastructure before the AI Act existed. When regulation arrived, they had a working system. The rest started catching up.

The same pressure is reaching non-regulated industries through two routes. First, procurement: regulated clients (banks, pharma companies, government agencies) are beginning to require governance documentation from their suppliers — AI Act obligations flow down the supply chain. Second, liability: Fortune reports that 64% of companies with annual turnover above one billion dollars have lost more than a million dollars to AI failures, and 80% of organisations report risky AI agent behaviours. The question is not whether governance requirements will reach non-regulated industries but whether they arrive as regulation, as procurement requirements, or as litigation.

The voluntary governance advantage

The most mature governance model from this series is Canada’s Directive on Automated Decision-Making, in force since 2019. Four impact levels with escalating obligations. At the highest: a mandatory human decision-maker plus a published algorithmic impact assessment. It has been operational for seven years. Neither Polish nor EU law has anything comparable yet.

The practical takeaway is not to wait for regulation. Nordea’s head of AI governance put it plainly: “If I don’t embrace governance, I should go work for a startup.” The remark was about regulatory survival. But read it differently and it is about competitive positioning. The organisations that built governance infrastructure before they were forced to survived regulation more easily and turned it into a competitive advantage.

The Deloitte State of AI 2026 report puts governance readiness at 30% across all enterprises, below technical infrastructure at 43%, below data management at 40%, and well below tool access at 60%. Tool access is twice as high as governance readiness. That disparity is where the next wave of AI failures will originate, regardless of sector.

Briefing

97% of enterprises expect a major AI agent incident within a year

The 2026 Agentic AI Security Report from Arkose Labs, based on a global survey of 300 enterprise leaders across security, fraud, identity and AI functions, found that 97% expect a material AI-agent-driven security or fraud incident within the next 12 months. Nearly half expect one within six months. The gap: only 6% of security budgets are allocated to AI agent risk. Over half of organisations have no formal AI agent governance controls in place. 87% of respondents agree that AI agents operating with legitimate credentials pose a greater insider threat than human employees. The report’s framing is direct: “The technology outran the controls.”

Shadow AI is now an executive problem

Forbes published a piece this week arguing that shadow AI is structurally different from shadow IT. Shadow IT was about unsanctioned infrastructure. Shadow AI is about unsanctioned cognition: data is not just moved, it is transformed. The prompt is the new exfiltration channel: context, pricing logic, competitive roadmaps leave the organisation in a copy-paste. And when agents have tool access, “generate” becomes “do.” The author’s test for readiness: if your organisation cannot answer “Which AI tools are being used today?” and “What data is flowing into prompts?” — you are not governing AI. You are guessing.
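The two test questions can only be answered by logging at the point where prompts leave the organisation. A minimal sketch of such an audit record, assuming a gateway sits in front of external AI tools; the schema is illustrative, not from the Forbes piece:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PromptAuditEvent:
    timestamp: datetime
    user: str
    tool: str                   # answers: which AI tools are in use today?
    data_classes: list[str]     # answers: what data is flowing into prompts?

def readiness_report(events: list[PromptAuditEvent]) -> dict:
    """Summarise tool usage and sensitive-data exposure from gateway logs."""
    tools = {e.tool for e in events}
    sensitive = [e for e in events if "confidential" in e.data_classes]
    return {"tools_in_use": sorted(tools),
            "sensitive_prompt_events": len(sensitive)}
```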

AI agents are an identity problem

A Security Boulevard analysis frames AI agent risk as fundamentally an identity management problem. AI agents operate through service accounts, IAM roles, and API keys, the same infrastructure as any machine identity. The finding that ties it together: 92% of cloud identities are overprivileged, and AI agents often end up with more access than the developers who built them. The proposed solution (treat AI agents as first-class identities subject to least privilege and just-in-time access) maps directly to the governance patterns this issue examines.
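The proposed treatment translates into familiar mechanics: no standing API keys, a named scope list, and credentials that expire on their own. A minimal sketch of a just-in-time grant; the function and its fields are illustrative assumptions, not the article’s implementation:

```python
from datetime import datetime, timedelta, timezone
import secrets

def issue_agent_credential(agent_id: str, scopes: list[str],
                           ttl_minutes: int = 15) -> dict:
    """Mint a short-lived, minimally scoped credential for one agent task."""
    return {
        "agent_id": agent_id,
        "token": secrets.token_urlsafe(32),
        "scopes": scopes,        # least privilege: every scope named explicitly
        "expires_at": datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    }

# The churn-outreach agent gets read access to the CRM, nothing else,
# and only for the next fifteen minutes.
cred = issue_agent_credential("churn-outreach-agent", ["crm:read"])
```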

Questions for your leadership team

  1. Does your organisation maintain a current inventory of all AI systems in production, including third-party vendor tools and employee-adopted AI? Could you produce it within 48 hours?
  2. For each system on that list: who is the named individual accountable if the system produces harm?
  3. When was the last time a human in your oversight process actually overrode an AI recommendation? If the answer is “never,” what does that tell you about the oversight?
  4. Do your AI vendor contracts appear in your compliance register? Does your procurement team know they should?
  5. If an EU AI Act-style regulation applied to your sector tomorrow, how much of your current governance documentation would survive an audit?

The August 2026 deadline applies to high-risk systems, but the questions apply to everyone who uses or prepares to use AI in enterprise environments.

Stay balanced,
Krzysztof Goworek