Dear Reader,
Poland ranked first in the EU for mobile and internet banking transactions in 2025. BLIK, the country’s instant payment system, processed 2.9 billion transactions worth €104.9 billion last year — figures that place it among the most active retail payment systems on the continent. PKO Bank Polski runs more than 80% of its credit decisions through machine learning models. Bank Pekao processes 1.5 million documents per quarter through AI. ING Bank Śląski launched a GenAI assistant for corporate clients in December 2025.
No Polish bank has published a standalone AI governance framework.
This is not a uniquely Polish problem. Across Europe, the pattern is the same: sophisticated AI deployments, early-stage governance infrastructure. The EBA (European Banking Authority) noted in November 2025 that most EU banks lack sufficient data governance frameworks adapted to AI-specific requirements. Bank compliance teams are largely aware of the August 2026 deadline — but awareness and operational readiness are different problems. In Poland, the gap between what AI systems do in production and what the governance documents say is particularly visible. That will matter when KNF inspections start.
The window is six months.
The Deadline Most CROs Have Not Operationalised
On 2 August 2026, Article 26 of the EU AI Act becomes enforceable for deployers. For banks, the core implication is specific: any AI system that evaluates the creditworthiness of a natural person is a high-risk AI system under Annex III, Section 5(b). That covers a wide range of modern credit scoring infrastructure — neural networks, ensemble methods, and any LLM deployed in a lending decision chain.
What Article 26 requires from a deployer:
- A complete inventory of all high-risk AI systems in production
- Human oversight — not nominal, but a competent person with actual authority to intervene
- Continuous monitoring, including anomaly detection
- Operational logs retained for at least six months
- Transparent communication to customers when AI affects decisions about them
That is the minimum for a supervisory review, not the aspiration.
The inventory requirement alone may trip up most banks. In April 2025, the Forum Technologii Bankowych at ZBP published a 62-page practitioner guide to AI Act compliance — the only Polish-language operational document in this space. Written by model validation and risk teams from Polish banks alongside legal counsel and technology practitioners who work with these systems in production, it identified the inventory problem as the primary gap. At the time of its research, only one bank in five had a full inventory of its AI systems. You cannot govern what you have not counted.
The same guide addressed one of the Act’s practical ambiguities: which ML techniques actually fall under the high-risk classification, and which do not. The line between a “traditional software system” and an “AI system” under the Act is less obvious than it appears, and the classification of specific techniques has been subject to ongoing industry discussion with regulators. The EU Commission was mandated to publish guidelines on classification scope — the current position should be verified against those guidelines before committing any system to a compliance category. What remains constant is that classification cannot be assumed: every system in the portfolio requires documented analysis.
The API Exposure
There is a separate compliance exposure that banks building on commercial LLMs need to address specifically.
A bank that uses OpenAI, Azure OpenAI, or any equivalent service for any step in a credit decision — document summarisation, customer communication, scoring commentary — becomes a high-risk AI deployer under Article 26. Same obligations as if it had built the model itself. If the bank substantially modifies or white-labels the output, it may be reclassified as a provider under Article 25, with additional requirements including EU database registration and conformity assessment.
DORA adds a separate layer. Article 28 treats an LLM API as an ICT third-party dependency — not a routine SaaS subscription. It requires documented contractual obligations with the provider, exit strategies, concentration risk monitoring, and audit rights. DORA has applied since 17 January 2025. Banks that have integrated LLM APIs without adding them to the DORA third-party register are already non-compliant, regardless of where the AI Act deadline falls.
The practical test: does your LLM vendor appear in your DORA third-party register? If not, that gap predates August 2026.
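That test can be automated against outbound traffic logs. A minimal sketch, where the hostname-to-vendor mapping and the register contents are hypothetical examples rather than real data:

```python
# Hypothetical hostname -> vendor mapping; in practice this is built from
# egress/proxy logs, which record which external APIs production systems call.
VENDOR_BY_HOST = {
    "api.openai.com": "OpenAI",
    "example-bank.openai.azure.com": "Azure OpenAI",
    "bedrock-runtime.eu-central-1.amazonaws.com": "Amazon Bedrock",
}

def unregistered_llm_vendors(egress_hosts: set[str], dora_register: set[str]) -> set[str]:
    """Vendors observed in outbound API traffic but absent from the DORA third-party register."""
    observed = {VENDOR_BY_HOST[h] for h in egress_hosts if h in VENDOR_BY_HOST}
    return observed - dora_register
```

A non-empty result is the gap the paragraph above describes: a live ICT dependency that supervision can see in your traffic but not in your register.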
Why Nobody Has a Playbook
The EBA’s November 2025 analysis of the AI Act against existing EU banking law reached an unusual conclusion: the EBA sees no immediate need for new guidelines. It will focus on supervisory cooperation and may publish operational implementation guidance in 2026 or 2027. Until then, banks are expected to build compliance on the Act’s text and a non-binding EBA factsheet.
The AI Act specifies what must be achieved. It does not specify how. The most operationally complete AI risk management framework currently available is NIST AI RMF 1.0 — a US standard, built around four functions (Govern, Map, Measure, Manage), with a finance sector profile. Many European banks with structured AI governance programmes are using NIST in practice to fill the operational gap. BBVA is a notable exception, combining BCBS 239 with the AI Act text as a dual framework.
The more immediately useful reference document is BaFin’s guidance on ICT risks in AI use at financial entities, published 30 January 2026. Non-mandatory, but the only document from a major EU supervisor that provides operational implementation detail for DORA plus AI across the full model lifecycle: data acquisition, development, production operation, and retirement. For Polish banks, it is the most credible available reference until KNF publishes its own guidance.
KNF is paying attention. Its 2026 supervisory priorities name AI explicitly, with credit scoring processes specifically identified.
What Operational Governance Looks Like
Three European banks have published enough operational detail to serve as reference points.
ING Group’s GenAI risk assessment covers more than 100 distinct risk factors before any system reaches production. The stated principle: “Governance cannot live in policy documents or slide decks — it must be embedded directly into the product.” Monitoring is automated and continuous, not periodic.
Nordea built a modular GenAI platform on Amazon Bedrock, now used by 10,000 employees. The design principle is certifiable components: governance is applied to small, discrete parts of the system independently, then accumulated. They build once and reuse, rather than re-governing every deployment. Their Head of AI Adoption: “If I don’t embrace governance, I should go work for a startup.”
BBVA assembled 2,500 data scientists into a single unit and built a global model inventory with continuous monitoring. They treat model risk management and regulatory AI compliance as the same discipline, governed by the same infrastructure.
The common thread: governance decisions are made at the architecture level, before the system is built. The version that does not work — an ethics board that reviews systems after deployment — produces documentation. It does not produce what Article 26 requires.
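The principle that governance lives in the product rather than in slide decks can be made literal as a release gate. A minimal sketch, assuming a hypothetical deployment pipeline in which every model ships with a governance manifest; the required keys are illustrative:

```python
# Illustrative manifest keys; a real bank's list would come from its own
# Article 26 mapping, not from this sketch.
REQUIRED_KEYS = {
    "owner",
    "annex_iii_classification",
    "human_overseer",
    "monitoring_dashboard",
    "log_retention_days",
}

def release_gate(manifest: dict) -> None:
    """Block deployment when the governance manifest is incomplete.

    Raising is the point: a missing field stops the release,
    rather than creating a review item for an after-the-fact board.
    """
    missing = REQUIRED_KEYS - manifest.keys()
    if missing:
        raise RuntimeError(f"deployment blocked; incomplete manifest: {sorted(missing)}")
```

The design choice is the one ING and Nordea describe: the check runs before the system exists in production, so the documentation Article 26 requires is produced as a side effect of shipping, not reconstructed afterwards.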
The Subsidiary Gap
The AI Act applies to subsidiaries. Leasing companies, factoring companies, and consumer finance arms that use ML models for credit decisions face the same Annex III classification as their parent banks — with materially fewer governance resources and, typically, smaller compliance teams.
Every major Polish banking group has subsidiaries that extend credit and use automated models to do it. Parent group AI strategy does not automatically translate into subsidiary-level Article 26 governance. KNF supervision covers the group structure, and gaps at subsidiary level show up in group-level inspections.
This is where the distance between AI strategy documents and operational governance is most pronounced. Large banks have built internal AI and compliance teams capable of running their own governance programmes — external advisory adds limited value at that level, and bank teams know it. Subsidiaries are different: the internal capacity is not there, the regulatory exposure is identical to the parent, and the advisory firms that dominate banking AI governance work at price points subsidiaries cannot sustain. The gap between regulatory obligation and available support is widest exactly where the institutional resources are thinnest.
Questions for Your Leadership Team
What is in your AI system inventory? If you cannot list every AI model in production within 30 minutes, you are not ready for the Article 26 audit.
Which systems are Annex III high-risk? Credit scoring AI almost certainly is. Fraud detection AI is explicitly excluded. Do you have a documented classification for each production system — and has it been reviewed against current Commission guidance?
What happens when you call an LLM API? “Our vendor handles the compliance” is not how the law works. You are the deployer.
Which operational framework are you using? NIST RMF, ISO 42001, BaFin’s January 2026 guidance, or your own synthesis. There is no required answer. There must be an answer.
What does your human oversight actually look like? Article 26 requires a competent person with authority to intervene. “A team reviews flagged cases” starts an answer. It does not complete one.
The Briefing
The EU Commission Missed Its Own Deadline on Classification Guidance
The AI Act required the European Commission to publish guidance under Article 6 — the clause that determines what counts as a high-risk AI system — by 2 February 2026. It missed the deadline. A draft for consultation is expected by end of February, with formal adoption likely in March or April. For banks completing their system inventories now, this creates a specific problem: you cannot finalise the high-risk classification of your models against a standard that does not yet exist. The situation is further complicated by the EU AI Omnibus proposal, currently in trilogue, which would shift the Annex III compliance deadline from August 2026 to December 2027. Final text is not expected before late May — meaning the current legal deadline remains August 2026, and the Omnibus is not a green light to pause.
DORA Year One: ICT Risk Is the Worst-Scored Category in European Banking Supervision
One year after DORA became applicable, the ECB’s 2025 SREP results show that operational risk and ICT risk received the lowest average scores across all supervisory criteria — the weakest-performing dimension systemically. On 18 November 2025, IBM, Accenture, AWS EMEA, and Microsoft Ireland were formally designated Critical Third-Party Providers, placing them under direct ESA oversight. The ECB’s 2026 inspection agenda includes two on-site campaign waves on cybersecurity and third-party risk. For any bank running AI workloads or LLM APIs on these platforms: the DORA register is now a live supervisory data source, and gaps will surface in 2026, not 2027. (IBM analysis)
59% of European Banks Now Have Dedicated AI Compliance Budgets
ComplyAdvantage’s State of Financial Crime 2026 (600 senior decision-makers) found that 59% of European financial services firms have specific AI budgets and active projects — versus 46% in North America. The driver is the August 2026 deadline, not competitive ambition: firms are investing specifically to ensure AI models are explainable and auditable. The report confirms that AI-powered transaction monitoring and creditworthiness evaluation are classified as high-risk, with mandatory transparency and human oversight requirements. The firms moving fastest are not the ones spending the most — they are the ones that started with the inventory problem.
The Practical Window
Legal firms will tell you what the rules say. The harder problem is building the operational system that survives a KNF inspection: model inventory, classification records, oversight architecture, monitoring logs, a DORA third-party register that includes every LLM API, and documentation that ties each system to a governance decision.
The FTB working group’s April 2025 conclusion — written by practitioners who manage these models in production — was that the gap between Polish banking’s AI capabilities and its governance infrastructure is real but closable. Nearly a year on, with August 2026 now six months away, the gap has narrowed for some and grown for others. Three steps cover most of the ground: list every AI system in production, classify each against Annex III using current Commission guidance, and check whether your LLM API vendors appear in your DORA third-party register. Everything else follows from those three steps.
The window is there. It is not open indefinitely.
Until next issue,
Krzysztof
Sources: FTB ZBP AI Working Group Report (April 2025) · EBA AI Act Mapping (November 2025) · BaFin AI/DORA Guidance (January 2026) · EU AI Act Annex III and Article 26 (Official Journal, June 2024) · DORA Article 28 · ING FFNews interview (February 2026) · Nordea AWS case study (December 2025) · BBVA Responsible Innovation (March 2025) · KNF Supervisory Priorities 2026 · Deloitte European Financial Centres Power Index 2025 · IAPP AI Act classification analysis (February 2026) · IBM DORA Year One (February 2026) · ComplyAdvantage State of Financial Crime 2026
