Issue #46 — The Sequencing Imperative


Dear Reader,

BCG’s 2025 report on enterprise AI value generation contains a finding that most organisations are not acting on. Companies in the top quartile prioritise an average of 3.5 use cases. Everyone else attempts 6.1. The leaders expect to generate more than twice the return on AI investment of their peers.

The instinct in most enterprises is the opposite: launch pilots across multiple departments, let teams experiment, see what sticks. The data says this instinct is wrong — not because breadth costs more than depth, though it does, but because of data dependency.

Every AI deployment produces data: logs, decisions, flagged exceptions, process outputs. Whether that data makes the next deployment faster or slower depends entirely on what the first one was built to produce. Deploy credit-scoring AI before you have inventoried your systems, and you are building models on data connections you do not know exist. Deploy customer-facing AI before internal operations AI has generated labelled process data, and you are building on nothing.

In July 2024, McDonald’s ended its AI ordering partnership with IBM. Taco Bell pulled its voice pilot from over 100 locations. Both had chosen customer-facing applications as their entry point — the most complex, data-dependent, governance-demanding position in the AI stack — before the operational layers beneath existed. The technology failed, but the deeper error was the starting point at which it was asked to work.

This issue is about the question that precedes every AI deployment and that most organisations skip: not whether to deploy, but in what order, and why the order compounds.

The portfolio trap

The reason BCG’s top performers run fewer initiatives is not discipline for its own sake. S&P Global Market Intelligence’s enterprise AI survey found that 42% of companies abandoned most of their AI initiatives in 2025, up from 17% the year before — a year-over-year trend based on over 1,000 respondents. The average enterprise scrapped 46% of AI proofs-of-concept before they reached production.

Running six pilots simultaneously does not give you six chances to succeed. It gives you six projects competing for the same data engineers, the same integration capacity, and the same executive attention. None of them gets enough to reach production. The portfolio trap is not about picking bad use cases. It is about picking too many at once.

Why order matters

The argument for sequencing goes beyond resource allocation. It is about data dependency.

A data flywheel in enterprise AI is a self-reinforcing loop: the output of one system becomes training data, evaluation data, or input data for the next. Unlike consumer data flywheels — where more users produce more data, which improves the product, which attracts more users — enterprise flywheels operate across functional boundaries. The output of procurement AI feeds supply chain AI, which feeds demand forecasting, which feeds pricing.

Li and Agarwal formalised this in Management Science in 2023. Their finding: the provider’s incentive to improve the algorithm depends on how training data volume interacts with improvement effort. Deploy a system that generates low-quality or irrelevant data first, and you weaken the incentive — and the capability — to improve every subsequent system.

In practice, this means the first use case does not just need to deliver value on its own. It needs to produce data assets that the second use case can consume. If use case number one generates unstructured logs that use case number two cannot parse, the flywheel is broken before it starts.

This is why starting with a customer-facing application fails structurally, not just tactically. Customer-facing AI requires clean data, tested integration, governance infrastructure, and proven reliability. Those capabilities need to be built by earlier deployments. They cannot be assumed.

The productive sequence

Across the sectors covered in this newsletter, a consistent pattern emerges in the organisations that succeed.

In telecoms (Issue #37): AIOps — automating network operations — is a natural predecessor to customer-facing analytics like churn prediction. The reason is pragmatic: AIOps generates structured telemetry data that significantly enriches customer models. Operators that build churn prediction without network data produce models that are blind to one of the strongest predictive variables — service quality as experienced by the customer.

In pharma (Issue #38): regulation dictates the sequence. Manufacturing AI — quality control, batch optimisation — carries lower regulatory burden and higher data quality than clinical decision support. The organisations that succeeded started where the regulatory friction was lowest and the data was cleanest, then expanded toward clinical applications with the governance infrastructure already built.

In banking (Issue #36): inventory must come first. You cannot classify risk in systems you have not catalogued. The banks that attempted credit scoring AI without first completing a system inventory discovered they were building models on data from systems they did not know were connected.

Across all sectors (Issue #42): the cross-sector patterns analysis found the common denominator of failure — choosing a use case that exceeds the organisation’s current governance maturity. Credit scoring AI requires documented data management processes, system classification, and audit trails before it reaches production. Internal report automation does not. Effective sequencing does not mean building a complete governance framework before starting — it means selecting a first use case that fits within the governance the organisation already has. The first deployment is a governance exercise, and it should be scoped so that exercise can be completed without systemic risk.

BCG’s data reinforces this from the opposite direction: more than 80% of AI investment by leading organisations goes to reshaping core functions and inventing new offerings — not to incremental productivity tools spread across departments. The organisations that concentrated on internal operations before deploying customer-facing AI saw measurably higher returns.

Across the cases above and the engagements I have observed, a consistent order emerges. Data quality and classification first — everything downstream depends on it. Internal operations automation second, because it generates labelled process data at scale with low external risk. Decision support and analytics third, consuming structured data and producing decision logs. Customer-facing applications last, requiring all three preceding layers to be functional. This sequence is not a consulting framework. It is what the organisations that shipped AI into production had in common.
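The four-layer order above is, in effect, a dependency graph: each layer consumes data that only the earlier layers produce, so any valid deployment sequence is a topological order of that graph. A minimal sketch, using illustrative layer names (all names are mine, not a standard taxonomy):

```python
from graphlib import TopologicalSorter

# Illustrative dependency graph: each use case maps to the set of
# use cases whose data outputs it consumes. Names are hypothetical.
dependencies = {
    "data_classification": set(),
    "internal_ops_automation": {"data_classification"},
    "decision_support": {"data_classification", "internal_ops_automation"},
    "customer_facing": {"data_classification",
                        "internal_ops_automation",
                        "decision_support"},
}

# A valid deployment sequence is any topological order of the graph.
# With this chain of dependencies, only one order exists.
order = list(TopologicalSorter(dependencies).static_order())
print(order)
```

The useful property of framing it this way is that a cycle — two use cases each waiting on the other's data — is detected immediately rather than discovered mid-programme.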

The framework gap

Despite the clear evidence that sequencing matters, there is no widely adopted standard for deciding which use case to deploy first. A survey by Enterprise AI Executive in October 2025 catalogued twelve distinct prioritisation frameworks from major consulting and technology firms — BCG, Deloitte, OpenAI, Google, Capgemini, PwC, Anthropic, Gartner, Microsoft, and others. Each uses different axes: impact versus effort, value versus feasibility, automatability scoring, regulatory readiness.

What they share are four criteria: business value anchored to tangible baselines for savings or revenue; feasibility that blends algorithmic difficulty with systems reality; risk, spanning regulatory, reputational, and data-privacy exposure; and data readiness.

What none of them explicitly models is the dependency chain: how does the data produced by use case number one affect the feasibility and cost of use case number two? The frameworks evaluate use cases independently, as if each one were a standalone investment. In practice, the value of use case number one is partly the option value it creates for everything that comes after it.
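The difference between the two evaluations can be made concrete. A minimal sketch, with invented figures and a hypothetical discount factor, showing how crediting an enabler with part of the downstream value it unlocks can reorder a standalone ranking:

```python
# Sketch: score use cases by standalone value plus a share of the
# value of the use cases their data outputs unlock. All figures
# and names are invented for illustration.
use_cases = {
    "report_automation": {"value": 4, "unlocks": ["churn_prediction"]},
    "churn_prediction":  {"value": 7, "unlocks": []},
    "chatbot":           {"value": 6, "unlocks": []},
}

DISCOUNT = 0.5  # fraction of downstream value credited to the enabler


def sequenced_score(name: str) -> float:
    uc = use_cases[name]
    downstream = sum(use_cases[d]["value"] for d in uc["unlocks"])
    return uc["value"] + DISCOUNT * downstream


ranked = sorted(use_cases, key=sequenced_score, reverse=True)
print(ranked)
```

Scored standalone, report automation ranks last (4 against 7 and 6); scored with its option value, it ranks first (4 + 0.5 × 7 = 7.5). The point is not the particular discount factor but that the ranking flips once dependencies enter the score at all.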

Deloitte’s Enterprise AI Navigator comes closest for regulated industries, evaluating AI decisions through operational, regulatory, tax, compliance, and workforce lenses simultaneously. Their data shows that high-maturity organisations — those that keep AI projects operational for three or more years — build governance infrastructure into the first use case at twice the rate of low-maturity peers. The first deployment establishes the governance pattern that all subsequent deployments inherit.

The four-criteria assessment most frameworks provide — value, feasibility, risk, data readiness — is necessary but insufficient. It evaluates each use case as a standalone investment. The question it does not answer is what the first deployment produces for the second. Mapping that dependency chain — and sequencing around it — is the assessment Quintant runs at the start of an AI programme.

Briefing

SAP names “false sequencing” as the primary enterprise AI trap

Manos Raptopoulos, SAP’s Global President for Customer Success and a member of the company’s extended board, published a framework on 30 April identifying five moments that determine whether enterprise AI generates value or risk. The fifth — the strategy moment — identifies “false sequencing” as the primary trap: “focusing only on embedded AI leaves value on the table and jumping to deep industry transformation without governance and data maturity multiplies risk.”

Raptopoulos describes three layers that organisations must manage in parallel: embedded AI (productivity gains in existing applications), agentic AI (multi-agent orchestration across systems), and industry AI (sector-specific deep applications). The argument is that progression must be calibrated to readiness, not ambition. Deploying agentic or industry AI before governance and data foundations exist is not boldness — it is missequencing.

For Polish enterprises running SAP — which covers most large banks, manufacturers, and retailers in the market — the implication is direct. Deploying SAP’s agentic capabilities before completing the embedded AI layer creates precisely the compliance exposure Raptopoulos describes: probabilistic intelligence layered on fragmented foundations. Under EU AI Act Article 26, that sequence also means deploying high-risk AI without the monitoring infrastructure that earlier-layer work would have built (SAP News Center, 30 April 2026).

EU AI Act high-risk deadline stays at 2 August — deferral negotiations stall

The second political trilogue on the Digital Omnibus — the European Commission’s proposal to defer EU AI Act high-risk compliance from 2 August 2026 to 2 December 2027 — ended on 28 April without agreement. A third trilogue is scheduled for 13 May. If no agreement is reached before 2 August, the original Act’s high-risk obligations take effect on that date as written.

The high-risk category covers AI systems used in credit scoring, insurance risk pricing, recruitment and performance evaluation, and critical infrastructure — precisely the domains where Polish banks, insurers, and public sector organisations are most advanced in AI deployment. The Omnibus had offered a 16-month extension.

The sequencing implication is direct. Organisations that deferred compliance work on their first or second high-risk AI deployment, assuming the deferral would pass, are now 13 weeks from the original deadline with no confirmed safety net. For KNF-supervised institutions, the question is not legal. It is operational: which of your current high-risk deployments is furthest from Article 26 compliance, and what does closing that gap require? That question should have been part of the selection criteria when the use case was first chosen (DLA Piper GENIE, 29 April 2026).

Questions for your leadership team

  1. How many AI use cases is your organisation pursuing simultaneously? If the number is above four, what is the rationale for breadth over depth — and does the data support it? For organisations deploying AI in regulated functions — credit scoring, insurance risk pricing, medical decision support — each parallel pilot carries a separate Article 26 compliance obligation under the EU AI Act. A portfolio of six unsequenced pilots is six incomplete compliance scopes.

  2. For the use cases currently in pilot: does any of them produce data that another use case needs? If so, are they sequenced accordingly, or are they running in parallel with no data dependency mapped? For Polish banks and insurers under KNF oversight, a use case that generates training data for a subsequent high-risk AI system is itself a regulated system input. Has your AI Act compliance scope been mapped to the dependency chain, or only to individual deployments?

  3. What was the first AI use case your organisation deployed? When selecting it, did you consider what data it would produce — and whether subsequent projects would be able to consume that data? Did you match it to the governance maturity your organisation actually had at the time, or was governance bolted on after the fact?

  4. If you could only fund one AI project for the next twelve months, which one would create the most optionality for everything that follows — and which one would require the least rework of governance and compliance infrastructure as the portfolio scales?

The imperative

The sequencing imperative is not about caution. It is not “start small.” It is about recognising that in enterprise AI, the first use case is not just a project — it is the foundation layer. It determines which data assets exist for the next deployment, which governance patterns are established, which integration challenges are solved, and which teams have built the capability to deliver.

Organisations that treat use case selection as a portfolio diversification exercise — spread bets, see what works — consistently underperform those that treat it as an architectural decision.

BCG’s leaders do not pick fewer use cases because they are cautious. They pick fewer because they understand that three well-sequenced deployments create compounding returns, while six unsequenced ones create compounding costs.

Stay balanced,

Krzysztof Goworek