01 Three reports, three vantage points
McKinsey's 2023 outlook was the first major report to elevate generative AI to its own trend, ranking it alongside applied AI, industrialized machine learning, and cloud and edge computing. The estimate that GenAI could add $2.6T-$4.4T in annual economic value did most of the news cycle's work, but the more important passage came further down: the projected productivity gains are heavily concentrated in a small number of functions (customer operations, marketing and sales, software engineering, R&D), and capturing them requires re-architecting the workflow, not just bolting on a chatbot.
Bain's 2023 report was, characteristically, more sober: software industry growth had decelerated from 11% to roughly 6-8%, and the remaining optimism was concentrated in AI-native vendors. Bain's underlying point was that incumbents who didn't already have clean data and clear workflow ownership would find AI extraordinarily expensive to deploy.
Deloitte's framing was the most operator-friendly. Its 'through-line' was that every wave of business technology (cloud, mobile, data, now AI) follows the same adoption curve: the technology is ready years before the operating model is. The companies that win a wave are the ones that close that gap fastest.
02 The pilot purgatory problem
By Q4 2023 the dominant reality across enterprises was 'pilot purgatory.' Surveys put the share of GenAI projects that had reached production at under 10%. Most were stuck in security review, data-access negotiation, or, most commonly, at the moment when the team had to define what 'good' actually looked like and discovered they didn't have a baseline.
The companies escaping pilot purgatory weren't the ones with the most expensive vendors. They were the ones with workflows clean enough that 'replace this step with a model' was a coherent sentence.
"An AI pilot is a workflow audit with a deadline. Companies that already had the audit went to production. Everyone else discovered they didn't."
03 What 'AI-ready' actually meant in 2023
Across the three reports, the operational prerequisites for capturing AI value clustered around four things:
- Documented decision logic — the team can articulate, in writing, what the current process actually does (most can't).
- Clean, accessible inputs — the data the model needs lives in one place, with permissions that work.
- Defined success metrics — there's a number that tells you whether the AI version is better than the human version, and it's a number you trusted before AI showed up.
- Ownership — one person is accountable for the workflow's outcomes, not the tool.
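The success-metrics prerequisite is the easiest to make concrete. A minimal sketch, assuming a hypothetical customer-operations metric (minutes per ticket, lower is better); the function name, sample values, and 10% improvement threshold are all illustrative, not drawn from the reports:

```python
import statistics

def pilot_beats_baseline(baseline, pilot, min_improvement=0.10):
    """Return True if the pilot's mean metric improves on the baseline
    mean by at least `min_improvement` (relative). Assumes the metric
    is one where lower is better, e.g. minutes per ticket."""
    base_mean = statistics.mean(baseline)
    pilot_mean = statistics.mean(pilot)
    improvement = (base_mean - pilot_mean) / base_mean
    return improvement >= min_improvement

# Hypothetical handle-time samples (minutes per ticket).
baseline_minutes = [12.0, 15.0, 11.0, 14.0, 13.0]  # existing human workflow
pilot_minutes = [9.0, 10.0, 8.5, 11.0, 9.5]        # AI-assisted workflow

print(pilot_beats_baseline(baseline_minutes, pilot_minutes))  # → True
```

The point of the sketch is the prerequisite, not the arithmetic: `baseline_minutes` has to exist, and be trusted, before the pilot starts. Teams stuck in pilot purgatory typically had the second list but not the first.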
