01 What changed between 2023 and 2024
The 2024 reports read differently in tone from the 2023 ones. The breathlessness was gone: the 'will this work?' question had been replaced by 'who's already doing it well, and what's the gap?'
McKinsey's 2024 outlook tracked the same broad arenas as 2023 — applied AI, generative AI, future of mobility, electrification, advanced connectivity, immersive reality, future of bioengineering, future of space, sustainable energy, climate technologies, quantum, and more — but added a layer of investment and adoption data that hadn't been mature enough to publish a year earlier. The headline: GenAI adoption nearly doubled in twelve months. The undercurrent: value capture was still concentrated in a small number of operators.
Deloitte's TMT Outlook focused on the supply side of the AI economy: the data center build-out, the energy demand, the semiconductor cycle, and the talent shortage. Their report was, in effect, a warning that the infrastructure layer was about to consume an extraordinary amount of capital — and the buyers had to be operationally ready to extract value from it or risk financing a curve they couldn't ride.
02 Where the value actually showed up
By mid-2024 there was enough public data to draw a real conclusion: the productivity delta between AI-mature teams and AI-curious teams had become measurable. McKinsey put it at roughly 2-3x in the functions where GenAI was applied with discipline — software engineering, customer operations, sales productivity, and content workflows.
The differentiator was almost never the model choice. It was the operating model around the model: prompts maintained as a documented asset, evaluation harnesses run weekly, prompt-and-output review as a defined role, and feedback loops wired back into the underlying workflow.
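A minimal sketch of what "evaluation harness run weekly" can look like in practice: a versioned prompt library scored against a fixed set of test cases. Everything here is an illustrative assumption, the `PromptVersion`/`PromptCase` structures, the keyword-based scorer, and the stand-in model call; none of it references a specific tool.

```python
# Illustrative sketch of a prompt evaluation harness. All names and
# the keyword-based scoring are assumptions for this example, not any
# vendor's API.
from dataclasses import dataclass, field

@dataclass
class PromptCase:
    input_text: str
    must_contain: list  # keywords a good output should include

@dataclass
class PromptVersion:
    version: str
    template: str
    cases: list = field(default_factory=list)

def score_output(output: str, case: PromptCase) -> float:
    """Fraction of required keywords present in the model output."""
    if not case.must_contain:
        return 1.0
    hits = sum(1 for kw in case.must_contain if kw.lower() in output.lower())
    return hits / len(case.must_contain)

def run_eval(prompt: PromptVersion, model_fn) -> float:
    """Average score across the prompt's test cases; running this on a
    schedule surfaces regressions before users see them."""
    scores = [
        score_output(model_fn(prompt.template.format(q=c.input_text)), c)
        for c in prompt.cases
    ]
    return sum(scores) / len(scores)

# Usage with a stand-in for the real model call:
fake_model = lambda p: "Refund issued; confirmation email sent."
pv = PromptVersion(
    version="v3",
    template="Answer the customer question: {q}",
    cases=[PromptCase("Where is my refund?", ["refund", "email"])],
)
print(run_eval(pv, fake_model))  # → 1.0
```

The point of the sketch is the operating-model claim above: the prompt is versioned, the test cases are owned artifacts, and the score is something a defined role can review on a cadence.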
"By 2024 the AI question stopped being 'which model.' It became 'who owns the prompt library, and when do we evaluate it.' That's an operating-model question, not a technology one."
03 The infrastructure paradox
Deloitte's report surfaced a tension that defined the second half of 2024: AI infrastructure spend was being pulled forward at a pace that assumed near-universal enterprise adoption. But adoption was still concentrated in a relatively small set of high-maturity operators.
For mid-market companies, this created a strange opportunity. The cost of a model API call dropped roughly 80% over the year. The cost of being operationally ready to use one fell to zero — for anyone willing to do the workflow design work first. The 'AI gap' wasn't a budget gap anymore. It was an operational-design gap.
