Annual Research — 2025

Agents Arrived. Most Operating Models Aren't Ready.

McKinsey, Bain, and Deloitte's 2025 outlooks tell three versions of the same story: the unit of automation moved from a task to a workflow to an agent — and most companies are about to find out their workflows weren't built for software that acts.

Q1 2026 · 13 min read · 3 primary sources
$500B+

Combined hyperscaler AI capex announced for 2025-26

1 in 3

Enterprises actively piloting agentic workflows by Q3 2025

<15%

Share of agent pilots Bain estimates deliver measurable ROI

01 Three reports, one shift

McKinsey's 2025 outlook elevated agentic AI to its own arena, distinct from generative AI. The framing matters: a generative model produces an output for a human to act on. An agent takes the action. That's not a UX change — it's a redesign of where work happens.

Bain's 2025 report focused on the economic consequences. Hyperscaler capex for AI infrastructure crossed half a trillion dollars in commitments for the 2025-26 cycle. Software industry margins came under pressure as the cost of inference shifted from a marginal expense to a structural one. The companies positioned to win were either the infrastructure providers themselves or the operators with workflows efficient enough to make agent deployment economical.

Deloitte's 2025 outlook surfaced the second-order question almost nobody was asking: where does accountability live when a software agent makes a decision that's wrong? Their answer — and it's the right one — is that it has to live where it has always lived: with a human owner of the workflow the agent runs on. Companies that haven't named that owner can't deploy agents safely.

02 What an agent actually requires

The 2025 reports converge on a deceptively simple list. To deploy an agent against a workflow, you need:

  • A workflow that's been documented well enough that a human could write the runbook from scratch.
  • Inputs and outputs that are machine-addressable — APIs, structured data, defined schemas.
  • Decision logic with explicit thresholds — not 'use judgment,' but 'if X, then Y, else escalate.'
  • An owner who can read the agent's logs and intervene before drift becomes loss.
  • An evaluation regimen: a defined cadence for checking that the agent is still doing the work better than the human alternative.
"The agent is the easy part. The five preconditions that have to be true for an agent to be safe and economical — that's the consulting engagement."
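The third precondition — 'if X, then Y, else escalate,' never 'use judgment' — is concrete enough to sketch in code. The example below is a hypothetical invoice-approval workflow; the thresholds, field names, and action labels are illustrative assumptions, not drawn from any of the three reports.

```python
from dataclasses import dataclass

# Hypothetical thresholds for an invoice-approval workflow.
# The numbers are illustrative; the point is that they are explicit.
AUTO_APPROVE_LIMIT = 5_000   # below this, the agent may act alone
REVIEW_LIMIT = 25_000        # below this, the agent drafts and the owner approves

@dataclass
class Invoice:
    vendor_id: str
    amount: float
    matches_po: bool  # does the invoice match its purchase order?

def decide(invoice: Invoice) -> str:
    """Explicit decision logic: every branch ends in a named action."""
    if not invoice.matches_po:
        return "escalate"  # mismatches are never the agent's call
    if invoice.amount <= AUTO_APPROVE_LIMIT:
        return "approve"
    if invoice.amount <= REVIEW_LIMIT:
        return "queue_for_owner"
    return "escalate"  # above the review limit, the human owner decides
```

A workflow documented to this level gives the agent something to execute and gives the human owner something to audit: every decision the agent makes traces back to a threshold someone wrote down and can change.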

03 Where the gap shows up financially

Bain's 2025 numbers tell a sharp story. Agent pilots are running everywhere; agents in production with measurable ROI are concentrated in a small number of operators — Bain's estimate is under 15%.

The financial gap between the two cohorts widens fast. The 15% are absorbing the cost of inference because they've collapsed the human work it replaces. The other 85% are paying for the inference and still paying for the human work, because the workflow wasn't redesigned to actually hand the work off.

This is the same operating-model gap we wrote about in 2021, 2022, 2023, and 2024 — just with a more expensive failure mode.

How this maps to the work

Our 2025 engagements are dominated by agentic workflow design. The conversation pattern is consistent: a leadership team has watched a competitor deploy an agent, has greenlit a pilot, and has discovered that the workflow they wanted to point the agent at isn't ready.

Our work is to make it ready. That means a discovery sprint to map the workflow, a design phase to define the decision logic and escalation paths, an integration phase to wire the agent into the right systems with the right permissions, and a hand-off phase where we install the evaluation cadence and name the human owner. By the end the agent is in production, the team can intervene, and the workflow is documented well enough that the next agent on the next workflow takes a fraction of the time.

Four engagements we run against this thesis.

None of these require a multi-year transformation. Each is scoped to land specific operating-model improvements with a measurable result.

01

Workflow productionization for agents

We take a workflow from 'humans handle it case-by-case' to 'documented decision logic with explicit thresholds' — the prerequisite an agent actually needs. Most of the value is in the documentation, not the agent.

02

Permissions and integration architecture

Agents fail at the seams: read access without write access, write access without rollback, integrations that work in dev and break in production. We design the integration architecture so the agent can do its job and only its job.
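"The agent can do its job and only its job" can be sketched as a registration rule: every tool the agent may call carries an explicit scope, and any write-scoped tool must be registered with a rollback. This is a minimal illustration under assumed names, not the API of any specific agent framework.

```python
class AgentToolbox:
    """Registry of tools an agent may call, gated by explicit scopes."""

    def __init__(self, allowed_scopes: set[str]):
        self.allowed_scopes = allowed_scopes  # e.g. {"read"} for a read-only agent
        self._tools = {}

    def register(self, name: str, fn, scope: str, rollback=None):
        # Write access without rollback is one of the seams agents fail at,
        # so refuse to register a write tool that can't be undone.
        if scope == "write" and rollback is None:
            raise ValueError(f"write tool '{name}' registered without rollback")
        self._tools[name] = (fn, scope, rollback)

    def call(self, name: str, *args):
        fn, scope, _ = self._tools[name]
        if scope not in self.allowed_scopes:
            raise PermissionError(f"agent lacks '{scope}' scope for '{name}'")
        return fn(*args)
```

In this sketch, a pilot agent starts with `AgentToolbox({"read"})` and gains write scope only after the rollback path exists — the permission boundary lives in one place instead of being scattered across integrations.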

03

Human-in-the-loop intervention design

Every agent in production needs a human who can see what it's doing and stop it. We design the dashboards, the escalation paths, and the weekly review so the human owner is actually able to own.

04

Repeatable agent rollout playbook

After the first agentic workflow is live, we leave behind the playbook. Engagement two should take half the time. Engagement three you should be able to run in-house. We don't build dependencies — we build muscle.

If this maps to what you're carrying — let's talk.

Most engagements start with a 30-minute conversation about the specific operating-model question on your desk this quarter.