There's a number in PwC's 2026 Digital Trends in Operations Survey that should be the headline of every operations strategy meeting this quarter — and isn't.
PwC surveyed 767 US operations and supply chain executives at companies with $100M+ in revenue. The companies are spending on AI. They're running pilots. They're hiring data scientists. And then this:
89% of operations leaders say their tech investments haven't fully delivered the expected results.
Stop and read that again. Not "are taking longer than expected." Not "are hitting some integration challenges." Haven't fully delivered the expected results. In a survey where 85% of the same executives also say they're ahead of most competitors in digital transformation, that gap between confidence and outcome is the single most important diagnostic any ops leader has access to right now.
The instinct in the room when you share this number is to nod, attribute it to "change management," and move on. That instinct is wrong. The 89% problem has a specific shape. Once you see the shape, you can fix it. And you don't fix it by buying more technology.
What's Actually Going Wrong
When you ask the 89% why their investments underperformed, the same three reasons surface in the survey:
- Integration complexity — the runaway top reason
- Data issues — quality, access, fragmentation
- User adoption challenges — the people running the operation never fully bought in
Notice what's not in that list. Not "the AI models weren't smart enough." Not "we picked the wrong vendor." Not "the use case was wrong." The technology, in nearly every case, works as advertised in the demo. It's the connection between the technology and the operation it's supposed to improve that breaks.
This is why 87% of executives in the same survey say poor data quality has hampered their digital initiatives. Yet here's the paradox the survey also surfaces: 89% of those same leaders agree that actionable data matters more than comprehensive data. 84% are comfortable making decisions on imperfect data. 73% say data doesn't need to be perfect to drive value.
So the 89% are simultaneously saying "bad data is killing our AI" and "we know perfect data isn't required." Both are true. The contradiction is the diagnosis.
The Diagnosis: It's Not a Data Quality Problem. It's a Data Gap Problem.
For five years, the standard ops leader playbook for AI has been: clean the data first, then deploy the AI. Fix the master data. Reconcile the SKU hierarchy. Standardize supplier records across systems. Build the data lake. Get the data warehouse in shape. Then — then — we'll bring in AI.
This playbook has failed at scale. Not because data hygiene is bad work — it's necessary work — but because the playbook treats data as a static asset to be perfected before use. Real supply chain data is never going to be perfected. Suppliers change. SKUs proliferate. Tariff codes shift. ERPs get acquired. Spreadsheets multiply faster than anyone can govern them. By the time you've cleaned the data you have, the data you need has changed.
The companies stuck in the 89% are the ones still running this playbook. They have data programs that are years overdue, AI pilots waiting on those programs, and operations leaders increasingly cynical about whether the next pilot will deliver any more than the last one did.
The actual problem isn't that the data is dirty. It's that the data the AI needs lives in seven systems, two spreadsheets, a supplier portal, and three people's heads — and traditional AI can't reason across that gap. It needs everything pre-stitched, pre-cleaned, pre-loaded into a single context. When that stitching fails, the AI fails. The 89% are paying for AI that can't do its job because the integration layer underneath it isn't there.
Why Agents Are Architecturally Different
Here's where the conversation usually goes wrong: people hear "agents" and think "smarter chatbot." That's not it.
An AI agent is a system designed to operate the way a senior analyst operates. Given a goal, an agent can:
- Pull data from whichever system has it, regardless of whether that system has been "integrated" in the traditional ETL sense
- Notice when a record looks wrong and flag it or work around it instead of failing silently
- Ask clarifying questions of a human, or of another agent, when the data isn't sufficient
- Cross-reference partial information from multiple sources to triangulate an answer
- Document its reasoning so the decision can be audited later
The architectural shift is this: traditional AI assumes the data gap has been closed before the model runs. Agents assume the data gap will always exist and treat closing it as part of the work. That sounds like a small distinction. It's not. It's the difference between a tool that needs perfect inputs to produce useful outputs and a system that produces useful outputs from the messy reality your operation actually lives in.
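To make that loop concrete, here is a minimal sketch in Python. Everything in it is illustrative: the SOURCES dictionary, the plausible-range check, and the resolve function are hypothetical stand-ins for real system connectors, not any vendor's API.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for real systems; in practice these would be
# connectors to an ERP, a supplier portal, a spreadsheet export, etc.
SOURCES = {
    "erp":             {"supplier_lead_time_days": 45},
    "supplier_portal": {"supplier_lead_time_days": 42},
    "planning_sheet":  {"supplier_lead_time_days": 450},  # likely a typo upstream
}

@dataclass
class AgentAnswer:
    value: float | None
    reasoning: list[str] = field(default_factory=list)  # audit trail
    needs_human: bool = False

def resolve(field_name: str, plausible_range: tuple[float, float]) -> AgentAnswer:
    """Pull a value from whichever source has it, flag outliers instead of
    failing silently, triangulate across the rest, and log every step."""
    answer = AgentAnswer(value=None)
    readings = []
    for name, data in SOURCES.items():
        if field_name not in data:
            answer.reasoning.append(f"{name}: field missing, skipped")
            continue
        v = data[field_name]
        lo, hi = plausible_range
        if not (lo <= v <= hi):
            # Work around the bad record rather than crashing on it.
            answer.reasoning.append(f"{name}: {v} outside plausible range, flagged")
            continue
        readings.append(v)
        answer.reasoning.append(f"{name}: {v} accepted")
    if len(readings) >= 2:
        answer.value = sum(readings) / len(readings)
        answer.reasoning.append(f"triangulated {readings} -> {answer.value}")
    elif readings:
        answer.value = readings[0]
        answer.needs_human = True  # single source: ask a human to confirm
        answer.reasoning.append("only one usable source; requesting confirmation")
    else:
        answer.needs_human = True
        answer.reasoning.append("no usable source; escalating")
    return answer

print(resolve("supplier_lead_time_days", plausible_range=(1, 180)))
```

The point of the sketch is the failure mode it avoids: a bad record in one source produces a flagged, auditable answer instead of a silent pipeline failure, and a missing record produces an escalation instead of a wrong number.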
PwC's report makes this point directly in the framing of its recommendations: AI, specifically agents that can reason like humans, can bridge data gaps that traditional approaches cannot. This isn't a marketing claim from a vendor. It's a finding from a 767-leader survey that explicitly contrasts agentic AI's tolerance for imperfect data against the brittle data dependencies of conventional automation.
The 4% Cohort: What the Winners Are Doing
The most useful number in PwC's report isn't the 89%. It's the 4%.
Out of 767 executives, only 4% report success in four areas simultaneously: AI fully embedded enterprise-wide, no significant barriers to scaling autonomous agents, a collaborative horizontal operating structure, and technology investments that are fully delivering on expectations. This 4% is the cohort to study. They are the existence proof that the 89% problem is solvable.
What they share is striking:
- 87% have integrated their digital capabilities end-to-end across internal teams, suppliers, and customers — not in silos
- 74% deploy AI-native or agentic platforms in R&D, not just bolted-on assistants
- 83% measure both operational and financial impact of digital investments — not just one
- 73% have achieved broad organizational impact from digital investments, not isolated pilot wins
- 63% report significant data quality improvement over the past 2–3 years — and crucially, this happened alongside their AI deployment, not as a precondition
That last point is the one most ops leaders miss. The 4% didn't fix their data and then deploy AI. They deployed AI and used it as a forcing function to fix their data. The agentic layer surfaced the gaps that mattered, prioritized which to close, and operated through the rest. Data quality became an output of the program, not a prerequisite.
What This Means Practically: Three Shifts
If you're somewhere in the 89% right now — and statistically, you almost certainly are — three concrete shifts move you toward the 4%.
Shift 1: Stop sequencing data and AI. Run them in parallel.
The "fix data first" playbook is dead. Pick a high-value, well-bounded use case — supplier exception management, freight classification, demand sensing, inventory rebalancing — and deploy an agentic system against it using the data you have today. Let the agent surface which data gaps actually matter for this decision. Close those gaps. Move to the next decision. You'll have better data and a working system in the time it would have taken to finish the data warehouse.
Shift 2: Treat integration as a board-level priority, not a backend IT problem.
PwC's data is unambiguous on this: integration complexity is the number one reason AI investments underperform. The 4% cohort treats end-to-end integration as a strategic priority with executive ownership. The 89% treat it as a line item buried in IT's budget. Make integration simplicity a metric your board sees. Reorganize procurement so you can't buy point solutions that don't connect to anything. The unsexy work here is the work that pays.
Shift 3: Measure both operational and financial impact from day one.
The 4% measure both. The rest measure one or neither and then can't explain to their boards why the AI program deserves continued funding. If your AI program is reporting "12% reduction in exception handling time" without a corresponding dollar figure that connects to enterprise priorities, you're building a case for the program's cancellation, not its expansion. Build the dual-measurement framework before you scale, not after.
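A minimal illustration of the dual-measurement habit, with assumed numbers throughout: every operational metric is stored next to its financial translation, so the board never sees one without the other.

```python
# Illustrative only: the baseline hours, percentage, and loaded cost
# are assumptions, not survey figures.
exception_hours_saved_per_week = 120 * 0.12  # 12% of an assumed 120-hour weekly baseline
loaded_cost_per_hour = 65.0                  # assumed fully loaded analyst cost

report = {
    "operational": "12% reduction in exception handling time",
    "financial": f"${exception_hours_saved_per_week * loaded_cost_per_hour * 52:,.0f} annualized",
}
print(report)  # pairs the ops metric with roughly $48,672 annualized
```

The mechanics are trivial; the discipline is the point. If a metric can't be translated, that's a signal to question the use case before scaling it.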
The Hard Truth About the 89%
PwC's report is generous in its framing, but the underlying message is sharp: most companies have spent the last three years buying technology that their operating model couldn't absorb. They have AI pilots that ran in isolation, never touched the core process, and never connected to the systems where decisions actually happen. They have data programs that consumed budget without unlocking decisions. They have point solutions that automated narrow tasks without changing how the operation works.
This is fixable. The 4% prove it's fixable. But the fix isn't another tool. The fix is an architectural reset: agents at the decision layer, integration treated as strategy, data improvement as a continuous output of the system rather than a gate in front of it, and measurement that ties operational outcomes to financial ones.
The companies that make this shift in 2026 are going to compound a lead that the 89% won't be able to close by buying more technology. The window is real, and it's open right now.
Bottom Line
The 89% problem isn't a technology problem. It's a connective tissue problem — between systems, between data and decisions, between AI and the operation it's supposed to serve. Traditional automation can't fix that gap because it was architected to assume the gap was already closed. Agents can fix it because they were architected to operate inside the gap.
If your AI program is one of the ones that hasn't fully delivered, the question isn't whether to add more AI. The question is whether what you've already deployed has the architecture to close the data gap, or whether it's another piece of underperforming tech waiting to show up in next year's survey.
That's the conversation worth having. And it's the conversation most ops leaders aren't yet having with the right diagnostic in hand.
If you're reviewing your AI program's performance this quarter and the numbers aren't where you expected — particularly if integration complexity, data fragmentation, or pilot-to-production stall is the pattern — that's a diagnosable problem with a defined path forward. The work is identifying which 89% pattern your program is actually in, and which architectural shift moves you toward the 4%. Independent of any specific platform or vendor, that's the conversation I have most weeks with operations and supply chain leaders trying to get their AI investments to deliver.
Source: PwC's 2026 Digital Trends in Operations Survey, April 2026, 767 US operations and supply chain executives.
