Companies Are Acting on AI Value They Haven’t Realized Yet


Key Takeaways

Most organizations report AI value, but few can tie it to consistent, measurable business outcomes.

As AI adoption expands, ROI becomes dependent on how value is measured and managed.

Analytical and rule-based AI drive most enterprise value, while generative AI is harder to measure.

Organizations are making workforce decisions based on expected AI gains rather than realized outcomes. Few tie headcount reductions to actual AI implementation, while many have already reduced hiring or cut roles in anticipation of future productivity gains.

The findings come from a March 2026 report, Economic Maturity for Artificial Intelligence: How Organizations Measure and Maximize Value from Artificial Intelligence, from the Return on AI Institute, which is sponsored by Scaled Agile.

The study, based on a survey of 1,006 executives globally, finds that most organizations say AI delivers value, but far fewer can tie that value to consistent, measurable business outcomes. As AI use grows, that disconnect will become harder to justify.

Measurement Gaps Explain Uneven AI Outcomes

Most organizations report that AI is delivering value, but the level and consistency of that value vary widely. The report finds that 45% of organizations say they are getting a “great deal” of value from AI, while another 45% report moderate value. That suggests broad adoption, but uneven outcomes across enterprises.

The Return on AI Institute argues this variance is tied to how organizations measure and manage AI once it is in production. Many companies move from pilot to deployment without putting in place consistent ways to track outcomes, validate results, or connect AI use cases to financial and operational performance.

The report outlines a six-stage economic maturity model. At the earliest stage, where AI remains in pilots without measurement, only 4% of organizations report a great deal of value. Among those that move to production without assessing outcomes, that figure rises to 18%.

Outcomes change once organizations introduce post-implementation measurement. At that stage, 44% report achieving a great deal of value. The gains continue as companies aggregate value across use cases, where 58% report high-value outcomes, and increase further among organizations that formally report AI value to leadership or external stakeholders, where 85% achieve a great deal of value.

The pattern shows that organizations that measure AI after implementation and report results are far more likely to translate use cases into consistent enterprise value.

Analysis

What This Means for ERP Insiders

Organizations lack visibility into what works. Without clear feedback on outcomes, teams cannot refine AI use cases, replicate success, or scale impact across systems and workflows.

Enterprise AI Value Comes from Integration into Business Processes

The report shows that not all AI contributes equally to measurable value. Analytical AI is cited as the most valuable type by 50% of organizations, followed by rule-based AI at 40%, while generative AI accounts for 9% and agentic AI for 2%.

That distribution reflects how value is captured in practice. Systems that are embedded in core business processes—such as forecasting, pricing, and risk management—are easier to measure against established baselines.

Newer forms of AI, including generative and agentic systems, are often deployed in less structured ways, making consistent measurement more difficult. The report finds that 44% of executives say generative AI is the type of AI for which ROI is hardest to measure.

Scaled Agile frames this gap as an operating model issue. Its AI-Native model emphasizes embedding AI into workflows, aligning leadership, and linking initiatives to measurable business outcomes. The report’s findings point in the same direction, showing that higher-value outcomes are associated with organizations that move beyond isolated use cases and apply more consistent measurement and management practices.

Organizations that combine broad employee use of AI tools with targeted business use cases report higher levels of value than those that take a narrow or unstructured approach. That requires more than deployment. It depends on governance, process design, data quality, and the ability to measure outcomes consistently over time.

In that context, AI value is not determined by the type of model alone. It depends on whether organizations can integrate AI into existing systems and workflows, measure its impact, and use those results to guide decision-making.

Analysis

What This Means for ERP Insiders

Value emerges from coordinated operating models. Alignment across workflows, leadership, and governance enables organizations to generate repeatable, measurable financial returns from AI.

About Us

ERP Today covers how ERP, cloud, and AI change the way businesses run. Our editors speak with practitioners, vendors, and analysts to surface the technology, contracts, and risks that matter for enterprise leaders.

Alongside our newsroom coverage, we run in-person summits where ERP leaders compare notes on programs like yours, and a research practice that turns reporting like this into organization-specific briefings and content.