Why Agentic AI May Succeed Where Digital Transformation Stalled


Key Takeaways

Agentic AI may help enterprises overcome the scaling barriers that limited digital transformation in the 2010s.

New economics allow domain experts to translate operational knowledge directly into working software.

Organizations that learn fastest inside real workflows will build advantage sooner.

Businesses invested trillions in digital transformation during the 2010s, yet the returns proved hard to demonstrate.

A new white paper from Riff examines why the latest phase of enterprise automation could unfold differently. Why Agentic AI Might Deliver argues that agentic systems reshape how software is created and applied inside workflows.

The author, Andrzej Golebiowski, chief operating officer and chief revenue officer of Riff, writes with both skepticism and urgency. He believes AI may deliver where past transformations failed because the gap between expertise and execution is narrowing.

Where the 2010s Lost Momentum

Enterprises treated data as the new oil, Golebiowski writes, and few leadership teams believed they could avoid digitization in the 2010s without risking irrelevance. Investment accelerated, yet outcomes diverged sharply across sectors.

Tech firms and some consumer industries captured substantial gains, supported by data scale and clearer analytical targets. Asset-intensive sectors followed a different path, with far less movement despite comparable effort.

From that experience, Golebiowski identifies patterns that undermined value.

“Foundational beliefs about AI and machine learning applications were simply wrong.” Organizations often overestimated where data science could outperform physics, engineering judgment, or established practice.

Operating choices reinforced the gap. Companies pursued internal builds where partnerships might have accelerated value. Even where appetite for new digital solutions existed, “custom software economics didn’t work,” leaving long backlogs and brittle deployments that failed to keep pace with change.

Programs frequently underweighted ownership, incentives, and ROI.

“The distance between domain expertise and technical capability never closed.” Those who understood operational reality rarely had the tools to implement change.

How the Delivery Equation Is Shifting

The constraints that kept digital programs from scaling in the 2010s are starting to move.

“The technology shift is categorical rather than incremental,” Golebiowski argues, as large language models compress the time and cost of creating domain-specific software. Work that once required long specification cycles is moving closer to where it is needed.

Intelligence also now embeds more naturally into operations. Systems interpret unstructured inputs, surface context, and support decisions across the variation that previously kept automation narrow.

However, broader access to AI does not equal enterprise transformation.

Horizontal copilots are “useful for individual productivity, but nothing we’re seeing indicates they’re transformative at a business level,” he observes. Gains arrive quickly for individuals, while sustainable improvements require redesigning how work moves across systems, approvals, and exceptions.

Golebiowski notes that value concentrates in the workflow. Organizations that connect AI to operating outcomes see measurable gains in speed, accuracy, and coordination.

From his perspective, the winning move is to be "best at using AI, not best at building AI." Enterprises that empower their strongest operators, tie experiments to outcomes, and build governance that enables fast movement will outpace those still debating architecture.

What Leaders Still Need to Resolve

Golebiowski warns that uncertainty remains.

The vendor market is evolving quickly, encouraging portfolio approaches that preserve flexibility. The best development models also remain unsettled, with domain experts, internal teams, and partners each taking roles depending on workflow and risk.

Questions persist around data and architecture.

LLMs may tolerate imperfect environments better than earlier tools, but reliability thresholds are still emerging. Enterprise systems were built primarily for analysis, not autonomous write-back, so early value will center on recommendations rather than autonomous action.

Given those realities, many organizations remain at the stage of broad productivity deployment, waiting for clearer playbooks before moving deeper into operations. He believes the barrier is not as imposing as many assume.

“Here’s what I’ve learned: getting started is not that hard,” he writes. Conversations between domain experts, business leaders, and technologists surface viable use cases quickly.

Organizations that are already building gain advantage through repetition. "The problem isn't finding use cases. It's committing to actually build one." Over time, that learning compounds.

What This Means for ERP Insiders

Access to foundation models will normalize across industries, so execution capacity, not model access, will define competitive advantage. Organizations that redesign incentives, workflows, and authority for rapid deployment will separate themselves before that gap becomes visible to competitors.

AI strategy now resembles organizational design more than technology planning. The central question shifts from which platform to buy toward who is empowered to build, test, and adapt. Enterprises that reallocate decision rights will learn faster than peers.

Learning velocity becomes the new currency of operational performance. Early experiments generate institutional memory, integration knowledge, and risk intuition. These assets accumulate quietly and later determine which firms can scale automation.