Workday Research Finds AI Productivity Gains Are Lost to Rework


Key Takeaways

Workday research finds that nearly 40% of AI productivity gains are lost to rework, as employees spend significant time correcting and verifying AI-generated output.

The global study shows that while AI adoption accelerates task completion, only 14% of workers consistently achieve net-positive productivity outcomes once rework is accounted for.

The findings highlight growing challenges for enterprise and ERP leaders, including uneven workforce impact and the need to redesign roles, metrics, and skills around AI.

On January 14, Workday released a global research report finding that many companies are failing to capture the full productivity gains promised by AI, with a significant share of expected AI-driven efficiency lost to rework.

Workday conducted the study, “Beyond Productivity: Measuring the Real Value of AI,” with Hanover Research, based on a survey of 3,200 employees and business leaders. The researchers found that nearly 40% of the time saved through AI is offset by time spent correcting, verifying, or rewriting low-quality outputs. Only 14% of employees consistently report net-positive outcomes from AI use, according to the report.

The findings suggest that while AI adoption is widespread and often accelerates task completion, organizations are measuring success in ways that overstate AI’s real value, counting hours saved while overlooking the time spent correcting its output.

Nearly 40% of AI Productivity Gains Are Offset by Rework

The report centers on what it calls an “AI tax on productivity.”

Workday’s research found that roughly 37% of the time employees save using AI is lost to rework, including correcting errors, verifying outputs, and rewriting content that fails to meet quality or context requirements. For every 10 hours of efficiency gained, nearly four hours are absorbed fixing AI-generated work.
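For readers who want that arithmetic spelled out, the following is a minimal sketch using the report’s headline figures; the variable names and calculation framing are illustrative, not Workday’s methodology.

    # Minimal sketch: net AI time savings after the rework "tax" described in the report.
    # Figures are the report's headline numbers; variable names are illustrative.
    gross_hours_saved = 10.0   # hours of efficiency gained from AI in a given period
    rework_fraction = 0.37     # share of saved time lost to correcting and verifying output

    rework_hours = gross_hours_saved * rework_fraction
    net_hours_saved = gross_hours_saved - rework_hours

    print(f"Rework absorbs {rework_hours:.1f} of every {gross_hours_saved:.0f} hours saved")
    print(f"Net productivity gain: {net_hours_saved:.1f} hours")
    # Prints: rework absorbs 3.7 of every 10 hours saved; net gain of 6.3 hours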

Adoption itself is not the issue: 85% of employees said AI saves them between one and seven hours per week, and 77% reported higher productivity over the past year. Despite those gains, only 14% of employees consistently achieve net-positive outcomes once rework is accounted for.

However, the burden of that rework is uneven across organizations.

Workers between 25 and 34 years old make up about 46% of the employees who spend the most time checking, correcting, and fixing AI-generated work. Meanwhile, HR professionals represent the largest functional share of heavy rework users. IT roles, by contrast, are more likely to convert AI use into net productivity gains.

The report suggests organizational practices may be part of the problem. Nearly nine in ten companies reported updating fewer than half of their roles to reflect AI capabilities. Only 37% of heavy AI users said they received increased skills training, despite most leaders citing training as a priority.

How AI Productivity Breaks Down Without Management 

Most organizations appear to be deploying AI faster than they redesign work around it, which explains why productivity gains show up on paper but fail to translate into better outcomes. Measuring success through hours saved rewards speed, while quietly ignoring the cost of verification, correction, and judgment that still falls on employees.

This gap exposes a structural weakness in enterprise AI adoption.

AI accelerates output, but accountability for quality has not moved with it, leaving workers to absorb the burden. Younger employees and HR professionals feel this most acutely because their roles combine high AI usage with low tolerance for error.

In these settings, imperfect AI output means more checking, more fixing, and more time spent cleaning up work that was supposed to save time. The result illustrates what Workday describes as a hidden tax on talent: rising cognitive load with little authority or support to reduce it.

The report also challenges a common assumption about competitive advantage. Faster deployment does not produce better results if roles, skills, and metrics remain static.

Organizations that capture net gains treat AI as a decision-support layer, not a labor substitute, and invest accordingly. Training, role clarity, and outcome-based metrics matter more than expanding use cases or increasing task volume.

AI advantage will accrue to companies that redesign work around human judgment, rather than asking employees to compensate for systems that were not built to support it.

What This Means for ERP Insiders

AI productivity gains are not linear. AI accelerates work, but rework pulls results backward. In ERP environments, automation often delivers speed first and quality later, creating drag inside daily processes in the meantime.

AI impacts teams and workers differently. Younger employees and HR functions absorb more verification and correction work, according to the study. ERP leaders need to account for where AI risk and effort concentrate, not assume uniform benefits.

Static job design pushes AI costs onto employees. When ERP roles remain unchanged, AI adds responsibility without authority. Redefining roles around AI determines whether systems reduce effort or quietly increase workload.