AI is forcing businesses to confront hard truths about how their data works.
In a new e-book from CData, Six Moves to Rewire Software for the AI Age, Mark Palmer, enterprise software product advisor at Warburg Pincus, argues that software providers are bolting AI onto outdated data architectures rather than redesigning their data foundations.
The e-book defines six data pivots that Palmer believes product leaders must make for their enterprises to move from experimental AI use to scalable AI-driven products.
Pivot 1: Stop Feeding AI Stale Data
Many software providers still feed AI stale, batch-processed customer data, so copilots miss real-time context. As Palmer explains, this starves models of the context they need.
A new approach is needed: augmenting heavyweight ETL pipelines with lightweight, direct data access paths tailored to AI. This lets teams keep data in original systems while giving AI real-time, governed access through direct connectors, model-callable interfaces, live RAG queries, or SQL-based access layers.
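To make the idea concrete, here is a minimal sketch of a live, SQL-based access path feeding a RAG-style prompt, assuming a Python service sitting in front of the operational store. The orders table, column names, and prompt format are illustrative assumptions, not details from the e-book.

```python
import sqlite3

# Hypothetical live-access sketch: instead of reading a nightly batch
# export, the AI layer queries the operational store at request time.

def fetch_live_context(conn: sqlite3.Connection, account_id: int) -> str:
    """Pull the freshest rows for an account and format them as prompt context."""
    rows = conn.execute(
        "SELECT order_id, status, updated_at "
        "FROM orders WHERE account_id = ? "
        "ORDER BY updated_at DESC LIMIT 10",
        (account_id,),
    ).fetchall()
    # Serialize rows into plain text the model can cite in its answer.
    return "\n".join(f"order {r[0]}: {r[1]} (as of {r[2]})" for r in rows)

def build_rag_prompt(question: str, context: str) -> str:
    """Combine live context with the user's question for the model call."""
    return f"Context (live, governed):\n{context}\n\nQuestion: {question}"
```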
Pivot 2: Replace Always-On Pipelines with On-Demand Integration
Traditional data integration relies on permanent, heavyweight pipelines that are expensive to operate and too slow for AI-driven workflows, leaving users with inconsistent results.
Pop-up data integration, where connections are created only when needed and removed once tasks are complete, enables real-time AI access without constant infrastructure costs. Palmer likens this to “a seasonal retail shop that sets up and then disappears when the season is over,” instead of a full-time store on every corner.
To make this work, systems need to be self-describing and self-governing, with automated schema discovery and standards, so AI can safely navigate data on demand.
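As a rough illustration of the pop-up pattern, the sketch below opens a connection only for the duration of one task and discovers the schema at open time. It uses Python's built-in sqlite3 for self-containment; the connection lifecycle and introspection queries are assumptions standing in for whatever integration layer a real product would use.

```python
import sqlite3
from contextlib import contextmanager

# Hypothetical "pop-up" integration sketch: the connection exists only for
# the duration of one task, and the schema is discovered at open time, so
# the AI never plans against a stale, hand-maintained mapping.

@contextmanager
def popup_connection(dsn: str):
    conn = sqlite3.connect(dsn)
    try:
        # Self-describing step: introspect tables and columns on demand.
        tables = [r[0] for r in conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'")]
        schema = {t: [col[1] for col in conn.execute(f"PRAGMA table_info({t})")]
                  for t in tables}
        yield conn, schema
    finally:
        conn.close()  # the "shop" closes when the season (task) ends

# Usage: open, let the AI plan against the discovered schema, then tear down.
with popup_connection(":memory:") as (conn, schema):
    print(schema)  # {} for an empty in-memory database
```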
Pivot 3: Move From Data Lakes to Live Data Access
Data lakes and centralized warehouses—designed for batch processing—create bottlenecks for AI-driven applications, often delivering overly abstracted data. Palmer argues some centralized data environments are turning into “data swamps” as a result.
Live, direct access to operational data enables AI to explore and respond in real time. Implementing this shift requires bypassing warehouse pipelines and adopting distributed governance, metadata management, and security controls, with AI-enabled applications handling auditing, privacy, and access management at the point of interaction.
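One way to picture governance at the point of interaction is a thin wrapper that checks permissions and writes an audit record on every AI-initiated read. The sketch below is hypothetical; the roles, allow-list, and table names are invented for illustration and are not prescribed by the e-book.

```python
import logging
import sqlite3

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("ai.data.audit")

# Illustrative allow-list: which roles may read which tables.
ALLOWED = {"support_agent": {"orders", "tickets"}, "analyst": {"orders"}}

def governed_query(conn: sqlite3.Connection, role: str, table: str,
                   sql: str, params: tuple = ()):
    """Run a live query, enforcing access and auditing where AI meets data."""
    if table not in ALLOWED.get(role, set()):
        audit.warning("DENY role=%s table=%s", role, table)
        raise PermissionError(f"{role} may not read {table}")
    audit.info("ALLOW role=%s table=%s sql=%s", role, table, sql)
    return conn.execute(sql, params).fetchall()
```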
Pivot 4: Prepare for Many Connections, Not Just One
Success with AI multiplies demand for data connections, overwhelming architectures designed around single or limited integration points. As users uncover new insights, they expect AI to correlate data across many systems, creating new connectivity requirements.
When “integration speed is market speed,” Palmer explains, connectivity becomes a core capability. This change requires a broad library of prebuilt connectors that enable new data sources to be added quickly, so integration keeps pace with AI-driven innovation.
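A simple way to keep integration at "market speed" is a uniform connector registry, where adding a source is one registration rather than a new pipeline. The following Python sketch is an assumption about how such a library might be shaped; the source names and record stubs are placeholders.

```python
from typing import Callable, Iterator

# Hypothetical registry: every source sits behind one uniform read interface,
# so a new system is one registration, not a bespoke pipeline.
CONNECTORS: dict[str, Callable[..., Iterator[dict]]] = {}

def connector(name: str):
    """Register a data source under the shared interface."""
    def wrap(fn: Callable[..., Iterator[dict]]):
        CONNECTORS[name] = fn
        return fn
    return wrap

@connector("crm")
def read_crm(**filters) -> Iterator[dict]:
    yield {"source": "crm", "record": "stub", **filters}  # placeholder rows

@connector("billing")
def read_billing(**filters) -> Iterator[dict]:
    yield {"source": "billing", "record": "stub", **filters}

# The AI layer correlates across systems through one call shape:
rows = [r for name in ("crm", "billing") for r in CONNECTORS[name]()]
print(rows)
```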
Pivot 5: Design Data for AI-Generated Code
“In a world where AI can generate code automatically, software providers whose data can’t communicate with AI development tools aren’t just slower; they’re like farmers using horses to compete against tractors,” according to Palmer.
Data platforms need to be designed for AI code generation, enabling machines to discover, reason over, and safely use data assets.
Implementing this change requires exposing machine-readable schemas, governed CRUD access, and Model Context Protocol-accessible tools, with governance embedded directly into data connectivity so AI assistants can generate code and integrations reliably at scale.
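As a sketch of the Model Context Protocol piece, the example below exposes a machine-readable schema and a governed read as tools using FastMCP from the official MCP Python SDK (assuming the mcp package is installed); the server name, table, and allow-list are hypothetical.

```python
from mcp.server.fastmcp import FastMCP  # official MCP Python SDK

mcp = FastMCP("erp-data")  # hypothetical server name

@mcp.tool()
def describe_table(table: str) -> dict:
    """Machine-readable schema an AI coding tool can read before generating code."""
    schemas = {"orders": {"order_id": "int", "status": "text"}}  # illustrative
    return schemas.get(table, {})

@mcp.tool()
def read_rows(table: str, limit: int = 10) -> list[dict]:
    """Governed read: the tool, not the model, decides what is readable."""
    if table != "orders":  # stand-in allow-list
        raise ValueError(f"table {table} is not exposed to AI tools")
    return [{"order_id": 1, "status": "shipped"}][:limit]  # stand-in for a real query

if __name__ == "__main__":
    mcp.run()  # serve over stdio so assistants can discover the tools
```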
Pivot 6: Make AI Assistants the Primary Interface
Traditional BI workflows assume humans are the primary interface to data, so insights depend on analysts, dashboards, and batch reports that are too slow for the AI era.
A new approach treats AI assistants and agents as the primary interface to data, shifting focus from human-speed dashboards to AI-speed, in-product experiences.
Teams must reorient data services to serve AI directly. This means real-time access, semantic clarity, permission-aware design, and simple integration into conversational and agentic interfaces. Otherwise, platforms risk building “beautiful reports that end users bypass,” Palmer warns, while engagement shifts toward embedded AI experiences.
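A minimal sketch of the interface shift: instead of rendering a dashboard, the data service answers an agent's question directly. Everything here, the intent parser, the role parameter, the canned answer, is a stub standing in for the real-time, permission-aware services described above.

```python
# Stub assistant-first endpoint: the product's data service answers an
# agent's question directly instead of rendering a dashboard for a human.

def parse_intent(question: str) -> str:
    """Toy intent parser; a real system would use a semantic layer or model."""
    return "order_status" if "order" in question.lower() else "unknown"

def answer(question: str, role: str) -> str:
    """Permission-aware, conversational entry point (all lookups stubbed)."""
    if parse_intent(question) == "order_status" and role == "support_agent":
        return "Order 1042 shipped today and arrives Thursday."  # canned
    return "No governed data path covers that question yet."

print(answer("Where is my order?", role="support_agent"))
```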
What This Means for ERP Insiders
ERP architecture now competes on AI readiness. Vendors that modernize data flows, connectivity, and governance for AI-speed interactions will pull away from those treating AI as a cosmetic feature layered on top of legacy integration patterns.
Data strategy is becoming product strategy. Decisions about pipelines, connectors, and access models increasingly dictate what AI-powered ERP experiences are possible, how fast they ship, and how quickly they can respond to evolving business scenarios.
User experience is shifting from reports to conversations. As AI assistants become the front door to ERP data and workflows, the winners will be platforms that translate complex back-end rewiring into simple, conversational experiences for everyday business use.