Salesforce Agentforce Playbook: Five Steps to Fund and Scale Enterprise AI

Key Takeaways

In a crowded AI market, success depends more on the strength of the business case than on the technology itself, which demands a disciplined approach to framing value and proving long-term ROI.

A five-step playbook for deploying agentic AI covers defining a specific problem, building a financial business case, assembling an internal coalition for support, starting small with a scalable solution, and engineering governance and adoption before go-live.

Successful enterprise AI initiatives rely on narrow use cases to unlock funding, reusable patterns to scale, and strong governance structures that integrate AI into core business operations rather than treating it as a separate project.

In an AI-crowded market, the real differentiator is less the power of the technology and more the strength of the business case behind it. In this session at the 2025 Agentforce Tour in New York City, “Unlocking Value in Your Salesforce Strategy,” the presenters promised to turn AI from a speculative bet into an investable asset, teaching leaders how to frame value, harden governance, and prove long-term ROI instead of hoping adoption takes care of itself.

To make that shift concrete, speakers Ada Buser, director of business consulting at Salesforce, and Tyler Herrmann, business consultant, healthcare and life sciences, laid out a disciplined five-step playbook for moving AI agents from pilot to production, using real-world examples from global firms to show how a narrow focus on high-impact use cases unlocks executive buy-in and measurable ROI. Instead of more AI theory, they broke the work into five practical steps leaders can follow:

1. Define the problem in one sentence.

The first step is ruthless focus. Instead of cataloging 50 issues across the business, teams should capture a single problem in plain language that would make sense to the C-suite. If the problem cannot be expressed in one sentence, it is not ready. One example was a global distributor whose call volume was spiking as orders grew, with 50% of calls boiling down to a simple question: “Where’s my order?” That clarity matters because it ties directly to measurable strain on operations, like needing “to hire 10 additional people” just to keep up.

“If you can’t explain the problem in one sentence to your CFO, you’re not ready,” Buser said. The point is to move away from vague AI ambition toward a concrete, operationally painful issue that comes with a built‑in business audience and an obvious success condition. That focus makes it much easier to get leaders’ attention and build momentum.

2. Build a business case grounded in cost, risk, and experience.

Once the problem is defined, the second step is to build a business case that quantifies cost avoidance, risk mitigation, and customer impact. The speakers stressed that most credible AI cases hinge on “cost reduction or avoidance,” such as not having to build out a larger contact center, but must also capture patient or customer experience and the downstream risk if nothing changes. In the distributor example, the logic was simple: call volumes jumped, capacity did not, and hiring sprees were neither sustainable nor desirable.

The business case was framed as a story with “three chapters”: the pain point, the relief, and proof that the pain is actually being removed. That means putting numbers on both the current state and the future state: higher call volume as sales grow, stable human headcount because a digital agent absorbs the routine “Where’s my order?” interactions, and clear service metrics that show customers get answers faster. The speakers summed it up as needing to show leadership something “material and actionable,” not an abstract promise.
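To make that math tangible, here is a minimal back-of-envelope sketch in Python. Every input is a hypothetical placeholder rather than a figure from the session (only the 50% order-status share echoes the distributor example); a real team would substitute its own call volumes, handle rates, and loaded labor costs.

```python
# Back-of-envelope cost-avoidance model for a "Where's my order?" agent.
# All inputs are hypothetical placeholders; substitute your own operational data.

annual_calls = 500_000        # assumed current yearly call volume
call_growth = 0.15            # assumed yearly growth in call volume
order_status_share = 0.50     # ~50% of calls are "Where's my order?" (session figure)
deflection_rate = 0.80        # assumed share of those calls the agent fully resolves

calls_per_rep = 12_000        # assumed calls one rep handles per year
cost_per_rep = 65_000         # assumed fully loaded annual cost per rep (USD)
agent_annual_cost = 150_000   # assumed platform and maintenance cost of the agent

next_year_calls = annual_calls * (1 + call_growth)
deflected_calls = next_year_calls * order_status_share * deflection_rate

# Hires avoided because the agent absorbs routine volume instead of new headcount.
avoided_hires = deflected_calls / calls_per_rep
net_cost_avoidance = avoided_hires * cost_per_rep - agent_annual_cost

print(f"Calls deflected next year:  {deflected_calls:,.0f}")
print(f"Hires avoided:              {avoided_hires:.1f}")
print(f"Net annual cost avoidance:  ${net_cost_avoidance:,.0f}")
```

The output is exactly the kind of “material and actionable” number the speakers described: a single cost-avoidance figure a CFO can interrogate input by input.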

3. Build the internal coalition and find the money.

The third step acknowledges a political reality: AI funding rarely lives in one place. Salesforce cited research indicating organizations “spend up to 10% of their budgets on redundant efforts stemming from departmental silos,” with multiple teams tackling the same problem without talking to each other. The advice was to “know the seven people between you and yes,” mapping every stakeholder whose P&L, KPIs, or teams are touched by the problem and the proposed solution.

This coalition-building is not just about getting a signature; it is about ensuring that business unit leaders understand how much of “their P&L is now going to be going towards this initiative,” Buser explained. Without that alignment, a central funding win can still evaporate when local owners push back. The speakers stressed bringing these leaders in “early on” and having them “be part of the business case,” so that when the ask goes up the chain, it arrives as a coordinated signal, not a set of conflicting requests.

4. Start with one agent and design for scale.

The fourth step is where Agentforce enters the picture as “digital labor.” Instead of trying to “fix all of the problems at once,” Herrmann urged the audience to “fix one problem, do it in a scalable way,” then reuse what they learn. In the case of the order‑status spike, that meant deploying a single agent to handle the “Where’s my order?” use case and tracking its impact on call volumes, hiring plans, and customer satisfaction.

Visuals shared in the session showed three lines: case volume continuing to grow, human labor “staying stabilized,” and a band of “all‑in savings” widening over a five‑year horizon. The projected savings came from shifting to a digital labor strategy, backed by spreadsheets “broken down by business unit” to show exactly how each line of business benefits. The message to the room was that a “successful business case” gives leaders both the big number and the detailed breakdown, all the while establishing a blueprint for the next agent.
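The widening savings band is easy to reproduce in the same back-of-envelope style. The loop below, again with invented inputs rather than session data, projects the three lines the visuals showed: case volume compounding, human labor held flat at today’s headcount, and cumulative all-in savings versus a hire-to-keep-pace strategy over five years.

```python
# Five-year projection: growing case volume, stabilized headcount, widening savings.
# All inputs are hypothetical placeholders, not figures from the session.

cases = 500_000              # assumed year-0 case volume
growth = 0.15                # assumed yearly case growth
cases_per_rep = 12_000       # assumed cases one rep handles per year
cost_per_rep = 65_000        # assumed fully loaded annual cost per rep (USD)
agent_cost = 150_000         # assumed flat annual cost of the digital agent

baseline_reps = cases / cases_per_rep   # headcount stays stabilized at this level
cumulative_savings = 0.0

for year in range(1, 6):
    cases *= 1 + growth
    # Counterfactual: hire enough reps to keep pace with volume.
    hiring_strategy_cost = (cases / cases_per_rep) * cost_per_rep
    # Digital-labor strategy: flat headcount, the agent absorbs the growth.
    agent_strategy_cost = baseline_reps * cost_per_rep + agent_cost
    cumulative_savings += hiring_strategy_cost - agent_strategy_cost
    print(f"Year {year}: cases={cases:,.0f}  "
          f"cumulative all-in savings=${cumulative_savings:,.0f}")
```

Running the same loop once per business unit, with each unit’s own inputs, yields the per-unit breakdown the speakers said a successful business case needs alongside the headline number.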

5. Engineer adoption and governance before go‑live.

The fifth step flips typical sequencing: adoption and governance must be designed before deploying agents, not after. Herrmann noted that “70% of transformations are failing” and “only 12% of organizations are achieving their original ambition,” often because teams still operate on a “build it and they’ll come” mindset. Instead, both speakers advocated for real “meta champions” inside the business and “embedded workflows” that make the agent part of daily work rather than an optional tool.

Governance should go through a formal Center of Excellence or committee, described as “a cross‑functional governing body” that sets rules, monitors performance, and decides where AI goes next. Leaders need to “think about these things prior to the deployment,” including who owns what, how success is measured, and how frequently adoption and performance will be reviewed. One of the closing lines captured the shift in stakes: this is not “an investment in a system,” but “an investment in a strategic growth engine,” and that demands clarity on metrics, ownership, and scalability from the outset.

What This Means for ERP Insiders

Narrow use cases drive enterprise AI funding. The session showed how ERP leaders can break broad transformation wish lists into single, CFO-ready problems with clear cost-avoidance math, pulling budget from business units rather than central IT. Product teams and program owners gain a template for pitching agents that stabilize operations amid growth, not just cut costs.

A reusable pattern for agents hints at an ecosystem shift. The emphasis on solving one problem “in a scalable way,” documenting how it was built, and then repeating that pattern suggests enterprises will expect ERP platforms and partners to deliver agent blueprints as much as individual automations. That means designing AI capabilities that can be cloned, governed, and iterated across domains, not rebuilt from scratch each time.

Governance must precede deployment. By emphasizing a centralized center of excellence or governance committee, embedded workflows, and designated champions before go‑live, the speakers pushed enterprises to treat AI agents as part of core operating models, not experimental add‑ons. ERP vendors and system integrators that cannot match this maturity around metrics and ownership may find their AI features sidelined.