GenAI guardrails ahoy with LLM sandboxes

Key Takeaways

The partnership between ServiceNow and Accenture aims to leverage AI to transform enterprise technology while ensuring governance and security around data usage.

Effective integration of AI in enterprises relies on establishing clear governance policies and embedding AI models within secure platforms to protect sensitive data.

As organizations experiment with generative AI, they must balance innovation with risk management, adapting governance structures to keep pace with rapid technological changes.

Back in October 2020, Bill McDermott, CEO of ServiceNow, and Julie Sweet, CEO of Accenture, formed an agreement to help their joint clients around the world truly transform the way they work. Just three years later, with AI marking the next big transformational leap in enterprise technology, the feeling that the state of work has begun to shift at its foundations is palpable. Faced with an undefined horizon, everyone wants to land on solid ground, but workers and businesses alike remain unsure whether their foundations will hold up under pressure or whether the uncertain waters will see their castles crumble.

As Accenture stands tall as ServiceNow’s global partner of the year, ERP Today caught up with both sides of the ServiceNow-Accenture partnership at the former’s Knowledge 23 event in Las Vegas. In separate interviews, Jeremy Barnes, VP of AI product at ServiceNow, and Dave Kanter, global leader of the Accenture ServiceNow Business Group, gave us the lowdown on how AI can transform the enterprise without rocking the boat.

Stephanie Ball (SB): What kind of return on investment are you hoping customers will get from AI embedded into their platforms? Will they actually see results straightaway, or is this something they need to pump so much into, with so much human checking on the back end, that the benefits are, in the end, delayed?

Jeremy Barnes, ServiceNow (JB): I’ve done a lot of homework here, and the thing that kept coming back when we did an analysis was that if we just look at the model by itself, it’s a challenge. But if we look at it in the context of being embedded inside a platform, then the platform itself provides a lot of kit, with governance models to put a human in the loop. You can maybe use a less expensive model that will fail one percent of the time, but with confidence that your customers are still going to get the right outcome.

So you can then design extremely productive experiences and workflows around a technology that is not 100 percent right. What’s key is that generative AI and the ServiceNow platform complement each other so well that we’re in a really good position to bring this to our customers.
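To make that human-in-the-loop idea concrete, here is a minimal sketch of the pattern Barnes describes: a cheaper model’s output is auto-applied only when its confidence is high enough, and everything else is routed to a person. The names, threshold and confidence scoring here are hypothetical, not ServiceNow’s actual implementation:

```python
from dataclasses import dataclass

# Assumption: the threshold would be tuned per workflow; this is not a
# real ServiceNow setting.
CONFIDENCE_THRESHOLD = 0.90

@dataclass
class Draft:
    text: str
    confidence: float  # model's self-reported or externally scored confidence

def cheap_model(prompt: str) -> Draft:
    """Stand-in for a call to a smaller, less expensive LLM."""
    # A real implementation would call the model and score its answer.
    return Draft(text="Suggested resolution: reset the user's VPN token.",
                 confidence=0.72)

def human_review(draft: Draft) -> str:
    """Queue the draft for an agent; on a real platform this is a workflow step."""
    print(f"Routing to human review (confidence {draft.confidence:.2f})")
    return draft.text  # placeholder for the agent-approved text

def resolve_ticket(prompt: str) -> str:
    draft = cheap_model(prompt)
    if draft.confidence >= CONFIDENCE_THRESHOLD:
        return draft.text  # confident enough to auto-apply
    # The rare uncertain cases cost a human review, not a wrong answer.
    return human_review(draft)

print(resolve_ticket("Customer cannot connect to the VPN."))
```

The point of the design is that the one percent of failures costs an extra review step rather than a wrong answer sent to a customer.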

SB: One thing that I’m really interested in is the governance of it. Take Accenture’s work in the financial sector and in system services, where regulation is surely going to be tight and, if it isn’t already, will be tightened moving forward. How do you foresee integrating AI in a way that is future-proofed?

David Kanter, Accenture (DK): We think the most important start is leading with a set of policies. The idea is that we have a set of models that will be used, for example, with Azure or other hyperscalers, and then some that are very domain specific. How this plays out in the market is still to be determined, but we believe this scenario gives enterprises and governments more confidence that their data will always remain their own. In the end, it’s their own data that wins. That’s our overview, and it’s just step one in the conversation.

Most of our clients are very early on in the test-and-learn phase here. Everyone’s experimenting and I see great potential, but the biggest thing we’re doing is making sure that the data being leveraged across the enterprise is not leaving the four virtual walls.

SB: How will these sandboxes and guardrails work for the domain-specific large language models (LLMs) in companies’ uses of generative AI?

JB: It’s a little bit of an unknown, especially if you don’t have AI research capability to understand how it works, right? If you say, “I’m going to connect that to the controls on an airplane”, then you don’t want to find out late in the game. So what you want to do is put some limits around it, in terms of which information it can access and what actions it can take.

What we are doing is to say it is as if the language model is embedded inside this boundary – Jeremy Barnes, ServiceNow

In ServiceNow, we already have all this security context around anything to do with a user; you can’t just go and get access to do certain things. Administrators can do other things, but it’s very controlled. So rather than saying we have one LLM which is running everything, what we’re doing is to say it’s as if the language model is embedded inside this boundary. That way it can’t access information it shouldn’t have access to, and it doesn’t see the content of the whole world all the time. It’s not as efficient, and obviously you need more models. Also, you can only do this inside a platform. You can’t do it by having generative AI just bolted on; it’s got to be people-led within a platform.
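As a rough illustration of the boundary Barnes describes – the model only ever sees data and takes actions the requesting user is already entitled to – here is a minimal sketch in which every tool exposed to an LLM is wrapped in a role check. The permission table, roles and tool names are invented for illustration and are not ServiceNow’s access control model:

```python
from typing import Callable

# Assumption: a toy permission table standing in for the platform's real
# access controls; roles and actions are invented for illustration.
PERMISSIONS = {
    "agent": {"read_ticket"},
    "admin": {"read_ticket", "close_ticket"},
}

def guarded(action: str, fn: Callable[..., str]) -> Callable[..., str]:
    """Wrap a tool so the sandbox checks the caller's role before running it."""
    def wrapper(role: str, *args: str) -> str:
        if action not in PERMISSIONS.get(role, set()):
            return f"DENIED: role '{role}' may not '{action}'"
        return fn(*args)
    return wrapper

# The model never touches the database; it only sees what these tools return.
read_ticket = guarded("read_ticket", lambda tid: f"Ticket {tid}: printer offline")
close_ticket = guarded("close_ticket", lambda tid: f"Ticket {tid} closed")

print(read_ticket("agent", "INC001"))   # allowed: agents can read
print(close_ticket("agent", "INC001"))  # denied: agents cannot close
print(close_ticket("admin", "INC001"))  # allowed: admins can close
```

Running one sandboxed model per security context in this way is less efficient than one all-seeing LLM, which is the trade-off Barnes acknowledges above.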

SB: So can enterprise LLMs really be properly sandboxed? And what are you seeing generally in the industry with AI adoption?

DK: We’ve found, as we do our own work internally within Accenture, that we’re putting all those rules in place for everyone who’s experimenting. As for clients, most of them are coming to us needing or asking for that discussion, and we have to figure this out together. Then ChatGPT was released, and it was perhaps one of the biggest moments we’ve seen in our careers – you could call it the iPhone moment. It came out built on the technology infrastructure that exists today, using all of the language models that we know, and we could actually feel AI for the first time. For years we’ve been talking about machine learning and all these different models, but it was complicated to build. This is so easy to use, and our clients are asking for help on how to get started.

We are doing some early views of how to set this up for governments – Dave Kanter, Accenture

We’re doing some early views of how to set this up for governments and what the potential opportunities are – helping set up the infrastructure to rapidly start testing and learning while making sure that the data is all protected. That’s what we’re seeing, and this is all very much in real time. We’re starting to find incredible insights in terms of what the patterns are and how we can make all these things better for how we interact and get more out of it. Our goal is to make sure that all of us are focused on clients every day. We’re in the early days, but we see so much promise.

We especially can’t wait to get our hands on what ServiceNow is doing with domain-specific models for call centers – we see a tremendous opportunity to accelerate and build out. We’ve also been working very closely with the Microsoft side on the Azure OpenAI APIs; that’s what we’ve been using to start running this early experimentation on our ServiceNow datasets internally.
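For readers wondering what that experimentation might look like in practice, the sketch below points the standard openai Python client at a private Azure OpenAI deployment, so prompts and data are processed inside the organization’s own tenant rather than through a public endpoint. The endpoint, deployment name and environment variables are placeholders, not Accenture’s actual setup:

```python
import os
from openai import AzureOpenAI

# Placeholders: your private Azure OpenAI resource and key, supplied via env vars.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-05-15",
)

# "gpt-35-internal" is a hypothetical deployment name provisioned in your own
# tenant; requests stay within that Azure resource's data boundary.
response = client.chat.completions.create(
    model="gpt-35-internal",
    messages=[{"role": "user",
               "content": "Summarize this internal incident report: ..."}],
)
print(response.choices[0].message.content)
```

The design choice here is Kanter’s “four virtual walls” point: the model endpoint lives inside infrastructure the enterprise controls, so experimentation doesn’t require sending data to a shared public service.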

SB: You have claimed we are in the “dreaming stage of AI”, where it seems anything is possible and we’re not even quite sure what the endgame is yet. What kind of AI courses are going to be integrated into the RiseUp training program for ServiceNow, and what should be integrated into any AI training program?

JB: For a program like that to be successful, I would expect it to give people the grounding they need so that they have a general picture. If we want them to “rise up”, we don’t want them rising up to do tasks that will be automated in a couple of years.

For instance, there’s an art to the ServiceNow platform. When you use it in the right way, it starts completing tasks you couldn’t have imagined finishing in a month, let alone a week. Our goal is for users to continue to do all of that, but to let generative AI augment them. I expect we will use AI courses in that way: to equip users to leverage those tools, just like any developer going to university now.

SB: Quite often, it’s one-off mavericks within an enterprise team who are experimenting with AI on the sly, mainly because companies are apprehensive about investing or jumping into such untested tech. Is it the case that, for now, it’s probably too dangerous for companies to jump straight in, because what’s fine for the time being might turn into whopping regulatory mistakes for AI management in a year or so’s time?

JB: Using any tool indiscriminately, without putting any governance in place, is a challenge – especially as the more powerful the tool, the more governance you need. I think governance is often seen as a static thing, and that’s part of the problem: we make laws that change every decade or two, and these large governance processes come in a big wave and roll through everything.

Now we’re seeing the technology change every three months. It’s not that you want to skip governance – essentially, when you’re doing something on the side, you’re kind of skipping the governance anyway. The goal instead should be to help companies get to the point where their governance is agile enough to alter and jump from point to point.

The other thing I’d say is that there are many kinds of risk in any company. If you ban all your employees from using generative AI, you make yourself less competitive compared to a company that is using it, so there is a question of balancing the business risk you want to take – there’s no “no risk” solution. It’s important to have a global perspective on governance and push it down into individual departments and the various governance functions spread throughout an organization.

SB: How would you recommend a company that is completely new to this get started putting in place some simple algorithms and use cases with governance?

JB: Going through the whole audit process is sometimes a little bit painful, and relief from that is one of the valuable features customers want from a platform – it’s one of the reasons we are building LLMs that enable a live set of safeguards built into the platform.

There are two main aspects. First, being explicit about the outcomes you want to achieve is vital, because generative AI might lead to an outcome, but it’s not an outcome in itself. Understanding what you’re trying to achieve comes first. Second, if you look at the shape of the legislation that’s coming, it’s not just about each individual component; it’s about the whole system. Companies are going to have to get their heads around the idea that governance will occur at a different scale to development and a different scale to deployment, and it’s not going to sit neatly in one place. Understanding how to put in place something that can tackle those kinds of issues and resolve them is really important.

With ServiceNow, we believe this will be by far the easiest and most effective way, but even if you want to use generative AI with someone else, you’re still going to have to do the work. The legislation is coming. Everyone’s going to have to grapple with it – so start now!