When it can take only one mistake — or a perception of a mistake — for a user to stop trusting AI, how can you earn and sustain user trust?
By Cathy Cobey, EY Global Trusted AI Consulting Leader
Every challenge in business is an opportunity for AI. However, organizations are holding back from seizing these opportunities because of mistrust in AI, and are being cautiously selective about where it is used.
Trust is the foundation on which organizations can build stakeholder confidence and active participation in their AI systems. However, in this era of instantly accessible information, mistakes can be costly and second chances are harder to come by. Organizations that want to succeed in an AI world must embed a risk-optimization mindset across the AI lifecycle. They do this by elevating risk from a merely reactive function to a powerful, dynamic and future-facing enabler of trust.
Building trust in AI
AI is introducing new risks and impacts that have historically been the purview of human decision-making, not technology development.
With the risks and impacts of AI spanning technical, ethical and social domains, a new framework for identifying, measuring and responding to the risks of AI is needed: one built on the solid foundation of existing governance and control structures, but one that also introduces new mechanisms to address the unique risks of AI.
Managing the risks
Managing the risks of AI is about more than preventing reputational, legal and regulatory impacts. It’s also about being considered trustworthy. With public discourse on AI heavily skewed toward its risks, it will take time and active dialogue with stakeholders to build trust in AI systems.
Building trust in AI will take a coordinated approach. EY teams believe there are five pillars of trust:
- Advocacy – Do stakeholders understand the benefits of AI and how it will enhance the products and services they receive?
- Proficiency – Does AI enhance and improve an organization’s brand, product, service and stakeholder experience?
- Consistency – Does the organization’s use of AI align with its stated purpose and support its achievement over time?
- Openness – Has the organization effectively communicated and engaged with its core stakeholder groups on its use of AI and the potential benefits and risks?
- Integrity – Is the organization’s approach to the design and operation of trusted AI in line with the expectations of its stakeholders?
In establishing the five pillars of trust, the overarching element that connects them all is accountability.
Accountability is the foundation on which trust is built and the inflection point at which an organization translates intentions into behaviors. Regardless of an AI system’s level of autonomy, ultimate responsibility and accountability for an algorithm needs to reside with a person or organization. By embedding risk management into the design enablers and monitoring mechanisms for AI, organizations can demonstrate their commitment to accountability and their willingness to be held to account for an AI system’s predictions, decisions and behaviors.
“Leading AI organizations are building Trust by Design into AI systems from the outset to help organizations move from ‘what could go wrong?’ to ‘what has to go right?’”
With understanding still evolving on how AI operates and when and how risks could develop, many AI systems are considered high risk by default and approached with caution. To counteract this response, various tools and platforms are being developed to help organizations quantify the impact and trustworthiness of their AI systems.
Quantifying the risks of AI
If AI is to reach its full potential, organizations need the ability to predict and measure conditions that amplify risks and undermine trust.
Understanding the drivers of risk in relation to AI requires consideration across a wide spectrum of contributing factors including its technical design, stakeholder impact and control maturity. Each one of these, in their design and operation, can affect the risk level of an AI system. Developing an understanding of the risk drivers for an AI system is a complex undertaking. It requires careful consideration of potential stakeholder impacts across the full lifecycle of the AI system.
In developing a trusted AI platform, there are three important components to managing the risks of an AI system:
- Technical risk — evaluates the underlying technologies, the technical operating environment and the level of autonomy.
- Stakeholder impact — considers the goals and objectives of the AI agent and the financial, emotional and physical impact on external and internal users, as well as reputational, regulatory and legal risk.
- Control effectiveness — considers the existence and operating effectiveness of controls to mitigate the risks of AI.
Together, these provide an integrated approach to evaluate, quantify and monitor the impact and trustworthiness of AI. A trusted AI platform uses interactive, web-based schematic and assessment tools to build the risk profile of an AI system, and then an advanced analytical model to convert user responses into a composite score comprising the technical risk, stakeholder impact and control effectiveness of the AI system.
Organizations can leverage this kind of platform to quantify risk through a robust desktop design-and-challenge exercise at the beginning of their AI project. Embedding trust requirements in the design of AI systems from the outset will result in more efficient AI training and higher user trust and adoption.
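To make the idea of a composite score concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that each of the three components is normalized to a 0–1 scale from assessment responses and that effective controls scale down the inherent risk; the function name, weights and scoring scheme are hypothetical, not the actual analytical model of any vendor platform.

```python
# Hypothetical composite risk score. The component names, 0-1 scales and
# weights are illustrative assumptions, not a prescribed methodology.

def composite_risk(technical_risk: float,
                   stakeholder_impact: float,
                   control_effectiveness: float,
                   weights: tuple = (0.5, 0.5)) -> float:
    """Combine the three components into a single 0-1 risk score.

    Inherent risk is a weighted blend of technical risk and stakeholder
    impact; effective controls then scale the residual risk down.
    All inputs are assumed to be normalized to the range [0, 1].
    """
    for value in (technical_risk, stakeholder_impact, control_effectiveness):
        if not 0.0 <= value <= 1.0:
            raise ValueError("all component scores must be in [0, 1]")
    w_tech, w_impact = weights
    inherent = w_tech * technical_risk + w_impact * stakeholder_impact
    # Fully effective controls (1.0) drive residual risk to zero.
    return inherent * (1.0 - control_effectiveness)

# Example: moderate technical risk, high stakeholder impact,
# partially effective controls.
score = composite_risk(0.4, 0.8, 0.5)
```

A score like this is only useful alongside the qualitative risk profile it summarizes, but it gives a single number that can be tracked release over release.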
Responding to the risks of AI
Responding to the risks of AI will require the use of new, innovative control practices that can keep pace with AI’s fast-paced adaptive learning techniques.
In developing a risk mitigation strategy, it’s important for an organization to use an integrated approach that considers the objectives of the AI system, the potential impacts on stakeholders (both positive and negative), the technical feasibility and maturity of control mechanisms, and the risk tolerance of the AI operator.
Because AI can continue to learn and adapt its decision framework after it is put into production, strong monitoring mechanisms must be in place to establish trust. Organizations need to be able to continually evaluate whether an AI system is operating within acceptable performance levels and identify when a new risk is forming.
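The kind of continuous evaluation described above can be sketched as a simple drift check. This is a minimal illustration, assuming a hypothetical rolling performance metric (e.g., accuracy) with a fixed baseline and tolerance band; real monitoring would track many more signals, and the class and parameter names here are invented for the example.

```python
from collections import deque

# Hypothetical production monitor: the metric, window size and tolerance
# band are illustrative assumptions, not a prescribed standard.

class PerformanceMonitor:
    """Track a rolling performance metric and flag when it drifts outside
    an acceptable band, signalling that a new risk may be forming."""

    def __init__(self, baseline: float, tolerance: float, window: int = 100):
        self.baseline = baseline    # accepted performance level, e.g. 0.92 accuracy
        self.tolerance = tolerance  # allowed deviation before alerting
        self.scores = deque(maxlen=window)

    def record(self, score: float) -> bool:
        """Record a new observation; return True if the rolling mean
        has drifted outside the acceptable band."""
        self.scores.append(score)
        rolling = sum(self.scores) / len(self.scores)
        return abs(rolling - self.baseline) > self.tolerance

# Example: the rolling mean stays in band until performance degrades.
monitor = PerformanceMonitor(baseline=0.92, tolerance=0.05, window=50)
alerts = [monitor.record(s) for s in [0.91, 0.90, 0.88, 0.75, 0.70]]
```

The design choice worth noting is the bounded window: a `deque` with `maxlen` keeps the check responsive to recent behavior rather than diluting drift across the system’s entire history.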