As technology companies increasingly embrace AI and its potential to help people with tasks – from reducing personal admin to creating invoices and generating action steps based on cues – trust and trustworthiness have been front and center on the agenda for employees and businesses.
At the Mastering ERP 2024 conference, Insiders heard from Maria Axente, head of AI public policy and ethics at PwC UK, who shared her compelling narrative on the role of technical expertise, governance and multidisciplinary collaboration in ensuring ethical AI. Her insights drew attention to the philosophical and practical aspects of fostering trust in AI systems, while touching on mythological themes.
Icarus and the perils of ethical oversight
Literary tropes often come in handy for describing contemporary issues, and Axente likened the trajectory of AI development to the Greek myth of Icarus. “Our wings are dreams,” she explained, “on the precipice of unfurling through the potential of AI.” Yet, like Icarus, professionals must be wary of the metaphorical “solar heat” – the biases and ethical missteps that can erode trust and safety.
Her rhetorical question, “Who are we to decide when it comes to ethical AI?” also highlights a critical challenge – while engineers are essential for technical execution, it is organizations that must define ethical boundaries and mitigate biases. This division of responsibility is necessary because, as Axente pointed out, “Engineers can inadvertently bring their own biases to the table” due to human nature.
Defining trust and trustworthiness
Examining trust, Axente drew a distinction between trust and trustworthiness: trust is a belief in another party’s ethical and reliable behavior over time – something relational, shaped by perception and potentially fragile. Trustworthiness, by contrast, involves objective qualities – consistency, transparency and accountability – that support trust.
“For all the excitement around AI, we’ve not always been honest with ourselves about its strings,” Axente remarked. Drawing parallels to the aviation industry’s ecosystem of safety and trust, she emphasized the need for robust governance frameworks and proactive risk management in AI deployment. “Building trustworthiness is not just about technical excellence, it’s about demonstrating it.”
Three-layer framework for responsible AI
Building on this distinction, Axente introduced a three-layer framework for fostering trustworthiness in AI: Technical Tools, Institutional Processes and Stakeholder Engagement.
For Technical Tools, robust testing, monitoring and validation are essential. “Technical experts play a critical role,” she noted, “but they must also communicate their decisions transparently.” Collaboration across disciplines ensures that technical safeguards align with broader organizational and societal goals.
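To make the Technical Tools layer concrete, here is a minimal, hypothetical sketch of the kind of automated bias check such testing might include. The four-fifths disparate-impact rule used below is one common heuristic, not a method Axente named, and the functions and data are illustrative assumptions only.

```python
# Illustrative sketch only: a simple disparate-impact check of the kind
# a "Technical Tools" testing layer might automate. Not from the talk.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the positive-prediction rate for each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def passes_four_fifths_rule(predictions, groups, threshold=0.8):
    """Flag disparate impact: every group's selection rate must be at
    least `threshold` times the highest group's rate."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) >= threshold * max(rates.values())

# Hypothetical screening-model outputs for applicants from two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(selection_rates(preds, groups))          # {'a': 0.6, 'b': 0.4}
print(passes_four_fifths_rule(preds, groups))  # False -> flag for review
```

A check like this would run in a model's validation pipeline, turning the framework's call for “robust testing and monitoring” into an automated gate rather than a one-off audit.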
Regarding Institutional Processes, standardized procedures and clear accountability structures are vital. Axente shared that many organizations lack visibility into the AI systems they deploy, leading to fragmented and inconsistent governance. “Engineers need to work closely with compliance and business leaders to enforce standardized processes and ensure accountability,” she said.
Finally, for Stakeholder Engagement, Axente pointed out the importance of involving diverse voices in AI development. “No other technology has required such complex stakeholder engagement.” Social scientists, policymakers and end-users must contribute to the decision-making process to address AI’s socio-technical implications effectively.
The role of engineers in ethical AI
Engineers are pivotal to building trustworthy AI systems, and their role often extends beyond the technical domain. Axente shared a striking anecdote about Microsoft engineers who hesitated to choose a fairness definition for an AI system, asking, “Who are we to decide?” The episode illustrates the complex ethical terrain engineers navigate.
At the same time, “It’s not just about coding,” Axente asserted. Engineers must be prepared to address ethical considerations and collaborate with non-technical stakeholders to understand the broader impact of their work.
The need for improved communication is also a recurring theme in this field. Engineers, business leaders and policymakers often speak “different languages,” creating barriers to effective collaboration. Axente highlighted the importance of plain language and active listening: “We need to support engineers in finding their voice and ensure they are part of the dialogue.”
Trustworthiness before trust
Emphasizing the importance of establishing trustworthiness as the foundation for trust, Axente concluded that “trustworthiness is the best and most solid foundation we can have”. Organizations must lead by example, integrating responsible AI practices at a micro level while awaiting broader regulatory frameworks.
Citing efforts by The Alan Turing Institute and techUK, she noted promising initiatives like citizen juries, which engage the public in discussions about AI use in sensitive domains such as recruitment and medical diagnostics.
The path to trustworthy AI requires a collective effort, and Axente’s insights emphasize the importance of multidisciplinary collaboration, clear governance and open communication in making it possible. By focusing on trustworthiness, organizations can build AI systems that inspire confidence and deliver long-term value.
As she says, “It’s not rocket science to build AI solutions that demonstrate reliability and ethical integrity. It’s common sense – but common sense must be implemented systematically.”
For businesses and policymakers navigating the AI landscape, Axente’s call to action is clear – prioritize trustworthiness, foster collaboration and ensure that AI serves as a force for good.
What it means for ERP Insiders
At the Mastering ERP 2024 conference, Maria Axente, PwC UK’s head of AI public policy and ethics, shared a compelling framework for ensuring ethical AI. Her keynote highlights the critical importance of trustworthiness in AI systems, focusing on governance, technical rigor and collaboration. Axente’s perspective is a call for ERP professionals to champion responsible AI practices in their organizations. Below are the key takeaways:
- Trustworthiness vs. Trust: Axente distinguished between trust (a belief shaped by perception) and trustworthiness (measurable qualities like consistency, transparency and accountability). Trustworthiness must precede trust to ensure AI reliability.
- The Icarus Metaphor: Axente likened AI’s potential to Icarus’s wings, cautioning that unchecked biases and ethical oversights could erode confidence and safety in AI systems.
- Three-Layer Framework for Responsible AI:
  - Technical Tools: Emphasize robust testing, monitoring and transparency in decision-making.
  - Institutional Processes: Standardized governance frameworks and clear accountability are critical.
  - Stakeholder Engagement: Foster interdisciplinary collaboration with diverse voices, including policymakers, social scientists and end-users.
- Engineers’ Ethical Role: Beyond coding, engineers must grapple with ethical dilemmas and work closely with business and compliance leaders to ensure AI’s societal alignment.
- Communication Across Disciplines: Clear, jargon-free dialogue among engineers, business leaders and policymakers is essential for effective collaboration.
Axente’s insights highlight that achieving trustworthy AI is not a technical challenge alone but a collective endeavor requiring ethical foresight, rigorous governance and inclusive collaboration. By prioritizing trustworthiness, organizations can inspire confidence and deliver lasting value in AI systems. As Axente put it: “Common sense must be implemented systematically.”