Can AI regulations help build trust in the technology?


Key Takeaways

Voluntary codes of conduct for AI lack accountability, prompting calls for more comprehensive regulations like the EU Artificial Intelligence Act, which aims to balance innovation with safety and ethical standards.

Businesses prioritize rapid AI deployment for market advantage, often at the expense of safety and ethics, highlighting the urgent need for robust regulation and independent auditing to ensure responsible AI use.

The push for globally harmonized AI regulation is essential for fostering innovation and economic opportunity, yet conflicting national interests may lead to insular regulatory frameworks that hinder collaborative progress.

Some advocate voluntary codes of conduct to foster a climate of trustworthy AI, arguing that it is in the self-interest of AI developers to deliver AI systems that safeguard the health, safety and fundamental rights of users.

However, voluntary codes of conduct lack accountability for ensuring that the potential adverse impacts of AI systems are mitigated. The signatories of the EU-US voluntary Codes of Conduct acknowledged this inherent weakness themselves, describing the codes as “a stopgap measure designed to lay down lines in the sand while waiting for the passing and implementation of Europe’s rules.”

Balancing innovation and harm 

There is also considerable debate about the nature and scope of the AI regulation that will govern the responsible use of AI systems. Should it favor a pro-innovation approach, as proposed in the UK House of Lords’ Artificial Intelligence Bill? The principle espoused in the Bill is not to create “an outsized, do-it-all, regulator” but rather a principles-based approach grounded in “safety, security and robustness; appropriate transparency and explainability”.

At the other end of the spectrum is the EU Artificial Intelligence Act, which recently received approval from the European Parliament. The Act is expected to be the world’s first and most comprehensive regulation of AI: a risk-based, prescriptive regime that imposes obligations on tiers of AI systems spanning prohibited, high-risk and minimal-risk categories.
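
To make that tiered structure concrete, the sketch below models it as a simple lookup in Python. This is purely illustrative: the tier names follow the article, and the obligation summaries and examples are loose paraphrases of the Act’s published requirements, not legal text.

```python
# A simplified, illustrative model of the EU AI Act's risk tiers; the
# obligation summaries are loose paraphrases, not legal text.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"      # e.g. social scoring systems
    HIGH_RISK = "high-risk"        # e.g. AI used in hiring or credit decisions
    MINIMAL_RISK = "minimal-risk"  # e.g. spam filters

# Each tier carries progressively lighter obligations.
OBLIGATIONS = {
    RiskTier.PROHIBITED: "Banned from the EU market outright.",
    RiskTier.HIGH_RISK: (
        "Conformity assessment, risk management, documentation "
        "and human-oversight requirements before deployment."
    ),
    RiskTier.MINIMAL_RISK: "Largely unregulated; voluntary codes encouraged.",
}

for tier in RiskTier:
    print(f"{tier.value}: {OBLIGATIONS[tier]}")
```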

While the ambition of the EU Act is to balance innovation and harm, some commentators argue that its scope is an overreach and that its complexity makes it impractical to enforce. Furthermore, regulations tend to lag innovation; witness the length of time between the Act’s initial introduction in 2021 and its ultimate ratification in 2024. In the meantime, rapid advances in the state of the art of AI, particularly LLMs, resulted in protracted negotiations during the Trilogue Process and last-minute amendments.

The United States has seen a patchwork of state and federal efforts to address AI, particularly in light of the rise of LLMs. At the federal level, the emphasis is on promoting AI governance best practices, as enunciated in the Blueprint for an “AI Bill of Rights” and the executive order issued by the Biden administration. These initiatives, while a step in the right direction, lack teeth. Proposals have been considered to strengthen AI regulatory enforcement, including more rigorous AI safety standards such as those in the NIST AI Risk Management Framework. However, given the current political climate, it is unlikely that the US will pass meaningful AI regulation at the federal level.

Finally, there is a push toward globally harmonized AI regulation, as demonstrated by the G7 Hiroshima AI Process to foster an environment for “the common good worldwide”. Given the transformative and disruptive impact of AI globally, legal certainty governing the use of AI systems is a desirable ambition. According to Stanford University’s 2023 AI Index, 127 countries have proposed legislation focusing on AI regulation.

Globally harmonized AI regulation removes trade barriers, fosters innovation and creates a framework for economic opportunity. If, on the other hand, AI is treated as a geopolitical advantage, then a more insular regulatory framework will likely become the norm, with the risk of a zero-sum, “winner takes all” mindset.

Businesses will not wait for AI regulation

Businesses aren’t going to wait patiently for regulators to tell them how to govern the development, use and performance monitoring of AI. They have a self-serving motivation to get products to market as fast as possible to achieve first-mover advantage. The mentality is one of “ship now and fix later,” resulting in demonstrated harm to the health, safety and fundamental rights of consumers. Consumers are also at a distinct disadvantage in enforcing their rights: AI systems are opaque, uncontestable and unregulated, as Cathy O’Neil, a leading advocate of responsible AI, observes in her groundbreaking book Weapons of Math Destruction.

Leaving regulation of AI in the hands of developers is clearly not the solution. The bottom line is that operationalizing trustworthy AI demands a “whole-of-society” effort that combines voluntary codes of ethical AI best practices, AI standards and risk management frameworks, augmented by practical regulation that balances innovation with safeguards against its adverse impacts.

Moreover, there is a need for independent audits of AI systems, similar to financial audits by certified financial auditors, and momentum toward such oversight by certified AI auditors is building. Just like auditors of financial compliance and performance, subject matter experts in the objective assessment of AI systems will become invaluable in ensuring the responsible use of AI in business. ForHumanity, a non-profit public charity, is at the forefront of this effort, empowering individuals and organizations to develop audit criteria that regulators can endorse and that chartered auditors can apply in independent audits.

Operationalizing trustworthy AI

Apart from common-sense AI governance, AI risk management, regulation and independent audit criteria, organizations may consider instituting measures that mitigate AI risks while still benefiting from the technology.

First, many of the concerns surrounding LLMs stem from their vulnerability to hallucinations, their considerable environmental footprint and the large volumes of data they require.

An effective and proven alternative is to implement “purpose-built” AI models known as Small Language Models (SLMs), which are likely to command innovation leaders’ attention as businesses move to deliver positive business outcomes while mitigating compliance risks.

By adopting SLMs, businesses can reduce the risk of harmful inaccuracies and biases that might permeate larger-scale models, enabling more secure and ethical outcomes. This strategy also builds familiarity and dialogue between business and development professionals, as the development of AI becomes more closely intertwined with core business needs and processes. Such collaboration bodes well for a purposeful AI strategy that considers ethics, compliance and efficiency as much as versatility.
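
As a minimal sketch of what adopting an SLM for a narrowly scoped task could look like, consider the example below, which assumes the open-source Hugging Face transformers library; the model name, prompt and task are illustrative assumptions, not recommendations.

```python
# A minimal sketch of serving a purpose-built small language model (SLM)
# for a narrow business task. Assumes the Hugging Face "transformers"
# library; the model name below is illustrative, not a recommendation.
from transformers import pipeline

# Load a compact model small enough to run on modest in-house hardware.
generator = pipeline(
    "text-generation",
    model="microsoft/phi-2",  # example of a small (~2.7B parameter) model
)

prompt = (
    "Summarize the customer's refund request in two sentences, "
    "citing only the facts stated below.\n\n"
    "Request: The order arrived damaged and the customer asks for a refund."
)

# Greedy decoding keeps the output deterministic and easier to audit.
result = generator(prompt, max_new_tokens=80, do_sample=False)
print(result[0]["generated_text"])
```

Because a model of this size can typically run on the organization’s own infrastructure, sensitive data need not leave its compliance boundary, which itself simplifies regulatory exposure.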

Second, process mining technologies are proven to help organizations mitigate compliance risks by providing granular insights into the performance of compliance processes, surfacing potential gaps and identifying their root causes, thereby demonstrating auditability and traceability of conformance with mandated AI risk management frameworks.
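
As a sketch of what this might look like in practice, the example below uses the open-source pm4py process mining library to discover an as-is process model from an event log and flag deviating cases; the event-log file name and the workflow it describes are hypothetical.

```python
# A minimal sketch of conformance checking with the open-source pm4py
# process mining library. The event-log path is hypothetical.
import pm4py

# Load an event log exported from the compliance workflow system (XES format).
log = pm4py.read_xes("compliance_reviews.xes")

# Discover the as-is process model from the recorded events.
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)

# Replay the log against the discovered model to surface deviations,
# e.g. approvals recorded out of order or skipped review steps.
diagnostics = pm4py.conformance_diagnostics_token_based_replay(
    log, net, initial_marking, final_marking
)

# Flag non-conforming cases for root-cause analysis and audit evidence.
deviating = [d for d in diagnostics if not d["trace_is_fit"]]
print(f"{len(deviating)} of {len(diagnostics)} cases deviate from the model")
```

The list of deviating cases then becomes audit evidence: each one can be traced back to the specific activities and control steps where conformance broke down.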

Business benefits

Implementing regulatory compliance best practices that enable organizations to proactively navigate a complex and rapidly evolving regulatory landscape is only the most obvious business benefit. Perhaps more important, such investments foster a culture of trustworthy AI internally as well as externally, with customers and consumers. Effective AI governance and best practices result in increased brand loyalty and repeat business. Instituting trustworthy AI best practices and a governance framework is therefore simply good business: it engenders confidence and a sustainable competitive advantage.

Describing trust as “the remarkable force that pulls you over that gap between certainty and uncertainty; the bridge between the known and the unknown,” author and trust expert Rachel Botsman reminds us that as AI innovations continue to develop, bridging that gap through regulation will become even more important to guarantee trust in the technology.