Google addresses concerns about lack of AI regulation

[Image: Google sign on a building]

AI is developing at a rapid rate, which has raised concerns about the lack of restrictions on its use. Because of this, Google has been holding early conversations with the EU about implementing AI regulation, it was revealed this week.

Chief among these concerns is how to distinguish human-generated content from AI-generated content. In response, Google Cloud CEO Thomas Kurian has highlighted that Google is working on technologies to solve this problem: at its I/O event last month, the company unveiled a ‘watermarking’ solution that labels AI-generated images.
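To make the idea of labelling concrete, here is a minimal sketch in Python of the simplest possible form of provenance labelling: attaching an “AI-generated” tag to an image’s metadata. This is illustrative only and is not Google’s method, which reportedly embeds the watermark in the image data itself; the function and tag names here are hypothetical.

```python
# Illustrative sketch: tag a PNG as AI-generated via a metadata text chunk.
# Not Google's technique, which is embedded in the pixels themselves;
# the "provenance" key name below is a hypothetical choice.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Save a copy of an image carrying an 'ai-generated' metadata tag."""
    image = Image.open(src_path)
    info = PngInfo()
    info.add_text("provenance", "ai-generated")  # hypothetical tag name
    image.save(dst_path, pnginfo=info)

def is_labeled_ai_generated(path: str) -> bool:
    """Return True if the image carries the 'ai-generated' tag."""
    with Image.open(path) as image:
        return image.text.get("provenance") == "ai-generated"
```

A metadata tag like this is trivially stripped by re-saving the file, which is precisely why a watermark that survives editing and re-encoding is the harder problem the industry is working on.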

Kurian told CNBC: “We’re having productive conversations with the EU government. Because we do want to find a path forward,” adding, “we do think these technologies are powerful enough, they need to be regulated in a responsible way, and we are working with governments in the European Union, United Kingdom and in many other countries to ensure they are adopted in the right way.”

There is also apprehension that generative AI models could harm artists and other creative professionals who rely on royalties to make money, since these models are trained on huge sets of publicly available internet data, much of which is copyright-protected.

Among these AI critics are several high-profile former Google researchers, such as Timnit Gebru, a former senior leader in AI ethics research who lost her job partly because of her outspoken criticism of the harms of AI systems. She has stated: “We need regulation and we need something better than just a profit motive,” and: “AI is not magic. There are a lot of people involved – humans”, a reminder that responsibility for these concerns lies with the people behind AI systems, not with the technology itself.

Legislation named the EU AI Act was approved earlier this month and will bring oversight to AI development in the EU. It is set to be the world’s first comprehensive AI law, and it seeks to “promote the uptake of human-centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy and rule of law and the environment from harmful effects.”

As an example of how the act would work in practice: systems such as ChatGPT would have to disclose that their content was AI-generated, distinguish deep-fake images from real ones, and provide safeguards against the generation of illegal content.

Outside the EU, however, AI legislation is far vaguer. In the US, for example, several guiding federal documents from the White House address AI harms, but they have not created a consistent federal approach to AI risks. There is some movement, though: the White House Office of Science and Technology Policy (OSTP) published a blueprint for an AI Bill of Rights in late 2022.

The blueprint set out five principles: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and alternative options. Nonetheless, some advocates of government controls believe it does not go far enough and will prove largely ineffectual; they would rather the document contained more of the checks and balances found in the EU AI Act.