Vendors are all over UK’s AI Safety Summit – but will it “generate” any impact?


The first-ever AI Safety Summit is drawing to a close in the UK, gathering government representatives, tech industry heads and academics, including the likes of tech mogul Elon Musk and US Vice President Kamala Harris. All eyes are on the event, and many hope it will blaze a trail for the first genuinely international discussion of a technology with the potential to change industries and the planet as a whole.

Given the high-profile roster of attendees, many expect the summit to produce a distinctive approach to the newly emerging dangers that accompany the AI hype. But what are vendors actually expecting from the summit, and could it ‘generate’ any real difference?

Sharing the common attitude among industry leaders, an Amazon spokesperson tells ERP Today that the company is attending the summit led by a dedication to driving innovation on behalf of customers and consumers, “while also establishing and implementing the necessary safeguards to protect them”.

Amazon likely represented both its retail and cloud ventures at the summit. Arguably AWS is where the real AI focus is for Andy Jassy and company, with the vendor keen to set out its GenAI stall for the very profitable enterprise marketplace.

The summit was also attended by fellow vendors Google, IBM, Microsoft and Salesforce, with the wider enterprise and consultancy ecosystem keen to comment on PM Rishi Sunak’s grand affair.

EY’s view is that the summit offers “a unique opportunity for the UK to position itself as a world leader in AI”, according to Harvey Lewis, consulting partner for artificial intelligence at EY.

Ultimately, the summit is an opportunity for countries to reach a common understanding of the risks of AI and how to mitigate them. In Britain’s case, this is exemplified by the “world first” formation of a UK AI Safety Institute.

Lewis believes that the institute’s aim to “explore all risks of AI, from the most extreme, through to social harms such as bias and misinformation, is a welcome step in addressing the spectrum of dangers posed by [AI], not just the ‘existential’ risks.”

At the same time, the EY partner points out that while AI safety is a priority, the UK government has recognized the immense opportunities of ‘AI for good’ in improving productivity and addressing significant world challenges, such as climate change.

Regulation versus roaring ahead

AI regulation was keenly discussed at the summit – particularly how existing AI principles and governance frameworks can be joined to address the challenges of quickly advancing GenAI systems while still encouraging responsible innovation.

Praising the collaboration between institutions and governments, Zahra Bahrololoumi, UKI CEO at Salesforce, said: “Pausing to contemplate the potential risks and benefits of AI in front of us now and in the future, will pay dividends.” She believes that gathering varied perspectives and experiences, having open and honest discussions, and collaborating will help create a path forward.

Similarly, Eric Loeb, executive vice president of global government affairs at Salesforce, says that “the principles developed by the G7 members around safety and trust in AI are an important step.

“Now, the AI Safety Summit is providing a crucial forum for discussion between the private and public sector.”

Hold your own AI safety summit

At the dawn of the AI revolution, with many businesses already using AI to enhance productivity and customer experiences, the summit’s outcomes are hotly anticipated: they could shape innovation that is in the works as we speak.

But perhaps there is one glaring issue with global initiatives of this size, such as the executive order recently announced by US President Biden: unless they are followed by constructive action, they can come to nothing.

Biden’s order on the safe use of AI entails government-set guidelines for so-called red-team testing, where assessors emulate rogue actors in their test procedures; official guidance on watermarking AI-made content; and new standards for biological synthesis screening – to identify potentially harmful gene sequences and compounds.

The outlined plans will need follow-through, though, from both the government and the software giants, if they are to make a real-world impact on emerging AI threats.

As such, the UK government’s AI summit is arguably a helpful step in the right direction, but just that for now – one step closer to establishing thought-through recommendations and guidelines backed by research.

One recommendation comes courtesy of Rashik Parmar, CEO of BCS, The Chartered Institute for IT. Post-summit, he would like to see government and employers insist that everyone working in a high-stakes AI role is a licensed professional, and that they and their organizations are held to the highest ethical standards.

Parmar added: “It’s also important that CEOs who make decisions about how AI is used in their organization are held to account as much as the AI experts; that should mean they are more likely to heed the advice of technologists.”

Execs and software heads reading this would be wise to hold internal AI safety summits of their own if the pleas for AI safety from Sunak, Salesforce and others are to have any impact.