AI in the House of Lords

The experts have descended, so what have we learned?

Last week was London Tech Week, and technology specialists descended on the capital in force. In a true celebration of London as a global tech hub, over 70 technology events took place around the central venue in Westminster and at surrounding ‘fringe event’ locations, including an artificial intelligence (AI) summit. There were keynote and vision stages, a technology demo area for trying out innovations, and plenty of networking opportunities.

At the UK House of Lords, we enjoyed a champagne afternoon tea, hosted by the Institute of Directors (IOD) ERP/Digital group, and heard about the current state of AI. ‘Leading the way for professionalism and good governance in business’ for almost 120 years, the IOD offers professional development, connections, and influence for its global community of directors. But bubbly, scones, and clotted cream aside, what did we learn about AI?

AI technology is undoubtedly advancing at a rapid rate. As Dev Govender, financial services lead director at PwC, explained, “We’ve now seen the large tech companies, such as Microsoft, SAP, Oracle, and Salesforce, acquire automation companies. It’s here to stay, playing an increasingly essential role in driving value, and is now using technologies like deep learning, machine learning, neural networks, natural language processing, and computer vision.”

Recent AI endeavors have seen some clear wins. Speakers from the University of Surrey and the Surrey Institute for People-Centred AI (SIPCAI), a world-leading creator of AI technologies, centered the conversation on AI’s huge potential to address global challenges, including sustainability goals, healthcare, and communications. SIPCAI founding director, Professor Adrian Hilton, shared how AI-assisted sleep research can deliver personalized care for people living with dementia, and how AI algorithms are improving social accessibility through the world’s first sign language translation system.

Clearly, AI is doing some good. However, persistent problems with diversity and representation have produced continual missteps, embarrassments, and even acts of discrimination. Though we might assume businesses have long since learned from the Amazon AI recruitment debacle, AI mistakes still range from the farcical to the incredibly damaging. As warned by another event speaker, Dr Claire Thorne, Co-CEO of TechSheCan, “bias and prejudices are built in from the very beginning by the people that created the technology.”
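
To make that mechanism concrete, here is a minimal, purely illustrative Python sketch (the CV data and keywords are hypothetical, not drawn from any real system) of how a naive scoring model trained on skewed historical hiring decisions reproduces that skew through a proxy feature, much as was reported in the Amazon case:

```python
from collections import Counter

# Hypothetical historical CVs: (keywords, hired?). The past decisions were
# skewed: equally qualified candidates with the "netball_club" keyword were
# rejected, so that keyword becomes a proxy for the disadvantaged group.
history = [
    ({"python", "chess_club"}, True),
    ({"python", "chess_club"}, True),
    ({"java", "chess_club"}, True),
    ({"python", "netball_club"}, False),
    ({"java", "netball_club"}, False),
]

hired, rejected = Counter(), Counter()
for keywords, was_hired in history:
    for kw in keywords:
        (hired if was_hired else rejected)[kw] += 1

def score(cv_keywords):
    """Score a CV by how often its keywords co-occurred with past hires."""
    return sum(hired[kw] - rejected[kw] for kw in cv_keywords)

# Two identical skill sets; only the proxy keyword differs.
print(score({"python", "chess_club"}))    # 4: favoured
print(score({"python", "netball_club"}))  # -1: penalized via the proxy
```

No one wrote a discriminatory rule here; the bias was inherited entirely from who was hired in the past, which is exactly why it is so hard to spot once a system is deployed.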

Only last year, the Dutch government resigned after using a ‘self-learning’ algorithm to classify benefit claims. It mistakenly labeled over 20,000 parents as fraudsters, a disproportionate number of whom had a dual nationality or an immigrant background. Meanwhile, at a football match played under COVID restrictions, an AI camera operator kept mistaking a linesman’s bald head for the ball, making it a confusing game for the fans watching at home.

With clear biases and gaps in our own knowledge, the technology we are creating now, and may create in the future, is being infused with those same flaws.

It is estimated that 65 percent of today’s children will go on to do jobs that don’t yet exist. For Dr Claire Thorne, the education gap is only compounded by “reports of racism across higher education institutions for mental health students” and “the government social mobility commissioner stating, the reason girls don’t pursue physics is their aversion to hard maths”.

How has the UK government answered the educational, ethical, and regulatory issues surrounding AI? Going solely on the words of headline speaker Lord Clement-Jones, it has not progressed much in the last two years. Back in 2020, a UK Parliament statement from the Select Committee on Artificial Intelligence declared there was “no room for complacency”. An appointed committee was to create a “five-year strategy” for the UK to “take advantage of AI, rather than be taken advantage of by it”. It called for “more and better coordination” from the top, with the government leading the way to make ethical AI a reality.

Two years down the line, that strategy has gained limited ground, not just in education and diversity, but in setting workable regulations to control the creation and deployment of AI algorithms. Lord Clement-Jones spoke of the need for boardrooms to be heavily involved and warned of the risks of getting development wrong. He called for the UK “to make AI our servant, not our master” and stressed the importance of asking, “though we can, whether we should” be using this technology for the problems at hand.

Nonetheless, there was no mention of exactly how to regulate something so freely accessible. Now, with “over 150 sets” of common AI ethical principles created worldwide, the question remains: how can those principles be put into practice? Although Lord Clement-Jones stated the need for a “regulated, but free, AI world”, there were no definitive moves to help us achieve this feat.

We are entering a world where anyone can create an algorithm to put AI into action, and businesses and individuals alike are only accelerating their development of intelligent tools. As much as the likes of Google and Microsoft may be battling to ensure fair AI outcomes, how much visibility can we really have into AI ethics before it’s too late?

Only last month, the Information Commissioner’s Office (ICO) issued a £7.5 million fine to Clearview AI Inc. The AI company had wrongfully collected more than 20 billion online images and associated data of UK residents, and individuals elsewhere, to create a global database for facial recognition technologies.

From all the speakers, what resonated was the sheer amount of work needed to ensure AI technology can operate fairly and securely at scale. While our technology is advancing, our progress has only emphasized the challenge of defining regulations and legislation that work in the real world of development.

As it stands, it’s apparent that AI cannot be left responsible for inclusivity drives. Indeed, any problem we wish to solve with AI must be specifically and carefully addressed, to ensure further unanticipated issues can’t emerge from algorithms or data-gathering initiatives.

What is essential for AI moving forward is a means to ensure development is safe and fair for widespread societal use. What is still lacking is a clear government strategy to address this global concern.