How AI is turning issues into opportunities

Artificial Intelligence

AI is transforming technology worldwide, but the context behind the transformation varies by region – can enterprise keep up?


As with any high-growth industry, nations around the world compete to lead the way on innovation and applications that benefit governments, citizens and businesses. The artificial intelligence (AI) industry is growing at incredible speed, and companies around the world are investing billions of dollars to win the ‘AI race’ and secure the largest market share. Predictions suggest that by 2030 about 70 percent of companies will have adopted some form of AI technology. According to Google CEO Sundar Pichai, the impact of AI on our development as a species will be even greater than that of fire or electricity. The reason is simple: whether modelling climate change, developing new medical treatments, exploring space or increasing speed to market in manufacturing, AI is changing the way we all live and work.

Marc Andreessen, the American entrepreneur, investor and software engineer, famously said that “software will eat the world” – meaning that every company will become a software company or die. The same applies to AI: every company will eventually leverage AI, since AI is a new paradigm of software development that extends the reach of software. This doesn’t mean, however, that every company needs to build a huge data science team. As AI matures, more AI capabilities come embedded in business software and in low-code/no-code development tools, and large pre-trained models in the public domain mean companies need little or no training data of their own. We therefore define an AI company as any organisation that leverages artificial intelligence to improve business processes and products at scale.

Aside from the mass investment in these technologies, another main driver of AI adoption is the urgent need for automation and intelligence in global civil infrastructure. As populations continue to grow, innovation – in particular, big data and AI technologies – is needed to improve the standard of living and working. Despite macroeconomic factors impacting innovation and productivity globally in 2022, there is no slowing the development of AI. While there are certain challenges, there are even more opportunities.


Regulation is coming, but will it slow down innovation?

In Asia, governments tend to be very open to the use of big data and AI, and states invest massively in digital solutions; the commercialisation of AI applications has been very successful. In the US, AI innovation is led and funded by large corporations, and the country currently leads in AI research and applications. The European approach, finally, often puts regulation and safeguarding before innovation, and public opinion remains rather sceptical about digital transformation, AI and big data. Europe has been very successful in basic research and has a long tradition in AI research, but when it comes to commercialising AI, European industry has fallen behind the US and China, especially in AI for the internet and consumer products.

2022 has seen markets continue down the regulatory path. In July, the UK Government set out its emerging thinking on how it would regulate the use of AI, and it is expected to publish proposals in a white paper later this year, which a parliamentary committee will then examine in its inquiry. AI’s role in the UK economy and society is growing, but there are concerns around its use: MPs will examine the potential impacts of biased algorithms in the public and private sectors, as well as the lack of transparency over how AI is applied and how automated decisions can be challenged.

The European Commission is also proposing the first-ever legal framework on AI, which addresses the risks of AI and positions the EU to play a leading role globally. The regulatory proposal aims to provide AI developers, deployers and users with clear requirements and obligations regarding specific uses of AI. At the same time, the proposal seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises.


Regulation is tricky to introduce, given how quickly the technology and its use cases are developing. Developers may shiver at the thought of restrictions that could stifle their innovation, but regulators do not generally aim to prohibit or slow down the use of AI – rather, to limit intentional or unintentional harms. Companies like ours, across Europe, the US and Asia, are engaging with policymakers to discuss approaches that achieve both: enabling growth and innovation in AI while managing the risks.

Regulators and governments are not typically technology experts. When lobbying or collaborating with governments on what AI regulation should include, it is important that companies help governments focus on the nature of the use cases rather than on the technologies themselves. The risk-based approach of the EU’s AI Act is a step in the right direction: the law assigns applications of AI to three risk categories.

Firstly, applications and systems that create an unacceptable risk – such as government-run social scoring of the type used in China – are banned. Secondly, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated. In any regulation, the defined obligations need to be general enough to cover all existing and emerging AI approaches – for example, by defining processes to be followed rather than technical parameters to be adhered to.
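As an illustration only, the three tiers described above could be sketched as a simple lookup. The category names, the `obligations` helper and the spam-filtering example are our own shorthand for this sketch, not text from the Act itself:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative shorthand for the AI Act's three risk categories."""
    UNACCEPTABLE = "banned outright"
    HIGH = "subject to specific legal requirements"
    MINIMAL = "largely unregulated"

# Example use cases mapped to tiers; the first two come from the text
# above, the third (spam filtering) is a commonly cited low-risk example.
EXAMPLES = {
    "government-run social scoring": RiskTier.UNACCEPTABLE,
    "CV-scanning tool that ranks job applicants": RiskTier.HIGH,
    "spam filtering": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Look up the (illustrative) regulatory treatment of a use case."""
    return EXAMPLES[use_case].value
```

The point of such a structure is the one made above: obligations attach to the use case, not to the underlying technology.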


Business challenges impacting AI development and adoption 

Earlier this year, a report from the European Parliament’s special committee on artificial intelligence in a digital age said that the EU had ‘fallen behind’ in the global tech leadership race. “We neither take the lead in development, research or investment in AI,” the committee stated. “If we do not set clear standards for the human-centred approach to AI that is based on our core European ethical standards and democratic values, they will be determined elsewhere.” While the potential frameworks in Europe do seem strict, they force us to develop rules and methods to deal with challenges. GDPR and the emerging AI regulations require AI solutions to be transparent, and while this may create hurdles initially, it pushes AI research and development to invest more effort in trustworthy AI.

Another challenge for businesses is managing expectations and building a better understanding of AI. A baseline level of data literacy is a prerequisite. Businesses must also get comfortable with probabilistic modelling – the statistical practice of modelling random variation in order to forecast the likelihood of future outcomes. Probabilistic models can handle new situations and a wide range of uncertainty without underestimating risks. Using this approach, businesses can quickly determine how confident an AI model is in any given prediction, and treat low-confidence outputs with appropriate caution.
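To make this concrete, here is a minimal, illustrative sketch – not tied to any particular product or library, and with invented labels and scores – of how a prediction can be paired with a confidence score that business rules can act on:

```python
import math

def softmax(logits):
    """Turn raw model scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_with_confidence(logits, labels, threshold=0.8):
    """Return the most likely label, its probability, and whether the
    prediction clears a confidence threshold set by the business."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best], probs[best] >= threshold

# Hypothetical scores from a customer-churn model
label, prob, confident = predict_with_confidence(
    [2.0, 0.5, -1.0], ["will churn", "will stay", "undecided"])
# prob is about 0.79, just below the 0.8 threshold, so this
# prediction would be flagged for human review.
```

The useful property is that the model reports not just an answer but how sure it is, so a business can decide when to automate and when to escalate to a person.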

People are increasingly sceptical of AI, and yet the technology is spreading into all areas of life and becoming more integrated into the way we live. So how can we bring more transparency to how AI works and help allay people’s fears? Many excellent executive education programmes on the strategic and practical implications of AI have sprung up over the past few years that can help executives navigate this new world.

The fast-moving and diverse international regulatory environment is creating uncertainty and risk. For example, individual US states are now releasing their own diverse regulations ranging from data privacy laws in California to algorithmic bias audits in the recruitment process in New York. While this remains up in the air, companies must continue to engage with policymakers and closely monitor the situation. The aim isn’t to prevent the use of technologies, but to ensure they are safe and beneficial to every citizen and business. 


Closing the AI skills gap

The hangover from the pandemic and wider macroeconomic shocks continue to weigh on markets and industries globally, and this has had a detrimental impact on recruitment, education and upskilling across sectors. The skills shortage in European tech has been well documented, and filling this void must be a priority for governments across the region if it is to keep up with Asia and the US.

In October, the European Institute of Innovation and Technology called on partners to get behind its new Deep Tech Talent initiative, which aims to address the current skills gap across Europe’s deep-tech sector. Over the next three years, it will provide one million people with the skills they will need for the EU to become an innovation and tech powerhouse.

Initiatives like this are a great start, and it has also been fantastic to see universities across Europe starting to teach courses focussed on AI. One of the biggest criticisms of AI, and a leading cause for concern in its adoption, is the way both conscious and unconscious biases feed into decision making. Bias is an inherent human trait, reflected and embedded in everything we create. European AI adoption will only reach its true potential if diversity and inclusion are at its core. It’s a complex topic: diversity is not just about gender, but also about age, nationality, sexual orientation, socioeconomic background, neurodiversity and ethnicity. When we talk about closing the skills gap, there must be diversity in development teams, so the skills needed cannot be hired from one place or through one process.

We need more data scientists, but we also need AI experience among the people designing, developing and operating applications that integrate AI. Taking an open and collaborative approach to data science can pave the way for a fairer and more equitable world by reducing bias in AI.

It has been an incredibly challenging year for everyone, and I hope that the developments and innovations we are seeing in artificial intelligence will continue to make both work and personal lives easier for everyone in future. Whoever wins the AI race, it is clear that there is a global concerted effort to improve the lives of everyone with a fascinating and brilliant technology.  

Ulf Brackmann, vice president – artificial intelligence technology, SAP