Artificial Intelligence

Do you know what it is? AI is the hottest topic in tech right now. But, like so many other hot topics, it attracts a lot of confusion, misunderstanding and false narratives. In this editorial, Paul Esherwood looks at the history of AI, how it will impact your business, and asks what we are all doing to ensure AI is developed responsibly.

Don’t feel inadequate if you’re not up on your AI

If you’re not talking about artificial intelligence, blockchain and Kubernetes you’re just not keeping up, or at least that’s what many commentators would like you to think. But let’s have a bit of a reality check: what is AI? How will it affect you and your ERP platform? And should you really be worried about it?

For the purposes of this editorial I will reference three stages of AI: the early iteration of narrow AI, the emerging AI of today, and the prospect of an AI that will change the world as we know it.

The continuous development of AI will be the most disruptive invention ever. More revolutionary than the discovery of fire, more disruptive than the agricultural and industrial revolutions and more pervasive than the internet and social media. AI will solve the world’s hardest problems and, when fully mature AGI (artificial general intelligence) is developed, it will be the last thing humanity ever invents. AGI may be some way off – perhaps ten, twenty or thirty years, we just don’t know. But once we are able to create a general intelligence algorithm it will set the agenda, make the breakthroughs and determine the future.

The continuous development of AI will be the most disruptive invention ever

We’ve been using AI for 50 years

AI is not a new technology; we have all been using AI for decades. Every time you create a query in Excel. Every time you type something into Google. Every time you use your SatNav. There are literally millions of real-world applications, most of which we take for granted, that were once considered groundbreaking and are powered by AI.

Many emerging technologies were called AI in the development stage; once they were built and deployed, they were just called software. Since the 1950s, programmers have been hand-crafting expert systems to make our lives easier or to challenge us in game play. In its simplest terms, a pocket calculator uses AI – it is an expert system that is very good at maths. The vast majority of historical AI applications follow the same logic. They are all expert systems, loaded with knowledge tokens to perform a very narrow task quicker and more accurately than a human can. Whilst these systems have developed significantly and can now perform very complex tasks, you only get out what you put in – at some point, the knowledge was programmed by a human. The advantage was that AI could access the knowledge and deploy it quicker than we could.
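
To make the distinction concrete, here is a minimal, purely illustrative sketch of what such a narrow, rule-based expert system boils down to. The order fields, thresholds and actions are hypothetical and not drawn from any particular product, but the principle is the point: every ounce of the ‘intelligence’ was typed in by a human, and the program can never answer a question it was not explicitly given a rule for.

```python
# Hypothetical, hand-crafted expert system: all knowledge is written as rules.
# The field names, thresholds and actions below are illustrative only.

RULES = [
    (lambda order: order["value"] > 10_000, "Route to senior approver"),
    (lambda order: order["country"] not in {"UK", "IE"}, "Apply export checks"),
    (lambda order: order["stock"] == 0, "Trigger back-order workflow"),
]

def evaluate(order: dict) -> list:
    """Return every action whose hand-written condition matches the order."""
    return [action for condition, action in RULES if condition(order)]

sample = {"value": 12_500, "country": "DE", "stock": 0}
print(evaluate(sample))
# ['Route to senior approver', 'Apply export checks', 'Trigger back-order workflow']
```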

Historically, emerging technologies were called AI in the development stage; once built and deployed, they were just called software

So why is there so much hype around AI now when we have been using it for the last fifty years? To understand this, you first need to appreciate that AI has recently evolved from the type of narrow expert systems I described above to more autonomous deep learning applications that perform tasks we could only have dreamed of a few years ago. In the strictest sense of the word, the original flavour of AI wasn’t that intelligent because its capacity to learn was zero. That has changed, and the applications that were once called AI pale in comparison to the ML-based algorithms being used today.

There are two key drivers behind these recent advancements. Firstly, hardware has improved enormously; the exponential developments in processing power, memory and storage have accounted for roughly half of the overall growth in application power. Secondly, we are getting better at developing algorithms and coding AI. Many current applications go far beyond the narrow expert systems of years gone by and can now ‘learn on the job’, improving themselves without further coding from a human.
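
As a rough illustration of what ‘learning on the job’ means, the sketch below trains a tiny perceptron on invented data instead of executing hand-written rules: the behaviour ends up encoded in learned numbers that no programmer typed in. It is only a toy stand-in for the far larger deep learning models referred to here, and the example data and feature names are made up for the purpose.

```python
# Toy perceptron: the system adjusts its own weights from labelled examples,
# rather than following rules a human coded. Data and features are hypothetical.

def train_perceptron(examples, labels, epochs=20, lr=0.1):
    """Learn weights from labelled examples instead of hand-coded knowledge."""
    weights = [0.0] * len(examples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):      # y is 1 (late) or 0 (on time)
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if activation > 0 else 0
            error = y - prediction              # learn from each mistake
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Invented 'late delivery' examples: [order_value_in_k, distance_in_100km]
X = [[1.0, 0.2], [5.0, 3.0], [0.5, 0.1], [4.0, 2.5]]
y = [0, 1, 0, 1]
weights, bias = train_perceptron(X, y)
print(weights, bias)  # the 'knowledge' now lives in learned numbers
```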

What are AGI and superintelligence?

The holy grail for AI developers is AGI and superintelligence. AGI is a term coined to describe an AI that has many of the attributes that we have as humans; it learns from raw data in the way a baby does and becomes an expert across multiple domains in the same way an adult does. It stores knowledge, learns from experience and ultimately will be able to perform a broad range of tasks equal to, or better than, a human.

Applications that were once called AI pale in comparison to the emerging deep learning algorithms being used today

Superintelligence is one step further, and although it’s just one more step, it’s so prodigious in its implications that it should not be underestimated. Conservative estimates suggest AGI could be twenty or thirty years away – although it could come much sooner, or never – but it’s almost certain that if we do reach that level of AI maturity, superintelligence would follow very soon after. Perhaps months, or weeks – even days or hours is possible. As the AGI teaches itself and learns at an exponential rate, an intelligence explosion will occur and put all of humanity’s combined achievements into the shadows at a stroke.

The term ‘superintelligence’ was popularised by Nick Bostrom of the Future of Humanity Institute at Oxford University and describes an intelligence that far outstrips human capacity. Comparing superintelligence to human-level intelligence is like comparing the intelligence of an ant with that of a Cambridge don… there really is no comparison.

What does this all mean for Enterprise Applications?

The impact of AI development really depends on what platform you use and what type of user you are. Undoubtedly, even the most basic out-of-the-box solution will have some element of AI, but you won’t know about it, let alone have to worry about it. Certain enterprise functions have taken the lead in AI adoption whereas others have lagged behind for a variety of reasons, such as regulation, governance and suitability.

HCM – or HR to the more traditional reader – which includes workforce management, absence, recruitment, payroll and more, is an enterprise function that has taken full advantage of AI capability. More on this in the next issue in our HR Systems review – a comprehensive guide to digital HR.

AI has spawned a revolution in the most Luddite of industries

The same applies to verticals; some are better suited to take advantage of AI than others. The action for the last few years has been in manufacturing and advanced supply chain management. The cosmic leaps made in this area have transformed many traditional metal bashers into lean and agile Industry 4.0 forerunners. The ability to manage an end-to-end process that includes supply chain, manufacturing, distribution and sales with intelligent planning, predictions and analytics has spawned a revolution in the most Luddite of industries.

Another vertical that has adopted AI, as Georgina Elrington highlights in her editorial on the same subject, is the financial services industry. Very sophisticated AI algorithms are being used to play the markets, tackle fraud and drive investment decisions. However, if you’re not deploying your ERP in an industry that lends itself so well to AI, the benefits of all these leaps in capability will be almost imperceptible. As Anni Harju describes elsewhere, most adopters of AI tend to take a ‘toe in the water’ approach, where AI is used to solve simple problems rather than address the big-picture questions. And adoption rates are largely driven by the vertical or function rather than by the overall capability.

As consumers, we have also favoured this approach in our daily lives. Not many of us live in connected homes where our lives are organised and managed by AI – even though this is possible. Instead, we have an Alexa that plays our favourite songs when we ask it to, or a Hive device that keeps the temperature in our house just so. Hardly solving our biggest personal issues, but convenient all the same and a nice easy opener into the world of AI for most people.

Trust and Responsibility

In the same way, enterprises of all shapes and sizes are starting to embrace AI to solve their ‘small problems’ without having to dive headfirst into a world that, frankly, we don’t yet fully understand and certainly don’t yet trust. I won’t labour the trust issue for now – it will be resolved, but only with time, and I don’t see any way that vendors or buyers can expedite that process. The more adopters we have, the more positive use cases the vendors can quote and the more likely it is that others will follow suit. There are plenty of AI faux pas holding the trust issue back, not to mention significant regulatory and compliance concerns, so for now the timeline for trusting AI is an unknown quantity.

The idea of ‘ring-fencing’ the AI with someone’s finger hovering over the off switch in case it all goes wrong is fanciful

Allied to the trust issue is that of responsibility. Most, if not all, of the effort in AI is geared towards creating more capability rather than considering the potential negative implications. Very little serious effort is currently being put into future-proofing a technology whose final incarnation – and, more importantly, its capabilities – even the most learned experts agree they cannot predict. The idea of ‘ring-fencing’ the AI with someone’s finger hovering over the off switch in case it all goes wrong is as fanciful as the storyline of Terminator 2, in which Skynet becomes self-aware and launches Armageddon against its creator. Terminator 2 was released nearly 30 years ago and, whilst it’s a Hollywood blockbuster designed to titillate, the central theme of that film should still give concern to those with the capability to one day build AGI – it’s a fundamental, potentially existential, problem that we don’t have an answer for.

Supporting this view, Cathy Cobey, EY’s trusted global AI advisor, said: “AI has the potential to transform many industries, but its pace of development has been impacted by a lack of trust. In fact, although there is a growing consensus on the need for AI to be ethical and trustworthy, the development of AI functionality is outpacing developers’ ability to ensure that it is transparent, unbiased, secure, accurate and auditable.

“The implications of a failed AI agent cascade beyond operational challenges; it may also lead to litigation, negative media attention, customer churn, reduced profitability and regulatory scrutiny.

AI has the potential to transform many industries, but its pace of development has been impacted by a lack of trust

“As a result, there needs to be a recognition that AI as a technology is still in its infancy, and therefore needs to have the appropriate guard-rails put in place. There also needs to be active dialogue and transparency with all parties, including the final user, on the opportunities and risks of AI – awareness will bring understanding and eventually acceptance.”

What are the ERP vendors saying about AI?

ERP vendors are all hard at work developing their cloud ERP platforms, and all have some level of AI embedded. At Oracle’s recent OpenWorld Europe event, CEO Mark Hurd delivered his verdict on the impact of AI, saying that all cloud applications would contain AI by 2025 and that a cloud and AI strategy was far more than a technological advancement, it was “a business model” that all organisations must adopt. Hurd went on to say that 60 percent of the jobs needed to manage an AI economy had not yet been created and that cloud (in relation to AI) was “the movement of innovation” and “central to everything that we do at Oracle.” His speech was definitive and left no room for misunderstanding: AI and cloud are the only game in town, and any organisation that doesn’t adopt them will be left behind.

I strongly believe that AI is going to be a new human right. Every person and every country needs to have access to this new critical technology

Another heavyweight from the cloud arena went further, believing that AI was not just central to corporate strategy but vital for the prosperity of humanity. Marc Benioff, CEO at Salesforce, said at the World Economic Forum: “We are risking a new tech divide between those who have access to AI and those who don’t. Those without AI are going to be weaker and poorer, less educated and sicker.”

Given the far-reaching implications of AI, not just for your bottom line but for the future prosperity of humanity, it is high time that the issue of responsibility was placed front and centre by all vendors and developers.

Summing up, Cobey of EY said: “Creating trust in AI will require both technical and cultural solutions. On top of operating reliably, AI must also comply with ethical and social norms, including cultural values. In fact, teaching AI is analogous to parenting a child – you need to teach AI not only how to do a task but also all the social norms and values that determine acceptable behaviour. This will be the biggest barrier in moving from narrow to general AI.”