What happens when AI stops CHATting to us and starts its own conversation?

Key Takeaways

Digital disruption is accelerating rapidly, with technology's influence on work expanding to determine not only how and when we work, but potentially who or what is involved.

Generative AI, exemplified by advancements like ChatGPT, is approaching a critical inflection point, raising concerns about the potential development of artificial general intelligence (AGI) and its far-reaching implications for humanity.

The urgency for responsible AI regulation is paramount as technology vendors compete for supremacy, necessitating a focus on safeguards to prevent irresponsible development of AI that could surpass human control.

Digital disruption started a decade ago; it gathered pace five years later; and two years ago it exploded. Today, the pace of change is unlike anything we have seen before, and it shows no sign of slowing.

When technology leaders warned us that ‘this is the slowest rate of change you will ever know’, I used to think it was a convenient marketing phrase designed to drive fear into CxOs who weren’t up to speed. Now it seems an astute prediction that is playing out in every domain on the planet. Technology has pervaded where, why and when we work, and its influence will soon determine the ‘who’ (or the what) as well.

Generative AI is here and, as predicted in the very first article I wrote for ERP Today way back in 2018, it’s going to have a profound effect on us all. ChatGPT has already evolved to run on GPT-4 and will continue to morph towards AGI, which could come as soon as tomorrow or as late as a few years down the road. While the timescales are unclear, what is certain is that there is no putting the genie back in the bottle. We are now facing an inflection point, not just for business and commerce, but for humanity itself.

For as long as AI has been a topic of discussion, a small non-commercial cohort of people and organizations has pushed for tighter AI regulation to prevent the irresponsible development of a technology that we may not be able to control. Now more than ever, those safeguards need to be the primary focus for technology vendors that are battling it out for AI supremacy with little regard for the wider consequences.

As you would expect, AI is designed to be intelligent, and at the moment we have some control over that intelligence because we are still the orchestrators. However, the recent transition to generative AI is a cosmic leap towards artificial general intelligence, and that is something we should all be acutely wary of. While a general intelligence algorithm would be a technical masterstroke, it could – and most likely would – lead to superintelligence. This would be an algorithm able to learn from raw data across a broad spectrum of domains, much like GPT-4, and teach itself to become more intelligent at a rate that humans could not fathom – like Skynet in The Terminator. (That film is almost four decades old, by the way.)

Ring-fencing this kind of superintelligent algorithm would be an impossible task, and hovering a finger over the ‘off button’ in case it all goes wrong is nonsense. Anything you have thought of, it has thought of too – and it’s already smarter than you are.

Presently, generative AI is programmed with human values and it is not self-aware – although it can learn, it is not conscious. But speak to a few sacked Google employees and ask them whether we are lurching into a world where algorithms are on the brink of becoming sentient, and you may have second thoughts about how much fun you are having asking ChatGPT supposedly harmless questions.

We’ve been promised transformative AI for decades. Now that we’ve got it, do we really want it?

Chat carefully.