Race for the prize: An update on the AI race as ChatGPT speeds ahead

Image: A robotic figure captured in mid-running motion

Key Takeaways

The competition in the AI landscape is intensifying, with Microsoft leveraging its partnership with OpenAI to challenge Google and prompt innovation across the sector.

Understanding the distinction between language models and knowledge models is crucial, as the quality of knowledge extracted from data is vital for the success of generative AI technologies.

Cloud vendors are being assessed on their ability to support AI strategies, efficient compute capabilities, LLM creation, and their overall architecture, with companies like Oracle and Nvidia positioning themselves strongly in the evolving AI market.

I previously looked at the state of the AI race about a year ago – and as you can imagine, a lot has changed with AI since then due to the rise of ChatGPT.

Credit goes to Microsoft, which, through its partner-ownership relationship with OpenAI, has made AI a widely discussed topic. The result is that AI is now tangible: everybody with a smartphone can try out OpenAI’s natural-language-powered LLM for themselves, and LLMs have become part of our common parlance.

While the euphoria has ebbed a little, solid first-generation AI use cases are being delivered to enterprises. At the same time, AI has derailed a lot of product roadmaps and development plans, as the whole industry has had to react on the fly, not only at the platform (PaaS) level but also at the application (SaaS) level.

It’s not about the language. It’s about the knowledge, stupid

Humans like to make technology human, anthropomorphizing it, and the same has happened with LLMs. LLMs are not really large language models as such, but large knowledge models. Calling them that, though, would lead to challenging and concerning discussions about what knowledge is being used, where it comes from, and so on. Still, whenever you see LLM, picture LKM instead – a large knowledge model.

And while we are at it – do not ask about the data. Data can be misleading and wrong. The question needs to be: what knowledge can be extracted from the data, and what knowledge gets fed to the LKM, to be accessed through natural language? The quality of that knowledge is key to the success of generative AI.
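To make that distinction concrete, here is a minimal Python sketch of the pattern: raw data is first distilled into vetted knowledge, and only that knowledge is handed to the model together with a natural-language question. Everything here is illustrative – call_llm is a hypothetical stand-in for a real model endpoint, and the conflict-resolution rule is deliberately naive.

```python
# Illustrative sketch: feed curated knowledge, not raw data, to an LLM ("LKM").

RAW_DATA = [
    "2023-03-01,ACME,revenue,100",      # raw records can be wrong...
    "2023-03-01,ACME,revenue,1000000",  # ...or duplicated and conflicting
]

def extract_knowledge(rows):
    """Step 1: turn raw data into vetted knowledge (dedupe, resolve conflicts)."""
    facts = {}
    for row in rows:
        date, company, metric, value = row.split(",")
        facts[(date, company, metric)] = value  # naive rule: last value wins
    return [f"{c} {m} on {d}: {v}" for (d, c, m), v in facts.items()]

def call_llm(prompt):
    """Hypothetical stand-in for whichever model endpoint is actually used."""
    return f"[model response to a {len(prompt)}-character prompt]"

def answer(question, knowledge):
    """Step 2: the model only ever sees the curated knowledge, via the prompt."""
    prompt = "Answer using only these facts:\n" + "\n".join(knowledge)
    prompt += f"\n\nQ: {question}"
    return call_llm(prompt)

print(answer("What was ACME's revenue in March 2023?",
             extract_knowledge(RAW_DATA)))
```

The design point is the one made above: the model’s answers can only be as good as the knowledge extracted from the data and fed to it.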

And so, Microsoft poked Google

In what may well become one of the technology industry’s biggest strategy moves, Microsoft managed to make Google afraid of its own innovation. Lobbying legislators in North America and Europe with a small army of ethics and legal experts, Microsoft did something that is rare in the technology industry – it stalled a market leader through legislative, statutory and PR pressure. Suddenly, Google was afraid of its own progress.

Google was so afraid of its own progress that it let go of one of its AI engineers when he stated that a Google AI had become sentient.

Fast forward to the spring of 2023, and Alphabet CEO Sundar Pichai shares freely that somehow, on their own, Google’s AI algorithms added a language, Bengali, to its translation fold without ever being asked to. Whether that is sentience, I’ll leave up to you to decide.

For enterprises, these strategies are interesting to understand, but they do not matter too much. What matters is that there is now tremendous AI competition, and competition means innovation and lower costs – both critical for enterprises that need every piece of automation they can get.

Handicapping the key AI cloud vendors

In last year’s article I used three criteria to rate the cloud vendors. This year, reflecting the importance of generative AI, I’m adding a fourth – the ability to create LLMs on an AI supercomputer. This ability matters because a supercomputer on which to build LLMs creates gravity for all other cloud services, which have to move closer to its orbit.

For cloud vendors to be successful with AI, their offerings need the following key technical capabilities:

1. Data capability supporting the AI strategy.

2. Efficient compute hardware to train and operate AI models.

3. An AI platform allowing efficient creation of AI applications.

4. The ability to create LLMs on “AI supercomputers” (new!)

Now, let’s look at the top five cloud players in the race for the AI prize (in alphabetical order):

Amazon is playing with fire

AI remains the only area where AWS has fumbled across the whole portfolio; indeed, in a recent interview with an AWS exec, ERP Today was told that questions about ChatGPT weren’t allowed. And while AWS has reacted by building its own custom chips, it is only in the second phase of that journey. What we see is a lot of unifying activity with Amazon Bedrock and AWS AppFabric, and both offerings have struck a chord. But AWS has still not unveiled a supercomputer to compete with Google, Microsoft or even Oracle.

The crux may be that Amazon as a retailer does not need one, and the CAPEX needed to build an AI supercomputer (north of $100m) may not be in Amazon’s or AWS’s investment plans. The risk is that the gravity of automation may shift to other cloud vendors. In the end, AWS’s number one position in cloud may become embattled – even lost.

Google took a jab, but is back on its feet, swinging

As stated earlier, Google was at one point afraid of its own progress and, to some extent, held to ransom by its own employees (see the voluntary opt-out of the U.S. Department of Defense JEDI project). All that is history now: it’s all hands on deck, and Google is back and largely ahead on the topic of generative AI. While Microsoft is working hard to bring its version of ChatGPT up to speed for English-language data (fair enough, the largest body of data out there), Google is building out 20 languages, and just casually mentioned the general availability of Japanese and Korean at Google I/O in May.

Google has also caught up on infusing generative AI across its suite; its AI Builder product is easy to use and well adopted in enterprises. Generative AI may be the inflection point that propels Google Cloud beyond the number three position in cloud overall.

Microsoft shows remarkable vendor-wide discipline to embed AI everywhere

A year ago, I wrote that the lack of custom silicon made all AI at Microsoft questionable from a commercial and architectural elasticity perspective. We now know the answer: Microsoft has folded both its internal hardware and software plans, relying on Nvidia and OpenAI instead. Being late does not come cheap – a $10bn+ investment in OpenAI, a $100m+ Nvidia-powered supercomputer – and many more of these are needed.

But as said before, Microsoft deserves praise for turbocharging AI in 2023, and clearly Satya Nadella enjoyed poking Google in the eye (as did the press). For a technology vendor that used to operate in fiefdoms, the adoption of generative AI across Microsoft has been more than impressive. Never before have Microsoft’s product capabilities been aligned around a single technology, and it has worked out very well indeed.

Nvidia went from wild card to Switzerland

In the last 12 months Nvidia has been able to put its chips into every public cloud of any relevance, most prominently all the vendors under our focus. Those clouds may have been late to build custom silicon (AWS, for example) or may not have listened to customers (Google Cloud) – but all the large clouds covered here are using Nvidia chips, making the vendor a stock market darling.
The core of Nvidia’s value proposition, and its attraction, is that its AI models are portable – not only across the clouds but also back on premises. That said, this holds only if one can afford the necessary hardware: the entry-level DGX platform sells for more than $40k. And if anything matters in how an industry insider decides which cloud to run its own generative AI on, consider that Nvidia uses Oracle Cloud to run its artificial intelligence.
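To illustrate what that portability looks like in practice, here is a minimal PyTorch sketch (a generic, illustrative example, not Nvidia’s own stack): the same script runs unchanged on a DGX box on premises or on a GPU instance in any of the clouds above, because CUDA abstracts away the underlying host, and checkpoints are plain files that move with you.

```python
import torch
import torch.nn as nn

# The same code runs unchanged on-prem or in any cloud with Nvidia GPUs;
# CUDA abstracts the host, which is what makes the workload portable.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1)).to(device)
batch = torch.randn(32, 128, device=device)
print(model(batch).shape, "computed on", device)

# Checkpoints are plain files: a model trained in one cloud can be
# reloaded on premises (or vice versa) without modification.
torch.save(model.state_dict(), "model.pt")
model.load_state_dict(torch.load("model.pt", map_location=device))
```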

Oracle is in the game – turns out DB clouds are also good AI clouds

Oracle has focused on making its cloud work well for its most organic workload – its database. It turns out that when you build a great cloud for databases, you also build a great cloud for AI. Fine-tuning a cloud architecture to keep the expensive chips crunching data, through a fast network connected to even faster storage, is a common design point. And so is being able to configure compute and storage pods that are connected by intelligent, fast networking.
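A back-of-envelope sketch makes that design point tangible. All the numbers below are illustrative assumptions, not vendor specifications; the takeaway is simply that whichever of storage or network is slowest caps how busy the expensive chips can be.

```python
# Back-of-envelope: GPU utilization is capped by the slowest link feeding data.
# Every number here is an illustrative assumption, not a vendor spec.
gpus_per_pod = 8
ingest_per_gpu_gbps = 5.0   # assumed data each GPU can consume (GB/s)
storage_gbps = 25.0         # assumed aggregate storage throughput (GB/s)
network_gbps = 40.0         # assumed pod network bandwidth (GB/s)

demand = gpus_per_pod * ingest_per_gpu_gbps      # what the GPUs want
deliverable = min(storage_gbps, network_gbps)    # what the pipeline can deliver
utilization = min(1.0, deliverable / demand)

print(f"GPUs want {demand:.0f} GB/s, the pipeline delivers {deliverable:.0f} GB/s")
print(f"The expensive chips stay busy at most {utilization:.0%} of the time")
```

Under these assumed numbers, storage is the bottleneck – exactly why a fast network connected to even faster storage is the design point that matters.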

All that makes Oracle Cloud a great cloud for AI – all it needs are chips, and Oracle is buying them for “billions”. A look at Oracle’s Q4 shows the company investing more than 50 percent of its available cash flow in CAPEX, showing how expensive the move to AI is – and how much Oracle’s leadership believes in the power of AI to transform the cloud space.

The takeaways

CxOs must understand the core differences between the major cloud and AI platform players, factor in the organizational DNA of each vendor, and chart the trajectory of their AI offerings. Using this as a lens for comparison:

Google is keeping its AI crown, despite the good scare it got from Microsoft.

Oracle enjoys the benefits of its cloud architecture for AI workloads and will do well.

Microsoft needs to build on its early success – and it has big shoes to fill.

AWS needs to make careful investment decisions, but also needs one (or more) AI supercomputers in the second half of 2023.

Nvidia is the portability choice for AI workloads.

All vendors bring distinctive value propositions to enterprises – choose wisely or find yourself obsolete in the age of AI!