Cloud vendors can’t ditch on-prem: It’s a love/hate relationship


Is it a good match? The history and current state of cloud vendors’ difficult relationship with on-premises solutions.


The public cloud is the future of enterprise automation, even more so in 2023 – the year of generative AI – as AI’s insatiable demand for compute can only be met by the cloud. So why do public cloud vendors still care about, and offer, ways to run workloads on-premises in 2023? Let’s investigate this interesting market development.


A short history of cloud vendor on-premise platforms

We live in a hybrid IT world, where enterprises run part of their automation in the public cloud while part still resides on-premises. For a long time, cloud vendors ignored the need for on-premises support, as this was the “old” way of doing IT. Microsoft changed that approach with Azure Stack in 2015, largely under pressure from customers. Other players ignored the need for on-premises support for much longer, with market leader AWS notably questioning the overall need for such an offering.

That all changed when Thomas Kurian took over at Google Cloud and announced Google Anthos. Anthos raised the stakes for cloud vendors’ on-premises offerings, as Google – the forever creative number three in the public cloud – not only offered on-premises support but also made Anthos available on Azure and AWS (Anthos originally launched in 2018, as GKE On-Prem). AWS could not hold out much longer and announced AWS Outposts at re:Invent in 2018.

Like Microsoft – and likewise based on customer requests – Oracle offered Cloud@Customer in 2016, but it did not have a viable public cloud option back then; that has of course changed today with Oracle Cloud Infrastructure (OCI). IBM announced its on-premises offering, IBM Cloud Satellite, in 2020 (its precursor being IBM Cloud Private, launched in 2016). We call all of these next-generation computing platforms, as they allow enterprises to run the same workloads in the public cloud and on-premises, something that was previously not possible or available.


So why run cloud stacks on-premises?

For the enterprise: It comes down to three key drivers behind enterprises’ demand to run workloads on-premises. First, performance: for performance-critical processes that connect to local equipment and on-premises systems (think manufacturing, IoT etc.), cloud latency may not cut it for a real-world use case, and network speeds to out-of-country cloud data centers may also be too slow.

Second, data residency is crucial: legal requirements force enterprises to keep data within national borders. If no local cloud data center is available, then in-house operation of workloads is the only way for an enterprise to stay compliant. Lastly, skepticism lingers around the public cloud: some CxOs still do not trust cloud operations and want to retain physical access to, and ownership of, their computing environments.

On-premises workloads can practically be the “piggy bank” of any cloud vendor.

For the cloud vendors: Those arguments resonate with cloud vendors, but they also have their own set of drivers for offering their cloud stacks on-premises. Firstly, it gives vendors early insight into on-premises workloads. In that sense, on-premises workloads can practically be the “piggy bank” of any cloud vendor, as they can be converted into future cloud revenue and, with that, scale.

Secondly, vendors can reuse their research and development efforts. The ability to make a tech stack available on-premises is critical for its viability there. But tech stacks are complex and hard to create, operate and maintain, and for roughly the last 20 years no new application and technology stacks have been created to run on-premises. No Platform-as-a-Service stack, for instance, has been built for on-premises during this time.

Lastly, and vitally, it lets vendors increase lock-in. With on-premises offerings, cloud vendors not only gain insight into workloads, they can also lock in an enterprise early. That holds on both the architectural and the commercial side, as spend on the on-premises contract typically counts toward cloud credits.

So, as we have seen, there is substantial goal congruency between enterprises and cloud vendors, leading to pretty much all cloud vendors offering on-premises availability: the next-generation computing offerings.


Identicality rules it all

For the next-generation computing platform vendors to deliver value, they need to allow workload portability between on-premises and the public cloud. Special kudos go to those whose workloads can also be moved to other vendors’ public clouds – or to a vendor like IBM, which no longer plays in the public cloud market itself. CxOs want to avoid lock-in and to run their workloads wherever it is commercially, architecturally and legally feasible, as well as most advantageous, for their enterprise.

Identicality can be quite a challenge for the cloud vendors, as they need to run the same APIs and technology stack on-premises as they do in the cloud. Unfortunately, cloud stacks were designed to run in the cloud – not on-premises – which makes providing them on-premises a challenge.

It took Microsoft – the instigator – the better part of three years to bring Azure Stack to general availability. The notable exception on identicality is Oracle: by a quirk of cloud history, Oracle had not yet delivered a competitive public cloud offering when Cloud@Customer shipped, so Oracle Exadata and Cloud@Customer became the cornerstone of OCI. Practically, this means Oracle went from an on-premises system to the public cloud, in contrast to all other vendors, who brought pieces of their public cloud technology stacks to on-premises.

Next to identicality, CxOs want to see a single pane of glass that allows them to monitor their workloads, no matter where they run. The ability to monitor, operate and manage workloads in a hybrid and multi-cloud setting is key for the success of enterprises today.


Checking out their current profiles

So where are these big tech names sitting today with their cloud offerings?

Table showing the next-generation computing platform vendors, side by side.

AWS Outposts is standing still, with AWS pushing beyond. Today the AWS Outposts offering is split between AWS Outposts rack and AWS Outposts servers. The former is a full rack installed by AWS, with the enterprise providing power and networking. The latter is a server shipped by AWS and installed by the customer or a third party.

AWS Outposts rack has more local services available than AWS Outposts servers and, therefore, offers higher identicality. In recent years, AWS has also pushed its AWS Local Zones, investing in more regional data centers with a smaller footprint than its large regions. That has addressed the needs of a not insubstantial number of AWS customers who would otherwise have required an AWS Outposts footprint. With AWS re:Invent looming at the end of 2023, it will be interesting to see whether AWS keeps pushing Local Zones – or adds services to AWS Outposts rack and servers.

Google keeps pushing but looks to be taking a break for now. Google’s attempts to improve on its number three position have been an ongoing thorn in the side of AWS and Microsoft, and so was Anthos when announced, with its ability to port modern, container-based applications both to on-premises and to its key competitors’ clouds. It prompted AWS to launch Outposts, so kudos to Google Cloud for pushing the market leader. More recently, possibly due to the industry-wide focus on AI, Google has not expanded Anthos since fall 2022. Meanwhile, CxOs have welcomed the additional ability to run Anthos on VMware, a platform trusted in the enterprise for over a decade. Anthos Service Mesh is the architectural offering that lets Google Anthos customers operate in the multi-cloud.

IBM Cloud Satellite is growing, and enterprise tech’s Switzerland gets bigger. With IBM having shed its own public cloud ambitions while partnering with all major cloud vendors, it is easy for IBM to be “Switzerland” – the neutral place to do business with all clouds. IBM Cloud Satellite, which leverages Red Hat OpenShift under the covers, is key to enabling this (and to recouping some of the astronomical Red Hat acquisition price). IBM is continuously expanding the Satellite footprint: the IBM Cloud Pak offerings, IBM Cloud Databases and more are all available, and IBM’s Cloud Catalog is becoming a popular place for enterprises to evaluate and procure third-party applications.

Microsoft Azure Arc remains popular but has not advanced much lately. Microsoft has steadily built out Azure Arc (the evolution of its Azure Stack efforts) and added services to it, with a special push on application development and (cyber)security that resonates well with its install base. Like Google, Microsoft is in the midst of an AI-driven overhaul of its offerings, which likely has given Azure Arc some pause. As the instigator of the current generative AI movement, Microsoft may well be the first vendor here to offer an Azure Arc-based generative AI runtime. The future will tell.

Oracle leads on identicality and is becoming a multi-dimensional player. Oracle is on a remarkable journey from on-premises to the public cloud and now to multi-cloud. The latest addition for multi-cloud support was unveiled at this year’s CloudWorld conference: native provisioning of the Oracle Database (and, with that, Oracle Exadata) inside Azure. As it is always the same underlying Exadata platform, the capabilities offered are identical across platforms, creating the highest identicality in the industry. With the industry fully invested in AI, it will be equally interesting to see how Oracle adds AI model execution to its platforms.


It’s complex, but it’s good news

The enterprise IT landscape gets more complex by the day. Workload proliferation across cloud infrastructures is already a headache. Generative AI will not make the picture easier, as new training and execution platforms emerge, along with specialized hardware platforms for AI.

Now, AI model training and execution make the case for next-generation computing platforms: enterprises need the public cloud not only to train AI models but to execute them across their automation deployments – on other clouds, on-premises, and even at the edge. The final thought from me: the cloud is winning, but it is coming to on-premises. That’s good news – CxOs have a choice.