The cloud contained: key Kubernetes cornerstones

KubeCon + CloudNativeCon Europe 2023

Key Takeaways

Kubernetes offers flexibility and power for managing cloud-native applications, but this flexibility can lead to increased complexity that organizations must navigate.

The evolution of Kubernetes is essential for enabling organizations to adopt modern application architectures, such as MACH principles, while also posing challenges for developers who may find it difficult to manage Kubernetes environments effectively.

There is a need for greater standardization and simplification of Kubernetes to make it more accessible to a broader range of users, similar to how VMware vCenter democratized virtualization, as the current complexity and lack of intuitive tools limit widespread adoption.

The ERP Today team has just wrapped up attendance at the Cloud Native Computing Foundation (CNCF)-hosted KubeCon + CloudNativeCon Europe 2023 in Amsterdam. Given the firehose of information we were exposed to, the question now is: what should technology practitioners be thinking about when it comes to Kubernetes, how does this cloud container platform evolve from here, and at what point can we enjoy a more mature level of standardization and progress in this space?

Also known as K8s, Kubernetes manages Linux or Windows (and potentially other) containers and microservice architectures across all types of cloud environments. 

Kubernetes enables software engineers to automatically deploy, scale, maintain, schedule and operate large numbers of application containers, workloads that would typically be spread across networked cloud clusters, nodes and other locations.
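To make that concrete, here is a minimal sketch of the pattern using the official Kubernetes Python client. It is an illustration under stated assumptions rather than anything demonstrated at the conference: the resource names, container image and replica counts are invented, and it assumes a cluster reachable via a local kubeconfig.

```python
# Minimal sketch: declare a Deployment and let Kubernetes schedule and scale it.
# Assumes the official "kubernetes" Python client and a reachable cluster;
# all names ("web", "nginx:1.25") and replica counts are purely illustrative.
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from ~/.kube/config
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # desired number of running container instances
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Scaling is declarative: patch the desired replica count and the scheduler
# places (or removes) pods across the cluster's nodes to match it.
apps.patch_namespaced_deployment_scale(
    name="web", namespace="default", body={"spec": {"replicas": 5}}
)
```

The point is the declarative model: engineers state the desired end state and Kubernetes continuously reconciles the cluster toward it, which is where both the power and the complexity discussed below come from.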

ERP Today spoke to five key cloud specialists to assess some of what Kubernetes means today.

PagerDuty

The flexibility offered by Kubernetes provides a powerful platform for organizations to meet the needs of many types of applications. This is the opinion of Mandi Walls in her role as developer advocate at PagerDuty, a company known for its SaaS-based incident response platform.

“Flexibility [in the case of Kubernetes… and for that matter elsewhere], however, breeds complexity and K8s environments are not immune from increasing complexity as environments encompass more of an organization’s workloads,” said Walls. “With so many K8s components having any number of options, depending on environments, vendors, and requirements, a layer of abstraction will help application developers and responders access the information they need during an incident.”

She suggests (drawing, somewhat obviously, on the PagerDuty playbook) that one-button low-code and no-code solutions give teams the tools to run diagnostics or apply fixes in complex environments, while allowing infrastructure and platform engineering teams to make strategic improvements ‘under the hood’ without negatively impacting incident response activities.

Splunk

Talking to Morgan McLean, director of product management at data analytics and observability company Splunk, some of the same sentiments surface. He says that Kubernetes has made the development and deployment of high-scale web services accessible to organizations that could never have achieved this just a few years ago.

“This has allowed thousands of startups, enterprise businesses, institutions, independent developers and others to launch new products, extend existing businesses and do tons of great things,” explains McLean. “However, as was the case with the popularization of cloud infrastructure, while these platforms have made creating large distributed systems easier, managing huge sets of intertwined services and keeping them healthy, fast and easily modified is a huge challenge and is critical to the success of any remotely complex set of applications.”

Splunk claims to be ‘extremely attuned’ to these challenges, which is why it invests so heavily in OpenTelemetry and relies on it for capturing distributed traces and metrics from customers’ applications and infrastructure. It is also why the company has been working closely with the OpenTelemetry community to add logs as a third signal type to the project, and why it has just made this the default log processor in the Splunk OpenTelemetry Collector for Kubernetes (SOCK).

“This means that the default mechanism for Kubernetes data collection for all Splunk products relies on OpenTelemetry’s native log, metric, and tracing capabilities and benefits from excellent performance, Collector processing, common semantics and protocol, along with all of the community benefits of OpenTelemetry,” said McLean. “This data, when analysed in Splunk Enterprise, Enterprise Cloud, or Observability Cloud, allows our customers to improve the reliability, performance and customer experience of their Kubernetes services and improves their developer productivity and feature velocity.”
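For readers less familiar with what ‘native log, metric, and tracing capabilities’ look like in code, the sketch below uses the open source OpenTelemetry Python SDK to emit a single trace span to the console. It is a generic illustration of OpenTelemetry instrumentation, not a Splunk or SOCK configuration, and the service and span names are invented for the example.

```python
# Generic OpenTelemetry tracing sketch (Python SDK); shown only to illustrate
# the kind of signal a Collector such as SOCK can ingest. Names are invented.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Register a tracer provider that prints finished spans to stdout.
provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.items", 3)  # attributes follow common semantics
    # application work would run here; the span records its duration
```

In a real deployment the console exporter would typically be swapped for an OTLP exporter pointing at a Collector, which then forwards the data to whichever analysis backend is in use.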

CloudBolt

Rick Kilcoyne, CTO at hybrid cloud management platform company CloudBolt, is upbeat about K8s and says that Kubernetes is a wonderful platform for building and deploying services. But there is an if, and it is this: IF you know Kubernetes.

“What about the vast majority of developers, end-users and consumers of IT workloads that do not know all the ins and outs of Kubernetes? To many it can be a daunting platform full of yak-shaving, gotchas and cognitive load. No amount of ChatGPT prompts will get them closer to getting their workload deployed and getting back to work,” said Kilcoyne.

Because the ‘vCenter for Kubernetes’ (a Kubernetes equivalent of VMware’s centralized management console) still doesn’t exist, Kilcoyne proposes that we need to think more broadly.

“VMware vCenter revolutionized the datacenter by taking virtualization mainstream and democratizing it for all. We at CloudBolt feel there’s a big opportunity to empower more users to be able to deploy K8s-based workloads with minimal fuss and cognitive load by encapsulating this toil into an easy button for deploying workloads that leverage the power, speed, and flexibility of Kubernetes,” he added.

Vultr

Kevin Cochrane, CMO of managed cloud services company Vultr, explains that Kubernetes has played a pivotal role in revolutionizing cloud-native architecture since its open source release in 2014.

“By going open source, it has enabled start-ups and enterprises to adopt MACH (microservices, API-first, cloud-native, headless) principles and move away from monolithic application architectures, resulting in composable and pre-packaged business applications,” detailed Cochrane in a press briefing.

Now, he says, with the advancement of GPU infrastructure and AI frameworks, Kubernetes is taking innovation to the next level by driving a second wave of transformation in MLOps and containerized machine learning models on composable cloud infrastructure. 

“From its roots in web application development, Kubernetes has now evolved into a cutting-edge technology capable of running and scaling new machine learning models across globally distributed clusters of GPUs,” added Cochrane.

Cycloid

Wrapping us up, Benjamin Brial, founder of hybrid cloud DevOps company Cycloid, thinks that the investment that has gone into Kubernetes is not matched by its overall adoption.

“The central challenge is that it’s not an easy platform to use, which means that even though it is trendy and fashionable to talk about Kubernetes, we still don’t have the talent and experts to manage it… and even then, the majority of apps at the enterprise level are simply not Kubernetes ready. There is still too much complexity, and too many tools to make sure that what you are running is production ready, secure and scalable. For me, the evolution of Kubernetes is currently following a similar pattern to OpenStack. There is clearly a need for it, but will it replace everything? Clearly not,” stated Brial.

The road ahead for Kubernetes appears to be paved with the industry’s and the Cloud Native Computing Foundation’s efforts to stabilize, simplify and standardize this cloud container orchestration technology so that, one day, we can start to think about Kubernetes as a contained (no pun intended) utility element of computing.

It will happen, just contain yourself.