Cloudy vision begone? How to clarify our view of SaaS in the name of security

Cloud computing is generally agreed to have been named after the swirling bubbles that software architects used to draw to show where pools of system resources sat in a network. While some suggest the term “cloud” was simply cooked up by sales strategists looking for a new label to put on managed service provision, it caught on fast and quickly became de rigueur in the global lexicon of information technology.

What the early cloud-focused engineers perhaps didn’t take into account, unless through some form of serendipitous prescience, is that the opaqueness of real-world clouds would carry over: the opacity of computing clouds is now precisely what much of the SaaS sector is working hardest to combat.

A murky mass of cloud instances is not good for business. Instead, we want cloud clarity, cloud visibility and cloud observability if we are to secure our virtualized compute and data services. The questions we have to ask are: why is this such a challenge, which aspects of cloud data center operations are so hard to see, and how do we clear the air?

“As organizations continue their cloud adoption journeys at pace, the challenge of observability grows in line with the scale and complexity of the new architectures, infrastructure and platform components that today’s applications demand,” explains Christian Reilly, EMEA field CTO at infrastructure automation specialist HashiCorp.

Reilly says that in “traditional” on-premises deployments, the IT team’s focus was primarily on monitoring a relatively simple collection of underlying technologies and the monolithic applications that ran on top. Monitoring in this scenario is worthy enough, but it is based on a set of pre-known conditions, or thresholds, that trigger alerts when they are reached or exceeded.
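To make that threshold model concrete, here is a minimal sketch in Python; the metric names and limits are hypothetical rather than drawn from any particular monitoring product:

```python
# Minimal sketch of classic threshold-based monitoring: a fixed set of
# pre-known conditions that raise an alert when breached.
THRESHOLDS = {
    "cpu_percent": 90.0,       # hypothetical limits
    "disk_used_percent": 85.0,
    "response_ms": 500.0,
}

def check_thresholds(sample: dict[str, float]) -> list[str]:
    """Return an alert message for every metric at or over its limit."""
    alerts = []
    for metric, limit in THRESHOLDS.items():
        value = sample.get(metric)
        if value is not None and value >= limit:
            alerts.append(f"ALERT: {metric}={value} breached limit {limit}")
    return alerts

print(check_thresholds({"cpu_percent": 97.2, "response_ms": 120.0}))
# ['ALERT: cpu_percent=97.2 breached limit 90.0']
```

The weakness Reilly points to is visible in the sketch itself: the system only ever reports on conditions someone thought to define in advance.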

Needle in a cloud-stack

“Cloud-based environments, in which applications or other services are deployed across multiple cloud providers, pose an entirely different challenge,” warns Reilly.

“Microservices, modular ephemeral infrastructures and dynamic components in modern cloud environments, often increasingly abstracted from the physical infrastructure on which they ultimately run, provide the archetypal ‘needle in a haystack’ conundrum when operations teams are faced with finding and fixing the sources of failures or performance issues.”

Microservices provide the archetypal “needle in a haystack” conundrum – Christian Reilly, HashiCorp

Pointing to the growth of DevOps and what many in the industry now call platform engineering, the HashiCorp team highlights just how fundamental cloud observability has become in the context of site reliability engineering (SRE). Reilly is firm that observability now goes beyond “basic” monitoring, saying it demands instrumenting every component within an application stack. He notes that this approach helps teams quickly discover patterns in modern distributed systems. Observability in this context gives deeper insights into what is happening and what may fail before the impact is felt by end users.
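As a rough illustration of what that instrumentation looks like in practice, here is a minimal sketch using the open-source OpenTelemetry SDK for Python; the service, span and attribute names are hypothetical:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Wire up a tracer that prints finished spans to the console; a real
# deployment would export to an observability backend instead.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name

def handle_checkout(order_id: str) -> None:
    # Each unit of work becomes a span, so a failure or slowdown can be
    # traced to the exact component that caused it.
    with tracer.start_as_current_span("handle_checkout") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("charge_payment"):
            ...  # call out to the payment microservice
        with tracer.start_as_current_span("reserve_inventory"):
            ...  # call out to the inventory microservice

handle_checkout("ord-42")
```

Instrumented this way, the “needle in a haystack” hunt becomes a query over spans rather than a trawl through disconnected logs.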

As we attempt to “see” into the opaqueness of cloud and understand the shape, structure and state of the SaaS services that we ourselves have built, we come back to the question of why we’re concerned about the inner state of cloud in the first place – and that’s a question of security.

At the heart of it all, the foundation for risk in any cloud deployment lies within the software application code itself. The quality and security of code, regardless of its position in the technology stack, help dictate the strength of an organization’s application security posture. By prioritizing robust and secure coding practices, organizations can fortify defenses and safeguard systems while helping reduce the risk of compromise. This is the opinion of Ben Denkers, chief services officer at Qwiet AI.

In the virtualized arena of the cloud, risks exist at various levels – Ben Denkers, Qwiet AI 

“In modern cloud environments – if not also elsewhere – application security initiatives must be comprehensive and wide-ranging, covering multiple layers of the system. In the virtualized arena of the cloud, risks exist at various levels, including APIs with the potential to expose business logic flaws, at the application and database layer with the specter of occurrences such as SQL [structured query language] injection attacks… and even the underlying infrastructure that an application leverages, all of which can be potentially exploited,” advises Denkers.
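Denkers’ SQL injection example is the classic case in point. A minimal sketch, using Python’s built-in sqlite3 module with hypothetical table data, shows how a concatenated query can be rewritten by hostile input, while a parameterized query treats the same input as plain data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # hostile input

# VULNERABLE: string concatenation lets the input rewrite the query,
# returning every row instead of none.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(rows)  # [('alice', 'admin')]

# SAFE: a parameterized query keeps the input as data, not SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # []
```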

Strategic tools

The Qwiet AI team say they have seen many different application security programs carried out over the years. The best of these typically encompass activities such as threat modeling, static code analysis, developer training, risk assessments and validating controls through penetration testing and other assessments.

“Ideally, organizations should address security issues during system planning stages, before developing or deploying the application to production environments. It is crucial to review every layer of the application to prevent misconfigurations, poor development practices and the usage of vulnerable libraries or other elements that could lead to compromises,” notes Denkers, reiterating some time-worn advice that still needs to be openly highlighted almost a quarter century into the new millennium.

We also know that Infrastructure as Code (IaC) plays a significant role in modern cloud environments. Denkers highlights the numerous tools available that enable the secure definition and deployment of infrastructure resources in line with IaC methodologies. However, he notes, validation of these toolset functions is always necessary, as organizations may mistakenly believe they are following best practices without actually doing so.
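The validation point is tool-agnostic. As a minimal sketch, assuming a hypothetical parsed resource format rather than any specific IaC tool’s schema, a pre-deployment check might look like this:

```python
# Minimal sketch of IaC validation: scan parsed resource definitions
# for misconfigurations before anything is deployed. The resource
# shape and rule set here are hypothetical.
RESOURCES = [
    {"name": "audit-logs", "type": "storage_bucket",
     "public_read": True, "encryption": None},
    {"name": "app-data", "type": "storage_bucket",
     "public_read": False, "encryption": "aes256"},
]

def validate(resources: list[dict]) -> list[str]:
    """Flag common misconfigurations in each resource definition."""
    findings = []
    for r in resources:
        if r.get("public_read"):
            findings.append(f"{r['name']}: publicly readable")
        if not r.get("encryption"):
            findings.append(f"{r['name']}: encryption at rest disabled")
    return findings

for finding in validate(RESOURCES):
    print("FAIL:", finding)  # FAIL: audit-logs: publicly readable ...
```

Running such checks on every change, rather than trusting that the tooling has been configured correctly, is exactly the audit habit Denkers advocates.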

“We find this lack of validation is the case more often than not,” he warns. “Regular audits and validation of security practices are essential for promptly identifying and addressing security gaps if a business wants to achieve true cloud visibility and observability.”

It seems clear that organizations need to adopt a comprehensive cloud observability security strategy that dovetails tightly with the overall security posture of the company. Only in this way can firms hope to get a view into the various layers and components of their applications, data and services stacks. But it is perhaps that essential middle tier – data – that may get overlooked in the wider rush to observe and monitor the inner columns of air spiraling up through any given cloud.

Data resilience: how to make sure it’s protected against internal malicious users and external bad actors – Subbiah Sundaram, HYCU

Subbiah Sundaram, SVP of products at hybrid cloud uptime company HYCU, concurs with this sentiment. He agrees that most organizations developing cloud applications spend a lot of time thinking about securing access to the application and making sure end-user security is done very, very well.

“A big part, which is sometimes overlooked, is around data resilience. But when we talk about data resilience in this context, we are talking about how to make sure it’s protected against internal malicious users, as well as external bad actors,” warns Sundaram.

Remember the olden cloud days?

But hang on a minute – in the era of new-age cloud, wasn’t everything supposed to be easier, service-based and essentially more controllable? Sundaram clarifies the challenge at hand by explaining that contemporary cloud applications make use of many different services available in the public cloud. But, he notes, making these modern applications visible, observable and resilient is not like the “olden cloud days”, when we would simply concern ourselves with protecting one (or a few) virtual machines.

“Today, organizations have to protect the configuration for each of their public cloud services and the data in each of these services. As we all know, each of the public cloud service providers offers hundreds of services, and there is no easy way to protect the breadth of these services and the total scope of data in a unified fashion,” says Sundaram.

That is the challenge many are trying to overcome in today’s modern public cloud IT environment. In the ongoing quest for clear and present vision into cloud status and security, many organizations are adopting a DevSecOps approach to Agile code development. This sees them build best-practice security into development early in the lifecycle, with stringent, orchestrated code security testing as part of the route to running a “live” change management process.
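As a minimal sketch of such an orchestrated gate, assuming a hypothetical scanner output format and severity policy, a pipeline step might refuse to promote a build while high-severity findings remain:

```python
# Minimal sketch of a DevSecOps pipeline gate: fail the build when a
# security scan reports findings above an agreed severity. The findings
# format and the blocking policy are hypothetical.
import sys

FINDINGS = [  # stand-in for parsed output from a code security scanner
    {"id": "CWE-89", "severity": "high", "file": "orders.py"},
    {"id": "CWE-798", "severity": "low", "file": "config.py"},
]

BLOCKING = {"critical", "high"}

blockers = [f for f in FINDINGS if f["severity"] in BLOCKING]
for f in blockers:
    print(f"BLOCKED by {f['id']} ({f['severity']}) in {f['file']}")

sys.exit(1 if blockers else 0)  # a non-zero exit halts the pipeline
```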

So, as “delicate” as cloud computing clouds clearly are, perhaps we should be starting with the users who interact with them, to examine who is driving which cloud service in which direction. Kyndryl UKI security and resiliency practice leader Duncan Bradley draws special attention to this aspect of network management and says it is a key element of cloud application development security that is often missed.

User access is often overlooked – Duncan Bradley, Kyndryl

“In the realm of operational cloud, user access – and especially privileged user access – is often overlooked,” clarifies Bradley. “This tends to be seen as another challenge or responsibility that should fall to the infrastructure team. Consequently, it is often not given consideration as part of the cloud application development lifecycle and the ongoing need to be able to look into cloud performance with clarity.

“Gaining a deeper understanding of how to restrict users’ access to data, especially to sensitive data, can have a dramatic effect in reducing an organization’s attack surface, especially if you can significantly reduce the sensitive data exposed to only those that need it by securing it outside the shared platforms.”
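The least-privilege idea Bradley describes can be sketched very simply; the roles and data classifications below are hypothetical:

```python
# Minimal sketch of least-privilege access to sensitive data: access is
# denied by default and granted only per (role, data class) pair.
GRANTS = {
    ("payroll_admin", "salary_records"),   # hypothetical grants
    ("support_agent", "ticket_history"),
}

def can_read(role: str, data_class: str) -> bool:
    """Deny by default; allow only explicitly granted combinations."""
    return (role, data_class) in GRANTS

print(can_read("payroll_admin", "salary_records"))  # True
print(can_read("support_agent", "salary_records"))  # False: not granted
```

The deny-by-default posture is the point: the attack surface shrinks because sensitive data is reachable only where a grant was deliberately made.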

Bradley recommends extending the notion of DevSecOps to DevSecRecOps, with the “Rec” element referring to data and system recoverability built into design lifecycles. Cyber breaches are now “when”, not “if”, events, and many attacks carry data corruption or encryption payloads that force mass data recovery, so being able to recover critical business systems from backup quickly is now seen as a high priority by many organizations.
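As a minimal sketch of that “Rec” principle, with hypothetical file paths and a simple checksum comparison standing in for a real restore pipeline, a recurring restore drill might look like this:

```python
# Minimal sketch of recoverability testing: treat restore as a tested,
# repeatable operation rather than an afterthought, verifying restored
# data against the checksum recorded at backup time.
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def take_backup(source: Path, backup: Path) -> str:
    """Copy the data and return the checksum to record in a catalog."""
    shutil.copy2(source, backup)
    return sha256(source)

def restore_drill(backup: Path, expected_sha256: str) -> bool:
    """Restore into scratch space and confirm the data came back intact."""
    with tempfile.TemporaryDirectory() as scratch:
        restored = Path(scratch) / backup.name
        shutil.copy2(backup, restored)  # stand-in for a real restore step
        return sha256(restored) == expected_sha256
```

A drill that never runs is the gap Bradley warns about: recovery only counts if it has been rehearsed within the business impact tolerance window.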

Business impact tolerance windows

“With various global industry regulators now mandating [that] critical business systems must be recovered within defined business impact tolerances, the ability to recover critical application platforms within business impact tolerance windows cannot be viewed solely as an infrastructure support team problem,” concludes Bradley.

Is cloud getting any clearer to look at, then? Given the ever-greater complexities of the cloud-native application landscape, the burgeoning market for information and data exchange technologies across the cloud, and the increasingly complex cloud topologies now being built as a result of componentized containerization, it almost feels like cloud is getting cloudier.

“As organizations have transitioned to the cloud, even mission-critical applications are often running across complex, hybrid technology stacks. Alongside this trend, the growth of Agile delivery practices and use of open-source code libraries is enabling teams to push updates to these applications faster.

There are more opportunities for vulnerable code to enter production – Andreas Kroier, Dynatrace

“Whilst these approaches accelerate innovation, they also heighten security risk. They create more opportunities for vulnerable code to enter production and make it more difficult to quickly identify indicators of compromise given the dynamic and highly distributed nature of modern applications,” says Andreas Kroier, senior principal and application security solutions lead at Dynatrace.

Kroier suggests that this has led to a growing need to converge observability and security, and to unlock the insights needed to detect code vulnerabilities continuously and enable rapid remediation. By combining observability and security, the suggestion here is that teams can access real-time insights into vulnerabilities and get a precise understanding of how they could compromise the application.

Many say that this is invaluable in terms of helping teams to eliminate time spent chasing false positives and avoid focusing on code vulnerabilities that pose a minimal risk – for example, those that are not exposed to public networks or could not be exploited to access sensitive data.
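A minimal sketch of that exposure-based prioritization, with entirely hypothetical findings and runtime context fields, might look like this:

```python
# Minimal sketch of converged observability and security: enrich static
# findings with runtime context and surface only those that are
# genuinely reachable. All fields here are hypothetical.
FINDINGS = [
    {"id": "VULN-001", "component": "payments-api",
     "internet_exposed": True,  "touches_sensitive_data": True},
    {"id": "VULN-002", "component": "batch-reports",
     "internet_exposed": False, "touches_sensitive_data": False},
]

def prioritize(findings: list[dict]) -> list[dict]:
    """Keep findings whose runtime context makes them genuinely risky."""
    return [f for f in findings
            if f["internet_exposed"] or f["touches_sensitive_data"]]

for f in prioritize(FINDINGS):
    print("INVESTIGATE:", f["id"], "in", f["component"])
```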

“Observability also helps improve security response efforts through the ability to baseline application performance,” emphasizes Kroier. “This enables teams to instantly identify workloads that deviate from normal behavior, helping to detect and block some of the most common threats, such as SQL and command injections. The ability to combine observability context with security events also provides another vital source of insights.”
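The baselining idea is equally easy to sketch. Assuming a hypothetical latency stream and a three-sigma tolerance, a workload can be flagged the moment it drifts outside its learned norm:

```python
# Minimal sketch of performance baselining: learn a mean and standard
# deviation from past samples, then flag values that deviate too far.
# The latency figures and the 3-sigma tolerance are hypothetical.
import statistics

baseline = [102, 98, 110, 105, 99, 101, 97, 104]  # past latencies (ms)
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def deviates(value: float, sigmas: float = 3.0) -> bool:
    """True when a sample falls outside mean +/- sigmas * stdev."""
    return abs(value - mean) > sigmas * stdev

for sample in (103, 96, 240):
    print(f"{sample} ms ->", "DEVIATION" if deviates(sample) else "normal")
```

A sudden flood of deviations in a workload that has otherwise behaved for months is often the first observable symptom of the injection-style attacks Kroier mentions.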

As cloud computing has become more powerful, it has also become more complex. Cloud observability has had to match that complexity, becoming more powerful itself in the face of ever more complex and powerful security concerns. And as the scope and breadth of data has grown vaster and more intrinsically connected, our cloud data services have had to keep pace with the wider tide now shaping the modern cloud.

Going forward, we need to consider the virtualized and abstracted forces now building the cloud ecosystem, and make sure each one is matched by an equal and opposite counterforce. If we can do that, we can bring the clouds down to earth, or at least keep them lashed down and under control. Still, best take a coat and a flashlight.