Application Programming Interfaces (APIs) are now ubiquitous and act as the glue between applications, determining the data that can be taken from or modified within those applications and the process for doing so.
They make it far easier for organizations to access and use applications and infrastructure, allowing the business to cost effectively and quickly roll out services. But the downside of this is that these APIs need to be managed, which is no easy task if you have deployed thousands of them.
Consequently, it’s easy for APIs to be used and forgotten, to be replaced but remain active (so-called “zombie” APIs), or to be in use while undocumented and untracked (known as “shadow” APIs). These APIs fly under the radar of both operations and security teams, so they aren’t monitored, audited or supported, and they become more exposed over time.
These “haunting” APIs can exist for any number of reasons. Perhaps the business was acquired and the API inventory lost, leaving the larger company to inherit the problem. Or perhaps poor governance processes were to blame, or the developer responsible left.
An invisible problem
Whatever the reason, there is now a plethora of shadow APIs out there, and most businesses significantly underestimate their number. The recently published API Protection Report 2H 2022 found that 68 percent of the organizations surveyed had shadow APIs they didn’t know about, and this represents a significant problem: these APIs are prime targets for attackers, who can hunt for, discover and exploit them at their leisure without detection.
In the last six months of 2022, approximately 45 billion search attempts were made for shadow APIs out of a total of one trillion transactions, according to the report. This is up from 5 billion in the first half of the year, a roughly nine-fold increase in malicious activity. The reconnaissance activity was identified by tracking the number of unique APIs accessed over time, a figure that should only fluctuate in line with the introduction or removal of API services.
The primary form of reconnaissance involved the fuzzing of API hostnames, HTTP header values and Uniform Resource Identifier (URI) paths using simple techniques such as enumeration or naming variations. This activity seeks to identify shadow APIs by mimicking typical production traffic, sending various test inputs and requests to candidate endpoints. A response reveals that the API is active, and the same fuzzing can also be used to reveal possible ways in which the API could be compromised.
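To make the technique concrete, the sketch below shows how an attacker (or a defender performing the same discovery) might enumerate candidate URI paths by combining version prefixes with naming variations. The wordlists and the `/api/` prefix are illustrative assumptions; real fuzzers draw on far larger dictionaries.

```python
from itertools import product

# Hypothetical wordlists for illustration; a real fuzzing run would use
# much larger dictionaries of resource names and version strings.
RESOURCES = ["user", "users", "account", "accounts"]
VERSIONS = ["v1", "v2", "v3", "internal"]

def candidate_paths(resources=RESOURCES, versions=VERSIONS):
    """Enumerate URI paths by combining version prefixes with naming variations."""
    return [f"/api/{version}/{resource}"
            for version, resource in product(versions, resources)]

# Each candidate path would then be requested against the target host;
# any path that answers (rather than failing to resolve) is live.
for path in candidate_paths():
    print(path)
```

Probing each candidate is then a matter of issuing an HTTP request per path and noting which ones respond, which is exactly why unused endpoints that still answer are so easy to find.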
Detecting shadow APIs
In addition, the research team were also able to search for shadow APIs themselves by mapping the status codes of unique APIs that were not part of existing definitions. If they saw a spike in successful HTTP responses, they alerted the owner of the application to the possibility of a shadow API (an API returns a status code in the HTTP 200–299 range when a request succeeds, versus HTTP 400–499 when something is wrong). For example, on one occasion they alerted a large retail customer to a surge in reconnaissance traffic on a login application that they felt indicated the presence of a shadow API. Their suspicions proved correct and the API was taken down.
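The same status-code mapping can be sketched as a small function: given the status codes observed per endpoint and a set of documented endpoints, flag anything that answers successfully (2xx) but is absent from the inventory. The data shapes here are assumptions for illustration, not the research team’s actual tooling.

```python
def find_shadow_candidates(observed, inventory):
    """Flag endpoints that return 2xx responses but are not in the documented inventory.

    observed:  dict mapping endpoint path -> list of HTTP status codes seen at runtime
    inventory: set of documented endpoint paths
    Returns (path, success_count) pairs, most active first.
    """
    shadows = []
    for path, codes in observed.items():
        if path in inventory:
            continue  # documented endpoint, not a shadow candidate
        successes = sum(1 for code in codes if 200 <= code <= 299)
        if successes:
            shadows.append((path, successes))
    return sorted(shadows, key=lambda pair: -pair[1])
```

An undocumented path that only ever returns 404 is just noise from fuzzing; one that returns 200s is a live, unmanaged endpoint worth investigating.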
In fact, zombie and shadow APIs are now such a problem that they’ve made it into the new OWASP Top Ten API Security Risks, which was released in June. Number nine on the list (API9:2023) refers to Improper Inventory Management. This highlights the need for a proper inventory of both hosts and deployed API versions to mitigate issues such as deprecated API versions; in other words, outdated or flawed APIs that persist on the network.
Discovery and inventory
Discovery is key to detecting existing shadow APIs. The network should be scanned to capture the entire API footprint, both public-facing and internal, discovering not just APIs that conform to specification but also hidden APIs, shadow APIs, deprecated APIs and unvetted third-party APIs. This comprehensive map should be updated continuously and recorded in a runtime inventory.
Many businesses find it challenging to gain visibility of their APIs but this needn’t be the case. Taking both an outside-in and inside-out perspective can provide the business with an attacker’s eye view and enable it to understand how data is moving through all the APIs. The external sweep should discover and categorize publicly accessible API endpoints, find their hosting providers and deliver remediation notifications and reports. The internal sweep should look for non-public as well as public APIs. The security team can then use all this information to create a 360 view and determine the current risk exposure, as well as what corrective actions to take.
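One simple way to turn that 360 view into corrective actions is to diff the endpoints observed on the network (from both sweeps) against the documented specification. The sketch below assumes both sides are available as plain path lists; the labels are illustrative.

```python
def inventory_gap(spec_paths, discovered_paths):
    """Compare documented API paths against endpoints actually observed on the network.

    Paths seen but not documented are shadow-API candidates; paths documented
    but never observed may be retired or deprecated and worth reviewing.
    """
    documented = set(spec_paths)
    observed = set(discovered_paths)
    return {
        "shadow_candidates": sorted(observed - documented),
        "unobserved_documented": sorted(documented - observed),
    }
```

Run continuously, this diff is what keeps the runtime inventory honest: a new entry in `shadow_candidates` means either the spec needs updating or an unmanaged endpoint has appeared.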
Once the API estate is known, continuous risk analysis can be performed to uncover and remediate sensitive data exposure, authentication or specification non-conformance and related coding errors for production and non-production APIs. Should you be unfortunate enough to find that an attack is in progress, it can be detected using runtime attack detection, which uses behavioral fingerprinting to identify the specific threat. Countermeasures can then be deployed, such as real-time blocking or deception.
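The sensitive-data portion of that risk analysis can be approximated with pattern matching over API response bodies. The patterns below are deliberately simple assumptions for illustration; production scanners use much broader and more precise rule sets.

```python
import re

# Hypothetical detection rules for illustration only.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_payload(payload: str):
    """Return the sorted list of sensitive-data categories found in a response body."""
    return sorted(name for name, pattern in PATTERNS.items()
                  if pattern.search(payload))
```

Applying a scan like this to every API in the inventory, including non-production ones, surfaces endpoints that leak data they were never meant to expose.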
API discovery is the most critical aspect of a strong API security posture, but it’s often neglected because organizations resort to repurposing web application defences such as Web Application Firewalls, or use API gateways to manage their APIs. The latter are designed to aggregate and manage APIs by controlling access and providing some basic security functions (e.g., rate limiting, IP block lists) but are unable to proactively discover APIs.
Discovery and continuous runtime analysis are therefore vital in detecting shadow APIs and doing so provides the kind of visibility needed to ensure they don’t reappear.