
API Security: Best Practices for API Activity Data Acquisition

Source: Akamai Official Blog


API data collection is the foundation for API security. Data collection has two core purposes: API discovery and API traffic analysis.

  • API discovery is crucial for any effective API security program. More important, continuous API discovery is needed to find rogue and zombie APIs.

  • Analyzing API traffic for vulnerabilities and threats, such as those cataloged in the OWASP Top 10 API Security Risks, is equally crucial. As with any network security endeavor, the security product needs to see the right data to analyze it for threats.

4 recommendations for effective API data collection

Our four recommendations for effective API data collection are:

  1. Start with the broadest possible vantage point

  2. Add depth to your primary API monitoring stack

  3. Sanitize your API activity data

  4. Take the first step to better API discovery and advanced threat detection 

Start with the broadest possible vantage point


Visibility breadth is essential when designing your API security approach. If you have a generally standardized API deployment and management process, you will likely focus your API data collection on that standard process. Detecting the unexpected, however, is arguably a more critical element of your API security strategy.


After all, rogue APIs implemented outside of your sanctioned process — and forgotten shadow APIs created on legacy API stacks — likely pose more significant risks than new APIs on your primary stack. Make sure that your API data collection mechanisms enable you to find your APIs with continuous discovery. 


This is an area in which traffic monitoring and log collection have an advantage over host-based sensors. Mirroring traffic from key points in the network and collecting logs (for example, from API gateways) provide, by definition, broad coverage that is more likely to capture unsanctioned API activity, as long as your API security system is architected to discover it. In contrast, most types of host-based sensors, especially code instrumentation, will only see activity for the specific API hosts equipped with sensors.
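
To make this concrete, here is a minimal discovery sketch in Python. It assumes JSON-formatted gateway access logs with host, method, and path fields (these names are assumptions to adapt to your gateway's schema) and a hypothetical list of sanctioned hosts; any host observed in traffic but missing from that list is a candidate rogue or shadow API.

# Minimal discovery sketch (assumptions: JSON-formatted gateway access logs with
# "host", "method", and "path" fields, plus an illustrative sanctioned-host list).
import json
from collections import defaultdict

SANCTIONED_HOSTS = {"api.example.com", "payments.example.com"}  # hypothetical inventory

def discover_endpoints(log_lines):
    """Build an inventory of {host: {"METHOD /path", ...}} from gateway log lines."""
    inventory = defaultdict(set)
    for line in log_lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed lines rather than halting discovery
        host, method, path = record.get("host"), record.get("method"), record.get("path")
        if host and method and path:
            inventory[host].add(f"{method} {path}")
    return inventory

def flag_unsanctioned(inventory):
    """Hosts seen in traffic but absent from the sanctioned list are rogue/shadow candidates."""
    return {host: eps for host, eps in inventory.items() if host not in SANCTIONED_HOSTS}

if __name__ == "__main__":
    with open("gateway_access.log") as f:  # hypothetical log file path
        observed = discover_endpoints(f)
    for host, endpoints in flag_unsanctioned(observed).items():
        print(f"Unsanctioned host {host}: {len(endpoints)} endpoint(s) observed")

A job like this only discovers what the log source can see, which is why the breadth of the underlying collection points matters so much.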


Even when taking advantage of the broader visibility that log and traffic monitoring provide, it’s still important to seek out and eliminate blind spots. Hybrid cloud and multicloud environments often have API traffic flowing through many different locations, each of which must be accessed separately. (Learn how to satisfy the compliance team that sensitive data in detailed logs is protected in the section on sanitizing data later in this post.)


Add depth to your primary API monitoring stack


Log and traffic data can come from many different sources. It is helpful from a coverage standpoint to collect this data from infrastructure sources, such as packet brokers, cloud platforms, API gateways, container and mesh orchestration tools, proxies, content delivery networks, web application firewalls, and centralized logging platforms. 


The coverage these platforms provide is excellent for baseline API discovery. But it’s also important to consider the depth and fidelity of data when designing your API data collection approach.


For example, suppose you are collecting activity data directly from your API gateway. Your gateway logs aren’t guaranteed to include sufficient detail to perform the most advanced types of behavioral analytics: even if all HTTP activity appears in the logs, request and response headers may have varying amounts of detail, and useful data like message payloads may be missing entirely, depending on the vendor.
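
One practical way to evaluate that fidelity is to audit a sample of your gateway’s log records for the fields your analytics actually need. The Python sketch below is a hypothetical check that assumes JSON-formatted records; the required field names are illustrative and should be mapped to your gateway’s actual schema.

# Hypothetical fidelity audit (assumptions: JSON-formatted log records; the field
# names below are illustrative, not a specific gateway's schema).
import json

REQUIRED_FIELDS = ["method", "path", "status", "request_headers",
                   "response_headers", "request_body", "response_body"]

def audit_log_fidelity(log_lines, sample_size=1000):
    """Measure how often each required field is present and non-empty in a log sample."""
    present = {field: 0 for field in REQUIRED_FIELDS}
    sampled = 0
    for line in log_lines:
        if sampled >= sample_size:
            break
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue
        sampled += 1
        for field in REQUIRED_FIELDS:
            if record.get(field):
                present[field] += 1
    return {field: (count / sampled if sampled else 0.0) for field, count in present.items()}

if __name__ == "__main__":
    with open("gateway_access.log") as f:  # hypothetical log file path
        coverage = audit_log_fidelity(f)
    for field, ratio in coverage.items():
        print(f"{field}: present in {ratio:.0%} of sampled records")

If fields like payloads or full headers rarely show up in the sample, that is a signal to supplement the gateway logs with richer sources such as traffic mirroring.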


For platforms like Akamai API Security that identify entities involved in API activity, piece together business context, and monitor for behavioral anomalies, additional data fidelity provided by API traffic mirroring technologies can be invaluable.


Sanitize your API activity data


Most of what we’ve covered so far is focused on collecting the broadest and deepest set of API activity data possible. Doing so ensures that your API discovery and security analyses are as comprehensive as possible and that you have the necessary foundation for the types of behavioral analytics that are needed to stay ahead of advanced API threats.


That level of detail comes with a caveat, however: granular API activity data will inevitably capture sensitive information such as user credentials, personal data, and intellectual property. So, before you allow a security vendor to analyze your API activity, it’s important to challenge them to demonstrate that they can sanitize the data before it leaves your on-premises or virtual private cloud environments.
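
As an illustration of what such sanitization might look like, the Python sketch below shows a hypothetical pre-export step that redacts sensitive headers and masks emails and card-like numbers in payloads before a captured record is forwarded for analysis. The field names and patterns are assumptions for illustration, not a description of any vendor’s implementation.

# Hypothetical sketch: redact sensitive values from a captured API activity record
# before it leaves the local environment. Patterns and field names are illustrative.
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # card-like digit runs
SENSITIVE_HEADERS = {"authorization", "cookie", "x-api-key"}

def redact_text(text):
    """Mask emails and card-like numbers in free-form payload text."""
    text = EMAIL_RE.sub("[REDACTED_EMAIL]", text)
    return CARD_RE.sub("[REDACTED_PAN]", text)

def sanitize_record(record):
    """Return a copy of an API activity record that is safe to export for analysis."""
    clean = dict(record)
    clean["request_headers"] = {
        k: ("[REDACTED]" if k.lower() in SENSITIVE_HEADERS else v)
        for k, v in record.get("request_headers", {}).items()
    }
    for field in ("request_body", "response_body"):
        if isinstance(record.get(field), str):
            clean[field] = redact_text(record[field])
    return clean

if __name__ == "__main__":
    sample = {
        "path": "/v1/users",
        "request_headers": {"Authorization": "Bearer abc123", "Accept": "application/json"},
        "request_body": '{"email": "jane@example.com", "card": "4111 1111 1111 1111"}',
    }
    print(sanitize_record(sample))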


Adopting this type of cloud security model will give you the confidence to allow your vendor’s sensors to send highly granular details to the cloud for analysis while protecting your sensitive user data and intellectual property.


Take the first step to better API discovery and advanced threat detection


Identifying the best API data collection techniques for your organization — and deciding how to best weigh the trade-offs between them — may seem daunting. But it doesn’t have to be. 


Our team at Akamai will guide you on this journey and ensure that your approach incorporates the latest best practices. Read about how we helped another Akamai customer take that first step.



