Detection: Part 3 of Multi-Cloud Security Architecture CTO Deep Dive

In part two of my blog series on the Multi-Cloud Security Architecture, we covered the key elements needed to prevent security incidents in multi-cloud environments. Unfortunately, in almost any organization it is nearly inevitable that, at some point, an attacker will successfully get in, which makes rapid and accurate detection essential to limiting the damage of a breach. Today, we will cover why detection is critical within a dynamic and adaptive security system.

In a multi-cloud security architecture, deep detection capabilities allow the system to recognize new threats and effect a response, either automatically or with operator assistance. Accurate detection of security events depends largely on two factors: the data itself, and the analysis applied to that data, whether by machine-learned algorithms or by human analysts.

Data

By consuming data from controls with deep processing insight (including application behavior, endpoint posture and behavior, signatures within communication flows, and user behavior) alongside other sources of metadata (such as threat feeds and application inventory metadata), it is possible to establish the accurate context needed for precise threat detection.

In addition to the context itself, it is imperative to have clean data. If you have to contend with duplicate records, modified logs, and the like, your ability to accurately detect deviations from statistical norms is greatly hampered. Therefore, as a foundational element of the MCSA, security controls sit directly next to workloads (not inside the assets themselves, where they can be “turned off” during a successful compromise) and capture requests exactly as they leave the workload, before they even leave the hypervisor. This is a major distinction from, and benefit over, network security solutions that must tap the network at selected locations and then try to divine the true nature of the traffic.
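To make the "clean data" point concrete, here is a minimal sketch (assuming JSON-style log records; the function names are hypothetical, not part of the MCSA) of deduplicating records by content hash before they feed any statistical baseline:

```python
import hashlib
import json

def record_fingerprint(record: dict) -> str:
    """Stable content hash of a log record, with key order normalized."""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def dedupe_records(records):
    """Drop exact duplicates so repeated events don't skew statistical norms."""
    seen = set()
    for record in records:
        fingerprint = record_fingerprint(record)
        if fingerprint in seen:
            continue  # duplicate: would inflate counts and distort baselines
        seen.add(fingerprint)
        yield record
```

Deduplication is only one piece of hygiene; catching modified records generally requires integrity protection (e.g. signed or append-only logs) at the point of collection, which is exactly what collecting beside the workload enables.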

Once data is collected, algorithms may then be applied to this rich data set to recognize deviations from expected or whitelisted behavior, driven by a combination of heuristic and mathematical functions from humans and machines.
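As a minimal sketch of the heuristic half of that combination, consider a whitelist of expected behavior expressed as (process, destination port) pairs; the whitelist contents here are made up for illustration:

```python
# Hypothetical whitelist of expected (process name, destination port) pairs.
EXPECTED_FLOWS = {
    ("nginx", 443),
    ("postgres", 5432),
}

def classify_flow(process: str, dst_port: int) -> str:
    """Flag any flow that falls outside the expected-behavior set."""
    if (process, dst_port) in EXPECTED_FLOWS:
        return "expected"
    return "deviation"  # candidate for alerting or deeper analysis

print(classify_flow("nginx", 443))   # expected
print(classify_flow("nginx", 6667))  # deviation: an unexpected outbound port
```

The statistical half of the combination is sketched in the next section.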

Analysis: Humans vs. Machines

On the scale of human vs. machine detection, you have signatures and heuristics on one side and statistical algorithms, machine learning, and the like on the other. The human-driven side of the scale (i.e. signatures and heuristics) is ideal when you know the exact conditions you're looking for, or when those conditions are required to execute an attack, for example, when an exact sequence of bytes must be transmitted for a specific vulnerability to be exploited. Machine-driven detection (algorithms, ML, etc.), by contrast, is useful when the conditions are more flexible, or when detection requires processing far more data than a human could reasonably wrap their head around, for example, determining the "normal" sequence and timing of commands exchanged by a client and server over a long time frame so you can monitor for deviations.
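As a hedged sketch of that machine-driven example (assuming inter-command intervals are roughly normally distributed, which real traffic often is not), you could baseline the timing of client-server commands and flag outliers by z-score:

```python
import statistics

def build_baseline(intervals):
    """Learn the typical timing between commands from historical data."""
    return statistics.mean(intervals), statistics.stdev(intervals)

def is_anomalous(interval, mean, stdev, threshold=3.0):
    """Flag intervals more than `threshold` standard deviations from normal."""
    if stdev == 0:
        return interval != mean
    return abs(interval - mean) / stdev > threshold

# Seconds between commands observed over a long training window (made up).
history = [1.0, 1.2, 0.9, 1.1, 1.0, 0.95, 1.05, 1.15]
mean, stdev = build_baseline(history)

print(is_anomalous(1.1, mean, stdev))   # False: within normal timing
print(is_anomalous(30.0, mean, stdev))  # True: a large deviation worth a look
```

A production system would model the command sequence as well as the timing (e.g. with Markov chains or sequence models), but the principle is the same: learn "normal," then alert on deviations.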

Returning to the data, both of these approaches to detection require breadth and depth of data to be as effective as possible. With limited breadth, the system has blind spots where you cannot match signatures or heuristics, and you have less data for training and monitoring from an algorithmic perspective. Depth is important because it allows you to deploy your detection capabilities across more layers of the stack, adding context (which translates to higher accuracy). Foundational controls of the MCSA account for broad collection of data across multiple clouds using flexible APIs, as well as deep data gathering from Layer 2 through Layer 7.

Lastly, when it comes to detection systems, customers generally want two things: 1) to be notified whenever any item on their known list of conditions occurs, and 2) to have the system simply surface things that are "bad" or "unexpected." This boils down to alerting on the knowns and on the unknowns, which maps back to the same human-machine scale. The "knowns" are generally handled with signatures and heuristics. The "unknowns" are generally too complex to detect manually (e.g. by running SQL queries or staring at a spreadsheet) and thus require algorithms. Security solutions that support multiple clouds must let customers generate their own alerts based on their own signatures and heuristics, and must allow that detection logic to be shared as appropriate with peer organizations (e.g. in similar industries, or among trusted partners). At the same time, the security system must also run dedicated algorithms to highlight potential issues that customers would find challenging to craft their own logic for (e.g. detecting DGA or fast-flux domains), as sketched below.
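To illustrate that last point, here is a deliberately simplified sketch of one common DGA heuristic: scoring the Shannon entropy of a domain's leading label, since algorithmically generated names tend to look more random than human-chosen ones. Real detectors combine many more features (n-gram frequencies, registration age, NXDOMAIN rates), and the threshold here is an illustrative assumption:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_dga(domain: str, threshold: float = 3.5) -> bool:
    """Crude heuristic: long, high-entropy labels resemble DGA output."""
    label = domain.split(".")[0]
    return len(label) >= 10 and shannon_entropy(label) > threshold

print(looks_like_dga("google.com"))              # False: short, low entropy
print(looks_like_dga("x9fj2kqpz7v4mwl8b3.com"))  # True: near-random label
```

This is exactly the kind of logic most customers cannot practically craft themselves, which is why a multi-cloud security platform should ship such algorithms alongside customer-defined signatures.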

What's Next?

The trick with detection is ensuring security systems create actionable events that can be integrated into an organization’s security processes. We will cover how to respond to these security events in multi-cloud environments in next week’s blog, a deep dive into the next MCSA component: Response. And to learn more about the pathway to multi-cloud security, watch our on-demand webinar.