Organizations are entering the first phase of micro-segmentation architectures: deploying security policies across scale-out data center networks to separate diverse workloads, collapse legacy physical zones, and reduce attack surfaces. In the early phases of this architectural shift, we are witnessing several technological approaches applied to the problem – some are legacy methods (e.g. agents and virtualized appliances), while others apply newer techniques (such as SDN-based overlays and distributed systems architectures). Each approach has its merits, but the critical issue for operators is whether the options available will address their functional requirements both today and into the future.
Today, the two critical functional differentiators among micro-segmentation architectures are scalability and security policy capability (the security proposition). There are other non-functional considerations related to implementation (such as operational efficacy and complexity), and a final general consideration is the security properties of the architecture itself (i.e. how easy it is to overcome or disable). In addition, security architectures and requirements must be designed with a 24-36 month horizon in mind.
The modern data center and cloud are all about scale and efficiency, and scalability is where virtual firewall appliance solutions show their limitations. These solutions require a means of steering traffic into a topologically local appliance and confine dynamic state to a small number of devices within a cluster. As a result, they struggle to move beyond coarse-grained segmentation topologies (such as subnet boundaries) or to support the dynamic state synchronization needed for efficient workload migration within a pod.
Other approaches, such as agent-based methods, are designed to scale out to data center pod size and beyond. However, the ‘fan out’ challenges presented by the sheer number of agents requiring synchronization (10-100x the number of vSwitches or distributed sensors in other approaches) need to be carefully assessed before deployment.
Lastly, in terms of scaling IO performance through an individual enforcement point, it is worth understanding where scalable user-space methods are preferable to kernel-based approaches that depend on the performance of a single core. As we move towards 40Gbps server connectivity, the limitations of kernel-based solutions will be exposed.
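To make the single-core constraint concrete, here is a back-of-envelope calculation (illustrative figures of our own, not vendor benchmarks) of the per-packet cycle budget a packet path pinned to one core would face at 40Gbps line rate:

```python
# Back-of-envelope packet budget at 40 Gbps line rate (illustrative
# assumptions: worst-case minimum-size frames, a 3 GHz core).
LINE_RATE_BPS = 40e9
# A minimum Ethernet frame occupies 84 bytes on the wire:
# 64-byte frame + 8-byte preamble + 12-byte inter-frame gap.
WIRE_BYTES_PER_MIN_FRAME = 84
CPU_HZ = 3e9  # assumed 3 GHz core clock

pps = LINE_RATE_BPS / (WIRE_BYTES_PER_MIN_FRAME * 8)  # packets per second
cycles_per_packet = CPU_HZ / pps                      # budget on one core

print(f"{pps / 1e6:.1f} Mpps worst case, "
      f"~{cycles_per_packet:.0f} CPU cycles per packet on a single core")
```

Roughly 50 cycles per packet leaves little headroom for stateful inspection, which is why single-core kernel paths become the bottleneck long before user-space approaches that scale across cores.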
Not all micro-segmentation architectures are created equal in the means and variation of their security capability. This functionality has considerable impact on the ability to enforce security policies and understand threat context.
Network overlay and agent-based technologies generally offer little more than semi-stateful protection. This equates to the basic ability to match on a flow and install a return path into the forwarding path; more advanced solutions may perform some rudimentary processing of specific applications in order to set up certain flow entries correctly. Nevertheless, any solution that relies on iptables and conntrack has limited security processing capabilities at best.
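The semi-stateful model described above can be sketched as follows (a hypothetical illustration of the general technique, not any vendor's implementation): on the first packet of a permitted flow, both the forward entry and the return-path entry are installed, and subsequent packets are forwarded on table hits alone, with no transport-layer state tracking (TCP state machine, sequence validation) at all.

```python
# Minimal sketch of semi-stateful flow handling: flow-table matching with a
# pre-installed return path, and no deeper inspection after the first packet.
flow_table = set()  # entries keyed by 5-tuple

def five_tuple(pkt):
    return (pkt["proto"], pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"])

def reverse(key):
    proto, src, sport, dst, dport = key
    return (proto, dst, dport, src, sport)

def process(pkt, policy_allows):
    key = five_tuple(pkt)
    if key in flow_table:
        return "forward"              # table hit: no further inspection
    if policy_allows(pkt):
        flow_table.add(key)           # install forward entry
        flow_table.add(reverse(key))  # pre-install return path
        return "forward"
    return "drop"

# Hypothetical policy: permit only traffic to 10.0.0.5:443.
allow = lambda p: (p["dst"], p["dport"]) == ("10.0.0.5", 443)

syn = {"proto": "tcp", "src": "10.0.0.9", "sport": 51000,
       "dst": "10.0.0.5", "dport": 443}
reply = {"proto": "tcp", "src": "10.0.0.5", "sport": 443,
         "dst": "10.0.0.9", "dport": 51000}
print(process(syn, allow), process(reply, allow))  # forward forward
```

Note that the reply is forwarded purely because its 5-tuple matches the pre-installed entry, which is exactly why this model cannot detect transport-layer abuse within an established flow.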
Alternatively, ‘network security-rooted’ products, such as appliances and distributed security systems, provide an altogether different level of control. Such solutions provide full, stateful management of transport connections and the ability to understand application context through application identification engines. Application-layer processing is tremendously important within today’s data centers, where many applications employ HTTP connections - it’s essential to understand the specific application executing across this simplified protocol transport. Security propositions that offer this richer level of processing can also understand behavior at a much more granular level, including the ability to interrogate user behavior, understand access to files and objects, and analyze DNS usage - all important indicators of potentially malevolent behavior.
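To illustrate why application identification matters on a shared HTTP transport, the sketch below (hypothetical hostnames and signature mapping, not any product's engine) distinguishes applications by a request attribute - the Host header - rather than by port alone; real engines also draw on TLS SNI, URL paths, and payload signatures:

```python
# Hypothetical application-identification sketch: many distinct applications
# ride the same port-80/443 transport, so classification must look past the
# 5-tuple into the request itself.
APP_SIGNATURES = {  # assumed mapping, for illustration only
    "api.internal.example": "billing-api",
    "files.internal.example": "file-share",
}

def identify_app(http_request: str) -> str:
    """Classify an HTTP/1.1 request by its Host header."""
    for line in http_request.split("\r\n"):
        if line.lower().startswith("host:"):
            host = line.split(":", 1)[1].strip()
            return APP_SIGNATURES.get(host, "unknown-http-app")
    return "unknown-http-app"

req = "GET /v1/invoices HTTP/1.1\r\nHost: api.internal.example\r\n\r\n"
print(identify_app(req))  # billing-api
```

A port-based policy would see both of these applications as identical "HTTP" flows; it is this per-application context that allows policy to be written against the workload's actual behavior.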
Not everyone will require full application-aware security policy enforcement within their dynamic data center and cloud, so your own organization’s requirements will dictate the options available to you. It is, however, important to be aware of the differences in security proposition across approaches and to consider your control requirements, including those needed to mitigate future threats and meet regulatory obligations.
Figure 1: Security Architectures - Control Matrix
Solution integrity addresses whether security policy controls are deployed independently of the infrastructure being protected or within the same trust boundary as the asset being protected. As numerous successful compromises have shown, once an attacker gains control of an asset, it is common practice to disable the security controls implemented there (for example, logging). Agent-based technologies are exposed to such weaknesses, and they also complicate workload configuration, which can further affect security posture.
By contrast, overlays, security appliances and distributed systems are all designed to operate outside the trust boundary of data center and cloud assets. This means they cannot be compromised through this vector, particularly those that are completely invisible to an attacker who controls a workload.
Figure 2: Security Architectures – Approach Comparisons
A final factor to consider is the applicability of the technology selected to address data center and cloud challenges within 24-36 months. Any micro-segmentation security architecture should be able to accommodate expected scope changes within the data center, including:
- The introduction of application containers (for example, Docker) as a common unit of processing resource alongside today’s hypervisor-based virtual machines.
- The requirement to deploy additional security control capabilities scaled out across the data center (as opposed to being confined to the perimeter, as they are today).
- Any network architecture changes that could expose tight dependencies between the chosen security controls and the network services of the day. Tight coupling between the selected micro-segmentation (security) architecture and the network architecture will lead to a lack of agility and flexibility in the long run.
- Migrations to hybrid and public cloud architectures.
This blog explored the different properties of common data center network security and micro-segmentation architectures and addressed many of the questions customers ask as they evaluate approaches. It also illustrates the thought process that led vArmour to adopt a distributed systems approach to data center and cloud security, an approach that is unique within the industry today.