The Problem with Agents - as a Primary Means of Security Policy Enforcement

In earlier blogs, I talked about the differences between micro-segmentation approaches and the fundamental distinctions to look out for. One of those distinctions is whether: 1) controls are independent of the workload or application, as in most SDN implementations or distributed security systems like vArmour; or 2) controls run as an agent within the same namespace as the application or data under protection, and therefore within reach of any attacker who has gained access to that host or workload.

On marketing ‘snake oil’

First, I want to address the nonsense that certain agent vendors peddle: that deploying network filters on a workload via an agent somehow “attaches them to the application”. The ‘application orientation’ of a segmentation technology is determined largely by the richness of its security policy model and its ability to introspect traffic, that is, to understand application behavior rather than merely performing basic packet filtering on a box CONTAINING the application. To be crystal clear: today’s agents perform basic packet filtering*, whereas more advanced security products understand the behavior of the application, user, and data inside a cloud data center and can process traffic accordingly. Conflating location with capability is genius marketing of the most opaque kind, but as any engineer can see, it is utter nonsense.

Where agents CAN gain more insight into the application is by understanding information about the processes executing on the host, but you don’t need to put a (primitive) packet filter there to gather that information. And putting your primary control in an agent exposes you to the problems I’m about to discuss.
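To make the point concrete, here is a minimal sketch of the kind of workload context a local collector can gather without owning any enforcement: it simply enumerates running processes by reading Linux’s standard /proc filesystem. This is an illustration, not any vendor’s actual collector.

```python
# Minimal process enumerator: reads process names from the Linux /proc
# filesystem. A telemetry-only collector like this can supply workload
# context to an independent enforcement point; no packet filter required.
import os


def list_processes(proc_root: str = "/proc") -> list[tuple[int, str]]:
    """Return (pid, process name) pairs for every running process."""
    procs = []
    for entry in os.listdir(proc_root):
        if not entry.isdigit():        # skip non-process entries like 'self'
            continue
        try:
            with open(os.path.join(proc_root, entry, "comm")) as f:
                procs.append((int(entry), f.read().strip()))
        except OSError:
            pass                       # process exited between listdir and open
    return procs
```

The key design point: the collector only reads; the decision and enforcement can live somewhere the attacker cannot reach.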

An illustration of the security problems with agents

Few of you will have missed the recent Symantec vulnerability found within their Norton Anti-Virus product. Now, first of all, Symantec is among the best there is at building personal security products: they have been doing so for decades and are broadly recognized as a world leader, with a large team of world-class engineers. However, even when you are the ‘best there is’ at this type of control, a bug or vulnerability executing on the endpoint can be catastrophic and can be exploited as a weakness by an attacker, increasing the attack surface rather than reducing it.

In the case of Symantec, researchers showed that sending malware crafted to exploit the Symantec ‘decomposer engine’ could cause it to be executed on a workstation without any user intervention. Pretty bad (in fact, to quote Google Project Zero, “as bad as it gets”), and that’s just on a workstation or laptop. Now, imagine if you were reliant on agents to protect your customers’ confidential data running on your data center servers…

Step-by-step attack using agent-based micro-segmentation for protection

Now consider the micro-segmentation security solutions implementing ‘packet filters’ in an agent running on your most critical systems. Imagine I’m an attacker who has found my way onto one of those systems (the presence of the agent alone does not preclude that possibility, which is why we implement ‘defense-in-depth’). So, what do I do to get the data I’m after?

  • Step 1: Gather information. I have gained access to the OS, and probably have some privilege. So, I look to see what controls are enabled on this system (probably iptables settings in /etc/sysconfig/iptables or somewhere else very easy to find) and, by extension, (probably) every other system in the data center. Ah, look, there is a basic packet filter enabled to implement micro-segmentation - I have taken a step in understanding the security posture of my victim! Had the security controls been implemented independently, I could never ‘know’ this information; at best I could observe that some of my attempted connections fail for no apparent reason.
  • Step 2: Attack the security system itself. OK, so I know my victim has chosen a weak micro-segmentation solution. I know these systems are managed from a central controller and, yes, I can see where that is from the machine’s configuration and connection table. So, I am now able to identify and directly attack the critical security infrastructure, which is probably on the same segment as other critical infrastructure. With an independent security control, it is nearly impossible to ascertain the location of its control or policy definition system, which, for good measure, is often reachable only over separate private networks.
  • Step 3: Disable security controls. And once I’ve collected all this information, why don’t I just disable the control? Kill the agent process or flush iptables? Again and again, we have seen how easy it is for attackers to disable security measures (such as verbose logging) on compromised systems in the data center. The same goes for specialist controls, like agents providing packet filters. Independent security controls are far more challenging to successfully attack and disable.
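The three steps above can be sketched in a few lines. This is an illustration, not working exploit code; the agent name “segd” and its controller port 8443 are hypothetical stand-ins for whatever host-based product is deployed.

```python
# Illustrative reconnaissance an attacker could script on a compromised host,
# assuming a hypothetical agent process "segd" that phones home to a central
# controller on TCP port 8443. All names and ports are made up for the sketch.

def step1_enumerate_rules(iptables_dump: str) -> list[str]:
    """Step 1: parse `iptables -S`-style output to learn the local policy."""
    return [l for l in iptables_dump.splitlines() if l.startswith("-A")]

def step2_find_controller(conn_table: list[tuple[str, str, int]]) -> list[str]:
    """Step 2: scan a (process, peer_ip, peer_port) connection table for the
    agent's management channel, revealing the controller's address."""
    return [ip for proc, ip, port in conn_table
            if proc == "segd" and port == 8443]

def step3_disable() -> list[str]:
    """Step 3: the commands an attacker with root would run (listed, not run)."""
    return ["pkill segd", "iptables -F"]

rules = step1_enumerate_rules("-P INPUT DROP\n-A INPUT -p tcp --dport 443 -j ACCEPT")
controllers = step2_find_controller([("sshd", "10.0.0.5", 22),
                                     ("segd", "10.9.9.1", 8443)])
print(rules)        # the victim's entire local segmentation policy
print(controllers)  # ['10.9.9.1']: the security controller to attack next
```

Every input this script needs is readable from inside the compromised host; with an independent control, none of it would be.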

Independent security controls and experience from the real world

One of the lessons we have learned over many years of IT security practice is that many security functions (for example, executing potential malware in a partitioned sandbox, or implementing security policies on traffic) are most safely and securely implemented independently of the applications and data under protection. This separates the control from the asset and, more importantly, from the attacker (should they happen to gain access). It is a reason we depend upon hardened perimeter firewalls rather than ‘personal firewall’ technology running upon every server in a DMZ. Security operators discuss trust boundaries and the importance of separation, and that is what we mean here in discussing independent security controls.

Where do agents fit in data center security?

So, with that said, am I saying all agents are bad? Of course not - the world of Information Security is not so black and white.

There are some functions that only agents running upon a workload or endpoint can achieve: monitoring kernel calls, monitoring local file system contents, and gathering information about executing processes. In this space, there are a number of vendors who provide tremendous value, such as Tanium and Cisco’s Tetration agent. The ability to collect context from the workload in order to enrich security decision-making is a huge value proposition of Tetration. What’s even better is taking that information, enriching it with application-layer information from network interactions, and using it to build rich policy, enforced in a secure and independent manner: nearing nirvana for data center security. This is one of the reasons we are so proud to be partnering with Cisco in their Tetration ecosystem.

Over the past couple of years, I have been asked on numerous occasions, “Why didn’t you build an agent-based security solution? Surely it would have been easier?” To which I generally answer: “In my decades architecting mission-critical systems, mostly in organizations we would consider critical national infrastructure, I have never felt it appropriate to use agents as a primary defense for the ‘crown jewels’ of the organization. Similarly, at vArmour we didn’t take the path of least resistance, mostly because agents suck for this purpose: as a primary security control in the data center.”

To learn more about the importance of independent security controls in multi-cloud architectures, watch my webinar on-demand: CTO Perspective: Unveiling a Pathway to Security in the Multi-Cloud World.

* For the purpose of this discussion, ‘basic’ controls provide the ability to filter traffic based upon packet header information up to the transport layer (OSI Layer 4), whereas advanced security controls allow the operator to construct policies around application-layer semantics. Advanced controls prevent the attacker from hijacking commonly used protocol ports, such as HTTP’s port 80, and also provide far more visibility into the applications and users. As we saw a decade ago at the perimeter, where only ‘basic’ controls are implemented, attackers quickly move to exploit commonly used protocol ports to conceal their attacks. This led to the introduction of next-gen firewall technologies, which provided a set of more advanced defenses in response. Given that we already know how attackers exploit basic filters, why would you choose to deploy them to protect your most critical data center assets?
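The footnote’s distinction can be shown in a toy example. The rule format and the ‘looks like HTTP’ check below are gross simplifications invented for illustration, not any product’s policy model, but they show why an allowed port is not an allowed application.

```python
# Toy contrast between a "basic" layer-4 filter, which sees only header
# fields, and an "advanced" control that also checks application-layer
# semantics. Any payload riding an allowed port sails past the L4 check.

def basic_l4_allow(dst_port: int, allowed_ports=frozenset({80, 443})) -> bool:
    """Basic control: decision uses only the transport-layer header."""
    return dst_port in allowed_ports

def advanced_l7_allow(dst_port: int, payload: bytes) -> bool:
    """Advanced control: also require the payload to look like real HTTP
    (crude check on the request method; real engines do far more)."""
    looks_like_http = payload.split(b" ")[0] in {b"GET", b"POST", b"PUT", b"HEAD"}
    return basic_l4_allow(dst_port) and looks_like_http

exfil = b"\x00\x17 custom C2 framing tunneled over port 80"
print(basic_l4_allow(80))                        # True: L4 waves it through
print(advanced_l7_allow(80, exfil))              # False: payload is not HTTP
print(advanced_l7_allow(80, b"GET / HTTP/1.1"))  # True: genuine HTTP passes
```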
