Unpacking Containerization Part 1: Defining Container Technology and Its Benefits

Containers are exploding in popularity and are becoming the preferred method for deploying applications. In this four-part Unpacking Containerization blog series, we will explain the benefits of containers, the challenges they pose from a security perspective, how security can be integrated into your DevOps cycle, and how vArmour can improve operational and security practices.

What is a Container?

Containers are a portable, platform-agnostic way to package an application's configuration, code, and dependencies. They are the next step in the evolution toward fast application delivery, and they are gaining acceptance in organizations that use DevOps practices. Containers both simplify and speed up application prototyping, development, deployment, and administration, supporting the workflows commonly known as continuous integration and continuous delivery (CI/CD). We will address some of the pros and cons of using containers in this post, and in a later post detail how vArmour approaches their security implications.

What are the Benefits of Containers?

Containers are popular due to their portability and their role in reducing the organizational and technical friction of moving an application through the development, testing, and production lifecycle, essentially placing control of an application's lifecycle and its technical dependencies in the hands of the development team.

All the required application files and software dependencies are assembled into containers, allowing them to be deployed on any computer. The contents packaged into a container will run the same way whether in testing or production, within a private datacenter or the public cloud (at least in theory). These characteristics are beneficial because companies can deploy applications reliably and consistently regardless of environment. Ultimately, this allows companies to release new applications and features faster than if they were required to manually configure each server.

Containers also encourage operational efficiency, allowing companies to easily run multiple applications on the same instance. Each application and its dependencies are placed in a separate namespace and run as an isolated process, so there are no shared dependencies or incompatibilities between containers. Because each container is simply a process in its own protected namespace on an operating system, containers can spin up almost instantly. This small footprint allows quick, 'just-in-time' instantiation and destruction of applications, letting containers rapidly scale applications with demand.

It is also simple to run new versions of applications because the application and its dependencies are packaged in what is referred to as an image. Container images can be created to serve as the basis for other images: an operations team can create a base image composed of the necessary dependencies (often libraries pertaining to a specific requirement) and configurations, and development teams can then build their applications on top of these base images. This avoids the complexities of server configuration and tuning.
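As a minimal sketch of this base-image workflow, a development team's Dockerfile might layer an application on top of an operations-provided base image. The registry path, file names, and user here are hypothetical, purely for illustration:

```dockerfile
# Hypothetical hardened base image published by the operations team
FROM registry.example.com/base/python:3.11-hardened

# Application dependencies and code are layered on top of the base
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Run as an unprivileged user rather than root
USER app
CMD ["python", "main.py"]
```

Because the base image already carries the vetted dependencies and configuration, development teams only describe what is specific to their application.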

Developer productivity is increased with the use of containers. An application can be broken into units, each of which provides a discrete function (known as a microservice) and executes in its own container. Containers are isolated from one another, so their dependencies and requirements do not need to be consistent. Developers then have the ability to upgrade, scale, and operate each service independently. This 'looser coupling' of application functions can provide a tremendous degree of flexibility and reduce the fragility associated with large, tightly coupled application requirements.

Finally, containers allow companies to formally track versions of application code and their dependencies through well-described manifests and configuration files. Efforts such as the Cloud Native Computing Foundation (CNCF, which is standardizing the container ecosystem around Kubernetes and related projects) and the Open Container Initiative (OCI, which is standardizing runtime and image formats) will help ensure that the power and velocity of open source and the benefits of open standards supercharge container-related technological innovation.

Challenges Associated with Containers and Microservices-Based Architectures

Though containers make life easier for developers and system administrators, container technology wasn't necessarily designed with secure defaults, at least not initially. So while containers are not inherently insecure, they can be deployed in an insecure manner by developers with little guidance from their security counterparts. There are several risks an organization using containers should consider.

Microservices architectures often result in a different type of application architecture: where a traditional application was responsible for many functions joined together by internal communication (in-process calls and IPC), microservices address simpler problems individually but communicate via network-exposed APIs. While the simplification of individual functions can lead to better security, the increased API attack surface needs to be considered. Additionally, the increased rate of change within modern applications can challenge traditional, static appliance-based security systems. Fortunately, container-based ecosystems offer the opportunity to integrate security into the development process, and many organizations are taking this opportunity to 'build in' security best practices and controls from the outset.

Even though best practices should mostly address the earlier concerns around host security, if the applications running inside the containers are not secure (and there is no such thing as fully secure software), other problems can arise: containers running vulnerable software provide an attacker with entry points into the network that exist only for the lifetime of the container. Furthermore, container networks that are flat by default can present an open attack surface, allowing any container to access any other container, meaning application vulnerabilities could be used to pivot from one container application to another. Images executing in containers should be secured through the CI/CD pipeline, and running containers should be dynamically micro-segmented based upon their described properties and relationships.
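To make the micro-segmentation point concrete, one common mechanism is a Kubernetes NetworkPolicy. The sketch below (namespace, labels, and port are hypothetical) replaces the flat any-to-any default for a payments service with a rule allowing ingress only from front-end pods:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-allow-frontend   # hypothetical policy name
  namespace: shop                 # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: payments               # policy applies to payments pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend       # only front-end pods may connect
      ports:
        - protocol: TCP
          port: 8443
```

Because the policy selects pods by label rather than by IP address, it continues to apply as containers are created and destroyed, which suits their short lifetimes.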

With these security concerns in mind, one of the biggest disadvantages of containers is the difficulty involved in monitoring and understanding these transient and highly dynamic entities. With a multitude of containers fulfilling multiple functions, it is difficult to understand what is happening within each container, and among the containers. Enterprises need more than just logs: they need a way to monitor east-west traffic in real time, with Layer 7 visibility (in particular to understand the role of complex services such as message-oriented middleware and load balancers), across their data centers, public and private clouds, and between their container infrastructure and traditional runtime environments.

In our next post, we will take a closer look at how vArmour does this and how this information can be used to improve operational and security practices.
