Lately, with all the buzz around containers, I have been focusing on learning as much as I can about them. At Ignite 2016, Microsoft's conference for IT pros, there was a lot of excitement and interest around containers. I heard everything from "what is a container?" to "what is the difference between a container and a VM?" to "just how secure are containers?" If you have been hiding under a rock or have just returned from a remote location without internet access, Microsoft announced general availability of Windows Server 2016 with support for containers, including containers running in Hyper-V. If you thought virtualization made a huge impact on the world of IT, keep your eye on containers. Moving forward, living in a containerized world is going to become the norm.
There are a lot of topics I would like to blog about, but in order to do so I need to break the ice with a quick primer. This is important for those of you following this blog who have yet to get a basic understanding of containers. If you already understand containers, feel free to wait for my next post.
Overview
So what is a container? In its most simplistic definition, it is described as OS virtualization. So what does that mean? Operating-system-level virtualization, as defined by Wikipedia, is a server virtualization method in which the kernel of an operating system allows the existence of multiple isolated user-space instances, instead of just one. I have heard several analogies for what a container is, but the one that resonates best with me is the container ship. Think of the ship itself as the host, with all the machinery and workers as the OS. The containers are filled with goods (applications) and stored on the ship. There may be hundreds or thousands of containers on one ship, and each is isolated from the others. A container does not have its own machinery or workers assigned to it, or it would not be a very efficient operation; instead, the workers are tasked with supporting all the containers. The crane is responsible for orchestrating the movement of the containers. The big benefit is that you can move a container from one location to another in one operation, which makes it portable. The container can be transported via ship, plane, train, or truck, which means it has no dependencies on its host environment.
Most of us are familiar with the traditional method of virtualization, where we run a software layer called a hypervisor, such as Hyper-V or VMware, directly on top of the hardware to virtualize it. This was great, as it allowed us to better utilize our hardware and consolidate multiple servers, which in turn saved us a lot of money. Virtualization has made a big impact on the industry for many years, with more to come. However, as technology evolves, we continuously look for better ways to improve resource utilization, environmental efficiency, and operational efficiency. This is where containers come in.
Containerization
The key to understanding containers is that multiple containers run on a shared operating system. The applications running in containers share the resources of the same operating system instead of each having its own copy of the operating system like a traditional VM. With that, hardware requirements can be reduced, which lowers cost and increases container density. Applications within containers operate as if they have their own resources and file system. The operating system kernel is shared between containers, but each container has its own namespace to provide isolation. In addition, the host is able to control the amount of resources used by each container so that one container cannot impact the performance of the others. The combination of near-instant startup, namespace isolation, and resource governance makes containers ideal for application development and testing.
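To make resource governance a little more concrete, here is a minimal sketch using the Docker command line. The image name and limit values are only illustrative, and exact flag support varies by Docker version and isolation mode:

```
# Run an interactive Windows Server Core container with a memory cap,
# so it cannot starve other containers on the same host.
# (Image name and limits are illustrative; adjust to your environment.)
docker run -it --memory 1g microsoft/windowsservercore cmd

# Give a long-running container a smaller relative share of CPU time.
docker run -d --cpu-shares 512 microsoft/windowsservercore ping -t localhost
```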
So when would you use a VM over a container? Let's say you have an application that was written for both Windows and Linux and requires different OS versions for testing. In that case it would make more sense to deploy VMs, since each VM can have its own OS. In short, the answer is that it depends on your requirements. In reality, I expect most organizations will continue to run both containers and VMs.
To bring the power of containers to all developers, Microsoft decided to bring container technology to Windows Server. To give developers who use Docker containers on Linux the same experience on Windows Server, Microsoft announced a partnership with Docker to support Windows Server Containers. In Windows Server 2016 there are two flavors of containers: Windows Server Containers and Hyper-V Containers. When would you use one versus the other? Windows Server Containers are the choice when you run trusted applications in a secure environment. If you need to run untrusted applications, Hyper-V Containers are a better choice. Hyper-V Containers take a slightly different approach to containerization: to create more isolation, each Hyper-V Container has its own copy of the Windows kernel and has memory assigned directly to it, a key requirement of strong isolation. The trade-off is that Hyper-V Containers are a bit less efficient in startup time and density than Windows Server Containers.
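In practice, the two flavors are started the same way; with Docker on Windows Server 2016 the isolation mode is just a flag on the run command. A minimal sketch (the image name is illustrative):

```
# Windows Server Container: shares the host kernel (the default on Windows Server 2016).
docker run -it microsoft/windowsservercore cmd

# Hyper-V Container: same image, but run inside a lightweight utility VM with its own kernel.
docker run -it --isolation=hyperv microsoft/windowsservercore cmd
```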
With the ability to build, ship, and run multiple containers, sometimes hundreds or more, come challenges and complexities. This is where orchestrators step in. Listed here are just a few of the better-known orchestration solutions available today, followed by a quick look at Docker's built-in swarm mode.
- Docker Compose and Docker Swarm (Docker)
- Marathon (Mesos and DC/OS)
- Kubernetes (developed by Google)
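As a minimal sketch of what an orchestrator does for you, here is Docker's swarm mode (built into Docker 1.12 and later) scheduling three replicas of a service across whatever nodes have joined the swarm. The service and image names are only illustrative:

```
# Turn this host into a swarm manager (other hosts can join as workers with 'docker swarm join').
docker swarm init

# Ask the swarm for three replicas of a web container; the orchestrator decides where they run
# and restarts them if they fail. The image name is illustrative.
docker service create --name web --replicas 3 microsoft/iis

# Compare desired vs. running replicas.
docker service ls
```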
OK, so now that you understand what containers are, where does Docker come in? Docker is an open platform for developing, shipping, and running applications using container virtualization technology. Docker lets you focus on application development and on getting your code deployed quickly and consistently. The Docker platform itself consists of multiple products, such as the following (a quick Docker Engine example follows the list):
- Docker Engine
- Docker Hub
- Docker Machine
- Docker Swarm
- Docker Compose and Kitematic
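Docker Engine is the piece you will interact with most, so here is a hedged first session on a Windows Server 2016 container host. The base image name reflects what Microsoft published on Docker Hub at the time and may differ in newer releases:

```
docker pull microsoft/windowsservercore           # download the Windows Server Core base image from Docker Hub
docker images                                     # list the images cached on this host
docker run -it microsoft/windowsservercore cmd    # start an interactive container from that image
docker ps -a                                      # list containers, both running and stopped
```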
Container Management
Some of the capabilities for running and managing containers in the cloud are:
- Docker Datacenter – now available in the Azure Marketplace
- Docker Datacenter managing a hybrid, container-based application spread across Azure and Azure Stack
- Operations Management Suite (OMS) managing containers across Microsoft Azure and Azure Stack
- Azure Container Service
- SQL Server running on Linux, in a Docker container (see the example after this list)
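That last item is easy to try yourself. A hedged sketch, based on Microsoft's early documentation for the SQL Server on Linux container image; the image name and environment variables may differ in current releases, and the password below is a placeholder:

```
# Pull and run SQL Server (Linux) in a container, listening on the standard port 1433.
docker run -d -p 1433:1433 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<YourStrong!Passw0rd>" microsoft/mssql-server-linux
```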
Resources
The information presented above is a brief overview of containers and Docker to get you familiar with some of the terminology. To help you gain a deeper understanding of these technologies, follow the recommended training below.
- Introduction to Docker, Docker Fundamentals, Docker Operations
- Container Management using Docker
- Build and Run Your First Docker Windows Server Container
Summary
Microsoft, in general, has seen a huge uptick in container customers in 2016. As enterprises start to show interest in containers, most conversations revolve around open source, choice, and hybrid solutions. As more organizations realize the flexibility, portability, and openness of containerization, we will start to see much greater adoption of containers. Now is the time to get on board with containerization, as its evolution is inevitable. Stay tuned for future articles on containerization.