Today I wanted to talk about Azure Container Service, aka ACS. ACS went into General Availability back in April of 2016. So what is ACS, you ask? ACS is a service in Azure that allows you to create, configure, and manage a cluster of virtual machines that are preconfigured to run containerized applications. Microsoft takes on a bit of the hard work of setting up a container-ready cluster for you. You could certainly deploy a set of virtual machines on your own, install the Docker engine, and create a cluster, but why go through the hassle when you can let Container Service handle it all while taking advantage of the enterprise features in Azure? With ACS, you select the size, the number of hosts, and your choice of orchestrator, and Container Service does its magic. You can continue to use the tools you are familiar with and the orchestrator of your choice. Azure Container Service leverages the Docker container format to ensure that your application containers are fully portable, and it offers you a choice of DC/OS or Docker Swarm for scale and orchestration operations. Simply put, Azure Container Service is a container hosting environment that utilizes open source tools. With all the momentum around containers these days, ACS is a service that will let you start testing containers in a matter of minutes.
Getting Started
Some people may argue that they are not ready to start testing with containers. Maybe they think having a monolithic application excludes them from using containers. Well, that's the good thing about containers: you can do a lift-and-shift of a monolithic app into a container if desired. There are several advantages to moving to containers, such as fast deployment, scalability, portability, and avoiding vendor lock-in. Based on a state of app development survey done in early 2016, 80% of respondents felt that Docker would be central to their cloud strategy, with 3 out of 4 stating their top initiatives would revolve around modernizing legacy apps and moving to a microservices architecture. Some will argue that microservices have been around for a long time and debate how containers would help. Yes, microservices have been around for a while, but what Docker and containers do is make the approach faster and more streamlined, with better collaboration across the dev and ops teams. It's all about Build, Ship, Run: any app, on any stack, anywhere. Are you starting to see the value of containers now?
Installation
Let's begin by deploying an ACS cluster using Docker Swarm as the orchestrator. Since this is our first introduction to ACS, I will demonstrate how this is done in the portal. However, you can also deploy a cluster using ARM templates, which you can find in the GitHub repo for azure-quickstart-templates; a rough sketch of that approach follows below.
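For example, a command-line deployment of the Swarm quickstart template might look roughly like this, using the Azure CLI of that era. This is a minimal sketch; the resource group name, deployment name, and template URI are illustrative and may need adjusting:

# Create a resource group, then deploy the ACS Swarm quickstart template.
# The CLI will prompt for the template's parameters (DNS prefix, SSH key, and so on).
azure group create acsdemo-rg westus
azure group deployment create acsdemo-rg acs-swarm-deploy \
  --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-acs-swarm/azuredeploy.json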
Log in to the Azure Portal, click +New, type “azure container service”, and select it from the results.
Select Azure Container Service and click Create.
Note: You are going to need an SSH public key that will be used for authentication against the ACS VMs. It is important for the key to begin with 'ssh-rsa' and to end with a comment in the form 'username@hostname'. You can accomplish this by using PuTTYgen to create a public and private key pair.
Open PuTTYgen and click Generate. To provide randomness, move your cursor around the blank area while the key is being generated. Save the public and private keys to a local folder. Optionally, you can set a passphrase to protect the private key. You can leave PuTTYgen open, as you will need the key in the next step.
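If you would rather not use PuTTYgen, here is a rough equivalent using OpenSSH's ssh-keygen (the key comment and file name below are just examples); note that PuTTY itself still needs the private key imported into PuTTYgen and converted to .ppk format:

# Generate an RSA key pair; the -C comment becomes the 'username@hostname' suffix.
ssh-keygen -t rsa -b 2048 -C "daryl@acsdemo55" -f ~/.ssh/acsdemo_rsa

# The public key should have the expected shape:
cat ~/.ssh/acsdemo_rsa.pub
# ssh-rsa AAAAB3NzaC1yc2E... daryl@acsdemo55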
On the Basics blade, fill in the appropriate info, including the SSH public key you just generated, making sure it ends with the 'username@hostname' comment, and click OK. In my example I used 'daryl@acsdemo55'.
On the Framework Configuration blade, select your choice of orchestrator to use when managing the applications on the cluster. Here you have two options to choose from: DC/OS or Swarm. We will choose Swarm in this demo. So what is Docker Swarm, you ask? Swarm is a simple tool that controls a cluster of Docker hosts and exposes it as a single virtual host. Since Swarm uses the standard Docker API, you can use any tool that understands Docker commands. In addition, Swarm follows the “batteries included but removable” principle, meaning that it's extensible with third-party plugins.
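Because Swarm speaks the standard Docker Remote API, the stock Docker client works against the whole cluster just as it does against a single daemon. A quick illustration (the endpoint below is a placeholder; we will tunnel to the real one later):

# Point the ordinary Docker client at the Swarm endpoint instead of a local daemon.
docker -H tcp://<swarm-endpoint>:2375 info   # cluster-wide view of the nodes
docker -H tcp://<swarm-endpoint>:2375 ps     # containers across the cluster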
On the Azure Container Service Settings blade, select the number of agents and masters, the size of the agent VMs, and a unique DNS name prefix. The agent count sets the initial number of VMs in the VM scale set; these will be the members of the swarm cluster that you will create.
On the Summary blade, ensure validation has passed. At this point you can download the template that will be used for the deployment. If you wanted to customize the deployment, you could cancel this one, edit the JSON template, and then deploy the edited template. In our demo we will continue with the defaults. Click OK to continue, and click Purchase to agree to the terms.
At this point the container service is being deployed as per your configuration. You can see this pinned on the dashboard.
Once the deployment has completed (mine took around 10 minutes), you will get a notification on the dashboard that the deployment was successful. In addition, you can click on the resource group, and the Last deployment should say Succeeded.
Now that the deployment has completed, let’s see what we have from a resource standpoint. Here you can see that we have two load balancers, one for the swarm agent and another for the swarm master. In addition, we have an availability set that includes the three masters (spread across 3 fault domains and 5 update domains) and a VM scale set that includes the three agents. All of which makes up the container service.
Here we have the virtual network along with the public IPs for the load balancers and the associated NICs for each master server.
And finally, the storage accounts. You may be wondering why there are so many. The storage account ending in …swarmdiag0 is for the diagnostic logs. The storage account 'ye4r6pe25gdmsswarm0' includes the 3 OS disks for the masters. The remaining 5 storage accounts are for the OS disks of the agents. Since the VMSS is limited to 100 VMs, with a recommendation of 20 VMs per storage account, this gives us 5 storage accounts. Since we only have 3 agents, you will notice that 2 of the storage accounts are empty. However, when you scale the VMSS, the VMs will be distributed across all 5 storage accounts. This also helps to avoid IO contention, since standard storage accounts are limited to 20,000 IOPS.
Now that we have the container service deployed and understand what our resources are, let's see what this looks like from an architectural standpoint. Here you can see that by default there are 3 NAT rules on the master load balancer, mapping external ports to SSH port 22 on each swarm master. In addition, application traffic is load-balanced across the swarm agents.

Connecting to ACS Cluster
To connect to the cluster we will create an SSH tunnel with PuTTY on Windows. If you don't already have PuTTY on your system, you can download it from the same site as PuTTYgen. The reason for creating a tunnel is to have encrypted communications between my local machine and the ACS cluster (specifically, the swarm master).
First, we need to locate the public DNS name of the load-balanced masters by opening the resource group and clicking on the swarm master IP resource. Copy the DNS name.
Open PuTTY and enter a host name that is comprised of the cluster admin user name and the public DNS name of the master (daryl@acsdemo55mgmt.westus.cloudapp.azure.com). Enter 2200 for the port. Under SSH > Auth, browse for the private key we created earlier. Next, under SSH > Tunnels, enter 2375 for the Source port and localhost:2375 for the Destination, and click Add. Optionally, you can give the session a name and Save it for later use. Click Open when you are ready to connect, and click Yes to accept the connection.
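If you are working from a machine with OpenSSH rather than PuTTY, a roughly equivalent tunnel looks like this (the host name is from my demo, and the key path assumes the ssh-keygen sketch earlier; substitute your own):

# Forward local port 2375 to the Docker endpoint on the swarm master,
# connecting over the NATed SSH port 2200.
ssh -i ~/.ssh/acsdemo_rsa -p 2200 -L 2375:localhost:2375 \
  daryl@acsdemo55mgmt.westus.cloudapp.azure.com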
We can confirm our connection by reviewing the Putty event log. Now we are ready to start using Docker.
Managing Containers with Docker Swarm
Now that we are connected to the master, let's verify that we have two containers running on each of the masters by typing “docker ps”. The containers present are “swarm” and “progrium/consul”, which are used to create and manage the cluster.
Now let's check to see if we have a swarm cluster. Type “docker node ls” to view the nodes that are participating in the cluster. The response shows us that the node is not a swarm manager. To create the swarm cluster, we type “docker swarm init” on the master. The response shows the node is now acting as a manager.
If we have more than one master, which in most cases we would, we will need to join the other members to the cluster. You can type “docker swarm join-token manager” to get the command for joining another node as a manager to the swarm.
Now we need to log in to the other two masters and run this command, following the same approach as connecting to the first master but changing the port to 2201 and 2202, respectively.
Once all the masters have joined the swarm, we connect back to the first master and type “docker node ls”, which will show all the masters that have joined the swarm as managers. Notice the one marked Leader.
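Putting the cluster-creation steps together, the whole flow looks something like this (the token and leader IP are placeholders that come from the join-token output):

# On the first master: initialize swarm mode.
docker swarm init

# Still on the first master: print the join command for additional managers.
docker swarm join-token manager

# On each of the other masters: paste the printed command, e.g.
docker swarm join --token <token> <leader-ip>:2377

# Back on the leader: list the nodes; one manager shows Leader under MANAGER STATUS.
docker node ls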
Note: An important concept to understand is that this is not where you will run your application containers. Those will run on the agents.
(Optional) If we needed to remove a master, we can do so by logging into that master and typing “docker node demote <name>” followed by “docker swarm leave”. Then, logging back into the leader, “docker node ls” shows that the master is in a Down status. Now we can remove it by typing “docker node rm swarm-master-xxxxx-x”. Typing “docker node ls” again will show that the master has been removed.
(Optional) To add a master back into the swarm, we just log into that master and type the “docker swarm join --token <token> host:port” command that we discovered earlier.
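For reference, the whole remove-and-rejoin sequence in one place (the node name and token are placeholders):

# On the master being removed: step down as a manager, then leave the swarm.
docker node demote swarm-master-xxxxx-x
docker swarm leave

# On the leader: the node now shows as Down; delete it from the node list.
docker node ls
docker node rm swarm-master-xxxxx-x

# To rejoin later, run the saved join command on that master.
docker swarm join --token <token> <leader-ip>:2377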
After logging back into the leader again, we can also see the details of each master by typing “docker node inspect <ID>”.
Now that we have our swarm cluster, let's review the agent nodes and their status. Type “docker -H tcp://localhost:2375 info”, which shows the swarm agents and their details.
Now we need to export the DOCKER_HOST variable to ensure that our application container will start against the agent endpoint and not on the local swarm master. Type “export DOCKER_HOST=tcp://172.16.0.6:2375”, using the IP listed as Primary in the previous output. Now when we type “docker ps” we don't see any containers.
Now we can create a container by typing “docker run -d -p 80:80 dubuqingfeng/docker-web-game” and verify that it's running. If this image is not available, you can swap in another.
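Putting those last few steps together (the Primary IP comes from the earlier “docker info” output, so yours will likely differ):

# Target the swarm endpoint rather than the local master's daemon.
export DOCKER_HOST=tcp://172.16.0.6:2375

# Launch the demo container on the agents, publishing port 80.
docker run -d -p 80:80 dubuqingfeng/docker-web-game

# Confirm the container is now running on one of the agents.
docker ps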
To verify that we can access our container, we need to locate the DNS name of the swarm agent load balancer. We can find this by selecting the swarm-agent IP resource and copying the DNS name.
Open your favorite browser, paste, and voila. We have games. Translate the page if you like and select Get Started and you can play Mario Brothers.
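You can also check from the shell; substitute the agent load balancer DNS name you copied above (a successful HTTP response confirms the container is serving traffic through the load balancer):

curl -I http://<agent-lb-dns-name>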
Summary
Azure Container Service really helps to get containers up and running and allows you to use either Swarm or DC/OS as the orchestrator. In this blog post I showed you how to stand up ACS, how to connect to it, and then how to deploy a Swarm cluster with a simple container running on it. I hope that you have enjoyed this post and feel a little more comfortable with Azure Container Service and Docker Swarm. Stay tuned for more on Docker.