Single Instance Virtual Machine SLA in Azure

Were you aware that Microsoft Azure offers a single instance Virtual Machine SLA? SLA stands for service level agreement, which is a contract between you, the end user, and Microsoft wherein a level of service is expected. If the specified level of service is not provided, the end user may claim a financial credit. Yes, that’s right, Microsoft Azure now offers a single instance VM SLA as long as you use premium storage for the OS disk and data disks. Microsoft guarantees VM connectivity at least 99.9% of the time, which equates to about 43 minutes per month of potential unavailability.
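For a quick sanity check on those numbers, here is a minimal PowerShell sketch (simple arithmetic, not values pulled from the SLA document) that converts an uptime percentage into allowed downtime per 30-day month:

# Allowed downtime per 30-day month for a given uptime guarantee
$minutesPerMonth = 30 * 24 * 60   # 43,200 minutes
foreach ($sla in 0.999, 0.9995) {
    "{0:P2} uptime allows roughly {1:N1} minutes of downtime per month" -f $sla, ($minutesPerMonth * (1 - $sla))
}

Running it shows about 43.2 minutes for the 99.9% single instance SLA and about 21.6 minutes for the 99.95% availability set SLA discussed below.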


You may ask, “Why would I want to deploy a single instance VM?” In many situations a customer has an application that is not built for scaling out, or scaling out would be cost-prohibitive, and the customer is fine with running the application in a single VM. In fact, many customers are used to deploying single VMs in their data center and taking advantage of live migration or vMotion. However, when you operate in the cloud you need to think differently to obtain high availability.

If high availability and a higher-level SLA are needed, then you should deploy at least two VMs into an availability set. This provides a 99.95% uptime guarantee. An availability set is a logical grouping of VMs that spans fault and update domains. In the event of a planned maintenance event or an unplanned event, at least one VM will remain running. This is because hardware clusters in Azure are divided into update and fault domains, which are defined by hosts that share a common update cycle or physical infrastructure. In addition, these clusters support a range of VM sizes, and when the first VM of an availability set is deployed, the hardware cluster is chosen based on the VM sizes it supports. By default, availability sets are configured with 5 update domains (configurable up to 20) and 3 fault domains.
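If you prefer to script this rather than use the portal, here is a minimal sketch using the AzureRM PowerShell module; the resource group, names, and VM size are placeholders for illustration:

# Create an availability set with the default update/fault domain counts
$avSet = New-AzureRmAvailabilitySet -ResourceGroupName "myRG" -Name "myAvSet" `
    -Location "West US" -PlatformUpdateDomainCount 5 -PlatformFaultDomainCount 3

# Reference the set when building the VM configuration so the VM is placed in it
$vmConfig = New-AzureRmVMConfig -VMName "myVM01" -VMSize "Standard_DS1_v2" -AvailabilitySetId $avSet.Id

Keep in mind that a VM can only be placed into an availability set at creation time, so plan for this up front.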

If you are still using VMs with unmanaged disks (traditional storage accounts), it is recommended to convert those VMs to Managed Disks. If you are not familiar with Managed Disks, stay tuned as I will be talking about them in a future blog post.
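As a teaser, the conversion itself only takes a couple of AzureRM cmdlets; this is a hedged sketch with placeholder names (the VM has to be deallocated first, and note that the conversion is one-way):

# Deallocate the VM, then convert its unmanaged disks to Managed Disks
Stop-AzureRmVM -ResourceGroupName "myRG" -Name "myVM01" -Force
ConvertTo-AzureRmVMManagedDisk -ResourceGroupName "myRG" -VMName "myVM01"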

Azure Usage and Quotas

Recently Microsoft added a feature that I requested in early 2016. Back then I would hear from customers about failed deployments, only to find out that the cause was the limit on the core count. All they needed to do was open a ticket to request a core quota increase. The problem came when someone else had already increased the quota, so you had no idea what the actual limit was at that point in time.

Eventually Microsoft exposed the current core quota in the portal when submitting a ticket for a quota increase. That was better, but we still needed to see more info. Recently Microsoft added the capability to see Azure usage and quotas directly: open the Subscriptions blade, select a subscription, and click Usage + quotas. You can also get to the same place through the Billing feature, which is still in preview at the time of this writing.

Here you can see a few quotas listed along with their current usage. Only a small subset is listed today, but expect to see more added soon. There is also an integrated support experience within this blade that allows you to request an increase.

Note: Core quotas are per region accessible by your subscription, so you need to request an increase based on the region you are deploying into as well as the VM series and size.
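You can pull the same numbers from PowerShell on a per-region basis; this sketch assumes the AzureRM module is installed and you are already logged in with Login-AzureRmAccount:

# Show current compute usage against quota limits for a region
Get-AzureRmVMUsage -Location "West US" |
    Select-Object @{N='Quota';E={$_.Name.LocalizedValue}}, CurrentValue, Limit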

Free Trial subscriptions are not eligible for limit or quota increases; you will need to upgrade to a pay-as-you-go (PAYG) subscription.

For a complete listing of all the subscription and service limitations, quotas and constraints, go to this link.

 

Azure in a Box

This summer Microsoft is expected to go GA with its Azure Stack offering. If you are not familiar with it, think of it as “Azure in a box” that gets installed in your datacenter or offered up by a service provider for you to consume. The release of Azure Stack has been a long time coming, as it was first announced in 2015. Currently Technical Preview 3 (TP3) is available for evaluation, so some of this info could change. Everything that I am about to talk about is publicly available on Microsoft’s website.

A Walk Down Memory Lane..

Remember those days when you needed to spin up a VM to test something? There weren’t a lot of choices. You had to either reach out to IT to request that they spin up a VM for you and provide you access, or you could spin one up on your laptop using a type 2 hypervisor, which had its limitations. Typically, this had to go through an approval process that sometimes took weeks. For those of you graybeards who have been around the industry as long as I have, you didn’t even have the luxury of virtualization and had to request a physical server. How did we ever get anything done back then?

Not so long ago companies like Amazon, Microsoft, and Google started to offer up hyperscale environments where you could allocate resources and pay for them as you use them. The sheer size of these environments allowed them to offer resources at a low price point, which let companies move from a CAPEX to an OPEX model and started the evolution of the public cloud. However, there are many companies that have security concerns or compliance requirements that keep them from taking advantage of the public cloud and all it has to offer, or that simply wish to have complete control of their data and resources.

Enter the private cloud, hosted in your datacenter or a service provider’s datacenter. Companies must evolve, and to stay relevant in the industry they need to focus on providing value to their customers. In the case of the private cloud, those customers could be your own internal business units (BUs), which need self-service deployment and agility.

Prior to Azure Stack there was Azure Pack, which provided many of these capabilities and was a big step in the right direction, but there was something missing. What if you had a workload running on Azure Pack and you wanted to run it in public Azure without having to make any changes? That was not easily done, as the APIs were not consistent. Microsoft Azure had already seen so much growth that a new user experience was in order, which gave Microsoft a chance to build a new portal and a new deployment model (Resource Manager). This left a lot of users in a state of confusion, having to keep up with learning and supporting both.

Fast Forward to Today..

Now that Azure Stack has been developed, you get consistency across both private and public clouds. The APIs are consistent, the deployment models are consistent, and the user experience is consistent. Users can now log in with the same identities utilizing ADFS or Azure AD, and developers can use the same tools that they are familiar with. Think of Azure Stack as an extension of Azure, bringing agility and faster innovation to cloud computing, all behind your firewall. This allows for a true hybrid cloud experience.

Purchasing, Licensing, Pricing and Support..

At GA, Azure Stack is going to be available through three hardware vendors (Dell EMC, HPE, and Lenovo) on preapproved hardware, delivered as an integrated system with the software preinstalled. Cisco recently announced that it will be joining the other three hardware manufacturers; expect to see its offering soon after GA. I wouldn’t be surprised to see other manufacturers jump in on this as well.

The software licensing will be available via EA and CSP only. If you have an existing EA Azure subscription, you can use that same subscription for consuming Azure Stack, and CSP providers will be able to use the same tenant subscriptions for their customers as well. MSDN, Free Trial, and BizSpark offers cannot be used with Azure Stack. You will be able to use on-premises Windows Server and SQL Server licenses with Azure Stack as long as you comply with product licensing. If you BYOL, you will only be charged for consumption of the base VM.

Azure Stack services will be priced the same way as Azure, on a pay-as-you-use model. At GA, the following services will be charged on a consumption basis: Virtual Machines, Azure Storage, App Service, and Azure Functions. You will be billed for Azure Stack usage as part of your regular Azure invoice. See the chart below for how those services will be metered.

Image credit to Microsoft

With Azure Stack, there will be two support contracts: one purchased from the hardware vendor and one from Microsoft. For customers who have an existing Premier or Azure support contract with Microsoft today, it will cover Azure Stack as well.

Summary

Azure Stack brings purpose-built integrated systems to your datacenter, with the speed and agility to help you modernize your applications across a hybrid environment. It allows developers to build applications using a consistent set of services, tools, and processes. Operations teams are now able to deploy to the location that meets the needs of their business, including technical and regulatory requirements, all while paying only for what is used.

For additional information on Azure Stack refer to this link.

 

Alexa Skill App for Azure Infrastructure Exam

Like a lot of you out there, I received an Echo Tap from my daughter for Christmas. The funny thing is I had the same idea as a gift and bought an Echo for my wife and grandkids. I also received an Echo Dot through my company. Needless to say, Alexa is all over my house. For those of you who are not familiar, Alexa is the wake word that triggers the Echo to listen to your command. After doing a little research to see what Alexa was capable of, I noticed there were skills that you could enable for further functionality. As I have a passion for learning new things, I thought how cool it would be if I could build my own Alexa skill. What if Alexa could help me study for an exam? Well, now she can.

As my focus is on cloud technologies, I searched through the skills library to see if I could find anything around Azure and AWS. It turns out that there were a couple around AWS but nothing around Microsoft, specifically Azure. So I knew what my next project was going to be. I did a little research and found some guidance around building your own Alexa skills app, based on the trivia framework.

The really cool thing about this is that I could use the AWS Lambda service, which allows you to run code in the cloud without having to worry about deploying or managing servers. If the traffic to my skill all of a sudden ramps up, the service will scale out as needed, and when the traffic slows it will scale back in. This is what is referred to as “serverless computing”. You mean I can run code in the cloud without having to worry about the server or the infrastructure, it scales as needed, and it’s highly available? The answer is yes. This is a very powerful service which I feel is going to get a lot of attention moving forward.

I simply created a Lambda function in AWS, added my code, set the configuration, and added a trigger for the Alexa Skills Kit. I then logged into the developer console and opened the Alexa Skills Kit, where I defined my skill. This included providing skill information, defining the interaction model, setting the configuration using my newly created Lambda function ARN (Amazon Resource Name), configuring the publishing information, and specifying the privacy and compliance settings. The capability exists within the developer portal to test and validate as well. Once my skill was tested and validated, I submitted it for certification. The certification process can take up to 7 days while Amazon verifies that you have met the security requirements and that the functional and user experience tests pass. If your skill doesn’t meet the criteria, you will be notified and asked to resolve the issues and resubmit. Make sure you do not infringe on anyone’s intellectual property.

My Alexa skill is based on studying for the 70-533 Azure Infrastructure exam. I was excited to receive an email that my skill had been certified and published to the skill store. I encourage you to check it out and please rate it or provide a review. The name of my skill is “Azure Quiz Buddy” and you can search for it or locate it under the Education and Reference category.

I hope you will enjoy my new Alexa skill and be inspired to go and build your own.

Using Terraform Across Multiple Cloud Providers

Are you utilizing multiple cloud providers and finding it difficult to keep track of how to compose the different definition or configuration files? I feel your pain. Depending on which cloud provider you are using, you need to use a specific format when building your templates, scripts, etc. Previously, most deployments were done imperatively, meaning that you tell the machine how to do something and, as a result, what you want to happen will happen. This works and can be useful in certain scenarios. However, the preferable way to deploy is to use a declarative model: you tell the machine what you would like to happen and it figures out how to do it. What if there was a way to create definition files that could be used across multiple cloud platforms, wherein a common language is used to lay down the infrastructure? That would be beneficial, right? Well, you are in luck! Have you heard of HashiCorp, more specifically Terraform? In this post I am going to show you how Infrastructure as Code works by using Terraform across multiple cloud providers.

So what is Terraform?

Terraform is one of many tools available in the HashiCorp Ecosystem.

  • Atlas – commercial product. Combines Packer, Terraform, and Consul.
  • Packer – tool for creating images.
  • Terraform – tool for creating, combining, and modifying infrastructure.
  • Consul – tool for service discovery, service registry, and health checks.
  • Serf – tool for cluster membership and fault detection.
  • Vagrant – tool for managing dev environments

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. It can be used across multiple providers such as Azure, AWS, and Google Cloud, among many others. Basically, you define the components or resources in a configuration file, tell it which provider to use, generate an execution plan prior to deployment, and then execute the build. So, how is this done? It’s done through abstraction of resources and providers.
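In practice that workflow boils down to a handful of CLI commands run from the folder containing your .tf files (shown here in PowerShell, which is what I use on Windows):

cd C:\terraform\Azure   # folder containing the configuration files
terraform plan          # build the execution plan and show what would change
terraform apply         # create (or modify) the resources described in the plan
terraform show          # inspect the state after the deployment
terraform destroy       # tear everything back down when you are done

We will walk through each of these against Azure, AWS, and GCE below.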

Key features of terraform are:

  • Infrastructure as Code – process of managing and provisioning infrastructure and configuration through high-level configuration syntax.
  • Execution Plans – shows what the deployment will do prior to actually doing anything. Helps avoid failed deployments.
  • Resource Graph – allows terraform to map out the resources and dependencies to deploy the infrastructure as efficiently as possible.
  • Change Automation – helps avoid human error while changes are orchestrated.

This all sounds good, but aren’t there other tool sets out there that do the same thing? Sure, but the power comes from being able to use one tool across multiple providers. From a deployment perspective I could use Resource Manager in Azure, CloudFormation in AWS, or Cloud Deployment Manager in Google Cloud, but they are all specific to their respective cloud. Terraform can be used across all three. Let’s be clear though: Terraform is not a configuration management tool such as Chef or Puppet. You can continue to use those tools for their intended purpose by using provisioners once a resource has been created. Terraform focuses on the higher-level abstraction of the datacenter and its services. It is an open source tool with a simple, easy-to-read syntax that allows for standardization within organizations.

Installation

Terraform can be installed on your workstation and is distributed as a packaged zip archive that supports multiple platforms and architectures.

  • Download the appropriate package for your system. For me, it’s Windows 64-bit, but Linux distros are supported as well.
  • Create a folder at the root of the OS drive called terraform and unzip the package to this folder. A single executable will be present.
  • Ensure the install directory is included in the PATH variable for the user. For Windows you can run the following in PowerShell.
$sep = if ($env:PATH.EndsWith(";")) { "" } else { ";" }
[Environment]::SetEnvironmentVariable("PATH", "${env:PATH}${sep}C:\terraform", "User")

Once that is done, we are ready. Verify the installation by opening a terminal session and typing “terraform”. This can be done using Cmd, PowerShell, or VS Code (with the PowerShell extension).
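For example, the following confirms the binary is on your PATH (the exact version reported will depend on when you downloaded it):

terraform           # with no arguments, prints the list of available commands
terraform version   # shows the installed Terraform version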

Deployment across Multiple Providers

Now I will demonstrate deploying some simple resources across Microsoft’s Azure, Amazon’s AWS, and Google’s Compute Engine using Terraform.

Microsoft Azure

There are a few prerequisites for getting started.

  • ensure you have an Azure subscription
  • ensure you have your client_id, client_secret, subscription_id, and tenant_id
  • set up two entities – an AAD Application and an AAD Service Principal
  • have Azure PowerShell tools installed (Windows users) and/or Azure CLI for Linux distros.

To set up Terraform to access Azure, there is a script that you can download and run that will create the AAD Application and AAD Service Principal. Download it, execute the “./azure-setup setup” command, and log in to your Azure subscription with admin privileges. Provide an arbitrary application name such as “terraform” and supply a password. Once the script has completed, it will display the values for the needed prerequisites.
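If you would rather not run the script, the same two entities can be created directly with the AzureRM PowerShell cmdlets; this is a rough sketch under the assumption that you grant the service principal Contributor rights on the subscription (the display name, URLs, and password are placeholders):

# Create the AAD application and service principal, then grant it access
$app = New-AzureRmADApplication -DisplayName "terraform" -HomePage "https://terraform.local" `
    -IdentifierUris "https://terraform.local" -Password "PlaceholderP@ssw0rd!"
New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId
New-AzureRmRoleAssignment -RoleDefinitionName "Contributor" -ServicePrincipalName $app.ApplicationId

# client_id is $app.ApplicationId and client_secret is the password above;
# subscription_id and tenant_id can be read from Get-AzureRmSubscription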

I created a new file called “azure.tf” with the contents below and stored it in C:\terraform\Azure. Plug in your values for the provider.

variable "resourcesname" {
  default = "helloterraform"
}

# Configure the Microsoft Azure Provider
provider "azurerm" {
    client_id       = "your_client_id"
    client_secret   = "your_client_secret"
    subscription_id = "your_subscription_id"
    tenant_id       = "your_tenant_id"
}

# create a resource group if it doesn't exist
resource "azurerm_resource_group" "helloterraform" {
    name = "terraformtest"
    location = "West US"
}

# create virtual network
resource "azurerm_virtual_network" "helloterraformnetwork" {
    name = "vnet1"
    address_space = ["10.0.0.0/16"]
    location = "West US"
    resource_group_name = "${azurerm_resource_group.helloterraform.name}"
}

# create subnet
resource "azurerm_subnet" "helloterraformsubnet" {
    name = "subnet1"
    resource_group_name = "${azurerm_resource_group.helloterraform.name}"
    virtual_network_name = "${azurerm_virtual_network.helloterraformnetwork.name}"
    address_prefix = "10.0.2.0/24"
}


# create public IPs
resource "azurerm_public_ip" "helloterraformips" {
    name = "tfvm1-pip"
    location = "West US"
    resource_group_name = "${azurerm_resource_group.helloterraform.name}"
    public_ip_address_allocation = "dynamic"
}

# create network interface
resource "azurerm_network_interface" "helloterraformnic" {
    name = "tfvm1-nic"
    location = "West US"
    resource_group_name = "${azurerm_resource_group.helloterraform.name}"

    ip_configuration {
        name = "testconfiguration1"
        subnet_id = "${azurerm_subnet.helloterraformsubnet.id}"
        private_ip_address_allocation = "static"
        private_ip_address = "10.0.2.5"
        public_ip_address_id = "${azurerm_public_ip.helloterraformips.id}"
    }
}

# create storage account
resource "azurerm_storage_account" "helloterraformstorage" {
    name = "tfstor01"
    resource_group_name = "${azurerm_resource_group.helloterraform.name}"
    location = "westus"
    account_type = "Standard_LRS"
}

# create storage container
resource "azurerm_storage_container" "helloterraformstoragestoragecontainer" {
    name = "vhd"
    resource_group_name = "${azurerm_resource_group.helloterraform.name}"
    storage_account_name = "${azurerm_storage_account.helloterraformstorage.name}"
    container_access_type = "private"
    depends_on = ["azurerm_storage_account.helloterraformstorage"]
}

# create virtual machine
resource "azurerm_virtual_machine" "helloterraformvm" {
    name = "tfvm1"
    location = "West US"
    resource_group_name = "${azurerm_resource_group.helloterraform.name}"
    network_interface_ids = ["${azurerm_network_interface.helloterraformnic.id}"]
    vm_size = "Standard_A0"

    storage_image_reference {
        publisher = "Canonical"
        offer = "UbuntuServer"
        sku = "14.04.2-LTS"
        version = "latest"
    }

    storage_os_disk {
        name = "tfvm1-osdisk"
        vhd_uri = "${azurerm_storage_account.helloterraformstorage.primary_blob_endpoint}${azurerm_storage_container.helloterraformstoragestoragecontainer.name}/myosdisk.vhd"
        caching = "ReadWrite"
        create_option = "FromImage"
    }

    os_profile {
        computer_name = "hostname"
        admin_username = "daryl"
        admin_password = "PrivD3m0!"
    }

    os_profile_linux_config {
        disable_password_authentication = false
    }
}

Navigate to that same directory and type “terraform plan”. The output shows what changes Terraform will make; here we can see that there are going to be eight new resources added. (As the output is quite long, I have omitted most of it.)

Once confirmed that this is what we really want to do, we can execute by typing “terraform apply”. This will start the deployment in your Azure account, and in just a few minutes we will have a virtual machine running Ubuntu 14.04.

Here we can see all the resources that were deployed in our resource group. Weren’t there eight resources defined in our file? Yes, but some resources such as the subnet, NIC, and storage container are not exposed in this view.

We can verify (inspect) the state using “terraform show”.

Once we are done, we can clean up by removing what was installed previously. We can run “terraform plan -destroy” as a pre-check validation which shows 8 resources to destroy.

Running “terraform destroy” and confirming with “yes” will cleanup for us.

Let’s move on and see how this looks in AWS.

AWS

There are a few prerequisites for getting started.

  • ensure you have an AWS account
  • ensure you have an access key and secret key

To create an access key, from the AWS console, click on the Services drop-down and select IAM. Click on Users, select your user account, click on the Security Credentials tab, and click “Create Access Key”. You will need the Access Key ID and the Secret Access Key.

I created a new file called “aws.tf” with the contents below and stored it in C:\terraform\AWS. Plug in your values for the provider.

provider "aws" {
  access_key = "your_access_key"
  secret_key = "your_secret_key"
  region     = "us-west-2"
}

resource "aws_instance" "example" {
  ami           = "ami-d07d8db0"
  instance_type = "t2.micro"
}

Navigate to that same directory and type “terraform plan”. The output shows what changes terraform will make. Here we can see that there is going to be one new resource added.

Once confirmed that this is what we really want to do, we can execute by typing “terraform apply”. This will start the deployment in your AWS account. In just a few minutes we will have a virtual machine running Ubuntu-16.04.

We can verify (inspect) the state using “terraform show”.

Once we are done, we can clean up by removing what was installed previously. We can run “terraform plan -destroy” as a pre-check validation prior to running “terraform destroy” and confirming with “yes”.

Let’s move on and see how this looks in GCE.

Google Compute Engine

There are a few prerequisites for getting started.

  • ensure you have a Google Cloud account
  • ensure you have an authentication file in the same folder as your terraform file

To create an authentication file, from the Google Cloud console, search for API Manager. Click on the Credentials tab, click “Create credentials”, and select “Service account key”. Select “Compute Engine default service account”, select “JSON”, and click Create. Save the file as account.json.

I created a new file called “gce.tf” with the contents below and stored it in C:\terraform\Google.

// Configure the Google Cloud provider
provider "google" {
  credentials = "${file("account.json")}"
  project     = "triple-baton-144217"
  region      = "us-west1"
}

// Create a new instance
resource "google_compute_instance" "default" {
  name = "testvm1"
  machine_type = "n1-standard-1"
  zone = "us-west1-a"
  
  disk {
      image = "debian-cloud/debian-8"
  } 

  network_interface {
      network = "default"
      access_config {

      }
  }

}

// Create a rule to allow http access 
resource "google_compute_firewall" "default" {
    name = "http"
    network = "default"

    allow {
        protocol = "tcp"
        ports = ["80", "8080"]
    }
}

Navigate to that same directory and type “terraform plan”. The output shows what changes Terraform will make; here we can see that there are going to be two new resources added.

Once confirmed that this is what we really want to do, we can execute by typing “terraform apply”. This will start the deployment in your GCE account. In just a few minutes we will have a virtual machine running Debian-8.

We can verify (inspect) the state using “terraform show”.

Once we are done, we can clean up by removing what was installed previously. We can run “terraform plan -destroy” as a pre-check validation prior to running “terraform destroy” and confirming with “yes”.

Keep in mind that these are just examples to use in a test environment and are not meant for production. For production we would take advantage of defining variables and using a vault for storing credentials and secrets.

Summary

As you can see, I was able to deploy resources in Azure, AWS, and Google with very little effort. You can also see how much easier Terraform’s syntax is to read than traditional JSON-based templates such as ARM. I’m not saying that this should be the only tool in your toolbox, but you can see just how powerful and simple it is. If you are looking to deploy resources across multiple providers, take advantage of Infrastructure as Code by using Terraform. Stay tuned as I will be discussing some of the other tools in the HashiCorp ecosystem in future posts.