Using Terraform Across Multiple Cloud Providers

Are you using multiple cloud providers and finding it difficult to keep track of how to compose the different definition or configuration files? I feel your pain. Each cloud provider expects its own format when you build templates, scripts, and so on. Historically, most deployments were done imperatively: you tell the machine how to do something, step by step, and the result follows. This works and can be useful in certain scenarios. However, the preferable way of deploying is a declarative model: you tell the machine what you would like to happen, and it figures out how to get there. What if there were a way to write definition files in a common language that could be used across multiple cloud platforms to lay down the infrastructure? That would be beneficial, right? Well, you are in luck! Have you heard of HashiCorp, more specifically Terraform? In this post I am going to show you how Infrastructure as Code works by using Terraform across multiple cloud providers.

So what is Terraform?

Terraform is one of many tools available in the HashiCorp Ecosystem.

  • Atlas – commercial product. Combines Packer, Terraform, and Consul.
  • Packer – tool for creating images.
  • Terraform – tool for creating, combining, and modifying infrastructure.
  • Consul – tool for service discovery, service registry, and health checks.
  • Serf – tool for cluster membership and fault detection.
  • Vagrant – tool for managing dev environments.

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. It can be used across multiple providers such as Azure, AWS, Google Cloud, among many others. Basically you define the components or resources in your configuration file, tell it which provider to use, generate an execution plan prior to deployment, and then execute the build. So, how is this done? It’s done through abstraction of resources and providers.
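
To make that abstraction concrete, here is a minimal sketch showing that providers can even sit side by side in a single configuration. The provider and resource names are real, but all of the values are placeholders, so treat this as an illustration rather than something to apply as-is.

# Two providers declared in one configuration; Terraform loads the
# matching plugin for each and manages both sets of resources together.
provider "azurerm" {
  subscription_id = "your_subscription_id"
  client_id       = "your_client_id"
  client_secret   = "your_client_secret"
  tenant_id       = "your_tenant_id"
}

provider "aws" {
  access_key = "your_access_key"
  secret_key = "your_secret_key"
  region     = "us-west-2"
}

# Resources from different clouds can then live in the same file.
resource "azurerm_resource_group" "demo" {
  name     = "demo-rg"
  location = "West US"
}

resource "aws_instance" "demo" {
  ami           = "ami-xxxxxxxx" # placeholder AMI ID
  instance_type = "t2.micro"
}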

Key features of Terraform are:

  • Infrastructure as Code – the process of managing and provisioning infrastructure and configuration through a high-level configuration syntax.
  • Execution Plans – show what the deployment will do before actually doing anything, which helps avoid failed deployments.
  • Resource Graph – lets Terraform map out resources and their dependencies so the infrastructure is deployed as efficiently as possible (see the sketch after this list).
  • Change Automation – helps avoid human error while changes are orchestrated.
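
The dependency mapping in the Resource Graph is mostly implicit. Here is a small, hypothetical sketch (names and address ranges are arbitrary): because the subnet references the virtual network's name, Terraform adds an edge to the graph and creates the network first, with no explicit ordering required.

resource "azurerm_virtual_network" "demo" {
  name                = "demo-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = "West US"
  resource_group_name = "demo-rg"
}

# The interpolation below creates an implicit dependency on the
# virtual network, so Terraform orders the two creates correctly.
resource "azurerm_subnet" "demo" {
  name                 = "demo-subnet"
  resource_group_name  = "demo-rg"
  virtual_network_name = "${azurerm_virtual_network.demo.name}"
  address_prefix       = "10.0.1.0/24"
}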

This all sounds good, but aren’t there other tool sets out there that do the same thing? Sure, but the power comes from being able to use one tool across multiple providers. From a deployment perspective I could use Resource Manager in Azure, CloudFormation in AWS, or Cloud Deployment Manager in GCE, but each is specific to its respective cloud. Terraform can be used across all three. Let’s be clear, though: Terraform is not a configuration management tool such as Chef or Puppet. You can continue to use those tools for their intended purpose by invoking provisioners once a resource has been created (see the sketch below). Terraform focuses on the higher-level abstraction of the datacenter and its services. It is an open source tool with a simple, easy-to-read syntax that allows for standardization within organizations.
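
As a rough illustration of that handoff, the sketch below runs a remote-exec provisioner once an instance is up. The AMI, SSH user, key path, and the inline commands are all placeholder assumptions; in practice the commands would bootstrap whatever Chef or Puppet agent you already use.

resource "aws_instance" "web" {
  ami           = "ami-xxxxxxxx" # placeholder AMI ID
  instance_type = "t2.micro"

  # Runs once, after the instance is created; a typical use is
  # bootstrapping a configuration management agent that takes over from here.
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y puppet"
    ]

    connection {
      type        = "ssh"
      user        = "ubuntu"                   # assumes an Ubuntu AMI
      private_key = "${file("~/.ssh/id_rsa")}" # assumes this key pair exists
    }
  }
}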

Installation

Terraform can be installed on your workstation and is distributed as a zip archive, with packages available for multiple platforms and architectures.

  • Download the appropriate package for your system. For me, it’s Windows 64-bit, but Linux distros are supported as well.
  • Create a folder at the root of the OS drive called terraform. Unzip the package to this folder. A single executable will be present.
  • Ensure the install directory is included in the PATH variable for the user. For Windows you can run the following in PowerShell, which appends C:\terraform to the user PATH, adding a semicolon separator only if one isn’t already present.
# Append C:\terraform to the user PATH, inserting a ';' separator only if needed
[Environment]::SetEnvironmentVariable("PATH", $env:PATH + ((";C:\terraform","C:\terraform")[$env:PATH[-1] -eq ';']), "User")

Once that is done, we are ready. Verify the installation by opening a terminal session and typing “terraform”; the command should print its usage help. This can be done using Cmd, PowerShell, or VS Code (with the PowerShell extension).

Deployment across Multiple Providers

Now I will demonstrate deploying some simple resources across Microsoft Azure, Amazon AWS, and Google Compute Engine using Terraform.

Microsoft Azure

There are a few prerequisites for getting started.

  • Ensure you have an Azure subscription.
  • Ensure you have your client_id, client_secret, subscription_id, and tenant_id.
  • Set up two entities – an AAD Application and an AAD Service Principal.
  • Have the Azure PowerShell tools installed (Windows users) and/or the Azure CLI for Linux distros.

To set up Terraform to access Azure, there is a script you can download and run that will create the AAD Application and AAD Service Principal. Download it, execute “./azure-setup setup”, and log in to your Azure subscription with admin privileges. Provide an arbitrary application name such as “terraform” and supply a password. Once the script has completed, it will display the values for the needed prerequisites.

I created a new file called “azure.tf” with the contents below and stored it in C:\terraform\Azure. Plug in the values for the provider.

variable "resourcesname" {
  default = "helloterraform"
}

# Configure the Microsoft Azure Provider
provider "azurerm" {
    client_id       = "your_client_id"
    client_secret   = "your_client_secret"
    subscription_id = "your_subscription_id"
    tenant_id       = "your_tenant_id"
}

# create a resource group if it doesn't exist
resource "azurerm_resource_group" "helloterraform" {
    name = "terraformtest"
    location = "West US"
}

# create virtual network
resource "azurerm_virtual_network" "helloterraformnetwork" {
    name = "vnet1"
    address_space = ["10.0.0.0/16"]
    location = "West US"
    resource_group_name = "${azurerm_resource_group.helloterraform.name}"
}

# create subnet
resource "azurerm_subnet" "helloterraformsubnet" {
    name = "subnet1"
    resource_group_name = "${azurerm_resource_group.helloterraform.name}"
    virtual_network_name = "${azurerm_virtual_network.helloterraformnetwork.name}"
    address_prefix = "10.0.2.0/24"
}


# create public IPs
resource "azurerm_public_ip" "helloterraformips" {
    name = "tfvm1-pip"
    location = "West US"
    resource_group_name = "${azurerm_resource_group.helloterraform.name}"
    public_ip_address_allocation = "dynamic"
}

# create network interface
resource "azurerm_network_interface" "helloterraformnic" {
    name = "tfvm1-nic"
    location = "West US"
    resource_group_name = "${azurerm_resource_group.helloterraform.name}"

    ip_configuration {
        name = "testconfiguration1"
        subnet_id = "${azurerm_subnet.helloterraformsubnet.id}"
        private_ip_address_allocation = "static"
        private_ip_address = "10.0.2.5"
        public_ip_address_id = "${azurerm_public_ip.helloterraformips.id}"
    }
}

# create storage account
resource "azurerm_storage_account" "helloterraformstorage" {
    name = "tfstor01"
    resource_group_name = "${azurerm_resource_group.helloterraform.name}"
    location = "westus"
    account_type = "Standard_LRS"
}

# create storage container
resource "azurerm_storage_container" "helloterraformstoragestoragecontainer" {
    name = "vhd"
    resource_group_name = "${azurerm_resource_group.helloterraform.name}"
    storage_account_name = "${azurerm_storage_account.helloterraformstorage.name}"
    container_access_type = "private"
    depends_on = ["azurerm_storage_account.helloterraformstorage"]
}

# create virtual machine
resource "azurerm_virtual_machine" "helloterraformvm" {
    name = "tfvm1"
    location = "West US"
    resource_group_name = "${azurerm_resource_group.helloterraform.name}"
    network_interface_ids = ["${azurerm_network_interface.helloterraformnic.id}"]
    vm_size = "Standard_A0"

    storage_image_reference {
        publisher = "Canonical"
        offer = "UbuntuServer"
        sku = "14.04.2-LTS"
        version = "latest"
    }

    storage_os_disk {
        name = "tfvm1-osdisk"
        # The OS disk URI is built from the storage account's blob endpoint plus the container name
        vhd_uri = "${azurerm_storage_account.helloterraformstorage.primary_blob_endpoint}${azurerm_storage_container.helloterraformstoragestoragecontainer.name}/myosdisk.vhd"
        caching = "ReadWrite"
        create_option = "FromImage"
    }

    os_profile {
        computer_name = "hostname"
        admin_username = "daryl"
        admin_password = "PrivD3m0!"
    }

    os_profile_linux_config {
        disable_password_authentication = false
    }
}

Navigate to that same directory and type “terraform plan”. The output shows what changes Terraform will make. Here we can see that there are going to be eight new resources added: the resource group, virtual network, subnet, public IP, NIC, storage account, storage container, and virtual machine. (As the output is quite long, I have omitted most of it.)

Once we have confirmed that this is what we really want to do, we can execute by typing “terraform apply”. This starts the deployment in your Azure account. In just a few minutes we will have a virtual machine running Ubuntu 14.04.

Here we can see all the resources that were deployed in our resource group. Weren’t there eight resources in our file? Yes, but some resources such as the subnet, NIC, and storage container are not exposed in that view.

We can verify (inspect) the state using “terraform show”.

Once we are done, we can clean up by removing what was deployed. We can run “terraform plan -destroy” as a pre-check validation, which shows eight resources to destroy.

Running “terraform destroy” and confirming with “yes” will clean up for us.

Let’s move on and see how this looks in AWS.

AWS

There are a few prerequisites for getting started.

  • ensure you have an AWS account
  • ensure you have an access key and secret key

To create an access key, from the AWS console, click on the Services drop-down and select IAM. Click on Users, select your user account, click on the Security Credentials tab, and click “Create Access Key”. You will need both the Access Key ID and the Secret Access Key.

I created a new file called “aws.tf” with the contents below and stored it in C:\terraform\AWS. Plug in your values for the provider.

provider "aws" {
  access_key = "your_access_key"
  secret_key = "your_secret_key"
  region     = "us-west-2"
}

resource "aws_instance" "example" {
  ami           = "ami-d07d8db0"
  instance_type = "t2.micro"
}
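
This is about as small as a Terraform configuration gets: a provider and a single resource. If you would like the instance to show a friendly name in the EC2 console, a tags block can be added; the tag value below is arbitrary.

resource "aws_instance" "example" {
  ami           = "ami-d07d8db0"
  instance_type = "t2.micro"

  # "Name" is the tag the EC2 console displays in the instance list
  tags {
    Name = "terraform-example"
  }
}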

Navigate to that same directory and type “terraform plan”. The output shows what changes Terraform will make. Here we can see that there is going to be one new resource added.

Once we have confirmed that this is what we really want to do, we can execute by typing “terraform apply”. This starts the deployment in your AWS account. In just a few minutes we will have a virtual machine running Ubuntu 16.04.

We can verify (inspect) the state using “terraform show”.

Once we are done, we can clean up by removing what was deployed. We can run “terraform plan -destroy” as a pre-check validation prior to running “terraform destroy” and confirming with “yes”.

Let’s move on and see how this looks in GCE.

Google Compute Engine

There are a few prerequisites for getting started.

  • ensure you have a Google Cloud account
  • ensure you have an authentication file in the same folder as your terraform file

To create an authentication file, from the GCE portal, search for API Manager. Click on the Credentials tab, click “Create Credentials”, and select “Service account key”. Select “Compute Engine default service account”, select “JSON”, and click Create. Save the downloaded file as account.json.

I created a new file called “gce.tf” with the contents below and stored it in C:\terraform\Google.

// Configure the Google Cloud provider
provider "google" {
  credentials = "${file("account.json")}"
  project     = "triple-baton-144217"
  region      = "us-west1" // "us-west1-a" is a zone; the provider expects a region
}

// Create a new instance
resource "google_compute_instance" "default" {
  name = "testvm1"
  machine_type = "n1-standard-1"
  zone = "us-west1-a"
  
  disk {
      image = "debian-cloud/debian-8"
  } 

  network_interface {
      network = "default"
      access_config {

      }
  }

}

// Create a rule to allow http access 
resource "google_compute_firewall" "default" {
    name = "http"
    network = "default"

    allow {
        protocol = "tcp"
        ports = ["80", "8080"]
    }
}
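
One thing to be aware of: as written, the firewall rule applies to every instance on the default network. If you want to scope it, target_tags (a real argument on google_compute_firewall; the tag name below is arbitrary) restricts the rule to matching instances.

// Apply the rule only to instances carrying the "http-server" tag
resource "google_compute_firewall" "default" {
  name    = "http"
  network = "default"

  allow {
    protocol = "tcp"
    ports    = ["80", "8080"]
  }

  target_tags = ["http-server"]
}

The instance would then need tags = ["http-server"] in its own block for the rule to apply to it.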

Navigate to that same directory and type “terraform plan”. The output shows what changes Terraform will make. Here we can see that there are going to be two new resources added.

Once we have confirmed that this is what we really want to do, we can execute by typing “terraform apply”. This starts the deployment in your GCE account. In just a few minutes we will have a virtual machine running Debian 8.

We can verify (inspect) the state using “terraform show”.

Once we are done, we can clean up by removing what was deployed. We can run “terraform plan -destroy” as a pre-check validation prior to running “terraform destroy” and confirming with “yes”.

Keep in mind that these are just examples to use in a test environment and are not meant for production. For production we would take advantage of defining variables and storing credentials and secrets in a secret store such as HashiCorp Vault, rather than hard-coding them in the configuration files (see the sketch below).
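
As a sketch of the variables approach for the Azure provider, the credentials become input variables instead of literals. The values can then be supplied from a terraform.tfvars file or from TF_VAR_-prefixed environment variables (for example TF_VAR_client_id), so they never appear in source.

# Declare the secrets as input variables instead of hard-coded literals
variable "client_id" {}
variable "client_secret" {}
variable "subscription_id" {}
variable "tenant_id" {}

provider "azurerm" {
  client_id       = "${var.client_id}"
  client_secret   = "${var.client_secret}"
  subscription_id = "${var.subscription_id}"
  tenant_id       = "${var.tenant_id}"
}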

Summary

As you can see, I was able to deploy resources in Azure, AWS, and Google with very little effort. You can also see how much easier Terraform’s syntax is to read than raw JSON formats such as ARM templates. I’m not saying that this should be the only tool in your toolbox, but you can see just how powerful and simple it is. If you are looking to deploy resources across multiple providers, take advantage of Infrastructure as Code by using Terraform. Stay tuned, as I will be discussing some of the other tools in the HashiCorp Ecosystem in future posts.