AWS re:Invent 2018 Monday Night Live Wrap-Up


The first of four keynotes, Monday Night Live with Peter DeSantis, kicked off tonight at AWS re:Invent 2018. There’s always great anticipation about what AWS will roll out next, and this year did not disappoint, with several new services launched and a few more announced as coming soon. I have compiled a summary of these, along with links, so you can dive deeper.


AWS Global Accelerator – improves global application availability and performance (see the API sketch after this list)

AWS Transit Gateway – easily scale connectivity across Amazon VPCs, AWS Accounts and on-premises networks

Amazon EC2 A1 instances – up to 45% lower costs for scale-out workloads

Amazon EC2 C5n instances – offering 100 Gbps of networking throughput

Firecracker – secure and fast microVMs for serverless computing

Amazon SageMaker Neo (available this week) – deep learning performance optimization for multiple frameworks and hardware

Amazon EC2 P3dn instances (coming soon) – most powerful GPU instances, designed for distributed training with 100 Gbps of network throughput

Elastic Fabric Adapter (coming soon) – run HPC applications at scale in the cloud
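
To make the Global Accelerator announcement a bit more concrete, here is a minimal, hypothetical boto3 sketch of creating an accelerator and a listener. It assumes AWS credentials are already configured, and the names are illustrative placeholders, not a definitive recipe.

```python
# Hypothetical sketch: creating a Global Accelerator with boto3.
# Assumes credentials are configured; names are illustrative.
import uuid
import boto3

# The Global Accelerator API is served from the us-west-2 region.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="demo-accelerator",  # placeholder name
    IpAddressType="IPV4",
    Enabled=True,
    IdempotencyToken=str(uuid.uuid4()),
)

# A listener forwards traffic arriving on the accelerator's static
# anycast IPs to endpoint groups in one or more AWS Regions.
listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 80, "ToPort": 80}],
    IdempotencyToken=str(uuid.uuid4()),
)
print(listener["Listener"]["ListenerArn"])
```

From here you would attach endpoint groups (for example, Application Load Balancers in each Region) so the accelerator has somewhere to send traffic.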


If you find it difficult to keep up with all the announcements, you can go to What’s New with AWS for a comprehensive list.


Cloud Native Computing with Kubernetes

This week I am in Austin, Texas attending the KubeCon + CloudNativeCon North America 2017 conference, along with 4,000 other technologists, to discuss cloud native computing with Kubernetes. You may be wondering what this is all about and why all the interest. The event is hosted by the Cloud Native Computing Foundation (CNCF), where adopters and technologists from leading open source and cloud native communities gather to further the education and advancement of cloud native computing. Discussions focus on trends in the rapidly evolving cloud native landscape: containers, orchestration, microservices, and serverless, as well as DevOps. Cloud native computing uses an open source software stack to deploy applications as microservices, increasing agility and maintainability. Each part gets packaged in its own container, which is orchestrated and scheduled to optimize resource utilization.
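
As a hedged illustration of that packaging-and-orchestration model, here is a minimal sketch that declares one such containerized microservice as a Kubernetes Deployment using the official Python client (pip install kubernetes). It assumes a working kubeconfig; the name and image are placeholders.

```python
# Hypothetical sketch: declaring a containerized microservice as a
# Kubernetes Deployment; the scheduler places the replicas for you.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig is already set up
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-svc"),  # placeholder name
    spec=client.V1DeploymentSpec(
        replicas=3,  # the orchestrator spreads these across nodes
        selector=client.V1LabelSelector(match_labels={"app": "hello-svc"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-svc"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="hello-svc",
                        image="nginx:1.13",  # placeholder container image
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```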

Background

The CNCF was founded in 2015 and is now 160 members strong and growing. Its purpose is to provide an open, cloud-neutral, container-native technology stack that enables cloud portability and avoids vendor lock-in. Today the CNCF hosts 14 projects, compared to only 4 projects in 2016, which gives you an idea of the interest in and adoption rate of cloud native technologies. Refer to the CNCF Landscape for a map of the most popular projects and product offerings in the cloud native space. The currently hosted projects are Kubernetes, Prometheus, OpenTracing, Fluentd, linkerd, gRPC, CoreDNS, containerd, rkt, CNI, Envoy, Jaeger, Notary, and TUF.

Announcements

Some of the more notable announcements were around the upcoming Kubernetes v1.9 release, which promotes the core Workloads API (DaemonSet, Deployment, ReplicaSet, and StatefulSet) to stable as apps/v1. In addition, there was mention of the Prometheus v2.0 release and the v1.0 releases of fluentd, Jaeger, CoreDNS, and containerd. These are pretty significant milestones. Recently, a new Certified Kubernetes Conformance Program was announced. This gives enterprise organizations confidence that workloads running on any Certified Kubernetes distribution or platform will work correctly on other Certified Kubernetes distributions or platforms. To date, the CNCF has certified offerings from 40 vendors.

Intel’s Imad Sousou took the stage to talk about the launch of Kata Containers, which combines the speed of containers with the security of VMs. Kata Containers is a new open source project, hosted by the OpenStack Foundation, building extremely lightweight virtual machines that seamlessly plug into the container ecosystem. It merges technology from Intel Clear Containers and Hyper’s runV. The project is getting a lot of attention and is one to keep an eye on.

Next up, Dianne Marsh, Director of Engineering at Netflix, talked about the relationship between tools and culture and how that impacts technology choices. Netflix developed Spinnaker, an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence. It was built to work across both AWS and Google Cloud. Netflix performs around 8,000 orchestrations per day and lets its engineers decide on their own deployment strategies. The takeaway: choose your tools wisely, and don’t fight your culture.

Adrian Cockcroft of AWS took the stage to talk about some of the principles behind cloud computing: pay-as-you-go pricing, self-service, globally distributed services, availability, turning off resources to maximize utilization, and immutable code deployments. AWS has built an open source CNI plugin that anyone can use with their Kubernetes clusters on AWS. It allows you to natively use Amazon VPC networking with your Kubernetes pods, and it works by creating multiple ENIs with secondary IPs and then assigning those to pods. A recent survey showed 63% of Kubernetes deployments running on AWS, which is pretty astounding. There was mention of EKS (currently in preview), a highly available, scalable managed Kubernetes service on AWS. And if you haven’t heard about Fargate, this is one to check out: it’s a technology that allows you to use containers without having to manage the underlying instances.
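
To give a feel for what that CNI plugin automates, here is a hedged sketch of the raw EC2 calls involved: create an extra ENI, attach it to a worker node, and allocate secondary private IPs that can then be handed to pods. This is not the plugin’s actual code, and all resource IDs are placeholders.

```python
# Hypothetical sketch of the EC2 primitives behind the VPC CNI plugin.
# All IDs are placeholders, not real resources.
import boto3

ec2 = boto3.client("ec2")

# Create an additional ENI in the worker node's subnet.
eni = ec2.create_network_interface(
    SubnetId="subnet-00000000",   # placeholder subnet
    Groups=["sg-00000000"],       # placeholder security group
    Description="pod networking sketch",
)
eni_id = eni["NetworkInterface"]["NetworkInterfaceId"]

# Attach it to the worker node (device index 1 = the second ENI).
ec2.attach_network_interface(
    NetworkInterfaceId=eni_id,
    InstanceId="i-00000000",      # placeholder worker node
    DeviceIndex=1,
)

# Allocate a pool of secondary private IPs; the plugin assigns one of
# these per pod, giving pods native VPC addresses.
ec2.assign_private_ip_addresses(
    NetworkInterfaceId=eni_id,
    SecondaryPrivateIpAddressCount=4,
)
```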

Summary

For those of you who are running Kubernetes, or who are planning to, you have choices. You can run Kubernetes in your data center on bare metal, or in AWS, Azure, Google Cloud, Oracle Cloud, IBM Cloud, Alibaba Cloud, OpenShift, and Platform9, to name a few. In other words, you have plenty of deployment options, with plugins and integrations for just about everything. The container ecosystem is thriving and growing fast, especially around Kubernetes.


Resources

If you are just starting to learn Kubernetes (K8s for short), here are a few resources:

Introduction to Kubernetes

Fundamentals of Containers, Kubernetes, and Red Hat OpenShift

Katacoda

AWS Workshop for Kubernetes

Kubernetes Tutorials


Good luck on your journey.


Single Instance Virtual Machine SLA in Azure

Were you aware that Microsoft Azure offers a single instance virtual machine SLA? SLA stands for service level agreement, a contract between you, the end user, and Microsoft that defines the level of service you can expect; if that level of service is not provided, you may claim a financial credit. Yes, that’s right: Microsoft Azure now offers a single instance VM SLA, as long as you use premium storage for the OS disk and all data disks. Microsoft guarantees VM connectivity at least 99.9% of the time, which equates to about 43 minutes per month of potential unavailability.
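
Here is a quick back-of-envelope check of that 43-minute figure (assuming a 30-day month), alongside the 99.95% availability set SLA discussed below:

```python
# Back-of-envelope SLA math, assuming a 30-day month.
minutes_per_month = 30 * 24 * 60  # 43,200 minutes

for sla in (0.999, 0.9995):
    allowed_downtime = minutes_per_month * (1 - sla)
    print(f"{sla:.2%} SLA -> up to {allowed_downtime:.1f} min/month of downtime")

# 99.90% SLA -> up to 43.2 min/month  (single VM on premium storage)
# 99.95% SLA -> up to 21.6 min/month  (two or more VMs in an availability set)
```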


You may ask, “Why would I want to deploy a single instance VM?” In many situations, a customer has an application that is not built for scaling out, or scaling out would be cost prohibitive, and the customer is fine with running the application in a single VM. In fact, many customers are used to deploying single VMs in their data center and relying on live migration or vMotion. When you operate in the cloud, however, you need to think differently to achieve high availability.

If high availability and a higher-level SLA are needed, then you should deploy at least two VMs into an availability set. This provides a 99.95% uptime guarantee. An availability set is a logical grouping of VMs that spans fault and update domains, so in the event of a planned maintenance event or an unplanned outage, at least one VM will remain running. This works because hardware clusters in Azure are divided into update and fault domains, defined by hosts that share a common update cycle or common physical infrastructure. Each cluster supports a range of VM sizes, and when the first VM of an availability set is deployed, the hardware cluster is chosen based on the VM sizes it supports. By default, availability sets are configured with 5 update domains (configurable up to 20) and 3 fault domains.
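
For reference, here is a minimal, hypothetical sketch of creating such an availability set with the Azure SDK for Python (azure-mgmt-compute). The subscription ID and resource names are placeholders; the “Aligned” SKU is what you use with managed disks.

```python
# Hypothetical sketch: creating an availability set with azure-mgmt-compute.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import AvailabilitySet, Sku

compute = ComputeManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="00000000-0000-0000-0000-000000000000",  # placeholder
)

avset = compute.availability_sets.create_or_update(
    resource_group_name="demo-rg",       # placeholder resource group
    availability_set_name="demo-avset",  # placeholder name
    parameters=AvailabilitySet(
        location="eastus",
        platform_update_domain_count=5,  # the default; configurable up to 20
        platform_fault_domain_count=3,   # the default
        sku=Sku(name="Aligned"),         # required for managed disks
    ),
)
print(avset.id)
```

VMs placed into this set at creation time are then spread across those update and fault domains automatically.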

If you are still using VMs with unmanaged disks (traditional storage accounts), it is recommended that you convert those VMs to Managed Disks. If you are not familiar with Managed Disks, stay tuned; I will cover them in a future blog post.

Azure Usage and Quotas

Recently Microsoft added a feature that I first requested in early 2016. Back then, I would hear from customers about failed deployments, only to find out that they had hit the limit on their core count. All they needed to do was open a ticket to request a core quota increase. The problem came when someone else had already increased the quota, leaving you with no idea what the actual limit was at that point in time.

Eventually Microsoft exposed the current core quota in the portal when you submitted a ticket for a quota increase. That was better, but we still needed more information. Recently Microsoft added the capability to see Azure usage and quotas directly: open the Subscriptions blade, select a subscription, and click Usage + quotas. You can also get to the same place through the Billing feature, which is still in preview at the time of this writing.

Here you can see a few quotas listed along with their current usage. Only a small subset is listed today, but expect more to be added soon. There is also an integrated support experience within this blade that lets you request an increase.

Note: Core quotas are tracked per region for each subscription, so you need to request an increase for the specific region you are deploying into, as well as for the VM series and size.
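
Since quotas are per region, it can also be handy to check them programmatically. Here is a hedged sketch using the Azure SDK for Python (azure-mgmt-compute); the subscription ID is a placeholder.

```python
# Hypothetical sketch: listing compute usage against quota for one region.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="00000000-0000-0000-0000-000000000000",  # placeholder
)

# Quotas are per region, so pass the region you deploy into.
for usage in compute.usage.list("eastus"):
    print(f"{usage.name.localized_value}: {usage.current_value}/{usage.limit}")
```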

Free Trial subscriptions are not eligible for limit or quota increases; you will need to upgrade to a pay-as-you-go (PAYG) subscription.

For a complete listing of all the subscription and service limits, quotas, and constraints, go to this link.


Azure in a Box

This summer Microsoft is expected to go GA with its Azure Stack offering. If you are not familiar with it, think of it as “Azure in a box” that gets installed in your datacenter, or is offered up by a service provider for you to consume. The release of Azure Stack has been a long time coming, as it was first announced in 2015. Currently, Technical Preview 3 (TP3) is available for evaluation, so some of this information could change. Everything I am about to cover is publicly available on Microsoft’s website.

A Walk Down Memory Lane...

Remember those days when you needed to spin up a VM to test something? There weren’t a lot of choices. You either reached out to IT to request that they spin up a VM and give you access, which typically meant an approval process that sometimes took weeks, or you spun one up on your laptop using a type 2 hypervisor, with all its limitations. For those of you graybeards who have been around the industry as long as I have, you didn’t even have the luxury of virtualization; you had to request a physical server. How did we ever get anything done back then?

Not so long ago, companies like Amazon, Microsoft, and Google started to offer hyperscale environments where you could allocate resources and pay for them as you use them. The sheer size of these environments allowed them to offer resources at a low price point, letting companies move from a CAPEX to an OPEX model, and the evolution of the public cloud began. However, many companies have security concerns or compliance requirements that keep them from taking advantage of the public cloud and all it has to offer, or they simply wish to have complete control of their data and resources.

Enter the private cloud, hosted in your datacenter or a service provider’s datacenter. Companies must evolve, and to stay relevant in the industry they need to focus on providing value to their customers. In the case of the private cloud, those customers may be your own internal business units, and these BUs need self-service deployment and agility.

Prior to Azure Stack there was Azure Pack, which provided many of these capabilities and was a big step in the right direction, but something was missing. What if you had a workload running on Azure Pack and wanted to run it in public Azure without making any changes? That was not easily done, because the APIs were not consistent. Meanwhile, Microsoft Azure had already seen so much growth that a new user experience was in order, which gave Microsoft a chance to build a new portal and a new deployment model (Azure Resource Manager). This left a lot of users in a state of confusion, having to learn and support both.

Fast Forward to Today...

Now that Azure Stack has been developed, you get consistency across both private and public clouds: the APIs are consistent, the deployment models are consistent, and the user experience is consistent. Users can log in with the same identities using AD FS or Azure AD, and developers can use the same tools they are already familiar with. Think of Azure Stack as an extension of Azure, bringing agility and faster innovation to cloud computing, all behind your firewall. This allows for a true hybrid cloud experience.
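
To illustrate that API consistency, here is a hedged sketch: the same Azure SDK for Python client can target either public Azure or an Azure Stack ARM endpoint just by swapping the management URL. The endpoint and subscription ID below are placeholders, and a real Azure Stack deployment would also pin an API profile its services support.

```python
# Hypothetical sketch: one SDK, two clouds; only the endpoint changes.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder

# Public Azure: the default ARM endpoint.
public_client = ComputeManagementClient(credential, subscription_id)

# Azure Stack: point the same client at the on-premises ARM endpoint.
stack_client = ComputeManagementClient(
    credential,
    subscription_id,
    base_url="https://management.local.azurestack.external",  # placeholder
)
```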

Purchasing, Licensing, Pricing and Support...

At GA, Azure Stack will be available through three hardware vendors, Dell EMC, HPE, and Lenovo, on preapproved hardware, delivered as an integrated system with the software preinstalled. Cisco recently announced that it will be joining the other three hardware manufacturers, so expect to see its offering soon after GA. I wouldn’t be surprised to see other manufacturers jump in as well.

The software licensing will be available via EA and CSP only. If you have an existing EA Azure subscription, you can use that same subscription for consuming Azure Stack, and CSP providers will be able to use the same tenant subscriptions for customers as well. MSDN, free trial, and BizSpark offers cannot be used with Azure Stack. You will be able to use on-premises Windows Server and SQL Server licenses with Azure Stack as long as you comply with the product licensing terms; if you bring your own license, you will only be charged for consumption of the base VM.

Azure Stack services will be priced the same way as Azure, on a pay-as-you-use model. At GA, the following services will be charged on a consumption basis: Virtual Machines, Azure Storage, App Service, and Azure Functions. You will be billed for Azure Stack usage as part of your regular Azure invoice. See the chart below for how those services will be metered.

[Chart: Azure Stack service metering. Image credit: Microsoft]

With Azure Stack, there will be two support contracts: one purchased from the hardware vendor and one from Microsoft. For customers who already have a Premier or Azure support contract with Microsoft, it will cover Azure Stack as well.

Summary

Azure Stack brings purpose-built integrated systems to your datacenter, with the speed and agility to help you modernize your applications across a hybrid environment. It allows developers to build applications using a consistent set of services, tools, and processes, and it lets operations deploy to the location that meets the business’s technical and regulatory requirements, all while paying only for what you use.

For additional information on Azure Stack, refer to this link.