Today, in part 3 of this series, I want to discuss some of the options available when deploying a Service Fabric cluster. In part 2 we presented an overview of Azure Service Fabric. If you haven't done so already, you may want to start at the beginning of the series, as it builds the foundation for today's topic.
Deploy a Service Fabric Cluster
Note: You must have enough cores available in the region where you plan to deploy your Service Fabric cluster. You can verify this with the command below, inserting the region you plan to deploy to.
Get-AzureRmVMUsage -Location 'West US'
In my subscription I am using only 6 of the 100 cores available in this region. If you do not have enough cores, open a support request to increase your core quota. Make sure you select Resource Manager as the deployment model and specify the region.
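If you only care about the core counters, you can filter the output down; a small sketch (assumes the AzureRM module is installed and you have already run Login-AzureRmAccount):

```powershell
# Show only the core-related usage counters for the region,
# with current usage next to the quota limit.
Get-AzureRmVMUsage -Location 'West US' |
    Where-Object { $_.Name.LocalizedValue -like '*Cores*' } |
    Select-Object @{n='Resource';e={$_.Name.LocalizedValue}}, CurrentValue, Limit
```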
When creating a Service Fabric cluster you have a few different deployment options available to you:
- Set up a Service Fabric cluster using the Azure Portal
- Set up a cluster using an ARM template
- Set up a cluster using Visual Studio
- Set up a cluster in what is referred to as "cluster anywhere"
- Set up a cluster using a Party Cluster
- Set up a cluster on your development machine
- Set up a cluster using PowerShell
As you can see, we have many options to choose from. I will not cover all of them in this post, but I plan to cover some of them in future posts.
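To give a taste of one of the scripted options, the ARM template route boils down to a couple of cmdlets; here is a rough sketch (the resource group name and template file names are placeholders, and it assumes the AzureRM module and Login-AzureRmAccount):

```powershell
# Sketch: deploy a Service Fabric cluster from an ARM template.
# Resource group name and file paths below are placeholders.
New-AzureRmResourceGroup -Name 'sfclusterdemo' -Location 'West US'

New-AzureRmResourceGroupDeployment -ResourceGroupName 'sfclusterdemo' `
    -TemplateFile '.\servicefabric.json' `
    -TemplateParameterFile '.\servicefabric.parameters.json'
```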
Let’s cover the first option and deploy a Service Fabric Cluster using the Azure Portal.
Sign into the Azure Portal, if you haven’t done so already.
Click +New, type "service fabric", and press Enter. In the Everything blade, click Service Fabric Cluster and select Create.
In the Service Fabric Cluster blade we need to provide details for the deployment. Give the cluster a name, which must be between 4 and 23 characters and contain only lowercase letters, numbers, and hyphens. Select your subscription and location, and choose to create a new resource group; doing so helps with lifecycle management and billing.
In the Node Type Configurations blade we select the number of node types, the VM size, and the number of VMs to include in the cluster. We can have multiple node types (for example, if we wanted to specify different VM sizes and properties), but for this deployment we will stick to one. For the VM size, accept the default, which is Medium (Standard A2). Enter a node type name and choose the number of VMs to include in the cluster. The minimum of 5 VMs is a requirement for the first node type; however, the cluster can be scaled up or down later. For application input endpoints, enter the ports to open for your application; these can also be added later. Leave the placement properties at their defaults for now, adding name/value pairs as constraints if needed. Finally, add a user name and password to use for the VMs.
For the Security Configuration, as this is a test environment, I will choose "Unsecure". In a production environment you would want to use "Secure" to prevent unauthorized access. At a high level, you would acquire a certificate, create an Azure Key Vault, upload the certificate to it, and provide the source vault, certificate URL, and thumbprint during creation of the Service Fabric cluster. Optionally, you can provide the details for the admin client and read-only client certificates.
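The Key Vault part of the "Secure" path can be scripted; here is a rough sketch of one way to do it, uploading the .pfx as a secret in the JSON shape Service Fabric expects (the vault name, resource group, file path, and password are all placeholders):

```powershell
# Sketch: create a Key Vault enabled for deployment and upload a certificate.
# All names, paths, and the password below are placeholders.
New-AzureRmKeyVault -VaultName 'sfdemovault' -ResourceGroupName 'sfclusterdemo' `
    -Location 'West US' -EnabledForDeployment

$bytes  = [System.IO.File]::ReadAllBytes('C:\certs\sfcluster.pfx')
$json   = @{ data     = [System.Convert]::ToBase64String($bytes)
             dataType = 'pfx'
             password = 'P@ssw0rd!' } | ConvertTo-Json
$secret = ConvertTo-SecureString -String $json -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName 'sfdemovault' -Name 'sfclustercert' -SecretValue $secret
# The secret's Id is the certificate URL you supply at cluster creation;
# the thumbprint comes from the certificate itself.
```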
Under Diagnostic Settings, support logs are enabled by default; this setting is required for the Azure support team to resolve support issues. Application diagnostics is disabled by default but can be enabled if desired.
Leave the Fabric Settings at default, review the Summary and click Create to start the deployment.
The deployment took about 14 minutes to complete. Now that it's complete, let's look at what was deployed. If we click on our resource group, we see a summary of the resources. Here is what we have as part of our Service Fabric cluster deployment.
We have a VM Scale Set called WebFE that has a capacity of 5 nodes labeled 0-4. VM Scale Sets are an Azure Compute resource that are used to deploy and manage a collection of virtual machines as a set.
We have a load balancer with a public IP address assigned to it, 5 load balancing rules (the 3 we added for ports 80, 83, and 8081, plus 2 added automatically for management operations on ports 19000 and 19080), 5 probes that check node health so the load balancer knows whether to keep sending new connections, and 5 inbound NAT rules for RDP so we can connect to each node as needed.
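If you prefer to inspect these rules from PowerShell rather than the portal, a quick sketch (the resource group and load balancer names are placeholders; you can discover yours with a bare Get-AzureRmLoadBalancer):

```powershell
# Sketch: list the load balancing rules and health probes the deployment created.
# Resource group and load balancer names are placeholders.
$lb = Get-AzureRmLoadBalancer -ResourceGroupName 'sfclusterdemo' -Name 'LB-sfclusterdemo-WebFE'
$lb.LoadBalancingRules | Select-Object Name, FrontendPort, BackendPort
$lb.Probes             | Select-Object Name, Port, Protocol
```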
We have our public IP resource for the LB and the virtual network with 2 subnets.
There are 3 storage accounts: one blob service that stores the VHDs, one table service for diagnostics, and another table service for log files.
Last but not least, we have our Service Fabric cluster, where we can see the health status of the nodes and applications (no applications are deployed yet). Another important item here is the Service Fabric Explorer link.
Clicking on the link opens another browser tab with a really nice portal for visualizing the cluster. Right away, the Essentials menu gives you an at-a-glance dashboard view of the cluster and its health. The Details menu shows health events, load information, and upgrade info. The Cluster Map menu shows the fault/upgrade domain layout of the cluster. The Manifest menu shows the cluster manifest that was generated and is used during upgrades.
Under the Cluster tree menu we have two subtrees, Applications and Nodes. Drilling down into Applications and then into System, we can see there are 5 system services running.
Under each service is a partition showing each replica contained within it. You can see that node 2 is currently listed as Primary with the others being ActiveSecondary. Additional applications installed will show up here along with their stateless and stateful services, partitions, and replicas.
Drilling down into Nodes we can see all 5 nodes that make up the cluster and the details around the health state, status, upgrade/fault domain, IP address, unhealthy evaluations, and deployed applications. On the far right there is an Actions menu where you can Activate, Pause, Restart, Remove data, and Remove node state.
This has been a brief overview of a cluster just after deployment. For additional info, see the Service Fabric Explorer documentation.
Now that the cluster deployment has completed, you can connect to your cluster and deploy your applications.
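From the Service Fabric SDK's PowerShell module, connecting looks roughly like this; the DNS name below is a placeholder for your cluster's address, and since we chose "Unsecure" no certificate parameters are needed:

```powershell
# Sketch: connect to the cluster's management endpoint on port 19000.
# The DNS name is a placeholder for your cluster's address.
Connect-ServiceFabricCluster -ConnectionEndpoint 'mycluster.westus.cloudapp.azure.com:19000'

# Quick sanity check after connecting.
Get-ServiceFabricClusterHealth
```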
Connecting to the VM
To connect to an individual VM in a scale set, you can use either the DNS name or the IP address of the public IP resource associated with the load balancer.
The NAT rules defined during deployment map a range of incoming ports to port 3389 on each VM: incoming port 3389 goes to port 3389 of the first VM, incoming port 3390 to port 3389 of the second VM, and so on. You can view these by clicking the Inbound NAT rules setting on the load balancer resource.
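You can also pull the mapping from PowerShell and then RDP straight to a specific instance; a sketch (the resource group, load balancer, and DNS names are placeholders):

```powershell
# Sketch: show which external port RDPs to which VM instance.
# Resource group and load balancer names are placeholders.
$lb = Get-AzureRmLoadBalancer -ResourceGroupName 'sfclusterdemo' -Name 'LB-sfclusterdemo-WebFE'
$lb.InboundNatRules | Select-Object Name, FrontendPort, BackendPort

# Then RDP to, say, the second VM via the public DNS name and its mapped port:
mstsc /v:mycluster.westus.cloudapp.azure.com:3390
```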
Unfortunately, this is the default setting when deploying through the Azure Portal. If you wish to change these port ranges, download the JSON template prior to deployment, edit the frontendPortRangeStart and frontendPortRangeEnd values under the load balancer resource's inboundNatPools, and then perform the deployment using the modified template.
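In the template, the fragment you are after looks roughly like this (the pool name, port values, and variable reference are illustrative, not copied from an actual portal-generated template):

```json
{
  "inboundNatPools": [
    {
      "name": "LoadBalancerBEAddressNatPool",
      "properties": {
        "backendPort": 3389,
        "frontendPortRangeStart": 3389,
        "frontendPortRangeEnd": 4500,
        "protocol": "tcp",
        "frontendIPConfiguration": {
          "id": "[variables('lbIPConfig0')]"
        }
      }
    }
  ]
}
```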
I hope this helps to get you started deploying and testing Service Fabric. Initially this series was planned for 3 parts but I soon found there is just too much to discuss. So stay tuned for my next post as we continue to explore further.