Tag Archives: Windows

Moving from Azure VMs to Azure VM Scale Sets – Runtime Instance Configuration

In my previous post I covered how you can move from deploying a solution to pre-provisioned Virtual Machines (VMs) in Azure to a process that allows you to create a custom VM Image that you deploy into VM Scale Sets (VMSS) in Azure.

As I alluded to in that post, one item we will need to take care of in order to truly move to a VMSS approach using a VM image is to remove any local static configuration data we might bake into our solution.

There are a range of options you can move to when going down this path, from solutions you custom build through to running services such as HashiCorp's Consul.

The environment I’m running in is fairly simple, so I decided to focus on a simple custom build. The remainder of this post is covering the approach I’ve used to build a solution that works for me, and perhaps might inspire you.

I am using an ASP.NET Web API as my example, but I am also using a similar pattern for Windows Services running on other VMSS instances – only the location of your startup code will differ.

The Starting Point

Back in February I blogged about how I was managing configuration of a Web API I was deploying using VSTS Release Management. In that post I covered how you can use the excellent Tokenization Task to create a Web Deploy Parameters file that can be used to replace placeholders on deployment in the web.config of an application.

My sample web.config is shown below.
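As an illustration (the setting names here are hypothetical), a web.config carrying tokenized placeholders for the Tokenization Task might look like this:

```xml
<configuration>
  <appSettings>
    <!-- Placeholders replaced at deployment time via the Web Deploy Parameters file -->
    <add key="DocumentDbEndpoint" value="__DocumentDbEndpoint__" />
    <add key="DocumentDbAuthKey" value="__DocumentDbAuthKey__" />
    <add key="AppInsightsKey" value="__AppInsightsKey__" />
  </appSettings>
</configuration>
```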

The problem with this approach when we shift to VM Images is that these values are baked into the VM Image which is the build output, which in turn can be deployed to any environment. I could work around this by building VM Images for each environment to which I deploy, but frankly that is less than ideal and breaks the idea of “one binary (immutable VM), many environments”.

The Solution

I didn’t really want to go down the route of service discovery using something like Consul, and I really only wanted to use Azure’s default networking setup. This networking requirement meant there was no custom private DNS I could use for hostname-based configuration service discovery.

…and, to be honest, with the PaaS services I have in Azure, I can build my own solution pretty easily.

The solution I landed on looks similar to the below.

  • Store runtime configuration in Cosmos DB and geo-replicate this information so it is highly available. Each VMSS setup gets its own configuration document which is identified by a key-service pair as the document ID.
  • Leverage a read-only Access Key for Cosmos DB because we won’t ever ask clients to update their own config!
  • Use Azure Key Vault to store the Cosmos DB Account and Access Key that can be used to read the actual configuration. Key Vault is regionally available by default so we’re good there too.
  • Configure an Azure AD Service Principal with access to Key Vault to allow our solution to connect to Key Vault.

I used a conventions-based approach to configuration, so that the whole process works based on the VMSS instance name and the service type requesting configuration. You can see this in the code below, in the URL used to access Key Vault and in the Cosmos DB document ID, both of which use the same approach.
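As an illustrative sketch of that convention (all names here are hypothetical), the Key Vault secret name and Cosmos DB document ID can both be derived from the VMSS hostname and service type:

```python
def vmss_prefix(hostname: str) -> str:
    """VMSS instance hostnames end in a 6-character base-36 suffix;
    stripping it yields the name prefix shared by the whole scale set."""
    return hostname[:-6]


def config_document_id(hostname: str, service: str) -> str:
    """Cosmos DB document ID follows the key-service pair convention."""
    return f"{service}-{vmss_prefix(hostname)}"


def secret_url(vault: str, hostname: str, service: str) -> str:
    """Key Vault secret URL derived from the same convention."""
    return f"https://{vault}.vault.azure.net/secrets/{config_document_id(hostname, service)}"
```

Any instance in the scale set therefore resolves the same configuration, with no per-instance state baked into the image.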

The resulting changes to my Web API code (based on the earlier web.config sample) are shown below. This all occurs at application startup time.

I have also defined a default Application Insights account into which any instance can log should it have problems (which includes not being able to read its expected Application Insights key). This is important as it allows us to troubleshoot issues without needing to get access to the VMSS instances.

Here’s how we authorise our calls to Key Vault to retrieve our initial configuration Secrets (called on line 51 of the above sample code).
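As an illustrative sketch of that authorisation flow – an AAD client-credentials token request for the Service Principal, followed by a REST read of the secret (the token endpoint and Key Vault api-version are from the public REST APIs; every other name is a placeholder):

```python
import json
import urllib.parse
import urllib.request

AAD_AUTHORITY = "https://login.microsoftonline.com"
KEY_VAULT_RESOURCE = "https://vault.azure.net"


def acquire_token(tenant_id: str, client_id: str, client_secret: str) -> str:
    """OAuth2 client-credentials grant against the AAD token endpoint,
    requesting a token scoped to the Key Vault resource."""
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": KEY_VAULT_RESOURCE,
    }).encode()
    req = urllib.request.Request(f"{AAD_AUTHORITY}/{tenant_id}/oauth2/token", data=body)
    with urllib.request.urlopen(req) as resp:  # network call, only made at runtime
        return json.load(resp)["access_token"]


def secret_request(vault_name: str, secret_name: str, token: str) -> urllib.request.Request:
    """Build the authenticated GET for a Key Vault secret via the REST API."""
    url = (f"https://{vault_name}.vault.azure.net/secrets/"
           f"{secret_name}?api-version=2016-10-01")
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
```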

My goal was to make configuration easily manageable across multiple VMSS instances which requires some knowledge around how VMSS instance names are created.

The basic details are that they consist of a hostname prefix (based on what you input at VMSS creation time) appended with a base-36 (hexatrigesimal) value representing the actual instance. There’s a great blog post from Microsoft’s Guy Bowerman that covers this in detail so I won’t reproduce it here.
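To illustrate the naming scheme, the suffix is just the numeric instance ID rendered in base 36 and zero-padded to six characters. A quick sketch:

```python
import string

# Base-36 digit set: 0-9 followed by A-Z
ALPHABET = string.digits + string.ascii_uppercase


def instance_suffix(instance_id: int) -> str:
    """Render a VMSS instance ID as its 6-character base-36 hostname suffix."""
    digits = ""
    while instance_id:
        instance_id, rem = divmod(instance_id, 36)
        digits = ALPHABET[rem] + digits
    return (digits or "0").rjust(6, "0")


def instance_id_from_suffix(suffix: str) -> int:
    """Recover the numeric instance ID from a hostname suffix."""
    return int(suffix, 36)
```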

The final piece of the puzzle is the Cosmos DB configuration entry which I show below.

The ‘id’ field maps to the VMSS instance prefix that is determined at runtime based on the name you used when creating the VMSS. We strip the trailing 6 characters to remove the unique component of each VMSS instance hostname.
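As a hypothetical example following this naming convention (the account and key values are placeholders), a configuration entry might look like:

```json
{
  "id": "webapi-myapi",
  "DocumentDbEndpoint": "https://myaccount.documents.azure.com:443/",
  "AppInsightsKey": "00000000-0000-0000-0000-000000000000"
}
```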

The outcome of the three components (code changes, Key Vault and Cosmos DB) is that I can quickly add or remove VMSS groups in configuration, change where their configuration data is stored by updating the Key Vault Secrets, and even update running VMSS instances by changing the configuration settings and then forcing a restart on the VMSS instances, causing them to re-read configuration.

Is the above the only or best way to do this? Absolutely not 🙂

I’d like to think it’s a good way that might inspire you to build something similar or better 🙂

Interestingly, in getting to this stage I’ve also realised there might be some value in moving this solution to Service Fabric in future, though I am more inclined to shift to containers running under the control of an orchestrator like Kubernetes.

What are your thoughts?

Until the next post!


Moving from Azure VMs to Azure VM Scale Sets – VM Image Build

I have previously blogged about using Visual Studio Team Services (VSTS) to securely build and deploy solutions to Virtual Machines running in Azure.

In this and the following posts I am going to take the existing build process I have and modify it so I can make use of VM Scale Sets to host my API solution. This switch is to allow the API to scale under load.

My current setup is very much fit for purpose for the limited trial it’s been used in, but I know I’ll see at least 150 times the traffic when I am running at full scale in production, and while my trial environment barely scratches the surface in terms of consumed resources, I don’t want to have to capacity plan to the nth degree for production.

Shifting to VM Scale Sets with autoscale enabled will help me greatly in this respect!

Current State of Affairs

Let’s refresh ourselves with what is already in place.


My existing build is fairly straightforward – we version the code (using a PowerShell script), restore packages, build the solution and then finally make sure all our artifacts are available for use by the Release Management process.

Existing build process

The output of this build is a Web Deploy package along with a PowerShell DSC module that configures the deployment on the target VM.

Release Management

I am using multiple Environments for Release Management to manage transformations of the Web Deploy Parameters file along with the Azure Subscription / Resource Group being deployed to. The Tasks in each Environment are the same though.

My Release Management Tasks (as shown below) open the NSG to allow DSC remote connections from VSTS, transform the Web Deploy Parameters file, find the VMs in a particular Azure Resource Group, copy the deployment package to each VM, and run the DSC script to install the solution, before finally closing the NSG again to stop the unwashed masses from prying into my environment.

Existing release process

All good so far?

What’s the goal?

The goal is to make the minimum amount of changes to existing VSTS and deployment artifacts while moving to VM Scale Sets… sounds like an interesting challenge, so let’s go!

Converting the Build

The good news is that we can retain the majority of our existing Build definition.

Here are the items we do need to update.

Provisioning PowerShell

The old deployment approach leveraged PowerShell Desired State Configuration (DSC) to configure the target VM and deploy the custom code. The DSC script to achieve this is shown below.

The challenge with the above PowerShell is it assumes the target VM has been configured to allow WinRM / DSC to run. In our updated approach of creating a VM Image this presents some challenges, so I redeveloped the above script so it doesn’t require the use of DSC. The result is shown below.

As an aside, we could also drop the use of the Parameters file here too. As we’ll see in another post, we need to make the VM Image stateless, so any local web.config changes that are environment-specific are problematic and are best excluded from the resulting image.

Network Security Group Script

In the new model, which prepares a VM Image, we no longer need the Azure PowerShell script that opens / closes the Network Security Group (NSG) on deployment, so it’s removed in the new process.

No more use of Release Management

Because the result of our Build is a VM Image, we no longer need to leverage Release Management either, making our overall process much simpler.

The New Build

The new Build definition is shown below – you will notice the above changes have been applied, with the addition of two new Tasks. The aspect of this I am most happy about is that our core build remains mostly unchanged – we only had to add two Tasks and change one PowerShell script to make this work.

New build process

Let’s look at the new Tasks.

Build Machine Image

This Task utilises Packer from HashiCorp to prepare a generalised Windows VM image that we can use in a VM Scale Set.

The key item to note is that you need an Azure Subscription in which a temporary VM and the final generalised VHD can be created, so that Packer can build the baseline image for you.

New Build Packer Task

You will notice we are using the existing artifacts staging directory as the source of our configuration PowerShell (DeployVmSnap.ps1) which is used by Packer to configure the host once the VM is created using an Azure Gallery Image.
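As a sketch of what such a Packer template might contain – a legacy JSON template using the azure-arm builder with VHD output, running the DeployVmSnap.ps1 provisioner – all credential values, resource names and locations below are placeholders:

```json
{
  "builders": [{
    "type": "azure-arm",
    "client_id": "{{user `client_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "tenant_id": "{{user `tenant_id`}}",
    "subscription_id": "{{user `subscription_id`}}",
    "resource_group_name": "vhd-images-rg",
    "storage_account": "vhdimagestore",
    "capture_container_name": "images",
    "capture_name_prefix": "webapi",
    "os_type": "Windows",
    "image_publisher": "MicrosoftWindowsServer",
    "image_offer": "WindowsServer",
    "image_sku": "2016-Datacenter",
    "communicator": "winrm",
    "winrm_use_ssl": true,
    "winrm_insecure": true,
    "winrm_username": "packer",
    "location": "Australia East",
    "vm_size": "Standard_D2_v2"
  }],
  "provisioners": [{
    "type": "powershell",
    "script": "DeployVmSnap.ps1"
  }]
}
```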

The other important item here is the use of the output field. This will contain the fully qualified URL in blob storage where the resulting packed image will reside. We can use this in our next step.

Create VM Image Registration

The last Task I’ve added is to invoke an Azure PowerShell script, which is just a PowerShell script, but with the necessary environmental configuration to allow me to execute Cmdlets that interact with Azure’s APIs.

The result of the previous Packer-based Task is a VHD sitting in a Blob Storage account. While we can use this in various scenarios, I am interested in ensuring it is visible in the Azure Portal and also in allowing it to be used in VM Scale Sets that utilise Managed Disks.

The PowerShell script is shown below.
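Conceptually, the script registers a Microsoft.Compute/images resource that points at the captured VHD. Expressed as an ARM resource (all values below are placeholders), the registration looks roughly like:

```json
{
  "type": "Microsoft.Compute/images",
  "apiVersion": "2017-03-30",
  "name": "webapi-image-1.0.0",
  "location": "Australia East",
  "properties": {
    "storageProfile": {
      "osDisk": {
        "osType": "Windows",
        "osState": "Generalized",
        "blobUri": "https://vhdimagestore.blob.core.windows.net/images/webapi-osDisk.vhd",
        "caching": "ReadWrite"
      }
    }
  }
}
```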

.. and here is how it is used in the Task in VSTS..

New build VM Image

You can see how we have utilised the Packer Task’s output parameter as an input into this Task (it’s in the “Script Arguments” box at the bottom of the Task).

The Result

Once we have this configured and running the result is a nice crisp VM Image that can be used in a VM Scale Set. The below screenshot shows you how this looks in my environment – I wrapped the Azure Storage Account where the VHDs live, along with the VM Image registrations in the same Resource Group for cleaner management.

Build Output

There are still some outstanding items we need to deal with, specifically configuration management (our VM Image has to be stateless) and VM Scale Set creation using the Image. We will deal with these two items in the following posts.

For now I hope you have a good grasp on how you can convert an existing VSTS build that deploys to existing VMs to one that produces a generalised VM Image that you can use either for new VMs or in VM Scale Sets.

Until the next post.


Want to see how I dealt with instance configuration? Then have a read of my next post in this series.


Family IT Support – Backups for Christmas

Having just returned from holiday the need to provide Family IT support over the Christmas period is top of my mind. What I found is something I think all IT literate people should be able to quickly solve for their friends and family.

Hard drives fail (some fairly frequently, it appears, according to Backblaze). Computers fail (rarely) or get stolen (much easier with modern machines). Your computer might also get damaged or otherwise incapacitated due to some other form of disaster.

In this modern age our lives revolve around our computers, probably much more than we realise. Your music? On your computer. Your photographs? On your computer. Your videos? On your computer. Last year’s tax return? You get the idea.

Now I can imagine that most of us couldn’t care less if we lose our tax return (though most tax authorities might take a dim view if they ever audit you), and we can probably rebuild our music collection (or maybe it’s already “in the cloud”). Personal photographs and videos are an entirely different issue though.

Luckily the solution is not a hard one.

To The Cloud!

Let me start by saying that cloud backup services (and I’m a cloud type of guy) sound great in theory. In practice, unless you’re living somewhere with fat, unlimited internet it’s probably not going to help you much. Most consumer internet services remain asymmetric, with the outbound channel substantially smaller than the inbound, which severely limits the possibilities of using cloud backup for any substantially large backup.

Rich media such as photographs are increasing in size (high resolution sensors on many mid-range consumer cameras now produce 5–10 MB files per photograph), which means you’ve most likely got gigabytes of stuff to be backed up. The initial backup run will either take so long as to be a real bottleneck for your process OR you’ll bust your bandwidth allowance in the space of a few hours.

Hard Drive and Sneakernet

The alternative arrangement to using cloud backup services is to go local and utilise an external hard drive (buy two of the same).  All modern Operating Systems will give you access to built-in backup software:

  • On Windows, the built-in Backup and Restore tooling will do the job.
  • On a Mac, Time Machine has you covered.

If you’re on Linux, chances are you’re not reading this, so good luck :).

Backup software and schedules are all good, but there’s no point in just backing the content up to a local disk because that’s not guaranteeing you anything in the event of a massive local problem (house fire or burglary). As I mentioned above we should be using an external hard drive – these can be picked up cheaply these days at your friendly tech website or physical storefront. You should buy two identical drives.

Once you’ve set up your backup you should run it once with one of your hard drives. You should then take this hard drive and physically store it away from your PC – ideally in another location away from your house or office. Then take the second drive, do the exact same thing, and leave it connected to your PC. You should then regularly swap the two drives. Depending on your Operating System you may also be able to encrypt the drives’ contents.

So, there we go, you now know what I spent my Christmas holidays doing (and what you should be doing for your family)!


Clean Backups Using Windows Server Backup and EBS Snapshots

One of the powerful features of AWS is the ability to snapshot any EBS volume you have operating. The challenge when utilising Windows 2008 R2 is that NTFS can’t be frozen for a point-in-time snapshot, which can result in an inconsistent EBS snapshot. You can still create a snapshot, but one or more files that were mid-change may not be accurate (or usable) in the resulting snapshot.

How to solve

The answer, it turns out, is fairly easy (if a little convoluted) – Windows Server Backup PLUS EBS snapshots.

In order for this to work you will need:

1. Windows Server Backup installed on your Instance (this does not require a reboot on install).

2. An appropriately sized EBS volume to support your intended backup (see the TechNet article on how to work out your allowance – remember you can always create a new volume later if you get the size wrong!)

3. Windows Task Scheduler (who needs Quartz or cron ;)).

4. Some PowerShell foo.

Volume Shadow Copy using the new EBS Volume

Once you have created your new EBS volume and attached it to the Instance you want to back up, open up Windows Server Backup and select what you want to backup (full, partial, custom – to be honest you shouldn’t need to do a “bare metal” backup…). Note that Windows Server Backup will delete *everything* on the target EBS volume and it will no longer show up in Windows Explorer – it will still be attached and show up in Disk Manager, but you can’t copy files to or from it.

The benefit of Windows Server Backup is that it will utilise the Windows Volume Shadow Copy Service (VSS), resulting in a clean copy of files – even those in-flight at the time of backup. This gets around the inability to “freeze” NTFS at runtime.

Set your schedule and save the backup.

Snapshot the backup volume

Now you have a nice clean static copy of the filesystem you can go ahead and create an EBS snapshot. The best way I found to schedule this was to use some PowerShell magic written by the guys over at messors.com. I slightly modified the daily snapshot PowerShell script to snapshot a pre-defined Instance and Volume by calling the CreateSnapshotForInstance function that takes an Instance and Volume identifier. The great thing with this script setup is it auto-manages the expiry of old snapshots so my S3 usage doesn’t just keep on growing.
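As an illustrative sketch of that snapshot-and-expire idea (Python with boto3 standing in for the original PowerShell; volume IDs and retention values are hypothetical):

```python
from datetime import datetime, timedelta


def snapshots_to_expire(snapshots, retain_days, now=None):
    """Given [(snapshot_id, start_time), ...], return the IDs that fall
    outside the retention window -- the auto-expiry behaviour described above."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=retain_days)
    return [sid for sid, started in snapshots if started < cutoff]


def snapshot_and_prune(volume_id, retain_days=30):
    """Snapshot the backup volume, then delete snapshots past retention."""
    import boto3  # imported lazily so the pure helper above is testable offline
    ec2 = boto3.client("ec2")
    ec2.create_snapshot(VolumeId=volume_id,
                        Description="Windows Server Backup volume")
    existing = ec2.describe_snapshots(
        Filters=[{"Name": "volume-id", "Values": [volume_id]}])["Snapshots"]
    pairs = [(s["SnapshotId"], s["StartTime"].replace(tzinfo=None))
             for s in existing]
    for sid in snapshots_to_expire(pairs, retain_days):
        ec2.delete_snapshot(SnapshotId=sid)
```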

I invoke this PowerShell on the success of the initial Windows Server Backup scheduled task by using Task Scheduler – see this post at Binary Royale on how to do this (they send an email – I use the same approach to do a snapshot instead).

The great thing about the PowerShell is that it will send you an email once completed – I have also set up mail notifications at completion of the Windows Server Backup (success, failure, disk full) so I know when / if the backup fails at an early stage. Note that in Windows Server 2008 R2 you can send an email right from Windows Task Scheduler – on AWS you’ll need to have an IIS smart host up and running to support this though – see my earlier post on how to set up an IIS SES relay host.


The benefit of this approach is a consistent state for your backup contents as well as an easy way to do a controlled restore at a later date (“create volume from snapshot” in the AWS management console and then attach to an Instance). Additionally you achieve a reasonably reliable backup for not much more than the cost of EBS I/O plus S3 storage and request costs.

Hope you find this useful.
