Continuous Deployment of Windows Services using VSTS

I have to admit writing this post feels a bit “old skool”. Prior to last week I can’t remember the last time I had to break out a Windows Service to solve anything. Regardless, for one cloud-based IaaS project I’m working on I needed a simple worker-type solution that was private and could post data to a private REST API hosted on the other end of an Azure VNet Peer.

While I could have solved this problem any number of ways I plumped for a Windows Service, primarily because it will be familiar to the developers and administrators at the organisation I’m working with. But I figured if I’m going to have to deploy onto VMs I’m sure not deploying in an old-fashioned way! Luckily we’re already running in Azure and hosting on VSTS so I have access to all the tools I need!

Getting Setup

The setup for this process is very similar to the standard “Deploy to Azure VM” scenario that is very well covered in the official documentation and which I added some context to in a blog post earlier in the year.

Once you have the basics in place (it only takes a matter of minutes to prepare each machine) you can head back here to cover off the changes you need to make.

Note: this process is going to assume you have a Windows Service Project in Visual Studio 2015 that is being built using VSTS’s in-built build infrastructure. If you have other configurations you may need to take different steps to get this to work🙂

Tweak build artefact output

First we need to make sure that the outputs from our build are stored as artefacts in VSTS. I didn’t use any form of installer packaging here so I needed to ensure my build outputs were all copied to the “drops” folder.

Here is my build definition which is pretty vanilla:

Build Process

The tweak I made was on the Visual Studio Build step (step 2), where I defined an additional MSBuild Argument that set the OutputPath to the VSTS build agent’s artifacts directory, which is then automatically copied by the Publish Artifacts step:

Build Update
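For reference, the argument itself is along these lines (using the agent’s standard artifact staging directory variable):

/p:OutputPath="$(build.artifactstagingdirectory)"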

If I look at a history entry for my CI build and select Artifacts I can see that my Windows Service binary and all its associated assemblies, config files and (importantly) deployment script are stored with the build.

Build Artefacts

Now we have the build in the right configuration let’s move on to the deployment.

Deploying a Service

This is actually easier than it used to be🙂. Many of us will remember the need to package a Windows Service into an MSI and then use InstallUtil.exe to do the install on deployment.

Fear not! You no longer need this approach for Windows Services!

PowerShell FTW!

Yes, that Swiss Army knife comes to the rescue again with the Get-Service, New-Service, Stop-Service and Start-Service Cmdlets.

We can combine these handy Cmdlets in our Deployment script to manage the installation of our Windows Service as shown below.
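As a minimal sketch (the service name and paths are placeholders you’d match to your own solution) the script looks something like this:

# Deploy-WindowsService.ps1 – minimal sketch
$serviceName = 'MyWorkerService'
$installPath = 'C:\Services\MyWorkerService'
$sourcePath  = 'C:\temp'

# Stop the service if it is already installed and running
$service = Get-Service -Name $serviceName -ErrorAction SilentlyContinue
if ($service -and $service.Status -eq 'Running') {
    Stop-Service -Name $serviceName
}

# Copy the new build outputs over the existing install
if (-not (Test-Path $installPath)) {
    New-Item -Path $installPath -ItemType Directory | Out-Null
}
Copy-Item -Path "$sourcePath\*" -Destination $installPath -Recurse -Force

# Register the service on first deployment only
if (-not $service) {
    New-Service -Name $serviceName `
                -BinaryPathName "$installPath\MyWorkerService.exe" `
                -StartupType Automatic
}

Start-Service -Name $serviceName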

The Release Management definition remains unchanged – all we had to do was ensure our build outputs were available to copy from the ‘Drop’ folder on the build and that they are copied to C:\temp\ on the target VM(s). Our deployment script takes care of the rest!

That’s it! Next time your CI build passes your CD kicks in and your Windows Service will be updated on your target VMs!


AAD B2C Talk – Innovation Days 2016 Wrap

I recently spoke at the Innovation Days 2016 event held in Sydney on Azure AD B2C.

The presentation for my talk is available here:!AqBI2LiKM4LHwNJvTxrXNAblpTBCJA

and you can find the sample code for the web and API apps here:

Deploying to Azure VMs using VSTS Release Management

I am going to subtitle this post “the missing manual” because I spent quite a bit of time troubleshooting how this should all work.

Microsoft provides a bunch of useful information on how to deploy from Visual Studio Team Services (VSTS) to different targets, including Azure Virtual Machines.

In an ideal world I wouldn’t be using VMs at all, but for my current particular use case I have to use VMs so the above (linked) approach worked.

The approach sounds good but I ran into a few sharp edges that I thought I would document here (and hopefully the source documentation will be updated to reflect this in due course).

Preparing deployment targets

Azure FQDNs

I thought I’d do the right thing by configuring the Azure IP of my hosts to have a full FQDN rather than just an IP address.

As I found out this is not a good idea.

The main issue you will run into is that the generated certs on target hosts contain only the hostname (e.g. azauhost01) rather than the full public FQDN.

When the Release Management infrastructure tries to connect to a host this cert mismatch causes a fatal error. I didn’t spend much time troubleshooting so decided to revert to use of IP addresses only.

When using dynamic IP addresses the first Release Management action, “Azure Deployment: Select Resource Group”, is important as it allows for discovery of all VMs and their IPs (i.e. no hardcoding required). This approach does mean, however, that you need to consider how you group VMs into Resource Groups so that any VM in the Resource Group can be used as the deployment target.

Select Resource Group

Local permissions

I was running my deployments to non-Domain joined Windows 2012 R2 server instances with local administrative accounts and had opened the necessary port in the NSG rules to allow WinRM into the servers from the VSTS infrastructure.

Everything looked good on deployment until PowerShell execution on the target hosts resulted in errors due to permissions. As it turns out the error message was actually useful in resolving this problem🙂

In order to move beyond this error I had to prepare each target host by running these commands at an admin command prompt on the host:

winrm quickconfig


We could drop these commands into a DSC module and run them that way if we wanted to make this repeatable across new hosts.
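A sketch of that approach, wrapping the surviving winrm quickconfig call in a Script resource (the Configuration name is arbitrary):

Configuration EnableWinRmDeployment
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost'
    {
        Script WinRmQuickConfig
        {
            # Simplistic test - only checks the WinRM service is running, not the full listener config
            TestScript = { (Get-Service -Name WinRM).Status -eq 'Running' }
            SetScript  = { winrm quickconfig -quiet }
            GetScript  = { @{ Result = (Get-Service -Name WinRM).Status.ToString() } }
        }
    }
}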

There is a good troubleshooting overview for this from Microsoft.

Wait! Where did my PowerShell script go?

If you follow the instructions provided by Microsoft you need to add a deployment PowerShell script (actually a DSC module) to your Web App (their example uses “ConfigureWebserver.ps1” for the filename).

There is one issue with this approach – the build step that packages the Web App ends up bundling the PowerShell inside a deployment zip, which means that once the files are copied to your target VM the PowerShell can’t be invoked.

The fix for this is to add an additional build step that copies the PowerShell to the drops folder on VSTS which means the PowerShell will be transferred to the target VM.

Your final build definition should look like the below

Build definition

and the Copy Files task should be configured like this (note below that /Deploy is the folder in my solution that contains the PowerShell I’ve included for deployment purposes):

Build Step
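In text form the task settings are along these lines (the Contents wildcard is an assumption – match it to your own script files):

Source Folder: Deploy
Contents: **
Target Folder: $(build.artifactstagingdirectory)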

Once you have done this you will find that the script is now available in the VSTS drops folder and can be copied to the VMs which allows you to execute it via the Release Management action.

Wrapping up

Once I had these changes in place and had made some minor path / project name tweaks to match my project I could run the process end-to-end.

The one final item I’ll call out here is the default deployment location of the solution on the target VM ends up being the wwwroot of your inetpub folder with a subfolder named ProjectName_deploy. If you map this to an Application in IIS you should be good to go🙂.

Happy days!


Migrating resources from AWS to Microsoft Azure

Kloud Blog

Kloud receives a lot of communications in relation to the work we do and the content we publish on our blog. My colleague Hugh Badini recently published a blog about Azure deployment models from which we received the following legitimate follow-up question…

So, Murali, thanks for letting us know you’d like to know more about this… consider this blog a starting point🙂.

Firstly though…

this topic (inter-cloud migrations), as you might guess, isn’t easily captured in a single blog post, nor, realistically in a series, so what I’m going to do here is provide some basics to consider. I may not answer your specific scenario but hopefully provide some guidance on approach.

Every cloud has a silver lining

The good news is that if you’re already operating in a…

View original post 926 more words

Creating Azure AD B2C Service Principals with PowerShell

I’ve been lucky enough over the last few months to be working on some cool consumer-facing solutions with one of my customers. A big part of the work we’ve been doing has been building Minimum Viable Product (MVP) solutions that allow us to quickly test concepts in-market using stable, production-ready technologies.

As these are consumer solutions, the Azure Active Directory (AAD) B2C service was an obvious choice for identity management, made even more so by AAD B2C’s ability to act as a source-of-truth for consumer identity and profile information across a portfolio of applications and services.

AAD B2C and Graph API

The AAD B2C schema is extensible which allows you to add custom attributes to an identity. Some of these extension attributes you may wish the user to manage themselves (e.g. mobile phone number), and some may be system-managed or remotely-sourced values associated with the identity (e.g. Salesforce ContactID) that a user may never see or edit.

When we have attributes that the user doesn’t necessarily manage themselves, or we wish to do some other processing that isn’t part of the AAD B2C Policy framework, we need to use the Graph API to programmatically access AAD B2C identities.

The AAD B2C team has a good overview document on how to use the Graph API with AAD B2C, but I ran into an issue creating a Service Principal for my Graph API code because I used an Azure AD (Enterprise) identity to create and manage my B2C instance. As I suspect this will be how the majority of instances are created I thought I would document my solution here.


I have a demo AAD B2C setup below and you can clearly see my Kloud identity (creator / admin of the tenant) is sourced from “Microsoft Azure AD (other directory)”.

Admin user from another directory

Note that with this user I am still able to manage identities contained in the B2C directory via the web UI, but where I run into issues is with PowerShell as we will see.

As you can see in the AAD B2C post referenced earlier, I need to use the Azure AD PowerShell module to set up a Service Principal. Firstly, let’s connect:
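(the call here is the standard MSOnline connect Cmdlet)

# Prompts for credentials and connects to the user's home tenant
Connect-MsolService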


At the prompt I enter my admin credentials and am connected.

You can probably already spot the issue… there is no way to pass a TenantId to this command – the context is entirely based on the user’s User Principal Name (UPN).

When I run:
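(most likely the domain-listing Cmdlet, given the output that follows)

# List the verified domains visible in the current context
Get-MsolDomain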


all I see is the verified domains attached to my home tenant:

Home tenant domains

… and my B2C domain isn’t one… so… no luck😦

I read on through the documentation and looked at the PowerShell Cmdlets and found what I thought would be my solution – the ability to specify a Tenant ID on the New-MsolServicePrincipal Cmdlet as shown:

New-MsolServicePrincipal -DisplayName "Demo AAD B2C Graph Client" `
                         -TenantId bc1ec9c8-xxxx-xxxx-xxxx-e10e3ee114a8 `
                         -Type Password -Value "notmypassword"

I promptly received an error message advising me that I was not authorised to make changes in the specified tenant🙂

The Solution

It’s actually pretty straight-forward – create a local administrative account in the AAD B2C directory and use this to authenticate when using PowerShell.

Add user step 1

Add user step 2

AAD B2C with extra admin

Once you have done this, make sure to log into the Azure Portal using this new user and reset their password. If you are using the new AAD PowerShell Module that supports modern authentication you can do this in-line at login time.

Note: in order for MFA to work for this user at the PowerShell command prompt you should install the preview AAD module that supports modern authentication.

If I now run the same Cmdlet:
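# Lists the verified domains again, now in the context of the B2C-local admin
Get-MsolDomain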


I see the B2C directory I expect:

B2C tenant domains

I am now able to create the Service Principal I need for my Graph API client too:

New-MsolServicePrincipal -DisplayName "Demo AAD B2C Graph Client" `
                         -Type Password -Value "notmypassword"

returns the expected result of creating a Service Principal I can use for my Graph client.

Happy Days!



Understanding Azure App Service Plans and Pricing

Like many things in Azure, Azure App Service has a multitude of consumption options available that can sometimes make it hard to determine which option best suits your needs.

In this post I’m going to walk through App Service, and for simplicity’s sake, I’m going to stick to deploying just Web Apps.

So, what do we have available and how does it best fit what I want to do?

Firstly, you can deploy more than a single app into a Plan at no additional cost. New apps will be deployed alongside existing apps and share the resource allocation available at the Plan’s tier (this is how the old Azure Websites worked, so not much has changed here).
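As a quick sketch with the AzureRM PowerShell module (all names and the location are placeholders), standing up one Plan and dropping two Web Apps into it looks like this:

# Create a Plan and deploy two apps into it - both share the Plan's instances
New-AzureRmAppServicePlan -ResourceGroupName 'rg-web-01' -Name 'shared-plan' -Location 'West US' -Tier 'Basic'
New-AzureRmWebApp -ResourceGroupName 'rg-web-01' -Name 'contoso-web-01' -Location 'West US' -AppServicePlan 'shared-plan'
New-AzureRmWebApp -ResourceGroupName 'rg-web-01' -Name 'contoso-web-02' -Location 'West US' -AppServicePlan 'shared-plan'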

Beyond this there are nuances that it’s worth exploring.

Free Tier (F1)

Charge Model: free

Does what it says on the tin – gives you some Azure App Service capacity for free.

Your application runs on shared infrastructure. You can deploy up to 10 apps into a single Free Plan.

As with anything free, there is a trade-off – with this tier you get a maximum of 60 CPU minutes daily, along with 1 GB RAM, 1 GB disk space and no SLA. You also can’t use a custom domain or SSL.

Suitable for Proof-of-Concepts (PoCs) or simple dev/test. I’d recommend avoiding it for production use as there is no SSL or support (SLA) in place.

Shared (D1)

Charge Model: fixed per-hour charge.

Provides you with a small step up from the Free tier, but still not really aimed at production workloads.

Your application runs on shared infrastructure. You can deploy up to 100 apps into a single Shared Plan.

The Shared tier provides SSL and custom domain support along with additional CPU minutes per day (up to 240).

While still not backed by an SLA, this tier may be more suited to simple non-critical workloads (such as smart-link redirect hosts) that are only used occasionally during any 24 hour period and where some service disruption won’t impact anyone. A 302 redirect doesn’t burn many of those 240 minutes😉.

Basic (B1 – B3)

Charge Model: per-hour charge based on number of instances.

For example: B2 tier (in USD in US West) with 3 instances = 3 x $0.15/hr = $0.45/hr

This is where App Service Plans meets production workload hosting. You are now running on dedicated instances and benefit from a 99.95% availability SLA. You can deploy unlimited apps into a single Basic Plan (though you’ll likely hit resource limits before hitting “unlimited”😉 ).

You also have access now to manual scale-out (increase number of instances) and scale-up (shift from B1 – B2 – B3 tiers) options. Traffic is automatically balanced between your instances.
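Both operations can be driven from PowerShell too – a sketch using the same placeholder names as above:

# Scale out: run three instances of everything in the Plan
Set-AzureRmAppServicePlan -ResourceGroupName 'rg-web-01' -Name 'shared-plan' -NumberofWorkers 3

# Scale up: move from B1 to B2 (a larger instance size within the Basic tier)
Set-AzureRmAppServicePlan -ResourceGroupName 'rg-web-01' -Name 'shared-plan' -WorkerSize 'Medium'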

Limitations at this tier include support for only a single SSL certificate per Plan. If you can leverage SNI then you can run multiple web apps on SSL. If not, it’s one app per Plan then!

If you need to connect to a private Azure network or use deployment Slots then this tier is also *not* suitable for you.

The Basic tier is a good starting place if you’re bringing an existing app into App Service, particularly if your current application is unlikely to support load balancing or auto-scale or does not require substantial resources to run.

Standard (S1 – S3)

Charge Model: per-hour charge based on number of instances.

For example: S3 tier (in USD in US West) with 5 instances = 5 x $0.40/hr = $2.00/hr

Standard tier provides everything Basic does (the instances are the same apart from increased disk space), but there are a few add-ons that make this a serious proposition for modern apps.

You gain additional SSL support (both SNI and IP address), additional scale-out support (up to 10 instances with auto-scale included), plus you can use automated daily backups, deployment slots and Azure Traffic Manager for geo-availability.

Slots (or “Staging Environments”) are a bit of a grey area too – the “5” listed for this tier means you get up to 5 slots per deployed Web App (note each slot shares the same pool of resources as your live site… so don’t do stress / performance testing here😉 ).

Premium (P1 – P4)

Charge Model: per-hour charge based on number of instances.

For example: P1 tier (in USD in US West) with 10 instances = 10 x $0.30/hr = $3.00/hr

The name says it all really – this tier offers the best features and provides you with access to dedicated App Service Environments (ASEs) that carve out private network space in Azure for just your Apps.

Beyond what Standard gives you, you now get support for up to 50 instances (more if you ask support nicely😉 ) along with 50 daily backups and 20 slots.

Some of these costs may sound substantial but, as I pointed out at the start of the post, with each tier you can deploy multiple apps onto each instance at no additional cost. This means you may have many similar apps that can co-exist, and as such you could deploy them all into a single App Service Plan.

Note that there was previously a limitation around App Service Environments that meant they could only leverage the classic “v1” virtual networking in Azure, which could be an issue if you are using just the new Resource Manager model.

Update: as of July 2016, App Service Environments support the new “ARM” virtual networking model, so you’re good to go if you need to provision via this method.


As you can see, there is a lot of flexibility available when hosting Web Apps in App Service. If you have lots of small web apps that can coexist on the same machines (a fairly typical model in traditional web hosting) then you should look closely at App Service as a solution to your needs in Azure.

Finally, Microsoft has even developed some tooling that can help you figure out how to move.

Happy days!


Speaking at Global Azure Bootcamp 2016 in Sydney

Global Azure Bootcamp 2016

Just letting everyone know that I’ll be speaking at the upcoming Global Azure Bootcamp on April 16 in Sydney. If you, or anyone you know, wants a free day of training on Azure and what it can do, make sure to reserve your spot and come along!

The presentation from my session can be viewed or downloaded from OneDrive and my (extremely simple) samples are available on GitHub.


Azure Automation Runbooks with Azure AD Service Principals and Custom RBAC Roles

If you’ve ever worked in any form of systems administrator role then you will be familiar with process automation, even if only for simple tasks like automating backups. You will also be familiar with the pain of configuring and managing identities for these automated processes (expired password or disabled/deleted account ever caused you any pain?!)

While the cloud offers us many new ways of working and solving traditional problems it also maintains many concepts we’d be familiar with from environments of old. As you might have guessed, the idea of specifying user context for automated processes has not gone away (yet).

In this post I am going to look at how, in Azure Automation Runbooks, we can leverage a combination of an Azure Active Directory Service Principal and an Azure RBAC Custom Role to configure a non-user identity with constrained execution scope.

The benefits of this approach are two-fold:

  1. No password expiry to deal with, or accidental account disablement/deletion. I still need to manage keys (in place of passwords), but these are centrally managed and are subject to an expiry policy I define.
  2. Reduced blast radius. Creating a custom role that restricts the actions available means the account can’t be used for more actions than intended.

My specific (simple) scenario is stopping all running v2 VMs in a Subscription.

Create the Custom Role

The Azure Resource Manager (ARM) platform provides a flexible RBAC model within which we can build our own Roles based on a combination of existing Actions. In-built Roles bundle these Actions into logical groups, but there are times we may want something a little different.

The in-built “Virtual Machine Contributor” Role isn’t suitable for my purposes because it provides too much scope by letting assigned users create and delete Virtual Machines. In my case, I want a Role that allows an assigned user to start, stop, restart and monitor existing Virtual Machines only.

To this end I defined a custom Role as shown below which allows any assigned user to perform the functions I need them to.
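A definition along the following lines does the job – the three read Actions match the Cmdlet output further down, while the virtual machine power Actions are assumptions based on the Role’s description:

{
  "Name": "Virtual Machine Power Manager",
  "IsCustom": true,
  "Description": "Can monitor, stop, start and restart v2 ARM virtual machines.",
  "Actions": [
    "Microsoft.Storage/*/read",
    "Microsoft.Network/*/read",
    "Microsoft.Compute/*/read",
    "Microsoft.Compute/virtualMachines/start/action",
    "Microsoft.Compute/virtualMachines/restart/action",
    "Microsoft.Compute/virtualMachines/powerOff/action",
    "Microsoft.Compute/virtualMachines/deallocate/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/c25b1c8e-1111-4421-9090-1a12d7012dd3"
  ]
}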

Let’s add this Role by executing the following PowerShell (you’ll need to be logged into your Subscription with a user who has enough rights to create custom role definitions). You’ll need to grab the above definition, change the scope and save it as a file named ‘vm-power-manager-customrole.json’ for this to work.

New-AzureRmRoleDefinition -InputFile vm-power-manager-customrole.json

which will return a result similar to the below.

Name             : Virtual Machine Power Manager
Id               : 6270aabc-0698-4380-a9a7-7df889e9e67b
IsCustom         : True
Description      : Can monitor, stop, start and restart v2 ARM virtual machines.
Actions          : {Microsoft.Storage/*/read, Microsoft.Network/*/read, Microsoft.Compute/*/read...}
NotActions       : {}
AssignableScopes : {/subscriptions/c25b1c8e-1111-4421-9090-1a12d7012dd3}

that means the Role shows up in the Portal and can be assigned to users🙂

VM Power Manager Role

Now we have that, let’s setup our Service Principal.

Setup Service Principal

Microsoft provides a good guide on creating a Service Principal on the Azure documentation site already so I’m not going to reproduce it all here.

When you get to “Assign application to role” hop back here and we’ll continue on without needing to dive into the Azure Portal.

For the purpose of the rest of this post, these are the parameters I used to create my Service Principal.

Name: Azure VM Start Stop SP
Sign-on URL / App URI: http://myvmautomation
Client ID*: c6f7c745-1234-5678-0000-8d14611e75f4
Tenant ID*: c7a48abc-1990-4fef-e941-a1cd55422e41

* Client ID is returned once you save your Application. Tenant ID comes from your Azure AD tenant ID (see Microsoft setup instructions referenced above).

Important: you will also have to generate and grab a key value that you will need to use as it is the password for the Service Principal. Don’t forget to grab it when it’s displayed!

Assign the Service Principal the necessary Azure Roles

# Assign our custom Role
New-AzureRmRoleAssignment -ServicePrincipalName http://myvmautomation `
                          -RoleDefinitionName 'Virtual Machine Power Manager' `
                          -Scope '/subscriptions/c25b1c8e-xxxx-1111-abcd-1a12d7012123'

# Assign the built-in 'Reader' Role
New-AzureRmRoleAssignment -ServicePrincipalName http://myvmautomation `
                          -RoleDefinitionName 'Reader' `
                          -Scope '/subscriptions/c25b1c8e-xxxx-1111-abcd-1a12d7012123'

We now have all the baseline mechanics out of the way – next it’s onto using this information in our Runbooks.

Asset Setup

Azure Automation has the concept of an Asset that can be one of six items: Schedules, PowerShell Modules, Certificates, Connections, Variables and Credentials.

These are shared between all Runbooks in an Automation Account and are extremely useful in helping you deliver generic re-usable Runbooks.

For this post we are going to create a new Credential using the following process.

Our Automation Account is called ‘Core-Services’ and is hosted in a Resource Group named ‘rg-test-01’.

$username = "c6f7c745-1234-5678-0000-8d14611e75f4"
$password = ConvertTo-SecureString -String "YOUR_SERVICE_PRINCIPAL_KEY" -AsPlainText -Force

$newCreds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username,$password

New-AzureRmAutomationCredential -Name "VMPowerServicePrincipal" `
                                -Description 'Service Principal used to control power state of VMs' `
                                -Value $newCreds `
                                -ResourceGroupName 'rg-test-01' `
                                -AutomationAccountName 'Core-Services'

This creates a Credential we can now use in any Runbook.

The sample PowerShell Runbook below shows how we do this using the Login-AzureRmAccount Cmdlet with the -ServicePrincipal switch.

I also specify a Tenant identifier (this is the Azure AD Tenant identifier from when you setup the Service Principal) and the Subscription identifier so we set context in one call.

The Tenant and Subscription identifiers are held as Automation Account Variables which we read in at the start of execution (the pattern below allows you to override which Variables you pass in should you want to use different ones).
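Here’s a sketch of such a Runbook – the Credential name matches the Asset created above, the Variable names are assumptions, and the power-state check is illustrative rather than definitive:

param (
    [string]$TenantIdVariableName = 'TenantId',
    [string]$SubscriptionIdVariableName = 'SubscriptionId'
)

# Read the shared Automation Assets
$tenantId       = Get-AutomationVariable -Name $TenantIdVariableName
$subscriptionId = Get-AutomationVariable -Name $SubscriptionIdVariableName
$spCredential   = Get-AutomationPSCredential -Name 'VMPowerServicePrincipal'

# Authenticate as the Service Principal and set Subscription context in one call
Login-AzureRmAccount -ServicePrincipal `
                     -TenantId $tenantId `
                     -SubscriptionId $subscriptionId `
                     -Credential $spCredential

# Stop every running v2 (ARM) VM in the Subscription
foreach ($vm in Get-AzureRmVM) {
    $running = (Get-AzureRmVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Status).Statuses |
               Where-Object { $_.Code -eq 'PowerState/running' }
    if ($running) {
        Stop-AzureRmVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Force
    }
}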

So there we have it – a way to perform VM power state management in an Azure Automation Runbook that uses a non-user account for authentication along with custom RBAC roles for authorisation.



Using Active Directory Security Groups to Grant Permissions to Azure Resources

Kloud Blog

The introduction of the Azure Resource Manager platform in Azure continues to expose new possibilities for managing your deployed resources.

One scenario that you may not be aware of is the ability to use scoped RBAC role assignments to grant limited rights to Azure AD-based users and groups.

We know Azure provides us with many built-in RBAC roles, but it may not be immediately obvious that you can control their assignment scope.

What do I mean by this?

Simply that each RBAC role (including custom ones you create) can be used at various levels within Azure starting at the Subscription level (i.e. applies to anything in the Subscription) down to a Resource (i.e. applies just to one particular resource such as a Storage Account). Role assignments are also cascading – if I assign “Owner” rights to a User or Group at the Subscription level then they have that role…

View original post 355 more words


Easy Debugging of PowerShell DSC for Azure Virtual Machines

I’ve been doing a lot of PowerShell DSC on Azure VMs recently, so I thought I’d share my experience in debugging custom DSC Modules when working in Azure.

My blog entry is over on the Kloud blog so head on over and have a read.
