Category Archives: Automation

What happened with my Azure VM Scale Set Instance IDs?

In my last couple of posts (here and here) I've covered moving from Azure VMs to running workloads on VM Scale Sets (VMSS).

One item you need to adjust to is the instance naming scheme used by VMSS.

I was looking at the two VM Scale Sets I have set up in my environment and was surprised to see the following.

The first VM Scale Set I set up has Instances with these IDs (0 and 1):

VMSS 01

and my second has Instances with IDs of 2 and 3:

VMSS 02

I thought it was a bit odd that two unrelated VM Scale Sets seemed to be sharing a common Instance ID source. Perhaps it was something to do with the underlying provisioning engine continuing Instance IDs across VM Scale Sets on a VNet, or something similar?

As it turns out, this is entirely coincidental.

When you provision a VMSS and you set the "overprovision" flag to "true" the provisioning engine will build more Instances than required (you can see this if you watch in the portal while the VMSS is being provisioned) and then delete the excess above what is actually required. This is by design and is described by Microsoft on their VMSS design considerations page.

Here's a snippet of what your ARM template will look like:

"properties": {
  "overprovision": "true",
  "singlePlacementGroup": "false",
  "upgradePolicy": {
    "mode": "Manual"
  }
}

So, for my scenario, it just happens that for the first VMSS the engine deleted the Instances above Instance 1, and for the second VMSS the engine deleted Instances 0 and 1!
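If you want to check the Instance IDs in your own environment without clicking through the Portal, the AzureRm PowerShell module can list them. A minimal sketch (the Resource Group and Scale Set names below are placeholders):

```powershell
# List each instance in a VM Scale Set along with its Instance ID.
# "rg-my-vmss" and "vmss01" are illustrative names - substitute your own.
Get-AzureRmVmssVM -ResourceGroupName "rg-my-vmss" -VMScaleSetName "vmss01" |
    Select-Object Name, InstanceId
```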

Too Easy! 🙂


Deploy a PHP site to Azure Web Apps using Dropbox

I’ve been having some good fun getting into the nitty gritty of Azure’s Open Source support and keep coming across some amazing things.

If you want to move away from those legacy hosting businesses and want a simple method to deploy static or dynamic websites, then this is worth a look.

The sample PHP site I used for this demonstration can be cloned from GitHub here: https://github.com/banago/simple-php-website

The video is without sound, but should be easy enough to follow without it.

It’s so simple even your dog could do it.

Dogue


Continuous Deployment of Windows Services using VSTS

I have to admit writing this post feels a bit “old skool”. Prior to the last week I can’t remember the last time I had to break out a Windows Service to solve anything. Regardless, for one cloud-based IaaS project I’m working on I needed a simple worker-type solution that was private and could post data to a private REST API hosted on the other end of an Azure VNet Peer.

While I could have solved this problem any number of ways, I plumped for a Windows Service, primarily because it will be familiar to the developers and administrators at the organisation I’m working with. But I figured if I’m going to have to deploy onto VMs, I’m sure not deploying in an old-fashioned way! Luckily we’re already running in Azure and hosting on VSTS, so I have access to all the tools I need!

Getting Setup

The setup for this process is very similar to the standard “Deploy to Azure VM” scenario that is very well covered in the official documentation and which I added some context to in a blog post earlier in the year.

Once you have the basics in place (it only takes a matter of minutes to prepare each machine) you can head back here to cover off the changes you need to make.

Note: this process is going to assume you have a Windows Service Project in Visual Studio 2015 that is being built using VSTS’s in-built build infrastructure. If you have other configurations you may need to take different steps to get this to work 🙂

Tweak build artefact output

First we need to make sure that the outputs from our build are stored as artefacts in VSTS. I didn’t use any form of installer packaging here so I needed to ensure my build outputs were all copied to the “drops” folder.

Here is my build definition which is pretty vanilla:

Build Process

The tweak I made was on the Visual Studio build step (step 2) where I defined an additional MSBuild Argument that set the OutputPath to be the VSTS build agent’s artifacts directory which will automatically be copied by the Publish Artifacts step:

Build Update
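For reference, the additional MSBuild Argument uses one of VSTS’s predefined build variables to point the compiler output at the artifact staging directory:

```
/p:OutputPath="$(build.artifactstagingdirectory)"
```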

If I look at a history entry for my CI build and select Artifacts I can see that my Windows Service binary and all its associated assemblies, config files (and importantly Deployment Script) are stored with the build.

Build Artefacts

Now we have the build in the right configuration let’s move on to the deployment.

Deploying a Service

This is actually easier than it used to be :). Many of us would remember the need to package the Windows Service into an MSI and then use InstallUtil.exe to do the install on deployment.

Fear not! You no longer need this approach for Windows Services!

PowerShell FTW!

Yes, that Swiss Army knife comes to the rescue again with the Get-Service, New-Service, Stop-Service and Start-Service Cmdlets.

We can combine these handy Cmdlets in our Deployment script to manage the installation of our Windows Service as shown below.
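A minimal sketch of what such a Deployment Script can look like follows. The service name, display name and install path are placeholders for illustration, and C:\temp\ matches the copy target described below:

```powershell
# Illustrative deployment script - service name and paths are placeholders.
$serviceName = "MyWorkerService"
$installPath = "C:\Services\MyWorkerService"
$binaryPath  = Join-Path $installPath "MyWorkerService.exe"

# Stop the service if it's already installed so we can overwrite its binaries.
$service = Get-Service -Name $serviceName -ErrorAction SilentlyContinue
if ($service -ne $null) {
    Stop-Service -Name $serviceName -Force
}

# Copy the new build outputs from the staging folder used by Release Management.
New-Item -ItemType Directory -Force -Path $installPath | Out-Null
Copy-Item -Path "C:\temp\*" -Destination $installPath -Recurse -Force

# Register the service on first deployment only.
if ($service -eq $null) {
    New-Service -Name $serviceName `
                -BinaryPathName $binaryPath `
                -DisplayName "My Worker Service" `
                -StartupType Automatic
}

Start-Service -Name $serviceName
```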

The Release Management definition remains unchanged – all we had to do was ensure our build outputs were available to copy from the ‘Drop’ folder on the build and that they are copied to C:\temp\ on the target VM(s). Our Deployment Script takes care of the rest!

That’s it! Next time your CI build passes your CD kicks in and your Windows Service will be updated on your target VMs!


Deploying to Azure VMs using VSTS Release Management

I am going to subtitle this post “the missing manual” because I spent quite a bit of time troubleshooting how this should all work.

Microsoft provides a bunch of useful information on how to deploy from Visual Studio Team Services (VSTS) to different targets, including Azure Virtual Machines.

In an ideal world I wouldn’t be using VMs at all, but for my current particular use case I have to use VMs so the above (linked) approach worked.

The approach sounds good but I ran into a few sharp edges that I thought I would document here (and hopefully the source documentation will be updated to reflect this in due course).

Preparing deployment targets

Azure FQDNs

Note: Please see my update at the bottom of this post before reading this section. While you can use IP addresses (if you make them static) it’s worth configuring test certs with the FQDN.

I thought I’d do the right thing by configuring the Azure IP of my hosts to have a full FQDN rather than just an IP address.

As I found out this is not a good idea.

The main issue you will run into is the generated certs on target hosts only have the hostname in them (i.e. azauhost01) rather than the full public FQDN (i.e. azauhost01.australiaeast.cloudapp.azure.com).

When the Release Management infrastructure tries to connect to a host this cert mismatch causes a fatal error. I didn’t spend much time troubleshooting so decided to revert to use of IP addresses only.

When using dynamic IP addresses, the first Release Management action, “Azure Deployment: Select Resource Group”, is important as it allows discovery of all VMs and their IPs (i.e. no hardcoding required). This approach does mean, however, that you need to consider how you group VMs into Resource Groups, so that any VM in the Resource Group can be used as a deployment target.

Select Resource Group

Local permissions

I was running my deployments to non-Domain joined Windows 2012 R2 server instances with local administrative accounts and had opened the necessary port in the NSG rules to allow WinRM into the servers from the VSTS infrastructure.

Everything looked good on deployment until PowerShell execution on the target hosts resulted in errors due to permissions. As it turns out the error message was actually useful in resolving this problem 🙂

In order to move beyond this error I had to prepare each target host by running these commands at an admin command prompt on the host:

winrm quickconfig

Enable-PSRemoting

We could drop these into a DSC module and run that way if we wanted to make this repeatable across new hosts.
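If you did want to make it repeatable, a rough sketch using DSC’s built-in Script resource might look like the following (the configuration and resource names are my own; the TestScript check is deliberately simple):

```powershell
# Sketch of a DSC configuration that enables PowerShell remoting on a host.
Configuration EnableRemoting
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node "localhost"
    {
        Script EnablePSRemoting
        {
            # Report the current WinRM service state.
            GetScript  = { @{ Result = (Get-Service -Name WinRM).Status } }
            # A simple (not exhaustive) check: is the WinRM service running?
            TestScript = { (Get-Service -Name WinRM).Status -eq 'Running' }
            # Equivalent of 'winrm quickconfig' plus Enable-PSRemoting.
            SetScript  = { Set-WSManQuickConfig -Force; Enable-PSRemoting -Force }
        }
    }
}
```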

There is a good troubleshooting overview for this from Microsoft.

Wait! Where did my PowerShell script go?

If you follow the instructions provided by Microsoft you need to add a deployment PowerShell script (actually a DSC module) to your Web App (their example uses “ConfigureWebserver.ps1” as the filename).

There is one issue with this approach – the build step to package the Web App actually ends up bundling the PowerShell inside of a deployment zip which means once the files are copied to your target VM the PowerShell can’t be invoked.

The fix for this is to add an additional build step that copies the PowerShell to the drops folder on VSTS which means the PowerShell will be transferred to the target VM.

Your final build definition should look like the below

Build definition

and the Copy Files task should be configured like this (note below that /Deploy is the folder in my solution that contains the PowerShell I’ve included for deployment purposes):

Build Step

Once you have done this you will find that the script is now available in the VSTS drops folder and can be copied to the VMs which allows you to execute it via the Release Management action.

Wrapping up

Once I had these changes in place and had made some minor path / project name tweaks to match my project I could run the process end-to-end.

The one final item I’ll call out here is the default deployment location of the solution on the target VM ends up being the wwwroot of your inetpub folder with a subfolder named ProjectName_deploy. If you map this to an Application in IIS you should be good to go :).

Update – 8 December 2016

After I’d been running happily for a while my Release started failing. It turns out my target VMs were cycled and moved to different public IP addresses. As the WinRM HTTPS certificate had the old IP address in it, the remote calls failed.

I found a great blog post on how to rectify this situation though: http://www.dotnetcurry.com/windows-azure/1289/configure-winrm-execute-powershell-remote-azure-with-arm

Happy days!


Azure Automation Runbooks with Azure AD Service Principals and Custom RBAC Roles

If you’ve ever worked in any form of systems administrator role then you will be familiar with process automation, even if only for simple tasks like automating backups. You will also be familiar with the pain of configuring and managing identities for these automated processes (has an expired password or a disabled/deleted account ever caused you pain?!)

While the cloud offers us many new ways of working and solving traditional problems, it also maintains many concepts we’d be familiar with from environments of old. As you might have guessed, the idea of specifying user context for automated processes has not gone away (yet).

In this post I am going to look at how, in Azure Automation Runbooks, we can leverage a combination of an Azure Active Directory Service Principal and an Azure RBAC Custom Role to configure a non-user identity with constrained execution scope.

The benefits of this approach are two-fold:

  1. No password expiry to deal with, or accidental account disablement/deletion. I still need to manage keys (in place of passwords), but these are centrally managed and are subject to an expiry policy I define.
  2. Reduced blast radius. Creating a custom role that restricts the available actions means the account can’t be used for more actions than intended.

My specific (simple) scenario is stopping all running v2 VMs in a Subscription.

Create the Custom Role

The Azure Resource Manager (ARM) platform provides a flexible RBAC model within which we can build our own Roles based on a combination of existing Actions. In-built Roles bundle these Actions into logical groups, but there are times we may want something a little different.

The in-built “Virtual Machine Contributor” Role isn’t suitable for my purposes because it provides too much scope by letting assigned users create and delete Virtual Machines. In my case, I want a Role that allows an assigned user to start, stop, restart and monitor existing Virtual Machines only.

To this end I defined a custom Role as shown below which allows any assigned user to perform the functions I need them to.
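Here’s what that definition can look like. The set of power-management actions shown is an illustration based on the Role’s description (start, stop, restart and monitor), and AssignableScopes should contain your own Subscription ID:

```json
{
  "Name": "Virtual Machine Power Manager",
  "Description": "Can monitor, stop, start and restart v2 ARM virtual machines.",
  "IsCustom": true,
  "Actions": [
    "Microsoft.Storage/*/read",
    "Microsoft.Network/*/read",
    "Microsoft.Compute/*/read",
    "Microsoft.Compute/virtualMachines/start/action",
    "Microsoft.Compute/virtualMachines/restart/action",
    "Microsoft.Compute/virtualMachines/deallocate/action",
    "Microsoft.Compute/virtualMachines/powerOff/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/c25b1c8e-1111-4421-9090-1a12d7012dd3"
  ]
}
```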

Let’s add this Role by executing the following PowerShell (you’ll need to be logged into your Subscription with a user who has enough rights to create custom role definitions). You’ll need to grab the above definition, change the scope and save it as a file named ‘vm-power-manager-customrole.json’ for this to work.

New-AzureRmRoleDefinition -InputFile vm-power-manager-customrole.json

which will return a result similar to the below.

Name             : Virtual Machine Power Manager
Id               : 6270aabc-0698-4380-a9a7-7df889e9e67b
IsCustom         : True
Description      : Can monitor, stop, start and restart v2 ARM virtual machines.
Actions          : {Microsoft.Storage/*/read, Microsoft.Network/*/read, Microsoft.Compute/*/read
                   Microsoft.Compute/virtualMachines/start/action...}
NotActions       : {}
AssignableScopes : {/subscriptions/c25b1c8e-1111-4421-9090-1a12d7012dd3}

This means the Role now shows up in the Portal and can be assigned to users 🙂

VM Power Manager Role

Now we have that, let’s setup our Service Principal.

Setup Service Principal

Microsoft provides a good guide on creating a Service Principal on the Azure documentation site already so I’m not going to reproduce that all here.

When you get to “Assign application to role” hop back here and we’ll continue on without needing to dive into the Azure Portal.

For the purpose of the rest of this post, these are the parameters I used to create my Service Principal.

Name: Azure VM Start Stop SP
Sign-on URL / App URI: http://myvmautomation
Client ID*: c6f7c745-1234-5678-0000-8d14611e75f4
Tenant ID*: c7a48abc-1990-4fef-e941-a1cd55422e41

* Client ID is returned once you save your Application. Tenant ID comes from your Azure AD tenant ID (see Microsoft setup instructions referenced above).

Important: you will also have to generate and grab a key value that you will need to use as it is the password for the Service Principal. Don’t forget to grab it when it’s displayed!

Assign the Service Principal the necessary Azure Roles

# Assign our custom Role
New-AzureRmRoleAssignment -ServicePrincipalName http://myvmautomation `
                          -RoleDefinitionName 'Virtual Machine Power Manager' `
                          -Scope '/subscriptions/c25b1c8e-xxxx-1111-abcd-1a12d7012123'

# Assign the built-in 'Reader' Role
New-AzureRmRoleAssignment -ServicePrincipalName http://myvmautomation `
                          -RoleDefinitionName 'Reader' `
                          -Scope '/subscriptions/c25b1c8e-xxxx-1111-abcd-1a12d7012123'

We now have all the baseline mechanics out of the way – next it’s onto using this information in our Runbooks.

Asset Setup

Azure Automation has the concept of an Asset that can be one of six items: Schedules, PowerShell Modules, Certificates, Connections, Variables and Credentials.

These are shared between all Runbooks in an Automation Account and are extremely useful in helping you deliver generic re-usable Runbooks.

For this post we are going to create a new Credential using the following process.

Our Automation Account is called ‘Core-Services’ and is hosted in a Resource Group ‘rg-test-01’

$username = "c6f7c745-1234-5678-0000-8d14611e75f4"
$password = ConvertTo-SecureString -String "YOUR_SERVICE_PRINCIPAL_KEY" -AsPlainText -Force

$newCreds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username,$password

New-AzureRmAutomationCredential -Name "VMPowerServicePrincipal" `
                                -Description 'Service Principal used to control power state of VMs' `
                                -Value $newCreds `
                                -ResourceGroupName 'rg-test-01' `
                                -AutomationAccountName 'Core-Services'

This creates a Credential we can now use in any Runbook.

The sample PowerShell Runbook below shows how we do this using the Login-AzureRmAccount Cmdlet using the -ServicePrincipal switch.

I also specify a Tenant identifier (this is the Azure AD Tenant identifier from when you setup the Service Principal) and the Subscription identifier so we set context in one call.

The Tenant and Subscription identifiers are held as Automation Account Variables, which we read in at the start of execution (the pattern below allows you to override which Variables you pass in, should you want to use different ones).
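A sketch of such a Runbook follows. The Variable names ‘TenantId’ and ‘SubscriptionId’ are assumptions (swap in whatever your Automation Account uses); the Credential name matches the one created earlier:

```powershell
# Sketch of a Runbook that stops all running v2 VMs in the Subscription.
# Parameter defaults name the Assets; override them to use different ones.
param
(
    [string]$TenantIdVariableName       = "TenantId",
    [string]$SubscriptionIdVariableName = "SubscriptionId",
    [string]$CredentialName             = "VMPowerServicePrincipal"
)

# Read shared Assets from the Automation Account.
$tenantId       = Get-AutomationVariable -Name $TenantIdVariableName
$subscriptionId = Get-AutomationVariable -Name $SubscriptionIdVariableName
$credential     = Get-AutomationPSCredential -Name $CredentialName

# Authenticate as the Service Principal and set Subscription context in one call.
Login-AzureRmAccount -ServicePrincipal `
                     -Credential $credential `
                     -TenantId $tenantId `
                     -SubscriptionId $subscriptionId

# Stop (deallocate) every VM that is currently running.
foreach ($vm in Get-AzureRmVM)
{
    $status = Get-AzureRmVM -ResourceGroupName $vm.ResourceGroupName `
                            -Name $vm.Name -Status
    $powerState = ($status.Statuses |
        Where-Object { $_.Code -like "PowerState/*" }).DisplayStatus

    if ($powerState -eq "VM running")
    {
        Stop-AzureRmVM -ResourceGroupName $vm.ResourceGroupName `
                       -Name $vm.Name -Force
    }
}
```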

So there we have it – a way to perform VM power state management in an Azure Automation Runbook that uses a non-user account for authentication along with custom RBAC roles for authorisation.

Enjoy!
