Category Archives: Automation

Easy Filtering of IoT Data Streams with Azure Stream Analytics and JSON reference data

I am currently working on a next-gen widget dispenser solution that is gradually being rolled out to trial sites across Australia. The dispenser hardware is a modern platform that provides telemetry data that can be used for various purposes by the locations at which the dispenser is deployed, and potentially by other third parties.

In addition to these next-gen dispensers, we have existing dispenser hardware at these locations that emits telemetry we already use for other purposes in our solution. To our benefit, both the new and existing hardware emit telemetry data in the same format 🙂

A sample telemetry entry is shown below.
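
For illustration, assume it looks something like this – the field names (siteId, dispenserId and so on) are placeholders, not the actual schema:

{
  "siteId": 1001,
  "dispenserId": 5,
  "eventTimestamp": "2018-01-09T11:40:00Z",
  "eventType": "dispense",
  "status": "OK"
}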

We take all of the telemetry data from new and old hardware at all our sites and feed it into an Azure Event Hub, which allows us to perform multiple actions, such as archiving the data to Blob Storage using Azure Event Hub Capture and processing the streaming data using Azure Stream Analytics.

We decided we wanted to do some additional early-stage testing with some of the next-gen hardware at a few sites. As part of this testing we also wanted to push the data for just that specific hardware to a partner organisation we are working with. So how did we achieve this?

The first step was to set up another Event Hub. We knew this partner would have no issues consuming event data from a Hub, which made Stream Analytics the obvious way to process the complete incoming stream and ensure only the data for the dispensers and sites we specify is sent to the partner.

Stream Analytics has the concept of Reference Data, which takes the form of slow-moving (or static) data that can be read from a blob storage account in Azure.

We identified our sites and dispensers and created our simple Reference Data JSON file – sample below.
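
It looks something like the following (the values are illustrative – the shape is just a site identifier plus an integer array of dispenser IDs, which is how the query further below consumes it):

[
  {
    "SiteId": 1001,
    "Dispensers": [ 1, 3, 5 ]
  },
  {
    "SiteId": 1002,
    "Dispensers": [ 2, 4 ]
  }
]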

The benefit of this format is that we can manage additional sites and dispensers simply by editing this file and uploading it to blob storage! Stream Analytics even provides a useful naming scheme for these files so you don’t need to stop your Stream Analytics Job to update them! We uploaded our first file to a location with the path

/siterefdata/2018-01-09/11-40/sitedispensers.json

In future, when we want to update the file, we edit it and then upload it to blob storage at, say

/siterefdata/2018-02-01/00-00/sitedispensers.json

When the Job hits this date / time (UTC) it will simply pick up the new reference data – how cool is that?!

In order to use the Reference Data auto-update capability you need to set up the path naming scheme when you define the reference data as an input to the Stream Analytics Job. If you don’t need this capability you can simply hard-code the path to, say, a single file.
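
For reference, the input configuration that matches the paths above looks roughly like this (assuming siterefdata is the container name):

Container:    siterefdata
Path pattern: {date}/{time}/sitedispensers.json
Date format:  YYYY-MM-DD
Time format:  HH-mm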

The final piece of the puzzle was to write a Stream Analytics Job that used the Reference Data JSON as one input and read the site identifier and dispensers from the included integer array. Thankfully, the in-built GetArrayElements Function came in handy, along with CROSS APPLY, which lets us iterate over the array elements and use them in the WHERE clause of the query!
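
The query ended up shaped something like the sketch below – the input and output aliases (TelemetryStream, SiteDispensers, PartnerHub) and the field names are illustrative:

SELECT
    t.*
INTO
    PartnerHub
FROM
    TelemetryStream t
JOIN
    SiteDispensers sd
    ON t.SiteId = sd.SiteId
CROSS APPLY
    GetArrayElements(sd.Dispensers) AS dispenser
WHERE
    t.DispenserId = dispenser.ArrayValue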

The resulting solution now neatly carves off the telemetry data for just the dispensers we want at the sites we list and writes it to an Event Hub the partner organisation can consume.

I commented online that this sort of solution – certainly one that scales as easily as this will – would have been unachievable for most organisations even just a few years ago.

The cloud, and specifically Azure, has changed all of that!

Happy Days 😎


What happened with my Azure VM Scale Set Instance IDs?

In my last couple of posts (here and here) I've covered moving from Azure VMs to running workloads on VM Scale Sets (VMSS).

One thing you need to adjust to is the instance naming scheme used by VMSS.

I was looking at the two VM Scale Sets I have set up in my environment and was surprised to see the following.

The first VM Scale Set I set up has Instances with these IDs (0 and 1):

VMSS 01

and my second has Instances with IDs of 2 and 3:

VMSS 02

It seemed a bit weird that two unrelated VM Scale Sets appeared to share a common Instance ID source – maybe something to do with the underlying provisioning engine continuing Instance IDs across VM Scale Sets on a VNet, or something similar?

As it turns out, this is entirely coincidental.

When you provision a VMSS with the "overprovision" flag set to "true", the provisioning engine builds more Instances than required (you can see this if you watch the portal while the VMSS is being provisioned) and then deletes the excess. This is by design and is described by Microsoft on their VMSS design considerations page.

Here's a snippet of what your ARM template will look like:

"properties": {
"overprovision": "true",
"singlePlacementGroup": "false",
"upgradePolicy": {
"mode": "Manual"
}
}

So, for my scenario, it just happens that for the first VMSS the engine deleted the Instances above Instance 1, and for the second VMSS it deleted Instances 0 and 1!

Too Easy! 🙂


Deploy a PHP site to Azure Web Apps using Dropbox

I’ve been having some good fun getting into the nitty gritty of Azure’s Open Source support and keep coming across some amazing things.

If you want to move away from those legacy hosting businesses and want a simple method to deploy static or dynamic websites, then this is worth a look.

The sample PHP site I used for this demonstration can be cloned from GitHub here: https://github.com/banago/simple-php-website

The video is without sound, but should be easy enough to follow without it.

It’s so simple even your dog could do it.

Dogue


Continuous Deployment of Windows Services using VSTS

I have to admit writing this post feels a bit “old skool”. Prior to last week I can’t remember the last time I had to break out a Windows Service to solve anything. Regardless, for one cloud-based IaaS project I’m working on I needed a simple worker-type solution that was private and could post data to a private REST API hosted on the other end of an Azure VNet Peer.

While I could have solved this problem any number of ways, I plumped for a Windows Service primarily because it will be familiar to developers and administrators at the organisation I’m working with. But I figured if I’m going to have to deploy onto VMs, I’m sure not deploying in an old-fashioned way! Luckily we’re already running in Azure and hosting on VSTS, so I have access to all the tools I need!

Getting Setup

The setup for this process is very similar to the standard “Deploy to Azure VM” scenario that is very well covered in the official documentation and which I added some context to in a blog post earlier in the year.

Once you have the basics in place (it only takes a matter of minutes to prepare each machine) you can head back here to cover off the changes you need to make.

Note: this process is going to assume you have a Windows Service Project in Visual Studio 2015 that is being built using VSTS’s in-built build infrastructure. If you have other configurations you may need to take different steps to get this to work 🙂

Tweak build artefact output

First we need to make sure that the outputs from our build are stored as artefacts in VSTS. I didn’t use any form of installer packaging here so I needed to ensure my build outputs were all copied to the “drops” folder.

Here is my build definition which is pretty vanilla:

Build Process

The tweak I made was on the Visual Studio build step (step 2) where I defined an additional MSBuild Argument that set the OutputPath to the VSTS build agent’s artifacts directory, the contents of which are automatically copied by the Publish Artifacts step:

Build Update
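
The argument itself is along these lines (indicative rather than exact – the variable is the standard VSTS build variable):

/p:OutputPath="$(build.artifactstagingdirectory)"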

If I look at a history entry for my CI build and select Artifacts I can see that my Windows Service binary and all its associated assemblies, config files (and, importantly, the Deployment Script) are stored with the build.

Build Artefacts

Now we have the build in the right configuration let’s move on to the deployment.

Deploying a Service

This is actually easier than it used to be :). Many of us would remember the need to package the Windows Service into an MSI and then use InstallUtil.exe to do the install on deployment.

Fear not! You no longer need this approach for Windows Services!

PowerShell FTW!

Yes, that Swiss Army knife comes to the rescue again with the Get-Service, New-Service, Stop-Service and Start-Service Cmdlets.

We can combine these handy Cmdlets in our Deployment script to manage the installation of our Windows Service as shown below.
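
A minimal sketch of the pattern is below – the service name and install path are placeholders, and the C:\temp\ source folder matches the copy step described next:

$serviceName = 'MyWorkerService'                # placeholder service name
$installPath = 'C:\Services\MyWorkerService'    # placeholder install location
$binaryPath  = Join-Path $installPath 'MyWorkerService.exe'

# If the service already exists, stop it before overwriting the binaries
$service = Get-Service -Name $serviceName -ErrorAction SilentlyContinue
if ($service) {
    Stop-Service -Name $serviceName -Force
}

# Copy the new build outputs (Release Management drops them in C:\temp\)
New-Item -Path $installPath -ItemType Directory -Force | Out-Null
Copy-Item -Path 'C:\temp\*' -Destination $installPath -Recurse -Force

# First-time install only: register the Windows Service
if (-not $service) {
    New-Service -Name $serviceName -BinaryPathName $binaryPath -StartupType Automatic
}

Start-Service -Name $serviceName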

The Release Management definition remains unchanged – all we had to do was ensure our build outputs were available to copy from the ‘Drop’ folder on the build and that they are copied to C:\temp\ on the target VM(s). Our Deployment Script takes care of the rest!

That’s it! Next time your CI build passes your CD kicks in and your Windows Service will be updated on your target VMs!


Deploying to Azure VMs using VSTS Release Management

I am going to subtitle this post “the missing manual” because I spent quite a bit of time troubleshooting how this should all work.

Microsoft provides a bunch of useful information on how to deploy from Visual Studio Team Services (VSTS) to different targets, including Azure Virtual Machines.

Updated Nov 2017: it looks like Microsoft has updated their documentation site making the above link invalid. Take a look at the bottom of this post to see how you can use Release Management to deploy the package and run the PowerShell I talk about.

In an ideal world I wouldn’t be using VMs at all, but for my current particular use case I have to use VMs so the above (linked) approach worked.

The approach sounds good but I ran into a few sharp edges that I thought I would document here (and hopefully the source documentation will be updated to reflect this in due course).

Preparing deployment targets

Azure FQDNs

Note: Please see my update at the bottom of this post before reading this section. While you can use IP addresses (if you make them static) it’s worth configuring test certs with the FQDN.

I thought I’d do the right thing by configuring the Azure IP of my hosts to have a full FQDN rather than just an IP address.

As I found out this is not a good idea.

The main issue you will run into is that the generated certs on target hosts only have the hostname in them (e.g. azauhost01) rather than the full public FQDN (e.g. azauhost01.australiaeast.cloudapp.azure.com).

When the Release Management infrastructure tries to connect to a host this cert mismatch causes a fatal error. I didn’t spend much time troubleshooting so decided to revert to use of IP addresses only.

When using dynamic IP addresses the first Release Management action “Azure Deployment: Select Resource Group” is important as it allows for discovery of all VMs and their IPs (i.e. no hardcoding required). This approach does mean, however, you need to consider how you group VMs into Resource Groups to allow any VM in the Resource Group to be used as the deployment target.

Select Resource Group

Local permissions

I was running my deployments to non-Domain joined Windows 2012 R2 server instances with local administrative accounts and had opened the necessary port in the NSG rules to allow WinRM into the servers from the VSTS infrastructure.

Everything looked good on deployment until PowerShell execution on the target hosts resulted in errors due to permissions. As it turns out the error message was actually useful in resolving this problem 🙂

In order to move beyond this error I had to prepare each target host by running these commands at an admin command prompt on the host:

winrm quickconfig

Enable-PSRemoting

We could drop these into a DSC module and run that way if we wanted to make this repeatable across new hosts.
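
As an illustrative sketch (not a tested configuration), that module could wrap the two commands in a Script resource:

Configuration EnableVstsRemoting {
    Node 'localhost' {
        Script EnableWinRM {
            # Report the current WinRM service state
            GetScript  = { @{ Result = (Get-Service -Name 'WinRM').Status.ToString() } }
            # Simplification: treat a running WinRM service as 'configured'
            TestScript = { (Get-Service -Name 'WinRM').Status -eq 'Running' }
            # The same two commands we ran manually above
            SetScript  = {
                winrm quickconfig -quiet
                Enable-PSRemoting -Force
            }
        }
    }
}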

There is a good troubleshooting overview for this from Microsoft.

Wait! Where did my PowerShell script go?

If you follow the instructions provided by Microsoft you need to add a deployment PowerShell script (actually a DSC module) to your Web App (their example uses “ConfigureWebserver.ps1” for the filename).

There is one issue with this approach – the build step that packages the Web App ends up bundling the PowerShell inside a deployment zip, which means that once the files are copied to your target VM the PowerShell can’t be invoked.

The fix for this is to add an additional build step that copies the PowerShell to the drops folder on VSTS which means the PowerShell will be transferred to the target VM.

Your final build definition should look like the one below

Build definition

and the Copy Files task should be configured like this (note below that /Deploy is the folder in my solution that contains the PowerShell I’ve included for deployment purposes):

Build Step

Once you have done this you will find that the script is now available in the VSTS drops folder and can be copied to the VMs which allows you to execute it via the Release Management action.

Wrapping up

Once I had these changes in place and had made some minor path / project name tweaks to match my project I could run the process end-to-end.

The one final item I’ll call out here is that the default deployment location of the solution on the target VM ends up being the wwwroot of your inetpub folder, with a subfolder named ProjectName_deploy. If you map this to an Application in IIS you should be good to go :).
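
For example, mapping it with the WebAdministration module might look like this (the site and application names are placeholders):

Import-Module WebAdministration
New-WebApplication -Site 'Default Web Site' -Name 'MyWebApp' `
                   -PhysicalPath 'C:\inetpub\wwwroot\ProjectName_deploy'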

Update – 8 December 2016

After I’d been running happily for a while my Release started failing. It turns out my target VMs were cycled and moved to different public IP addresses. As the WinRM certificate had the old IP address in it, the remote calls failed.

I found a great blog post on how to rectify this situation though: http://www.dotnetcurry.com/windows-azure/1289/configure-winrm-execute-powershell-remote-azure-with-arm

Update – 18 November 2017

As mentioned above, Microsoft has reorganised some of their content and seems to have retired the post I used originally. The missing piece to my post here is how does Release Management (RM) fit into the picture – see below for what my RM process looks like. The highlighted Task is the one that executes the PowerShell on the VM – details below as well!

Release Management Process

Here is the PowerShell Task.

PowerShell Script


Azure Automation Runbooks with Azure AD Service Principals and Custom RBAC Roles

If you’ve ever worked in any form of systems administrator role then you will be familiar with process automation, even if only for simple tasks like automating backups. You will also be familiar with the pain of configuring and managing identities for these automated processes (has an expired password or a disabled/deleted account ever caused you any pain?!)

While the cloud offers us many new ways of working and solving traditional problems, it also maintains many concepts we’d be familiar with from environments of old. As you might have guessed, the idea of specifying user context for automated processes has not gone away (yet).

In this post I am going to look at how, in Azure Automation Runbooks, we can leverage a combination of an Azure Active Directory Service Principal and an Azure RBAC Custom Role to configure a non-user identity with constrained execution scope.

The benefits of this approach are two-fold:

  1. No password expiry to deal with, or accidental account disablement/deletion. I still need to manage keys (in place of passwords), but these are centrally managed and are subject to an expiry policy I define.
  2. Reduced blast radius. Creating a custom role that restricts the available actions means the account can’t be used for more actions than intended.

My specific (simple) scenario is stopping all running v2 VMs in a Subscription.

Create the Custom Role

The Azure Resource Manager (ARM) platform provides a flexible RBAC model within which we can build our own Roles based on a combination of existing Actions. In-built Roles bundle these Actions into logical groups, but there are times we may want something a little different.

The in-built “Virtual Machine Contributor” Role isn’t suitable for my purposes because it provides too much scope by letting assigned users create and delete Virtual Machines. In my case, I want a Role that allows an assigned user to start, stop, restart and monitor existing Virtual Machines only.

To this end I defined a custom Role as shown below which allows any assigned user to perform the functions I need them to.
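
Here’s a reconstruction of that definition – the read Actions, Name, Description and AssignableScopes match the output further below, while the specific power Actions (start/restart/deallocate) are assumed from the Role’s description:

{
  "Name": "Virtual Machine Power Manager",
  "IsCustom": true,
  "Description": "Can monitor, stop, start and restart v2 ARM virtual machines.",
  "Actions": [
    "Microsoft.Storage/*/read",
    "Microsoft.Network/*/read",
    "Microsoft.Compute/*/read",
    "Microsoft.Compute/virtualMachines/start/action",
    "Microsoft.Compute/virtualMachines/restart/action",
    "Microsoft.Compute/virtualMachines/deallocate/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/c25b1c8e-1111-4421-9090-1a12d7012dd3"
  ]
}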

Let’s add this Role by executing the following PowerShell (you’ll need to be logged into your Subscription with a user who has enough rights to create custom role definitions). You’ll need to grab the above definition, change the scope and save it as a file named ‘vm-power-manager-customrole.json’ for this to work.

New-AzureRmRoleDefinition -InputFile vm-power-manager-customrole.json

which will return a result similar to the below.

Name             : Virtual Machine Power Manager
Id               : 6270aabc-0698-4380-a9a7-7df889e9e67b
IsCustom         : True
Description      : Can monitor, stop, start and restart v2 ARM virtual machines.
Actions          : {Microsoft.Storage/*/read, Microsoft.Network/*/read, Microsoft.Compute/*/read
                   Microsoft.Compute/virtualMachines/start/action...}
NotActions       : {}
AssignableScopes : {/subscriptions/c25b1c8e-1111-4421-9090-1a12d7012dd3}

This means the Role shows up in the Portal and can be assigned to users 🙂

VM Power Manager Role

Now we have that, let’s setup our Service Principal.

Setup Service Principal

Microsoft provides a good guide on creating a Service Principal on the Azure documentation site already so I’m not going to reproduce that all here.

When you get to “Assign application to role” hop back here and we’ll continue on without needing to dive into the Azure Portal.

For the purpose of the rest of this post, these are the parameters I used to create my Service Principal.

Name: Azure VM Start Stop SP
Sign-on URL / App URI: http://myvmautomation
Client ID*: c6f7c745-1234-5678-0000-8d14611e75f4
Tenant ID*: c7a48abc-1990-4fef-e941-a1cd55422e41

* Client ID is returned once you save your Application. Tenant ID comes from your Azure AD tenant (see the Microsoft setup instructions referenced above).

Important: you will also have to generate a key value, which is the password for the Service Principal. Don’t forget to grab it when it’s displayed!

Assign the Service Principal the necessary Azure Roles

# Assign our custom Role
New-AzureRmRoleAssignment -ServicePrincipalName http://myvmautomation `
                          -RoleDefinitionName 'Virtual Machine Power Manager' `
                          -Scope '/subscriptions/c25b1c8e-xxxx-1111-abcd-1a12d7012123'

# Assign the built-in 'Reader' Role
New-AzureRmRoleAssignment -ServicePrincipalName http://myvmautomation `
                          -RoleDefinitionName 'Reader' `
                          -Scope '/subscriptions/c25b1c8e-xxxx-1111-abcd-1a12d7012123'

We now have all the baseline mechanics out of the way – next it’s onto using this information in our Runbooks.

Asset Setup

Azure Automation has the concept of an Asset that can be one of six items: Schedules, PowerShell Modules, Certificates, Connections, Variables and Credentials.

These are shared between all Runbooks in an Automation Account and are extremely useful in helping you deliver generic re-usable Runbooks.

For this post we are going to create a new Credential using the following process.

Our Automation Account is called ‘Core-Services’ and is hosted in the Resource Group ‘rg-test-01’.

$username = "c6f7c745-1234-5678-0000-8d14611e75f4"
$password = ConvertTo-SecureString -String "YOUR_SERVICE_PRINCIPAL_KEY" -AsPlainText -Force

$newCreds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username,$password

New-AzureRmAutomationCredential -Name "VMPowerServicePrincipal" `
                                -Description 'Service Principal used to control power state of VMs' `
                                -Value $newCreds `
                                -ResourceGroupName 'rg-test-01' `
                                -AutomationAccountName 'Core-Services'

This creates a Credential we can now use in any Runbook.

The sample PowerShell Runbook below shows how we do this with the Login-AzureRmAccount Cmdlet and the -ServicePrincipal switch.

I also specify a Tenant identifier (this is the Azure AD Tenant identifier from when you setup the Service Principal) and the Subscription identifier so we set context in one call.

The Tenant and Subscription identifiers are held as Automation Account Variables which we read in at the start of execution (the pattern below allows you to override which Variables you pass in should you want to use different ones).
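
Here’s a sketch of that Runbook – the Variable and parameter names are illustrative:

param (
    # Names of the Automation Assets to read - override to use different ones
    [string]$CredentialAssetName        = 'VMPowerServicePrincipal',
    [string]$TenantIdVariableName       = 'TenantId',
    [string]$SubscriptionIdVariableName = 'SubscriptionId'
)

# Read the shared Assets we created earlier
$servicePrincipalCreds = Get-AutomationPSCredential -Name $CredentialAssetName
$tenantId              = Get-AutomationVariable -Name $TenantIdVariableName
$subscriptionId        = Get-AutomationVariable -Name $SubscriptionIdVariableName

# Authenticate as the Service Principal and set Subscription context in one call
Login-AzureRmAccount -ServicePrincipal `
                     -Credential $servicePrincipalCreds `
                     -TenantId $tenantId `
                     -SubscriptionId $subscriptionId

# Stop (deallocate) every running v2 VM in the Subscription
Get-AzureRmVM -Status |
    Where-Object { $_.PowerState -eq 'VM running' } |
    ForEach-Object {
        Stop-AzureRmVM -Name $_.Name -ResourceGroupName $_.ResourceGroupName -Force
    }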

So there we have it – a way to perform VM power state management in an Azure Automation Runbook that uses a non-user account for authentication along with custom RBAC roles for authorisation.

Enjoy!
