Microsoft Open Source Roadshow – Free training on Open Source and Azure


In early August I’ll be running a couple of free training days covering how developers who work in the Open Source space can bring their solutions to Microsoft’s Azure public cloud.

We’ll cover language support (application and Azure SDK), OS support (Linux, BSD), Datastores (MySQL, PostgreSQL, MongoDB), Continuous Deployment and, last but not least, Containers (Docker, Container Registry, Kubernetes, et al).

We’re starting off in Sydney and Melbourne, so if you’re interested here are the links to register:

  • Sydney – Monday 7 August 2017: Register
  • Melbourne – Friday 11 August 2017: Register

If you have any questions you’d like to see answered at these events, feel free to leave a comment.

I hope to see you there!


Deploy a PHP site to Azure Web Apps using Dropbox

I’ve been having some good fun getting into the nitty gritty of Azure’s Open Source support and keep coming across some amazing things.

If you want to move away from those legacy hosting businesses and want a simple method to deploy static or dynamic websites, then this is worth a look.

The sample PHP site I used for this demonstration can be cloned on Github here:

The video is without sound, but should be easy enough to follow without it.

It’s so simple even your dog could do it.



Zero to MySQL in less than 10 minutes with Azure Database for MySQL and Azure Web Apps

I’m a long-time fan of Open Source and have spent chunks of my career knocking out LAMP solutions since before ‘LAMP’ was a thing!

Over the last few years we have seen a revived Microsoft begin to embrace (not ’embrace and extend’) the various Open Source platforms and tools that are out there and to actively contribute and participate with them.

Here’s our challenge today – set up a MySQL environment, including a web-based management UI, with zero local installation on your machine and minimal mouse clicks.

Welcome to Azure Cloud Shell

Our first step is to head on over to the Azure portal at and log in.

Once you are logged in open up a Cloud Shell instance by clicking on the icon at the top right of the navigation bar.

Cloud Shell

If this is the first time you’ve run it you will be prompted to create a home file share. Go ahead and do that :).

Once completed, run this command and note down the unique ID of the Subscription you’re using (or note the ID of the one you want to use!)

az account list

MySQL Magic

Now the fun begins! I bet you’re thinking “lots of CLI action”, and you’d be right. With a twist!

I’m going to present things using a series of simple bash scripts – you could easily combine these into one script and improve their argument handling, but I wanted to show the individual steps rather than write one impenetrable uber script!

Here’s our script to set up MySQL in Azure.
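The script was embedded as a Gist, so as a guide here’s a sketch of what it likely contains, inferred from the arguments passed to it below (all defaults are the demo values used later in the post; the `run` helper just echoes each command so you can read the flow without an Azure session – set DRY_RUN=0 to execute for real):

```shell
#!/bin/bash
# setup-mysql.sh -- hedged sketch of the provisioning script; argument
# order is inferred from the invocation shown later in this post.
run() { echo "+ $*"; if [ "${DRY_RUN:-1}" = "0" ]; then "$@"; fi; }

SUBSCRIPTION_ID=${1:-368ff49e-XXXX-XXXX-XXXX-eb42e73e2f25}
LOCATION=${2:-westus}
RESOURCE_GROUP=${3:-mysqldemorg}
SERVER_NAME=${4:-mysqldemo01}
ADMIN_USER=${5:-yawadmin}
ADMIN_PASSWORD=${6:-'5ecurePass@word!'}

# Select the target Subscription, create a Resource Group, then the server.
run az account set --subscription "$SUBSCRIPTION_ID"
run az group create --name "$RESOURCE_GROUP" --location "$LOCATION"
run az mysql server create --resource-group "$RESOURCE_GROUP" \
    --name "$SERVER_NAME" --location "$LOCATION" \
    --admin-user "$ADMIN_USER" --admin-password "$ADMIN_PASSWORD"
```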

Now I could get you to type that out, or upload to your cloud share via the Portal, but that’s no fun!

At your Cloud Shell prompt run the following (update the last command line with your arguments):

curl -O -L
chmod 755

# make sure you update the parameters for your environment
./ 368ff49e-XXXX-XXXX-XXXX-eb42e73e2f25 westus mysqldemorg mysqldemo01 yawadmin 5ecurePass@word!

After a few minutes you will have a MySQL server ready for use. Note that you won’t be able to connect to it yet as its firewall is shut by default (which is a good thing). We’ll rectify connectivity later. For now, on to the next piece of the puzzle.

Manage MySQL from a browser

No, not via some super-duper Microsoft MySQL tooling, but via everyone’s old favourite phpMyAdmin.

Surely this will take a lot of work to install I hear you ask? Not at all!

Enter Azure Web App Site Extensions! Specifically the phpMyAdmin extension.

Let’s get started by creating a new App Service Plan and Web App to which we can deploy our management web application.

We’re going to re-use the trick we learned above – pulling a Gist from Github using curl. First have a read through the script :).
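Like the MySQL script, this one lives in a Gist, but given the two extra arguments in its invocation it plausibly amounts to the following sketch (the SKU choice is an assumption; `run` echoes each command unless DRY_RUN=0):

```shell
#!/bin/bash
# create-webapp.sh -- sketch; the last two arguments are the new App
# Service Plan and Web App names.
run() { echo "+ $*"; if [ "${DRY_RUN:-1}" = "0" ]; then "$@"; fi; }

RESOURCE_GROUP=${3:-mysqldemorg}
APP_PLAN=${7:-mydemoapplan}
WEBAPP_NAME=${8:-msqlmgewebapp}

run az appservice plan create --resource-group "$RESOURCE_GROUP" \
    --name "$APP_PLAN" --sku S1
run az webapp create --resource-group "$RESOURCE_GROUP" \
    --plan "$APP_PLAN" --name "$WEBAPP_NAME"

# Capture the outbound IP addresses -- you'll need these for the
# firewall step further down.
run az webapp show --resource-group "$RESOURCE_GROUP" --name "$WEBAPP_NAME" \
    --query outboundIpAddresses --output tsv
```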

You can download this Gist from within your Cloud Shell and execute it as follows. Make sure to update the command line arguments.

curl -O -L
chmod 755

# make sure you update the parameters for your environment
./ 368ff49e-XXXX-XXXX-XXXX-eb42e73e2f25 westus mysqldemorg mysqldemo01 yawadmin 5ecurePass@word! mydemoapplan msqlmgewebapp

In order to deploy the Web App Site Extension we are going to dig a bit behind the covers of Azure App Services and utilise the REST API provided by the kudu site associated with our Web App (this appears as ‘Advanced tools’ in the Portal). If you want to understand more about its capabilities, and specifically about how to work with Site Extensions, you can read the excellent kudu documentation.

Note: if you haven’t previously set up a Git / FTP deployment user you should uncomment the line that does this. Be aware that this step sets the same credentials for all instances in the Subscription, so if you already have credentials defined think twice before uncommenting this line!

curl -O -L
chmod 755

# make sure you update the parameters for your environment
./ 368ff49e-XXXX-XXXX-XXXX-eb42e73e2f25 mysqldemorg msqlmgewebapp deployuser d3pl0yP455!
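The kudu REST API makes the actual installation a single authenticated PUT against the Web App’s `.scm.` endpoint, so the script above most likely contains a call along these lines (the extension id and credentials are illustrative; `run` echoes the command unless DRY_RUN=0):

```shell
run() { echo "+ $*"; if [ "${DRY_RUN:-1}" = "0" ]; then "$@"; fi; }

WEBAPP_NAME=${3:-msqlmgewebapp}
DEPLOY_USER=${4:-deployuser}
DEPLOY_PASS=${5:-'d3pl0yP455!'}

# PUT /api/siteextensions/{id} asks kudu to install the named Site Extension.
run curl -X PUT -u "$DEPLOY_USER:$DEPLOY_PASS" \
    "https://$WEBAPP_NAME.scm.azurewebsites.net/api/siteextensions/phpMyAdmin"
```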


Browse to your freshly set up phpMyAdmin instance.

Connection Error

Oh noes!

Yes, we forgot to open up the firewall surrounding the Azure Database for MySQL instance. We can do this pretty easily.

Remember those ‘outboundIpAddresses’ values you captured when you created the Web App above? Good, this is where you will need them.

You should find that you have four source IP addresses from which outbound traffic can originate. These won’t change as long as you don’t “stop” or delete the Web App.

Here’s our simple script to enable the Web App to talk to our MySQL instance.
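This script is another Gist; per allowed address it almost certainly calls `az mysql server firewall-rule create`, once for each of the Web App’s outbound IPs, roughly as follows (the IPs shown are placeholders; `run` echoes each command unless DRY_RUN=0):

```shell
#!/bin/bash
# open-mysql-firewall.sh -- sketch; substitute your Web App's actual
# outboundIpAddresses values for the placeholder list.
run() { echo "+ $*"; if [ "${DRY_RUN:-1}" = "0" ]; then "$@"; fi; }

RESOURCE_GROUP=${2:-mysqldemorg}
SERVER_NAME=${3:-mysqldemo01}
OUTBOUND_IPS="13.66.1.1 13.66.2.2 13.66.3.3 13.66.4.4"

i=0
for ip in $OUTBOUND_IPS; do
  i=$((i + 1))
  # One single-address rule per outbound IP.
  run az mysql server firewall-rule create \
      --resource-group "$RESOURCE_GROUP" --server "$SERVER_NAME" \
      --name "webapp-outbound-$i" \
      --start-ip-address "$ip" --end-ip-address "$ip"
done
```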

Now the usual drill.

Note: You might have more than four outbound IP addresses to allow – if so simply edit the script to suit.

curl -O -L
chmod 755

# make sure you update the parameters for your environment
./ 368ff49e-XXXX-XXXX-XXXX-eb42e73e2f25 mysqldemorg mysqldemo01

Once the rules are applied, try refreshing the web browser and you will be viewing the glorious phpMyAdmin.

phpMyAdmin on Azure!

Congratulations, you now have a fully functional MySQL environment, with no installs and minimal configuration required!


Speaking at Office 365 Saturday

If you’re interested to learn more about Microsoft Graph API and how you can leverage it to build compelling solutions in the form of Bots in Microsoft Teams, I’ll be speaking at Office 365 Saturday in Sydney this week on June 3rd.

Tickets are free, but get in while there are still some left!

O365 Saturday Sydney

Saturday, Jun 3, 2017, 8:45 AM

Clifton’s Sydney
60 Margaret Street Sydney, AU


Welcome to the 2017 edition of Sydney Office 365 Saturday. Join administrators, end users, architects, developers, and other professionals that work with Microsoft Technologies for a great day of awesome sessions presented by industry experts. O365 Saturday is a fantastic day to learn…



If you are interested in the demonstration I ran during my talk you can download the code from Github.

The way the bot hangs together is shown below.

How is the bot built?


A Year with Azure Functions

“The best journeys answer questions that in the beginning you didn’t even think to ask.” – Jeff Johnson – 180° South

I thought, with the announcement at Build 2017 of the preview of the Functions tooling for Visual Studio 2017, that I would take a look back on the journey I’ve been on with Functions for the past 12 months. This post is a chance to cover what’s changed and capture some of the lessons I learned along the way – hopefully you can take something away from them too.

In the beginning

I joined the early stages of a project team at a customer and was provided with access to their Microsoft Azure cloud environment as a way to stand up rapid prototypes and deliver ongoing solutions.

I’ve been keen for a while to do away with as much traditional infrastructure as possible in my solutions, primarily as a way to make ongoing management a much easier proposition (plus it aligns with my developer sensibilities). Starting with this environment I was determined to apply this where I could.

In this respect the (at the time) newly announced Function Apps fit the bill!

The First Release

I worked on a couple of early prototype systems for the customer that leveraged Azure Functions in their pre-GA state, edited directly in the Browser. I’m a big C# fan, so all our Functions have been C# flavoured.

After hacking away in the browser I certainly found out how much I missed Visual Studio’s Intellisense, though my C# improved after a few years away from deep use of the language!

We ran some field tests with the system, and out of this took some lessons away.

I blogged about Functions with KeyVault and also on how to deliver emails using Functions with SendGrid.

At this point I also learnt a key lesson: Functions are awesome, but may not be best suited to time-critical operations (where the trigger speed is measured in milliseconds or low seconds). This was particularly the case with uncompiled Function Apps (which at the time was the only option).

I wrote a Function to push a synthetic transaction into our pipeline periodically to offset some of this behaviour. Even after multiple revisions we’re still doing this for certain activities, such as retrieving and caching Graph API access tokens.

Another pain point for us at this juncture was the lack of continuous deployment options. Our workaround was a basic copy / paste into a Web project held in a Git repository in VSTS.

Around the end of our initial field trials Functions hit GA which was perfect for us!

The Second Release

I now had a production-ready system running Functions at its core. At this stage we had all our Functions in a single App because the load wasn’t sufficient to break them out.

Our service was yet to launch when the Functions team announced a CI / CD experience using the ‘Deployment Options’ setting in the Azure Portal.

We revved to a second release, using the new CI / CD deployment option to enforce deployments to a production App Service Plan. We were still living with on-demand compilation which we found impactful at deployment time as everything had to be restored and recompiled.

The Third Release (funproj all round!)

Visual Studio 2015 tooling was announced!

Yep, we used it – welcome to funproj Revision (3) of the codebase!

About now, one of my earlier blogs on sending email came back to haunt me as I began to (and still do periodically) receive emails from people clearly trying out my sample code!

Ah… the dangers of writing demonstrations that have your real email address in the To: field.

One item we’ve done with each major revision of the Functions code is to publish a new App Service Plan and use this as our deployment target. We parameterise as much as we can so we can pre-populate things like Service Bus connection strings. Our newly deployed Functions are disabled and then manually cut over one at a time until we’re satisfied that the new deployment is in a happy state. The n-1 Function App will live for a couple of weeks after initial deployment with its Functions disabled. This was all decided before deployment slot support arrived, which may change this design 🙂

We have a Function we can run that checks all our environment variables are set up in a new App Plan – hopefully in future we’ll get this automated away as well!

Additionally, if we have a major piece of code changing, but not sufficient for a major revision, we’ll do an Azure-level backup of the App before we run the deployment so we have a solid recovery point to return to if need be.

Function runtime versioning

We had one particularly bad day where our Functions started playing up out-of-the-blue. It turned out that an unintended breaking change had been made by the Functions team when they did a new release.

The key lesson from this experience: our production Function Apps now always run a known-good version of the runtime, pinned by setting FUNCTIONS_EXTENSION_VERSION to a specific value like ‘1.0.10576’ rather than ‘~1’ (which means ‘latest non-breaking 1.x release’).

We do run our development App Services using ‘~1’, however, so we can periodically decide to move our production runtime up to the latest known-good version and not lag too far behind the good work the Functions team is doing.
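Since the runtime version is just an App Setting, pinning (or floating) it is scriptable. Assuming the standard Azure CLI, something like this would do it (app and group names are placeholders; the `run` helper echoes rather than executes unless DRY_RUN=0):

```shell
run() { echo "+ $*"; if [ "${DRY_RUN:-1}" = "0" ]; then "$@"; fi; }

# Production: pin to a known-good runtime build.
run az functionapp config appsettings set \
    --resource-group prod-rg --name prod-funcapp \
    --settings FUNCTIONS_EXTENSION_VERSION=1.0.10576

# Development: float on the latest non-breaking 1.x release.
run az functionapp config appsettings set \
    --resource-group dev-rg --name dev-funcapp \
    --settings FUNCTIONS_EXTENSION_VERSION=~1
```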

Our other main issue during this revision was that our App Plan stopped being able to read the x509 cert store for some reason and our KeyVault client started failing. I never got to the bottom of it, but it was fixed through deploying to a new Plan.

Talking Functions

I was lucky enough to get to talk a bit about the solution we’d been building in early February.

The video quality varies, but here it is if you’re interested.

Hello Compiled Functions… oooooh Application Insights! (Rev 4!)

Compiled Functions? Sure, why not?!

The benefit here is the reduced startup times produced by the pre-compiled nature of the solution. The only downside we’ve found is deployments can cause transient thread-abort exceptions in some libraries, but these are not substantial and we deal with them gracefully in our solution.

As an example, we had previously seen Function cold start times of up to 15 seconds. We now rarely see any impact, some of which is the result of lessons we’ve learned along the way, coupled with the compiled codebase and the hard work the Functions team is doing to continuously improve their platform.

Early on in the life of our solution we had looked at Application Insights as a way to track our Function App performance and trace logging. We abandoned it because we ended up sometimes writing more App Insights code than app code! The newly introduced App Insights support, however, does away with all of this and it works a treat:

Functions App Insights

Unfortunately as part of the move we also dropped Visual Studio 2015 support, which meant we lost the ‘funproj’ project layout. As Compiled Functions now give us local design-time benefits we didn’t get before, it’s a good trade-off to my mind.

So… revision 5 anyone?!

We’re not there yet, though once the Visual Studio 2017 tooling hits GA we’ll probably look seriously at moving to it.

Ideally we’ll also move to a more fully testable solution and one that supports VSTS Release Management which we use for many other aspects of our overall solution.

Gee… sounds like a lot of work!

This might sound like a lot of busy work, but to be honest, the microservices we’re running using Functions are so small as to be easy to port from one revision to the next. We make conscious decisions around the right time to move, when we feel that the benefits to be realised are worth the small hit to our productivity.

Ultimately our goal is as shown below (minus the flames, of course!)

That’s a wrap!

The Functions team is open in their work – ask them questions on Stack Overflow

Raise your issues and submit your PRs here:

Happy Days 🙂


Global Azure Bootcamp 2017 Session – .Net Core, Docker and Kubernetes

If you are attending my session and would like to undertake the exercise here’s what you’ll need to install locally, along with instructions on working with the code.

Pro-tip: As this is a demo consider using an Azure Region in North America where the compute cost per minute is lower than in Australia.


Note that for both the Azure CLI and Kubernetes tools you might need to modify your PC’s PATH variable to include the paths to the ‘az’ and ‘kubectl’ commands.

On my install these ended up in:

az: C:\Users\simon\AppData\Roaming\Python\Python36\Scripts\
kubectl: c:\Program Files (x86)\

If you have any existing PowerShell or Command prompts open you will need to close and re-open to pick up the new path settings.

Readying your Docker-hosted App

When you compile your solution to deploy to Docker running on Azure, make sure you select the ‘Release’ configuration in Visual Studio. This will ensure the right entry point is created so your containers will start when deployed in Azure. If you run into issues, make sure you have this setting right!

If you compile the Demo2 solution it will produce a Docker image with the tag ‘1.0’. You can then compile the Demo3 solution, which will produce a Docker image with the tag ‘1.1’. They can both live in your local environment side-by-side with no issues.

Log into your Azure Subscription

Open up PowerShell or a Command Prompt and log into your subscription.

az login

Note: if you have more than one Subscription you will need to use the az account command to select the right one.
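For example (the subscription ID is a placeholder; the `run` helper echoes each command unless you set DRY_RUN=0):

```shell
run() { echo "+ $*"; if [ "${DRY_RUN:-1}" = "0" ]; then "$@"; fi; }

# List your subscriptions, then make the one you want the active default.
run az account list --output table
run az account set --subscription "368ff49e-XXXX-XXXX-XXXX-eb42e73e2f25"
```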

Creating an Azure Container Service with Kubernetes.

Before you do this step, make sure you have created your Azure Container Registry (ACR). The Azure Container Service has some logic built-in that will make it easier to pull images from ACR and avoid a bunch of cross-configuration. The ACR has to be in the same Subscription for this to work.

I chose to create an ACS instance using the Azure CLI because it allows me to control the Kubernetes cluster configuration better than the Portal.

The easiest way to get started is to follow the Kubernetes walk-through on the Microsoft Docs site.

Continue until you get to the “Create your first Kubernetes service” section and then stop.
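For reference, the cluster-creation part of that walk-through boils down to roughly these commands (group and cluster names are placeholders; `run` echoes each command unless DRY_RUN=0):

```shell
run() { echo "+ $*"; if [ "${DRY_RUN:-1}" = "0" ]; then "$@"; fi; }

run az group create --name gabdemo-rg --location westus
run az acs create --orchestrator-type kubernetes \
    --resource-group gabdemo-rg --name gabdemo-cluster --generate-ssh-keys

# Merge the new cluster's credentials into your local kubectl config.
run az acs kubernetes get-credentials \
    --resource-group gabdemo-rg --name gabdemo-cluster
```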

Publishing to your Registry

As a starting point make sure you enable the admin user for your Registry so you can push images to it. You can do this via the Portal under “Access keys”.
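If you’d rather stay at the command line, the same thing can be done with the CLI (the registry name is a placeholder; `run` echoes unless DRY_RUN=0):

```shell
run() { echo "+ $*"; if [ "${DRY_RUN:-1}" = "0" ]; then "$@"; fi; }

# Enable the admin account, then retrieve the username/password pair
# you'll use for `docker login`.
run az acr update --name myregistry --admin-enabled true
run az acr credential show --name myregistry
```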

Start a new PowerShell prompt and let’s make sure we’re good to go by checking that our compiled .Net Core solution images are present.

docker images

REPOSITORY                          TAG      IMAGE ID       CREATED  SIZE
siliconvalve.gab2017.demowebcore    1.0      e0f32b05eb19   1m ago   317MB
siliconvalve.gab2017.demowebcore    1.1      a0732b0ead13   1m ago   317MB


Now let’s log into our Azure Registry.

docker login -u ADMIN_USER -p PASSWORD
Login Succeeded

Let’s tag and push our local images to our Azure Container Registry. Note this will take a while as it publishes the full image, which in our case is 317MB. If you update it in future only the differential will be re-published.

docker tag siliconvalve.gab2017.demowebcore:1.0
docker tag siliconvalve.gab2017.demowebcore:1.1
docker push
The push refers to a repository []
f190fc6d75e4: Pushed
e64be9dd3979: Pushed
c300a19b03ee: Pushed
33f8c8c50fa7: Pushed
d856c3941aa0: Pushed
57b859e6bf6a: Pushed
d17d48b2382a: Pushed
latest: digest: sha256:14cf48fbab5949b7ad1bf0b6945201ea10ca9eb2a26b06f37 size: 1787

Repeat the ‘docker push’ for your 1.1 image too.

At this point we could now push this image to any compatible Docker host and it would run.
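For completeness, with a hypothetical registry named `myregistry` the full tag-and-push sequence looks like this (an ACR login server is always `<registry>.azurecr.io`; `run` echoes each command unless DRY_RUN=0):

```shell
run() { echo "+ $*"; if [ "${DRY_RUN:-1}" = "0" ]; then "$@"; fi; }

REGISTRY=myregistry.azurecr.io
IMAGE=siliconvalve.gab2017.demowebcore

run docker tag "$IMAGE:1.0" "$REGISTRY/$IMAGE:1.0"
run docker tag "$IMAGE:1.1" "$REGISTRY/$IMAGE:1.1"
run docker push "$REGISTRY/$IMAGE:1.0"
run docker push "$REGISTRY/$IMAGE:1.1"
```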

Deploying to Azure Container Service

Now comes the really fun bit :).

If you inspect the GAB-Demo2 project you will find a Kubernetes deployment file, the contents of which are displayed below.

If you update the Azure Container Registry path and insert an appropriate admin secret you now have a file that will deploy your docker image to a Kubernetes-managed Azure Container Service.
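As a guide, a minimal deployment file of the kind described would look something like this sketch (registry path, secret name and label values are illustrative, and the Deployment API group shown is the one current at the time):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: gabdemo
spec:
  type: LoadBalancer       # asks Azure for a public load balancer
  ports:
    - port: 80
  selector:
    app: gabdemo
---
apiVersion: extensions/v1beta1   # Deployment API group circa Kubernetes 1.5/1.6
kind: Deployment
metadata:
  name: gabdemo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: gabdemo
    spec:
      containers:
        - name: gabdemo
          image: myregistry.azurecr.io/siliconvalve.gab2017.demowebcore:1.0
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: myregistry-secret   # holds the ACR admin credentials
```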

At your command line run:

kubectl create -f PATH_TO_FILE\gab-api-demo-deployment.yml
service "gabdemo" created
deployment "gabdemo" created

After a few minutes if you look in the Azure Portal you will find that a public load balancer has been created by Kubernetes which will allow you to hit the API definition at http://IP_OF_LOADBALANCER/swagger

You can also find this IP address by using the Kubernetes management UI, which we can get to using a secured tunnel to the Kubernetes management host (this step only works if you set up the ACS environment fully and downloaded credentials).

At your command prompt type:

az acs kubernetes browse --name=YOUR_CLUSTER_NAME --resource-group=YOUR_RESOURCE_GROUP
Proxy running on
Press CTRL+C to close the tunnel...
Starting to serve on

A browser will pop open on the Kubernetes management portal and you can then open the Services view and see your published endpoints by editing the ‘gabdemo’ Service and viewing the public IP address at the very bottom of the dialog.

If you hit the Swagger URL for this you might get a 500 server error – this is easy to fix (and is a bug in my code that I need to fix!) – simply change the URL in the Swagger page to include “v1.0” instead of “v1”.

Kubernetes Service Details

Upgrade the Container image

For extra bonus points you can also upgrade the running container image by telling Kubernetes to modify the Service. You can do this either via the command line using kubectl, or you can edit the Service definition via the management web UI (shown below) and Kubernetes will roll out the new container image. If you hit the swagger page for the service you will see the API version has incremented to 1.1!

Edit Service

You could also choose to roll back if you wanted – simply update the tag back to 1.0 and watch the API roll back.
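The kubectl one-liner for that image upgrade looks like the following (the registry path and container name are assumptions); rolling back is the same command with the 1.0 tag:

```shell
run() { echo "+ $*"; if [ "${DRY_RUN:-1}" = "0" ]; then "$@"; fi; }

# Point the 'gabdemo' container at the 1.1 image, then watch the rollout.
run kubectl set image deployment/gabdemo \
    gabdemo=myregistry.azurecr.io/siliconvalve.gab2017.demowebcore:1.1
run kubectl rollout status deployment/gabdemo
```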

So there we have it – modernising existing .Net Windows-hosted apps so they run on Docker Containers on Azure, managed by Kubernetes!


When to use Azure Load Balancer or Application Gateway

One thing Microsoft Azure is very good at is giving you choices – choices on how you host your workloads and how you let people connect to those workloads.

In this post I am going to take a quick look at two technologies that are related to high availability of solutions: Azure Load Balancer and Application Gateway.

Application Gateway is a bit of a dark horse for many people. Anyone coming to Azure has to get to grips with Azure Load Balancer (ALB) as a means to provide highly available solutions, but many don’t progress beyond this to look at Application Gateway (AG) – hopefully this post will change that!

The OSI model

A starting point to understand one of the key differences between the ALB and AG offerings is the OSI model which describes the logical layers required to implement computer networking. Wikipedia has a good introduction, which I don’t intend to reproduce here, but I’ll cover the important bits below.

TCP/IP and UDP are bundled together at Layer 4 in the OSI model. These underpin a range of higher level protocols such as HTTP which sit at the top of the stack at Layer 7. You will often see load balancers defined as Layer 4 or Layer 7, and, as you can guess, the traffic they can manage varies as a result.

Load balancing HTTP using a Layer 4 capable device works, though using Cookie-based session affinity is out, while a pure Layer 7 solution has no chance to handle TCP or UDP traffic which sit below at Layer 4.

Clear as mud? Great!

You might ask what does it mean to us in this post?

I’m glad you asked 🙂 The first real difference between the Azure Load Balancer and Application Gateway is that an ALB operates on traffic at Layer 4, while Application Gateway handles only Layer 7 traffic – specifically, within that, HTTP (including HTTPS and WebSockets).

Anything else to know?

There are a few other differences it is worth calling out. I will summarise as follows:

  • Load Balancer is free (unless you wish to use multiple Virtual IPs). Application Gateway is billed per-hour, and has two tiers, depending on features you need (with/without WAF)
  • Application Gateway supports SSL termination, URL-based routing, multi-site routing, Cookie-based session affinity and Web Application Firewall (WAF) features. Azure Load Balancer provides basic load balancing based on 2- or 5-tuple matches (a 5-tuple being source IP, source port, destination IP, destination port and protocol).
  • Load Balancer only supports endpoints hosted in Azure. Application Gateway can support any routable IP address.

It’s also worth pointing out that when you provision an Application Gateway you also get a transparent Load Balancer along for the ride. The Load Balancer is responsible for balancing traffic between the Application Gateway instances to ensure it remains highly available 🙂

Using either with VM Scale Sets (VMSS)

The default setup for a VMSS includes a Load Balancer. If you want to use an Application Gateway instead you can either leave the Load Balancer in place and put the Application Gateway in front of it, or you can use an ARM template similar to the following sample to swap out the Load Balancer for an Application Gateway instance instead.

The key item to ensure is that the Network Interface Card (NIC) is configured for each VM in the Scale Set to be a part of the Backend Address Pool of the Application Gateway.
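The relevant fragment of the VMSS resource – wiring each NIC’s IP configuration into the Application Gateway’s backend pool – would look roughly like this sketch (gateway, pool and variable names are illustrative):

```json
"networkProfile": {
  "networkInterfaceConfigurations": [{
    "name": "nic",
    "properties": {
      "primary": true,
      "ipConfigurations": [{
        "name": "ipconfig",
        "properties": {
          "subnet": { "id": "[variables('subnetId')]" },
          "applicationGatewayBackendAddressPools": [{
            "id": "[concat(resourceId('Microsoft.Network/applicationGateways', 'myAppGw'), '/backendAddressPools/backendPool')]"
          }]
        }
      }]
    }
  }]
}
```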

So there we have a quick overview of the Azure Load Balancer and Application Gateway offerings and when to consider one over the other.

Either the Azure Load Balancer overview or the Application Gateway introduction provides a good breakdown of where you should consider using one or the other.

Happy (highly available) days! 🙂


Secure your VSTS Release Management Azure VM deployments with NSGs and PowerShell

One of the neat features of VSTS’ Release Management capability is the ability to deploy to Virtual Machines hosted in Azure (amongst other environments), which I previously walked through setting up.

One thing that you need to configure when you use this deployment approach is an open TCP port to the Virtual Machines to allow remote access to PowerShell and WinRM on the target machines from VSTS.

In Azure this means we need to define a Network Security Group (NSG) inbound rule to allow the traffic (sample shown below). As we are unable to limit the source address (i.e. where VSTS Release Management will call from) we are stuck creating a rule with a Source of “Any” which is less than ideal, even with the connection being TLS-secured. This would probably give security teams a few palpitations when they look at it too!

Network Security Group

We might be able to determine a source address based on monitoring traffic, but there is no guarantee that the Release Management host won’t change at some point which would mean our rule blocks that traffic and our deployment breaks.

So how do we fix this in an automated way with VSTS Release Management and provide a secured environment?

Let’s take a look.

The Fix

The fix, it turns out, is actually quite straightforward.

As the first step you should go to the existing NSG and flip the inbound rule from “Allow” to “Deny”. This will stop the great unwashed masses from being able to hit TCP port 5986 on your Virtual Machines immediately.

As a side note… if you think nobody is looking for your VMs and open ports, try putting a VM up in Azure and leaving RDP (3389) open to “Any” and see how long it takes before you start seeing authentication failures in your Security event log due to account enumeration attempts.

Modify Project Being Deployed

We’re going to leverage an existing Release Management capability to solve this issue, but first we need to provide a custom PowerShell script that we can use to manipulate the NSG that contains the rule we are currently using to block inbound traffic.

This PowerShell script is just a simple wrapper that combines Azure PowerShell Cmdlets to allow us to a) read the NSG b) update the rule we need c) update the NSG, which commits the change back to Azure.
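The script itself was embedded as a Gist, but given the arguments shown later (`-resourceGroupName`, `-networkSecurityGroupName`, `-securityRuleName`, `-allowOrDeny`, `-priority`) it plausibly reads something like this sketch using the AzureRM cmdlets of the day:

```powershell
param(
    [string]$resourceGroupName,
    [string]$networkSecurityGroupName,
    [string]$securityRuleName,
    [ValidateSet("Allow","Deny")][string]$allowOrDeny,
    [int]$priority
)

# a) read the NSG
$nsg = Get-AzureRmNetworkSecurityGroup -Name $networkSecurityGroupName `
        -ResourceGroupName $resourceGroupName

# b) update the rule we need (TCP 5986 = WinRM over HTTPS)
Set-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg `
    -Name $securityRuleName -Access $allowOrDeny -Priority $priority `
    -Direction Inbound -Protocol Tcp `
    -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 5986

# c) update the NSG, committing the change back to Azure
Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg
```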

I usually include this script in a Folder called “Deploy” in my project and set the build action to “Copy always”. As a result the file will be copied to the Artefacts folder at build time which means we have access to it in Release Management.

Project Setup

You should run a build with this included file so that it is available in your build artefacts.

Modify Release Management Definition

Note that in order to complete this step you must have a connection between VSTS and your target Azure Subscription already configured as a Service Endpoint. Typically this needs to be done by a user with sufficient rights in both VSTS and the Azure Subscription.

Now we are going to modify our existing Release Management definition to make use of this new script.

The way we are going to enable this is by using the existing Azure PowerShell Task that we have available in both Build and Release Management environments in VSTS.

I’ve shown a sample where I’ve added this Task to an existing Release Management definition.

Release Management Definition

There is a reason this Task is added twice – once to change the NSG rule to be “Allow” and then once, at the end, to switch it back to “Deny”. Ideally we want to do the “Allow” early in the process flow to allow time for the NSG to be updated prior to our RM deployment attempting to access the machine(s) remotely.

The Open NSG Task is configured as shown.

Allow Script

The Script Arguments should match those given in the sample script above. As a sample we might have:

-resourceGroupName MyTestResourceGroup -networkSecurityGroupName vnet01-nsg 
-securityRuleName custom-vsts-deployments -allowOrDeny Allow -priority 3010

The beauty of our script is that the Close NSG Task is effectively the same, but instead of “Allow” we put “Deny” which will switch the rule to blocking traffic!

Make sure you set the “Close” Task to “Always run”. This way if any other component in the Definition fails we will at least close up the NSG again.

Additionally, if you have a Resource Group Lock in place (and you should for all production workloads) this approach will still work because we are only modifying an existing rule, rather than trying to add / remove it each time.

That’s it!

You can now benefit from VSTS remote deployments while at the same time keeping your environment locked down.

Happy days 🙂


Inviting Microsoft Account users to your Azure AD-secured VSTS tenant

I’ve done a lot of external invite management for VSTS over the last few years, and generally without fail we’ll have issues getting everyone on-boarded easily. This blog post is a reference for me (and I guess you too) to understand the invite process and document the experience the invited user has.

There are two sections to this blog post:

1. Admin instructions to invite users.

2. Invited user instructions.

Select whichever one applies to you.

The starting point for this post is that the external user hasn’t yet been invited to your Azure AD tenant. The user doing the inviting is not an Azure AD Global Admin, but does have rights in an Azure tenant.

The Invite to Azure AD

These steps assume your Azure AD user has the “Guest Inviter” role and that your Azure AD administrators have enabled guest invites for your Directory.

The Short Way

Log into an Azure subscription using your Azure AD account and then browse to the Directory that is tied to your VSTS subscription. At the top of the screen click on the “New guest user” link and enter the email address of the user you are inviting.


The Long Way

Log into an Azure subscription using your Azure AD account and select Subscriptions. Ideally this shouldn’t be a production tenant!

Select Subscription

I am going to start by inviting this user to my Azure tenant as a Reader-level user which means they will receive an Azure AD invite. I will later revoke this access once they have accepted my invite.

Click “Add” on the IAM blade for the Subscription.

Select Add

Ensure you set the role to “Reader” which provides no ability to execute changes.

Set Role

Now enter the user’s email address. Note you can add multiple email addresses if you want. Click the “Save” button to apply the change.

Enter Email

Once I click “Save” the portal will say it is inviting the user. A short while later the invitee will receive an invite email in their inbox. See later in the blog post for their experience.

Add Invited User to VSTS

Now the invited user is in your Azure AD tenant they will show up in the User Search Dialog in VSTS. You must be a VSTS Admin to manage users.

Log into your VSTS tenant and navigate to Users and then search for the newly added user and assign them the license you want them to use.

VSTS invite

Click “Send Invitation”, which will be enabled once you select the invitee’s account from the drop-down. Note that, despite the button’s label, VSTS won’t actually send this user an invite email.

At this stage the user now has access to your VSTS tenant, but not any projects it contains – make sure you add them to some!

Let’s take a look and see what the invited user sees.

Invited User Experience

If I log in to the invited user’s mailbox I will see an Azure AD invite awaiting.

The invited user should click the “Get Started” button to accept the invite. Unless they complete this process they won’t have access to VSTS.

Invite email

This will open the inviting Azure AD tenant’s invite redemption page in a web browser, branded with any custom branding the tenant has.

The user must click ‘Next’ on this screen to accept the invite.

Invite web experience

It will take a few moments to set up the Microsoft Account in the Azure AD tenant.

Adding user to tenant

Once done the user will end up at the default “My Apps” screen, but will see nothing at this point as they have not been granted access to anything.

Empty My Apps screen

Invited User Accesses VSTS

The invited user can now navigate to your VSTS tenant in a browser –

If they aren’t already logged into their Microsoft Account they will be prompted to login and then directed to VSTS.

As this is their first time logging in they will be asked to enter some information, which will be auto-populated but remains editable.

VSTS Invite

They then get dropped to the home page for VSTS and are ready to work. If you didn’t add them to any existing projects and haven’t granted them additional privileges they might see the screen below.

VSTS Invite

Make sure they bookmark your VSTS tenant and that they use their invited Microsoft Account each time they want to access it.

Login Experience for User

If the user logs out, or their session times out, they will first be directed to your Azure AD tenant login page, as this is what VSTS is configured to use when you attach an Azure AD tenant to it.


The invited user should enter their Microsoft Account into the email address box and when the username box loses focus they will be redirected to the Microsoft Account login screen.


This step quite often catches people out as they aren’t expecting the redirect, particularly if they haven’t used Office 365 or similar systems.


At the Microsoft Account login page (shown below) they enter their password and they will be directed back to VSTS.

MSA login page

Don’t forget!

If you’re the inviting Admin you can now remove the invited user as a reader from your Azure tenant.

If you want extra security, get the Microsoft Account users to turn on two-step verification, which will require them to enter a code to login.

Happy coding!

Post credit-roll Admin bonus!

If you find out that some of the users you invited didn’t have a mailbox attached to their Microsoft Account, and therefore didn’t get the original invite, you can resend it. Log into your Azure tenant, open Azure Active Directory and then find the invited user.

Open their profile and click on the ‘Resend invitation’ button – it is greyed out but will work just fine :).

Re-invite a user

Tagged , ,

Per-environment config value tokenization for Azure Web Apps using VSTS Release Management

For the majority of the last ten years I’ve been working on delivery of solutions where build and deployment come from a centralised location.

When Microsoft made InRelease part of TFS as Release Management, I couldn’t wait to use it. Unfortunately, in its state at that time, the learning curve was quite steep and the immediate value was outweighed by the effort to get up and running.

Roll forward to 2016 and we find Release Management as a modern, web-based feature of Visual Studio Team Services (VSTS). The cherry on the cake is that a lot of the learning curve has dropped away as a result.

In this post I’m going to look at how we can deploy a Web Deploy (or MS Deploy) packaged Web Application to an Azure Web Application and define different deployment environments with varying configurations.

Many people would apply configuration transformations at build time, but in my scenario I want to deploy the same compiled package to multiple environments without the need to recompile anything.

My Challenge

The build definition for my Web Application produces a package that can be deployed to an Azure Web App using Web Deploy. The result is that the web.config configuration file sits inside a zip file that is transferred to the server for deployment by Web Deploy.

Clearly at this point I don’t have access to the web.config file in the drop folder so I can’t transform it with Release Management. Or can I?!

Using Web Deploy Parameters

Thankfully the design of Web Deploy provides for the scenario I described above through use of either command-line arguments or a specially formatted input file that I will call the “SetParameters” file.
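To give a sense of the shape of that file, here is an illustrative SetParameters file; the parameter names are hypothetical and must match whatever you declare in your project’s Parameters.xml:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Illustrative SetParameters file: the parameter names are examples
     and must match the entries declared in your Parameters.xml. -->
<parameters>
  <setParameter name="IIS Web Application Name" value="MyWebApp" />
  <setParameter name="DocumentDbEndpoint" value="__DocumentDbEndpoint__" />
  <setParameter name="DocumentDbKey" value="__DocumentDbKey__" />
</parameters>
```

The double-underscore values are exactly the sort of tokens that get swapped out per environment at release time.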

Given this is a first-class feature in the broader Microsoft developer toolkit, I’d expected that there would be a few Tasks in VSTS that I could use to get all of this up and running… I got close, but couldn’t quite get it functioning as I wanted.

Through the rest of this post I will walk you through the setup to get this going.

Note: I am going to assume you have setup Build and Release Management definitions in VSTS already. Your Build should produce a package that deploys to an Azure Web App, and your Release Management definition should deploy it.

VSTS Release Management Setup

The first thing to get all of this up and running is to add the Release Management Utilities extension to your subscription. This extension includes the Tokenizer Task which will be key to getting the configuration per-environment up and running.
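Conceptually, the Tokenizer Task does something simple: it scans a target file for __Name__ placeholders and substitutes the values of matching Release Management variables. The sketch below is my own minimal Python illustration of that idea, not the task’s actual implementation:

```python
import re

def tokenize(text, variables):
    """Replace __Name__ placeholders with values from `variables`.

    Placeholders with no matching variable are left untouched, which
    mirrors how an unmatched token would surface as a deployment bug.
    """
    def substitute(match):
        name = match.group(1)
        return variables.get(name, match.group(0))
    return re.sub(r"__(\w+?)__", substitute, text)

params = '<setParameter name="DocumentDbKey" value="__DocumentDbKey__" />'
print(tokenize(params, {"DocumentDbKey": "s3cret"}))
# -> <setParameter name="DocumentDbKey" value="s3cret" />
```

Run against the SetParameters file with the environment’s variable set, this turns the build-time tokens into real per-environment values before Web Deploy applies them.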

You also need to define an “Environment” in Release Management for each deployment target you have, which will also be used as a container for environmental configuration items to replace at deployment time. A sample is shown below with two Environments defined.


We’ll come back to VSTS later, for now, let’s look at the project changes you need to make.

Source Project Changes

For the purpose of this exercise I’m just worrying about web.config changes.

First of all, you need to tokenise the settings you wish to transform. I have provided a sample below that shows how this looks in a web.config. The format of two underscores on either side of your token placeholder is required.
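As an illustration, a tokenised appSettings section in web.config might look like this (the setting names here are hypothetical, chosen to match the Document DB example later in this post):

```xml
<!-- Illustrative web.config fragment: the double-underscore values
     are the tokens that will be replaced at deployment time. -->
<appSettings>
  <add key="DocumentDbEndpoint" value="__DocumentDbEndpoint__" />
  <add key="DocumentDbKey" value="__DocumentDbKey__" />
</appSettings>
```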

The next item we need to do is to add a new XML file to our Visual Studio project at the root level. This file should be called “Parameters.xml” and I have included a sample below that shows what we need to add to it if we want to ensure we replace the tokens in the above sample web.config.

You’ll notice one additional item in the file below that isn’t related directly to the web.config above – the IIS Website name that will be used when deployed. I found if I didn’t include this the deployment would fail.
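As an illustration, a Parameters.xml covering a couple of tokenised appSettings entries plus the IIS Website name might look like the following; the parameter names, scopes and XPath expressions are examples, not the exact ones from my project:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Illustrative Parameters.xml: declares which values Web Deploy can
     set at deploy time, plus the IIS Website name entry noted above.
     The scope/match values below are examples and vary by project. -->
<parameters>
  <parameter name="IIS Web Application Name" defaultValue="MyWebApp" tags="IisApp">
    <parameterEntry kind="ProviderPath" scope="IisApp" match=".*" />
  </parameter>
  <parameter name="DocumentDbEndpoint" defaultValue="__DocumentDbEndpoint__">
    <parameterEntry kind="XmlFile" scope="\\web\.config$"
                    match="/configuration/appSettings/add[@key='DocumentDbEndpoint']/@value" />
  </parameter>
  <parameter name="DocumentDbKey" defaultValue="__DocumentDbKey__">
    <parameterEntry kind="XmlFile" scope="\\web\.config$"
                    match="/configuration/appSettings/add[@key='DocumentDbKey']/@value" />
  </parameter>
</parameters>
```

Each `parameterEntry` tells Web Deploy where to write the supplied value: `scope` is a regex selecting the target file and `match` is an XPath into it.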

When you add this file, make sure to set the properties for it to a Build Action of “None” and Copy to Output Directory of “Do not copy”.

Note: if you haven’t already done so, you should run a Build so that you have Build Artifacts ready to select in a later step.

Add the Tokenizer to your Release Management Definition

We now need to return to VSTS’ web interface and modify our existing Release Management definition (or create a new one) to add the Tokenizer utility to the process.

You will need to repeat this so all your environments have the same setup. I’ve shown what my Test environment setup looks like below (note that I changed the default description of the Tokenizer Task).

Release Management Definition

Configuration of the Tokenizer is pretty straightforward at this point, especially if we’ve already run a build. Simply select the SetParameters.xml file your build already produced.

Tokenizer setting

Define values to replace Tokens

This is where we define the values that will be used to replace the tokens at deployment time.

Click on the three dots at the top right of the environment definition and from the menu select “Configuration variables…” as shown below.

Variable Definition

A dialog loads that allows us to define the values that will go into our web.config for this environment. The great thing you’ll note is that you can obfuscate sensitive details (in my example, the key to access the Document DB account). This is non-reversible too – you can’t “unhide” the value and see the plain-text version.

Token Values

We’re almost done!

Explicitly select SetParameters file for deployment

I’m using the 3.* (preview) version of the Deploy Azure App Service Release Management Task, which I have configured as shown.

App Service Task

At this point, if you create a new Release and deploy to the configured environment you will find that the deployed web.config contains the values you specified in VSTS and you will no longer need multiple builds to send the same package to multiple environments.

Happy Days! 🙂

Tagged , , , , , ,