Author Archives: Simon

Continuous Deployment for Docker with VSTS and Azure Container Registry

I’ve been watching with interest the growing maturity of Containers, and in particular their increasing penetration as a hosting and deployment artefact in Azure. While I’ve long believed them to be the next logical step for many developers, until recently they have had limited appeal to everyday developers because the tooling hasn’t been there, particularly in the Microsoft ecosystem.

With the release of Visual Studio 2015, and with the support of Docker for Windows, I started to see this stack as viable for many developers.

In my current engagement we are starting on new features and decided that we’d look to ASP.Net Core 2.0 to deliver our REST services and host them in Docker containers running in Azure’s Web App for Containers offering. We’re heavy users of Visual Studio Team Services and, given Microsoft’s focus on Docker, we didn’t see that there would be any blockers.

Our flow at high level is shown below.

Build Pipeline

1. Developer with Visual Studio 2017 and Docker for Windows for local dev/test
2. Checked into VSTS and built using VSTS Build
3. Container Image stored in Azure Container Registry (ACR)
4. Continuously deployed to Web Apps for Containers.

We hit a few sharp edges along the way, so I thought I’d cover off how we worked around them.

Pre-requisites

There are a few things you need to have in place before you can start to use the process covered in this blog. Rather than reproduce them here in detail, go and have a read of the following items and come back when you’re done.

  • Setting up a Service Principal to allow your VSTS environment to have access to your Azure Subscription(s), as documented by Donovan Brown.
  • Create an Azure Container Registry (ACR), from the official Azure Documentation. Hint here: don’t use the “Classic” option as it does not support Webhooks which are required for Continuous Deployment from ACR.

See you back here soon 🙂

Setting up your Visual Studio project

Before I dive into this, one cool item to note is that you can add Docker support to existing Visual Studio projects, so if you’re interested in trying this out you can take a look at how to add support to your current solution (note that it doesn’t magically support all project types… so if you’ve got that cool XAML or WinForms project, you’re out of luck for now).

Let’s get started!

In Visual Studio do a File > New > Project. As mentioned above, we’re building an ASP.Net Core REST API, so I went ahead and selected .Net Core and ASP.Net Core Web Application.

New Project - .Net Core

Once you’ve done this you get a selection of templates you can choose from – we selected Web API and ensured that we left Docker support on, and that it was on Linux (just saying that almost makes my head explode with how cool it is 😉 )

Web API with Docker support

At this stage we now have a baseline REST API with Docker support already available. You can run and debug locally via IIS Express or via Docker – give it a try :).

If you’ve not used this template before you might notice that there is an additional project in the solution that contains a series of Docker-related YAML files – for our purposes we aren’t going to touch these, but we do need to modify a couple of files included in our ASP.Net Core solution.

If we try to run a Docker build on VSTS using the supplied Dockerfile it will fail with an error similar to:

COPY failed: stat /var/lib/docker/tmp/docker-builder613328056/obj/Docker/publish: no such file or directory
/usr/bin/docker failed with return code: 1

Let’s fix this.

Add a new file to the project and name it “Dockerfile.CI” (or something similar) – it will appear as a sub-item of the existing Dockerfile. In this new file add the following, ensuring you update the ENTRYPOINT to point at your DLL.

This Dockerfile is based on a sample from Docker’s official documentation and uses a Docker Container to run the build, before copying the results to the actual final Docker Image that contains your app code and the .Net Core runtime.
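As a rough sketch (not the exact file from my solution), a multi-stage CI Dockerfile for .Net Core 2.0 looks something like the below – the project name WebApplication1 is an assumption, so substitute your own DLL name:

```dockerfile
# Build stage: restore and publish inside a container with the full SDK
FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /app
COPY . ./
RUN dotnet restore
RUN dotnet publish -c Release -o out

# Final stage: copy the published output onto the slim runtime image
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "WebApplication1.dll"]
```

Because the restore, build and publish all happen inside the build-stage container, the VSTS build agent only needs Docker – no .Net SDK required.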

We have one more change to make. If we do just the above, the project will fail to build because the default .dockerignore file blocks the copying of pretty much all files into the Container we are using for the build. Let’s fix this one by updating the file to contain the following 🙂
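As an example of the kind of change needed – assuming you only want to keep local build output out of the build context – the updated .dockerignore might be as small as:

```
bin/
obj/
```

This lets the source files flow into the build container while still excluding locally built binaries.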

Now we have the necessary bits to get this up and running in VSTS.

VSTS build

This stage is pretty easy to get up and running now we have the updated files in our solution.

In VSTS create a new Build and select the Container template (right now it’s in preview, but works well).

Docker Build 01

On the next screen, select the “Hosted Linux” build agent (also now in preview, but works a treat). You need to select this so that you build a Linux-based Image, otherwise you will get a Windows Container which may limit your deployment options.

build container 02

We then need to update the Build Tasks to have the right details for the target ACR and to build the solution using the “Dockerfile.CI” file we created earlier, rather than the default Dockerfile. I also set a fixed Image Name, primarily because the default selected by VSTS tends to be invalid. You could also consider changing the tag from $(Build.BuildId) to $(Build.BuildNumber), which is much easier to directly track in VSTS.

build container 03

Finally, update the Publish Image Task with the same ACR and Image naming scheme.

Running your build should generate an image that is registered in the target ACR as shown below.

ACR

Deploy to Web Apps for Containers

Once the Container Image is registered in ACR, you can theoretically deploy it to any container host (Azure Container Instances, Web Apps for Containers, Azure Container Services), but for this blog we’ll look at Web Apps for Containers.

When you create your new Web App for Containers instance, ensure you select Azure Container Registry as the source and that you select the correct Repository. If you have added the ‘latest’ tag to your built Images you can select that at setup, and later enable Continuous Deployment.

webappscontainers

The result will be that your custom Image is deployed into your Web Apps for Containers instance, where it will be available on ports 80 and 443 for the world to use.

Happy days!

I’ve uploaded the sample project I used for this blog to GitHub – you can find it at: https://github.com/sjwaight/docker-dotnetcore-vsts-demo

Also, please feel free to leave any comments you have, and I am certainly interested in other ways to achieve this outcome as we considered Docker Compose with the YAML files but ran into issues at build time.


Microsoft Open Source Roadshow – Free training on Azure – Canberra and Sydney

Microsoft Open Source Roadshow

I’m excited to have the opportunity to share Azure’s powerful Open Source support with more developers in November.

Our first run of these sessions in August proved popular, so if you, or someone you know, wants to learn more they can sign up below.

We’ll cover language support (application and Azure SDK), OS support (Linux, BSD), Datastores (MySQL, PostgreSQL, MongoDB), Continuous Deployment and, last but not least, Containers (Docker, Container Registry, Kubernetes, et al).

We’re starting off in Canberra this time, then Sydney, so if you’re interested here are the links to register:

  • Canberra – Wednesday 1 November 2017: Register
  • Sydney – Friday 10 November 2017: Register

We’re also running two days in New Zealand in November if you know anyone who might want to come along.

If you have any questions you’d like to see answered at the days feel free to leave a comment.

I hope to see you there!


Microsoft Open Source Roadshow – Free training on Azure – Auckland and Wellington!

Microsoft Open Source Roadshow

Hello New Zealand friends!

I’m really happy to share that we are bringing the Open Source Roadshows to Auckland and Wellington in November 2017!

We’ll cover language support (application and Azure SDK), OS support (Linux, BSD), Datastores (MySQL, PostgreSQL, MongoDB, SQL Server on Linux), Continuous Deployment and, last but not least, Containers (Docker, Container Registry, Kubernetes, et al).

If you’re interested in learning more here are the links to register:

  • Auckland – Tuesday 7 November 2017: Register
  • Wellington – Wednesday 8 November 2017: Register

If you have any questions you’d like to see answered at the days feel free to leave a comment.

I hope to see you there!


Speaking: Azure Functions at MUG Strasbourg – 28 September

I’m really excited about this opportunity to share the power of Azure with the developer and IT Pro community in France that is soon to gain local Azure Regions in which to build their solutions.

If you live in the surrounding areas I’d love to see you there. More details available via Meetup.


What happened with my Azure VM Scale Set Instance IDs?

In my last couple of posts (here and here) I've covered moving from Azure VMs to running workloads on VM Scale Sets (VMSS).

One item you need to adjust to is the instance naming scheme that is used by VMSS.

I was looking at the two VM Scale Sets I have set up in my environment and was surprised to see the following.

The first VM Scale Set I set up has Instances with these IDs (0 and 1):

VMSS 01

and my second has Instances with IDs of 2 and 3:

VMSS 02

I thought it was a bit weird that two unrelated VM Scale Sets seemed to be sharing a common Instance ID source – maybe something to do with the underlying provisioning engine continuing Instance IDs across VM Scale Sets on a VNet, or something similar?

As it turns out, this is entirely coincidental.

When you provision a VMSS with the "overprovision" flag set to "true", the provisioning engine will build more Instances than required (you can see this if you watch in the portal while the VMSS is being provisioned) and then delete the excess above what is actually required. This is by design and is described by Microsoft on their VMSS design considerations page.

Here's a snippet of what your ARM template will look like:

"properties": {
  "overprovision": "true",
  "singlePlacementGroup": "false",
  "upgradePolicy": {
    "mode": "Manual"
  }
}

So, for my scenario, it just happens that for the first VMSS the engine deleted the Instances above Instance 1, and for the second VMSS it deleted Instances 0 and 1!

Too Easy! 🙂


Moving from Azure VMs to Azure VM Scale Sets – Runtime Instance Configuration

In my previous post I covered how you can move from deploying a solution to pre-provisioned Virtual Machines (VMs) in Azure to a process that allows you to create a custom VM Image that you deploy into VM Scale Sets (VMSS) in Azure.

As I alluded to in that post, one item we will need to take care of in order to truly move to a VMSS approach using a VM image is to remove any local static configuration data we might bake into our solution.

There are a range of options you can move to when going down this path, from solutions you custom build to running services such as Hashicorp’s Consul.

The environment I’m running in is fairly simple, so I decided to focus on a simple custom build. The remainder of this post is covering the approach I’ve used to build a solution that works for me, and perhaps might inspire you.

I am using an ASP.Net Web API as my example, but I am also using a similar pattern for Windows Services running on other VMSS instances – just the location your startup code goes will be different.

The Starting Point

Back in February I blogged about how I was managing configuration of a Web API I was deploying using VSTS Release Management. In that post I covered how you can use the excellent Tokenization Task to create a Web Deploy Parameters file that can be used to replace placeholders on deployment in the web.config of an application.

My sample web.config is shown below.

The problem with this approach when we shift to VM Images is that these values are baked into the VM Image which is the build output, which in turn can be deployed to any environment. I could work around this by building VM Images for each environment to which I deploy, but frankly that is less than ideal and breaks the idea of “one binary (immutable VM), many environments”.

The Solution

I didn’t really want to go down the route of service discovery using something like Consul, and I really only wanted to use Azure’s default networking setup. This networking requirement meant I had no custom private DNS that I could use for some form of configuration service discovery based on hostname lookup.

…and…. to be honest, with the PaaS services I have in Azure, I can build my own solution pretty easily.

The solution I landed on looks similar to the below.

  • Store runtime configuration in Cosmos DB and geo-replicate this information so it is highly available. Each VMSS setup gets its own configuration document which is identified by a key-service pair as the document ID.
  • Leverage a read-only Access Key for Cosmos DB because we won’t ever ask clients to update their own config!
  • Use Azure Key Vault to store the Cosmos DB Account and Access Key that can be used to read the actual configuration. Key Vault is Regionally available by default so we’re good there too.
  • Configure an Azure AD Service Principal with access to Key Vault to allow our solution to connect to Key Vault.

I used a conventions-based approach to configuration, so that the whole process works based on the VMSS instance name and the service type requesting configuration. You can see this in the code below based on the URL being used to access Key Vault and the Cosmos DB document ID that uses the same approach.

The resulting changes to my Web API code (based on the earlier web.config sample) are shown below. This all occurs at application startup time.

I have also defined a default Application Insights account into which any instance can log should it have problems (which includes not being able to read its expected Application Insights key). This is important as it allows us to troubleshoot issues without needing to get access to the VMSS instances.

Here’s how we authorise our calls to Key Vault to retrieve our initial configuration Secrets (called on line 51 of the above sample code).

My goal was to make configuration easily manageable across multiple VMSS instances which requires some knowledge around how VMSS instance names are created.

The basic details are that they consist of a hostname prefix (based on what you input at VMSS creation time) that is appended with a base-36 (hexatrigesimal) value representing the actual instance. There’s a great blog from Guy Bowerman from Microsoft that covers this in detail so I won’t reproduce it here.

The final piece of the puzzle is the Cosmos DB configuration entry which I show below.
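As a hedged sketch only – every field name below except ‘id’ is invented for illustration – a configuration document of this kind might look like:

```json
{
  "id": "webtier-webapi",
  "ApplicationInsightsKey": "00000000-0000-0000-0000-000000000000",
  "DatabaseConnectionString": "Server=placeholder;Database=placeholder;",
  "SomeFeatureFlag": true
}
```

The document ID follows the key-service naming convention covered earlier in this post.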

The ‘id’ field maps to the VMSS instance prefix that is determined at runtime based on the name you used when creating the VMSS. We strip the trailing 6 characters to remove the unique component of each VMSS instance hostname.
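The stripping convention above can be sketched in a few lines of bash (the ‘webtier’ hostname is a made-up example):

```shell
# A VMSS instance hostname = prefix + 6-character base-36 suffix.
hostname="webtier00000a"
prefix="${hostname%??????}"    # strip the trailing 6 characters
suffix="${hostname: -6}"       # the unique per-instance component
instance_id=$((36#$suffix))    # bash evaluates base-36 directly
echo "prefix=$prefix instance=$instance_id"   # -> prefix=webtier instance=10
```

The same prefix is shared by every instance in the scale set, which is what makes it usable as a configuration lookup key.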

The outcome of the three components (code changes, Key Vault and Cosmos DB) is that I can quickly add or remove VMSS groups in configuration, change where their configuration data is stored by updating the Key Vault Secrets, and even update running VMSS instances by changing the configuration settings and then forcing a restart on the VMSS instances, causing them to re-read configuration.

Is the above the only or best way to do this? Absolutely not 🙂

I’d like to think it’s a good way that might inspire you to build something similar or better 🙂

Interestingly, in getting to this stage I’ve also realised there might be some value in moving this solution to Service Fabric in future, though I am more inclined to shift to Containers running under the control of an orchestrator like Kubernetes.

What are your thoughts?

Until the next post!


Moving from Azure VMs to Azure VM Scale Sets – VM Image Build

I have previously blogged about using Visual Studio Team Services (VSTS) to securely build and deploy solutions to Virtual Machines running in Azure.

In this, and following posts I am going to take the existing build process I have and modify it so I can make use of VM Scale Sets to host my API solution. This switch is to allow the API to scale under load.

My current setup is very much fit for purpose for the limited trial it’s been used in, but I know I’ll see at least 150 times the traffic when I am running at full scale in production, and while my trial environment barely scratches the surface in terms of consumed resources, I don’t want to have to capacity plan to the nth degree for production.

Shifting to VM Scale Sets with autoscale enabled will help me greatly in this respect!

Current State of Affairs

Let’s refresh ourselves with what is already in place.

Build

My existing build is fairly straightforward – we version the code (using a PowerShell script), restore packages, build the solution and then finally make sure all our artifacts are available for use by the Release Management process.

Existing build process

The output of this build is a Web Deploy package along with a PowerShell DSC module that configures the deployment on the target VM.

Release Management

I am using multiple Environments for Release Management to manage transformations of the Web Deploy Parameters file along with the Azure Subscription / Resource Group being deployed to. The Tasks in each Environment are the same though.

My Release Management Tasks (as shown below) open the NSG to allow DSC remote connections from VSTS, transform the Web Deploy Parameters file, find the VMs in a particular Azure Resource Group, copy the deployment package to each VM, run the DSC script to install the solution, before finally closing the NSG again to stop the unwashed masses from prying into my environment.

Existing release process

All good so far?

What’s the goal?

The goal is to make the minimum amount of changes to existing VSTS and deployment artifacts while moving to VM Scale Sets… sounds like an interesting challenge, so let’s go!

Converting the Build

The good news is that we can retain the majority of our existing Build definition.

Here are the items we do need to update.

Provisioning PowerShell

The old deployment approach leveraged PowerShell Desired State Configuration (DSC) to configure the target VM and deploy the custom code. The DSC script to achieve this is shown below.

The challenge with the above PowerShell is that it assumes the target VM has been configured to allow WinRM / DSC to run. In our updated approach of creating a VM Image this presents some challenges, so I redeveloped the script so it doesn’t require the use of DSC. The result is shown below.

As an aside, we could also drop the use of the Parameters file here too. As we’ll see in another post, we need to make the VM Image stateless, so any local web.config changes that are environment-specific are problematic and are best excluded from the resulting image.

Network Security Group Script

In the new model, which prepares a VM Image, we no longer need the Azure PowerShell script that opens / closes the Network Security Group (NSG) on deployment, so it’s removed in the new process.

No more use of Release Management

As the result of our Build is a VM Image we no longer need to leverage Release Management either, making our overall process much simpler.

The New Build

The new Build definition is shown below – you will notice the above changes have been applied, along with the addition of two new Tasks. The aspect of this I am most happy about is that our core build remains mostly unchanged – we only had to add two Tasks and change one PowerShell script to make this work.

New build process

Let’s look at the new Tasks.

Build Machine Image

This Task utilises Packer from Hashicorp to prepare a generalised Windows VM image that we can use in a VM Scale Set.

The key item to note is that you need an Azure Subscription in which a temporary VM and the final generalised VHD can be created, so that Packer can build the baseline image for you.

New Build Packer Task

You will notice we are using the existing artifacts staging directory as the source of our configuration PowerShell (DeployVmSnap.ps1), which is used by Packer to configure the host once the VM is created using an Azure Gallery Image.

The other important item here is the use of the output field. This will contain the fully qualified URL in blob storage where the resulting packed image will reside. We can use this in our next step.
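For reference, the Task is driving Packer’s azure-arm builder under the hood. A cut-down template of the kind involved looks roughly like the below – every value here is a placeholder, not what the Task actually emits:

```json
{
  "builders": [{
    "type": "azure-arm",
    "client_id": "<service principal app id>",
    "client_secret": "<service principal key>",
    "tenant_id": "<tenant id>",
    "subscription_id": "<subscription id>",
    "resource_group_name": "build-rg",
    "storage_account": "buildvhds",
    "capture_container_name": "images",
    "capture_name_prefix": "webapi",
    "os_type": "Windows",
    "image_publisher": "MicrosoftWindowsServer",
    "image_offer": "WindowsServer",
    "image_sku": "2016-Datacenter",
    "location": "Australia East",
    "vm_size": "Standard_D2_v2",
    "communicator": "winrm",
    "winrm_use_ssl": true,
    "winrm_insecure": true,
    "winrm_username": "packer"
  }],
  "provisioners": [{
    "type": "powershell",
    "script": "DeployVmSnap.ps1"
  }]
}
```

The capture settings determine the Blob Storage location of the resulting VHD, which is what surfaces in the Task’s output field.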

Create VM Image Registration

The last Task I’ve added invokes an Azure PowerShell script – which is just a PowerShell script, but with the necessary environmental configuration to allow me to execute Cmdlets that interact with Azure’s APIs.

The result of the previous Packer-based Task is a VHD sitting in a Blob Storage account. While we can use this in various scenarios, I am interested in ensuring it is visible in the Azure Portal and also in allowing it to be used in VM Scale Sets that utilise Managed Disks.

The PowerShell script is shown below.

.. and here is how it is used in the Task in VSTS..

New build VM Image

You can see how we have utilised the Packer Task’s output parameter as an input into this Task (it’s in the “Script Arguments” box at the bottom of the Task).

The Result

Once we have this configured and running the result is a nice crisp VM Image that can be used in a VM Scale Set. The below screenshot shows you how this looks in my environment – I wrapped the Azure Storage Account where the VHDs live, along with the VM Image registrations in the same Resource Group for cleaner management.

Build Output

There are still some outstanding items we need to deal with, specifically: configuration management (our VM Image has to be stateless) and VM Scale Set creation using the Image. We will cover these two items in the following posts.

For now I hope you have a good grasp on how you can convert an existing VSTS build that deploys to existing VMs to one that produces a generalised VM Image that you can use either for new VMs or in VM Scale Sets.

Until the next post.

🙂

Want to see how I dealt with instance configuration? Then have a read of my next post in this series.


Microsoft Open Source Roadshow – Free training on Open Source and Azure

Microsoft Open Source Roadshow

In early August I’ll be running a couple of free training days covering how developers who work in the Open Source space can bring their solutions to Microsoft’s Azure public cloud.

We’ll cover language support (application and Azure SDK), OS support (Linux, BSD), Datastores (MySQL, PostgreSQL, MongoDB), Continuous Deployment and, last but not least, Containers (Docker, Container Registry, Kubernetes, et al).

We’re starting off in Sydney and Melbourne, so if you’re interested here are the links to register:

  • Sydney – Monday 7 August 2017: Register
  • Melbourne – Friday 11 August 2017: Register

If you have any questions you’d like to see answered at the days feel free to leave a comment.

I hope to see you there!


Deploy a PHP site to Azure Web Apps using Dropbox

I’ve been having some good fun getting into the nitty gritty of Azure’s Open Source support and keep coming across some amazing things.

If you want to move away from those legacy hosting businesses and want a simple method to deploy static or dynamic websites, then this is worth a look.

The sample PHP site I used for this demonstration can be cloned from GitHub here: https://github.com/banago/simple-php-website

The video is without sound, but should be easy enough to follow without it.

It’s so simple even your dog could do it.

Dogue


Zero to MySQL in less than 10 minutes with Azure Database for MySQL and Azure Web Apps

I’m a long-time fan of Open Source and have spent chunks of my career knocking out LAMP solutions since before ‘LAMP’ was a thing!

Over the last few years we have seen a revived Microsoft begin to embrace (not ’embrace and extend’) the various Open Source platforms and tools that are out there and to actively contribute and participate with them.

Here’s our challenge today – set up a MySQL environment, including a web-based management UI, with zero local installation on your machine and minimal mouse clicks.

Welcome to Azure Cloud Shell

Our first step is to head on over to the Azure portal at https://portal.azure.com/ and login.

Once you are logged in open up a Cloud Shell instance by clicking on the icon at the top right of the navigation bar.

Cloud Shell

If this is the first time you’ve run it you will be prompted to create a home file share. Go ahead and do that :).

Once completed, run this command and note down the unique ID of the Subscription you’re using (or note the ID of the one you want to use!)

 
az account list

MySQL Magic

Now the fun begins! I bet you’re thinking “lots of CLI action”, and you’d be right. With a twist!

I’m going to present things using a series of simple bash scripts – you could easily combine these into one script and improve their argument handling, but I wanted to show the individual steps without writing one uber script that becomes impenetrable!

Here’s our script to setup MySQL in Azure.
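As a rough sketch of what the script does (the values are the demo arguments used further below, and the az commands are echoed as a dry run rather than executed):

```shell
# Hedged sketch of create-azure-mysql.sh - remove the echoes to run for real.
subscription="368ff49e-XXXX-XXXX-XXXX-eb42e73e2f25"
location="westus"
resource_group="mysqldemorg"
server_name="mysqldemo01"
admin_user="yawadmin"
admin_password='5ecurePass@word!'

# Target the right Subscription, create a Resource Group, then the server.
echo az account set --subscription "$subscription"
echo az group create --name "$resource_group" --location "$location"
echo az mysql server create --resource-group "$resource_group" \
     --name "$server_name" --location "$location" \
     --admin-user "$admin_user" --admin-password "$admin_password"
```

The real script in the Gist takes these same values as positional arguments.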

Now I could get you to type that out, or upload it to your cloud share via the Portal, but that’s no fun!

At your Cloud Shell prompt run the following (update the last command line with your arguments):

 
curl -O -L https://gist.githubusercontent.com/sjwaight/0ba37e3c3522aeaebf6bd20c9f3895b0/raw/a9fccb9126330983b8a84a71f07dea2b1c583c68/create-azure-mysql.sh
chmod 755 create-azure-mysql.sh

# make sure you update the parameters for your environment
./create-azure-mysql.sh 368ff49e-XXXX-XXXX-XXXX-eb42e73e2f25 westus mysqldemorg mysqldemo01 yawadmin 5ecurePass@word!

After a few minutes you will have a MySQL server ready for use. Note that by default you won’t be able to connect to it as the firewall around it is shut by default (which is a good thing). We’ll rectify connectivity later. For now, on to the next piece of the puzzle.

Manage MySQL from a browser

No, not via some super-duper Microsoft MySQL tooling, but via everyone’s old favourite phpMyAdmin.

Surely this will take a lot of work to install I hear you ask? Not at all!

Enter Azure Web App Site Extensions! Specifically the phpMyAdmin extension.

Let’s get started by creating a new App Service Plan and Web App to which we can deploy our management web application.

We’re going to re-use the trick we learned above – pulling a Gist from GitHub using curl. First have a read through the script :).

You can download this Gist from within your Cloud Shell and execute it as follows. Make sure to update the command line arguments.

 
curl -O -L https://gist.githubusercontent.com/sjwaight/d65a04036cd6ceda63bf90485e22adb4/raw/6802a35d21a9d46e0abb5e3f1c88f32551f325da/create-azure-webapp.sh
chmod 755 create-azure-webapp.sh

# make sure you update the parameters for your environment
./create-azure-webapp.sh 368ff49e-XXXX-XXXX-XXXX-eb42e73e2f25 westus mysqldemorg mysqldemo01 yawadmin 5ecurePass@word! mydemoapplan msqlmgewebapp

In order to deploy the Web App Site Extension we are going to dig a bit behind the covers of Azure App Services and utilise the REST API provided by the Kudu site associated with our Web App (this appears as ‘Advanced tools’ in the Portal). If you want to understand more about its capabilities, and specifically about how to work with Site Extensions, have a read of the excellent Kudu documentation.
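As an illustration of how simple that API is, this is roughly the call involved – the site name, extension ID and credentials here are the demo values from this post, and the command is echoed rather than executed:

```shell
# Installing a Site Extension = one authenticated PUT to the Kudu API.
site="msqlmgewebapp"
extension="phpMyAdmin"
kudu_url="https://$site.scm.azurewebsites.net/api/siteextensions/$extension"
echo curl -u 'deployuser:d3pl0yP455!' -X PUT "$kudu_url"
```

The script we download next wraps this call with the argument handling needed to target your own Web App.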

Note: if you haven’t previously setup a Git / FTP deployment user you should uncomment the line that does this. Note that this step sets the same credentials for all instances in the Subscription, so if you already have credentials defined think twice before uncommenting this line!

 
curl -O -L https://gist.githubusercontent.com/sjwaight/8a530aff0e5f3991484176825b49a563/raw/be8cfa68f95a29687f2dae6af9ca163e8e7877ac/enable-phpmyadmin.sh
chmod 755 enable-phpmyadmin.sh

# make sure you update the parameters for your environment
./enable-phpmyadmin.sh 368ff49e-XXXX-XXXX-XXXX-eb42e73e2f25 mysqldemorg msqlmgewebapp deployuser d3pl0yP455!

Done!

Browse to your freshly setup phpMyAdmin instance.

Connection Error

Oh noes!

Yes, we forgot to open up the firewall surrounding the Azure Database for MySQL instance. We can do this pretty easily.

Remember those ‘outboundIpAddresses’ values you captured when you created the Web App above? Good, this is where you will need them.

You should find that you have four source IP addresses from which outbound traffic can originate. These won’t change as long as you don’t “stop” or delete the Web App.

Here’s our simple script to enable the Web App to talk to our MySQL instance.
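As a hedged sketch of what such a script does – one allow rule per outbound IP, echoed here as a dry run (the IPs are the example values used below):

```shell
# For each Web App outbound IP, create a firewall rule that allows it
# to reach the Azure Database for MySQL server.
resource_group="mysqldemorg"
server_name="mysqldemo01"

rule_num=0
for ip in 192.168.1.1 192.168.2.2 192.168.3.3 192.168.4.4; do
  rule_num=$((rule_num + 1))
  echo az mysql server firewall-rule create \
       --resource-group "$resource_group" --server "$server_name" \
       --name "webapp-outbound-$rule_num" \
       --start-ip-address "$ip" --end-ip-address "$ip"
done
```

A single-address rule (start IP equal to end IP) keeps the opening as narrow as possible.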

Now the usual drill.

Note: You might have more than four outbound IP addresses to allow – if so simply edit the script to suit.

 
curl -O -L https://gist.githubusercontent.com/sjwaight/bbd71ca3f094abe5cf438c1b826b83fe/raw/4b6aaddee8f6a21b81061ed4c4085258c0271068/update-mysql-firewall.sh
chmod 755 update-mysql-firewall.sh

# make sure you update the parameters for your environment
./update-mysql-firewall.sh 368ff49e-XXXX-XXXX-XXXX-eb42e73e2f25 mysqldemorg mysqldemo01 192.168.1.1 192.168.2.2 192.168.3.3 192.168.4.4

Once the rules are applied, try refreshing the web browser and you will now see the glorious phpMyAdmin.

phpMyAdmin on Azure!

Congratulations, you now have a fully functional MySQL environment, with no installs and minimal configuration required!
