Category Archives: Azure

What happened with my Azure VM Scale Set Instance IDs?

In my last couple of posts (here and here) I've covered moving from Azure VMs to running workloads on VM Scale Sets (VMSS).

One item you need to adjust to is the instance naming scheme used by VMSS.

I was looking at the two VM Scale Sets I have set up in my environment and was surprised to see the following.

The first VM Scale Set I set up has Instances with these IDs (0 and 1):

VMSS 01

and my second has Instances with IDs of 2 and 3:

VMSS 02

I thought it was a bit weird that two unrelated VM Scale Sets seemed to be sharing a common Instance ID source – maybe something to do with the underlying provisioning engine continuing Instance IDs across VM Scale Sets on a VNet, or something similar?

As it turns out, this is entirely coincidental.

When you provision a VMSS and you set the "overprovision" flag to "true" the provisioning engine will build more Instances than required (you can see this if you watch in the portal while the VMSS is being provisioned) and then delete the excess above what is actually required. This is by design and is described by Microsoft on their VMSS design considerations page.

Here's a snippet of what your ARM template will look like:

"properties": {
"overprovision": "true",
"singlePlacementGroup": "false",
"upgradePolicy": {
"mode": "Manual"
}
}

So, for my scenario, it just happens that for the first VMSS the engine deleted the Instances above Instance 1, and for the second VMSS the engine deleted Instances 0 and 1!

Too Easy! 🙂


Moving from Azure VMs to Azure VM Scale Sets – Runtime Instance Configuration

In my previous post I covered how you can move from deploying a solution to pre-provisioned Virtual Machines (VMs) in Azure to a process that allows you to create a custom VM Image that you deploy into VM Scale Sets (VMSS) in Azure.

As I alluded to in that post, one item we will need to take care of in order to truly move to a VMSS approach using a VM image is to remove any local static configuration data we might bake into our solution.

There is a range of options you can choose from when going down this path, from custom-built solutions through to running services such as Hashicorp’s Consul.

The environment I’m running in is fairly simple, so I decided to focus on a simple custom build. The remainder of this post is covering the approach I’ve used to build a solution that works for me, and perhaps might inspire you.

I am using an ASP.Net Web API as my example, but I am also using a similar pattern for Windows Services running on other VMSS instances – just the location your startup code goes will be different.

The Starting Point

Back in February I blogged about how I was managing configuration of a Web API I was deploying using VSTS Release Management. In that post I covered how you can use the excellent Tokenization Task to create a Web Deploy Parameters file that can be used to replace placeholders on deployment in the web.config of an application.

My sample web.config is shown below.
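The shape of it was along these lines – setting names here are illustrative, with the double-underscore placeholders being what the Tokenization Task swaps out at deployment time:

<configuration>
  <appSettings>
    <!-- Placeholders are replaced per-environment at deploy time -->
    <add key="ServiceBusConnectionString" value="__ServiceBusConnectionString__" />
    <add key="ApplicationInsightsKey" value="__ApplicationInsightsKey__" />
  </appSettings>
</configuration>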

The problem with this approach when we shift to VM Images is that these values are baked into the VM Image that is our build output, which in turn can be deployed to any environment. I could work around this by building VM Images for each environment to which I deploy, but frankly that is less than ideal and breaks the idea of “one binary (immutable VM), many environments”.

The Solution

I didn’t really want to go down the route of service discovery using something like Consul, and I really only wanted to use Azure’s default networking setup. That networking requirement meant there was no custom private DNS I could use for hostname-based configuration service discovery.

…and… to be honest, with the PaaS services I have in Azure, I can build my own solution pretty easily.

The solution I landed on looks similar to the below.

  • Store runtime configuration in Cosmos DB and geo-replicate this information so it is highly available. Each VMSS setup gets its own configuration document which is identified by a key-service pair as the document ID.
  • Leverage a read-only Access Key for Cosmos DB because we won’t ever ask clients to update their own config!
  • Use Azure Key Vault to store the Cosmos DB Account and Access Key that can be used to read the actual configuration. Key Vault is regionally available by default so we’re good there too.
  • Configure an Azure AD Service Principal with access to Key Vault to allow our solution to connect to Key Vault.

I used a conventions-based approach to configuration, so that the whole process works based on the VMSS instance name and the service type requesting configuration. You can see this in the code below based on the URL being used to access Key Vault and the Cosmos DB document ID that uses the same approach.

The resulting changes to my Web API code (based on the earlier web.config sample) are shown below. This all occurs at application startup time.

I have also defined a default Application Insights account into which any instance can log should it have problems (which includes not being able to read its expected Application Insights key). This is important as it allows us to troubleshoot issues without needing to get access to the VMSS instances.

Here’s how we authorise our calls to Key Vault to retrieve our initial configuration Secrets (called on line 51 of the above sample code).
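Something along these lines – the class shape, app setting names and Secret handling below are an illustrative sketch rather than the original gist:

using System;
using System.Configuration;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public static class RuntimeConfig
{
    // Convention: strip the trailing 6-character unique suffix from the
    // VMSS hostname to get the instance prefix, e.g. "webapi000001" -> "webapi".
    public static string VmssPrefix()
    {
        var host = Environment.MachineName;
        return host.Length > 6 ? host.Substring(0, host.Length - 6) : host;
    }

    // Read a Secret (e.g. the Cosmos DB Account URI and read-only Access Key)
    // from Key Vault using the conventions-based Secret name.
    public static async Task<string> GetSecretAsync(string vaultUrl, string secretName)
    {
        var kv = new KeyVaultClient(GetTokenAsync, new HttpClient());
        var secret = await kv.GetSecretAsync(vaultUrl, secretName);
        return secret.Value;
    }

    // Azure AD Service Principal credentials - the only values that need to
    // ship with the image, and they only grant read access to Key Vault.
    private static async Task<string> GetTokenAsync(string authority, string resource, string scope)
    {
        var credential = new ClientCredential(
            ConfigurationManager.AppSettings["AadClientId"],
            ConfigurationManager.AppSettings["AadClientSecret"]);
        var context = new AuthenticationContext(authority);
        var result = await context.AcquireTokenAsync(resource, credential);
        return result.AccessToken;
    }
}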

My goal was to make configuration easily manageable across multiple VMSS instances which requires some knowledge around how VMSS instance names are created.

The basic details are that they consist of a hostname prefix (based on what you input at VMSS creation time) appended with a base-36 (hexatrigesimal) value representing the actual instance. There’s a great blog post from Microsoft’s Guy Bowerman that covers this in detail so I won’t reproduce it here.

The final piece of the puzzle is the Cosmos DB configuration entry which I show below.
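It was shaped something like this sketch – the ‘id’ combines the instance prefix and service type per the convention above, and the other fields are illustrative:

{
  "id": "webapi-webapi",
  "Environment": "Production",
  "Settings": {
    "ServiceBusConnectionString": "Endpoint=sb://...",
    "ApplicationInsightsKey": "00000000-0000-0000-0000-000000000000"
  }
}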

The ‘id’ field maps to the VMSS instance prefix that is determined at runtime based on the name you used when creating the VMSS. We strip the trailing 6 characters to remove the unique component of each VMSS instance hostname.

The outcome of the three components (code changes, Key Vault and Cosmos DB) is that I can quickly add or remove VMSS groups in configuration, change where their configuration data is stored by updating the Key Vault Secrets, and even update running VMSS instances by changing the configuration settings and then forcing a restart on the VMSS instances, causing them to re-read configuration.

Is the above the only or best way to do this? Absolutely not 🙂

I’d like to think it’s a good way that might inspire you to build something similar or better 🙂

Interestingly, in getting to this stage I’ve also realised there might be some value in considering a move of this solution to Service Fabric in future, though I am more inclined to shift to Containers running under the control of an orchestrator like Kubernetes.

What are your thoughts?

Until the next post!


Moving from Azure VMs to Azure VM Scale Sets – VM Image Build

I have previously blogged about using Visual Studio Team Services (VSTS) to securely build and deploy solutions to Virtual Machines running in Azure.

In this and the following posts I am going to take my existing build process and modify it so I can make use of VM Scale Sets to host my API solution. This switch is to allow the API to scale under load.

My current setup is very much fit for purpose for the limited trial it’s been used in, but I know I’ll see at least 150 times the traffic when I am running at full scale in production, and while my trial environment barely scratches the surface in terms of consumed resources, I don’t want to have to capacity plan to the nth degree for production.

Shifting to VM Scale Sets with autoscale enabled will help me greatly in this respect!

Current State of Affairs

Let’s refresh ourselves with what is already in place.

Build

My existing build is fairly straightforward – we version the code (using a PowerShell script), restore packages, build the solution and then finally make sure all our artifacts are available for use by the Release Management process.

Existing build process

The output of this build is a Web Deploy package along with a PowerShell DSC module that configures the deployment on the target VM.

Release Management

I am using multiple Environments for Release Management to manage transformations of the Web Deploy Parameters file along with the Azure Subscription / Resource Group being deployed to. The Tasks in each Environment are the same though.

My Release Management Tasks (as shown below) open the NSG to allow DSC remote connections from VSTS, transform the Web Deploy Parameters file, find the VMs in a particular Azure Resource Group, copy the deployment package to each VM, run the DSC script to install the solution, before finally closing the NSG again to stop the unwashed masses from prying into my environment.

Existing release process

All good so far?

What’s the goal?

The goal is to make the minimum amount of changes to existing VSTS and deployment artifacts while moving to VM Scale Sets… sounds like an interesting challenge, so let’s go!

Converting the Build

The good news is that we can retain the majority of our existing Build definition.

Here are the items we do need to update.

Provisioning PowerShell

The old deployment approach leveraged PowerShell Desired State Configuration (DSC) to configure the target VM and deploy the custom code. The DSC script to achieve this is shown below.
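In rough terms it looked something like this sketch (feature names, paths and the package location are illustrative):

Configuration WebApiHost
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node "localhost"
    {
        # Make sure IIS and ASP.Net are present before we deploy.
        WindowsFeature IIS
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }

        WindowsFeature AspNet45
        {
            Ensure = "Present"
            Name   = "Web-Asp-Net45"
        }

        # Sync the Web Deploy package into the default web site.
        Script DeployPackage
        {
            SetScript  = { & "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" -verb:sync -source:package="C:\temp\WebApi.zip" -dest:auto }
            TestScript = { Test-Path "C:\inetpub\wwwroot\WebApi" }
            GetScript  = { @{ Result = "DeployPackage" } }
            DependsOn  = "[WindowsFeature]IIS"
        }
    }
}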

The challenge with the above PowerShell is it assumes the target VM has been configured to allow WinRM / DSC to run. In our updated approach of creating a VM Image this presents some challenges, so I redeveloped the above script so it doesn’t require the use of DSC. The result is shown below.
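The replacement does the same work as a plain script that Packer can execute directly on the temporary VM – again a sketch with illustrative paths:

param(
    [string]$PackagePath = "C:\temp\WebApi.zip"
)

# Install IIS and ASP.Net support - no WinRM / DSC plumbing required.
Install-WindowsFeature -Name Web-Server, Web-Asp-Net45 -IncludeManagementTools

# Sync the Web Deploy package into the default web site.
& "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" `
    -verb:sync `
    -source:package=$PackagePath `
    -dest:auto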

As an aside, we could also drop the use of the Parameters file here too. As we’ll see in another post, we need to make the VM Image stateless, so any local web.config changes that are environment-specific are problematic and are best excluded from the resulting image.

Network Security Group Script

In the new model, which prepares a VM Image, we no longer need the Azure PowerShell script that opens / closes the Network Security Group (NSG) on deployment, so it’s removed in the new process.

No more use of Release Management

As the result of our Build is a VM Image we no longer need to leverage Release Management either, making our overall process much simpler.

The New Build

The new Build definition is shown below – you will notice the above changes have been applied, along with the addition of two new Tasks. The aspect of this I am most happy about is that our core build actually remains mostly unchanged – we only had to add two Tasks and change one PowerShell script to make this work.

New build process

Let’s look at the new Tasks.

Build Machine Image

This Task utilises Packer from Hashicorp to prepare a generalised Windows VM image that we can use in a VM Scale Set.

The key item to note: you need an Azure Subscription in which a temporary VM and the final generalised VHD can be created, so that Packer can build the baseline image for you.

New Build Packer Task

You will notice we are using the existing artifacts staging directory as the source of our configuration PowerShell (DeployVmSnap.ps1), which is used by Packer to configure the host once the VM has been created from an Azure Gallery Image.

The other important item here is the use of the output field. This will contain the fully qualified URL in blob storage where the resulting packed image will reside. We can use this in our next step.

Create VM Image Registration

The last Task I’ve added is to invoke an Azure PowerShell script, which is just a PowerShell script, but with the necessary environmental configuration to allow me to execute Cmdlets that interact with Azure’s APIs.

The result of the previous Packer-based Task is a VHD sitting in a Blob Storage account. While we can use this in various scenarios, I am interested in ensuring it is visible in the Azure Portal and can be used in VM Scale Sets that utilise Managed Disks.

The PowerShell script is shown below.
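At its core it drives the AzureRM Managed Image cmdlets – a minimal sketch (the parameter plumbing is illustrative) looks like this:

param(
    [string]$ResourceGroupName,
    [string]$Location,
    [string]$ImageName,
    [string]$SourceVhdUri   # the 'output' value from the Packer Task
)

# Describe the generalised OS disk sitting in blob storage...
$image = New-AzureRmImageConfig -Location $Location
$image = Set-AzureRmImageOsDisk -Image $image -OsType Windows `
            -OsState Generalized -BlobUri $SourceVhdUri

# ...and register it as a Managed Image visible in the Portal.
New-AzureRmImage -ResourceGroupName $ResourceGroupName `
    -ImageName $ImageName -Image $image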

.. and here is how it is used in the Task in VSTS..

New build VM Image

You can see how we have utilised the Packer Task’s output parameter as an input into this Task (it’s in the “Script Arguments” box at the bottom of the Task).

The Result

Once we have this configured and running the result is a nice crisp VM Image that can be used in a VM Scale Set. The below screenshot shows you how this looks in my environment – I wrapped the Azure Storage Account where the VHDs live, along with the VM Image registrations in the same Resource Group for cleaner management.

Build Output

There are still some outstanding items we need to deal with, specifically: configuration management (our VM Image has to be stateless) and VM Scale Set creation using the Image. We will deal with these two items in the following posts.

For now I hope you have a good grasp on how you can convert an existing VSTS build that deploys to existing VMs to one that produces a generalised VM Image that you can use either for new VMs or in VM Scale Sets.

Until the next post.

🙂

Want to see how I dealt with instance configuration? Then have a read of my next post in this series.


Microsoft Open Source Roadshow – Free training on Open Source and Azure

Microsoft Open Source Roadshow

In early August I’ll be running a couple of free training days covering how developers who work in the Open Source space can bring their solutions to Microsoft’s Azure public cloud.

We’ll cover language support (application and Azure SDK), OS support (Linux, BSD), Datastores (MySQL, PostgreSQL, MongoDB), Continuous Deployment and, last, but not least, Containers (Docker, Container Registry, Kubernetes, et al).

We’re starting off in Sydney and Melbourne, so if you’re interested here are the links to register:

  • Sydney – Monday 7 August 2017: Register
  • Melbourne – Friday 11 August 2017: Register

If you have any questions you’d like to see answered on the days, feel free to leave a comment.

I hope to see you there!


Deploy a PHP site to Azure Web Apps using Dropbox

I’ve been having some good fun getting into the nitty gritty of Azure’s Open Source support and keep coming across some amazing things.

If you want to move away from those legacy hosting businesses and want a simple method to deploy static or dynamic websites, then this is worth a look.

The sample PHP site I used for this demonstration can be cloned from Github here: https://github.com/banago/simple-php-website

The video has no sound, but should be easy enough to follow without it.

It’s so simple even your dog could do it.

Dogue


Zero to MySQL in less than 10 minutes with Azure Database for MySQL and Azure Web Apps

I’m a long-time fan of Open Source and have spent chunks of my career knocking out LAMP solutions since before ‘LAMP’ was a thing!

Over the last few years we have seen a revived Microsoft begin to embrace (not ’embrace and extend’) the various Open Source platforms and tools that are out there and to actively contribute and participate with them.

Here’s our challenge today – set up a MySQL environment, including a web-based management UI, with zero local installation on your machine and minimal mouse clicks.

Welcome to Azure Cloud Shell

Our first step is to head on over to the Azure portal at https://portal.azure.com/ and login.

Once you are logged in open up a Cloud Shell instance by clicking on the icon at the top right of the navigation bar.

Cloud Shell

If this is the first time you’ve run it you will be prompted to create a home file share. Go ahead and do that :).

Once completed, run this command and note down the unique ID of the Subscription you’re using (or note the ID of the one you want to use!)

 
az account list

MySQL Magic

Now the fun begins! I bet you’re thinking “lots of CLI action”, and you’d be right. With a twist!

I’m going to present things using a series of simple bash scripts – you could easily combine these into one script and improve their argument handling, but I wanted to show the individual steps without writing one uber script that becomes impenetrable!

Here’s our script to setup MySQL in Azure.
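At its heart it wraps a handful of az commands – something along these lines (a sketch; the gist you’ll download in a moment is the real thing):

#!/bin/bash
# Sketch: create-azure-mysql.sh <subscription> <location> <resource-group> <server> <admin-user> <admin-password>
az account set --subscription "$1"
az group create --location "$2" --name "$3"
az mysql server create --resource-group "$3" --name "$4" \
    --location "$2" --admin-user "$5" --admin-password "$6"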

Now I could get you to type that out, or upload to your cloud share via the Portal, but that’s no fun!

At your Cloud Shell prompt run the following (update the last command line with your arguments):

 
curl -O -L https://gist.githubusercontent.com/sjwaight/0ba37e3c3522aeaebf6bd20c9f3895b0/raw/a9fccb9126330983b8a84a71f07dea2b1c583c68/create-azure-mysql.sh
chmod 755 create-azure-mysql.sh

# make sure you update the parameters for your environment
./create-azure-mysql.sh 368ff49e-XXXX-XXXX-XXXX-eb42e73e2f25 westus mysqldemorg mysqldemo01 yawadmin 5ecurePass@word!

After a few minutes you will have a MySQL server ready for use. Note that by default you won’t be able to connect to it as the firewall around it is shut by default (which is a good thing). We’ll rectify connectivity later. For now, on to the next piece of the puzzle.

Manage MySQL from a browser

No, not via some super-duper Microsoft MySQL tooling, but via everyone’s old favourite phpMyAdmin.

Surely this will take a lot of work to install I hear you ask? Not at all!

Enter Azure Web App Site Extensions! Specifically the phpMyAdmin extension.

Let’s get started by creating a new App Service Plan and Web App to which we can deploy our management web application.

We’re going to re-use the trick we learned above – pulling a Gist from Github using curl. First have a read through the script :).
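In outline it does something like the sketch below (the connection string name and SKU are illustrative; the gist below is the real thing):

#!/bin/bash
# Sketch: create-azure-webapp.sh <subscription> <location> <resource-group> <mysql-server> <mysql-admin> <mysql-password> <plan> <webapp>
az account set --subscription "$1"
az appservice plan create --resource-group "$3" --name "$7" --location "$2" --sku S1
az webapp create --resource-group "$3" --plan "$7" --name "$8"

# Hand the MySQL connection details to the Web App for phpMyAdmin to use.
az webapp config connection-string set --resource-group "$3" --name "$8" \
    --connection-string-type MySql \
    --settings mysql="Database=mysql;Data Source=$4.mysql.database.azure.com;User Id=$5@$4;Password=$6"

# Echo the outbound IP addresses - we'll need these for the MySQL firewall later.
az webapp show --resource-group "$3" --name "$8" --query outboundIpAddresses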

You can download this Gist from within your Cloud Shell and execute it as follows. Make sure to update the command line arguments.

 
curl -O -L https://gist.githubusercontent.com/sjwaight/d65a04036cd6ceda63bf90485e22adb4/raw/6802a35d21a9d46e0abb5e3f1c88f32551f325da/create-azure-webapp.sh
chmod 755 create-azure-webapp.sh

# make sure you update the parameters for your environment
./create-azure-webapp.sh 368ff49e-XXXX-XXXX-XXXX-eb42e73e2f25 westus mysqldemorg mysqldemo01 yawadmin 5ecurePass@word! mydemoapplan msqlmgewebapp

In order to deploy the Web App Site Extension we are going to dig a bit behind the covers of Azure App Services and utilise the REST API provided by the kudu site associated with our Web App (this appears as ‘Advanced tools’ in the Portal). If you want to understand more about kudu’s capabilities, and specifically how to work with Site Extensions, read the excellent kudu documentation.
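Under the covers, installing a Site Extension boils down to one authenticated PUT against the kudu REST API – roughly like this (the extension ID and credential placeholders are illustrative):

# Install the phpMyAdmin Site Extension via the kudu REST API,
# authenticating with the deployment user credentials.
curl -X PUT -u "DEPLOY_USER:DEPLOY_PASSWORD" \
    "https://YOUR_WEBAPP.scm.azurewebsites.net/api/siteextensions/phpMyAdmin"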

Note: if you haven’t previously setup a Git / FTP deployment user you should uncomment the line that does this. Note that this step sets the same credentials for all instances in the Subscription, so if you already have credentials defined think twice before uncommenting this line!

 
curl -O -L https://gist.githubusercontent.com/sjwaight/8a530aff0e5f3991484176825b49a563/raw/be8cfa68f95a29687f2dae6af9ca163e8e7877ac/enable-phpmyadmin.sh
chmod 755 enable-phpmyadmin.sh

# make sure you update the parameters for your environment
./enable-phpmyadmin.sh 368ff49e-XXXX-XXXX-XXXX-eb42e73e2f25 mysqldemorg msqlmgewebapp deployuser d3pl0yP455!

Done!

Browse to your freshly setup phpMyAdmin instance.

Connection Error

Oh noes!

Yes, we forgot to open up the firewall surrounding the Azure Database for MySQL instance. We can do this pretty easily.

Remember those ‘outboundIpAddresses’ values you captured when you created the Web App above? Good, this is where you will need them.

You should find that you have four source IP addresses from which outbound traffic can originate. These won’t change as long as you don’t “stop” or delete the Web App.

Here’s our simple script to enable the Web App to talk to our MySQL instance.
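It simply loops over the outbound IPs and creates a firewall rule for each – a sketch (rule names are illustrative):

#!/bin/bash
# Sketch: update-mysql-firewall.sh <subscription> <resource-group> <server> <ip1> <ip2> <ip3> <ip4>
az account set --subscription "$1"

RULE=0
for IP in "$4" "$5" "$6" "$7"
do
    az mysql server firewall-rule create --resource-group "$2" \
        --server-name "$3" --name "webapp-outbound-$RULE" \
        --start-ip-address "$IP" --end-ip-address "$IP"
    RULE=$((RULE+1))
done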

Now the usual drill.

Note: You might have more than four outbound IP addresses to allow – if so simply edit the script to suit.

 
curl -O -L https://gist.githubusercontent.com/sjwaight/bbd71ca3f094abe5cf438c1b826b83fe/raw/4b6aaddee8f6a21b81061ed4c4085258c0271068/update-mysql-firewall.sh
chmod 755 update-mysql-firewall.sh

# make sure you update the parameters for your environment
./update-mysql-firewall.sh 368ff49e-XXXX-XXXX-XXXX-eb42e73e2f25 mysqldemorg mysqldemo01 192.168.1.1 192.168.2.2 192.168.3.3 192.168.4.4

Once the rules are applied, try refreshing the web browser and you will now be viewing the glorious phpMyAdmin.

phpMyAdmin on Azure!

Congratulations, you now have a fully functional MySQL environment, with no installs and minimal configuration required!


Speaking at Office 365 Saturday

If you’re interested to learn more about Microsoft Graph API and how you can leverage it to build compelling solutions in the form of Bots in Microsoft Teams, I’ll be speaking at Office 365 Saturday in Sydney this week on June 3rd.

Tickets are free, but get in while there are still some left!

O365 Saturday Sydney

Saturday, Jun 3, 2017, 8:45 AM

Cliftons Sydney
60 Margaret Street Sydney, AU


Demo

If you are interested in the demonstration I ran during my talk you can download the code from Github.

The way the bot hangs together is shown below.

How is the bot built?


A Year with Azure Functions

“The best journeys answer questions that in the beginning you didn’t even think to ask.” – Jeff Johnson – 180° South

I thought with the announcement at Build 2017 of the preview of the Functions tooling for Visual Studio 2017 that I would take a look back on the journey I’ve been on with Functions for the past 12 months. This post is a chance to cover what’s changed and capture some of the lessons I learned along the way, and that hopefully you can take something away from.

In the beginning

I joined the early stages of a project team at a customer and was provided with access to their Microsoft Azure cloud environment as a way to stand up rapid prototypes and deliver ongoing solutions.

I’ve been keen for a while to do away with as much traditional infrastructure as possible in my solutions, primarily as a way to make ongoing management a much easier proposition (plus it aligns with my developer sensibilities). Starting with this environment I was determined to apply this where I could.

In this respect the (at the time) newly announced Function Apps fit the bill!

The First Release

I worked on a couple of early prototype systems for the customer that leveraged Azure Functions in their pre-GA state, edited directly in the Browser. I’m a big C# fan, so all our Functions have been C# flavoured.

After hacking away in the browser I certainly found out how much I missed Visual Studio’s Intellisense, though my C# improved after a few years away from deep use of the language!

We ran some field tests with the system, and out of this took some lessons away.

I blogged about Functions with KeyVault and also on how to deliver emails using Functions with SendGrid.

At this point I also learnt a key lesson: Functions are awesome, but may not be best suited to time-critical operations (where the trigger speed is measured in milliseconds or low seconds). This was particularly the case with uncompiled Function Apps (which at the time was the only option).

I wrote a Function to push a synthetic transaction into our pipeline periodically to offset some of this behaviour. Even after multiple revisions we’re still doing this for certain activities such as retrieving and caching Graph API access tokens.

Another pain point for us at this juncture was the lack of continuous deployment options. Our workaround was a basic copy / paste into a Web project held in a Git repository in VSTS.

Around the end of our initial field trials Functions hit GA which was perfect for us!

The Second Release

I now had a production-ready system running Functions at its core. At this stage we had all our Functions in a single App because the load wasn’t sufficient to break them out.

Our service was yet to launch when the Functions team announced a CI / CD experience using the ‘Deployment Options’ setting in the Azure Portal.

We revved to a second release, using the new CI / CD deployment option to enforce deployments to a production App Service Plan. We were still living with on-demand compilation which we found impactful at deployment time as everything had to be restored and recompiled.

The Third Release (funproj all round!)

Visual Studio 2015 tooling was announced!

Yep, we used it – welcome to funproj Revision (3) of the codebase!

About now, one of my earlier blogs on sending email came back to haunt me as I began to (and still do periodically) receive emails from people clearly trying out my sample code!

Ah… the dangers of writing demonstrations that have your real email address in the To: field.

One item we’ve done with each major revision of the Functions code is to publish a new App Service Plan and use it as our deployment target. We parameterise as much as we can so we can pre-populate things like Service Bus connection strings. Our newly deployed Functions are disabled and then manually cut over one at a time until we’re satisfied that the new deployment is in a happy state. This n-1 Function App will live for a couple of weeks after the initial deployment with its Functions disabled. This was all decided before deployment slot support, which may change this design 🙂

We have a Function we can run that checks that all your environment variables are set up in a new App Plan – hopefully in future we’ll get this automated away as well!

Additionally, if we have a major piece of code changing, but not sufficient for a major revision, we’ll do an Azure-level backup of the App before we run the deployment so we have a solid recovery point to return to if need be.

Function runtime versioning

We had one particularly bad day where our Functions started playing up out-of-the-blue. It turned out that an unintended breaking change had been made by the Functions team when they did a new release.

The key learning from this experience for us was that our production Function Apps should always run a known-good version of the runtime, pinned by setting FUNCTIONS_EXTENSION_VERSION to a specific version such as ‘1.0.10576’ rather than ‘~1’ (which means ‘latest non-breaking 1.x release’).

We run our development App Services using ‘~1’, however, so we can periodically decide to update our production runtime to the latest known-good version and not lag too far behind the good work the Functions team is doing.
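Pinning (or unpinning) is just an app setting change – for example via the Azure CLI, with illustrative app and resource group names:

# Production: pin to a known-good runtime version.
az functionapp config appsettings set --resource-group prod-rg --name prod-funcs \
    --settings FUNCTIONS_EXTENSION_VERSION=1.0.10576

# Development: track the latest non-breaking 1.x release.
az functionapp config appsettings set --resource-group dev-rg --name dev-funcs \
    --settings FUNCTIONS_EXTENSION_VERSION=~1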

Our other main issue during this revision was that our App Plan stopped being able to read the x509 cert store for some reason and our KeyVault client started failing. I never got to the bottom of it, but it was fixed through deploying to a new Plan.

Talking Functions

I was lucky enough to get to talk a bit about the solution we’d been building in early February.

The video quality varies, but here it is if you’re interested.

Hello Compiled Functions… oooooh Application Insights! (Rev 4!)

Compiled Functions? Sure, why not?!

The benefit here is the reduced startup times produced by the pre-compiled nature of the solution. The only downside we’ve found is deployments can cause transient thread-abort exceptions in some libraries, but these are not substantial and we deal with them gracefully in our solution.

As an example, we had previously seen Function cold start times of up to 15 seconds. We now rarely see any impact; this is partly the result of lessons we’ve learned along the way, coupled with the compiled codebase and the hard work the Functions team is doing to continuously improve their platform.

Early on in the life of our solution we had looked at Application Insights as a way to track our Function App performance and trace logging. We abandoned it because we ended up sometimes writing more App Insights code than app code! The newly introduced App Insights support, however, does away with all of this and it works a treat:

Functions App Insights

Unfortunately as part of the move here we also dropped Visual Studio 2015 support, which meant we dropped the ‘funproj’ project layout. As Compiled Functions now give us local design-time benefits we didn’t get before, it’s a good trade-off to my mind.

So… revision 5 anyone?!

We’re not here yet, though once the Visual Studio 2017 tooling hits GA we’ll probably look seriously at moving to it.

Ideally we’ll also move to a more fully testable solution and one that supports VSTS Release Management which we use for many other aspects of our overall solution.

Gee… sounds like a lot of work!

This might sound like a lot of busy work, but to be honest, the microservices we’re running using Functions are so small as to be easy to port from one revision to the next. We make conscious decisions around the right time to move, when we feel that the benefits to be realised are worth the small hit to our productivity.

Ultimately our goal is as shown below (minus the flames of course!)

That’s a wrap!

The Functions team is open in their work – ask them questions on Stack Overflow http://stackoverflow.com/questions/tagged/azure-functions

Raise your issues and submit your PRs here: https://github.com/Azure/Azure-Functions

Happy Days 🙂


Global Azure Bootcamp 2017 Session – .Net Core, Docker and Kubernetes

If you are attending my session and would like to undertake the exercise here’s what you’ll need to install locally, along with instructions on working with the code.

Pro-tip: As this is a demo consider using an Azure Region in North America where the compute cost per minute is lower than in Australia.

Prerequisites

Note that for both the Azure CLI and Kubernetes tools you might need to modify your PC’s PATH variable to include the paths to the ‘az’ and ‘kubectl’ commands.

On my install these ended up in:

az: C:\Users\simon\AppData\Roaming\Python\Python36\Scripts\
kubectl: c:\Program Files (x86)\

If you have any existing PowerShell or Command prompts open you will need to close and re-open to pick up the new path settings.

Readying your Docker-hosted App

When you compile your solution to deploy to Docker running on Azure, make sure you select the ‘Release’ configuration in Visual Studio. This will ensure the right entry point is created so your containers will start when deployed in Azure. If you run into issues, make sure you have this setting right!

If you compile the Demo2 solution it will produce a Docker image with the tag ‘1.0’. You can then compile the Demo3 solution which will produce a Docker image with the tag ‘1.1’. They can both live in your local environment side-by-side without issues.

Log into your Azure Subscription

Open up PowerShell or a Command Prompt and log into your subscription.

az login

Note: if you have more than one Subscription you will need to use the az account command to select the right one.

Creating an Azure Container Service with Kubernetes.

Before you do this step, make sure you have created your Azure Container Registry (ACR). The Azure Container Service has some logic built-in that will make it easier to pull images from ACR and avoid a bunch of cross-configuration. The ACR has to be in the same Subscription for this to work.

I chose to create an ACS instance using the Azure CLI because it allows me to control the Kubernetes cluster configuration better than the Portal.

The easiest way to get started is to follow the Kubernetes walk-through on the Microsoft Docs site.

Continue until you get to the “Create your first Kubernetes service” and then stop.

Publishing to your Registry

As a starting point make sure you enable the admin user for your Registry so you can push images to it. You can do this via the Portal under “Access keys”.

Start a new PowerShell prompt and let’s make sure we’re good to go by checking that our compiled .Net Core solution images are here.

docker images

REPOSITORY                          TAG      IMAGE ID       CREATED  SIZE
siliconvalve.gab2017.demowebcore    1.0      e0f32b05eb19   1m ago   317MB
siliconvalve.gab2017.demowebcore    1.1      a0732b0ead13   1m ago   317MB

Good!

Now let’s log into our Azure Registry.

docker login YOUR_ACR.azurecr.io -u ADMIN_USER -p PASSWORD
Login Succeeded

Let’s tag and push our local images to our Azure Container Registry. Note this will take a while as it publishes the image, which in our case is 317MB. If you update it in future only the differential will be re-published.

docker tag siliconvalve.gab2017.demowebcore:1.0 YOUR_ACR.azurecr.io/gab2017/siliconvalve.gab2017.demowebcore:1.0
docker tag siliconvalve.gab2017.demowebcore:1.1 YOUR_ACR.azurecr.io/gab2017/siliconvalve.gab2017.demowebcore:1.1
docker push YOUR_ACR.azurecr.io/gab2017/siliconvalve.gab2017.demowebcore:1.0
The push refers to a repository [YOUR_ACR.azurecr.io/gab2017/siliconvalve.gab2017.demowebcore]
f190fc6d75e4: Pushed
e64be9dd3979: Pushed
c300a19b03ee: Pushed
33f8c8c50fa7: Pushed
d856c3941aa0: Pushed
57b859e6bf6a: Pushed
d17d48b2382a: Pushed
latest: digest: sha256:14cf48fbab5949b7ad1bf0b6945201ea10ca9eb2a26b06f37 size: 1787

Repeat the ‘docker push’ for your 1.1 image too.

At this point we could now push this image to any compatible Docker host and it would run.

Deploying to Azure Container Service

Now comes the really fun bit :).

If you inspect the GAB-Demo2 project you will find a Kubernetes deployment file, the contents of which are displayed below.
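It defines a LoadBalancer-type Service plus a Deployment along these lines – the registry path, replica count and secret name are illustrative, and the API versions are those current at the time of writing:

apiVersion: v1
kind: Service
metadata:
  name: gabdemo
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: gabdemo
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: gabdemo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: gabdemo
    spec:
      containers:
      - name: gabdemo
        image: YOUR_ACR.azurecr.io/gab2017/siliconvalve.gab2017.demowebcore:1.0
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: acr-admin-secret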

If you update the Azure Container Registry path and insert an appropriate admin secret you now have a file that will deploy your docker image to a Kubernetes-managed Azure Container Service.

At your command line run:

kubectl create -f PATH_TO_FILE\gab-api-demo-deployment.yml
service "gabdemo" created
deployment "gabdemo" created

After a few minutes if you look in the Azure Portal you will find that a public load balancer has been created by Kubernetes which will allow you to hit the API definition at http://IP_OF_LOADBALANCER/swagger

You can also find this IP address by using the Kubernetes management UI, which we can get to using a secured tunnel to the Kubernetes management host (this step only works if you setup the ACS environment fully and downloaded credentials).

At your command prompt type:

az acs kubernetes browse --name=YOUR_CLUSTER_NAME --resource-group=YOUR_RESOURCE_GROUP
Proxy running on 127.0.0.1:8001/ui
Press CTRL+C to close the tunnel...
Starting to serve on 127.0.0.1:8001

A browser will pop open on the Kubernetes management portal and you can then open the Services view and see your published endpoints by editing the ‘gabdemo’ Service and viewing the public IP address at the very bottom of the dialog.

If you hit the Swagger URL for this you might get a 500 server error – this is easy to fix (and is a bug in my code that I need to fix!) – simply change the URL in the Swagger page to include “v1.0” instead of “v1”.

Kubernetes Service Details

Upgrade the Container image

For extra bonus points you can also upgrade the container image running by telling Kubernetes to modify the Service. You can do this either via the commandline using kubectl or you can edit the Service definition via the management web UI (shown below) and Kubernetes will upgrade the Container image running. If you hit the swagger page for the service you will see the API version has incremented to 1.1 now!

Edit Service
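If you prefer the command line, the same upgrade is a one-liner, assuming the Deployment and container names used in the sketch above:

kubectl set image deployment/gabdemo gabdemo=YOUR_ACR.azurecr.io/gab2017/siliconvalve.gab2017.demowebcore:1.1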

You could also choose to roll-back if you wanted – simply update the tag back to 1.0 and watch the API rollback.

So there we have it – modernising existing .Net Windows-hosted apps so they run on Docker Containers on Azure, managed by Kubernetes!


When to use Azure Load Balancer or Application Gateway

One thing Microsoft Azure is very good at is giving you choices – choices on how you host your workloads and how you let people connect to those workloads.

In this post I am going to take a quick look at two technologies that are related to high availability of solutions: Azure Load Balancer and Application Gateway.

Application Gateway is a bit of a dark horse for many people. Anyone coming to Azure has to get to grips with Azure Load Balancer (ALB) as a means to provide highly available solutions, but many don’t progress beyond this to look at Application Gateway (AG) – hopefully this post will change that!

The OSI model

A starting point to understand one of the key differences between the ALB and AG offerings is the OSI model which describes the logical layers required to implement computer networking. Wikipedia has a good introduction, which I don’t intend to reproduce here, but I’ll cover the important bits below.

TCP/IP and UDP are bundled together at Layer 4 in the OSI model. These underpin a range of higher-level protocols, such as HTTP, which sit at the top of the stack at Layer 7. You will often see load balancers defined as Layer 4 or Layer 7 and, as you can guess, the traffic they can manage varies as a result.

Load balancing HTTP using a Layer 4 capable device works, though Cookie-based session affinity is out, while a pure Layer 7 solution has no way to handle the TCP or UDP traffic that sits below it at Layer 4.

Clear as mud? Great!

You might ask what this means for us in this post?

I’m glad you asked 🙂 The first real difference between the Azure Load Balancer and Application Gateway is that an ALB works with traffic at Layer 4, while Application Gateway handles just Layer 7 traffic, and specifically, within that, HTTP (including HTTPS and WebSockets).

Anything else to know?

There are a few other differences it is worth calling out. I will summarise as follows:

  • Load Balancer is free (unless you wish to use multiple Virtual IPs). Application Gateway is billed per-hour, and has two tiers, depending on features you need (with/without WAF)
  • Application Gateway supports SSL termination, URL-based routing, multi-site routing, Cookie-based session affinity and Web Application Firewall (WAF) features. Azure Load Balancer provides basic load balancing based on 2-tuple or 5-tuple matches (the 5-tuple being source IP, source port, destination IP, destination port and protocol).
  • Load Balancer only supports endpoints hosted in Azure. Application Gateway can support any routable IP address.

It’s also worth pointing out that when you provision an Application Gateway you also get a transparent Load Balancer along for the ride. The Load Balancer is responsible for balancing traffic between the Application Gateway instances to ensure it remains highly available 🙂

Using either with VM Scale Sets (VMSS)

The default setup for a VMSS includes a Load Balancer. If you want to use an Application Gateway instead you can either leave the Load Balancer in place and put the Application Gateway in front of it, or you can use an ARM template similar to the following sample to swap out the Load Balancer for an Application Gateway instance instead.

https://github.com/Azure/azure-quickstart-templates/tree/master/201-vmss-windows-app-gateway

The key item to ensure is that the Network Interface Card (NIC) of each VM in the Scale Set is configured to be part of the Application Gateway’s Backend Address Pool.
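In template terms, each ipConfiguration in the Scale Set’s network profile references the Gateway’s pool – a fragment similar to this (variable and resource names are illustrative):

"ipConfigurations": [
  {
    "name": "ipconfig1",
    "properties": {
      "subnet": { "id": "[variables('subnetId')]" },
      "applicationGatewayBackendAddressPools": [
        { "id": "[concat(variables('appGatewayId'), '/backendAddressPools/appGatewayBackendPool')]" }
      ]
    }
  }
]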

So there we have a quick overview of the Azure Load Balancer and Application Gateway offerings and when to consider one over the other.

Either the Azure Load Balancer overview or the Application Gateway introduction provides a good breakdown of where you should consider using one or the other.

Happy (highly available) days! 🙂
