Category Archives: Cloud

Moving from Azure VMs to Azure VM Scale Sets – Runtime Instance Configuration

In my previous post I covered how you can move from deploying a solution to pre-provisioned Virtual Machines (VMs) in Azure to a process that allows you to create a custom VM Image that you deploy into VM Scale Sets (VMSS) in Azure.

As I alluded to in that post, one item we will need to take care of in order to truly move to a VMSS approach using a VM image is to remove any local static configuration data we might bake into our solution.

There is a range of options you can choose from when going down this path, from solutions you build yourself to running services such as HashiCorp’s Consul.

The environment I’m running in is fairly simple, so I decided to focus on a simple custom build. The remainder of this post covers the approach I’ve used to build a solution that works for me, and that perhaps might inspire you.

I am using an ASP.NET Web API as my example, but I am also using a similar pattern for Windows Services running on other VMSS instances – only the location of your startup code will differ.

The Starting Point

Back in February I blogged about how I was managing configuration of a Web API I was deploying using VSTS Release Management. In that post I covered how you can use the excellent Tokenization Task to create a Web Deploy Parameters file that can be used to replace placeholders in an application’s web.config at deployment time.

My sample web.config is shown below.
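
In shape it’s just a handful of tokenised appSettings – the key names and token values in this sketch are illustrative placeholders rather than my real entries:

<configuration>
  <appSettings>
    <!-- __Tokens__ are swapped in at deploy time by the Tokenization Task
         using the Web Deploy Parameters file. -->
    <add key="DocDbEndpoint" value="__DocDbEndpoint__" />
    <add key="DocDbAuthKey" value="__DocDbAuthKey__" />
    <add key="AppInsightsKey" value="__AppInsightsKey__" />
  </appSettings>
</configuration>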

The problem with this approach when we shift to VM Images is that these values are baked into the VM Image that is the build output, and that image can in turn be deployed to any environment. I could work around this by building a VM Image for each environment I deploy to, but frankly that is less than ideal and breaks the idea of “one binary (immutable VM), many environments”.

The Solution

I didn’t really want to go down the route of service discovery using something like Consul, and I really only wanted to use Azure’s default networking setup. This networking requirement meant there was no custom private DNS I could use for some form of configuration service discovery based on hostname lookups.

…and… to be honest, with the PaaS services I have in Azure, I can build my own solution pretty easily.

The solution I did land on looks similar to the below.

  • Store runtime configuration in Cosmos DB and geo-replicate this information so it is highly available. Each VMSS setup gets its own configuration document which is identified by a key-service pair as the document ID.
  • Leverage a read-only Access Key for Cosmos DB because we won’t ever ask clients to update their own config!
  • Use Azure Key Vault to store the Cosmos DB Account and Access Key that can be used to read the actual configuration. Key Vault is regionally available by default so we’re good there too.
  • Configure an Azure AD Service Principal with access to Key Vault to allow our solution to connect to Key Vault.

I used a conventions-based approach to configuration so that the whole process works from just the VMSS instance name and the service type requesting configuration. You can see this in the code below, in both the URL used to access Key Vault and the Cosmos DB document ID, which follow the same convention.

The resulting changes to my Web API code (based on the earlier web.config sample) are shown below. This all occurs at application startup time.
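
The startup code itself is C#, but the convention logic is easy to sketch in PowerShell. In this hedged sketch the Key Vault name, service tag and Secret naming convention are illustrative assumptions rather than the real values:

# Illustrative only - the Vault name, service tag and naming convention are assumptions.
$serviceType = "api"                                          # e.g. api, or winsvc for the Windows Services
$hostname    = $env:COMPUTERNAME                              # e.g. myapiprd00001z
$vmssPrefix  = $hostname.Substring(0, $hostname.Length - 6)   # strip the base-36 suffix -> myapiprd

# Convention: one Key Vault Secret per VMSS group / service type pair.
$secretUrl   = "https://myconfigvault.vault.azure.net/secrets/$vmssPrefix-$serviceType"

# Convention: the Cosmos DB configuration document uses the same key-service pair as its ID.
$configDocId = "$vmssPrefix-$serviceType"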

I have also defined a default Application Insights account into which any instance can log should it have problems (which includes not being able to read its expected Application Insights key). This is important as it allows us to troubleshoot issues without needing to get access to the VMSS instances.

Here’s how we authorise our calls to Key Vault to retrieve our initial configuration Secrets (this is invoked from the application startup code above).
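
Under the covers this is a standard Azure AD client-credentials flow; a hedged PowerShell rendering of it (the tenant, client ID, secret and Vault / Secret names are all placeholders) looks like this:

# Service Principal (client credentials) token request for Key Vault.
# All identifiers below are placeholders - never bake the real ones into the VM Image!
$tokenResponse = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/{tenant-id}/oauth2/token" `
    -Body @{
        grant_type    = "client_credentials"
        client_id     = "{service-principal-app-id}"
        client_secret = "{service-principal-key}"
        resource      = "https://vault.azure.net"
    }

# Read a Secret using the bearer token.
$secret = Invoke-RestMethod `
    -Uri "https://myconfigvault.vault.azure.net/secrets/myapiprd-api?api-version=2016-10-01" `
    -Headers @{ Authorization = "Bearer $($tokenResponse.access_token)" }
$secret.value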

My goal was to make configuration easily manageable across multiple VMSS instances, which requires some knowledge of how VMSS instance names are created.

The basic details are that the name consists of a hostname prefix (based on what you input at VMSS creation time) appended with a base-36 (hexatrigesimal) value representing the actual instance. There’s a great blog post from Microsoft’s Guy Bowerman that covers this in detail so I won’t reproduce it here.
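
As a quick worked example, though (using a hypothetical prefix of myapiprd), here’s how instance number 71 gets its zero-padded base-36 suffix:

# Compute the 6-character base-36 suffix for a VMSS instance ID (hypothetical values).
$chars  = "0123456789abcdefghijklmnopqrstuvwxyz"
$n      = 71
$suffix = ""
while ($n -gt 0) {
    $suffix = $chars.Substring($n % 36, 1) + $suffix
    $n      = [math]::Floor($n / 36)
}
$suffix = $suffix.PadLeft(6, "0")
"myapiprd$suffix"   # -> myapiprd00001z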

The final piece of the puzzle is the Cosmos DB configuration entry which I show below.
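
Here’s an illustrative example of its shape – the id follows the key-service convention above, while the settings payload is entirely made up:

{
  "id": "myapiprd-api",
  "settings": {
    "AppInsightsKey": "00000000-0000-0000-0000-000000000000",
    "SqlConnectionString": "Server=tcp:myprodsql.database.windows.net;...",
    "SomeFeatureFlag": "true"
  }
}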

The ‘id’ field maps to the VMSS instance prefix that is determined at runtime based on the name you used when creating the VMSS. We strip the trailing 6 characters to remove the unique component of each VMSS instance hostname.

The outcome of the three components (code changes, Key Vault and Cosmos DB) is that I can quickly add or remove VMSS groups in configuration, change where their configuration data is stored by updating the Key Vault Secrets, and even update running VMSS instances by changing the configuration settings and then forcing a restart, which causes the instances to re-read their configuration.

Is the above the only or best way to do this? Absolutely not 🙂

I’d like to think it’s a good way that might inspire you to build something similar or better 🙂

Interestingly, in getting to this stage I’ve also realised there might be some value in moving this solution to Service Fabric in future, though I am more inclined to shift to Containers running under the control of an orchestrator like Kubernetes.

What are your thoughts?

Until the next post!


Moving from Azure VMs to Azure VM Scale Sets – VM Image Build

I have previously blogged about using Visual Studio Team Services (VSTS) to securely build and deploy solutions to Virtual Machines running in Azure.

In this and the following posts I am going to take my existing build process and modify it so I can make use of VM Scale Sets to host my API solution. This switch is to allow the API to scale under load.

My current setup is very much fit for purpose for the limited trial it’s been used in, but I know I’ll see at least 150 times the traffic when I am running at full scale in production, and while my trial environment barely scratches the surface in terms of consumed resources, I don’t want to have to capacity plan to the nth degree for production.

Shifting to VM Scale Sets with autoscale enabled will help me greatly in this respect!

Current State of Affairs

Let’s refresh ourselves with what is already in place.

Build

My existing build is fairly straightforward – we version the code (using a PowerShell script), restore packages, build the solution and then finally make sure all our artifacts are available for use by the Release Management process.

Existing build process

The output of this build is a Web Deploy package along with a PowerShell DSC module that configures the deployment on the target VM.

Release Management

I am using multiple Environments for Release Management to manage transformations of the Web Deploy Parameters file along with the Azure Subscription / Resource Group being deployed to. The Tasks in each Environment are the same though.

My Release Management Tasks (as shown below) open the NSG to allow DSC remote connections from VSTS, transform the Web Deploy Parameters file, find the VMs in a particular Azure Resource Group, copy the deployment package to each VM, run the DSC script to install the solution, before finally closing the NSG again to stop the unwashed masses from prying into my environment.

Existing release process

All good so far?

What’s the goal?

The goal is to make the minimum amount of changes to existing VSTS and deployment artifacts while moving to VM Scale Sets… sounds like an interesting challenge, so let’s go!

Converting the Build

The good news is that we can retain the majority of our existing Build definition.

Here are the items we do need to update.

Provisioning PowerShell

The old deployment approach leveraged PowerShell Desired State Configuration (DSC) to configure the target VM and deploy the custom code. The DSC script to achieve this is shown below.
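
In spirit it looks like this simplified sketch – the feature, package and path names here are illustrative rather than the production values:

Configuration WebApiDeploy
{
    # Simplified illustrative sketch - resource names and paths are assumptions.
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node "localhost"
    {
        # Make sure IIS and ASP.NET are present.
        WindowsFeature WebServer
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }

        WindowsFeature AspNet45
        {
            Ensure = "Present"
            Name   = "Web-Asp-Net45"
        }

        # Unpack the deployment package into the site folder.
        Archive WebApiContent
        {
            Ensure      = "Present"
            Path        = "C:\Deploy\WebApi.zip"
            Destination = "C:\inetpub\wwwroot\webapi"
            DependsOn   = "[WindowsFeature]WebServer"
        }
    }
}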

The challenge with the above PowerShell is that it assumes the target VM has been configured to allow WinRM / DSC to run. In our updated approach of creating a VM Image this presents some challenges, so I reworked the script so it doesn’t require the use of DSC. The result is shown below.
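
Again, a hedged sketch of the shape of that script – the msdeploy arguments and paths are assumptions, not the exact contents of DeployVmSnap.ps1:

# Illustrative DSC-free provisioning sketch - paths and parameter values are assumptions.
# Install IIS and ASP.NET support directly.
Install-WindowsFeature -Name Web-Server, Web-Asp-Net45

# Sync the Web Deploy package into IIS using msdeploy.exe.
$msdeploy = "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"
$msdeployArgs = @(
    "-verb:sync",
    "-source:package=C:\Deploy\WebApi.zip",
    "-dest:auto",
    "-setParam:name=`"IIS Web Application Name`",value=`"Default Web Site/webapi`""
)
& $msdeploy $msdeployArgs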

As an aside, we could also drop the use of the Parameters file here too. As we’ll see in another post, we need to make the VM Image stateless, so any local web.config changes that are environment-specific are problematic and are best excluded from the resulting image.

Network Security Group Script

In the new model, which prepares a VM Image, we no longer need the Azure PowerShell script that opens / closes the Network Security Group (NSG) on deployment, so it’s removed in the new process.

No more use of Release Management

As the result of our Build is a VM Image we no longer need to leverage Release Management either, making our overall process much simpler.

The New Build

The new Build definition is shown below – you will notice the above changes have been applied. The aspect of this I am most happy about is that our core build remains mostly unchanged – we have only had to add two additional Tasks and change one PowerShell script to make this work.

New build process

Let’s look at the new Tasks.

Build Machine Image

This Task utilises Packer from Hashicorp to prepare a generalised Windows VM image that we can use in a VM Scale Set.

The key item to note is that you need an Azure Subscription in which a temporary VM, and the final generalised VHD, can be created so that Packer can build the baseline image for you.

New Build Packer Task

You will notice we are using the existing artifacts staging directory as the source of our configuration PowerShell (DeployVmSnap.ps1), which is used by Packer to configure the host once the VM has been created from an Azure Gallery Image.
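
For reference, the Task is essentially driving a Packer template along these lines – a trimmed, hedged sketch in which the credentials, resource names and marketplace image details are placeholders (the final sysprep / generalise step the builder performs is not shown):

{
  "builders": [{
    "type": "azure-arm",
    "client_id": "{service-principal-app-id}",
    "client_secret": "{service-principal-key}",
    "tenant_id": "{tenant-id}",
    "subscription_id": "{subscription-id}",
    "resource_group_name": "images-rg",
    "storage_account": "imagevhds",
    "capture_container_name": "images",
    "capture_name_prefix": "webapi",
    "os_type": "Windows",
    "image_publisher": "MicrosoftWindowsServer",
    "image_offer": "WindowsServer",
    "image_sku": "2012-R2-Datacenter",
    "location": "Australia East",
    "vm_size": "Standard_D2_v2",
    "communicator": "winrm",
    "winrm_use_ssl": true,
    "winrm_insecure": true,
    "winrm_username": "packer"
  }],
  "provisioners": [{
    "type": "powershell",
    "script": "DeployVmSnap.ps1"
  }]
}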

The other important item here is the use of the output field. This will contain the fully qualified URL in blob storage where the resulting packed image will reside. We can use this in our next step.

Create VM Image Registration

The last Task I’ve added is to invoke an Azure PowerShell script, which is just a PowerShell script, but with the necessary environmental configuration to allow me to execute Cmdlets that interact with Azure’s APIs.

The result of the previous Packer-based Task is a VHD sitting in a Blob Storage account. While we can use this in various scenarios, I am interested in ensuring it is visible in the Azure Portal and can also be used in VM Scale Sets that utilise Managed Disks.

The PowerShell script is shown below.
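
A hedged sketch of the approach, using the AzureRM Managed Disks Cmdlets (the resource names are placeholders; the VHD URI is passed in from the Packer Task):

# Illustrative sketch: register a generalised VHD as a (Managed Disk) VM Image.
param (
    [string]$ResourceGroupName = "images-rg",
    [string]$ImageName = "webapi-image",
    [string]$Location = "Australia East",
    [string]$VhdUri    # fully qualified blob URL emitted by the Packer Task
)

$imageConfig = New-AzureRmImageConfig -Location $Location
$imageConfig = Set-AzureRmImageOsDisk -Image $imageConfig `
    -OsType Windows -OsState Generalized -BlobUri $VhdUri

New-AzureRmImage -ResourceGroupName $ResourceGroupName `
    -ImageName $ImageName -Image $imageConfig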

…and here is how it is used in the Task in VSTS…

New build VM Image

You can see how we have utilised the Packer Task’s output parameter as an input into this Task (it’s in the “Script Arguments” box at the bottom of the Task).

The Result

Once we have this configured and running, the result is a nice crisp VM Image that can be used in a VM Scale Set. The below screenshot shows how this looks in my environment – I put the Azure Storage Account where the VHDs live, along with the VM Image registrations, in the same Resource Group for cleaner management.

Build Output

There are still some outstanding items we need to deal with, specifically: configuration management (our VM Image has to be stateless) and VM Scale Set creation using the Image. We will deal with these two items in the following posts.

For now I hope you have a good grasp on how you can convert an existing VSTS build that deploys to existing VMs to one that produces a generalised VM Image that you can use either for new VMs or in VM Scale Sets.

Until the next post.

🙂

Want to see how I dealt with instance configuration? Then have a read of my next post in this series.


Microsoft Open Source Roadshow – Free training on Open Source and Azure

Microsoft Open Source Roadshow

In early August I’ll be running a couple of free training days covering how developers who work in the Open Source space can bring their solutions to Microsoft’s Azure public cloud.

We’ll cover language support (application and Azure SDK), OS support (Linux, BSD), datastores (MySQL, PostgreSQL, MongoDB), Continuous Deployment and, last but not least, Containers (Docker, Container Registry, Kubernetes, et al).

We’re starting off in Sydney and Melbourne, so if you’re interested here are the links to register:

  • Sydney – Monday 7 August 2017: Register
  • Melbourne – Friday 11 August 2017: Register

If you have any questions you’d like to see answered on the days, feel free to leave a comment.

I hope to see you there!


Deploy a PHP site to Azure Web Apps using Dropbox

I’ve been having some good fun getting into the nitty gritty of Azure’s Open Source support and keep coming across some amazing things.

If you want to move away from those legacy hosting businesses and want a simple method to deploy static or dynamic websites, then this is worth a look.

The sample PHP site I used for this demonstration can be cloned from GitHub here: https://github.com/banago/simple-php-website

The video has no sound, but should be easy enough to follow.

It’s so simple even your dog could do it.

Dogue


AAD B2C Talk – Innovation Days 2016 Wrap

I recently spoke at the Innovation Days 2016 event held in Sydney on Azure AD B2C.

The presentation for my talk is available here:

https://1drv.ms/p/s!AqBI2LiKM4LHwNJvTxrXNAblpTBCJA

and you can find the sample code for the web and API apps here:

https://github.com/sjwaight/innovationdays2016/

Why You Should Care About Containers

The release this week of Windows Server 2016 Technical Preview 3, which includes the first public release of Microsoft’s Docker-compatible container implementation, is sure to bring additional focus onto an already hot topic.

I was going to write a long introductory post about why containers matter, but Azure CTO Mark Russinovich beat me to it with his great post this week over on the Azure site.

Instead, here’s a TL;DR summary on containers and the Windows announcement this week.

  • A Container isn’t a Virtual Machine – it’s a Virtual Operating System. Low-level services are provided by a Container Host which manages resource allocation (CPU / RAM / Disk). Some smarts around filesystem use mean a Container can effectively share most of the underlying Container Host’s filesystem and only needs to track delta changes to files.
  • Containers are not just a cloud computing thing: they can run anywhere you can run a Linux or Windows server.
  • Containers are, however, well suited to cloud computing because they offer:
    • faster startup times (they aren’t an entire OS on their own)
    • easier duplication and snapshotting (no need to track an entire VM any more)
    • higher density of hosting (though noisy neighbour control still needs solving)
    • easier portability: anywhere you have a compatible Container Host you can run your Container. The underlying virtualisation platform no longer matters, just the OS.
  • Docker is supported on all major tier-one public clouds: Azure, AWS, GCP, Bluemix and SoftLayer.
  • A Linux Container can’t run on a Windows Host (and vice versa): the Container Host shares its filesystem with a Container so it’s not possible to mix and match them!
  • Containers are well suited to use in microservices architectures where a Container hosts a single service.
  • Docker isn’t the only Container tech about (but is the best known and considered most mature) and we can hold out hope of good interoperability in the Container space (for now) thanks to the Open Container Initiative (OCI).

Containers offer great value for DevOps-type custom software development and delivery, but can also work for standard off-the-shelf software too. I fully expect we will see Microsoft offer Containers for specific roles for their various server products.

As an example, for Exchange Server you may see Containers available for each Exchange role: Mailbox, Client Access (CAS), Hub Transport (Bridgehead), Unified Messaging and Edge Transport. You apply minimal configuration to the Container but can immediately deploy it into an existing or new Exchange environment. I would imagine this would make a great deal of sense to the teams running Office 365 given the number of instances of these they would have to run.

So, there we have it – hopefully an easily digestible intro and summary of all things Containers. If you want to play with the latest Windows Server release you can spin up a copy in Azure; if you don’t have a subscription, sign up for a trial. Alternatively, Docker offers some good introductory resources and training is available in Australia*.

HTH.

* Disclaimer: Cevo is a sister company of my employer Kloud Solutions.


Setting Instance Level Public IPs on Azure VMs

Since October 2014 it has been possible to add a public IP address to a virtual machine in Azure so that it can be directly connected to by clients on the internet. This bypasses the load balancing in Azure and is primarily designed for those scenarios where you need to test a host without the load balancer, or you are deploying a technology that may require a connection type that isn’t suited to Azure’s Load Balancing technology.

This is all great, but the current implementation provides you with dynamic IP addresses only, which is not great unless you can wrap a DNS CNAME over the top of them. Reading the ILPIP documentation suggested that a custom FQDN was generated for an ILPIP, but for the life of me I couldn’t get it to work!

I went around in circles a bit based on the documentation Microsoft supplies as it looked like all I needed to do was to call the Set-AzurePublicIP Cmdlet and the Azure fabric would take care of the rest… but no such luck!

Get-AzureVM -ServiceName svc01 -Name vm01 | `
Set-AzurePublicIP -PublicIPName vm01ip -IdleTimeoutInMinutes 4 | `
Update-AzureVM

When I did a Get-AzureVM after the above I got the following output – note that I did get a public IP, but no hostname to go along with it!

DeploymentName              : svc01
Name                        : vm01
Label                       :
VM                          : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.PersistentVM
InstanceStatus              : ReadyRole
IpAddress                   : 10.0.0.5
InstanceStateDetails        :
PowerState                  : Started
InstanceErrorCode           :
InstanceFaultDomain         : 1
InstanceName                : vm01
InstanceUpgradeDomain       : 1
InstanceSize                : Small
HostName                    : vm01
AvailabilitySetName         : asn01
DNSName                     : http://svc01.cloudapp.net/
Status                      : ReadyRole
GuestAgentStatus            : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.GuestAgentStatus
ResourceExtensionStatusList : {Microsoft.Compute.BGInfo}
PublicIPAddress             : 191.239.XX.XX
PublicIPName                : vm01ip
PublicIPDomainNameLabel     :
PublicIPFqdns               : {}
NetworkInterfaces           : {}
VirtualNetworkName          : Group demo01
ServiceName                 : svc01
OperationDescription        : Get-AzureVM
OperationId                 : 62fdb5b28dccb3xx7ede3yyy18c0454
OperationStatus             : OK

Aaarggh!

The Solution

It turns out, after a little experimentation, that all you have to do to get this to work is to supply a value to an undocumented parameter DomainNameLabel for the Set-AzurePublicIP Cmdlet.

Note: there is also no way to achieve this at time of writing via the Azure web portals – you have to use PowerShell to get this configured.

Let’s try our call again above with the right arguments this time!

Get-AzureVM -ServiceName svc01 -Name vm01 | `
Set-AzurePublicIP -PublicIPName vm01ip `
   -IdleTimeoutInMinutes 4 -DomainNameLabel vm01ilpip | `
Update-AzureVM

Success!!

DeploymentName              : svc01
Name                        : vm01
Label                       :
VM                          : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.PersistentVM
InstanceStatus              : ReadyRole
IpAddress                   : 10.0.0.5
InstanceStateDetails        :
PowerState                  : Started
InstanceErrorCode           :
InstanceFaultDomain         : 1
InstanceName                : vm01
InstanceUpgradeDomain       : 1
InstanceSize                : Small
HostName                    : vm01
AvailabilitySetName         : asn01
DNSName                     : http://svc01.cloudapp.net/
Status                      : ReadyRole
GuestAgentStatus            : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.GuestAgentStatus
ResourceExtensionStatusList : {Microsoft.Compute.BGInfo}
PublicIPAddress             : 191.239.XX.XX
PublicIPName                : vm01ip
PublicIPDomainNameLabel     : vm01ilpip
PublicIPFqdns               : {vm01ilpip.svc01.cloudapp.net , vm01ilpip.0.svc01.cloudapp.net}
NetworkInterfaces           : {}
VirtualNetworkName          : Group demo01
ServiceName                 : svc01
OperationDescription        : Get-AzureVM
OperationId                 : 62fdb5b28dccb3xx7ede3yyy18c0454
OperationStatus             : OK

Now that I have this information I can set up DNS CNAMEs against the PublicIPFqdns and use DNS to manage the inevitable IP address changes between instance recycles. Happy days!
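
For example, a hypothetical zone-file entry pointing a friendly name at the stable FQDN:

; Hypothetical record - clients follow the CNAME, so the underlying IP can change freely.
app.example.com.    IN    CNAME    vm01ilpip.svc01.cloudapp.net.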


Microsoft Ignite 2015 Event Review

Frank Sinatra sang “My Kind of Town (Chicago is)” and Ol’ Blue Eyes certainly knew a great town when he saw one!

The first ever Microsoft Ignite was held just this past week in Chicago at the McCormick Place Convention Centre (the largest in North America) and I was lucky enough to attend with the other 22,000+ attendees!

Ignite’s been a bit of an interesting event this time around as it has replaced a bunch of product-specific North American conferences such as MEC and Lync Conference, and it seemed to attract overflow from people who missed out on tickets to Build the week before. I think a lot of attendees seemed a little unsure about what Ignite actually was – is it for IT Pros or Developers, or both? More on this later!

Let me share my experience with you.

Firstly, as you might guess from my introduction, Ignite was huge – 22,000+ attendees, 4.5 days and a session catalogue that easily ran to 100+ sessions (I haven’t counted, but I’m sure someone has the full number and that my estimate is way too low). The Expo floor itself was massive, with Microsoft product teams taking substantial floor space and being available and open to talk and take feedback.

The sheer scale of this event led to some fairly interesting experiences…

Eating

I think everyone got used to being herded to the first open food buffet where breakfast and lunch were served. Obviously humans will head to the nearest table, but I’m pretty sure by day 5 everyone was a little over the phrase ‘keep moving down the line to the first open table’ (followed closely by ‘food this way!’). It was generally done very politely though.

Food variation was pretty good and the serving style meant you avoided large servings, though some offerings were, errr, not what I’m used to (but I gave some a go in the name of international relations).

The red velvet cake was pretty amazing. I can’t pick a top meal (mainly because I don’t remember them all), but overall the food gets a thumbs up.

Moving Around

The distances needing to be travelled between sessions sometimes resulted in needing to almost sprint between them. Using one speaker’s joke: your Fitbit thanks you.

The size of McCormick Place meant that travel time between two sessions in the gap between sessions (typically 15 minutes) could be a challenge. Couple this with a crowd who are unfamiliar with the location and all sorts of mayhem ensues. I would say by day three the chaos had settled down as people started to get familiar with locations (or were back at the hotel with a hangover).

If you wanted to have a meaningful discussion with anyone in the Expo you would effectively forgo a session or lunch depending on which was more important to you :).

💡 Pro-tip: learn the locations / map before you go as there are lot of signs in the centre that may not make much sense at first.

Getting Out

McCormick Place is a substantial distance from downtown Chicago, which presented some challenges. Shuttle buses picked up and dropped off during morning and evening periods, but not in the middle of the day. If you needed anything in the middle of the day it was via taxi. The Chicago Metra train runs through here, but appears to be so infrequent that it’s not that useful.

On Tuesday evening many social events had been organised by various product teams and vendors which were mostly held downtown. Trying to make these immediately after the end of the day was tricky as shuttle buses to hotels filled very quickly and a massive taxi queue formed.

For me this meant an hour-long walk to my first event, essentially missing most of it!

The second event, also downtown, was a bit more of a success though 🙂

Did I mention the Queues?

For…

  • Toilets: I can now appreciate what major events are like for women who usually end up queuing for the toilet. Many of the breakout sessions were held near toilets that were woefully inadequate for the volume of people (particularly if you’re serving the same people free coffee and drinks…)

    💡 Pro-tip: there is a set of massive gents’ toilets located behind Connie’s Pizza on North Level 2. Patently I didn’t go searching for the Ladies…

  • Swag: yep, you could tell the cool giveaways or prizes on the Expo floor simply by looking at the length of the queue.
  • Food: small ones at breakfast and lunch, some unending ones for the Attendee Celebration (hands up if you actually got a hot dog?!)

    💡 Pro-tip: at the Celebration find the least popular food that you still like. Best one for me was the steamed pork and vegetable buns, though there are only so many you can eat.

  • Transport: as I already hinted at above – depending on time of day you could end up in a substantial queue to get on a bus or taxi.

    💡 Pro-tip: take a room in a hotel a fair distance away (less people) and also walk a little if you need a taxi and flag one down.

Session Content

I don’t come from an IT Pro background and I don’t have an alignment with a particular product such as Exchange, so for me Ignite consisted of Azure-focused content, some SharePoint development for Office 365 and custom Azure application development using Node. I got a lot of useful insights at the event so it hit the mark for me – the union of IT Pro and Developer competencies is being driven by public cloud technology so it was great!

I have the feeling quite a few attendees were those who missed out on entrance to Build the week before, and I suspect many of them found a lack of compelling content (unless they were SharePoint developers). I also felt that a lot of content advertised as level 300 was more like level 200, though there were some good sessions that got the depth just right. I’m not sure if this issue is because of the diverse range of roles expected to attend (admins, developers, managers and C-levels), which meant content was written to the lowest common denominator.

Finding suitable sessions was also a bit of a challenge given the volume available. While the online session builder (and mobile app) was certainly useful, I did spend a bit of time just scrolling through things, and I would say the repeated sessions were probably unnecessary. I certainly missed a couple of sessions I would have liked to attend (though I can catch up on Channel 9), primarily because I missed them in the schedule completely.

I hope for 2016 some work is done on the content to:

  • Make it easier to build a schedule in advance – the web schedule builder was less than ideal
  • Increase the technical depth of sessions, or clearly demarcate content aimed only at architect or C-level attendees
  • Have presenters who can present. There were some sessions I went to that were train wrecks – granted, in a conference this size maybe that happens… but I had the feeling that some speakers had no training or prep time for their sessions
  • Reduce or remove repeated sessions.

💡 Pro-tip: make sure to get the mobile application for Ignite (and that you have it connected to the Internet). It really was the most useful thing to have at the event!

Ignite The Future

As I noted above, this was the first year Ignite was held (and also the first in Chicago). During the 2015 conference Microsoft announced that the conference would be back in Chicago for 2016.

Should you go? Absolutely!

Some tweaks to the event (granted, some fairly large ones) should help make it smoother next time round – and I’ve seen the Microsoft Global Events team actively taking feedback on board elsewhere online.

The Ignite Brand is also here to stay – I have it on good advice that TechEd as a brand is effectively “Done” and Ignite will be taking over. Witness the first change: Ignite New Zealand.

Chicago’s certainly my type of town!

PS – make sure to check out what’s on when you’re in town…


Get Started with Docker on Azure

The most important part of this whole post is that you need to know that the whale in the Docker logo is officially named “Moby Dock”. Once you know that you can probably bluff your way through at least an introductory session on Docker :).

It’s been hard to miss the increasing presence of Docker, particularly if you work in cloud technology. Each of the major cloud providers has raced to provide container services (Azure, AWS, GCE and IBM) and these platforms see benefits in the higher density hosting they can achieve with minimal changes to existing infrastructure.

In this post I’m going to look at the first steps to getting Docker running in Azure. There are other posts about that cover this, but there are a few gotchas along the way that I will cover off here.

First You Need a Beard

Anyone worth their take home pay who works with *nix needs to grow a beard. Not one of those hipstery-type things you see on bare-ankled fixie riders. No – a real beard.

While Microsoft works on adding Docker support in the next Windows Server release you are, for the most part, stuck using a Linux variant to host and manage your Docker containers.

The Azure Cross-Platform Command-Line Interface teases you with the ability to create Docker hosts from a Windows-based computer, but ultimately you’ll have a much easier experience running it all from a Linux environment (even if you do download the xplat-cli there anyway).

If you do try to set things up using a Windows machine you’ll have to do a little dancing to get certificates set up (see my answer on this stackoverflow post). This is shortly followed by the realisation that you can’t then manage the host you just created from anywhere else without getting those nice certificates onto another host – too much work if you ask me :).

While we’re on Docker and Windows, let’s talk a little about boot2docker. This is designed to provide an easy way to get started with Docker, and while it’s a great idea (especially for Windows users), you will have problems if you are already running Hyper-V, because boot2docker uses VirtualBox, which won’t run alongside Hyper-V.

So Linux it is then!

Management Machine

Firstly let’s set up a Linux host that will be our Docker management host. For this post we’ll use a CentOS 7 host (I’ve avoided Ubuntu because there are some challenges installing and using node.js, which is required for the Azure xplat CLI).

Once this machine is up and running we can SSH into it and install the required packages. Note that you’ll need to run this script as a root-equivalent user.

Now that we have our bits to manage the Docker environment, we can build an image and an actual Docker container host.

Docker Container Host

On Azure the easiest way to get going with Docker is to use the cross-platform CLI’s Docker features.

As a non-root user on our management Linux box we can run the following commands to get our Docker host up and running. I’m using an Organisational Account here so I don’t need to download any settings files.

# will prompt for username and password
[sw@sw1 ~]$ azure login

# set mode to service management
[sw@sw1 ~]$ azure config mode asm

# get the list of Ubuntu images - select one for the next command
[sw@sw1 ~]$ azure vm image list | grep Ubuntu-14_04

# setup the host - replace placeholders
[sw@sw1 ~]$ azure vm docker create -e 22 -l "West US" {dockerhost} "b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20141125-en-us-30GB" {linxuser} {linxpwd}

At this point we now have a new Azure VM up and running that has all the necessary Docker bits installed for us. If we look at the VM’s entry in the Azure Portal we can see that ports 22 and 4243 are already open for us. We can go ahead and test that everything’s good. Don’t forget to substitute your hostname!

[sw@sw1 ~]$ docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 info

Deploy an Image to a Container

We have our baseline infrastructure ready to rock, so let’s go ahead and deploy an image to it. For the purpose of this post we are going to use the wordpress-nginx image that can be built using the configuration in this GitHub repository.

On our management host we can run the following commands to build the image from the Dockerfile contained in the Git repository.

[sw@sw1 ~]$ git clone https://github.com/eugeneware/docker-wordpress-nginx.git

[sw@sw1 ~]$ docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 build -t="docker-wordpress-nginx" docker-wordpress-nginx/

Note: you need to make sure you run this as the user who setup the Docker container host and that you do it in the home directory of the user. This is because the certificates generated by the container host setup are stored in the user’s home folder in a directory called .docker. Also, expect this process to take a reasonable amount of time because it’s having to pull down a lot of data!

Once our image build is finished we can verify that it is on the Docker host by issuing this command:

[sw@sw1 ~]$ docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 images

Let’s create a new containerised version of the image and map the HTTP port out so we can access it from elsewhere in the world (we’re going to map port 80 to port 80). I’m also going to supply a friendly name for the container so I can easily reference it going forward (if I didn’t do this I’d get a nice long random string I’d need to use each time).

[sw@sw1 ~]$  docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 create -p 80:80 --name="dwn01" docker-wordpress-nginx

Now that we have created this we can start the container and it will happily run until we stop it :).

[sw@sw1 ~]$ docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 start dwn01

If we return to the VM management section in the Azure Management Portal and add an Endpoint to map to port 80 on our Docker container host, we can then open up our WordPress setup page in a web browser and configure WordPress.
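
If you’d rather script that Endpoint than click through the portal, the xplat CLI can do it too – assuming the same host name placeholder as before:

# Map public port 80 to port 80 on the Docker container host.
[sw@sw1 ~]$ azure vm endpoint create {dockerhost} 80 80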

If we simply stop the container we will lose any changes to the running environment. Docker provides us with the ‘commit’ command to rectify this. Let’s go ahead and save our state:

[sw@sw1 ~]$ docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 commit dwn01 sw/dwn01

and then we can stop the Container.

[sw@sw1 ~]$ docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 stop dwn01

We now have a preserved state container along with the original unchanged one. If we want to move this container to another platform that supports Docker we could also do that, or we could repeat all our changes based on the original unchanged container.

This has been a very brief overview of Docker on Azure – hopefully it will get you started with the basics and comfortable with the mechanics of setting up and managing Docker.


Microsoft Azure: 2014 Year in Review

What a massive year it’s been for Microsoft’s Azure public cloud platform. Running the Azure Sydney User Group this year has been great fun and seeing the growing local interest has been fantastic.

The focus from Microsoft has really changed in this space and has been clearly signalled with the change in name of Azure from Windows Azure to Microsoft Azure during the year and an increasingly broad set of non-Microsoft services offered on it.

2015 promises to be another big year, but let’s look back at what happened during 2014 with Azure.


January

The year got off to a fairly quiet start, but as we’ll see, it soon ramped up.

Preview

Nothing was in preview this month – everything went straight to GA, so see below!

Generally Available

  • Websites:
    • staged publishing support
    • Always On support *
    • more frequent metric updates and monitoring alerts
  • SQL Database: new metrics and alerts
  • Mobile Services: SenchaTouch support
  • Cloud Services: A8 and A9 machine sizes now supported.

* If you’re using New Relic there are some known issues with this feature.

Other News

The Azure platform received PCI-DSS compliance validation and introduced reduced pricing rates for storage and storage transactions.


February and March

The headline item in this period was the launch of the Japan Geography with Japan East (Saitama Prefecture) and West (Osaka Prefecture) providing that market with in-country services. Also during this period we had the following announcements and launches:

Preview

Generally Available

Other News

Local gamers were unhappy not to have a local Xbox server platform to run on. Who knew it was such an issue having lag and big ping times 😉

Can we haz l0c4l serverz?


April

The big change this month was the change in name for Azure. Guaranteeing a million-and-one outdated websites, slides and documents in one swoop, the service name was changed from Windows Azure to Microsoft Azure. Just for fun there is no “official” logo, just text-based branding.

This change was a subtle nod to Azure’s ability to run Infrastructure-as-a-Service (IaaS) workloads on platforms other than Windows – something it had been doing for quite some time when this change was made.

Preview

  • Newly designed management portal
  • Mobile services: documented offline support and role-based Azure AD authentication
  • Resource Manager via PowerShell
  • SQL Database: active geo-replication (read replicas); self-service restore; 500GB support; 99.95% SLA
  • Media Services: secure delivery and Office 365 Video Portal.

Generally Available

  • Azure SDK 2.3: increased Visual Studio support – create VMs using Server Explorer
  • Autoscale – Virtual Machines, Cloud Services, Web Sites and Mobile Services
  • Azure AD Premium – Multi-factor Authentication (MFA) and security reporting
  • Websites: SSL bundled; Java support; Web Hosting Plans; Available in SE Asia
  • Web Jobs SDK
  • Media Services: Live Streaming; Partnerships for Content Management and Analytics (Ooyala) and Live Ingest (iStreamPlanet)
  • Basic Tier introduction: lower cost for dev/test scenarios. Applies to VMs and Websites
  • Puppet and Chef support on Azure VMs via VM Agent Extensions
  • Scheduler Service
  • Read Access Geo Redundant Storage (RA-GRS).

May and June

The pace from the first quarter of the year carried over into these two months! The standout amongst the range of announcements in this period was the launch of the API Management service, the result of the October 2013 acquisition of Apiphany.

Preview

  • Azure API Management – publish, manage and secure your existing REST APIs
  • Azure File Service (SMB shares) – even use on Linux VMs
  • BizTalk Hybrid Connections – on-prem connects without the secops guys 😉
  • Redis Cache support – now the preferred caching platform in Azure
  • RemoteApp – Lay down common Apps on demand
  • Site Recovery – backup your on-prem VMs to Azure
  • Secure VMs using security extensions from Microsoft, Symantec and McAfee
  • Internal Load Balancing for VMs and Cloud Services
  • HDInsights: Apache HBASE and Hadoop 3.1
  • Azure Machine Learning (or as I like to call it “Skynet”).

Generally Available

  • ExpressRoute – WAN and DC cross-connects
  • Multi-connection Virtual Networks (VNET) and VNET-to-VNET connections
  • Public IP Address Reservation (IPv4 shortage anyone?)
  • Traffic Manager: use Azure and non-Azure (“external”) endpoints
  • A8 and A9 VM support – lots of everything (8 / 16 cores – 7 GB RAM per core)
  • Storage Import/Export service – check region availability!

Other News

MSDN subscribers gained the ability to deploy Windows 7 and 8 images onto Azure VMs for dev/test scenarios and Enterprise Agreement (EA) customers were given the ability to purchase add-ons via the Azure Store which had previously not been possible.

We also learned about the scarcity of IPv4 addresses, with some US-based services being issued IPv4 addresses originally assigned to South America – causing many LOLs for service admins out there whose services suddenly appeared to be in Brazil!


July and August

This period’s summary: Ice Bucket Challenge.

Preview

  • Event Hubs: capture data from all the Internet connected things!
  • Redis cache: in more places and sizes
  • Preview management portal: manage Azure SQL Database
  • DocumentDB
  • Azure Search.

Generally Available


September

No single announcement jumps out so I was going to put a picture of a kitten here but I thought you might want to see this (even if it is from 2012).

Preview

  • Role-based access control (RBAC) for Azure management in preview portal only
  • Resource Tagging support: filter by tag – useful for billing and ops
  • Azure SQL Database – Elastic Scale preview. Replaces Federations model
  • DocumentDB – enhanced management tooling and metrics
  • Azure Automation – AD auth; PowerShell converter; Runbook gallery and scheduling
  • Media Services – Live Streaming and DRM, faster encoding and indexer.

Generally Available

  • ‘D’ Series VMs: 60% faster CPU, more RAM and local SSD disk
  • Redis Cache: recommended cache solution in Azure. 250MB – 53GB! support
  • Site Recovery: on-prem DR with Azure – Win / Linux
  • Notification Hubs: Baidu Push (China)
  • Virtual Machines: instance-level public IPs (no NAT/PAT)
  • Azure SQL Database: three new service tiers and hourly billing
  • API Management: added OAuth support and REST Management API
  • Websites: VNet support, “scalable CMS” with WordPress and backups improvements
  • Management Services Alerts.

October and November

Pretty hard to go past this news in terms of ‘most outstanding announcement’ for these two months, especially for those of us in Australia!

Preview

  • ‘G’ Series VMs – (“Godzilla” VM) more CPU/RAM/SSD than any VM in any cloud *
  • Premium Storage – SSD-based with more than 50k IOPS *
  • Marketplace changes – CoreOS and Cloudera
  • Increased focus on Docker including portal support
  • Cloud Platform System (CPS) from Dell.
  • Batch: parallel task coordination
  • Data Factory: build data processing pipelines
  • Stream Analytics: analyse your Event Hubs data.

* Announced but not yet in public preview.

Generally Available

  • Australia Geography launches!
  • Network Security Groups
  • Multi-NIC Support in VMs (VM size dependent)
  • Forced Tunnelling (route traffic back on-prem)
  • ExpressRoute:
    • Cross-Subscription Sharing
    • Multi-connect to an Azure VNET
  • Bigger Azure Virtual Gateways
  • Ops Logging for Gateways and ExpressRoute
  • More control over Gateway encryption
  • Azure Load Balancer Source IP Affinity (“Sticky Sessions”)
  • Nested Traffic Manager Profiles
  • Preview Portal: Internal Load Balancing and Instance / Reserved IP Management
  • Automation Service: PowerShell Service Orchestration
  • Microsoft Antimalware Extension on VMs and Cloud Services (for free)
  • Many more VM Extensions available (PowerShell DSC / Octopus Deploy Tentacle)
  • Event Hubs: ingest more messages; SLA-backed.

Other News

We always have this vision of large-scale services being relatively immune to wide-ranging outages, yet all the main cloud platforms have regular challenges resulting in service disruptions of some variety.

On November 18 (or 19 depending on your timezone) Azure had one of these events, causing a disruption across many of its Regions affecting Storage and VMs.

The final Root Cause Analysis (RCA) shows the sorts of challenges involved in running platforms of this size.


December

You can almost hear the drawing of breath before the Azure team starts 2015…

Preview

  • Premium Storage
  • Azure SQL Database: better feature parity with SQL 2014 and better large DB support.
  • Search: management via portal, multi-lingual support.
  • DocumentDB: better management via portal.
  • Azure Data Factory: integration with Machine Learning.

Generally Available

  • RemoteApp: run desktop apps anywhere
  • Azure SQL Database: new auditing features
  • Live Media Streaming: access the same platform as used at the World Cup and Olympics
  • Site Recovery: supported without SCVMM being deployed
  • Active Directory: App Proxy and password write-back enabled
  • Mobile Services: Offline Sync Managed SDK
  • HDInsight: Cluster customisation.

Other News

Another big announcement for the Australian cloud market was the news that from early 2015 Microsoft would be offering Office 365 and CRM Online from within Australia’s borders. What a great time to be working in this market!


There we have it! What a year! I haven’t detailed every single announcement to come out from the Azure team (this post would easily be twice as long), but if you think I’ve missed anything important leave a comment and I’ll update the post.

Simon.
