Category Archives: Containers

Fix Provider error in Cloud Shell when using AKS in a new Azure Region

Given the recent announcement of the GA of Azure Kubernetes Service I thought I would take it for a spin in one of the new Regions it is now available in. I have previously deployed AKS in East US using the Azure Cloud Shell, so I didn’t expect to run into any issues. However, I hit a minor snag, which I’m documenting here in case you come across it too.

az group create --name rg-aks-01 --location westus2

az aks create --resource-group rg-aks-01 --name testaks01 --node-count 1 --generate-ssh-keys

The subscription is not registered for the resource type ‘managedClusters’ in the location ‘westus2’. Please re-register for this provider in order to have access to this location.

And this is the fix.

az provider register --namespace Microsoft.ContainerService

Registering is still on-going. You can monitor using ‘az provider show -n Microsoft.ContainerService’

Then a short while later I ran the ‘show’ command and could see that the service is now available in all the new GA Regions (snippet shown below).

"locations": [
"UK West",
"East US",
"West Europe",
"Central US",
"Canada East",
"Canada Central",
"UK South",
"West US",
"West US 2",
"Australia East",
"North Europe"
]
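If you’d rather not scan the full provider output while you wait, the global --query argument can pull out just the state – for example (the JMESPath expression is simply one way I’d slice it):

az provider show -n Microsoft.ContainerService --query "registrationState"

Once this returns "Registered" the original ‘az aks create’ command will succeed.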

Happy Days! 😎


Understanding Azure’s Container PaaS Capabilities

If you’ve been using Azure over the past twelve months, you can’t help but feel that it’s become a bit like this…

Containers... Containers Everywhere

… and you’d be right.

To be fair, though, Containers have been one of the hottest topics in computing generally, and certainly the one generating the most interest at my recent Azure Open Source Roadshows.

One thing that has struck me, though, is that people are unclear on the purpose of the many Azure services that list ‘Containers’ as a capability, so in this post I am going to review the Azure Platform-as-a-Service offerings with Container capabilities and cover what each service can be used for.

Before we begin, let’s quickly get some fundamentals under our belts.

What is a Container?

Containers provide encapsulation and isolation for workloads and remove the need for a complete Operating System image to be deployed in order to manage resource allocation.

They have proven popular because they typically have smaller footprints than Virtual Machines, boot much faster as a result and have a modern build process based on composition that gels well with software development.

A Container still needs to “run” somewhere – this “somewhere” is what I will call a “Container Host” through the rest of this post.

So where does Docker fit into all of this? Docker provides tooling for the creation, running and management of Containers and is by far the best known tech in this space. Microsoft has worked with Docker to ensure the Docker tooling supports Windows and Windows Containers.

Our most basic Container workload setup then would be: one Container Host running one Container.
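In Docker terms that minimal setup is a single command on any machine with the Docker tooling installed – for example, using the stock hello-world Image from Docker Hub:

docker run hello-world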

What is a Container Orchestrator?

A big part of running Containers at scale is their management which is where technologies like Kubernetes (k8s), Docker Swarm and DC/OS come into play. These technologies allow you to manage multiple Containers and their workloads, performing orchestration of deployments and controlling connectivity between Container instances running on Container Hosts.

An Orchestrator typically runs more than one node to ensure availability, but nothing stops us from running a simple single node setup like Minikube to start to learn about them.
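If you want to try that single-node route, the Minikube flow is short – a sketch that assumes you have Minikube and kubectl installed locally:

# spin up a single-node local Kubernetes cluster
minikube start

# confirm the node is up
kubectl get nodes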

Right, now we have some fundamentals in place, let’s take a look at what Azure offers.

Azure’s Container Offerings

Note that we are going to focus on PaaS services – you can of course still run Containers on Virtual Machines, or deploy something like OpenShift in Azure if you wish.

Please note: any service listed as ‘Preview’ should not be used for Production deployments!


Azure Container Registry (ACR)

What is it? ACR is an Azure-hosted Container Registry based on the open-source Docker Registry v2 spec. This is a turnkey part of Azure’s Container story.

Why use it? When you build a Container Image you need a place to store it. Docker Hub is the Registry where you pull all your public Images from and which is run by Docker. ACR provides you with a private Docker-compatible Registry that you can push Images to and use as a deployment source.

Benefits:

  • A private Registry you control that, unlike Docker Hub, is not published on a well-known public endpoint serving public Images
  • Provides a unique *.azurecr.io Registry endpoint which can be used to store Images that can be deployed *anywhere* (not just Azure)
  • Webhook support that can be used for Continuous Deployment, particularly with Azure’s Web Apps for Containers (see below)
  • Control access to the Registry using Azure Active Directory Credentials
  • ACR provides seamless authentication (i.e. no configuration) with other Azure services like Azure Container Instances, Azure Container Services, App Service and Batch.
  • Geo-replication is the hotness! (requires Premium level) * Preview

Restrictions:

  • Cooler features (like geo-replication) are at higher price point only
  • I’m struggling here for others! 🙂

> ACR Documentation.
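If you want to kick ACR’s tyres, here’s a sketch using the Azure CLI – the Resource Group and Registry names are placeholders of my own:

# create a Premium-tier Registry (Premium is what unlocks geo-replication)
az acr create --resource-group rg-acr-01 --name myregistry01 --sku Premium

# wire up your local Docker client using your Azure AD credentials
az acr login --name myregistry01

From there a standard ‘docker push myregistry01.azurecr.io/myimage:v1’ lands your Image in the private Registry.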


Azure Container Instances (ACI) * Preview

What is it? An individual Container Host that can run one or more Containers. No need for you to manage the Host.

Why use it? These are probably the easiest way to get going running Container-based workloads in Azure. If you have a simple workload that needs a public IP and which can talk to various Azure PaaS services then consider ACI over Web Apps for Containers or Azure Container Service.

Benefits:

  • No need for you to manage the Container Host – tell it which Containers to run and that’s it!
  • Pay per-second for use with customisable CPU Core and memory options
  • Supports multiple Containers in single ACI Host
  • “Whole of Azure” scale: deploy ACI workloads in any Azure Region (where ACI is available).

Restrictions:

  • No production Orchestrator support: there is an experimental Kubernetes Connector, but apart from this you cannot bolt an ACI Host into an Orchestrated environment
  • No VNet support: you can’t connect an ACI Host to Azure VNets.

> ACI Documentation.
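To show how little ceremony is involved, here’s a sketch that runs a public nginx Image on ACI – the names and sizing are placeholders I’ve chosen:

# run a single Container with a public IP, one core and 1.5GB of memory
az container create --resource-group rg-aci-01 --name aci-demo-01 --image nginx --ip-address public --ports 80 --cpu 1 --memory 1.5

# check the provisioning state and grab the public IP
az container show --resource-group rg-aci-01 --name aci-demo-01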


Web Apps for Containers (App Service on Linux)

What is it? A Container Host that runs on a Linux-based variant of Azure’s App Service that is aimed at web-centric workloads (hence the name). Like ACI, you still don’t need to manage the Host.

Why use it? If you have a website or HTTP API workload you traditionally host on Linux and that you can (or have) containerised, then this is a good spot to start as it limits your workload’s exposure to the HTTP (80) and HTTPS (443) ports.

Additionally, even if you haven’t containerised your solution, you can still use this service to host it. When you select a framework to use to host your solution (like Java or PHP) the framework is deployed to Web Apps for Containers as a Docker Image!

Benefits:

  • Get access to standard App Service features like Autoscale, Custom Domains, SSL and Continuous Deployment
  • No need for you to manage the Container Host – tell the Web Apps Instance which Container to run and it will do that
  • Deploy existing Docker images from Docker Hub, Azure Container Registry or from Azure’s pre-built framework images
  • Troubleshoot containers using SSH from Kudu.

Restrictions:

  • No Orchestrator support: what you gain through using App Service you give up in not being able to bolt the Container Host into an Orchestrator like Kubernetes
  • Multi-Container deployments are not supported
  • Not all Windows-based App Service features are supported (yet)
  • Not currently supported in App Service Environments.

> Web Apps for Containers Documentation.
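Standing one of these up from the CLI looks roughly like this – a sketch that assumes a Linux App Service Plan already exists, with placeholder names:

az webapp create --resource-group rg-web-01 --plan linux-plan-01 --name mycontainerweb01 --deployment-container-image-name myregistry01.azurecr.io/mywebapp:latest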


Azure Container Services (ACS)

What is it? A service that allows you to run Container Hosts that are managed by an Orchestrator of your choice (Kubernetes, Docker Swarm or DC/OS).

Why use it? If you already run Container workloads on VMs (regardless of hosting location) that use an Orchestrator, or you’d like to start using Containers at scale and need an Orchestrator, then this is the service to use.

Benefits:

  • No need for you to manage the underlying Virtual Machine infrastructure
  • Orchestrator and Container Host setup is managed for you by the ACS engine (which is open source)
  • Container Host scalability is supported via use of Virtual Machine Scale Sets (VMSSs)
  • All hosts (Orchestrator and Container) are vanilla instances – this is not a special “Azure release”
  • Orchestrators have Azure extensions allowing them to perform actions such as creating Azure Load Balancers when you specify load balanced workloads in your setup.
  • Integration with Azure Container Registry for Image deployments.

Restrictions:

  • You pay for both the Container Hosts and Orchestrator Nodes (they are just VMs after all)
  • You can’t increase the number of Orchestrator / Cluster Masters after you have initially created an ACS cluster
  • You can’t upgrade the Orchestrator once you have created an ACS cluster – you need to create a new ACS cluster to gain access to a newer release.

> ACS Documentation: Kubernetes | Docker Swarm | DC/OS.
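For reference, the shape of the create call for a Kubernetes-orchestrated cluster is along these lines (placeholder names):

az acs create --orchestrator-type kubernetes --resource-group rg-acs-01 --name acs-k8s-01 --generate-ssh-keys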


Azure Container Services – Managed Kubernetes (AKS) * Preview

What is it? This service is similar to ACS (above), however in this service (which only supports Kubernetes) the Orchestrator Nodes are managed on your behalf by Microsoft.

Why use it? If you are invested in Kubernetes (or intend to use it as your Orchestrator) and would prefer not to have to manage the Orchestrator Nodes then you should select this over standard ACS with “unmanaged” Kubernetes. If you are using the ACS Kubernetes offering already then this is a logical place to migrate to once AKS is Generally Available.

Benefits:

  • You don’t pay for the Orchestrator Nodes running Kubernetes
  • Orchestrator Node availability, patching and upgrading is managed by Microsoft
  • No need to create a new ACS cluster to pick up new Kubernetes releases
  • Will support 100% of the standard Kubernetes API
  • All other ACS features remain in place!

Restrictions:

  • Only supports Kubernetes!
  • During preview does not support all Kubernetes features.

> AKS Documentation.
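Once a cluster exists (created much as in the ‘az aks create’ example at the top of this page), getting credentials and talking to it with standard kubectl is a two-liner – placeholder names again:

# merge the cluster's credentials into your local kubeconfig
az aks get-credentials --resource-group rg-aks-01 --name testaks01

# confirm the agent nodes are up
kubectl get nodes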

On a side note: AKS + ACI (with its Kubernetes connector) + ACR will be an amazing PaaS Container story once all these components are Generally Available! 😎


Azure Service Fabric

What is it? Service Fabric is both a cluster Orchestrator and a development framework for delivery of highly available, distributed applications. It pre-dates the current Container hype cycle and is used to deliver services in Azure such as CosmosDB.

Why use it? If you want to leverage the Reliable Actor and Service patterns offered by the Service Fabric development framework. Also worth considering if you haven’t yet started with an Orchestrator like Kubernetes.

Benefits:

  • Mature product that underpins key Microsoft cloud-scale services
  • Runs in Azure or on-premises.

Restrictions:

  • Container workloads can’t benefit from the development framework as they run as ‘guest executables’ on cluster nodes (this will change in future as you will be able to Containerise Reliable Actors and Services)
  • Developers using Windows 10 can’t deploy Container-based solutions to local Service Fabric clusters.

> Service Fabric Container Documentation.


Azure Batch

What is it? The name says it all really – you can use Azure Batch to run compute workloads that can be broken into lots of concurrent processes. Examples include payroll runs, animation rendering or research modelling. Batch sits well within the High Performance Compute (HPC) landscape.

Why use it? If you have a batch-style workload with processors that can be Containerised then this is a service you should seriously be considering. More so if you wish to consider a hybrid scenario where you run some of your workload in-house and burst to Azure as required. As your workload is Containerised you can ship its dependencies in a single bundle.

Benefits:

  • You can use Docker Hub as a source for Images (yes, you could pull tensorflow and run it in Azure 😉 ), in addition to ACR and any other compatible Registry
  • Use Singularity in addition to Docker Containers with Batch
  • Run processes on low-priority VMs to reduce the cost (best for non-time sensitive operations).

Restrictions:

  • RDMA (high performance networking) support only available for Containers running on Linux.

> Azure Batch Container Documentation.


So there we are – hopefully the Azure Container story now makes much more sense and you can pick the services most appropriate to your use case.

Happy days! 😎


Continuous Deployment for Docker with VSTS and Azure Container Registry

I’ve been watching with interest the growing maturity of Containers, and in particular their increasing penetration as a hosting and deployment artefact in Azure. While I’ve long believed them to be the next logical step for many developers, until recently they have had limited appeal to everyday developers as the tooling hasn’t been there, particularly in the Microsoft ecosystem.

Starting with Visual Studio 2015, and with the arrival of Docker for Windows, I started to see this stack as viable for many.

In my current engagement we are starting on new features and decided that we’d look to ASP.Net Core 2.0 to deliver our REST services and host them in Docker containers running in Azure’s Web App for Containers offering. We’re heavy users of Visual Studio Team Services and given Microsoft’s focus on Docker we didn’t see that there would be any blockers.

Our flow at high level is shown below.

Build Pipeline

1. Developer with Visual Studio 2017 and Docker for Windows for local dev/test
2. Checked into VSTS and built using VSTS Build
3. Container Image stored in Azure Container Registry (ACR)
4. Continuously deployed to Web Apps for Containers.

We hit a few sharp edges along the way, so I thought I’d cover off how we worked around them.

Pre-requisites

There are a few things you need to have in place before you can start to use the process covered in this blog. Rather than reproduce them here in detail, go and have a read of the following items and come back when you’re done.

  • Setting up a Service Principal to allow your VSTS environment to have access to your Azure Subscription(s), as documented by Donovan Brown.
  • Create an Azure Container Registry (ACR), from the official Azure Documentation. Hint here: don’t use the “Classic” option as it does not support Webhooks which are required for Continuous Deployment from ACR.

See you back here soon 🙂

Setting up your Visual Studio project

Before I dive into this, one cool item to note is that you can add Docker support to existing Visual Studio projects, so if you’re interested in trying this out you can take a look at how you can add support to your current solution (note that it doesn’t magically support all project types… so if you’ve got that cool XAML or WinForms project… you’re out of luck for now).

Let’s get started!

In Visual Studio do a File > New > Project. As mentioned above, we’re building an ASP.Net Core REST API, so I went ahead and selected .Net Core and ASP.Net Core Web Application.

New Project - .Net Core

Once you’ve done this you get a selection of templates you can choose from – we selected Web API and ensured that we left Docker support on, and that it was on Linux (just saying that almost makes my head explode with how cool it is 😉 )

Web API with Docker support

At this stage we now have baseline REST API with Docker support already available. You can run and debug locally via IIS Express or via Docker – give it a try :).

If you’ve not used this template before you might notice that there is an additional project in the solution that contains a series of Docker-related YAML files – for our purposes we aren’t going to touch these, but we do need to modify a couple of files included in our ASP.Net Core solution.

If we try to run a Docker build on VSTS using the supplied Dockerfile it will fail with an error similar to:

COPY failed: stat /var/lib/docker/tmp/docker-builder613328056/obj/Docker/publish: no such file or directory
/usr/bin/docker failed with return code: 1

Let’s fix this.

Add a new file to the project and name it “Dockerfile.CI” (or something similar) – it will appear as a sub-item of the existing Dockerfile. In this new file add the following, ensuring you update the ENTRYPOINT to point at your DLL.

This Dockerfile is based on a sample from Docker’s official documentation and uses a Docker Container to run the build, before copying the results to the actual final Docker Image that contains your app code and the .Net Core runtime.
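Here’s the shape of that file – treat it as a sketch: the base Image tags assume the ASP.Net Core 2.0 images of the time, and ‘MyWebApi.dll’ is a placeholder you must swap for your own assembly name.

# stage 1: restore and publish inside a full build Container
FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /app
COPY . ./
RUN dotnet restore
RUN dotnet publish -c Release -o out

# stage 2: copy only the published output into the slim runtime Image
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "MyWebApi.dll"]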

We have one more change to make. If we do just the above, the project will fail to build because the default dockerignore file stops pretty much all files being copied to the Container we are using for the build. Let’s fix this by updating the file to contain the following 🙂
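Something along these lines does the trick – a sketch that keeps local build output and IDE files out of the build context while letting your source through (adjust for your solution layout):

bin/
obj/
.vs/
.git/
*.user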

Now we have the necessary bits to get this up and running in VSTS.

VSTS build

This stage is pretty easy to get up and running now we have the updated files in our solution.

In VSTS create a new Build and select the Container template (right now it’s in preview, but works well).

Docker Build 01

On the next screen, select the “Hosted Linux” build agent (also now in preview, but works a treat). You need to select this so that you build a Linux-based Image, otherwise you will get a Windows Container which may limit your deployment options.

build container 02

We then need to update the Build Tasks to have the right details for the target ACR and to build the solution using the “Dockerfile.CI” file we created earlier, rather than the default Dockerfile. I also set a fixed name for the Image Name, primarily because the default selected by VSTS typically tends to be invalid. You could also consider changing the tag from $(Build.BuildId) to be $(Build.BuildNumber) which is much easier to directly track in VSTS.

build container 03

Finally, update the Publish Image Task with the same ACR and Image naming scheme.

Running your build should generate an image that is registered in the target ACR as shown below.

ACR

Deploy to Web Apps for Containers

Once the Container Image is registered in ACR, you can theoretically deploy it to any container host (Azure Container Instances, Web Apps for Containers, Azure Container Services), but for this blog we’ll look at Web Apps for Containers.

When you create your new Web App for Containers instance, ensure you select Azure Container Registry as the source and that you select the correct Repository. If you have added the ‘latest’ tag to your built Images you can select that at setup, and later enable Continuous Deployment.

webappscontainers

The result will be that your custom Image is deployed into your Web Apps for Containers instance, where it will be available on ports 80 and 443 for the world to use.

Happy days!

I’ve uploaded the sample project I used for this blog to Github – you can find it at: https://github.com/sjwaight/docker-dotnetcore-vsts-demo

Also, please feel free to leave any comments you have, and I am certainly interested in other ways to achieve this outcome as we considered Docker Compose with the YAML files but ran into issues at build time.


Global Azure Bootcamp 2017 Session – .Net Core, Docker and Kubernetes

If you are attending my session and would like to undertake the exercise here’s what you’ll need to install locally, along with instructions on working with the code.

Pro-tip: As this is a demo consider using an Azure Region in North America where the compute cost per minute is lower than in Australia.

Prerequisites

Note that for both the Azure CLI and Kubernetes tools you might need to modify your PC’s PATH variable to include the paths to the ‘az’ and ‘kubectl’ commands.

On my install these ended up in:

az: C:\Users\simon\AppData\Roaming\Python\Python36\Scripts\
kubectl: c:\Program Files (x86)\

If you have any existing PowerShell or Command prompts open you will need to close and re-open to pick up the new path settings.

Readying your Docker-hosted App

When you compile your solution to deploy to Docker running on Azure, make sure you select the ‘Release’ configuration in Visual Studio. This will ensure the right entry point is created so your containers will start when deployed in Azure. If you run into issues, make sure you have this setting right!

If you compile the Demo2 solution it will produce a Docker image with the tag ‘1.0’. You can then compile the Demo3 solution, which will produce a Docker image with the tag ‘1.1’. They can both live in your local environment side-by-side with no issues.

Log into your Azure Subscription

Open up PowerShell or a Command Prompt and log into your subscription.

az login

Note: if you have more than one Subscription you will need to use the az account command to select the right one.
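For example, with a placeholder Subscription name:

az account set --subscription "My Demo Subscription"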

Creating an Azure Container Service with Kubernetes

Before you do this step, make sure you have created your Azure Container Registry (ACR). The Azure Container Service has some logic built-in that will make it easier to pull images from ACR and avoid a bunch of cross-configuration. The ACR has to be in the same Subscription for this to work.

I chose to create an ACS instance using the Azure CLI because it allows me to control the Kubernetes cluster configuration better than the Portal.

The easiest way to get started is to follow the Kubernetes walk-through on the Microsoft Docs site.

Continue until you get to the “Create your first Kubernetes service” section and then stop.

Publishing to your Registry

As a starting point make sure you enable the admin user for your Registry so you can push images to it. You can do this via the Portal under “Access keys”.
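You can also flip this switch from the CLI if you prefer the Portal-free route – Registry name is a placeholder:

az acr update --name YOUR_ACR --admin-enabled true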

Start a new PowerShell prompt and let’s make sure we’re good to go by seeing if our compiled .Net Core solution images are here.

docker images

REPOSITORY                          TAG      IMAGE ID       CREATED  SIZE
siliconvalve.gab2017.demowebcore    1.0      e0f32b05eb19   1m ago   317MB
siliconvalve.gab2017.demowebcore    1.1      a0732b0ead13   1m ago   317MB

Good!

Now let’s log into our Azure Registry.

docker login YOUR_ACR.azurecr.io -u ADMIN_USER -p PASSWORD
Login Succeeded

Let’s tag and push our local images to our Azure Container Registry. Note this will take a while as it publishes the image, which in our case is 317MB. If you update the image in future only the differential will be re-published.

docker tag siliconvalve.gab2017.demowebcore:1.0 YOUR_ACR.azurecr.io/gab2017/siliconvalve.gab2017.demowebcore:1.0
docker tag siliconvalve.gab2017.demowebcore:1.1 YOUR_ACR.azurecr.io/gab2017/siliconvalve.gab2017.demowebcore:1.1
docker push YOUR_ACR.azurecr.io/gab2017/siliconvalve.gab2017.demowebcore:1.0
The push refers to a repository [YOUR_ACR.azurecr.io/gab2017/siliconvalve.gab2017.demowebcore]
f190fc6d75e4: Pushed
e64be9dd3979: Pushed
c300a19b03ee: Pushed
33f8c8c50fa7: Pushed
d856c3941aa0: Pushed
57b859e6bf6a: Pushed
d17d48b2382a: Pushed
latest: digest: sha256:14cf48fbab5949b7ad1bf0b6945201ea10ca9eb2a26b06f37 size: 1787

Repeat the ‘docker push’ for your 1.1 image too.

At this point we could now push this image to any compatible Docker host and it would run.

Deploying to Azure Container Service

Now comes the really fun bit :).

If you inspect the GAB-Demo2 project you will find a Kubernetes deployment file, the contents of which are displayed below.

If you update the Azure Container Registry path and insert an appropriate admin secret you now have a file that will deploy your docker image to a Kubernetes-managed Azure Container Service.
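The file looks something like the following – a sketch in which the Registry path, secret name and replica count are placeholders, and the Deployment apiVersion depends on your cluster’s Kubernetes release:

apiVersion: v1
kind: Service
metadata:
  name: gabdemo
spec:
  # LoadBalancer prompts Kubernetes to provision the Azure Load Balancer mentioned below
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: gabdemo
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: gabdemo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: gabdemo
    spec:
      containers:
      - name: gabdemo
        image: YOUR_ACR.azurecr.io/gab2017/siliconvalve.gab2017.demowebcore:1.0
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: acr-admin-secret

The admin secret referenced above can be created with ‘kubectl create secret docker-registry acr-admin-secret --docker-server=YOUR_ACR.azurecr.io --docker-username=ADMIN_USER --docker-password=PASSWORD --docker-email=you@example.com’.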

At your command line run:

kubectl create -f PATH_TO_FILE\gab-api-demo-deployment.yml
service "gabdemo" created
deployment "gabdemo" created

After a few minutes, if you look in the Azure Portal you will find that a public load balancer has been created by Kubernetes, which will allow you to hit the API definition at http://IP_OF_LOADBALANCER/swagger.

You can also find this IP address by using the Kubernetes management UI, which we can get to using a secured tunnel to the Kubernetes management host (this step only works if you set up the ACS environment fully and downloaded credentials).

At your command prompt type:

az acs kubernetes browse --name=YOUR_CLUSTER_NAME --resource-group=YOUR_RESOURCE_GROUP
Proxy running on 127.0.0.1:8001/ui
Press CTRL+C to close the tunnel...
Starting to serve on 127.0.0.1:8001

A browser will pop open on the Kubernetes management portal and you can then open the Services view and see your published endpoints by editing the ‘gabdemo’ Service and viewing the public IP address at the very bottom of the dialog.

If you hit the Swagger URL for this you might get a 500 server error – this is easy to fix (and is a bug in my code that I need to fix!) – simply change the URL in the Swagger page to include “v1.0” instead of “v1”.

Kubernetes Service Details

Upgrade the Container image

For extra bonus points you can also upgrade the running Container image by telling Kubernetes to modify the Service. You can do this either via the command line using kubectl, or you can edit the Service definition via the management web UI (shown below) and Kubernetes will upgrade the running Container image. If you hit the Swagger page for the service you will see the API version has incremented to 1.1!

Edit Service

You could also choose to roll back if you wanted – simply update the tag back to 1.0 and watch the API roll back.
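If you’d rather drive the upgrade and rollback from the command line, standard kubectl does the job – the Deployment and Container names here match the demo:

# upgrade the Deployment to the 1.1 image
kubectl set image deployment/gabdemo gabdemo=YOUR_ACR.azurecr.io/gab2017/siliconvalve.gab2017.demowebcore:1.1

# roll back to the previous revision
kubectl rollout undo deployment/gabdemo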

So there we have it – modernising existing .Net Windows-hosted apps so they run on Docker Containers on Azure, managed by Kubernetes!


Why You Should Care About Containers

The release this week of Windows Server 2016 Technical Preview 3, which includes the first public release of Microsoft’s Docker-compatible container implementation, is sure to bring additional focus onto an already hot topic.

I was going to write a long introductory post about why containers matter, but Azure CTO Mark Russinovich beat me to it with his great post this week over on the Azure site.

Instead, here’s a TL;DR summary on containers and the Windows announcement this week.

  • A Container isn’t a Virtual Machine – it’s a Virtual Operating System. Low-level services are provided by a Container Host which manages resource allocation (CPU / RAM / Disk). Some smarts around filesystem use mean a Container can effectively share most of the underlying Container Host’s filesystem and only needs to track delta changes to files.
  • Containers are not just a cloud computing thing: they can run anywhere you can run a Linux or Windows server.
  • Containers are, however, well suited to cloud computing because they offer:
    • faster startup times (they aren’t an entire OS on their own)
    • easier duplication and snapshotting (no need to track an entire VM any more)
    • higher density of hosting (though noisy neighbour control still needs solving)
    • easier portability: anywhere you have a compatible Container Host you can run your Container. The underlying virtualisation platform no longer matters, just the OS.
  • Docker is supported on all major tier one public clouds: Azure, AWS, GCP, Bluemix and SoftLayer.
  • A Linux Container can’t run on a Windows Host (and vice versa): the Container Host shares its filesystem with a Container so it’s not possible to mix and match them!
  • Containers are well suited to use in microservices architectures where a Container hosts a single service.
  • Docker isn’t the only Container tech about (but is the best known and considered most mature) and we can hold out hope of good interoperability in the Container space (for now) thanks to the Open Container Initiative (OCI).

Containers offer great value for DevOps-type custom software development and delivery, but can also work for standard off-the-shelf software too. I fully expect we will see Microsoft offer Containers for specific roles for their various server products.

As an example, for Exchange Server you may see Containers available for each Exchange role: Mailbox, Client Access (CAS), Hub Transport (Bridgehead), Unified Messaging and Edge Transport. You apply minimal configuration to the Container but can immediately deploy it into an existing or new Exchange environment. I would imagine this would make a great deal of sense to the teams running Office 365 given the number of instances of these they would have to run.

So, there we have it, hopefully an easily digestible intro and summary of all things Containers. If you want to play with the latest Windows Server release you can spin up a copy in Azure. If you don’t have a subscription, sign up for a trial. Alternatively, Docker offers some good introductory resources and training is available in Australia*.

HTH.

* Disclaimer: Cevo is a sister company of my employer Kloud Solutions.


Get Started with Docker on Azure

The most important part of this whole post is that you need to know that the whale in the Docker logo is officially named “Moby Dock”. Once you know that you can probably bluff your way through at least an introductory session on Docker :).

It’s been hard to miss the increasing presence of Docker, particularly if you work in cloud technology. Each of the major cloud providers has raced to provide container services (Azure, AWS, GCE and IBM) and these platforms see benefits in the higher density hosting they can achieve with minimal changes to existing infrastructure.

In this post I’m going to look at first steps to getting Docker running in Azure. There are other posts around that cover this, but there are a few gotchas along the way that I’ll call out here.

First You Need a Beard

Anyone worth their take home pay who works with *nix needs to grow a beard. Not one of those hipstery-type things you see on bare-ankled fixie riders. No – a real beard.

While Microsoft works on adding Docker support in the next Windows Server release you are, for the most part, stuck using a Linux variant to host and manage your Docker containers.

The Azure Cross-Platform Command-Line Interface teases you with the ability to create Docker hosts from a Windows-based computer, but ultimately you’ll have a much easier experience running it all from a Linux environment (even if you do download the xplat-cli there anyway).

If you do try to set things up using a Windows machine you’ll have to do a little dancing to get certificates setup (see my answer on this stackoverflow post). This is shortly followed by the realisation that you then can’t manage the host you just created by getting those nice certificates onto another host – too much work if you ask me :).

While we’re on Docker and Windows, let’s talk a little about boot2docker. This is designed to provide an easy way to get started with Docker and while it’s a great idea (especially for Windows users), it relies on VirtualBox, which won’t run if you already have Hyper-V installed.

So Linux it is then!

Management Machine

Firstly, let’s set up a Linux host that will be our Docker management host. For this post we’ll use a CentOS 7 host (I’ve avoided using Ubuntu because there are some challenges installing and using node.js, which is required for the Azure xplat CLI).

Once this machine is up and running we can SSH into it and install the required packages. Note that you’ll need to run this script as a root-equivalent user.
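On CentOS 7 something along these lines covers it – a sketch, with package names as my assumption for the era (node.js comes via EPEL and is needed for the Azure xplat-cli):

# enable EPEL, then pull in node.js, git and the Docker client
yum install -y epel-release
yum install -y nodejs npm git docker

# the Azure cross-platform CLI is distributed as an npm package
npm install -g azure-cli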

Now that we have our bits to manage the Docker environment, we can build an actual Docker Container Host and then an image to run on it.

Docker Container Host

On Azure the easiest way to get going with Docker is to use the cross platform CLI’s Docker features.

As a non-root user on our management Linux box we can run the following commands to get our Docker host up and running. I’m using an Organisational Account here so I don’t need to download any settings files.

# will prompt for username and password
[sw@sw1 ~]$ azure login

# set mode to service management
[sw@sw1 ~]$ azure config mode asm

# get the list of Ubuntu images - select one for the next command
[sw@sw1 ~]$ azure vm image list | grep Ubuntu-14_04

# setup the host - replace placeholders
[sw@sw1 ~]$ azure vm docker create -e 22 -l "West US" {dockerhost} "b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20141125-en-us-30GB" {linxuser} {linxpwd}

At this point we now have a new Azure VM up and running that has all the necessary Docker bits installed for us. If we look at the VM’s entry in the Azure Portal we can see that ports 22 and 4243 are already open for us. We can go ahead and test that everything’s good. Don’t forget to substitute your hostname!

[sw@sw1 ~]$ docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 info

Deploy an Image to a Container

We have our baseline infrastructure ready to rock, so let’s go ahead and deploy an image to it. For the purpose of this post we are going to use the wordpress-nginx image that can be built using the configuration in this Github repository.

On our management host we can run the following commands to build the image from the Dockerfile contained in the Git repository.

[sw@sw1 ~]$ git clone https://github.com/eugeneware/docker-wordpress-nginx.git

[sw@sw1 ~]$ docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 build -t="docker-wordpress-nginx" docker-wordpress-nginx/

Note: you need to make sure you run this as the user who set up the Docker container host and that you do it in that user’s home directory. This is because the certificates generated by the container host setup are stored in the user’s home folder in a directory called .docker. Also, expect this process to take a reasonable amount of time because it has to pull down a lot of data!

Once our image build is finished we can verify that it is on the Docker host by issuing this command:

[sw@sw1 ~]$ docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 images

Let’s create a new containerised version of the image and map the HTTP port out so we can access it from elsewhere in the world (we’re going to map port 80 to port 80). I’m also going to supply a friendly name for the container so I can easily reference it going forward (if I didn’t do this I’d get a nice long random string I’d need to use each time).

[sw@sw1 ~]$  docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 create -p 80:80 --name="dwn01" docker-wordpress-nginx

Now that we have created this we can start the container and it will happily run until we stop it :).

[sw@sw1 ~]$ docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 start dwn01

If we return to the VM management section in the Azure Management Portal and add an Endpoint to map to port 80 on our Docker container host we can then open up our WordPress setup page in a web browser and configure up WordPress.
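You can add that Endpoint from the xplat-cli too if you prefer – same placeholder hostname as before:

# map public port 80 through to port 80 on the Docker container host
[sw@sw1 ~]$ azure vm endpoint create {dockerhost} 80 80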

If we simply stop the container we will lose any changes to the running environment. Docker provides us with the ‘commit’ command to rectify this. Let’s go ahead and save our state:

[sw@sw1 ~]$ docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 commit dwn01 sw/dwn01

and then we can stop the Container.

[sw@sw1 ~]$ docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 stop dwn01

We now have a preserved state container along with the original unchanged one. If we want to move this container to another platform that supports Docker we could also do that, or we could repeat all our changes based on the original unchanged container.

This has been a very brief overview of Docker on Azure – hopefully it will get you started with the basics and comfortable with the mechanics of setting up and managing Docker.
