Category Archives: CI

Continuous Deployment for Docker with VSTS and Azure Container Registry

I’ve been watching with interest the growing maturity of Containers, and in particular their increasing penetration as a hosting and deployment artefact in Azure. While I’ve long believed them to be the next logical step for many developers, until recently they have had limited appeal to everyday developers because the tooling simply hasn’t been there, particularly in the Microsoft ecosystem.

Starting with Visual Studio 2015, and with the support of Docker for Windows, I started to see this stack as viable for many.

In my current engagement we are starting on new features and decided that we’d look to ASP.Net Core 2.0 to deliver our REST services and host them in Docker containers running in Azure’s Web App for Containers offering. We’re heavy users of Visual Studio Team Services and, given Microsoft’s focus on Docker, we didn’t see that there would be any blockers.

Our flow at a high level is shown below.

Build Pipeline

1. Developer with Visual Studio 2017 and Docker for Windows for local dev/test
2. Checked into VSTS and built using VSTS Build
3. Container Image stored in Azure Container Registry (ACR)
4. Continuously deployed to Web Apps for Containers.

We hit a few sharp edges along the way, so I thought I’d cover off how we worked around them.

Pre-requisites

There are a few things you need to have in place before you can start to use the process covered in this blog. Rather than reproduce them here in detail, go and have a read of the following items and come back when you’re done.

  • Setting up a Service Principal to allow your VSTS environment to have access to your Azure Subscription(s), as documented by Donovan Brown.
  • Create an Azure Container Registry (ACR), from the official Azure Documentation. Hint here: don’t use the “Classic” option as it does not support Webhooks, which are required for Continuous Deployment from ACR.
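
If you prefer the command line, a managed (non-Classic) registry can also be created via the Azure CLI. Here’s a minimal sketch, assuming the Azure CLI 2.0 of the time and placeholder resource names:

# Managed SKUs (Basic/Standard/Premium) support the Webhooks needed for Continuous Deployment.
az acr create --name mydemoregistry --resource-group my-demo-rg --sku Basic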

See you back here soon :)

Setting up your Visual Studio project

Before I dive into this, one cool item to note is that you can add Docker support to existing Visual Studio projects, so if you’re interested in trying this out you can take a look at how to add support to your current solution (note that it doesn’t magically support all project types… so if you’ve got that cool XAML or WinForms project… you’re out of luck for now).

Let’s get started!

In Visual Studio do a File > New > Project. As mentioned above, we’re building an ASP.Net Core REST API, so I went ahead and selected .Net Core and ASP.Net Core Web Application.

New Project - .Net Core

Once you’ve done this you get a selection of templates you can choose from – we selected Web API and ensured that we left Docker support on, and that it was on Linux (just saying that almost makes my head explode with how cool it is ;) )

Web API with Docker support

At this stage we now have a baseline REST API with Docker support already available. You can run and debug locally via IIS Express or via Docker – give it a try :).

If you’ve not used this template before you might notice that there is an additional project in the solution that contains a series of Docker-related YAML files – for our purposes we aren’t going to touch these, but we do need to modify a couple of files included in our ASP.Net Core solution.

If we try to run a Docker build on VSTS using the supplied Dockerfile it will fail with an error similar to:

COPY failed: stat /var/lib/docker/tmp/docker-builder613328056/obj/Docker/publish: no such file or directory
/usr/bin/docker failed with return code: 1

This happens because the default Dockerfile copies the app from obj/Docker/publish – a folder Visual Studio populates before a local Docker build, but which doesn’t exist on a hosted VSTS build agent. Let’s fix this.

Add a new file to the project and name it “Dockerfile.CI” (or something similar) – it will appear as a sub-item of the existing Dockerfile. In this new file add the following, ensuring you update the ENTRYPOINT to point at your DLL.

This Dockerfile is based on a sample from Docker’s official documentation and uses a Docker Container to run the build, before copying the results to the actual final Docker Image that contains your app code and the .Net Core runtime.
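
The original post embeds the file itself; as a minimal sketch of that multi-stage approach (the image tags match the ASP.Net Core 2.0 era, and MyWebApi.dll is a placeholder for your own assembly):

# Build stage: restore and publish inside the full .Net Core SDK image
FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /app

# Restore NuGet packages first so this layer caches between builds
COPY *.csproj ./
RUN dotnet restore

# Copy the remaining source and publish a Release build
COPY . ./
RUN dotnet publish -c Release -o out

# Runtime stage: copy only the published output onto the slim runtime image
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "MyWebApi.dll"]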

We have one more change to make. If we do just the above, the build will still fail because the default .dockerignore file stops pretty much all files being copied into the Container we are using for the build. Let’s fix this one by updating the file to contain the following :)
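
Again the post embeds the actual file; the gist of it is to stop ignoring the source tree and exclude only local build output – something along these lines (if you still use the default Dockerfile for local F5 debugging you may need to keep its obj/Docker/publish entries too):

# Let the source tree through to the build container; exclude only local build output.
bin/
obj/
.vs/

With both files in place you can sanity-check the CI build locally before pushing: docker build -f Dockerfile.CI -t mywebapi-ci . (the image name is illustrative).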

Now we have the necessary bits to get this up and running in VSTS.

VSTS build

This stage is pretty easy to get up and running now that we have the updated files in our solution.

In VSTS create a new Build and select the Container template (right now it’s in preview, but works well).

Docker Build 01

On the next screen, select the “Hosted Linux” build agent (also now in preview, but works a treat). You need to select this so that you build a Linux-based Image, otherwise you will get a Windows Container which may limit your deployment options.

build container 02

We then need to update the Build Tasks to have the right details for the target ACR and to build the solution using the “Dockerfile.CI” file we created earlier, rather than the default Dockerfile. I also set a fixed Image Name, primarily because the default selected by VSTS tends to be invalid. You could also consider changing the tag from $(Build.BuildId) to $(Build.BuildNumber), which is much easier to track directly in VSTS.

build container 03

Finally, update the Publish Image Task with the same ACR and Image naming scheme.

Running your build should generate an image that is registered in the target ACR as shown below.

ACR

Deploy to Web Apps for Containers

Once the Container Image is registered in ACR, you can theoretically deploy it to any container host (Azure Container Instances, Web Apps for Containers, Azure Container Services), but for this blog we’ll look at Web Apps for Containers.

When you create your new Web App for Containers instance, ensure you select Azure Container Registry as the source and that you select the correct Repository. If you have added the ‘latest’ tag to your built Images you can select that at setup, and later enable Continuous Deployment.
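
If you’d rather script the setup, something along these lines should work with the Azure CLI of the time (all names are placeholders – check the exact parameters against your CLI version):

# Web App for Containers needs a Linux App Service plan.
az appservice plan create --name my-linux-plan --resource-group my-demo-rg --is-linux --sku S1

# Create the Web App pointing at the Image pushed to ACR.
az webapp create --name my-container-app --resource-group my-demo-rg --plan my-linux-plan --deployment-container-image-name mydemoregistry.azurecr.io/mywebapi:latest

# Enable the CD webhook so a new push of the tag triggers a redeployment.
az webapp deployment container config --name my-container-app --resource-group my-demo-rg --enable-cd true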

webappscontainers

The result will be that your custom Image is deployed into your Web Apps for Containers instance and will be available on ports 80 and 443 for the world to use.

Happy days!

I’ve uploaded the sample project I used for this blog to GitHub – you can find it at: https://github.com/sjwaight/docker-dotnetcore-vsts-demo

Also, please feel free to leave any comments you have, and I am certainly interested in other ways to achieve this outcome as we considered Docker Compose with the YAML files but ran into issues at build time.


Deploy a PHP site to Azure Web Apps using Dropbox

I’ve been having some good fun getting into the nitty gritty of Azure’s Open Source support and keep coming across some amazing things.

If you want to move away from those legacy hosting businesses and want a simple method to deploy static or dynamic websites, then this is worth a look.

The sample PHP site I used for this demonstration can be cloned from GitHub here: https://github.com/banago/simple-php-website

The video has no sound, but it should be easy enough to follow.

It’s so simple even your dog could do it.



Deploy Umbraco using Octopus Deploy

Every once in a while you come across a tool that really fits its purpose and delivers good value for not a whole lot of effort. I’m happy to say that I think Octopus Deploy is one such tool! While Octopus isn’t the first (or most mature) offering in this space, it hits a sweet spot and really offers any sized team the ability to get on board the Continuous Deployment / Delivery bandwagon.

My team is using it to deploy a range of .Net websites, and we’re considering it for pushing database changes too (although these days a lot of what we build utilises Entity Framework, so we don’t need to push that many DB scripts about any more). One thing we’ve done a lot of is deploy Umbraco 4 sites.

It’s About The Code

One important fact to get out here is that I’m only going to talk about how Octopus will help you deploy .Net code changes. While you can generate SQL scripts and deploy them using Octopus (and ReadyRoll perhaps), Umbraco development has little to do with schema change and everything to do with instance data change. This is not an easy space to be in – more so with larger websites – and even Umbraco has found it hard to solve, despite producing Courier specifically for this challenge. All this being said, I’m sure if you spend time working with SQL Data Compare you can come up with a database deployment step using scripts.

Setting It Up

Before you start Umbraco deployments using Octopus you need to make a decision about what to deploy each time, and then modify your target server to suit.

When developing with Umbraco you will have a “media”, an “umbraco” and an “umbraco_client” folder in your solution folder, but not necessarily included in your Visual Studio solution. These three folders will also be present on your target deployment server, and in order to leverage Octopus properly you need to manage these three folders appropriately.

Media folder

This folder holds files that are uploaded by CMS users over time. It is rare that you would want to take media from a development environment and push it to any other environment, except on initial deployment. If you do deploy it each time then your deployment will be (a) larger and (b) more challenging to deploy (notice I didn’t say impossible). You’ll also need to deal with merging of media “meta data” in the Umbraco CMS you’re deploying to (you’re back at Courier at this stage).

Regardless of whether you want to push media or not, you will need to deal with how you treat the media folder on your target server – Octopus can automatically re-map your IIS root folder for your website to your new deployment, so you’ll need to write Powershell to deal with this (and merge content if required).

Our team’s process is to not transfer media files via Octopus, and we have solved the media folder problem by creating the folder as a Virtual Directory in IIS on the target web server. As long as the physical folder has the right permissions you will have no problems with this approach. The added benefit here is that when Octopus Deploy remaps your IIS root folder to a new deployment the media is already in place and not affected at all.
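
If you want to script that step, here’s a minimal sketch using the IIS Powershell module (the site and folder names are purely illustrative):

# Expose a shared physical media folder as an IIS Virtual Directory so it
# survives Octopus re-mapping the site root on each deployment.
Import-Module WebAdministration
New-WebVirtualDirectory -Site "MyUmbracoSite" -Name "media" -PhysicalPath "D:\SiteData\media"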

Umbraco folders

The two Umbraco folders are required for the CMS to function as expected. While some of you might make changes internally to these folders, I’d recommend you revisit your reasons for doing so and see if you can’t make these two folders static and simply re-deploy the Umbraco binaries in your Octopus package.

There are a couple of ways to proceed with these folders – you can choose to redeploy them each time or you can treat them as exceptions and, as with the media folder, you can create Virtual Directories for them under IIS.

If you want to redeploy them as part of your package you will need to do a few things:

  1. Create a small stub “umbraco.zip” that is empty or that contains a single file (or similar) in your Visual Studio solution.
  2. Write some Powershell in your PostDeploy.ps1 file that unzips those folders into the right place on your target web server (see the sketch after this list).
  3. In your build script (on your build server, right?) utilise an appropriate MSBuild extension (like the trusty MSBuildCommunityTasks) to zip the two folders into a zip that replaces the stub you created in (1).
  4. Run your build in Release mode (required to trigger OctoPack), which will package your outputs, including the new zip that replaced the stub from (1).
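
For step 2, a minimal PostDeploy.ps1 sketch might look like the following (it assumes Octopus runs the script from the package installation folder and that the stub is named umbraco.zip – adjust to taste):

# PostDeploy.ps1 - unpack the Umbraco folders shipped as a zip inside the package.
Add-Type -AssemblyName System.IO.Compression.FileSystem

$deployRoot = (Get-Location).Path
$zipPath = Join-Path $deployRoot "umbraco.zip"

if (Test-Path $zipPath)
{
    # Extract the umbraco and umbraco_client folders over the deployed site, then tidy up.
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipPath, $deployRoot)
    Remove-Item $zipPath
}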

On deployment you should see your Powershell execute and unpack your Umbraco folders into the right place!

Alternatively, you can choose not to redeploy these two folders each time – if this suits (and it does for us) then you can use the same approach as with the media folder and simply create two Virtual Directories. Once you’ve deployed, everything will work as expected.

It’s Packaged (and that’s a Wrap)

So there we have a simple scenario for deploying Umbraco via Octopus Deploy – I’m sure there are more challenging scenarios than the above, but I bet with a little MSBuild and Powershell-fu you can work something out.

I hope you’ve found this post useful and I’d recommend checking out the Octopus Deploy Blog to see what great work Paul does at taking feedback on board for the product.


The Terrible Truth About Version 1.0.0.0

Let me start by saying that if you think this is going to be a post about how bad most “v1” software is then you will be sorely disappointed and you should move on.

What I am going to talk about is fairly similar to Scott Hanselman’s blog on semantic versioning and the reasons you should be using versioning in your software.

Care Factor

Firstly, you may ask yourself, “why should I care about versioning?”

This is a fair question.

Perhaps you don’t need to care about it, but I suspect more accurately you’ve never needed to care about it.

Through the custom software projects I’ve been involved in during my career I’ve seen pretty much every combination you can think of for how software can be built, released and maintained. In the majority of cases proper versioning has taken a back seat to almost every other aspect of delivery (assuming versioning was even thought of at all). Certainly the discussions I’ve often participated in around exactly what version was deployed where have tended to be fairly excruciating, as they have typically arisen when there is a problem that needs root cause analysis and ultimately a fix.

Examples of bad practice I’ve seen are:

  • Using file create date / time to determine which assembly copy was newer (of course this approach was undocumented).
  • Someone screwed up the versioning so the newer builds had an older version number (and nobody ever bothered to fix it).
  • Everything is versioned 1.0.0.0 with a mix of “F5” developer machine builds and build server builds being deployed randomly to fix issues. This one resulted in “race conditions” where an assembly may get updated from more than one source (and you never knew which one).
  • New build server = zero out all counters = Groundhog Day versioning (or “what was old is now new”).
  • New branch, old version number scheme. Builds with parallel version numbers. Oops.
  • Version 5 of ReSharper had assembly (and MSI) versions that bore a tenuous relationship to the actual official “version”. The 5.1.2 release notes have an interesting response thread which I would put as recommended reading on how versioning affects end users.

The bottom line is that versioning (when used properly) allows us to identify a unique set of features and configurations that were in use at any given time.  In .Net it also gives us access to one of the more powerful features of the Global Assembly Cache (GAC) as we can deploy multiple versions of the same assembly side-by-side!

If this seems self-explanatory that’s because it is.

Still With Me?

What I’ve written so far could apply equally to any technology used in software delivery but now I’m going to cover what I think is the root cause of the lack of care about versioning in the .Net landscape:

It’s too easy to not care about versioning with .Net.

Thanks to Microsoft we get a free AssemblyInfo file along for the ride in each new project we create.  It wonderfully provides us with this template:

using System.Reflection;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

// General Information about an assembly is controlled through the following
// set of attributes. Change these attribute values to modify the information
// associated with an assembly.
[assembly: AssemblyTitle("YourApplicationName")]
[assembly: AssemblyDescription("")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("")]
[assembly: AssemblyProduct("YourApplicationName")]
[assembly: AssemblyCopyright("Copyright © 2012")]
[assembly: AssemblyTrademark("")]
[assembly: AssemblyCulture("")]

// Setting ComVisible to false makes the types in this assembly not visible
// to COM components. If you need to access a type in this assembly from
// COM, set the ComVisible attribute to true on that type.
[assembly: ComVisible(false)]

// The following GUID is for the ID of the typelib if this project is exposed to COM
[assembly: Guid("1266236f-9358-40d0-aac7-fe41ae102f04")]

// Version information for an assembly consists of the following four values:
//
// Major Version
// Minor Version
// Build Number
// Revision
//
// You can specify all the values or you can default the Build and Revision Numbers
// by using the '*' as shown below:
// [assembly: AssemblyVersion("1.0.*")]
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]

The problem is the last two lines. I wonder how many people bother to even look at this file?

Let’s try a little experiment – how many matches for the default entry for version in the AssemblyInfo file can we find on GitHub? Oh, about 62,000.

Now this isn’t necessarily a problem but it indicates the scale of the challenge! To be fair to those on GitHub, all/some/none of those matches may well be overridden on a build server (I hope so!) using one of the multitude of techniques available (if you’d like a few options let me know and I’ll do another post on this).
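
In the meantime, here’s a taste of one common approach: a pre-build step on the build server that stamps the version into every AssemblyInfo file before compilation. A minimal Powershell sketch (BUILD_NUMBER is an assumed environment variable – substitute whatever your build server exposes):

# Rewrite AssemblyVersion / AssemblyFileVersion in every AssemblyInfo.cs under the source tree.
$version = "1.2.$env:BUILD_NUMBER.0"
Get-ChildItem -Recurse -Filter AssemblyInfo.cs | ForEach-Object {
    (Get-Content $_.FullName) `
        -replace 'AssemblyVersion\("[^"]+"\)', "AssemblyVersion(""$version"")" `
        -replace 'AssemblyFileVersion\("[^"]+"\)', "AssemblyFileVersion(""$version"")" |
        Set-Content $_.FullName
}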

Let’s solve the problem by deleting those two lines – sounds like a plan? Get rid of the pesky version number completely. Does the application still compile? Yup.

So what version do we get left with? Let’s take a look at the resulting details.

You are kidding right?!

What The?! Version 0.0.0.0?! We’ve made things even more broken than they already were. That’s not a strategy!

Some Suggestions

Fairly obviously we are somewhat stuck with what we have here – it is not illegal at compile time to have no explicit version number defined (resulting in 0.0.0.0). We can’t modify the default behaviour of csc.exe or vbc.exe to enforce an incremented version number…

Assuming you’re using good practices and have a build server running your builds, you can drop in a couple of unit tests to ensure any compiled assembly doesn’t have either of our problematic versions. Here’s a super simple example of how to do this. Note this needs to run on your build server – these tests will fail on developer machines if you aren’t checking in updated AssemblyInfo files (which I would recommend against).


using System.Reflection;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class VersionNumberTests
{
    [TestMethod]
    public void ValidateNotDefaultVersionNumber()
    {
        // Fails while the assembly still carries the Visual Studio default of 1.0.0.0.
        const string defaultVersionNumber = "1.0.0.0";
        var myTestPoco = new MyPoco();
        var assembly = Assembly.GetAssembly(myTestPoco.GetType());
        Assert.AreNotEqual(defaultVersionNumber, assembly.GetName().Version.ToString());
    }

    [TestMethod]
    public void ValidateNotEmptyVersionNumber()
    {
        // Fails if the AssemblyVersion attribute has been removed entirely (0.0.0.0).
        const string noDefinedVersionNumber = "0.0.0.0";
        var myTestPoco = new MyPoco();
        var assembly = Assembly.GetAssembly(myTestPoco.GetType());
        Assert.AreNotEqual(noDefinedVersionNumber, assembly.GetName().Version.ToString());
    }
}

Remember to update the tests as you increment major / minor version numbers.

The goal is to ensure your build server will not emit assemblies that have version numbers matching those held in source control. We’re doing this to enforce deployment of server builds and to make a developer build obviously identifiable by its version number. I personally set builds to never check updated AssemblyInfo files back into source control for this very reason.

Make sure you *do* label builds in source control, however. If you don’t, you may find it hard in the longer term to retrieve older versions unless you have a very large “drops” storage location :).

Early on in planning your project you should also decide on a version number scheme and then stick to it. Make sure it is documented and supported in your release management practices and that everyone on the team understands how it works. It will also ease communication with your stakeholders when you have known version numbers to discuss (no more “it will be in the next release”).

Finally, don’t let the version number hide away. Ensure it is displayed somewhere in the application you’re building. It doesn’t need to be on every page or screen, but it should be somewhere that a non-Admin user can easily view.

So, there we go, hopefully a few handy tips that will have you revisiting the AssemblyInfo files in your projects.

Happy versioning!



Configure Mercurial Pull with HTTP Authentication for Jenkins on Windows

As part of the journey our team is going on in 2012 I am looking to migrate our processes and tools to a more scalable and maintainable state. One of the first items to look at was the replacement of that early cornerstone of CI – CruiseControl. Our existing server was aging and was running on a platform that meant Azure SDK support was unavailable. To resolve this I moved to a new machine and a new CruiseControl build, only to discover that the Mercurial features don’t work.

After an initial clone the CC builds failed – having spent time investigating, I decided to see what else was about that would allow us to do centralised builds. Bearing in mind this is a transition state, I didn’t want to invest a huge amount of time and money in a solution, and found the Jenkins CI offering online. As a bonus it has support (via plugins) for Mercurial and MSBuild, so it seemed to fit the bill.

However…

As is always the way, things didn’t quite work as I wanted. Our existing build setup utilises HTTP authentication rather than SSH to connect to our hosted Mercurial repositories, and when I set up new Jenkins builds I was seeing that each build would clone the entire remote repository again, which in some instances is not desirable given their size. This is what I saw at the top of my build console log:

Started by user anonymous
Building in workspace E:\Program Files (x86)\Jenkins\jobs\XXXXXXXXX\workspace
[workspace] $ hg.exe showconfig paths.default
ERROR: Workspace reports paths.default as https://AAAAAAAA@bitbucket.org/BBBBB/XXXXXXXXX
which looks different than https://AAAAAAAA:PPPPPPPP@bitbucket.org/BBBBB/XXXXXXXXX
so falling back to fresh clone rather than incremental update

So I was scratching my head about this – we’ve always used the inline auth approach but it looked like in this instance the Jenkins Mercurial plugin / Mercurial client was deciding that the inline authentication made the source different enough that it wouldn’t even attempt to do a pull.

Digging around I discovered that the hg command line supports the use of “ini” files to set global parameters. I could theoretically have just set this in the baseline ini file that comes with the Mercurial client, but as that file says, “any updates to your client software will remove these settings.”

So, how to resolve?

Here are the steps I followed to resolve:

1. Create a new non-admin Windows login with a valid password.

2. Grant this login control of the Jenkins folder (and any others that Jenkins may read / write / update).

3. Open up Windows Services and change the Jenkins service logon to be the new logon from (1).

4. Create a .hgrc file in the new Windows login’s “C:\Users\<username>\” folder with these contents:

[auth]
bitbucket.prefix = bitbucket.org/BBBBB
bitbucket.username = AAAAAAAA
bitbucket.password = PPPPPPPP

5. Restart the Jenkins Windows Service.

Now when you run your build you will see a much more expected sight:

Started by user anonymous
Building in workspace E:\Program Files (x86)\Jenkins\jobs\XXXXXXXX\workspace
[workspace] $ hg.exe showconfig paths.default
[workspace] $ hg.exe pull --rev default
[workspace] $ hg.exe update --clean --rev default
0 files updated, 0 files merged, 0 files removed, 0 files unresolved

Hopefully this post will save you some time if you’re trying to achieve the same sort of thing for your team.
