Category Archives: Build

Continuous Deployment for Docker with VSTS and Azure Container Registry

I’ve been watching with interest the growing maturity of Containers, and in particular their increasing penetration as a hosting and deployment artefact in Azure. While I’ve long believed them to be the next logical step for many developers, until recently they have had limited appeal to everyday developers as the tooling hasn’t been there, particularly in the Microsoft ecosystem.

Starting with Visual Studio 2015, and with the support of Docker for Windows, I started to see this stack as viable for many.

In my current engagement we are starting on new features and decided that we’d look to ASP.Net Core 2.0 to deliver our REST services and host them in Docker containers running in Azure’s Web App for Containers offering. We’re heavy users of Visual Studio Team Services and, given Microsoft’s focus on Docker, we didn’t see that there would be any blockers.

Our flow at high level is shown below.

Build Pipeline

1. Developer with Visual Studio 2017 and Docker for Windows for local dev/test
2. Checked into VSTS and built using VSTS Build
3. Container Image stored in Azure Container Registry (ACR)
4. Continuously deployed to Web Apps for Containers.

We hit a few sharp edges along the way, so I thought I’d cover off how we worked around them.


There are a few things you need to have in place before you can start to use the process covered in this blog. Rather than reproduce them here in detail, go and have a read of the following items and come back when you’re done.

  • Setting up a Service Principal to allow your VSTS environment to have access to your Azure Subscription(s), as documented by Donovan Brown.
  • Create an Azure Container Registry (ACR), from the official Azure Documentation. Hint here: don’t use the “Classic” option as it does not support Webhooks which are required for Continuous Deployment from ACR.

See you back here soon 🙂

Setting up your Visual Studio project

Before I dive into this, one cool item to note, is that you can add Docker support to existing Visual Studio projects, so if you’re interested in trying this out you can take a look at how you can add support to your current solution (note that it doesn’t magically support all project types… so if you’ve got that cool XAML or WinForms project… you’re out of luck for now).

Let’s get started!

In Visual Studio do a File > New > Project. As mentioned above, we’re building an ASP.Net Core REST API, so I went ahead and selected .Net Core and ASP.Net Core Web Application.

New Project - .Net Core

Once you’ve done this you get a selection of templates you can choose from – we selected Web API and ensured that we left Docker support on, and that it was on Linux (just saying that almost makes my head explode with how cool it is 😉 )

Web API with Docker support

At this stage we now have a baseline REST API with Docker support already available. You can run and debug locally via IIS Express or via Docker – give it a try :).

If you’ve not used this template before you might notice that there is an additional project in the solution that contains a series of Docker-related YAML files – for our purposes we aren’t going to touch these, but we do need to modify a couple of files included in our ASP.Net Core solution.

If we try to run a Docker build on VSTS using the supplied Dockerfile it will fail with an error similar to:

COPY failed: stat /var/lib/docker/tmp/docker-builder613328056/obj/Docker/publish: no such file or directory
/usr/bin/docker failed with return code: 1

Let’s fix this.

Add a new file to the project and name it “Dockerfile.CI” (or something similar) – it will appear as a sub-item of the existing Dockerfile. In this new file add the following, ensuring you update the ENTRYPOINT to point at your DLL.
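A multi-stage “Dockerfile.CI” along those lines might look something like this sketch for an ASP.Net Core 2.0 project – the image tags reflect that era, and “YourApi.dll” is a placeholder you’d change to your own assembly name:

```dockerfile
# Stage 1: restore, build and publish inside a build container,
# so the VSTS agent doesn't need the .Net Core SDK installed
FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /app

# Restore first so this layer is cached when only source files change
COPY *.csproj ./
RUN dotnet restore

COPY . ./
RUN dotnet publish -c Release -o out

# Stage 2: copy the published output into the slim runtime image
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "YourApi.dll"]
```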

This Dockerfile is based on a sample from Docker’s official documentation and uses a Docker Container to run the build, before copying the results to the actual final Docker Image that contains your app code and the .Net Core runtime.

We have one more change to make. If we do just the above, the project will fail to build because the default .dockerignore file stops the copying of pretty much all files to the Container we are using for the build. Let’s fix this one by updating the file to contain the following 🙂
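A minimal sketch of such a .dockerignore – an assumption you should tune to your own solution – that lets source flow into the build container while keeping local build output and tooling folders out:

```
bin/
obj/
.git/
.vs/
```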

Now we have the necessary bits to get this up and running in VSTS.

VSTS build

This stage is pretty easy to get up and running now we have the updated files in our solution.

In VSTS create a new Build and select the Container template (right now it’s in preview, but works well).

Docker Build 01

On the next screen, select the “Hosted Linux” build agent (also now in preview, but works a treat). You need to select this so that you build a Linux-based Image, otherwise you will get a Windows Container which may limit your deployment options.

build container 02

We then need to update the Build Tasks to have the right details for the target ACR and to build the solution using the “Dockerfile.CI” file we created earlier, rather than the default Dockerfile. I also set a fixed name for the Image Name, primarily because the default selected by VSTS typically tends to be invalid. You could also consider changing the tag from $(Build.BuildId) to be $(Build.BuildNumber) which is much easier to directly track in VSTS.

build container 03

Finally, update the Publish Image Task with the same ACR and Image naming scheme.

Running your build should generate an image that is registered in the target ACR as shown below.


Deploy to Web Apps for Containers

Once the Container Image is registered in ACR, you can theoretically deploy it to any container host (Azure Container Instances, Web Apps for Containers, Azure Container Services), but for this blog we’ll look at Web Apps for Containers.

When you create your new Web App for Containers instance, ensure you select Azure Container Registry as the source and that you select the correct Repository. If you have added the ‘latest’ tag to your built Images you can select that at setup, and later enable Continuous Deployment.


The result will be that your custom Image is deployed into your Web Apps for Containers instance, which will be available on ports 80 and 443 for the world to use.

Happy days!

I’ve uploaded the sample project I used for this blog to Github – you can find it at:

Also, please feel free to leave any comments you have, and I am certainly interested in other ways to achieve this outcome as we considered Docker Compose with the YAML files but ran into issues at build time.


Continuous Deployment of Windows Services using VSTS

I have to admit writing this post feels a bit “old skool”. Prior to the last week I can’t remember the last time I had to break out a Windows Service to solve anything. Regardless, for one cloud-based IaaS project I’m working on I needed a simple worker-type solution that was private and could post data to a private REST API hosted on the other end of an Azure VNet Peer.

While I could have solved this problem any number of ways I plumped for a Windows Service, primarily because it will be familiar to developers and administrators at the organisation I’m working with. But I figured if I’m going to have to deploy onto VMs I’m sure not deploying in an old-fashioned way! Luckily we’re already running in Azure and hosting on VSTS so I have access to all the tools I need!

Getting Setup

The setup for this process is very similar to the standard “Deploy to Azure VM” scenario that is very well covered in the official documentation and which I added some context to in a blog post earlier in the year.

Once you have the basics in place (it only takes a matter of minutes to prepare each machine) you can head back here to cover off the changes you need to make.

Note: this process is going to assume you have a Windows Service Project in Visual Studio 2015 that is being built using VSTS’s in-built build infrastructure. If you have other configurations you may need to take different steps to get this to work 🙂

Tweak build artefact output

First we need to make sure that the outputs from our build are stored as artefacts in VSTS. I didn’t use any form of installer packaging here so I needed to ensure my build outputs were all copied to the “drops” folder.

Here is my build definition which is pretty vanilla:

Build Process

The tweak I made was on the Visual Studio build step (step 2) where I defined an additional MSBuild Argument that set the OutputPath to be the VSTS build agent’s artifacts directory which will automatically be copied by the Publish Artifacts step:

Build Update
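For reference, the extra argument took roughly this shape – this is a sketch using the standard VSTS build variable, so check the exact casing in your own definition:

```
/p:OutputPath="$(build.artifactstagingdirectory)\\"
```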

If I look at a history entry for my CI build and select Artifacts I can see that my Windows Service binary and all its associated assemblies, config files (and importantly Deployment Script) are stored with the build.

Build Artefacts

Now we have the build in the right configuration let’s move on to the deployment.

Deploying a Service

This is actually easier than it used to be :). Many of us would remember the need to package the Windows Service into an MSI and then use InstallUtil.exe to do the install on deployment.

Fear not! You no longer need this approach for Windows Services!

PowerShell FTW!

Yes, that Swiss Army knife comes to the rescue again with the Get-Service, New-Service, Stop-Service and Start-Service Cmdlets.

We can combine these handy Cmdlets in our Deployment script to manage the installation of our Windows Service as shown below.
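As a sketch of the kind of deployment script meant here – the service name, install path and binary name below are all placeholders – the Cmdlets combine like so:

```powershell
$serviceName = "MyWorkerService"                               # placeholder
$installPath = "C:\Services\MyWorkerService"                   # placeholder
$binaryPath  = Join-Path $installPath "MyWorkerService.exe"    # placeholder

# Stop the service if it already exists so the binaries can be overwritten
$service = Get-Service -Name $serviceName -ErrorAction SilentlyContinue
if ($service -ne $null) {
    Stop-Service -Name $serviceName
}

# Copy the new build outputs (staged by Release Management) into place
New-Item -Path $installPath -ItemType Directory -Force | Out-Null
Copy-Item -Path "C:\temp\*" -Destination $installPath -Recurse -Force

# Register the service on first deployment only
if ($service -eq $null) {
    New-Service -Name $serviceName -BinaryPathName $binaryPath -StartupType Automatic
}

Start-Service -Name $serviceName
```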

The Release Management definition remains unchanged – all we had to do was ensure our build outputs were available to copy from the ‘Drop’ folder on the build and that they are copied to C:\temp\ on the target VM(s). Our Deployment Script takes care of the rest!

That’s it! Next time your CI build passes your CD kicks in and your Windows Service will be updated on your target VMs!


Deploying to Azure VMs using VSTS Release Management

I am going to subtitle this post “the missing manual” because I spent quite a bit of time troubleshooting how this should all work.

Microsoft provides a bunch of useful information on how to deploy from Visual Studio Team Services (VSTS) to different targets, including Azure Virtual Machines.

In an ideal world I wouldn’t be using VMs at all, but for my current particular use case I have to use VMs so the above (linked) approach worked.

The approach sounds good but I ran into a few sharp edges that I thought I would document here (and hopefully the source documentation will be updated to reflect this in due course).

Preparing deployment targets

Azure FQDNs

Note: Please see my update at the bottom of this post before reading this section. While you can use IP addresses (if you make them static) it’s worth configuring test certs with the FQDN.

I thought I’d do the right thing by configuring the Azure IP of my hosts to have a full FQDN rather than just an IP address.

As I found out this is not a good idea.

The main issue you will run into is that the generated certs on target hosts only have the hostname in them (i.e. azauhost01) rather than the full public FQDN.

When the Release Management infrastructure tries to connect to a host this cert mismatch causes a fatal error. I didn’t spend much time troubleshooting so decided to revert to use of IP addresses only.

When using dynamic IP addresses the first Release Management action, “Azure Deployment: Select Resource Group”, is important as it allows for discovery of all VMs and their IPs (i.e. no hardcoding required). This approach does mean, however, you need to consider how you group VMs into Resource Groups to allow any VM in the Resource Group to be used as the deployment target.

Select Resource Group

Local permissions

I was running my deployments to non-Domain joined Windows 2012 R2 server instances with local administrative accounts and had opened the necessary port in the NSG rules to allow WinRM into the servers from the VSTS infrastructure.

Everything looked good on deployment until PowerShell execution on the target hosts resulted in errors due to permissions. As it turns out the error message was actually useful in resolving this problem 🙂

In order to move beyond this error I had to prepare each target host by running these commands at an admin command prompt on the host:

winrm quickconfig


We could drop these into a DSC module and run that way if we wanted to make this repeatable across new hosts.
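A rough sketch of that DSC approach, using the built-in Script resource – untested and with invented names, but it shows the shape:

```powershell
Configuration EnableWinRm
{
    Node "localhost"
    {
        # Wrap the one-off winrm setup in an idempotent DSC Script resource
        Script WinRmQuickConfig
        {
            GetScript  = { @{ Result = (winrm enumerate winrm/config/listener | Out-String) } }
            TestScript = { (Get-Service -Name WinRM).Status -eq 'Running' }
            SetScript  = { winrm quickconfig -quiet }
        }
    }
}

# Compile and apply on the target host
EnableWinRm -OutputPath C:\Dsc\EnableWinRm
Start-DscConfiguration -Path C:\Dsc\EnableWinRm -Wait -Verbose
```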

There is a good troubleshooting overview for this from Microsoft.

Wait! Where did my PowerShell script go?

If you follow the instructions provided by Microsoft you need to add a deployment Powershell script (actually a DSC module) to your Web App (their example uses “ConfigureWebserver.ps1” for the filename).

There is one issue with this approach – the build step to package the Web App actually ends up bundling the PowerShell inside of a deployment zip which means once the files are copied to your target VM the PowerShell can’t be invoked.

The fix for this is to add an additional build step that copies the PowerShell to the drops folder on VSTS which means the PowerShell will be transferred to the target VM.

Your final build definition should look like the below

Build definition

and the Copy Files task should be configured like this (note below that /Deploy is the folder in my solution that contains the PowerShell I’ve included for deployment purposes):

Build Step

Once you have done this you will find that the script is now available in the VSTS drops folder and can be copied to the VMs which allows you to execute it via the Release Management action.

Wrapping up

Once I had these changes in place and had made some minor path / project name tweaks to match my project I could run the process end-to-end.

The one final item I’ll call out here is the default deployment location of the solution on the target VM ends up being the wwwroot of your inetpub folder with a subfolder named ProjectName_deploy. If you map this to an Application in IIS you should be good to go :).

Update – 8 December 2016

After I’d been running happily for a while my Release started failing. It turns out my target VMs were cycled and moved to different public IP addresses. As the WinRM certificate had the old IP address in it, the remote calls failed.

I found a great blog post on how to rectify this situation though:

Happy days!


Create New Folder Hierarchies For TFS Projects using Git SCM

If you’re like a lot of people who’ve worked heavily with TFS, you may not have spent much time working with Git or any of its DVCS brethren.

Firstly, a few key things:

1. Read and absorb the tutorial on how best to work with Git from the guys over at Atlassian.

2. Install the Visual Studio 2012 Update 2 (currently in CTP, possibly in RTM by the time you read this) – grab just vsupdate_KB2707250.exe.

3. Install the Git Tools for Visual Studio

4. Install the most recent Git client software from

5. Set your default Visual Studio Source Control provider to be “Microsoft Git Provider”.

6. Setup an account on Team Foundation Service, or if you’re lucky enough maybe you can even do this with your on-premises TFS instance now…

7. Make sure you enable and set alternative credentials in your TFS profile:


8. Setup a project that uses Git for source control.

At this stage you have a couple of options – you can clone the repository using Visual Studio’s Git support


OR you can do it right from the commandline using the standard Git tooling (make sure you’re at a good location on disk when you run this command):

git clone milhouse
Cloning into 'milhouse'...
Username for '': homer
Password for '':
Warning: You appear to have cloned an empty repository.

I tend to set up a project directory hierarchy early on, and with Git support in Visual Studio I’d say it’s even more important as you don’t have a Source Control Explorer view of the world and Visual Studio can quickly create a mess when adding lots of projects or solution elements. The challenge is that (as of writing) Git won’t support empty folders and the easiest workaround is to create your folder structure and drop an empty file into each folder.
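That scaffolding can be scripted in one hit – a sketch with made-up folder names:

```shell
# Scaffold a hierarchy and drop a placeholder file in every folder so Git
# will track the otherwise-empty directories (folder names are examples).
mkdir -p src docs tools
for d in src docs tools; do
  touch "$d/.gitkeep"
done
```

The empty .gitkeep files exist purely so Git has something to track; the name itself has no special meaning to Git.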

Now this is where Visual Studio’s Git tools won’t help you – they have no concept of files / folders held outside of Visual Studio solutions so you will need to use the Git tools at the commandline to effect this change. Once you have your hierarchy set up with empty files in each folder, at a command prompt change into the root of your local repository and then do the following.

git add -A
git commit -m "Hmmmm donuts."

Now, at this point, if you issue “git push” you may experience a problem and receive this message:

No refs in common and none specified; doing nothing.
Perhaps you should specify a branch such as ‘master’.
Everything up-to-date.

Which apart from being pretty good English (if we ignore ‘refs’) is pretty damn useless.

How to fix? Like this:

git push origin master

This explicitly names the remote and branch (the empty repository has no upstream ref yet) and your newly populated hierarchy should be pushed to TFS, er Git, er TFS. You get the idea. Then the others on your team are able to clone the repository (or perform a pull) and will receive the updates.


Update: A big gotcha that I’ve found, and it results in a subtle issue, is this: if you have a project that has spaces in its title (i.e. “Big Web”) then Git happily URL encodes that and will write the folder to disk in the form “Big%20Web” which is all fine and dandy until you try to compile anything in Visual Studio. Then you’ll start getting CS0006 compilation errors (unable to find metadata files). The fix is to override the target when cloning the repository to make sure the folder is validly named (in my example above this checks out the “bart” project to the local “milhouse” folder).


Deploy Umbraco using Octopus Deploy

Every once in a while you come across a tool that really fits its purpose and delivers good value for not a whole lot of effort. I’m happy to say that I think Octopus Deploy is one such tool! While Octopus isn’t the first (or most mature) player in this space it hits a sweet spot and really offers any sized team the ability to get on board the Continuous Deployment / Delivery bandwagon.

My team is using it to deploy a range of .Net websites and we’re considering it for pushing database changes too (although these days a lot of what we build utilises Entity Framework so we don’t need to push that many DB scripts about any more) and one thing we’ve done a lot of is deploy Umbraco 4 sites.

It’s About The Code

One important fact to get out here is that I’m only going to talk about how Octopus will help you deploy .Net code changes. While you can generate SQL scripts and deploy them using Octopus (and ReadyRoll perhaps), Umbraco development has little to do with schema change and everything to do with instance data change. This is not an easy space to be in – more so with larger websites – and even Umbraco has found it hard to solve despite producing Courier specifically for this challenge. All this being said, I’m sure if you spend time working with SQL Data Compare you can come up with a database deployment step using scripts.

Setting It Up

Before you start Umbraco deployments using Octopus you need to make a decision about what to deploy each time and then modify your target.

When developing with Umbraco you will have a “media”, an “umbraco” and an “umbraco_client” folder in your solution folder but not necessarily included in your Visual Studio solution. These three folders will also be present on your target deployment server and in order to leverage Octopus properly you need to manage these three folders appropriately.

Media folder

This folder holds files that are uploaded by CMS users over time. It is rare that you would want to take media from a development environment and push it to any other environment other than on initial deployment. If you do deploy it each time then your deployment will be (a) larger and (b) more challenging to deploy (notice I didn’t say impossible). You’ll also need to deal with merging of media “meta data” in the Umbraco CMS you’re deploying to (you’re back at Courier at this stage).

Regardless of whether you want to push media or not you will need to deal with how you treat the media folder on your target server – Octopus can automatically re-map your IIS root folder for your website to your new deployment so you’ll need to write Powershell to deal with this (and merge content if required).

Our team’s process is to not transfer media files via Octopus and we have solved the media folder problem by creating the folder as a Virtual Directory in IIS on the target web server. As long as the physical folder has the right permissions you will have no problems with this approach. The added benefit here is that when Octopus Deploy remaps your IIS root folder to a new deployment the media is already in place and not affected at all.

Umbraco folders

The two Umbraco folders are required for the CMS to function as expected. While some of you might make changes internally to these folders I’d recommend you revisit your reasons for doing so and see if you can’t make these two folders static and simply re-deploy the Umbraco binaries in your Octopus package.

There are a couple of ways to proceed with these folders – you can choose to redeploy them each time or you can treat them as exceptions and, as with the media folder, you can create Virtual Directories for them under IIS.

If you want to redeploy them as part of your package you will need to do a few things:

  1. Create a small stub “” that is empty or that contains a single file (or similar) in your Visual Studio solution.
  2. Write some Powershell in your PostDeploy.ps1 file that unzips those folders into the right place on your target web server.
  3. In your build script (on your build server, right?) utilise an appropriate MSBuild extension (like the trusty MSBuildCommunityTasks) to zip the two folders into a zip that replaces the stub you created in (1).
  4. Run your build in Release mode (required to trigger OctoPack) which will trigger the packaging of your outputs including the new zip from (1).
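As a sketch of step 2, assuming the stub was replaced at build time by an archive I’ll call “UmbracoFolders.zip” (an invented name), and using the Shell COM object since Expand-Archive didn’t exist in pre-5.0 PowerShell:

```powershell
# PostDeploy.ps1 - unpack the Umbraco folders next to the deployed site.
# "UmbracoFolders.zip" is an illustrative name for the stub replaced at build time.
$here    = Split-Path -Parent $MyInvocation.MyCommand.Path
$zipPath = Join-Path $here "UmbracoFolders.zip"

# Unzip via the Shell COM object (works on vanilla Windows Server)
$shell    = New-Object -ComObject Shell.Application
$zipItems = $shell.NameSpace($zipPath).Items()
$shell.NameSpace($here).CopyHere($zipItems, 0x14)   # 0x14 = overwrite existing, no UI
```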

On deployment you should see your Powershell execute and unpack your Umbraco folders into the right place!

Alternatively, you can choose not to redeploy these two folders each time – if this suits (and it does for us) then you can use the same approach as with the media folder and simply create two Virtual Directories. Once you’ve deployed, everything will work as expected.

It’s Packaged (and that’s a Wrap)

So there we have a simple scenario for deploying Umbraco via Octopus Deploy – I’m sure there are more challenging scenarios than the above but I bet with a little msbuild and Powershell-foo you can work something out.

I hope you’ve found this post useful and I’d recommend checking out the Octopus Deploy Blog to see what great work Paul does at taking feedback on board for the product.


The Terrible Truth About Version

Let me start by saying that if you think this is going to be a post about how bad most “v1” software is then you will be sorely disappointed and you should move on.

What I am going to talk about is fairly similar to Scott Hanselman’s blog on semantic versioning and the reasons you should be using versioning in your software.

Care Factor

Firstly, you may ask yourself, “why should I care about versioning?”

This is a fair question.

Perhaps you don’t need to care about it, but I suspect more accurately you’ve never needed to care about it.

Through the custom software projects I’ve been involved in during my career I’ve seen pretty much every combination you could think of for how software can be built, released and maintained. In the majority of cases proper versioning has taken a back seat to almost every other aspect of delivery (assuming versioning was even thought of at all). Certainly the discussions I’ve often participated in around exactly what version was deployed where have tended to be fairly excruciating, as they have typically arisen when there is a problem that needs root cause analysis and ultimately a fix.

Examples of bad practice I’ve seen are:

  • Using file create date / time to determine which assembly copy was newer (of course this approach was undocumented).
  • Someone screwed up the versioning so the newer builds had an older version number (and nobody ever bothered to fix it).
  • Everything is versioned with a mix of “F5” developer machine builds and build server builds being deployed randomly to fix issues. This one resulted in “race conditions” where an assembly may get updated from more than one source (and you never knew which one).
  • New build server = zero out all counters = ground-hog day versioning (or “what was old is now new”).
  • New branch, old version number scheme. Builds with parallel version numbers. Oops.
  • Version 5 of ReSharper had assembly (and MSI) versions that bore a tenuous relationship to the actual official “version”. The 5.1.2 release notes have an interesting responses thread which I would put as recommended reading on how versioning affects end users.

The bottom line is that versioning (when used properly) allows us to identify a unique set of features and configurations that were in use at any given time.  In .Net it also gives us access to one of the more powerful features of the Global Assembly Cache (GAC) as we can deploy multiple versions of the same assembly side-by-side!

If this seems self-explanatory that’s because it is.

Still With Me?

What I’ve written so far could apply equally to any technology used in software delivery but now I’m going to cover what I think is the root cause of the lack of care about versioning in the .Net landscape:

It’s too easy to not care about versioning with .Net.

Thanks to Microsoft we get a free AssemblyInfo file along for the ride in each new project we create.  It wonderfully provides us with this template:

using System.Reflection;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

// General Information about an assembly is controlled through the following
// set of attributes. Change these attribute values to modify the information
// associated with an assembly.
[assembly: AssemblyTitle("YourApplicationName")]
[assembly: AssemblyDescription("")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("")]
[assembly: AssemblyProduct("YourApplicationName")]
[assembly: AssemblyCopyright("Copyright © 2012")]
[assembly: AssemblyTrademark("")]
[assembly: AssemblyCulture("")]

// Setting ComVisible to false makes the types in this assembly not visible
// to COM components. If you need to access a type in this assembly from
// COM, set the ComVisible attribute to true on that type.
[assembly: ComVisible(false)]

// The following GUID is for the ID of the typelib if this project is exposed to COM
[assembly: Guid("1266236f-9358-40d0-aac7-fe41ae102f04")]

// Version information for an assembly consists of the following four values:
// Major Version
// Minor Version
// Build Number
// Revision
// You can specify all the values or you can default the Build and Revision Numbers
// by using the '*' as shown below:
// [assembly: AssemblyVersion("1.0.*")]
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]

The problem is the last two lines. I wonder how many people bother to even look at this file?

Let’s try a little experiment – how many matches for the default entry for version in the AssemblyInfo file can we find on GitHub? Oh, about 62,000.

Now this isn’t necessarily a problem but it indicates the scale of the challenge! To be fair to those on GitHub, all/some/none of those matches may well be overridden on a build server (I hope so!) using one of the multitude of techniques available (if you’d like a few options let me know and I’ll do another post on this).

Let’s solve the problem by deleting those two lines – sounds like a plan? Get rid of the pesky version number completely. Does the application still compile? Yup.

So what version do we get left with? Let’s take a look at the resulting details.

You are kidding right?!

What The?! Version! We’ve made things even more broken than they already were. That’s not a strategy!

Some Suggestions

Fairly obviously we are somewhat stuck with what we have here – it is not illegal at compile time to have no explicit version number defined (the assembly simply ends up as version We can’t modify the default behaviour of csc.exe or vbc.exe to enforce an incremented version number…

Assuming you’re using good practices and have a build server running your builds you can drop in a couple of unit tests that will allow you to ensure any compiled assembly doesn’t have either of our problematic versions. Here’s a super simple example of how to do this. Note this needs to run on your build server – these tests will fail on developer machines if you aren’t checking in updated AssemblyInfo files (which I would recommend against).

[TestMethod]
public void ValidateNotDefaultVersionNumber()
{
    // The version the default Visual Studio template ships with
    const string defaultVersionNumber = "";
    var myTestPoco = new MyPoco();
    var assembly = Assembly.GetAssembly(myTestPoco.GetType());
    Assert.AreNotEqual(defaultVersionNumber, assembly.GetName().Version.ToString());
}

[TestMethod]
public void ValidateNotEmptyVersionNumber()
{
    // The version you get when no AssemblyVersion attribute is present at all
    const string noDefinedVersionNumber = "";
    var myTestPoco = new MyPoco();
    var assembly = Assembly.GetAssembly(myTestPoco.GetType());
    Assert.AreNotEqual(noDefinedVersionNumber, assembly.GetName().Version.ToString());
}

Remember to update the tests as you increment major / minor version numbers.

The goal is to ensure your build server will not emit assemblies that have version numbers matching those held in source control. We’re doing this to enforce deployment of server builds and to make a developer build obvious from its version number. I personally set builds to never check updated AssemblyInfo files back into source control for this very reason.

Make sure you *do* label builds in source control however. If you don’t you may have challenges in the longer term in being able to retrieve older versions unless you have a very large “drops” storage location :).

Early on in planning your project you should also decide on a version number scheme and then stick to it. Make sure it is documented and supported in your release management practices and that everyone on the team understands how it works. It will also ease communication with your stakeholders when you have known version numbers to discuss (no more “it will be in the next release”).

Finally, don’t let the version number hide away. Ensure it is displayed somewhere in the application you’re building. It doesn’t need to be on every page or screen but it should be somewhere that a non-Admin user can easily view.

So, there we go – hopefully a few handy tips that will have you revisiting the AssemblyInfo files in your projects.

Happy versioning!


Dr. Script or: How I Learned to Stop Worrying and Love Powershell

Powershell has been with us now since late 2006 but my experience is that widespread understanding and use of it is still very restricted within the .Net developer community. If you’re a Windows administrator, operator or release manager I’m sure you’re all over it. If your job description doesn’t fit into one of those three groups and you’re not inclined to understand how your software will be deployed or operated then the chances are you don’t know much about Powershell.

I have to say that Powershell’s syntactic differences from the “standard” C# most developers would know make it an easy place to start disliking working with it (for example it’s not “==” for equals, it’s “-eq”, and null becomes $null). I say to those developers: it’s worth persevering because you stand to gain a lot once you get your head around it.
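For instance, a check that reads if (user == null || count != 10) in C# becomes something like:

```powershell
# -eq / -ne / -or instead of == / != / ||, and $null instead of null
# (idiomatic PowerShell actually puts $null on the left: $null -eq $user)
if ($user -eq $null -or $count -ne 10) {
    Write-Output "no user, or count is not 10"
}
```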

If you need to deploy something to test or debug it, or you have an awkward environment to work with (I’m looking at you, SharePoint 2010), then you will save yourself a lot of time by scripting out as much of the manual work as you can. Your release managers and ops teams will also love you if you can provide scripts that allow them to deploy without performing a massive number of manual steps (if you ever want to get to Continuous Deployment nirvana you can’t really avoid working with a scripting language).

I’m currently working on a project that needs 84 IIS websites in a load balanced environment. These sites follow a pattern – think I want to manually configure 84 IIS instances? Right. I have a PowerShell script that will set up each site: it creates a new local Windows login for the Application Pool, sets up the Application Pool (including the user), creates the website (including folders, a holding page and the association with the Application Pool) and sets the various host headers I need. Finally it grants my WebDeploy user rights to deploy to the site. I’ll put a version of that script up in a future post.
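As a taster, the core of that per-site setup looks something like the sketch below. This is not the exact script from my project — the site name, paths, password and host header are all placeholders — but it shows the WebAdministration cmdlets doing the heavy lifting:

```powershell
# Sketch of the per-site setup - names, paths and the password are
# placeholders, not the production script (that will come in a later post).
Import-Module WebAdministration

$siteName = "ExampleSite01"
$siteRoot = "C:\inetpub\$siteName"
$poolUser = "svc_$siteName"
$poolPass = "SomeGeneratedPassword!"

# Local Windows login for the Application Pool
net user $poolUser $poolPass /add /expires:never

# Application Pool running as that specific user (identityType 3 = SpecificUser)
New-WebAppPool -Name $siteName
Set-ItemProperty "IIS:\AppPools\$siteName" -Name processModel `
    -Value @{ userName = $poolUser; password = $poolPass; identityType = 3 }

# Website folder, holding page and the site itself with its host header
New-Item -ItemType Directory -Path $siteRoot -Force | Out-Null
Set-Content "$siteRoot\index.htm" "<html><body>Coming soon</body></html>"
New-Website -Name $siteName -PhysicalPath $siteRoot `
    -HostHeader "site01.example.com" -ApplicationPool $siteName
```

Wrap that in a loop over your 84 site definitions and the whole farm builds itself.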

On the same project we’ve used PowerShell and the Amazon Web Services .Net SDK to provide tooling for us to push database backups from SQL Server to S3 (what, no RDS for SQL Server yet?). That’s another trick – you get full access to .Net in PowerShell simply by pulling in a reference to the right .Net assembly.
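The general shape of that trick looks like this. Note the SDK install path and the factory/request calls below reflect the v1-era AWS SDK for .Net and are illustrative only — check the SDK documentation for the exact API in your version:

```powershell
# Pull in a .Net assembly and you get its full API inside PowerShell.
# SDK path, keys, bucket and file names are all placeholders.
Add-Type -Path "C:\Program Files (x86)\AWS SDK for .NET\bin\AWSSDK.dll"

$client = [Amazon.AWSClientFactory]::CreateAmazonS3Client("ACCESS_KEY", "SECRET_KEY")

$request = New-Object Amazon.S3.Model.PutObjectRequest
$request.WithBucketName("my-backup-bucket").WithFilePath("D:\Backups\MyDb.bak") | Out-Null

$response = $client.PutObject($request)
```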

Anyway, I could bang on about why you need to learn PowerShell if you haven’t, but I’ll pick up this thread in a future post when I provide a sample script for setting up IIS sites.

On a side note I read about the death of Adam Yauch (MCA) from the Beastie Boys on May 4th – he’d been battling cancer for the last three years and most certainly didn’t deserve to put down the mic at 47. ¬†This one’s for him.

Tagged , , , ,

Easy Testing Of Your Web.Config Transformations

One of the powerful features of ASP.Net 4.0 was the introduction of web.config transformations, which meant you could now do with ASP.Net out-of-the-box what you would previously have done with some form of custom XSLT transform in your build process. What isn’t so easy is testing the output of those transformations.
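As a quick refresher, a transform file such as web.Release.config layers changes over web.config at package time. A minimal example (the connection string name and values are placeholders):

```xml
<!-- web.Release.config: a minimal example transform. The connection string
     name and server values are placeholders for illustration. -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="MyDb"
         connectionString="Server=prod-sql;Database=MyDb;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
  <system.web>
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>
```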

One option is the simple online web.config tester from the guys over at AppHarbor. While that’s great, personally I don’t want to round-trip my web.config files over the Net just to test something I should be able to do locally. After some playing I found a way to test locally utilising msbuild with the right parameters.

The one proviso to this simple test working is that you have successfully compiled the code for your web application (either via the msbuild command line or inside Visual Studio). The test will fail if the binaries or other inputs for your package aren’t available.

All you need to do is issue this command line:

MSBuild.exe YourProject.csproj /T:Package /P:DeployOnBuild=True;Configuration=YourConfiguration

You will now find a set of folders in the ‘obj’ folder of the project you targeted – dig through them and you will find a “TransformWebConfig” sub-folder containing the output of your transform.

Happy Days!


New in Visual Studio 2012 is the ability to “Preview Transform” on configuration files that utilise the above technique. Open your solution in Visual Studio, expand the transformation node of your config file, select the transform to review and choose “Preview Transform” from the menu. Grab a look at screenshots either at Hanselman’s blog or here.

Tagged , , ,

Configure Mercurial Pull with HTTP Authentication for Jenkins on Windows

As part of the journey our team is going on in 2012 I am looking to migrate our processes and tools to a more scalable and maintainable state. One of the first items to look at was the replacement of that early cornerstone of CI – CruiseControl. Our existing server was aging and ran on a platform without Azure SDK support. To resolve this I moved to a new machine and a new CruiseControl build, only to discover that the Mercurial features don’t work.

After an initial clone the CC builds failed – having spent time investigating, I decided to see what else was about that would allow us to do centralised builds. Bearing in mind this is a transition state, I didn’t want to invest a huge amount of time and money in a solution, and I found the Jenkins CI offering online. As a bonus it has support (via plugins) for Mercurial and MSBuild, so it seemed to fit the bill.


As is always the way, things didn’t quite work as I wanted. Our existing build setup utilises HTTP authentication rather than SSH to connect to our hosted Mercurial repositories, and when I set up new Jenkins builds I saw that each build would clone the entire remote repository again – not desirable in some instances given their size. This is what I saw at the top of my build console log:

Started by user anonymous
Building in workspace E:\Program Files (x86)\Jenkins\jobs\XXXXXXXXX\workspace
[workspace] $ hg.exe showconfig paths.default
ERROR: Workspace reports paths.default as
which looks different than
so falling back to fresh clone rather than incremental update

So I was scratching my head about this – we’ve always used the inline auth approach, but it looked like in this instance the Jenkins Mercurial plugin (or the Mercurial client) decided that the inline authentication made the source different enough that it wouldn’t even attempt a pull.

Digging around I discovered that the hg command line supports “ini” files to set global parameters. I could theoretically have just set this in the baseline ini file that comes with the Mercurial client, but as that file warns, “any updates to your client software will remove these settings.”

So, how to resolve it? Here are the steps I followed:

1. Create a new non-admin windows login with a valid password.

2. Grant this login control of the Jenkins folder (and any others that Jenkins may read / write / update).

3. Open up Windows Services and change the Jenkins service logon to be the new logon from (1).

4. Create a .hgrc file in the new windows login’s “C:\Users\<username>\” folder with these contents:

[auth]
bitbucket.prefix =
bitbucket.username = AAAAAAAA
bitbucket.password = PPPPPPPP

5. Restart the Jenkins Windows Service.
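If you want to script the Windows account side of this rather than clicking through the UI, steps 1–3 and 5 can be done with built-in tools. A sketch — the account name, password and Jenkins path are placeholders:

```powershell
# Script the service account setup - account name, password and the Jenkins
# path are placeholders; adjust for your environment.
$user = "jenkinssvc"
$pass = "SomeStrongPassword!"
$jenkinsHome = "E:\Program Files (x86)\Jenkins"

net user $user $pass /add /expires:never              # 1. non-admin local login
icacls $jenkinsHome /grant "${user}:(OI)(CI)M" /T     # 2. modify rights on the Jenkins folder
sc.exe config Jenkins obj= ".\$user" password= $pass  # 3. run the service as that login
Restart-Service Jenkins                               # 5. restart to pick up the change
```

Step 4 (the .hgrc file) still needs to land in the new login’s profile folder, which won’t exist until the account has logged on or the service has started at least once.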

Now when you run your build you will see a much more expected sight:

Started by user anonymous
Building in workspace E:\Program Files (x86)\Jenkins\jobs\XXXXXXXX\workspace
[workspace] $ hg.exe showconfig paths.default
[workspace] $ hg.exe pull --rev default
[workspace] $ hg.exe update --clean --rev default
0 files updated, 0 files merged, 0 files removed, 0 files unresolved

Hopefully this post will save you some time if you’re trying to achieve the same sort of thing for your team.

Tagged , , ,