Category Archives: Release Management

Moving from Azure VMs to Azure VM Scale Sets – Runtime Instance Configuration

In my previous post I covered how you can move from deploying a solution to pre-provisioned Virtual Machines (VMs) in Azure to a process that allows you to create a custom VM Image that you deploy into VM Scale Sets (VMSS) in Azure.

As I alluded to in that post, one item we will need to take care of in order to truly move to a VMSS approach using a VM image is to remove any local static configuration data we might bake into our solution.

There are a range of options you can move to when going down this path, from solutions you custom build to running services such as Hashicorp’s Consul.

The environment I’m running in is fairly simple, so I decided to focus on a simple custom build. The remainder of this post is covering the approach I’ve used to build a solution that works for me, and perhaps might inspire you.

I am using an ASP.Net Web API as my example, but I am also using a similar pattern for Windows Services running on other VMSS instances – just the location where your startup code goes will be different.

The Starting Point

Back in February I blogged about how I was managing configuration of a Web API I was deploying using VSTS Release Management. In that post I covered how you can use the excellent Tokenization Task to create a Web Deploy Parameters file that can be used to replace placeholders on deployment in the web.config of an application.

My sample web.config is shown below.

The problem with this approach when we shift to VM Images is that these values are baked into the VM Image which is the build output, which in turn can be deployed to any environment. I could work around this by building VM Images for each environment to which I deploy, but frankly that is less than ideal and breaks the idea of “one binary (immutable VM), many environments”.

The Solution

I didn’t really want to go down the route of service discovery using something like Consul, and I really only wanted to use Azure’s default networking setup. This networking requirement meant no custom private DNS I could use in some form of configuration service discovery based on hostname lookup.

…and…. to be honest, with the PaaS services I have in Azure, I can build my own solution pretty easily.

The solution I did land on looks similar to the below.

  • Store runtime configuration in Cosmos DB and geo-replicate this information so it is highly available. Each VMSS setup gets its own configuration document which is identified by a key-service pair as the document ID.
  • Leverage a read-only Access Key for Cosmos DB because we won’t ever ask clients to update their own config!
  • Use Azure Key Vault to store the Cosmos DB Account and Access Key that can be used to read the actual configuration. Key Vault is regionally available by default so we’re good there too.
  • Configure an Azure AD Service Principal with access to Key Vault to allow our solution to connect to Key Vault.

I used a conventions-based approach to configuration, so that the whole process works based on the VMSS instance name and the service type requesting configuration. You can see this in the code below based on the URL being used to access Key Vault and the Cosmos DB document ID that uses the same approach.

The resulting changes to my Web API code (based on the earlier web.config sample) are shown below. This all occurs at application startup time.

I have also defined a default Application Insights account into which any instance can log should it have problems (which includes not being able to read its expected Application Insights key). This is important as it allows us to troubleshoot issues without needing to get access to the VMSS instances.

Here’s how we authorise our calls to Key Vault to retrieve our initial configuration Secrets (called on line 51 of the above sample code).
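The sample code itself isn’t reproduced in this archive (the original is the application’s startup code), but the underlying client-credentials flow looks roughly like the following PowerShell sketch; the tenant, client ID, secret, vault and secret names are all placeholders:

# Sketch only: acquire an Azure AD token for Key Vault using a Service Principal
# (tenant, client and vault values below are placeholders, not the real solution's)
$tenantId     = "00000000-0000-0000-0000-000000000000"
$clientId     = "11111111-1111-1111-1111-111111111111"
$clientSecret = "service-principal-secret"

$token = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/token" `
    -Body @{
        grant_type    = "client_credentials"
        client_id     = $clientId
        client_secret = $clientSecret
        resource      = "https://vault.azure.net"
    }

# Read a Secret using the bearer token (conventional secret name assumed)
$secretUri = "https://myvmssconfig.vault.azure.net/secrets/webapiprd-cosmosdb-readonly-key?api-version=2016-10-01"
$secret = Invoke-RestMethod -Method Get -Uri $secretUri `
    -Headers @{ Authorization = "Bearer $($token.access_token)" }
$secret.value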

My goal was to make configuration easily manageable across multiple VMSS instances which requires some knowledge around how VMSS instance names are created.

The basic details are that they consist of a hostname prefix (based on what you input at VMSS creation time) that is appended with a base-36 (hexatrigesimal) value representing the actual instance. There’s a great blog post from Guy Bowerman at Microsoft that covers this in detail so I won’t reproduce it here.

The final piece of the puzzle is the Cosmos DB configuration entry which I show below.

The ‘id’ field maps to the VMSS instance prefix that is determined at runtime based on the name you used when creating the VMSS. We strip the trailing 6 characters to remove the unique component of each VMSS instance hostname.
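Purely as an illustration of that convention (not the actual Web API code), the derivation looks something like this in PowerShell, with hypothetical secret and document names:

# Illustration of the naming convention only (names are hypothetical)
$hostname    = $env:COMPUTERNAME                                # e.g. webapiprd000001
$vmssPrefix  = $hostname.Substring(0, $hostname.Length - 6)     # strip the unique instance suffix -> webapiprd
$serviceType = "webapi"

# Conventional Key Vault secret names and the Cosmos DB configuration document id
$cosmosAccountSecretName = "$vmssPrefix-cosmosdb-account"
$cosmosKeySecretName     = "$vmssPrefix-cosmosdb-readonly-key"
$configDocumentId        = "$vmssPrefix-$serviceType"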

The outcome of the three components (code changes, Key Vault and Cosmos DB) is that I can quickly add or remove VMSS groups in configuration, change where their configuration data is stored by updating the Key Vault Secrets, and even update running VMSS instances by changing the configuration settings and then forcing a restart on the VMSS instances, causing them to re-read configuration.

Is the above the only or best way to do this? Absolutely not 🙂

I’d like to think it’s a good way that might inspire you to build something similar or better 🙂

Interestingly, in getting to this stage I’ve also realised there might be some value in considering a move of this solution to Service Fabric in future, though I am more inclined to shift to Containers running under the control of an orchestrator like Kubernetes.

What are your thoughts?

Until the next post!


Moving from Azure VMs to Azure VM Scale Sets – VM Image Build

I have previously blogged about using Visual Studio Team Services (VSTS) to securely build and deploy solutions to Virtual Machines running in Azure.

In this and the following posts I am going to take the existing build process I have and modify it so I can make use of VM Scale Sets to host my API solution. This switch is to allow the API to scale under load.

My current setup is very much fit for purpose for the limited trial it’s been used in, but I know I’ll see at least 150 times the traffic when I am running at full scale in production, and while my trial environment barely scratches the surface in terms of consumed resources, I don’t want to have to capacity plan to the nth degree for production.

Shifting to VM Scale Sets with autoscale enabled will help me greatly in this respect!

Current State of Affairs

Let’s refresh ourselves with what is already in place.

Build

My existing build is fairly straightforward – we version the code (using a PowerShell script), restore packages, build the solution and then finally make sure all our artifacts are available for use by the Release Management process.

Existing build process

The output of this build is a Web Deploy package along with a PowerShell DSC module that configures the deployment on the target VM.

Release Management

I am using multiple Environments for Release Management to manage transformations of the Web Deploy Parameters file along with the Azure Subscription / Resource Group being deployed to. The Tasks in each Environment are the same though.

My Release Management Tasks (as shown below) open the NSG to allow DSC remote connections from VSTS, transform the Web Deploy Parameters file, find the VMs in a particular Azure Resource Group, copy the deployment package to each VM, run the DSC script to install the solution, before finally closing the NSG again to stop the unwashed masses from prying into my environment.

Existing release process

All good so far?

What’s the goal?

The goal is to make the minimum amount of changes to existing VSTS and deployment artifacts while moving to VM Scale Sets… sounds like an interesting challenge, so let’s go!

Converting the Build

The good news is that we can retain the majority of our existing Build definition.

Here are the items we do need to update.

Provisioning PowerShell

The old deployment approach leveraged PowerShell Desired State Configuration (DSC) to configure the target VM and deploy the custom code. The DSC script to achieve this is shown below.
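The original DSC configuration isn’t reproduced in this archive, but in rough outline it does something like the following sketch (feature install plus a Web Deploy sync, with hypothetical package paths):

# Rough outline only; not the original configuration
Configuration WebApiServer
{
    Node "localhost"
    {
        # Make sure IIS and ASP.NET are installed
        WindowsFeature WebServer
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }

        WindowsFeature AspNet45
        {
            Ensure = "Present"
            Name   = "Web-Asp-Net45"
        }

        # Push the Web Deploy package onto the local IIS instance (hypothetical paths)
        Script DeployWebApi
        {
            SetScript  = {
                & "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" `
                    -verb:sync `
                    -source:package="C:\temp\WebApi.zip" `
                    -dest:auto `
                    -setParamFile:"C:\temp\WebApi.SetParameters.xml"
            }
            TestScript = { $false }    # always (re)deploy in this simplified sketch
            GetScript  = { @{ Result = "WebApiDeploy" } }
        }
    }
}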

The challenge with the above PowerShell is that it assumes the target VM has been configured to allow WinRM / DSC to run. In our updated approach of creating a VM Image this presents some challenges, so I reworked the script so that it no longer requires DSC. The result is shown below.
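Again, the reworked script itself isn’t shown here; the DSC-free equivalent is just plain PowerShell that Packer can execute while building the image, roughly along these lines (paths are hypothetical):

# DSC-free provisioning sketch run by Packer while building the image (hypothetical paths)
Install-WindowsFeature -Name Web-Server, Web-Asp-Net45 -IncludeManagementTools

# Deploy the Web Deploy package that was copied into the image by Packer
& "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" `
    -verb:sync `
    -source:package="C:\temp\WebApi.zip" `
    -dest:auto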

As an aside, we could also drop the use of the Parameters file here too. As we’ll see in another post, we need to make the VM Image stateless, so any local web.config changes that are environment-specific are problematic and are best excluded from the resulting image.

Network Security Group Script

In the new model, which prepares a VM Image, we no longer need the Azure PowerShell script that opens / closes the Network Security Group (NSG) on deployment, so it’s removed in the new process.

No more use of Release Management

As the result of our Build is a VM Image we no longer need to leverage Release Management either, making our overall process much simpler.

The New Build

The new Build definition is shown below – you will notice the above changes have been applied, along with the addition of two new Tasks. The aspect of this I am most happy about is that our core build remains mostly unchanged – we only had to add two Tasks and change one PowerShell script to make this work.

New build process

Let’s look at the new Tasks.

Build Machine Image

This Task utilises Packer from Hashicorp to prepare a generalised Windows VM image that we can use in a VM Scale Set.

The key item to note is that you need an Azure Subscription in which a temporary VM and the final generalised VHD can be created, so that Packer can build the baseline image for you.

New Build Packer Task

You will notice we are using the existing artifacts staging directory as the source of our configuration PowerShell (DeployVmSnap.ps1), which is used by Packer to configure the host once the VM has been created from an Azure Gallery Image.

The other important item here is the use of the output field. This will contain the fully qualified URL in blob storage where the resulting packed image will reside. We can use this in our next step.

Create VM Image Registration

The last Task I’ve added is to invoke an Azure PowerShell script, which is just a PowerShell script, but with the necessary environmental configuration to allow me to execute Cmdlets that interact with Azure’s APIs.

The result of the previous Packer-based Task is a VHD sitting in a Blob Storage account. While we can use this in various scenarios, I am interested in ensuring it is visible in the Azure Portal and also in allowing it to be used in VM Scale Sets that utilise Managed Disks.

The PowerShell script is shown below.
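The original script isn’t embedded in this archive, but its core is the AzureRM Managed Image Cmdlets, roughly as follows (parameter names are illustrative; the VHD URL comes from the Packer Task’s output variable):

# Sketch: register a generalised VHD as a Managed VM Image (hypothetical parameter names)
param(
    [string]$resourceGroupName,
    [string]$imageName,
    [string]$location,
    [string]$osVhdUri      # fully qualified blob URL output by the Packer Task
)

$imageConfig = New-AzureRmImageConfig -Location $location
$imageConfig = Set-AzureRmImageOsDisk -Image $imageConfig `
                  -OsType Windows -OsState Generalized -BlobUri $osVhdUri

New-AzureRmImage -ResourceGroupName $resourceGroupName `
                 -ImageName $imageName -Image $imageConfig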

.. and here is how it is used in the Task in VSTS..

New build VM Image

You can see how we have utilised the Packer Task’s output parameter as an input into this Task (it’s in the “Script Arguments” box at the bottom of the Task).

The Result

Once we have this configured and running the result is a nice crisp VM Image that can be used in a VM Scale Set. The below screenshot shows you how this looks in my environment – I wrapped the Azure Storage Account where the VHDs live, along with the VM Image registrations in the same Resource Group for cleaner management.

Build Output

There are still some outstanding items we need to deal with, specifically: configuration management (our VM Image has to be stateless) and VM Scale Set creation using the Image. We will deal with these two items in the following posts.

For now I hope you have a good grasp on how you can convert an existing VSTS build that deploys to existing VMs to one that produces a generalised VM Image that you can use either for new VMs or in VM Scale Sets.

Until the next post.

🙂

Want to see how I dealt with instance configuration? Then have a read of my next post in this series.


Secure your VSTS Release Management Azure VM deployments with NSGs and PowerShell

One of the neat features of VSTS’ Release Management capability is the ability to deploy to Virtual Machines hosted in Azure (amongst other environments), which I previously walked through setting up.

One thing that you need to configure when you use this deployment approach is an open TCP port to the Virtual Machines to allow remote access to PowerShell and WinRM on the target machines from VSTS.

In Azure this means we need to define a Network Security Group (NSG) inbound rule to allow the traffic (sample shown below). As we are unable to limit the source address (i.e. where VSTS Release Management will call from) we are stuck creating a rule with a Source of “Any” which is less than ideal, even with the connection being TLS-secured. This would probably give security teams a few palpitations when they look at it too!

Network Security Group

We might be able to determine a source address based on monitoring traffic, but there is no guarantee that the Release Management host won’t change at some point which would mean our rule blocks that traffic and our deployment breaks.

So how do we fix this in an automated way with VSTS Release Management and provide a secured environment?

Let’s take a look.

The Fix

The fix, it turns out, is actually quite straightforward.

As the first step you should go to the existing NSG and flip the inbound rule from “Allow” to “Deny”. This will stop the great unwashed masses from being able to hit TCP port 5986 on your Virtual Machines immediately.

As a side note… if you think nobody is looking for your VMs and open ports, try putting a VM up in Azure and leaving RDP (3389) open to “Any” and see how long it takes before you start seeing authentication failures in your Security event log due to account enumeration attempts.

Modify Project Being Deployed

We’re going to leverage an existing Release Management capability to solve this issue, but first we need to provide a custom PowerShell script that we can use to manipulate the NSG that contains the rule we are currently using to block inbound traffic.

This PowerShell script is just a simple wrapper that combines Azure PowerShell Cmdlets to allow us to a) read the NSG b) update the rule we need c) update the NSG, which commits the change back to Azure.
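That script isn’t reproduced here, but working back from the sample arguments shown later in this post it is roughly this shape (a sketch, not the exact original):

# Sketch of the NSG rule toggle script (parameter names match the sample arguments below)
param(
    [string]$resourceGroupName,
    [string]$networkSecurityGroupName,
    [string]$securityRuleName,
    [ValidateSet("Allow","Deny")][string]$allowOrDeny,
    [int]$priority
)

# a) read the NSG and the existing rule
$nsg  = Get-AzureRmNetworkSecurityGroup -Name $networkSecurityGroupName `
            -ResourceGroupName $resourceGroupName
$rule = Get-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name $securityRuleName

# b) update just the Access (and priority) on that rule, keeping everything else as-is
Set-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name $securityRuleName `
    -Access $allowOrDeny -Priority $priority `
    -Direction $rule.Direction -Protocol $rule.Protocol `
    -SourceAddressPrefix $rule.SourceAddressPrefix -SourcePortRange $rule.SourcePortRange `
    -DestinationAddressPrefix $rule.DestinationAddressPrefix -DestinationPortRange $rule.DestinationPortRange | Out-Null

# c) commit the change back to Azure
Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg | Out-Null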

I usually include this script in a Folder called “Deploy” in my project and set the build action to “Copy always”. As a result the file will be copied to the Artefacts folder at build time which means we have access to it in Release Management.

Project Setup

You should run a build with this file included so that it is available in your build artefacts for Release Management to use.

Modify Release Management Definition

Note that in order to complete this step you must have a connection between VSTS and your target Azure Subscription already configured as a Service Endpoint. Typically this needs to be done by a user with sufficient rights in both VSTS and the Azure Subscription.

Now we are going to modify our existing Release Management definition to make use of this new script.

The way we are going to enable this is by using the existing Azure PowerShell Task that we have available in both Build and Release Management environments in VSTS.

I’ve shown a sample where I’ve added this Task to an existing Release Management definition.

Release Management Definition

There is a reason this Task is added twice – once to change the NSG rule to be “Allow” and then once, at the end, to switch it back to “Deny”. Ideally we want to do the “Allow” early in the process flow to allow time for the NSG to be updated prior to our RM deployment attempting to access the machine(s) remotely.

The Open NSG Task is configured as shown.

Allow Script

The Script Arguments should match those given in the sample script above. As a sample we might have:

-resourceGroupName MyTestResourceGroup -networkSecurityGroupName vnet01-nsg 
-securityRuleName custom-vsts-deployments -allowOrDeny Allow -priority 3010

The beauty of our script is that the Close NSG Task is effectively the same, but instead of “Allow” we put “Deny” which will switch the rule to blocking traffic!

Make sure you set the “Close” Task to “Always run”. This way if any other component in the Definition fails we will at least close up the NSG again.

Additionally, if you have a Resource Group Lock in place (and you should for all production workloads) this approach will still work because we are only modifying an existing rule, rather than trying to add / remove it each time.

That’s it!

You can now benefit from VSTS remote deployments while at the same time keeping your environment locked down.

Happy days 🙂


Per-environment config value tokenization for Azure Web Apps using VSTS Release Management

For the majority of the last ten years I’ve been working with delivery of solutions where build and deployment comes from some centralised location.

When Microsoft made InRelease part of TFS as Release Management, I couldn’t wait to use it. Unfortunately in its state at that time the learning curve was quite steep and the immediate value was outweighed by the effort to get up and running.

Roll forward to 2016 and we find Release Management as a modern, web-based feature of Visual Studio Team Services (VSTS). The cherry on the cake is that a lot of the learning curve has dropped away as a result.

In this post I’m going to look at how we can deploy a Web Deploy (or MS Deploy) packaged Web Application to an Azure Web Application and define different deployment environments with varying configurations.

Many people would apply configuration transformations at build time, but in my scenario I want to deploy the same compiled package to multiple environments without the need to recompile anything.

My Challenge

The build definition for my Web Application results in a package that allows it to be deployed to an Azure Web App by Web Deploy. The result is that the web.config configuration file ends up inside a zip file that is transferred to the server for deployment by Web Deploy.

Clearly at this point I don’t have access to the web.config file in the drop folder so I can’t transform it with Release Management. Or can I?!

Using Web Deploy Parameters

Thankfully the design of Web Deploy provides for the scenario I described above through use of either command-line arguments or a specially formatted input file that I will call the “SetParameters” file.

Given this is a first-class feature in the broader Microsoft developer toolkit, I’d expected that there would be a few Tasks in VSTS that I could use to get all of this up and running… I got close, but couldn’t quite get it functioning as I wanted.

Through the rest of this post I will walk you through the setup to get this going.

Note: I am going to assume you have setup Build and Release Management definitions in VSTS already. Your Build should package to deploy to an Azure Web App and the Release Management definition to deploy it.

VSTS Release Management Setup

The first thing to get all of this up and running is to add the Release Management Utilities extension to your subscription. This extension includes the Tokenizer Task which will be key to getting the configuration per-environment up and running.

You also need to define an “Environment” in Release Management for each deployment target you have, which will also be used as a container for environmental configuration items to replace at deployment time. A sample is shown below with two Environments defined.

Environments

We’ll come back to VSTS later, for now, let’s look at the project changes you need to make.

Source Project Changes

For the purpose of this exercise I’m just worrying about web.config changes.

First of all, you need to tokenise the settings you wish to transform. I have provided a sample below that shows how this looks in a web.config. The required format is two underscores on either side of your token placeholder, for example __MyDatabaseKey__.

The next item we need to do is to add a new XML file to our Visual Studio project at the root level. This file should be called “Parameters.xml” and I have included a sample below that shows what we need to add to it if we want to ensure we replace the tokens in the above sample web.config.

You’ll notice one additional item in the file below that isn’t related directly to the web.config above – the IIS Website name that will be used when deployed. I found if I didn’t include this the deployment would fail.

When you add this file, make sure to set the properties for it to a Build Action of “None” and Copy to Output Directory of “Do not copy”.

Note: if you haven’t already done so, you should run a Build so that you have Build Artifacts ready to select in a later step.

Add the Tokenizer to your Release Management Definition

We need now to return to VSTS’ web interface and modify our existing Release Management definition (or create a new one) that adds the Tokenizer utility to the process.

You will need to repeat this so all your environments have the same setup. I’ve shown what my Test environment setup looks like below (note that I changed the default description of the Tokenizer Task).

Release Management Definition

Configuration of the Tokenizer is pretty straightforward at this point, especially if we’ve already run a build. Simply select the SetParameters.xml file your build already produced.

Tokenizer setting

Define values to replace Tokens

This is where we define the values that will be used to replace the tokens at deployment time.

Click on the three dots at the top right of the environment definition and from the menu select “Configuration variables…” as shown below.

Variable Definition

A dialog loads that allows us to define the values that will go into our web.config for this environment. The great thing you’ll note is that you can obfuscate sensitive details (in my example, the key to access the Document DB account). This is non-reversible too – you can’t “unhide” the value and see the plain-text version.

Token Values

We’re almost done!

Explicitly select SetParameters file for deployment

I’m using the 3.* (preview) version of the Deploy Azure App Service Release Management Task, which I have configured as shown.

App Service Task

At this point, if you create a new Release and deploy to the configured environment you will find that the deployed web.config contains the values you specified in VSTS and you will no longer need multiple builds to send the same package to multiple environments.

Happy Days! 🙂


Continuous Deployment of Windows Services using VSTS

I have to admit writing this post feels a bit “old skool”. Prior to last week I can’t remember the last time I had to break out a Windows Service to solve anything. Regardless, for one cloud-based IaaS project I’m working on I needed a simple worker-type solution that was private and could post data to a private REST API hosted on the other end of an Azure VNet Peer.

While I could have solved this problem any number of ways I plumped for a Windows Service, primarily because it will be familiar to developers and administrators at the organisation I’m working with, but I figured if I’m going to have to deploy onto VMs I’m sure not deploying in an old-fashioned way! Luckily we’re already running in Azure and hosting on VSTS so I have access to all the tools I need!

Getting Setup

The setup for this process is very similar to the standard “Deploy to Azure VM” scenario that is very well covered in the official documentation and which I added some context to in a blog post earlier in the year.

Once you have the basics in place (it only takes a matter of minutes to prepare each machine) you can head back here to cover off the changes you need to make.

Note: this process is going to assume you have a Windows Service Project in Visual Studio 2015 that is being built using VSTS’s in-built build infrastructure. If you have other configurations you may need to take different steps to get this to work 🙂

Tweak build artefact output

First we need to make sure that the outputs from our build are stored as artefacts in VSTS. I didn’t use any form of installer packaging here so I needed to ensure my build outputs were all copied to the “drops” folder.

Here is my build definition which is pretty vanilla:

Build Process

The tweak I made was on the Visual Studio build step (step 2) where I defined an additional MSBuild Argument that set the OutputPath to be the VSTS build agent’s artifacts directory which will automatically be copied by the Publish Artifacts step:

Build Update
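For reference, an MSBuild argument of roughly this shape does the job (the exact value I used is captured in the screenshot above; the trailing backslash is conventionally included because OutputPath is a directory):

/p:OutputPath=$(build.artifactstagingdirectory)\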

If I look at a history entry for my CI build and select Artifacts I can see that my Windows Service binary and all its associated assemblies, config files (and importantly Deployment Script) are stored with the build.

Build Artefacts

Now we have the build in the right configuration let’s move on to the deployment.

Deploying a Service

This is actually easier than it used to be :). Many of us would remember the need to package the Windows Service into an MSI and then use InstallUtil.exe to do the install on deployment.

Fear not! You no longer need this approach for Windows Services!

PowerShell FTW!

Yes, that Swiss Army knife comes to the rescue again with the Get-Service, New-Service, Stop-Service and Start-Service Cmdlets.

We can combine these handy Cmdlets in our Deployment script to manage the installation of our Windows Service as shown below.
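The Deployment Script itself isn’t included in this archive, but a minimal sketch using those Cmdlets (service name and paths are hypothetical) looks something like this:

# Sketch: install or update the Windows Service from freshly copied binaries
# (service name and paths below are hypothetical)
$serviceName = "MyWorkerService"
$installPath = "C:\Services\MyWorkerService"
$binaryPath  = Join-Path $installPath "MyWorkerService.exe"
$dropPath    = "C:\temp\*"     # where Release Management copied the build output

$service = Get-Service -Name $serviceName -ErrorAction SilentlyContinue

if ($service -ne $null)
{
    # Stop the running service before overwriting its binaries
    Stop-Service -Name $serviceName -Force
}

New-Item -ItemType Directory -Path $installPath -Force | Out-Null
Copy-Item -Path $dropPath -Destination $installPath -Recurse -Force

if ($service -eq $null)
{
    # First deployment on this machine, so register the service
    New-Service -Name $serviceName -BinaryPathName $binaryPath -StartupType Automatic
}

Start-Service -Name $serviceName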

The Release Management definition remains unchanged – all we had to do was ensure our build outputs were available to copy from the ‘Drop’ folder on the build and that they are copied to C:\temp\ on the target VM(s). Our Deployment Script takes care of the rest!

That’s it! Next time your CI build passes your CD kicks in and your Windows Service will be updated on your target VMs!


Quick Links To Help You Learn About Developing For The Cloud

Unsurprisingly I think the way Cloud Computing is transforming the IT industry is also leading to easier ways to learn and develop skills about the Cloud. In this post I’m going to give a run down on what I think some of the best ways are to start dipping your toe into this space if you haven’t already.

Sign up for a free trial

This is easy AND low cost. Turn up to the sign-up page for most major players and you’ll get free or low-cost services for a timed period. Sure, you couldn’t start the next Facebook at this level, but it will give you enough to start to learn what’s on offer. You can run VMs, deploy solutions, utilise IaaS, PaaS and SaaS offerings and generally kick the tyres of the features of each. At time of writing these are:

Learn the APIs and use the SDKs

Each of Amazon, Azure, Google, Office 365 and Rackspace offer some form of remote programmable API (typically presented as REST endpoints). If you’re going to move into Cloud from traditional hosting or system development practices then starting to learn about programmable infrastructure is a must. Understanding the APIs available will depend on leveraging existing documentation:

If you aren’t a fan of working so close to the wire you can always leverage one of the associated SDKs in the language of your choice:

The great thing about having .Net support is you can then leverage those SDKs directly in PowerShell and automate a lot of items via scripting.

Developer Tool Support

While having an SDK is fine there’s also a need to support developers within whatever IDE they happen to be using. Luckily you get support here too:

Source Control and Release Management

The final piece of the puzzle and one not necessarily tied to the individual Cloud providers is where to put your source code and how to deploy it.

  • Amazon Web Services: You can leverage Elastic Beanstalk for deployment purposes (this is a part of the Visual Studio and Eclipse toolkits). http://aws.amazon.com/elasticbeanstalk/
  • Google App Engine: Depending on language you have a few options for auto-deploying applications using command-line tools from build scripts. Eclipse tooling (covered above) also provides deployment capabilities.
  • Rackspace Cloud: no publicly available information on build and deploy.
  • Windows Azure: You can leverage deployment capabilities out of Visual Studio (probably not the best solution though) or utilise the in-built Azure platform support to deploy from a range of hosted source control providers such as BitBucket (Git or Mercurial), Codeplex, Dropbox (yes, I know), GitHub or TFS. A really strong showing here from the Azure platform! http://www.windowsazure.com/en-us/develop/net/common-tasks/publishing-with-git/

So, there we have it – probably one of the most link-heavy posts you’ll ever come across – hopefully the links will stay valid for a while yet! If you spot anything that’s dead or that is just plain wrong leave me a comment.

HTH.


SharePoint Online 2013 ALM Practices

SharePoint has always been a bit of a challenge when it comes to structured ALM and developer practices, which is something Microsoft partially addressed with the release of SharePoint and Visual Studio 2010. Deploying and building solutions for SharePoint 2013 pretty much retains most of the IP from 2010 with the noted deprecation of Sandbox Solutions (this means they’ll be gone in SharePoint vNext).

As part of the project I’m leading at Kloud at the moment we are rebuilding an Intranet so it runs on SharePoint Online 2013 so I wanted to share some of the Application Lifecycle Management (ALM) processes we’ve been using.

Packaging

Most of the work we have been doing to date has leveraged existing features within the SharePoint core – we have, however, spent time utilising the Visual Studio 2012 SharePoint templates to package our customisations so they can be moved between multiple environments. SharePoint Online still provides support for Sandboxed Solutions and we’ve found that they provide a convenient way to deploy elements that are not developed as Apps. Designer packages can also be exported and edited in Visual Studio to produce a re-deployable package (which results in a Sandboxed Solution).

PowerShell

At the time of writing, the number of PowerShell Cmdlets for managing SharePoint Online is substantially smaller than for on-premises. If you need to modify any element below a Site Collection you are pretty much forced to write custom tooling or perform the tasks manually – we have made a call in some cases to build tooling using the Client Side Object Model (CSOM) and in others to perform the tasks manually.

Development Environment

Microsoft has invested some time in the developer experience around SharePoint Online and now provides you with free access to an “Office 365 Developer Site” which gives you a single-license Office 365 environment in which to develop solutions. The General Availability of Office 365 Wave 15 (the 2013 suite) sees these sites only being available for businesses holding enterprise (E3 or E4) licenses. Anyone else will need to utilise a 30 day trial tenant.

We have had each team member setup their own site and develop solutions locally prior to rolling them into our main deployment. Packaging and deployment is obviously key here as we need to be able to keep the developer instances in sync with each other and the easiest way to achieve that is with WSPs that can be redeployed as required.

One other item we have done around development is to utilise an on-premise setup in a VM to provide developers with a more rapid development experience in some cases (and more transparent troubleshooting). As you mostly stick to the SharePoint CSOM a lot of your development these days resides in JavaScript, which means you shouldn’t hit any snags in relying on on-premise / full-trust features in your delivered solutions.

Note that the Office 365 Developer Site is a single-license environment which means you can’t do multi-user testing or content targeting. That’s where test environments come into play!

Test Environment

The best way to achieve a more structured ALM approach with Office 365 is to leverage an intermediate test environment – the easiest way for anyone to achieve this is to register for a trial Office 365 tenant – while only technically available for 30 days this still provides you with the ability to test prior to deploying to your production environment.

Once everything is tested and good to go into production you’re already in a position to know the steps involved in deployment!

As you can see – it’s still not a perfect world for SharePoint ALM, but with a little work you can get to a point where you are at least starting to enforce a little rigour around build and deployment.

Hope this helps!

Useful Links


Create New Folder Hierarchies For TFS Projects using Git SCM

Like a lot of people who’ve worked heavily with TFS, you may not have spent much time working with Git or any of its DVCS brethren.

Firstly, a few key things:

1. Read and absorb the tutorial on how best to work with Git from the guys over at Atlassian.
http://atlassian.com/git/tutorial/git-basics

2. Install the Visual Studio 2012 Update 2 (currently in CTP, possibly in RTM by the time you read this).
http://www.microsoft.com/en-us/download/details.aspx?id=36539 (grab just vsupdate_KB2707250.exe)

3. Install the Git Tools for Visual Studio http://visualstudiogallery.msdn.microsoft.com/abafc7d6-dcaa-40f4-8a5e-d6724bdb980c

4. Install the most recent Git client software from http://git-scm.com/downloads

5. Set your default Visual Studio Source Control provider to be “Microsoft Git Provider”.

6. Setup an account on Team Foundation Service (https://tfs.visualstudio.com/), or if you’re lucky enough maybe you can even do this with your on-premise TFS instance now…

7. Make sure you enable and set alternative credentials in your TFS profile:

alt-credentials

8. Setup a project that uses Git for source control.

At this stage you have a couple of options – you can clone the repository using Visual Studio’s Git support

gitclone

OR you can do it right from the commandline using the standard Git tooling (make sure you’re at a good location on disk when you run this command):

git clone https://thesimpsons.visualstudio.com/defaultcollection/_git/bart milhouse
Cloning into 'milhouse'...
Username for 'https://thesimpsons.visualstudio.com/': homer
Password for 'https://thesimpsons.visualstudio.com/':
Warning: You appear to have cloned an empty repository.

I tend to set up a project directory hierarchy early on, and with Git support in Visual Studio I’d say it’s even more important as you don’t have a Source Control Explorer view of the world and Visual Studio can quickly create a mess when adding lots of projects or solution elements. The challenge is that (as of writing) Git won’t track empty folders and the easiest workaround is to create your folder structure and drop an empty file into each folder.
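If you want to script that placeholder step, a few lines of PowerShell run from the repository root will do it; a minimal sketch:

# Drop a placeholder file into every empty folder so Git will track the hierarchy
Get-ChildItem -Recurse -Directory |
    Where-Object { -not (Get-ChildItem -Path $_.FullName -Force) } |
    ForEach-Object { New-Item -ItemType File -Path (Join-Path $_.FullName ".gitkeep") }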

Now this is where Visual Studio’s Git tools won’t help you – they have no concept of files / folders held outside of Visual Studio solutions so you will need to use the Git tools at the command line to effect this change. Once you have your hierarchy set up with empty files in each folder, at a command prompt change into the root of your local repository and then do the following.

git add -A
git commit -m "Hmmmm donuts."

Now, at this point, if you issue “git push” you may experience a problem and receive this message:

No refs in common and none specified; doing nothing.
Perhaps you should specify a branch such as ‘master’.
Everything up-to-date.

Which apart from being pretty good English (if we ignore ‘refs’) is pretty damn useless.

How to fix? Like this:

git push origin master

This will push your newly populated hierarchy up to TFS, er Git, er TFS. You get the idea. Then the others on your team are able to clone the repository (or perform a pull) and will receive the updates.

HTH.

Update: A big gotcha that I’ve found, and it results in a subtle issue, is this: if you have a project that has spaces in its title (i.e. “Big Web”) then Git happily URL encodes that and will write the folder to disk in the form “Big%20Web”, which is all fine and dandy until you try to compile anything in Visual Studio. Then you’ll start getting CS0006 compilation errors (unable to find metadata files). The fix is to override the target when cloning the repository to make sure the folder is validly named (in my example above this checks out the “bart” project to the local “milhouse” folder).


Deploy Umbraco using Octopus Deploy

Every once in a while you come across a tool that really fits its purpose and delivers good value for not a whole lot of effort. I’m happy to say that I think Octopus Deploy is one such tool! While Octopus isn’t the first (or most mature) in this space it hits a sweet spot and really offers any sized team the ability to get on board the Continuous Deployment / Delivery bandwagon.

My team is using it to deploy a range of .Net websites and we’re considering it for pushing database changes too (although these days a lot of what we build utilises Entity Framework so we don’t need to push that many DB scripts about any more). One thing we’ve done a lot of is deploy Umbraco 4 sites.

It’s About The Code

One important fact to get out here is that I’m only going to talk about how Octopus will help you deploy .Net code changes. While you can generate SQL scripts and deploy them using Octopus (and ReadyRoll perhaps), Umbraco development has little to do with schema change and everything to do with instance data change. This is not an easy space to be in – more so with larger websites – and even Umbraco has found it hard to solve despite producing Courier specifically for this challenge. This all being said, I’m sure if you spend time working with SQL Data Compare you can come up with a database deployment step using scripts.

Setting It Up

Before you start Umbraco deployments using Octopus you need to make a decision about what to deploy each time and then modify your target.

When developing with Umbraco you will have a “media”, an “umbraco” and an “umbraco_client” folder in your solution folder but not necessarily included in your Visual Studio solution. These three folders will also be present on your target deployment server and in order to leverage Octopus properly you need to manage these three folders appropriately.

Media folder

This folder holds files that are uploaded by CMS users over time. It is rare that you would want to take media from a development environment and push it to any other environment other than on initial deployment. If you do deploy it each time then your deployment will be (a) larger and (b) more challenging to deploy (notice I didn’t say impossible). You’ll also need to deal with merging of media “meta data” in the Umbraco CMS you’re deploying to (you’re back at Courier at this stage).

Regardless of whether you want to push media or not you will need to deal with how you treat the media folder on your target server – Octopus can automatically re-map your IIS root folder for your website to your new deployment so you’ll need to write PowerShell to deal with this (and merging content if required).

Our team’s process is to not transfer media files via Octopus and we have solved the media folder problem by creating the folder as a Virtual Directory in IIS on the target web server. As long as the physical folder has the right permissions you will have no problems with this approach. The added benefit here is that when Octopus Deploy remaps your IIS root folder to a new deployment the media is already in place and not affected at all.

Umbraco folders

The two Umbraco folders are required for the CMS to function as expected. While some of you might make changes internally to these folders I’d recommend you revisit your reasons for doing so and see if you can’t make these two folders static and simply re-deploy the Umbraco binaries in your Octopus package.

There are a couple of ways to proceed with these folders – you can choose to redeploy them each time or you can treat them as exceptions and, as with the media folder, you can create Virtual Directories for them under IIS.

If you want to redeploy them as part of your package you will need to do a few things:

  1. Create a small stub “umbraco.zip” that is empty or that contains a single file (or similar) in your Visual Studio solution.
  2. Write some PowerShell in your PostDeploy.ps1 file that unzips those folders into the right place on your target web server (a sketch is shown after this list).
  3. In your build script (on your build server, right?) utilise an appropriate MSBuild extension (like the trusty MSBuildCommunityTasks) to zip the two folders into a zip that replaces the stub you created in (1).
  4. Run your build in Release mode (required to trigger OctoPack) which will trigger the packaging of your outputs including the new zip from (1).
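As a rough illustration of point 2 (not the exact script we use), a PostDeploy.ps1 along these lines would unpack the stub created at build time, assuming the script runs from the root of the freshly extracted package:

# Sketch of a PostDeploy.ps1 step (assumes it runs from the root of the extracted package)
Add-Type -AssemblyName System.IO.Compression.FileSystem

$packageRoot = (Get-Location).Path
$zipPath     = Join-Path $packageRoot "umbraco.zip"

if (Test-Path $zipPath)
{
    # Unpack the umbraco / umbraco_client folders into the deployed site and tidy up
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipPath, $packageRoot)
    Remove-Item $zipPath
}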

On deployment you should see your PowerShell execute and unpack your Umbraco folders into the right place!

Alternatively, you can choose not to redeploy these two folders each time – if this suits (and it does for us) then you can use the same approach as with the media folder and simply create two Virtual Directories. Once you’ve deployed, everything will work as expected.

It’s Packaged (and that’s a Wrap)

So there we have a simple scenario for deploying Umbraco via Octopus Deploy – I’m sure there are more challenging scenarios than the above but I bet with a little MSBuild and PowerShell-foo you can work something out.

I hope you’ve found this post useful and I’d recommend checking out the Octopus Deploy Blog to see what great work Paul does at taking feedback on board for the product.


Agile Requirements – An Introduction.

I did a session today with the team at work around how Agile Requirements fit into the bigger picture of Agile project delivery. ¬† The presentation is up on SlideShare so if you’re looking to bootstrap a presentation please feel free to use the contents.