Category Archives: .Net

Continuous Deployment of Windows Services using VSTS

I have to admit writing this post feels a bit “old skool”. Prior to last week I can’t remember the last time I had to break out a Windows Service to solve anything. Regardless, for one cloud-based IaaS project I’m working on I needed a simple worker-type solution that was private and could post data to a private REST API hosted on the other end of an Azure VNet Peer.

While I could have solved this problem any number of ways, I plumped for a Windows Service primarily because it will be familiar to developers and administrators at the organisation I’m working with. But if I’m going to have to deploy onto VMs, I’m sure not deploying in an old-fashioned way! Luckily we’re already running in Azure and hosting on VSTS, so I have access to all the tools I need!

Getting Set Up

The setup for this process is very similar to the standard “Deploy to Azure VM” scenario that is very well covered in the official documentation and which I added some context to in a blog post earlier in the year.

Once you have the basics in place (it only takes a matter of minutes to prepare each machine) you can head back here to cover off the changes you need to make.

Note: this process is going to assume you have a Windows Service project in Visual Studio 2015 that is being built using VSTS’s in-built build infrastructure. If you have other configurations you may need to take different steps to get this to work 🙂

Tweak build artefact output

First we need to make sure that the outputs from our build are stored as artefacts in VSTS. I didn’t use any form of installer packaging here so I needed to ensure my build outputs were all copied to the “drops” folder.

Here is my build definition which is pretty vanilla:

Build Process

The tweak I made was on the Visual Studio Build step (step 2), where I defined an additional MSBuild Argument that set the OutputPath to the VSTS build agent’s artifacts directory, which will automatically be copied by the Publish Artifacts step:

Build Update
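For reference, the MSBuild Argument in question was along these lines (the exact staging directory variable name varies between build agent versions, so treat this as indicative rather than definitive):

```
/p:OutputPath="$(build.stagingDirectory)"
```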

If I look at a history entry for my CI build and select Artifacts, I can see that my Windows Service binary and all its associated assemblies, config files and, importantly, the Deployment Script are stored with the build.

Build Artefacts

Now we have the build in the right configuration let’s move on to the deployment.

Deploying a Service

This is actually easier than it used to be :). Many of us would remember the need to package the Windows Service into an MSI and then use InstallUtil.exe to do the install on deployment.

Fear not! You no longer need this approach for Windows Services!

PowerShell FTW!

Yes, that Swiss Army knife comes to the rescue again with the Get-Service, New-Service, Stop-Service and Start-Service Cmdlets.

We can combine these handy Cmdlets in our Deployment script to manage the installation of our Windows Service as shown below.
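The script itself doesn’t need to be anything fancy – a minimal sketch follows, in which the service name and install path are placeholders you would swap for your own solution’s values:

```powershell
$serviceName = "MyWorkerService"                       # placeholder name
$installPath = "C:\Services\MyWorkerService"           # placeholder path
$exePath     = Join-Path $installPath "MyWorkerService.exe"

# Stop the service first (if it already exists) so the binaries can be overwritten
$service = Get-Service -Name $serviceName -ErrorAction SilentlyContinue
if ($service -ne $null) {
    Stop-Service -Name $serviceName
}

# Copy the new build outputs from the location Release Management copied them to
Copy-Item -Path "C:\temp\*" -Destination $installPath -Recurse -Force

# Register the service on first deployment only
if ($service -eq $null) {
    New-Service -Name $serviceName -BinaryPathName $exePath -StartupType Automatic
}

Start-Service -Name $serviceName
```

Because New-Service runs only when Get-Service finds nothing, the same script covers both the first install and every subsequent upgrade.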

The Release Management definition remains unchanged – all we had to do was ensure our build outputs were available to copy from the ‘Drop’ folder on the build and that they are copied to C:\temp\ on the target VM(s). Our Deployment Script takes care of the rest!

That’s it! Next time your CI build passes your CD kicks in and your Windows Service will be updated on your target VMs!


Azure Internal Load Balancing – Setting Distribution Mode

Kloud Blog

I’m going to start by saying that I totally missed that setting the distribution mode on Azure’s Internal Load Balancer (ILB) service is possible. This is mostly because you don’t set the distribution mode at the ILB level – you set it at the Endpoint level (which in hindsight makes sense because that’s how you do it for public load balancing too).

There is an excellent blog on the Azure site that covers distribution modes for public load balancing and the good news is that they also apply to internal load balancing as well. Let’s take a look.

In the example below we’ll use the following parameters:

  • Cloud Service: apptier
  • Two VMs: apptier01, apptier02
  • VNet subnet named ‘appsubnet’
  • An internal load balancer with a static IP address
  • HTTP traffic balanced based on Source and Destination IP

Here’s the PowerShell to achieve this…
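The embedded script doesn’t carry across in the excerpt, but with the classic (Service Management) Azure PowerShell module it would look broadly like the following – the names and the IP address are illustrative, so double-check the cmdlet parameters against your module version:

```powershell
# Create the internal load balancer inside the 'apptier' cloud service
Add-AzureInternalLoadBalancer -ServiceName "apptier" `
    -InternalLoadBalancerName "apptier-ilb" `
    -SubnetName "appsubnet" `
    -StaticVNetIPAddress "10.0.1.10"        # illustrative address

# Add a load-balanced HTTP endpoint to each VM; the distribution mode is
# set per Endpoint ("sourceIP" hashes on source and destination IP)
foreach ($vmName in "apptier01", "apptier02") {
    Get-AzureVM -ServiceName "apptier" -Name $vmName |
        Add-AzureEndpoint -Name "http" -Protocol tcp `
            -LocalPort 80 -PublicPort 80 `
            -LBSetName "http-lb" `
            -InternalLoadBalancerName "apptier-ilb" `
            -ProbePort 80 -ProbeProtocol tcp `
            -LoadBalancerDistribution "sourceIP" |
        Update-AzureVM
}
```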

View original post 2 more words

Getting Started with Office 365 Video

Kloud Blog

Starting Tuesday, November 18, Microsoft began rolling out Office 365 Video to customers who have opted in to the First Release programme (if you haven’t, you will need to wait a little longer!)

Kloud has built video solutions on Office 365 in the past so it’s great to see Microsoft deliver this as a native feature of SharePoint Online – and one that leverages the underlying power of Azure Media Services capabilities for video cross-encoding and dynamic packaging.

In this blog post we’ll take a quick tour of the new offering and show a simple usage scenario.

Basic Restrictions

In order to have access to Office 365 Video the following must be true for your Office 365 tenant:

  • SharePoint Online must be part of your subscription and users must have been granted access to it.
  • Users must have E1, E2, E3, E4, A2, A3 or A4 licenses.
  • There is no…

View original post 423 more words

Use Azure Management API SDK in an Entity Framework custom database initializer

A post over on Stack Overflow got me thinking about how you can override the default behaviour of the Entity Framework code first database initializer so that the tier of the database created is something other than the deprecated ‘Web’ tier. Here’s one way to go about it.

Required bits

There are a few things to get going here – you’ll need to add the Microsoft Azure SQL Database Management Library NuGet package to your solution, which will install a bunch of dependencies required to interact with the Azure Management API.

You should also familiarise yourself with how to create and use Management Certificates which will be required for all interactions with the Azure Management API.

Once you’ve looked through that I suggest having a good read of Brady Gaster’s blog posts on using the Management API, in which he gives some good overviews on working with Azure SQL Database and on how you can go about uploading your Management Certificate to an Azure Website.

For the purpose of the remainder of this post we’ll be using the sample MVC / EF code first sample application which can be downloaded from MSDN’s code site.

Now you’ve done that, let’s get started…

Create a custom EF initializer

Entity Framework provides a nice extensibility point for managing the initialisation of databases, amongst other things (primarily to allow you to use the latest hipster database of choice and roll your own supporting code), and we’re going to use a simple sample to show how we could change the default behaviour described above.

In the sample below we create Standard tier databases – we could just as easily make this a configuration element and modify which database tier we wish to create. Note that I load a lot of information from configuration – I can deploy those configuration elements at the Cloud Service level and manage them via the Azure Management Portal. I could just as easily leave them in the web.config if I wanted to.

A sample of what appears in the configuration (this is from a web.config):

    <add key="AzureSqlDatabaseServerName" value="t95xxttjmj"/>
    <add key="AzureSqlDatabaseName" value="SchoolDemo"/>
    <add key="AzureSubscriptionId" value="00000000-0000-0000-0000-000000000000"/>
    <add key="AzureSubscriptionCertThumbprint" value="61b463082dcb0198aab451c14efb7ff4b83a42b4"/>

In our global.asax of our web application we then need to include the following code:

Database.SetInitializer(new ContosoCustomDatabaseInitializer());

At this point when EF attempts to fire up a new database instance it will call our custom code and initialise a new database on the specified server using the management libraries.

Hopefully in a future release we’ll see an update to the default database setting to use the new Standard tier instead.


Save VM Run Costs in Azure – Shut em down!

One of the benefits of public cloud services is the rich set of APIs they make available to developers and IT Pros alike.

Traditionally if you requested compute resources for development or testing purposes you placed a request, waited for your resources to be provisioned and then effectively left them running.

Periodic audits by your IT Ops team might have picked up development or test machines that were no longer required, but these audits might occur only once your business’s infrastructure started to run low on free resources.

As a demonstration of how easy it is to manage resources in Azure, take a look at the PowerShell script below. In less than 30 lines of code I can enumerate all virtual machines in a subscription and then power them off. I could just as easily do this to power them on (granted, there may be a required power-on order).
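A minimal sketch of that script using the classic Azure module follows – the subscription name is a placeholder and the PowerState check is from memory, so verify both against your module version:

```powershell
# Target the subscription whose VMs should be shut down
Select-AzureSubscription -SubscriptionName "Dev-Test"   # placeholder name

# Enumerate every VM in the subscription and power off any still running
Get-AzureVM | ForEach-Object {
    if ($_.PowerState -eq "Started") {
        Write-Output "Stopping $($_.Name) in cloud service $($_.ServiceName)..."
        # -Force suppresses the prompt when stopping the last VM in a deployment
        Stop-AzureVM -ServiceName $_.ServiceName -Name $_.Name -Force
    }
}
```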

A New Computing Metric for the Cloud: Time-To-Scale-Out

Last week I was looking through my timeline and came across this tweet from Troy Hunt asking about how the autoscale features in Azure worked.


In and of itself this question doesn’t immediately seem unusual. Then you think about it a bit. How long does it take?

Well, the answer is actually a bit trickier than you might think (watch the great video on Channel9 for more details on how Azure does it).

This is why I am proposing a new metric for use in cloud autoscale.

Time-To-Scale-Out (TTSO)

This metric can be defined as:

The time between when the cloud fabric determines that a scale-out is required and the time that the scale-out instance is serving requests.

Many people confuse the time at which a scale event is fired (i.e. 5 minutes in Azure Websites’ case) with the time at which a new instance is actually serving traffic (5 minutes + N).

On my current cloud IaaS engagement we’re looking at how new applications can leverage autoscale correctly, and have settled on the following guidance:

  • Prefer custom machine images over vanilla images with startup customisations (i.e. via cloud-init or similar constructs).
  • Prefer startup customisations over deployment via orchestration services such as Puppet or Chef.

or, another way:

  • Simple is better – if you have too many dependencies on startup your TTSO will be substantial.

Autoscale is not magic (and won’t save you)

I think we sometimes take for granted the elastic capabilities of cloud computing and assume that putting a few autoscale parameters in should see off unexpected peaks in demand.

The truth is that the majority of applications we put in the cloud will never need automated autoscale (i.e. scaling based on CPU utilisation or other metrics), but I can guarantee that many would benefit from manual scale or scheduled autoscale (end of week, month, financial year anyone?).

Another important point to take into consideration is the performance of your cloud provider’s fabric when scaling. You might be in a part of their datacentre with a lot of “noisy neighbours” which might make scale events take longer – you have little or no control over this.

Will you meet your SLA?

Ultimately you will need to test any application you intend to use autoscale with to ensure that your SLAs will be met in the event that you autoscale.  Can your minimum capacity cope with the load during your TTSO?  Can you make your number of scale units larger to avoid multiple scale events?

Make sure you bake startup telemetry into your instances so you can measure your TTSO and work on refining it over time (and ensuring that each new generation of your machine image doesn’t negatively impact your TTSO).

Happy scaling!

What Happens When You Delete a User from Visual Studio Online

As of May 7, 2014, Visual Studio Online has shifted to a paid model where all user accounts over the basic five-user limit must hold a paid license in order to access and use the service.

So, what happens when you remove a user’s account from Visual Studio Online? Let’s take a look.

No, really, it’s OK

Firstly, just a word of reassurance here – your data will remain consistent even after you’ve removed the user’s account. It might seem obvious, but it is comforting to know that you’ll still have the history associated with the account.

Work Items

Your Product Backlog Items (PBIs) and other Work Items will continue to work as expected – even if the user created or updated the Work Item or has it assigned to them at the time their account is removed.

Audit trail entries that were created by this user through their activities on individual Work Items will retain full fidelity so you can trace actions by the user at any future point.

Source Control

Both Git and TFS source control repositories will behave as expected and the user’s change history of files will remain intact.

Are they really locked out?

Yes. Visual Studio Online via the web, Visual Studio and source control will not allow a deleted account to have access. Even Git with its alternate credentials returns an authentication failure when trying to perform any action.

Any edge cases?

Not that we’ve seen. Here’s some advice on things to do before you remove an account (granted, some of these might not make sense depending on how stale the account is and what its purpose is).

  1. Be aware there is an initial soft limit of 50 user accounts per subscription.  You can raise a support ticket to get this increased.
  2. Make sure you have alternate project administrators if the account you are deleting has previously been the only project administrator.
  3. Make sure any outstanding check-ins are either shelved or checked-in.

It’s early days for Visual Studio Online as a paid SaaS solution – it will be interesting to see how it continues to evolve now that it’s generating a revenue stream for Microsoft.


The False Promise of Cloud Auto-scale

Go to the cloud because it has an ability to scale elastically.

You’ve read that bit before right?

Certainly if you’ve been involved in anything remotely cloud-related in the last few years you will almost certainly have come across the concept of on-demand or elastic scaling. Azure has it, AWS has it, OpenStack has it. It’s one of the cornerstone pieces of any public cloud platform.

Interestingly it’s also one of the facets of the cloud that I see often misunderstood by developers and IT Pros alike.

In this post I’ll explore why auto-scale isn’t black magic and why your existing applications can’t always just automagically start using it.

What is “Elastic Scalability”?

Gartner (2009) defines it as follows:

“Scalable and Elastic: The service can scale capacity up or down as the consumer demands at the speed of full automation (which may be seconds for some services and hours for others). Elasticity is a trait of shared pools of resources. Scalability is a feature of the underlying infrastructure and software platforms. Elasticity is associated with not only scale but also an economic model that enables scaling in both directions in an automated fashion. This means that services scale on demand to add or remove resources as needed.”


Sounds pretty neat – based on demand you can utilise platform features to scale out your application automatically. Notice I didn’t say scale up your application. Have a read of this Wikipedia article if you need to understand the difference.

On Microsoft Azure, for example, we have some cool sliders and thresholds we can use to determine how we can scale out our deployed solutions.

Azure Auto Scale Screenshot

Scale this service based on CPU or on a schedule.

What’s Not to Understand?

In order to answer this we should examine how we’ve tended to build and operate applications in the on-premise world:

  • More instances of most software means more money for licenses. While you might get some cost relief for warm or cold standby, you are going to have to pony up the cash if you want to run more than a single instance of most off-the-shelf software in warm or hot standby mode.
  • Organisations have shied away from multi-instance applications to avoid needing to patch and maintain additional operating systems and virtualisation hosts (how many “mega” web servers are running in your data centre that host many web sites?)
  • On-premise compute resources are finite (relative to the cloud). Tight control of used resources leads to the outcome in the previous point – consolidation takes place because that hardware your company bought needs to handle the next 3 years of growth.
  • Designing and building an application that can run in a multi-instance configuration can be hard (how many web sites are you running that need “sticky session” support on a load balancer to work properly?) Designing and building applications that are stateless at some level may be viewed by many as black magic!

The upshot of all the above points is that we have tended toward a “less is more” approach when building or operating solutions on premise. The simplest mode of hosting the application that meets business availability needs is typically the one that gets chosen. Anything more is a luxury (or a complete pain to operate!)

So, How to Realise the Promise?

In order to fully leverage auto-scale capabilities we need to:

  • Adopt off-the-shelf software that provides a consumption-based licensing model. Thankfully in many cases we are already here – we can run many enterprise operating system, application and database software solutions using a pay-as-you-go (PAYG) scheme. I can bring my own license if I’ve already paid for one too. Those vendors who don’t offer this flexibility will eventually be left behind as it becomes a competitive advantage for others in their field.
  • Leverage programmable infrastructure via automation and a culture shift to “DevOps” within our organisations. Automation removes the need for manual completion of many operational tasks, thus enabling auto-scale scenarios. The new collaborative structure of DevOps empowers our operational teams to be more agile and to deliver more innovative solutions than they perhaps have done in the past.
  • Be clever about measuring what our minimum and maximum thresholds are for acceptable user experience. Be prepared to set those CPU sliders lower or higher than you might otherwise have if you were doing the same thing on-premise. Does the potential performance benefit of auto-scaling at a lower CPU utilisation level outweigh the marginally small cost you pay, given that the platform will scale back as necessary?
  • Start building applications for the cloud. If you’ve designed and built applications with many stateless components already then you may have little work to do. If you haven’t, then be prepared to deal with the technical debt to fix them (or start over). Treat as many of your application’s components as you can as cattle and minimise the pets (read a definition that hopefully clears up that analogy).

So there we have a few things we need to think about when trying to realise the potential value of elastic scalability.  The bottom line is your organisation or team will need to invest time before moving to the cloud to truly benefit from auto-scale once there.  You should also be prepared to accept that some of what you build or operate may never be suitable for auto-scale, but that it could easily benefit from manual scale up or out as required (for example at end of month / quarter / year for batch processing).



Use Tags to better manage your TFS work items

Updated: early in 2014 Microsoft released an update that now makes it possible to query using Tags.  See the full details online.

As many of you have found, at present it isn’t possible to customise work item templates to provide custom fields the same way you can in the on-premise version of TFS. While the out-of-the-box work item types mostly suffice there are cases where not being able to customise the template impacts your ability to properly report on the work you are managing.

To go some way to address this Microsoft released an update to Team Foundation Service in January 2013 to provide a ‘tag’ feature that allows users to add meta-data to a work item.

I must admit that until my most recent engagement I hadn’t looked closely at tags, as I’d found that using a well-thought-through Area Path scheme tended to work well when querying work items. I’m now working on a project where we are performing a large number of migrations between platforms and must deal with a fair amount of variation between our source system and our target.

Our primary concern is to arrange our work by business unit – this will ultimately determine the order in which we undertake work, due to a larger project that will affect the availability of our target system for these business units. To this end, the Area Path is set up so that we can query based on Business Unit.

In addition to business unit we then have a couple of key classifications that are useful to filter results on: the type of the source system, and the long-term plan for the system once migrated.

When creating our backlog we tagged each item as we brought it in, resulting in a set of terms we can easily filter with.

Backlog with tags

Which allows us to easily filter the backlog (and Sprint backlogs if you use a query view for a Sprint) like this:

Backlog with applied filter

The great thing with this approach is as you add a filter to a backlog the remaining filter options display the number of work items that match that element – in the example below we can see that there are 90 “team sites” and 77 “applications” that are also tagged “keep”. This is extremely useful and a great way to get around some of the work item customisation limitations in Team Foundation Service.

Filter applied

One item to be aware of with tags is that right now the best experience with them is on the web. If you regularly use Team Explorer or Excel to manage your TFS backlog then they may be of limited use. You can still filter using them within Excel, but the manner in which they are presented (as a semicolon-delimited list of keywords) means easily filtering by individual tags isn’t possible.

Excel filter


Portable Azure Mobile Services DTOs when using Xamarin and C#

As part of an upcoming talk I am giving, I am spending a lot of time working on a demo that shows how to do push notifications cross-device. One concept that is really drummed in when working in this space is that building reusable C# code is key to realising the savings from building Windows Phone, iOS and Android apps this way.

A C# Azure Mobile Services client library exists for each of the platforms and, as a result, it’s really easy to use your standard C# classes as Data Transfer Objects (DTOs) for storage in Azure. One item that tripped me up on Android and iOS was that I found I could write data to the cloud easily, but while I could get a set of resulting objects back, their properties were either null or in their default state. Not useful!

This is how my class looked – I had arrived at this after using the DataMember and JsonProperty attributes and finding, at first appearance, that I didn’t need either for the code to actually work. I was wrong :).

public class Task : IBusinessEntity
{
    public Task () {}

    public int ID { get; set; }
    public string Name { get; set; }
    public string Notes { get; set; }
    public bool Done { get; set; }
    public string Assignee { get; set; }
}
The challenge in decorating the above class with DataMember is that that attribute isn’t actually natively supported on Windows Phone – you will get a NotSupportedException the first time your application attempts to run any code that uses the class. The suggested fix in the Exception is to utilise the JsonProperty attribute instead, which in turn doesn’t work on Android or iOS deployments. So… this is the fix I came up with – it’s not pretty, but it is leveraging one way to build portable C# code (note there is no default symbol defined for iOS builds, which doesn’t help!). Behold…

public class Task : IBusinessEntity
{
    public Task () {}

    public int ID { get; set; }

    // Windows Phone can't use the DataContract attributes, so fall back to JsonProperty there
#if WINDOWS_PHONE
    [JsonProperty("name")]
#else
    [DataMember(Name = "name")]
#endif
    public string Name { get; set; }
#if WINDOWS_PHONE
    [JsonProperty("notes")]
#else
    [DataMember(Name = "notes")]
#endif
    public string Notes { get; set; }
#if WINDOWS_PHONE
    [JsonProperty("done")]
#else
    [DataMember(Name = "done")]
#endif
    public bool Done { get; set; }
#if WINDOWS_PHONE
    [JsonProperty("assignee")]
#else
    [DataMember(Name = "assignee")]
#endif
    public string Assignee { get; set; }
}

I think you’ll find that compiler pre-processor directives like this will quickly become your friend when writing C# to target multiple platforms.
