
Multi-environment deployments for Compiled C# Azure Functions with VSTS Release Management

This post covers an approach you can use to deploy compiled C# Functions using the tooling available in Visual Studio 2017 and various Build and Release Management Tasks contained in Visual Studio Team Services (VSTS).

Note that this post discusses deploying to the v1 Functions runtime platform.

I was lucky enough to speak with Damian Brady on the DevOps Labs show on Channel 9 and cover the first part of this blog. If you’ve watched that, or you’ve come here via the Github repository for the solution we used, then we’ll go down to the next level and really look at how you can recreate this setup in your environment.

1. Pre-requisites

There are a few moving pieces we need to get into place first in order to complete the configuration. Let’s take a look at those.

a. Connecting environments

Note that in order to complete these steps you may require elevated privileges in one or more of the mentioned services. If you do not have access to an admin-level account in any of the services you will likely need to ask someone to configure these for you.

> Github to VSTS

In our demonstration we’re using a Github repository as our source repository and allowing VSTS’ Build capability to perform Continuous Integration Builds when a commit occurs on the master branch. Microsoft has documented how to configure Github to connect with VSTS already, so go and take a read and head back here when you’re done setting up the integration.

> VSTS to Azure

We use the Azure Resource Manager Service Endpoint option in VSTS to configure our connection into Azure from VSTS. There is documentation from Microsoft around how you setup the connection, including steps to create a custom Service Principal (or use a pre-existing one your Azure admin has created for you). Once again, go have a read and once you have a working Service Endpoint in VSTS head back over here.

b. Configure SendGrid

If you’d like to run the Functions once deployed you will need to configure SendGrid so you can use the binding in the Functions being deployed. You can follow the official Azure documentation on setting up a (free) SendGrid account and then make sure to set the API key value for the AzureWebJobsSendGridApiKey App Setting for your deployed Functions.

c. Create target Azure Resources

Go ahead and also create a Function App in the Subscription you want to deploy to (ensure that the Service Principal you setup previously has Contributor-level access to, at minimum, the Resource Group that will contain the Function).

You can use the process we document here to deploy to either a Service Plan or Consumption Plan, though there is a minor difference we will see later in the post.

The Azure resources to deploy should include:

  • Application Insights
  • Cosmos DB Account – Add Database “quotedemo” with Collections “quotes” and “leases”
  • Function App (can be Consumption or Service Plan)
  • Storage Account (can be created at same time as Function App)

Before we move on, make sure to capture the following:

  • Application Insights Telemetry Key (shown on the ‘Essentials’ part of the Application Insights instance)
  • Cosmos DB Account URL and Access Key
  • Function App:
    • AzureWebJobsStorage (Service Plan);
    • WEBSITE_CONTENTAZUREFILECONNECTIONSTRING (Consumption Plan);
    • WEBSITE_CONTENTSHARE (Consumption Plan);
    • FUNCTIONS_EXTENSION_VERSION (most likely set to “~1”).

d. Configure Application Insights for Release Annotations

In this scenario we will need to enable the Application Insights API in order to support Release Annotations, which make it possible to see release markers on your telemetry timelines.

Once again, Microsoft has this well documented, including the Task you need to add in VSTS to allow you to create Annotations.

OK! Now we are ready to configure the Build and Release Management steps in VSTS.

2. Configure the Build

In the Team Project in VSTS that you wish to use to host the Build and Release Management Definitions, go ahead and create a new Build.

Remember to select Github as your source repository.

Setup Github as source

When you are prompted for a template, select the ASP.Net Core (.Net Framework) build.

Select Build Template

If you want a Continuous Integration (CI) build then ensure you set the Trigger as required.

CI trigger

At this point we have a build that produces a packaged web application that can be pushed to the Azure App Service hosting the Function App. We could add more Tasks to the Build to do this, but we want to support multiple environments so this is where Release Management comes into play.

I recommend you run the Build to ensure it’s functional and to produce an artefact we can use in our next step.

3. Configure Release Management

Now that we have a build producing an artefact, we can use VSTS Release Management (RM) to deploy and configure that artefact in any environment we can reach.

Let’s go ahead and choose to create a new Release Management Definition. When you have the option, select the “Azure App Service Deployment” template.

Release Management Template

The resulting Definition will be very vanilla and contain a single Task. We need to make some changes to deploy our service exactly as we’d like.

a. Add the Build Artefact

First we need to tell Release Management what we want to deploy, so let’s go ahead and add our existing Build by clicking on the Artefacts box and selecting our Build as shown below.

Add Artefact

If you wish to enable Continuous Deployment (CD) into Environments you can click on the lightning bolt on the Artefact and enable the CD trigger. Note that you can still stop automated deployments by putting in approvals or making deployments manual – by creating a Release you always have a build artefact to deploy.

CD Trigger

b. Configure RM Tasks

This is where the configuration you completed earlier (the Azure Service Endpoint and the target resources in Azure) comes into play.

Click on the Tasks tab and the RM Definition will open.

Clicking Task

Once open click on the Environment at the top of the Task list. As we are going to deploy all assets into a single Subscription we can set up a few items that will apply to all Tasks in the RM Definition.

The first thing we will do is to select the Service Endpoint we previously setup (below it is named “Service Principal for Demo”, but you can name it anything meaningful).

Setup Environment

Once you select the method of connection to the Azure Subscription, change the “App type” field to “Function App” and then, from the final picker, select the Function App instance you set up earlier. If you don’t see it, it could be that you placed it in another Subscription or that the Service Endpoint does not have sufficient rights to list the Function Apps in the Subscription.

Your setting should look something like the below.

Environment Configuration

We could deploy the sample code now, but it would fail to run because it is missing configuration.

c. Deploying configuration

You will notice that up until now we’ve not dealt with any of the runtime configuration settings for the Function. When you develop locally the Functions tooling in Visual Studio will generate a “local.settings.json” file, but it will be blocked from commit via the gitignore included in the project type. It’s recommended you don’t change this, and even if you do, it won’t help you on deployment anyway (so… y’know, why bother changing the ignore file?)

For this Task we are going to need to pull in a free Marketplace Task – the Azure WebApp Configuration from Pascal Naber (Xpirit). This Task is a wrapper around some Azure Cmdlets, but it does a great job of removing your overhead in managing that 🙂

You will need to be a VSTS admin in order to install Marketplace Tasks (if you aren’t you can still request an admin to install them).

Once installed you can now add the Task to your Release Management Definition after the App Service Deployment (as shown below).

Release Tasks

The Task expects any App Settings you need to deploy to be added as Variables to the Definition. So, for example, if we want to control the value we set for the ‘APPINSIGHTS_INSTRUMENTATIONKEY’ App Setting in our target Function we would create a Variable in our Release called ‘appsetting.APPINSIGHTS_INSTRUMENTATIONKEY’ and set the value to be the Telemetry Key we captured earlier in the post.

The beauty of this approach is that you can save secrets one-way: they won’t show up (or be recoverable) via the Variables tab again. There is also an option to write them to Azure Key Vault if you want.

Below is a sample of the Variables once setup.

RM Variables

The eagle-eyed amongst you might spot that I am also setting default Function values (FUNCTIONS_EXTENSION_VERSION, AzureWebJobsStorage, AzureWebJobsDashboard for a Service Plan).

This is because I force the Application Settings Task to overwrite all existing values in the App Service. This is on purpose – it ensures no manual fixes are ever safe in Azure and our Release Management Definition is the source of truth for both the Artefact and the Configuration.

Note: For Consumption Plans make sure you set WEBSITE_CONTENTAZUREFILECONNECTIONSTRING and WEBSITE_CONTENTSHARE in addition to the above values. If you don’t, every deployment after the first will fail (this is the source of the error in the video).

The full set of Variables for our sample is listed below.

Common Variables

  • AppInsightsApiKey – API key you configured earlier for Application Insights (*not* the Telemetry Key).
  • AppInsightsApp – Application ID you configured earlier for Application Insights (also not the Telemetry Key).
  • appsetting.APPINSIGHTS_INSTRUMENTATIONKEY – use the Telemetry Key from Application Insights.
  • appsetting.AzureWebJobsDashboard – use the existing value from the Function (captured before first deployment).
  • appsetting.AzureWebJobsSendGridApiKey – use the SendGrid API key you set up earlier (should start ‘SG.’).
  • appsetting.AzureWebJobsStorage – use the existing value from the Function (captured before first deployment).
  • appsetting.CosmosConnection – use the Connection String from the Cosmos Account you set up earlier.
  • appsetting.FUNCTIONS_EXTENSION_VERSION – use the existing value from the Function (captured before first deployment).
  • appsetting.NotificationsSender – use an email address you control (to be used as the From: address in emails).

Service Plan deployment

  • None: the list above is all you need.

Consumption Plan deployment

  • appsetting.WEBSITE_CONTENTAZUREFILECONNECTIONSTRING – use the existing value from the Function (captured before first deployment).
  • appsetting.WEBSITE_CONTENTSHARE – use the existing value from the Function (captured before first deployment).

Once you’ve configured the Variables you should now be able to save the Release Management definition and create a Release to test out your deployment.

I’ve recorded a quick video (see below) that shows this end-to-end, with a bonus step of hitting an HTTP Triggered endpoint on the Function as a post-deployment confirmation (you will need to copy a Host key from the Function App and save it as a ‘VersionApiKey’ Variable to use when calling the API, then add the Smoke Web Test Task from the Marketplace).

So what should the demo Function do? It should trigger an email to a recipient when a record is added to Cosmos DB. The recipient is listed in the record that is inserted, samples of which are included in the Github project. If you can’t get it running make sure to leave a comment and I’ll help you out!
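For illustration, here’s a minimal sketch of what such a Function can look like on the v1 runtime. The real code lives in the Github repository for the demo; the class name, document property names and exact binding shapes below are my assumptions, not the repo’s code.

```csharp
using System.Collections.Generic;
using System.Configuration;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using SendGrid.Helpers.Mail;

public static class QuoteNotifier
{
    [FunctionName("QuoteNotifier")]
    public static void Run(
        // Fires when documents are added to the "quotes" collection; the "leases"
        // collection tracks change feed progress.
        [CosmosDBTrigger("quotedemo", "quotes",
            ConnectionStringSetting = "CosmosConnection",
            LeaseCollectionName = "leases")] IReadOnlyList<Document> documents,
        // SendGrid output binding - picks up the AzureWebJobsSendGridApiKey App Setting.
        [SendGrid] out Mail message,
        TraceWriter log)
    {
        message = null;
        if (documents == null || documents.Count == 0) return;

        var quote = documents[0];
        message = new Mail(
            new Email(ConfigurationManager.AppSettings["NotificationsSender"]),
            "A new quote for you",
            new Email(quote.GetPropertyValue<string>("recipient")),
            new Content("text/plain", quote.GetPropertyValue<string>("quote")));
        log.Info($"Queued notification for document {quote.Id}.");
    }
}
```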


Microsoft Application Insights – APM for Everyone

When you work as heavily as I have with a technology like Application Insights you do tend to forget the amazing power you have at your fingertips.

Over the last few years I’ve come to rely heavily on Application Insights as the primary Application Performance Management (APM) tool of choice for services I build, whether they are hosted in Azure or not.

In this post I am going to take a quick walk through the features of Application Insights that I think every developer should know about, so they can get maximum benefit from it too!

Your language has an SDK

Chances are pretty good that if you’re on a popular platform, Application Insights will have an SDK you can use. SDKs are great because adding them to a solution produces a bunch of default telemetry with nothing more than a Telemetry Key required.

The Application Insights team maintains their SDK documentation and SDK code references on Github. Needless to say .Net has great support, but Java, JavaScript and Node.js also get first-party support, with community support for Go, Python and Ruby. Want to do APM that includes native mobile experiences? No problem, drop in the HockeyApp SDKs.
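To give a sense of how little code is involved, here’s a minimal .Net console sketch. The instrumentation key is a placeholder, and in a web project the ApplicationInsights.config the SDK adds would normally carry the key for you.

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

class Program
{
    static void Main()
    {
        // Placeholder value - use your own Telemetry (instrumentation) Key.
        TelemetryConfiguration.Active.InstrumentationKey =
            "00000000-0000-0000-0000-000000000000";

        var client = new TelemetryClient();
        client.TrackEvent("ApplicationStarted");               // custom event
        client.TrackTrace("Hello from Application Insights");  // simple trace log
        client.Flush();                                        // push telemetry before exit
    }
}
```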

Use it regardless of your hosting environment

Not using Azure to host your solution? Not a problem. If you can make outbound calls from your host to Application Insights then you can use Application Insights. 💯

Useful free tier

In an upcoming post I’ll talk more about the perceived and actual value of free services in the cloud, but let me say that for most basic scenarios the 5 GB of ingested Application Insights data per month will more than suffice. If not, you can manage your costs by moving to a sampling model, which still lets you glean useful insights about your application’s behaviour without breaking the bank.

No features are removed at the free pricing tier either – you can still run full analytics on the log information that is captured!

Dependency tracking

The out-of-the-box dependency tracking is super handy to diagnose performance issues that result from upstream calls.

The only downside here is that the default capabilities are good at tracking HTTP-based dependencies, SQL Server, and not much else (at time of writing). Having said this, there is a published way for you to track other custom dependencies if needed, though it requires dedicated code – the out-of-the-box tracking requires no additional special code which is amazing!
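To illustrate what that dedicated code looks like, here’s a hedged sketch of manually tracking a custom dependency with TrackDependency (the store and method names are invented for the example):

```csharp
using System;
using System.Diagnostics;
using Microsoft.ApplicationInsights;

public class CacheRepository
{
    private readonly TelemetryClient _telemetry = new TelemetryClient();

    public string GetValue(string key)
    {
        var timer = Stopwatch.StartNew();
        var success = false;
        try
        {
            var value = LookupInCustomStore(key); // the call we want visibility of
            success = true;
            return value;
        }
        finally
        {
            timer.Stop();
            // Records the call so it appears alongside the built-in dependency telemetry.
            _telemetry.TrackDependency("CustomStore", "GetValue", key,
                DateTimeOffset.UtcNow - timer.Elapsed, timer.Elapsed, success);
        }
    }

    private string LookupInCustomStore(string key) => key; // stand-in for the real call
}
```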

I have to say that HTTP dependency tracking has been exceptionally useful in a REST-heavy environment, even tracking HTTP calls to external service providers like SendGrid, Twilio and others, providing us an easily accessible view on where our latency is arising from.

The sample below shows dependency behaviour for a single request to a caching service in an application. The very first request (at the bottom of the list) is a call to Cosmos DB that returns a 404 (Not Found) HTTP status code, which then triggers a lookup of some data via an HTTP call to an API, with the result written back to Cosmos DB for the next request. This is super useful information, and I did precisely nothing to my code (other than add the Application Insights SDK to my solution) to capture this for every request!

Remote Dependencies

Track impact of releases

Application Insights has a REST API which allows you to add custom steps to Continuous Deployment pipelines to publish a Release Annotation to your timeline in Application Insights so you can see if a release impacts your solution.

Visual Studio Team Services’ Release Management will do this for you automatically, but if you aren’t using VSTS then you can still leverage this capability. A sample is shown below (thankfully we had no negative impact with this release!)

Release Annotation

Insights to your inbox

Super handy if you don’t want to go hunting for stats or you want to share aggregated stats with stakeholders.

App Insights Email

Heavy duty analytics

If the default experiences in the Azure Portal aren’t enough, then you can leverage the power of Azure Log Analytics to perform more detailed queries and drill into your data and build tables or graphs from the results.

A good example of this is the answer I provided to a question from Troy on Twitter.

Each request will be captured along with useful metadata (in this case from the underlying .Net codebase) which allows us to do further querying and filtering on the data.

Here’s a sample of such a request (this one is an HTTP request to an API endpoint) with the metadata needed to help solve Troy’s question.

Sample HTTP Request

The trick is then to head over to the Log Analytics environment…

Open Analytics

.. and then drill into the data to provide you with your desired answer.

Analytics query
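I won’t reproduce the exact query from the screenshot, but a simple one in the same spirit, run against the requests table, looks like this:

```
requests
| where timestamp > ago(24h)
| summarize requestCount = count() by resultCode
| order by requestCount desc
```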

You can then tabulate or graph the output. The above is a really simple query – trust me, you can do far more complicated things than this!

Failure drill-in

This view has recently improved and become far more interactive – you can easily identify common reasons for failures and drill right in and, in my experience, identify the root cause within a matter of moments!

In HTTP applications you do get a bit of expected noise (things like expected 401, 403 and 404 errors) which can be annoying to sift through, particularly for REST-type APIs, but it’s a small price to pay for the power you get!

Failures View

Availability Checks, Health Alerts and Smart Detection

I’m not going into these in too much detail, but you can also set Alerts and health checks in Application Insights and the service will also do analysis of trends and alert you to items that may require your attention (even if you don’t have a specific rule set).

Custom Events, User Journeys and Cohorts

Like health checks I am not going to go through these in detail, but if this is the sort of insight you need, then it is possible to access it here too. If you need to log custom data in Application Insights you can do that too using Custom Events.

What are you waiting for?!

I can honestly say I would be hard pressed these days to build anything without including Application Insights in it, particularly if I won’t have direct access to the hosting environment.

Troubleshooting runtime issues becomes much easier with the details you can glean from walking request stacks as presented by Application Insights. I’ve isolated and fixed more than my fair share of runtime issues (mostly configuration related) without ever needing to try and reproduce locally because I could quickly tell via the telemetry where things were going wrong.

Happy days! 😎


Provide non-admin users with read-only access to Service Endpoints in VSTS

I am currently transitioning some work to another team in our business. Part of this transition has been to pre-configure various Service Endpoints in Visual Studio Team Services (VSTS) to provide a way for the new team to deploy into target Azure environments without the team necessarily having direct or privileged access into those Azure environments.

In this post I am going to look at how you can grant users access to these Service Endpoints without them being able to modify them. This post will also be useful if you’ve configured Service Endpoints (as an admin) and then others on the team (who are non-admins) are unable to see them.

Note that this advice applies to any Service Endpoint – not just Azure!

By default only users who are members of the following groups can see Service Endpoints:

– Project Admins
– Endpoint Admins
– Endpoint Creators.

It’s unlikely that you want all your team members to hold these roles, so let’s see how we can grant rights to use Service Endpoints without being an admin!

We’re going to complete this task with an existing Service Endpoint, but you should hopefully see how you can do this at the same time you set up a new Endpoint in future.

Open up your Team Project and in the top navigation mouse over the settings (cog) icon and from the context menu click “Services”.

Service Endpoints

Once the Endpoints page has loaded, select the Endpoint you wish to allow non-admin users to see.

Selected Endpoint

Now click on ‘Roles’ to display the currently assigned users and groups and their permissions (the current list will only contain users or groups at an ‘Administrators’ level).

Roles Screen

Now we’re in the right place to add our additional read-only users or groups!

Click on the ‘+ Add’ button and the Add user dialog is displayed. Ensure that the ‘Role’ is set to ‘User’ and then find the User or Group you want to assign this right to. In our demo below we are allowing the current project’s Contributors group to use Endpoints.

Add user dialog

Once you click the ‘Add’ button the user or group will be granted read-only rights to the Endpoint. This will allow them to find or use the Endpoint in Build or Release Management Definitions (like below).

Release Definition

Happy (secured) days! 😎


Moving from Azure VMs to Azure VM Scale Sets – Runtime Instance Configuration

In my previous post I covered how you can move from deploying a solution to pre-provisioned Virtual Machines (VMs) in Azure to a process that allows you to create a custom VM Image that you deploy into VM Scale Sets (VMSS) in Azure.

As I alluded to in that post, one item we will need to take care of in order to truly move to a VMSS approach using a VM image is to remove any local static configuration data we might bake into our solution.

There are a range of options you can move to when going down this path, from solutions you custom build to running services such as Hashicorp’s Consul.

The environment I’m running in is fairly simple, so I decided to focus on a simple custom build. The remainder of this post covers the approach I’ve used to build a solution that works for me, and that perhaps might inspire you.

I am using an ASP.Net Web API as my example, but I am also using a similar pattern for Windows Services running on other VMSS instances – just the location your startup code goes will be different.

The Starting Point

Back in February I blogged about how I was managing configuration of a Web API I was deploying using VSTS Release Management. In that post I covered how you can use the excellent Tokenization Task to create a Web Deploy Parameters file that can be used to replace placeholders on deployment in the web.config of an application.

My sample web.config is shown below.
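The original sample isn’t embedded here, but the important part had roughly this shape (the setting names are illustrative; the double-underscore values are the tokens the Tokenization Task swaps out at deployment time):

```xml
<configuration>
  <appSettings>
    <!-- Tokens replaced at deployment time by the Tokenization Task -->
    <add key="APPINSIGHTS_INSTRUMENTATIONKEY" value="__AppInsightsKey__" />
    <add key="CosmosAccountUrl" value="__CosmosAccountUrl__" />
    <add key="CosmosAccessKey" value="__CosmosAccessKey__" />
  </appSettings>
</configuration>
```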

The problem with this approach when we shift to VM Images is that these values are baked into the VM Image, which is the build output and can be deployed to any environment. I could work around this by building VM Images for each environment I deploy to, but frankly that is less than ideal and breaks the idea of “one binary (immutable VM), many environments”.

The Solution

I didn’t really want to go down the route of service discovery using something like Consul, and I really only wanted to use Azure’s default networking setup. That networking requirement ruled out the custom private DNS I would need for any configuration service discovery based on hostname lookup.

…and… to be honest, with the PaaS services I have in Azure, I can build my own solution pretty easily.

The solution I did land on looks similar to the below.

  • Store runtime configuration in Cosmos DB and geo-replicate this information so it is highly available. Each VMSS setup gets its own configuration document which is identified by a key-service pair as the document ID.
  • Leverage a read-only Access Key for Cosmos DB because we won’t ever ask clients to update their own config!
  • Use Azure Key Vault to store the Cosmos DB Account and Access Key that can be used to read the actual configuration. Key Vault is regionally available by default so we’re good there too.
  • Configure an Azure AD Service Principal with access to Key Vault to allow our solution to connect to Key Vault.

I used a conventions-based approach to configuration, so the whole process works off the VMSS instance name and the service type requesting configuration. You can see this in the code below in the URL used to access Key Vault and in the Cosmos DB document ID, both of which follow the same convention.

The resulting changes to my Web API code (based on the earlier web.config sample) are shown below. This all occurs at application startup time.
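The sample below is a sketch of the shape of those changes rather than the original gist (the Key Vault URL, Secret naming convention and Cosmos database/collection names are assumptions that follow the conventions described above):

```csharp
using System;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.KeyVault;

public static class RuntimeConfiguration
{
    // Values the rest of the application reads once startup has completed.
    public static string AppInsightsKey { get; private set; }

    public static void Load(string vmssPrefix, string serviceType)
    {
        // Authenticate to Key Vault using the AAD Service Principal
        // (the GetToken callback is shown in the next sketch).
        var keyVault = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(AdalHelper.GetToken));

        // Convention: Secret names derive from the VMSS prefix and service type.
        var vault = "https://myconfigvault.vault.azure.net/secrets/";
        var cosmosUrl = keyVault
            .GetSecretAsync($"{vault}{vmssPrefix}-{serviceType}-cosmosurl").Result.Value;
        var cosmosKey = keyVault
            .GetSecretAsync($"{vault}{vmssPrefix}-{serviceType}-cosmoskey").Result.Value;

        // Read this VMSS group's configuration document using the read-only key.
        var cosmos = new DocumentClient(new Uri(cosmosUrl), cosmosKey);
        Document config = cosmos.ReadDocumentAsync(
            UriFactory.CreateDocumentUri("configuration", "services",
                $"{vmssPrefix}-{serviceType}")).Result.Resource;

        AppInsightsKey = config.GetPropertyValue<string>("AppInsightsKey");
    }
}
```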

I have also defined a default Application Insights account into which any instance can log should it have problems (which includes not being able to read its expected Application Insights key). This is important as it allows us to troubleshoot issues without needing to get access to the VMSS instances.

Here’s how we authorise our calls to Key Vault to retrieve our initial configuration Secrets (the GetToken callback handed to the KeyVaultClient in the sketch above).
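A sketch of that callback, ADAL-based, with hypothetical setting names for the Service Principal credentials:

```csharp
using System.Configuration;
using System.Threading.Tasks;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public static class AdalHelper
{
    // Matches the KeyVaultClient.AuthenticationCallback delegate signature.
    public static async Task<string> GetToken(string authority, string resource, string scope)
    {
        // Service Principal (client) credentials - hypothetical setting names.
        var clientCredential = new ClientCredential(
            ConfigurationManager.AppSettings["AadClientId"],
            ConfigurationManager.AppSettings["AadClientSecret"]);

        var context = new AuthenticationContext(authority);
        var result = await context.AcquireTokenAsync(resource, clientCredential);
        return result.AccessToken;
    }
}
```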

My goal was to make configuration easily manageable across multiple VMSS instances which requires some knowledge around how VMSS instance names are created.

The basic details are that they consist of a hostname prefix (based on what you input at VMSS creation time) that is appended with a base-36 (hexatrigesimal) value representing the actual instance. There’s a great blog from Guy Bowerman from Microsoft that covers this in detail so I won’t reproduce it here.

The final piece of the puzzle is the Cosmos DB configuration entry which I show below.
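Its shape is along these lines (every field other than ‘id’ is illustrative):

```json
{
  "id": "demoapi-webapi",
  "AppInsightsKey": "00000000-0000-0000-0000-000000000000",
  "CacheEndpoint": "https://example-cache.local/",
  "FeatureFlags": {
    "EnableDetailedLogging": false
  }
}
```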

The ‘id’ field maps to the VMSS instance prefix that is determined at runtime based on the name you used when creating the VMSS. We strip the trailing 6 characters to remove the unique component of each VMSS instance hostname.

The outcome of the three components (code changes, Key Vault and Cosmos DB) is that I can quickly add or remove VMSS groups in configuration, change where their configuration data is stored by updating the Key Vault Secrets, and even update running VMSS instances by changing the configuration settings and then forcing a restart on the VMSS instances, causing them to re-read configuration.

Is the above the only or best way to do this? Absolutely not 🙂

I’d like to think it’s a good way that might inspire you to build something similar or better 🙂

Interestingly, getting to this stage has also made me realise there might be some value in moving this solution to Service Fabric in future, though I am more inclined to shift to containers running under the control of an orchestrator like Kubernetes.

What are your thoughts?

Until the next post!


Moving from Azure VMs to Azure VM Scale Sets – VM Image Build

I have previously blogged about using Visual Studio Team Services (VSTS) to securely build and deploy solutions to Virtual Machines running in Azure.

In this, and following posts I am going to take the existing build process I have and modify it so I can make use of VM Scale Sets to host my API solution. This switch is to allow the API to scale under load.

My current setup is very much fit for purpose for the limited trial it’s been used in, but I know I’ll see at least 150 times the traffic when I am running at full scale in production, and while my trial environment barely scratches the surface in terms of consumed resources, I don’t want to have to capacity plan to the nth degree for production.

Shifting to VM Scale Sets with autoscale enabled will help me greatly in this respect!

Current State of Affairs

Let’s refresh ourselves with what is already in place.

Build

My existing build is fairly straightforward – we version the code (using a PowerShell script), restore packages, build the solution and then finally make sure all our artifacts are available for use by the Release Management process.

Existing build process

The output of this build is a Web Deploy package along with a PowerShell DSC module that configures the deployment on the target VM.

Release Management

I am using multiple Environments for Release Management to manage transformations of the Web Deploy Parameters file along with the Azure Subscription / Resource Group being deployed to. The Tasks in each Environment are the same though.

My Release Management Tasks (as shown below) open the NSG to allow DSC remote connections from VSTS, transform the Web Deploy Parameters file, find the VMs in a particular Azure Resource Group, copy the deployment package to each VM, run the DSC script to install the solution, before finally closing the NSG again to stop the unwashed masses from prying into my environment.

Existing release process

All good so far?

What’s the goal?

The goal is to make the minimum amount of changes to existing VSTS and deployment artifacts while moving to VM Scale Sets… sounds like an interesting challenge, so let’s go!

Converting the Build

The good news is that we can retain the majority of our existing Build definition.

Here are the items we do need to update.

Provisioning PowerShell

The old deployment approach leveraged PowerShell Desired State Configuration (DSC) to configure the target VM and deploy the custom code. The DSC script to achieve this is shown below.
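The gist isn’t embedded here; in outline the DSC looked something like this (a hedged reconstruction, with illustrative resource names and paths):

```powershell
Configuration DeployWebApi
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost'
    {
        # Ensure IIS and ASP.Net support are present on the target VM.
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }

        WindowsFeature AspNet45
        {
            Ensure    = 'Present'
            Name      = 'Web-Asp-Net45'
            DependsOn = '[WindowsFeature]IIS'
        }

        # Unpack the Web Deploy package produced by the build.
        Archive WebContent
        {
            Ensure      = 'Present'
            Path        = 'C:\temp\WebApi.zip'
            Destination = 'C:\inetpub\wwwroot'
            DependsOn   = '[WindowsFeature]AspNet45'
        }
    }
}
```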

The challenge with the above PowerShell is it assumes the target VM has been configured to allow WinRM / DSC to run. In our updated approach of creating a VM Image this presents some challenges, so I redeveloped the above script so it doesn’t require the use of DSC. The result is shown below.
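Again as a hedged reconstruction rather than the original gist, the plain PowerShell equivalent (the DeployVmSnap.ps1 referenced later) looks something like this; the paths and the Web Deploy invocation are assumptions:

```powershell
# Plain PowerShell equivalent of the DSC above - safe to run under Packer,
# which executes scripts locally on the temporary VM without WinRM/DSC plumbing.

# Install IIS and ASP.Net 4.5 support.
Install-WindowsFeature -Name Web-Server, Web-Asp-Net45

# Install Web Deploy so we can unpack the package (assumes the installer
# has been copied onto the VM alongside this script).
Start-Process msiexec.exe -ArgumentList '/i C:\temp\WebDeploy_amd64_en-US.msi /qn' -Wait

# Deploy the Web Deploy package produced by the build into the Default Web Site.
& 'C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe' `
    -verb:sync `
    -source:package='C:\temp\WebApi.zip' `
    -dest:auto
```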

As an aside, we could also drop the use of the Parameters file here too. As we’ll see in another post, we need to make the VM Image stateless, so any local web.config changes that are environment-specific are problematic and are best excluded from the resulting image.

Network Security Group Script

In the new model, which prepares a VM Image, we no longer need the Azure PowerShell script that opens / closes the Network Security Group (NSG) on deployment, so it’s removed in the new process.

No more use of Release Management

As the result of our Build is a VM Image we no longer need to leverage Release Management either, making our overall process much simpler.

The New Build

The new Build definition is shown below – you will notice the above changes have been applied, along with two new Tasks. The aspect of this I am most happy about is that our core build remains mostly unchanged – we only had to add two Tasks and change one PowerShell script to make this work.

New build process

Let’s look at the new Tasks.

Build Machine Image

This Task utilises Packer from Hashicorp to prepare a generalised Windows VM image that we can use in a VM Scale Set.

The key item to note: you need an Azure Subscription in which a temporary VM and the final generalised VHD can be created, so that Packer can build the baseline image for you.

New Build Packer Task

You will notice we are using the existing artifacts staging directory as the source of our configuration PowerShell (DeployVmSnap.ps1), which is used by Packer to configure the host once the VM is created from an Azure Gallery Image.

The other important item here is the use of the output field. This will contain the fully qualified URL in blob storage where the resulting packed image will reside. We can use this in our next step.

Create VM Image Registration

The last Task I’ve added is to invoke an Azure PowerShell script, which is just a PowerShell script, but with the necessary environmental configuration to allow me to execute Cmdlets that interact with Azure’s APIs.

The result of the previous Packer-based Task is a VHD sitting in a Blob Storage account. While we can use this in various scenarios, I am interested in ensuring it is visible in the Azure Portal and can be used in VM Scale Sets that utilise Managed Disks.

The PowerShell script is shown below.
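The original script isn’t embedded here, but the core registration logic looks like the sketch below (the parameter names are mine; the cmdlets are the AzureRM.Compute ones of the day):

```powershell
param (
    [string]$ResourceGroupName,
    [string]$Location,
    [string]$ImageName,
    [string]$PackedImageUri   # the Packer Task's output - the VHD URL in blob storage
)

# Build an image configuration that points at the generalised VHD.
$imageConfig = New-AzureRmImageConfig -Location $Location
$imageConfig = Set-AzureRmImageOsDisk -Image $imageConfig `
    -OsType Windows -OsState Generalized -BlobUri $PackedImageUri

# Register the VM Image so it is visible in the Portal and usable
# with Managed Disk-based VM Scale Sets.
New-AzureRmImage -ResourceGroupName $ResourceGroupName `
    -ImageName $ImageName -Image $imageConfig
```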

…and here is how it is used in the Task in VSTS…

New build VM Image

You can see how we have utilised the Packer Task’s output parameter as an input into this Task (it’s in the “Script Arguments” box at the bottom of the Task).

The Result

Once we have this configured and running the result is a nice crisp VM Image that can be used in a VM Scale Set. The below screenshot shows you how this looks in my environment – I wrapped the Azure Storage Account where the VHDs live, along with the VM Image registrations in the same Resource Group for cleaner management.

Build Output

There are still some outstanding items we need to deal with, specifically: configuration management (our VM Image has to be stateless) and VM Scale Set creation using the Image. We will cover these two items in the following posts.

For now I hope you have a good grasp on how you can convert an existing VSTS build that deploys to existing VMs to one that produces a generalised VM Image that you can use either for new VMs or in VM Scale Sets.

Until the next post.

🙂

Want to see how I dealt with instance configuration? Then have a read of my next post in this series.


Retrieve Work Item Discussion History using Visual Studio Online’s REST API

At TechEd North America Microsoft announced the availability of a preview REST API for Visual Studio Online. Anyone who has spent time around Team Foundation Server will know that it’s always had a strongly-typed C# SDK which can be used to extract and manipulate work items held in TFS. This REST API is new and will eventually be available for the on-premise hosted version of TFS.

In a lucky coincidence I’m doing some work that requires me to extract work item information for other purposes, so I decided to give the API a whirl. As you can see it went pretty well!

*Strictly* speaking, it isn’t a complete Work Item, as by default Attachments and History are both excluded. Interestingly, both of these tend to be pain points for other traditional tools to deal with too :).

Luckily the new REST API offers support for retrieval of “Updates” which represent the set of fields changed on each update to the Work Item. This can be any combination of fields or comments directly added to the history by users.

The demo code below shows how you can build out the history so it displays “Discussion Only” items and excludes all others. As it’s early days for the API, maybe we’ll see the sort of logic covered here wrapped in an API call so you don’t need to parse these objects locally.
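The original gist isn’t reproduced here, so the sketch below shows the core idea (the account, credentials and work item ID are placeholders, the JSON parsing assumes Json.NET, and the api-version string may have differed in the preview):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

class WorkItemHistory
{
    static async Task Main()
    {
        // Alternate Credentials from your VSO profile (Basic auth) - placeholders.
        var credentials = Convert.ToBase64String(
            Encoding.ASCII.GetBytes("username:password"));

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", credentials);

            // Retrieve all Updates for work item 42 (account name is a placeholder).
            var json = await client.GetStringAsync(
                "https://myaccount.visualstudio.com/DefaultCollection/_apis/wit/workitems/42/updates?api-version=1.0-preview.2");

            // Keep only updates where a user added to the discussion (System.History).
            foreach (var update in JObject.Parse(json)["value"])
            {
                var comment = update["fields"]?["System.History"]?["newValue"];
                if (comment != null)
                {
                    Console.WriteLine($"{update["revisedBy"]?["name"]}: {comment}");
                }
            }
        }
    }
}
```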

Once the entire solution is ready I’ll post the source up on Github. Note you’ll need to enable “Alternate Credentials” on your user profile to get this code working.


Use Tags to better manage your TFS work items

Updated: early in 2014 Microsoft released an update that now makes it possible to query using Tags.  See the full details online.

As many of you have found, at present it isn’t possible to customise work item templates to provide custom fields the same way you can in the on-premise version of TFS. While the out-of-the-box work item types mostly suffice there are cases where not being able to customise the template impacts your ability to properly report on the work you are managing.

To go some way to address this Microsoft released an update to Team Foundation Service in January 2013 to provide a ‘tag’ feature that allows users to add meta-data to a work item.

I must admit that until my most recent engagement I hadn’t looked closely at tags, as I’d found that a well thought through Area Path scheme tended to work well when querying work items. I’m now working on a project where we are performing a large number of migrations between platforms and must deal with a fair amount of variation between our source system and our target.

Our primary concern is to arrange our work by business unit – this will ultimately determine the order in which we undertake work due to a larger project that will affect availability of our target system for these business units. To this end, the Area Path is setup so that we can query based on Business Unit.

In addition to business unit we then have a couple of key classifications that would be useful to filter results on: the type of the source system, and the long-term plan for the system once migrated.

When creating our backlog we tagged each item as we brought it in, resulting in a set of terms we can easily filter with.

Backlog with tags

Which allows us to easily filter the backlog (and Sprint backlogs if you use a query view for a Sprint) like this:

Backlog with applied filter

The great thing with this approach is as you add a filter to a backlog the remaining filter options display the number of work items that match that element – in the example below we can see that there are 90 “team sites” and 77 “applications” that are also tagged “keep”. This is extremely useful and a great way to get around some of the work item customisation limitations in Team Foundation Service.

Filter applied

One item to be aware of with tags is that right now the best experience with them is on the web. If you regularly use Team Explorer or Excel to manage your TFS backlog then they may be of limited use. You can still filter using them within Excel, but the way they are presented (as a semi-colon-delimited list of keywords) means easily filtering by individual tags isn’t possible.

Excel filter


Quick Links To Help You Learn About Developing For The Cloud

Unsurprisingly, I think the way Cloud Computing is transforming the IT industry is also creating easier ways to learn and develop skills for the Cloud. In this post I’m going to run down what I think are some of the best ways to start dipping your toe into this space if you haven’t already.

Sign up for a free trial

This is easy AND low cost. Turn up to the sign-up page for most major players and you’ll get free or low-cost services for a limited period. Sure, you couldn’t start the next Facebook at this level, but it will give you enough to start to learn what’s on offer. You can run VMs, deploy solutions, utilise IaaS, PaaS and SaaS offerings and generally kick the tyres of the features of each. At the time of writing the major players include Amazon Web Services, Google App Engine, Office 365, the Rackspace Cloud and Windows Azure.

Learn the APIs and use the SDKs

Each of Amazon, Azure, Google, Office 365 and Rackspace offer some form of remote programmable API (typically presented as REST endpoints). If you’re going to move into Cloud from traditional hosting or system development practices then starting to learn about programmable infrastructure is a must. Understanding the APIs available will depend on leveraging each provider’s existing documentation.

If you aren’t a fan of working so close to the wire you can always leverage one of the associated SDKs in the language of your choice.

The great thing about having .Net support is you can then leverage those SDKs directly in PowerShell and automate a lot of items via scripting.

Developer Tool Support

While having an SDK is fine, there’s also a need to support developers within whatever IDE they happen to be using. Luckily you get support here too.

Source Control and Release Management

The final piece of the puzzle and one not necessarily tied to the individual Cloud providers is where to put your source code and how to deploy it.

  • Amazon Web Services: You can leverage Elastic Beanstalk for deployment purposes (this is a part of the Visual Studio and Eclipse toolkits). http://aws.amazon.com/elasticbeanstalk/
  • Google App Engine: Depending on language you have a few options for auto-deploying applications using command-line tools from build scripts. Eclipse tooling (covered above) also provides deployment capabilities.
  • Rackspace Cloud: no publicly available information on build and deploy.
  • Windows Azure: You can leverage deployment capabilities out of Visual Studio (probably not the best solution though) or utilise the in-built Azure platform support to deploy from a range of hosted source control providers such as BitBucket (Git or Mercurial), Codeplex, Dropbox (yes, I know), GitHub or TFS. A really strong showing here from the Azure platform! http://www.windowsazure.com/en-us/develop/net/common-tasks/publishing-with-git/

So, there we have it – probably one of the most link-heavy posts you’ll ever come across – hopefully the links will stay valid for a while yet! If you spot anything that’s dead or just plain wrong, leave me a comment.

HTH.


SharePoint Online 2013 ALM Practices

SharePoint has always been a bit of a challenge when it comes to structured ALM and developer practices, something Microsoft partially addressed with the release of SharePoint and Visual Studio 2010. Deploying and building solutions for SharePoint 2013 pretty much retains most of the IP from 2010, with the noted deprecation of Sandbox Solutions (meaning they’ll be gone in SharePoint vNext).

As part of the project I’m leading at Kloud at the moment we are rebuilding an Intranet to run on SharePoint Online 2013, so I wanted to share some of the Application Lifecycle Management (ALM) processes we’ve been using.

Packaging

Most of the work we have been doing to date has leveraged existing features within the SharePoint core – we have, however, spent time utilising the Visual Studio 2012 SharePoint templates to package our customisations so they can be moved between multiple environments. SharePoint Online still provides support for Sandboxed Solutions and we’ve found that they provide a convenient way to deploy elements that are not developed as Apps. Designer packages can also be exported and edited in Visual Studio and produce a re-deployable package (which result in Sandboxed Solutions).

PowerShell

At the time of writing, the number of PowerShell Cmdlets for managing SharePoint Online is substantially smaller than for on-premise deployments. If you need to modify any element below a Site Collection you are pretty much forced to write custom tooling or perform the tasks manually – we have made a call in some cases to build tooling using the Client Side Object Model (CSOM) and in others to perform tasks manually.
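For a flavour of what that CSOM tooling looks like, here’s a minimal sketch (the URL, credentials and list name are placeholders):

```csharp
using System;
using System.Security;
using Microsoft.SharePoint.Client;

class CsomTooling
{
    static void Main()
    {
        // Placeholder password - in real tooling, prompt or read from secure storage.
        var password = new SecureString();
        foreach (var c in "placeholder-password") password.AppendChar(c);

        // Connect to a site below the Site Collection root - the level the
        // management Cmdlets of the day couldn't reach.
        using (var context = new ClientContext("https://tenant.sharepoint.com/sites/intranet"))
        {
            context.Credentials = new SharePointOnlineCredentials(
                "admin@tenant.onmicrosoft.com", password);

            var list = context.Web.Lists.GetByTitle("Site Pages");
            context.Load(list, l => l.Title, l => l.ItemCount);
            context.ExecuteQuery();

            Console.WriteLine($"{list.Title}: {list.ItemCount} items");
        }
    }
}
```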

Development Environment

Microsoft has invested some time in the developer experience around SharePoint Online and now provides you with free access to an “Office 365 Developer Site” which gives you a single-license Office 365 environment in which to develop solutions. The General Availability of Office 365 Wave 15 (the 2013 suite) sees these sites only being available for businesses holding enterprise (E3 or E4) licenses. Anyone else will need to utilise a 30 day trial tenant.

We have had each team member setup their own site and develop solutions locally prior to rolling them into our main deployment. Packaging and deployment is obviously key here as we need to be able to keep the developer instances in sync with each other and the easiest way to achieve that is with WSPs that can be redeployed as required.

One other thing we have done around development is to utilise an on-premise setup in a VM to provide developers with a more rapid development experience in some cases (and more transparent troubleshooting). As long as you mostly stick to the SharePoint CSOM, a lot of your development these days resides in JavaScript, which means you shouldn’t hit any snags from relying on on-premise / full-trust features in your delivered solutions.

Note that the Office 365 Developer Site is a single-license environment which means you can’t do multi-user testing or content targeting. That’s where test environments come into play!

Test Environment

The best way to achieve a more structured ALM approach with Office 365 is to leverage an intermediate test environment – the easiest way for anyone to achieve this is to register for a trial Office 365 tenant – while only technically available for 30 days this still provides you with the ability to test prior to deploying to your production environment.

Once everything is tested and good to go into production you’re already in a position to know the steps involved in deployment!

As you can see – it’s still not a perfect world for SharePoint ALM, but with a little work you can get to a point where you are at least starting to enforce a little rigour around build and deployment.

Hope this helps!
