Category Archives: Cloud

Provide non-admin users with read-only access to Service Endpoints in VSTS

I am currently transitioning some work to another team in our business. Part of this transition has been to pre-configure various Service Endpoints in Visual Studio Team Services (VSTS) to provide a way for the new team to deploy into target Azure environments without the team necessarily having direct or privileged access into those Azure environments.

In this post I am going to look at how you can grant users access to these Service Endpoints without them being able to modify them. This post will also be useful if you’ve configured Service Endpoints (as an admin) and then others on the team (who are non-admins) are unable to see them.

Note that this advice applies to any Service Endpoint – not just Azure!

By default only users who are members of the following groups can see Service Endpoints:

– Project Admins
– Endpoint Admins
– Endpoint Creators.

It’s unlikely that you want all your team members to hold these roles, so let’s see how we can grant rights to use Service Endpoints without being an admin!

We’re going to complete this task with an existing Service Endpoint, but you should hopefully see how you can do the same at the time you set up a new Endpoint in future.

Open up your Team Project and in the top navigation mouse over the settings (cog) icon and from the context menu click “Services”.

Service Endpoints

Once the Endpoints page has loaded, select the Endpoint you wish to allow non-admin users to see.

Selected Endpoint

Now click on ‘Roles’ to display the currently assigned users and groups and their permissions (the current list will only contain users or groups at an ‘Administrators’ level).

Roles Screen

Now we’re in the right place to add our additional read-only users or groups!

Click on the ‘+ Add’ button and the Add user dialog is displayed. Ensure that the ‘Role’ is set to ‘User’ and then find the User or Group you want to assign this right to. In our demo below we are allowing the current project’s Contributors group to use Endpoints.

Add user dialog

Once you click the ‘Add’ button the user or group will be granted read-only rights to the Endpoint. This will allow them to find or use the Endpoint in Build or Release Management Definitions (like below).

Release Definition

Happy (secured) days! 😎


Azure AD B2C Custom Attributes: How to easily find their unique key value

When working with Azure Active Directory B2C you can create what are known as Custom Attributes which allow you to store data about users beyond the attributes (firstname, lastname, etc) that are available out-of-the-box.

When you want to work with these Custom Attributes in a solution you build you will need to know the unique key of the attribute in order to reference it.

What do I mean by this? Let’s take a quick look using an example.

Note that you will need to be a B2C Global Admin in order to perform some tasks covered in this post.

Creating Custom Attributes

These are created via the Azure Management Portal. In my sample I am going to add an attribute to hold a tier rating for a user (say, Gold, Silver and Bronze) called “TierRating”.

The video below shows how you can do this.

Find Attribute’s Unique Key Value

Now we have this Custom Attribute created we will want to use it in our solution. If you’re eagle-eyed you may find in the Portal that these Custom Attributes appear to be named ‘extension_AttributeName’ (i.e. ‘extension_TierRating’).

This won’t work in your solution though 🙂

When you create a Custom Attribute this is actually being done for you by a custom application called the “b2c-extensions-app” that is deployed to all B2C tenants at provisioning time.

Why am I telling you this? I am telling you this because it’s the key to determining the Custom Attribute’s unique key value 🙂

You will need the Application ID for the b2c-extensions-app, which you can find in the Portal as shown in the video below.

Using it in your code

Now we have this value (in our demo video the value is ‘bb10b272-0267-46f0-8b6f-4367e8b1b1e6’) we can start to interact with Custom Attributes in our code.

Firstly we need to drop the dashes so it becomes ‘bb10b272026746f08b6f4367e8b1b1e6’. We combine this with the “Name” value for the Attribute, along with a prefix of “extension_”.

So for our tier rating Custom Attribute the full key for it becomes ‘extension_bb10b272026746f08b6f4367e8b1b1e6_TierRating’.

A sample of how this key is used in our solution is shown below.
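For example, here’s a minimal C# sketch of building the key – the GUID is the b2c-extensions-app Application ID from our demo, and the attribute dictionary is just a hypothetical way you might pass the value in a Graph API call:

    // The b2c-extensions-app Application ID from our demo, with the dashes removed
    const string ExtensionsAppId = "bb10b272026746f08b6f4367e8b1b1e6";

    // Key format: extension_<extensions-app-id>_<AttributeName>
    string tierRatingKey = $"extension_{ExtensionsAppId}_TierRating";
    // -> "extension_bb10b272026746f08b6f4367e8b1b1e6_TierRating"

    // e.g. written into the attribute collection sent with a Graph API user update
    var attributes = new System.Collections.Generic.Dictionary<string, object>
    {
        [tierRatingKey] = "Gold"
    };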

This pattern is used for every Custom Attribute you create in this Directory.

So there we have it – the easiest way you can determine the actual unique key for a Custom Attribute!

Happy days 😎


Recommendations on using Terraform to manage Azure resources

If you’ve been working in the cloud infrastructure space for the last few years you can’t have missed the buzz around HashiCorp’s Terraform product. Terraform provides a declarative model for infrastructure provisioning that spans multiple cloud providers as well as on-premises services from the likes of VMware.

I’ve recently had the opportunity to use Terraform to do some Azure infrastructure provisioning so I thought I’d share some recommendations on using Terraform with Azure (as at January 2018). I’ll also preface this post by saying that I have only been provisioning Azure PaaS services (App Service, Cosmos DB, Traffic Manager, Storage and Application Insights) and haven’t used any IaaS components at all.

In the beginning

I needed to provide an easy way to provision around 30 inter-related services that together constitute the hosting environment for a single customer solution. Ideally I wanted a way to make it easy to re-provision these services as required.

I’ve used Azure Resource Manager (ARM) templates heavily in the past, but thought I could get some additional value from Terraform as it provides you with additional capabilities that aren’t present in ARM templates. As an example, right now you can’t provision Azure Storage Containers with ARM, but you can with Terraform.

I began, as I do with these sorts of templates, by incrementally defining resources and building the Terraform definition as I went. I got to the point where I decided to refactor some of the Terraform definitions to modularise the solution to hopefully make it a bit easier to understand and manage going forward.

When I did this refactor I also changed a bunch of resource naming schemes to better match my customer’s preferred standard. The net result of all this change was a substantial set of updates to apply to the test lab environment I had been incrementally updating as I went.

Now the fun begins

I ran ‘terraform plan’ which generated my execution plan (always make sure to provide an “out” parameter so you know ‘apply’ will match the plan exactly). I then ran ‘apply’ and left it running while I went to lunch.
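For reference, providing the “out” parameter looks like this (‘tfplan’ is just an example file name):

    terraform plan -out=tfplan
    terraform apply tfplan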

When I returned about 45 minutes later my ‘terraform apply’ was still running, seemingly stuck on destroying one of the resources.

A quick visual check in the Portal of the Azure Resource Group these resources were in suggested that everything I wanted provisioned had been provisioned successfully.

Given this state of affairs I Ctrl+C’d the job, to which Terraform advised me:

Interrupt received.
Please wait for Terraform to exit or data loss may occur.

So, I gave the job a few more minutes to gracefully exit, at which point I sent another Ctrl+C and the job exited with this heart-warming message:

Two interrupts received. Exiting immediately. Note that data loss may have occurred.

Out of interest I immediately ran ‘terraform plan’ to understand what Terraform thought was provisioned versus what actually was.

The net result? Terraform had no idea that anything was provisioned!

A look at the local state file showed it was effectively empty. I restored the backup state file which it turned out was actually of minimal use because the delta between the backup and what I had just applied was too great – the resulting plan looked like an Azure resource massacre about to happen!

What to do?

I thought at this point that I was using the tooling incorrectly – how could I so easily get into this state? If I was using this to manage a production environment I’d be dead in the water.

Additional reading and conversations with others confirmed that this is a known, long-standing pain point with Terraform – lose your local state and you are in a world of pain. At this time, you can’t even easily rebuild this local state without writing a bunch of Imports, which means you need to know what to import, and you lose tracking of elements like random string generation at the same time.

Recommendations and Observations

Out of this experience I have some recommendations and observations around how I see Terraform (in its current state) fitting into environmental management in Azure:

  • Use Resource / Resource Group locks (delete or read-only) always: this applies even outside of use of Terraform. This will stop you from accidentally changing important resources. While you can include the definition of resource locks in your Terraform definitions I’d recommend you leave them out. If you use a Contributor-level user to do your deployments Terraform will fail when it tries to lock Resource Groups.
  • Make smaller, more frequent changes: this equates to a smaller delta between what’s in your state, and what’s in the plan. This means if you do need to recover state from backup you will have less of a change to deal with.
  • Consider your use of Terraform features like the ‘random string provider’ – you could move these to be input parameters that you can generate outside of Terraform (there’s a sketch of this after this list). This means you create a fixed set of inputs, so that even if you lose state you can be assured that creating resources with “random” name components will be consistent with your last successful execution.
  • Use Resource Groups with small sets of Resources: fewer resources to deal with in event of a failure.
  • Consider Terraform as an initial provisioning tool for production and a re-provisioning tool for all dev / test and low complexity environments.
  • Use Terraform to detect drift: if you deploy an environment with Terraform, then set up the same definition as a CI build that simply runs ‘terraform plan’ against the deployed environment, using the state you generated on initial deployment as an input. If you have any change (add / delete) as the result of the ‘plan’ then you can fail the build and alert your team to investigate accordingly (an example command is shown after this list).
  • Consider for Blue / Green Infrastructure deployments for production only: if you want to push completely fresh infrastructure each time then Terraform is a good tool to consider. The usability of this approach is determined by complexity of your environment and the mix of utility / non-utility services you are deploying. This can work well with a slower cadence of release (monthly or above), even if your environment is fairly complex.
  • Use Azure Storage account backing for your state file (key for Terraform Open Source users). You can do this by setting up an Azure Storage Account and then defining the following in each of your TF files:
    terraform {
      backend "azurerm" {
        storage_account_name = "myterraformstore"
        container_name       = "tfstate"
      }
    }
    

    and then when you execute the init step you provide the additional parameters:

    terraform init -backend-config="access_key=<STORAGE_ACCNT_KEY>" -backend-config="key=name.ofyour.tfstate"
    

    The shame here right now is that you don’t get the versioning that those using AWS S3 buckets have access to.

  • Always write an ‘Import’ script once you’ve provisioned key environmental components you can’t afford to lose (an example is shown below).
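Here’s a minimal sketch of the input-parameter approach in place of the random string provider – the variable and resource names are hypothetical:

    # Instead of a random_string resource, take the value as an input
    # generated once, outside Terraform, and fixed per environment.
    variable "name_suffix" {
      description = "Name suffix generated outside Terraform"
    }

    resource "azurerm_storage_account" "example" {
      name                     = "stor${var.name_suffix}"
      resource_group_name      = "my-resource-group"
      location                 = "australiaeast"
      account_tier             = "Standard"
      account_replication_type = "LRS"
    }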
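For the drift-detection build, ‘terraform plan’ supports a flag that makes this easy to script – exit code 0 means no changes, 1 means an error, and 2 means changes were detected:

    terraform plan -detailed-exitcode
    # exit code 2 = drift detected - fail the build and alert the team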
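And a sketch of an ‘Import’ script entry – the resource address and subscription scope below are placeholders for your own:

    terraform import azurerm_resource_group.my_rg /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/my-resource-group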

As a side note I notice that there is now an Azure Go SDK dependency for the Terraform Azure provider which is being maintained by Microsoft. I do wonder if this means that Terraform loses some of its appeal because new Azure features for Terraform will invariably be tied to the cadence and capabilities of the Go SDK which is generated against the official Microsoft Azure API. Will this become the way to block provider features that violate the Azure API definition? I guess time will tell.

As with all tools, Terraform has its strengths and weaknesses – hopefully as the product continues to mature we’ll see key features like re-build / import become part of the core value proposition (and not simply appear in the Enterprise version as a paid value add).


Use Azure Health to track active incidents in your Subscriptions

Yesterday afternoon while doing some work I ran into an issue in Azure. Initially I thought this issue was due to a bug in my (new) code and went to my usual debugging helper Application Insights to review what was going on.

The below graphs are a little old, but you can see a clear spike on the left of each graph, which is where we started seeing issues – my first clue that something was not right!

App Insights views

Initially I thought this was a compute issue as the graphs are for a VM-hosted REST API (left) and a Functions-based backend (right).

At this point there was no service status indicating an issue so I dug a little deeper and reviewed the detailed Exception information from Application Insights and realised that the source of the problem was the underlying Service Bus and Event Hub features that we use to glue together our services.

You can see the increased error rate from the Service Bus Metrics view below.

Service Bus Metrics

While I was doing this an alert popped up in the Portal advising a service incident and directed me to the Azure Service Health feature in order to view the full incident details and also to track it.

On the Azure Health page I could see an active incident and decided to try out the alerting feature to track this during a commute home.

I clicked on the Add Alert option and configured a new email-based alert. You can also push alerts into your preferred IT Service Management (ITSM) solution; we aren’t yet using an ITSM platform for this solution, but that would be our choice in future!

In Services I chose Service Bus and Event Hubs and for Regions I selected the two Australian Regions. Note that I had to set up an Action Group as I hadn’t used the feature previously – in the screenshot below I am just reusing the one I previously set up.

Alerts Setup

A short while after saving the Alert configuration the recipients in the Action Group started to receive update emails containing the most recent status of the incident. A sample is shown below.

Notice Email

About 45 minutes after this alert we received a resolution notification.

The amount of time this simple setup saved our team is pretty amazing, and if you’re not using this feature already you should go and explore it in the Portal and set it up for your key Azure components.

What a great early Christmas present!

😎


Azure API Management: 200 OK response but no backend traffic

I’m noting this post down in the “if only someone had already made a big noise about this I might have saved some time” category.

The work I’m doing at present involves fronting some APIs with Azure API Management and then exposing them securely.

When I hit the moment today where I thought I was done, I started testing, and no matter what I did I couldn’t get my backend service to respond – I could clearly see no traffic hitting the backend.

After double-checking my policies and doing a few more tests (only a couple of hours!) I happened across this Stack Overflow question and its answer.

It turns out that I had, somewhere along the line, removed the “forward-request” policy from the Policy applying to all APIs published via API Management.

So how to fix? As Darrell says, find the offending Policy and add the missing item back.

Edit Policy

When done it should look like the image below.

API Policy
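In text form, the global Policy with ‘forward-request’ restored looks like this minimal sketch (your inbound section may contain more than mine did):

    <policies>
        <inbound />
        <backend>
            <forward-request />
        </backend>
        <outbound />
    </policies>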

… and your API calls will now work as expected and not just give you back 200 OK! 😎


Continuous Deployment for Docker with VSTS and Azure Container Registry

I’ve been watching with interest the growing maturity of Containers, and in particular their increasing penetration as a hosting and deployment artefact in Azure. While I’ve long believed them to be the next logical step for many developers, until recently they have had limited appeal to many everyday developers as the tooling hasn’t been there, particularly in the Microsoft ecosystem.

Starting with Visual Studio 2015, and with the support of Docker for Windows, I started to see this stack as viable for many.

In my current engagement we are starting on new features and decided that we’d look to ASP.Net Core 2.0 to deliver our REST services and host them in Docker containers running in Azure’s Web App for Containers offering. We’re heavy users of Visual Studio Team Services, and given Microsoft’s focus on Docker we didn’t see that there would be any blockers.

Our flow at high level is shown below.

Build Pipeline

1. Developer with Visual Studio 2017 and Docker for Windows for local dev/test
2. Checked into VSTS and built using VSTS Build
3. Container Image stored in Azure Container Registry (ACR)
4. Continuously deployed to Web Apps for Containers.

We hit a few sharp edges along the way, so I thought I’d cover off how we worked around them.

Pre-requisites

There are a few things you need to have in place before you can start to use the process covered in this blog. Rather than reproduce them here in detail, go and have a read of the following items and come back when you’re done.

  • Setting up a Service Principal to allow your VSTS environment to have access to your Azure Subscription(s), as documented by Donovan Brown.
  • Create an Azure Container Registry (ACR), from the official Azure Documentation. Hint here: don’t use the “Classic” option as it does not support Webhooks which are required for Continuous Deployment from ACR.

See you back here soon 🙂

Setting up your Visual Studio project

Before I dive into this, one cool item to note, is that you can add Docker support to existing Visual Studio projects, so if you’re interested in trying this out you can take a look at how you can add support to your current solution (note that it doesn’t magically support all project types… so if you’ve got that cool XAML or WinForms project… you’re out of luck for now).

Let’s get started!

In Visual Studio do a File > New > Project. As mentioned above, we’re building an ASP.Net Core REST API, so I went ahead and selected .Net Core and ASP.Net Core Web Application.

New Project - .Net Core

Once you’ve done this you get a selection of templates you can choose from – we selected Web API and ensured that we left Docker support on, and that it was on Linux (just saying that almost makes my head explode with how cool it is 😉 )

Web API with Docker support

At this stage we now have a baseline REST API with Docker support already available. You can run and debug locally via IIS Express or via Docker – give it a try :).

If you’ve not used this template before you might notice that there is an additional project in the solution that contains a series of Docker-related YAML files – for our purposes we aren’t going to touch these, but we do need to modify a couple of files included in our ASP.Net Core solution.

If we try to run a Docker build on VSTS using the supplied Dockerfile it will fail with an error similar to:

COPY failed: stat /var/lib/docker/tmp/docker-builder613328056/obj/Docker/publish: no such file or directory
/usr/bin/docker failed with return code: 1

Let’s fix this.

Add a new file to the project and name it “Dockerfile.CI” (or something similar) – it will appear as a sub-item of the existing Dockerfile. In this new file add the following, ensuring you update the ENTRYPOINT to point at your DLL.
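Here’s a sketch of what Dockerfile.CI can look like for an ASP.Net Core 2.0 Web API – the image tags and DLL name are assumptions you should adjust for your project:

    # Build stage: restore and publish inside a build container
    FROM microsoft/aspnetcore-build:2.0 AS build-env
    WORKDIR /app

    COPY . ./
    RUN dotnet restore
    RUN dotnet publish -c Release -o out

    # Runtime stage: copy the published output into the final image
    FROM microsoft/aspnetcore:2.0
    WORKDIR /app
    COPY --from=build-env /app/out .

    # Update this to point at your DLL
    ENTRYPOINT ["dotnet", "YourApi.dll"]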

This Dockerfile is based on a sample from Docker’s official documentation and uses a Docker Container to run the build, before copying the results to the actual final Docker Image that contains your app code and the .Net Core runtime.

We have one more change to make. If we do just the above, the project will fail to build because the default dockerignore file is stopping the copying of pretty much all files to the Container we are using for build. Let’s fix this one by updating the file to contain the following 🙂
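The dockerignore file only needs to stop local build output being copied – something like the below is enough (the exact entries are an assumption for your project):

    # Ignore local build output only, so source files reach the build container
    bin/
    obj/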

Now we have the necessary bits to get this up and running in VSTS.

VSTS build

This stage is pretty easy to get up and running now we have the updated files in our solution.

In VSTS create a new Build and select the Container template (right now it’s in preview, but works well).

Docker Build 01

On the next screen, select the “Hosted Linux” build agent (also now in preview, but works a treat). You need to select this so that you build a Linux-based Image, otherwise you will get a Windows Container which may limit your deployment options.

build container 02

We then need to update the Build Tasks to have the right details for the target ACR and to build the solution using the “Dockerfile.CI” file we created earlier, rather than the default Dockerfile. I also set a fixed name for the Image Name, primarily because the default selected by VSTS typically tends to be invalid. You could also consider changing the tag from $(Build.BuildId) to be $(Build.BuildNumber) which is much easier to directly track in VSTS.

build container 03

Finally, update the Publish Image Task with the same ACR and Image naming scheme.

Running your build should generate an image that is registered in the target ACR as shown below.

ACR

Deploy to Web Apps for Containers

Once the Container Image is registered in ACR, you can theoretically deploy it to any container host (Azure Container Instances, Web Apps for Containers, Azure Container Services), but for this blog we’ll look at Web Apps for Containers.

When you create your new Web App for Containers instance, ensure you select Azure Container Registry as the source and that you select the correct Repository. If you have added the ‘latest’ tag to your built Images you can select that at setup, and later enable Continuous Deployment.

webappscontainers

The result will be that your custom Image is deployed into your Web Apps for Containers instance, which will be available on ports 80 and 443 for the world to use.

Happy days!

I’ve uploaded the sample project I used for this blog to Github – you can find it at: https://github.com/sjwaight/docker-dotnetcore-vsts-demo

Also, please feel free to leave any comments you have, and I am certainly interested in other ways to achieve this outcome as we considered Docker Compose with the YAML files but ran into issues at build time.


Microsoft Open Source Roadshow – Free training on Azure – Canberra and Sydney

Microsoft Open Source Roadshow

I’m excited to have the opportunity to share Azure’s powerful Open Source support with more developers in November.

Our first run of these sessions in August proved popular, so if you, or someone you know, wants to learn more they can sign up below.

We’ll cover language support (application and Azure SDK), OS support (Linux, BSD), Datastores (MySQL, PostgreSQL, MongoDB), Continuous Deployment and, last, but not least, Containers (Docker, Container Registry, Kubernetes, et al).

We’re starting off in Canberra this time, then Sydney, so if you’re interested here are the links to register:

  • Canberra – Wednesday 1 November 2017: Register
  • Sydney – Friday 10 November 2017: Register

We’re also running two days in New Zealand in November if you know anyone who might want to come along.

If you have any questions you’d like to see answered at the days feel free to leave a comment.

I hope to see you there!


Microsoft Open Source Roadshow – Free training on Azure – Auckland and Wellington!

Microsoft Open Source Roadshow

Hello New Zealand friends!

I’m really happy to share that we are bringing the Open Source Roadshows to Auckland and Wellington in November 2017!

We’ll cover language support (application and Azure SDK), OS support (Linux, BSD), Datastores (MySQL, PostgreSQL, MongoDB, SQL Server on Linux), Continuous Deployment and, last, but not least, Containers (Docker, Container Registry, Kubernetes, et al).

If you’re interested in learning more here are the links to register:

  • Auckland – Tuesday 7 November 2017: Register
  • Wellington – Wednesday 8 November 2017: Register

If you have any questions you’d like to see answered at the days feel free to leave a comment.

I hope to see you there!


Speaking: Azure Functions at MUG Strasbourg – 28 September

I’m really excited about this opportunity to share the power of Azure with the developer and IT Pro community in France that is soon to gain local Azure Regions in which to build their solutions.

If you live in the surrounding areas I’d love to see you there. More details available via Meetup.


Moving from Azure VMs to Azure VM Scale Sets – Runtime Instance Configuration

In my previous post I covered how you can move from deploying a solution to pre-provisioned Virtual Machines (VMs) in Azure to a process that allows you to create a custom VM Image that you deploy into VM Scale Sets (VMSS) in Azure.

As I alluded to in that post, one item we will need to take care of in order to truly move to a VMSS approach using a VM image is to remove any local static configuration data we might bake into our solution.

There are a range of options you can move to when going down this path, from solutions you custom build through to running services such as HashiCorp’s Consul.

The environment I’m running in is fairly simple, so I decided to focus on a simple custom build. The remainder of this post is covering the approach I’ve used to build a solution that works for me, and perhaps might inspire you.

I am using an ASP.Net Web API as my example, but I am also using a similar pattern for Windows Services running on other VMSS instances – only the location your startup code goes in will differ.

The Starting Point

Back in February I blogged about how I was managing configuration of a Web API I was deploying using VSTS Release Management. In that post I covered how you can use the excellent Tokenization Task to create a Web Deploy Parameters file that can be used to replace placeholders on deployment in the web.config of an application.

My sample web.config looked much like the sketch below.
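The setting names here are illustrative – the double-underscore values are the tokens the Tokenization Task swaps out at deployment time:

    <configuration>
      <appSettings>
        <add key="CosmosDbEndpoint" value="__CosmosDbEndpoint__" />
        <add key="CosmosDbAuthKey" value="__CosmosDbAuthKey__" />
        <add key="AppInsightsInstrumentationKey" value="__AppInsightsKey__" />
      </appSettings>
    </configuration>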

The problem with this approach when we shift to VM Images is that these values are baked into the VM Image which is the build output, which in turn can be deployed to any environment. I could work around this by building VM Images for each environment to which I deploy, but frankly that is less than ideal and breaks the idea of “one binary (immutable VM), many environments”.

The Solution

I didn’t really want to go down the route of service discovery using something like Consul, and I really only wanted to use Azure’s default networking setup. This networking requirement meant no custom private DNS I could use in some form of configuration service discovery based on hostname lookup.

…and, to be honest, with the PaaS services I have in Azure, I can build my own solution pretty easily.

The solution I did land on looks similar to the below.

  • Store runtime configuration in Cosmos DB and geo-replicate this information so it is highly available. Each VMSS setup gets its own configuration document which is identified by a key-service pair as the document ID.
  • Leverage a read-only Access Key for Cosmos DB because we won’t ever ask clients to update their own config!
  • Use Azure Key Vault to store the Cosmos DB Account and Access Key that can be used to read the actual configuration. Key Vault is regionally available by default so we’re good there too.
  • Configure an Azure AD Service Principal with access to Key Vault to allow our solution to connect to Key Vault.

I used a conventions-based approach to configuration, so that the whole process works based on the VMSS instance name and the service type requesting configuration. You can see this in the code below, both in the URL used to access Key Vault and in the Cosmos DB document ID, which follows the same convention.

The resulting changes to my Web API code (based on the earlier web.config sample) are shown below. This all occurs at application startup time.
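Here’s a sketch of the shape of that startup code – the Vault name, Secret naming scheme and the ‘webapi’ service type are all assumptions for illustration:

    using System;

    public static class InstanceConfigConventions
    {
        // Hypothetical service type - set per deployed service
        private const string ServiceType = "webapi";
        private const string VaultBase = "https://myconfigvault.vault.azure.net/secrets";

        public static string GetVmssPrefix()
        {
            // e.g. "webtierapi000001" -> "webtierapi" (strip the base-36 instance suffix)
            string hostname = Environment.MachineName;
            return hostname.Substring(0, hostname.Length - 6);
        }

        // Convention-based Key Vault Secret URL, e.g. ".../webtierapi-webapi-cosmoskey"
        public static string GetSecretUrl(string secretSuffix) =>
            $"{VaultBase}/{GetVmssPrefix()}-{ServiceType}-{secretSuffix}";

        // The same key-service pair forms the Cosmos DB configuration document id
        public static string GetConfigDocumentId() => $"{GetVmssPrefix()}-{ServiceType}";
    }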

I have also defined a default Application Insights account into which any instance can log should it have problems (which includes not being able to read its expected Application Insights key). This is important as it allows us to troubleshoot issues without needing to get access to the VMSS instances.

Here’s how we authorise our calls to Key Vault to retrieve our initial configuration Secrets (this is the authentication callback handed to the KeyVaultClient in the startup code).
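The sketch below uses the standard ADAL client-credential callback pattern – the Service Principal’s ClientId and ClientSecret are placeholders and would be the only static configuration left on the instance:

    using System;
    using System.Threading.Tasks;
    using Microsoft.Azure.KeyVault;
    using Microsoft.IdentityModel.Clients.ActiveDirectory;

    public static class KeyVaultAuth
    {
        // Placeholders - sourced from the instance's minimal static config
        private const string ClientId = "<SERVICE-PRINCIPAL-CLIENT-ID>";
        private const string ClientSecret = "<SERVICE-PRINCIPAL-CLIENT-SECRET>";

        // Matches the KeyVaultClient.AuthenticationCallback delegate signature
        public static async Task<string> GetToken(string authority, string resource, string scope)
        {
            var authContext = new AuthenticationContext(authority);
            var credential = new ClientCredential(ClientId, ClientSecret);
            AuthenticationResult result = await authContext.AcquireTokenAsync(resource, credential);

            if (result == null)
                throw new InvalidOperationException("Failed to obtain a Key Vault access token.");

            return result.AccessToken;
        }
    }

    // Usage:
    // var kvClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(KeyVaultAuth.GetToken));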

My goal was to make configuration easily manageable across multiple VMSS instances which requires some knowledge around how VMSS instance names are created.

The basic details are that they consist of a hostname prefix (based on what you input at VMSS creation time) that is appended with a base-36 (hexatrigesimal) value representing the actual instance. There’s a great blog from Guy Bowerman from Microsoft that covers this in detail so I won’t reproduce it here.

The final piece of the puzzle is the Cosmos DB configuration entry which I show below.
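It looks something like the sketch below – the ‘id’ follows the naming conventions above, and the other fields are hypothetical settings your service would read at startup:

    {
      "id": "webtierapi-webapi",
      "ApplicationInsightsKey": "00000000-0000-0000-0000-000000000000",
      "ServiceBusConnectionString": "Endpoint=sb://...",
      "SomeFeatureToggle": true
    }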

The ‘id’ field maps to the VMSS instance prefix that is determined at runtime based on the name you used when creating the VMSS. We strip the trailing 6 characters to remove the unique component of each VMSS instance hostname.

The outcome of the three components (code changes, Key Vault and Cosmos DB) is that I can quickly add or remove VMSS groups in configuration, change where their configuration data is stored by updating the Key Vault Secrets, and even update running VMSS instances by changing the configuration settings and then forcing a restart on the VMSS instances, causing them to re-read configuration.

Is the above the only or best way to do this? Absolutely not 🙂

I’d like to think it’s a good way that might inspire you to build something similar or better 🙂

Interestingly, in getting to this stage, I’ve also realised there might be some value in moving this solution to Service Fabric in future, though I am more inclined to shift to Containers running under the control of an orchestrator like Kubernetes.

What are your thoughts?

Until the next post!
