Author Archives: Simon

Fix Provider error in Cloud Shell when using AKS in a new Azure Region

Given the recent announcement of the GA of Azure Kubernetes Service I thought I would take it for a spin in one of the new Regions it is now available in. I have previously deployed AKS in East US using the Azure Cloud Shell so didn’t expect to run into any issues. However, I hit a minor snag, which I’m documenting here in case you come across it too. Here’s what I ran:

az group create --name rg-aks-01 --location westus2

az aks create --resource-group rg-aks-01 --name testaks01 --node-count 1 --generate-ssh-keys

The second command returned this error:

The subscription is not registered for the resource type 'managedClusters' in the location 'westus2'. Please re-register for this provider in order to have access to this location.

And this is the fix.

az provider register --namespace Microsoft.ContainerService

This returns:

Registering is still on-going. You can monitor using 'az provider show -n Microsoft.ContainerService'

Then a short while later I ran the ‘show’ command and could see this service is now available in all the new GA Regions (snippet shown below).

"locations": [
"UK West",
"East US",
"West Europe",
"Central US",
"Canada East",
"Canada Central",
"UK South",
"West US",
"West US 2",
"Australia East",
"North Europe"
]

Happy Days! 😎


Read Tags from Azure Resource Groups and track using Table Storage

If you work in an environment where you need to track changes to Tags on Resource Groups in Azure then you may find this PowerShell snippet useful to drop into a Runbook.

The snippet will enumerate all Resource Groups in a Subscription (we assume you are already logged into the Subscription you want to use) and then extract all Tags from each Resource Group and write the details to Azure Table Storage.
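
Here’s a minimal sketch of that snippet. It assumes the AzureRM cmdlets of the day plus the community AzureRmStorageTable module for the table writes, and the storage account and table names below are placeholders:

# Sketch only: assumes you're already logged in and that the AzureRmStorageTable
# module (Install-Module AzureRmStorageTable) is available to the Runbook.
$storageAccount = Get-AzureRmStorageAccount -ResourceGroupName 'rg-ops-01' -Name 'tagtrackingstore'
$table = Get-AzureStorageTable -Name 'resourcegrouptags' -Context $storageAccount.Context

foreach ($resourceGroup in Get-AzureRmResourceGroup)
{
    if ($resourceGroup.Tags -eq $null) { continue }

    foreach ($tag in $resourceGroup.Tags.GetEnumerator())
    {
        # Table Storage keys can't contain '/', so flatten the Resource ID first
        $partitionKey = $resourceGroup.ResourceId -replace '/', '_'
        $rowKey = '{0}_{1:yyyyMMddHHmmss}' -f $tag.Key, (Get-Date)

        Add-StorageTableRow -table $table `
            -partitionKey $partitionKey `
            -rowKey $rowKey `
            -property @{ TagName = $tag.Key; TagValue = $tag.Value }
    }
}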

Once you run this snippet you will be able to use the data for reporting: each Resource Group’s Resource ID is used as the Partition Key, with the Tag Name (Key) and the current Date / Time forming the Row Key. You now have a reporting source you can use in the likes of Power BI.

Happy Days 😎


Are Free Tier Cloud Services Worth The Cost?

Over the years that I’ve been talking with public groups about cloud services, and Azure in particular, I’ve typically had at least one person in every group make a statement like this:

“Azure’s good, but the free tier isn’t as good as AWS.”

I’ve discussed this statement with groups enough times that I thought it would be good to capture my perspective on where Azure stands, provide some useful resources, and pose a question to those whose starting point is free services.

You want The Free? You can’t handle The Free!

You want the free?!

When people talk about how a free tier isn’t that useful, what they are saying typically translates into one of two scenarios:

  1. The timeframe the free tier is offered for is not long enough for the person to achieve an acceptable learning outcome based on their time investment;
     
  2. More commonly, the service limits are too low, meaning the person cannot achieve an acceptable learning outcome before their credit runs out (regardless of time).

The reality is that beyond basic scenarios (running a Virtual Machine, creating a database), and particularly where someone doesn’t have sufficient continuous time to allocate to their cloud environment, they are likely to receive minimal value from free tier services.

Effective use of free tiers

So how to minimise these outcomes?

  1. Be clear about what you want to achieve before you start a free tier subscription. If you don’t know *what* you want to do in advance you are likely to fritter away that free credit before you get to your eventual end goal. Additionally, if you know what you want to achieve then review the required cloud services you will use and determine if a free tier is going to provide you with sufficient resources to reach your goal.
     
  2. Start with pre-built environments or quickstarts – find labs or similar that give you access to existing environments. Attend events that include credits as part of attendance and use those to achieve a goal. Look at tutorials and samples to find automation scripts / templates that can get you up and running quickly (but remember the previous tip – if you try to provision a ten node Kubernetes cluster, will that actually succeed in a free tier? Would a one node cluster suffice to allow you to learn?)

All the cloud platforms will provide you with time-limited free tiers, with some services being offered as “always free” at certain low usage levels.

Azure has had free service trials or tiers in one way or another for some time. Traditionally, however, it hasn’t offered a 12 month period, though that’s changed fairly recently and there is now an extended 12 month Free tier offering for Azure.

One Azure cloud… many ways to get ongoing credits

Where Azure does differ substantially from AWS in particular is in the number of offerings Azure has that get you access to Azure credits on an ongoing basis, lifting you out of having to use just free tier services:

  • Microsoft Developer Network (MSDN) Azure Benefits – available as an add-on to existing MSDN subscribers (note: your organisation might not have access to this benefit depending on your licensing). This is an ongoing benefit while you pay for an MSDN subscription.
  • Azure Starter for Students (formerly Dreamspark). This is an ongoing benefit while you are a student.
  • BizSpark Benefits – available to those who are leveraging the BizSpark programme for their business. Ongoing benefit while you are in the BizSpark program.
  • Azure for non-profits – go through the process to prove your status and gain access to Azure Credits.
  • Microsoft Azure Passes – when Microsoft runs training courses for Azure, attendees will typically be provided with Azure credits in the form of an Azure Pass. We gave these away at the Global Azure Bootcamp this year. Time-limited offers (one to three months).

Investing in yourself or your idea

The reality of free tier services is they will only get you so far, whether the use you make of the cloud is to learn new concepts or to try an idea you have.

My take is this: if you aren’t prepared to invest your own money (i.e. “I just want more free stuff”) then you don’t put much value on your own education or idea.

If we’d had the cloud computing services we have now when the dotcom boom was happening, we may well have seen a massively different outcome.

Startups wouldn’t have spent massive amounts of their funding on infrastructure and wasted months waiting for services to be provisioned before they even got to serving the first request.

Imagine if you had on-demand services when you were at school (maybe you still are) – the quality of your education would be improved by access to these sorts of services.

We are at a pivotal moment where we now have access to on-demand resources that a generation ago would have been unimaginable. If you are serious about an idea or personal development put your money where your brain is.

But I’m not an accountant!

Congratulations. Now you are! Didn’t hurt a bit either, did it?

It’s unavoidable for many of us that at some point it will come down to cost. I know there will be more than a few of you sitting there having previously paid a larger than expected cloud hosting bill. I bet you now manage those resources like a hawk. While this is a painful way to learn, you will have identified a key factor in how you design and run cloud native services.

Also, welcome to how businesses work – specifically how to control costs so they can remain viable. This is why your last request to the ops team for 10 servers was rejected, or why you had to finesse your design to fit into existing infrastructure constraints. 🙂

So, where to next?

I highly recommend spending time familiarising yourself with services in the cloud too – avoid the anti-patterns that are likely to be where you unexpectedly spend more money than you intended.

You can find good examples of ways to configure services from the likes of Scott Hanselman and content like his “Penny Pinching in the Cloud” posts, or Troy Hunt’s posts on how “Have I been pwned” performs on Azure (pricing at the bottom of the post).

So, did I solve your problem? Make more of the Free? Unlikely I suspect.

Ultimately you need to consider that free tiers and services are designed as a taster, to get you thinking about how you could use those services for other things. While there are “always free” services, the reality is you are unlikely to build the next Atlassian with them, but I’m pretty sure you can use them to pass exams or to get educated on cloud technology.

Happy Days 😎


Developer toolkit for working with Azure AD B2C JWT-protected APIs

I’ve blogged in the past about Azure Active Directory B2C and how you can use it as a secure turnkey consumer identity platform for your business.

In this post I’m going to walk through how you can debug JWT-protected APIs where those JWTs are being issued by AAD B2C. Note that a lot of what I write here will be applicable in any scenario where you are working with JWTs: AAD B2C is standards compliant, so the advice here can be applied elsewhere.

We aren’t going to get into the Identity Experience Framework (IEF) because it’s a whole universe of detail beyond the basic policy engine we’ll cover here 🙂

Your toolkit

Here are the tools to get started with debugging.

Required tools:

  • A test AAD B2C tenant – a very strong recommendation *not* to use your production one!
  • An API testing tool like Postman. The B2C team has published how you can use Postman to test protected APIs.
  • Your API source code in a debug environment. Must be configured in the test AAD B2C tenant you are using!
  • A test client application – I’ve been using a customised version of the WPF sample client app from the B2C team.

Optional, but recommended:

  • jwt.ms (there is also jwt.io if you prefer)
  • Mailinator or any number of alternatives.
  • Create a B2C Profile Edit Policy even if you never roll it out to customers. This policy can be invoked via the Azure Portal to allow you to initialise new profile attributes.

Use standard OAuth libraries in your clients

Microsoft has great first-party support for B2C with the Microsoft Authentication Library (MSAL) across multiple platforms, but as B2C is designed to be an OAuth2 compliant service, any library that supports the specification should work with it. Microsoft provides samples that show how libraries like AppAuth can be used.

Rolling out custom attributes

There is currently a limitation with B2C around rolling out new custom attributes. Until an attribute is referenced in at least one policy in your tenant, the attribute isn’t available to applications that utilise the Graph API. This is why I always create a profile edit policy that I can add new custom attributes to, and then invoke the policy via the Azure Portal to initialise the attribute.

Testing APIs

Create test users

This is where a service like Mailinator comes in handy – you can create multiple test users and easily access the email notifications sent by B2C to perform actions like initial account validation or password reset.

Note: free services like Mailinator may be good for simple testing, but you may have security or compliance requirements that mean it can’t be used. In that case consider moving to a paid tier or other services that provide secured mailboxes (a service like outlook.com).

Request Tokens – Test Client Application

Once you have one or more test users you can then use one of the following approaches to obtain test tokens to use when calling APIs.

If you aren’t using Postman to retrieve tokens to supply in API calls then you can use the test client application above (assuming you are developing on Windows – or at some future point when we get WPF ported to .Net Core :)) to request tokens for users in your test tenant.

B2C Test Tool

Once you have the tokens you can copy them out and use them in Postman to make requests against your API by setting Authorization in Postman to use Bearer Tokens and then copying the value from the test tool into the ‘Token’ field.

Postman using a Token
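
If you’d rather script the call than use Postman, a quick PowerShell sketch (the API URL and token value are placeholders) looks like this:

# Paste the token obtained from the test client application (placeholder shown)
$token = 'eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIs...'
Invoke-RestMethod -Method Get `
    -Uri 'https://localhost:44300/api/quotes' `
    -Headers @{ Authorization = "Bearer $token" }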

If you’re having issues with tokens being accepted by your API then you can leverage jwt.ms to review the contents of the token and see why it might be rejected. A sample is shown below.

jwt.ms sample
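
If you’d prefer not to paste a token into a website at all, a few lines of PowerShell will decode the payload locally (inspection only – this does not validate the signature):

$jwt = 'eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIs...'   # placeholder token
$payload = $jwt.Split('.')[1].Replace('-', '+').Replace('_', '/')
# Base64url strings drop their padding, so restore it before decoding
switch ($payload.Length % 4) { 2 { $payload += '==' } 3 { $payload += '=' } }
[Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($payload)) | ConvertFrom-Json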

If you have access to the target API source code make sure to debug that at the same time to see if you can identify why the token is being rejected.

As a guide the common failure reasons will include: token expired (or not yet valid); scopes are incorrect (if used); incorrect issuer (misconfiguration of client or API where they are not from the same B2C tenant); invalid client or audience ID.

So there we are! I hope you found this post useful in debugging B2C APIs – I certainly wish that I’d had something to reference when I started developing with B2C! Now I do! 😉


Multi-environment deployments for Compiled C# Azure Functions with VSTS Release Management

This post covers an approach you can use to deploy compiled C# Functions using the tooling available in Visual Studio 2017 and various Build and Release Management Tasks contained in Visual Studio Team Services (VSTS).

Note that this post discusses deploying to the v1 Functions runtime platform.

I was lucky enough to speak with Damian Brady on the DevOps Labs show on Channel 9 and cover the first part of this blog. If you’ve watched that, or you’ve come here via the Github repository for the solution we used, then we’ll go down to the next level and really look at how you can recreate this setup in your environment.

1. Pre-requisites

There are a few moving pieces we need to get into place first in order to complete the configuration. Let’s take a look at those.

a. Connecting environments

Note that in order to complete these steps you may require elevated privileges in one or more of the mentioned services. If you do not have access to an admin-level account in any of the services you will likely need to ask someone to configure these for you.

> Github to VSTS

In our demonstration we’re using a Github repository as our source repository and allowing VSTS’ Build capability to perform Continuous Integration Builds when a commit occurs on the master branch. Microsoft has documented how to configure Github to connect with VSTS already, so go and take a read and head back here when you’re done setting up the integration.

> VSTS to Azure

We use the Azure Resource Manager Service Endpoint option in VSTS to configure our connection into Azure from VSTS. There is documentation from Microsoft around how you set up the connection, including steps to create a custom Service Principal (or use a pre-existing one your Azure admin has created for you). Once again, go have a read and once you have a working Service Endpoint in VSTS head back over here.

b. Configure SendGrid

If you’d like to run the Functions once deployed you will need to configure SendGrid so you can use the binding in the Functions being deployed. You can follow the official Azure documentation on setting up a (free) SendGrid account and then make sure to set the API key value for the AzureWebJobsSendGridApiKey App Setting for your deployed Functions.
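
You can set this via the Portal, or with a few lines of PowerShell along these lines (resource names and the key are placeholders; note that Set-AzureRmWebApp replaces the whole App Settings collection, hence the merge with the existing values first):

$app = Get-AzureRmWebApp -ResourceGroupName 'rg-functions-demo' -Name 'quotedemo-functions'
$settings = @{}
foreach ($pair in $app.SiteConfig.AppSettings) { $settings[$pair.Name] = $pair.Value }
$settings['AzureWebJobsSendGridApiKey'] = 'SG.your-key-here'
Set-AzureRmWebApp -ResourceGroupName 'rg-functions-demo' -Name 'quotedemo-functions' -AppSettings $settings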

c. Create target Azure Resources

Go ahead and also create a Function App in the Subscription you want to deploy to (ensure that the Service Principal you setup previously has Contributor-level access to, at minimum, the Resource Group that will contain the Function).

You can use the process we document here to deploy to either a Service Plan or Consumption Plan, though there is a minor difference we will see later in the post.

The Azure resources to deploy should include:

  • Application Insights
  • Cosmos DB Account – Add Database “quotedemo” with Collections “quotes” and “leases”
  • Function App (can be Consumption or Service Plan)
  • Storage Account (can be created at same time as Function App)

Before we move on, make sure to capture the following:

  • Application Insights Telemetry Key (shown on the ‘Essentials’ part of the Application Insights instance)
  • Cosmos DB Account URL and Access Key
  • Function App:
    • AzureWebJobsStorage (Service Plan);
    • WEBSITE_CONTENTAZUREFILECONNECTIONSTRING (Consumption Plan);
    • WEBSITE_CONTENTSHARE (Consumption Plan);
    • FUNCTIONS_EXTENSION_VERSION (most likely set to “~1”).

d. Configure Application Insights for Release Annotations

In this scenario we will need to enable the Application Insights API in order to support Release Annotations, which make it possible to see release markers on your timelines.

Once again, Microsoft has this well documented, including the Task you need to add in VSTS to allow you to create Annotations.

OK! Now we are ready to configure the Build and Release Management steps in VSTS.

2. Configure the Build

In the Team Project in VSTS you wish to use to host the Build and Release Management Definitions, go ahead and create a new Build.

Remember to select Github as your source repository.

Setup Github as source

When you are prompted for a template, select the ASP.Net Core (.Net Framework) build.

Select Build Template

If you want a Continuous Integration (CI) build then ensure you set the Trigger as required.

CI trigger

At this point we have a build that produces a packaged web application that can be pushed to the Azure App Service hosting the Function App. We could add more Tasks to the Build to do this, but we want to support multiple environments so this is where Release Management comes into play.

I recommend you run the Build to ensure it’s functional and to produce an artefact we can use in our next step.

3. Configure Release Management

Now that we have a build that produces a build artefact, we can use VSTS Release Management (RM) to deploy and configure this artefact to any environment we can reach.

Let’s go ahead and choose to create a new Release Management Definition. When you have the option, select the “Azure App Service Deployment” template.

Release Management Template

The resulting Definition will be very vanilla and contain a single Task. We need to make some changes to deploy our service exactly as we’d like.

a. Add the Build Artefact

First we need to tell Release Management what we want to deploy, so let’s go ahead and add our existing Build by clicking on the Artefacts box and selecting our Build as shown below.

Add Artefact

If you wish to enable Continuous Deployment (CD) into Environments you can click on the lightning bolt on the Artefact and enable the CD trigger. Note that you can still stop automated deployments by putting in approvals or making deployments manual – by creating a Release you always have a build artefact to deploy.

CD Trigger

b. Configure RM Tasks

This is where the configuration you completed previously – the Azure Service Endpoint and the resources you set up in Azure – comes into play.

Click on the Tasks tab and the RM Definition will open.

Clicking Task

Once open click on the Environment at the top of the Task list. As we are going to deploy all assets into a single Subscription we can set up a few items that will apply to all Tasks in the RM Definition.

The first thing we will do is to select the Service Endpoint we previously setup (below it is named “Service Principal for Demo”, but you can name it anything meaningful).

Setup Environment

Once you select the method of connection to the Azure Subscription change the “App type” field to be “Function App” and then from the final picker, select the Function App instance you setup earlier. If you don’t see it, it could be that you placed it in another Subscription or that the Service Endpoint does not have sufficient rights to list the Function Apps in the Subscription.

Your setting should look something like the below.

Environment Configuration

We could deploy the sample code now, but it would fail to run because it is missing configuration.

c. Deploying configuration

You will notice that up until now we’ve not dealt with any of the runtime configuration settings for the Function. When you develop locally the Functions Tools in Visual Studio will generate a “local.settings.json” file, but it will be blocked from commit via the gitignore included in the project type. It’s recommended you don’t change this, and even if you do it won’t help you on deployment anyway (so… y’know, why bother to change the ignore file?)

For this Task we are going to need to pull in a free Marketplace Task – the Azure WebApp Configuration from Pascal Naber (Xpirit). This Task is a wrapper around some Azure Cmdlets, but it does a great job of removing your overhead in managing that 🙂

You will need to be a VSTS admin in order to install Marketplace Tasks (if you aren’t you can still request an admin to install them).

Once installed you can now add the Task to your Release Management Definition after the App Service Deployment (as shown below).

Release Tasks

The Task expects any App Settings you need to deploy to be added as Variables to the Definition. So, for example, if we want to control the value we set for the ‘APPINSIGHTS_INSTRUMENTATIONKEY’ App Setting in our target Function we would create a Variable in our Release called ‘appsetting.APPINSIGHTS_INSTRUMENTATIONKEY’ and set the value to be the Telemetry Key we captured earlier in the post.

The beauty of this approach is that you can one-way save secrets and they won’t show up (or be recoverable) via the Variables tab again. There is also an option to write them to Azure Key Vault if you want.

Below is a sample of the Variables once setup.

RM Variables

The eagle-eyed amongst you might spot that I am also setting default Function values (FUNCTIONS_EXTENSION_VERSION, AzureWebJobsStorage, AzureWebJobsDashboard for a Service Plan).

This is because I force the Application Settings Task to overwrite all existing values in the App Service. This is on purpose – it ensures no manual fixes are ever safe in Azure and our Release Management Definition is the source of truth for both the Artefact and the Configuration.

Note: For Consumption Plans make sure you set WEBSITE_CONTENTAZUREFILECONNECTIONSTRING and WEBSITE_CONTENTSHARE in addition to the above values. If you don’t, your deployment will fail after the first deployment (this is the source of the error in the video).

The full set of Variables for our sample is listed below.

Common Variables

  • AppInsightsApiKey – API key you configured earlier for Application Insights (*not* Telemetry Key).
  • AppInsightsApp – Application ID configured earlier for Application Insights (also not Telemetry Key).
  • appsetting.APPINSIGHTS_INSTRUMENTATIONKEY – use telemetry key from Application Insights.
  • appsetting.AzureWebJobsDashboard – use existing value from Function (before first deployment).
  • appsetting.AzureWebJobsSendGridApiKey – use SendGrid API key you setup earlier (should start ‘SG.’).
  • appsetting.AzureWebJobsStorage – use existing value from Function (before first deployment).
  • appsetting.CosmosConnection – use Connection String from Cosmos Account you setup earlier.
  • appsetting.FUNCTIONS_EXTENSION_VERSION – use existing value from Function (before first deployment).
  • appsetting.NotificationsSender – use an email address you control (to be used as From: in emails).

Service Plan deployment

  • None: above list is all you need

Consumption Plan deployment

  • appsetting.WEBSITE_CONTENTAZUREFILECONNECTIONSTRING – use existing value from Function (before first deployment).
  • appsetting.WEBSITE_CONTENTSHARE – use existing value from Function (before first deployment).

Once you’ve configured the Variables you should now be able to save the Release Management definition and create a Release to test out your deployment.

I’ve recorded a quick video (see below) that shows this end-to-end and also has an additional bonus step of hitting an HTTP Triggered endpoint on the Function as a post-deployment confirmation step (you will need to copy a Host key from the Function App and save it as a ‘VersionApiKey’ Variable to use when calling the API, then add the Smoke Web Test Task from the Marketplace).

So what should the demo Function do? It should trigger an email to a recipient when a record is added to Cosmos DB. The recipient is listed in the record that is inserted, samples of which are included in the Github project. If you can’t get it running make sure to leave a comment and I’ll help you out!


Microsoft Application Insights – APM for Everyone

When you work as heavily as I have with a technology like Application Insights you do tend to forget the amazing power you have at your fingertips.

Over the last few years I’ve come to rely heavily on Application Insights as the primary Application Performance Management (APM) tool of choice for services I build, whether they are hosted in Azure or not.

In this post I am going to take a quick walk through features that I think every developer should know about with Application Insights so they can get maximum benefit from it too!

Your language has an SDK

Chances are pretty good that if you’re on a popular platform, Application Insights will have an SDK you can use. SDKs are great because adding them to a solution produces a bunch of default telemetry with nothing more than a Telemetry Key required.

The Application Insights team maintains their SDK documentation and SDK code references on Github. Needless to say .Net has great support, but Java, JavaScript and Node.js also get first-party support, with community support for Go, Python and Ruby. Want to do APM that includes native mobile experiences? No problem, drop in the HockeyApp SDKs.

Use it regardless of your hosting environment

Not using Azure to host your solution? Not a problem. If you can make outbound calls from your host to Application Insights then you can use Application Insights. 💯

Useful free tier

In an upcoming post I’ll talk more about perceived and actual value of free services in the cloud, but let me say for most basic scenarios the 5 GB of ingested Application Insights data per month will more than suffice. If not, you can manage your costs by moving to a sampling model that means you can still glean useful insights about your application’s behaviours without breaking the bank.

No features are removed at the free pricing tier either – you can still do full analytics on the log information that is captured!

Dependency tracking

The out-of-the-box dependency tracking is super handy to diagnose performance issues that result from upstream calls.

The only downside here is that the default capabilities are good at tracking HTTP-based dependencies, SQL Server, and not much else (at time of writing). Having said this, there is a published way for you to track other custom dependencies if needed, though it requires dedicated code – the out-of-the-box tracking requires no additional special code which is amazing!

I have to say that HTTP dependency tracking has been exceptionally useful in a REST-heavy environment, even tracking HTTP calls to external service providers like SendGrid, Twilio and others, providing us an easily accessible view on where our latency is arising from.

The sample below shows dependency behaviour for a single request to a caching service in an application. The very first request (at the bottom of the list) is a call to Cosmos DB which returns a 404 (Not Found) HTTP status code; this triggers a lookup of some data via a HTTP call to an API, with the result then written to Cosmos DB for the next request. This is super useful information, and I did precisely nothing to my code (other than add the Application Insights SDK to my solution) to capture it for every request!

Remote Dependencies

Track impact of releases

Application Insights has a REST API which allows you to add custom steps to Continuous Deployment pipelines to publish a Release Annotation to your timeline in Application Insights so you can see if a release impacts your solution.

Visual Studio Team Services’ Release Management will do this for you automatically, but if you aren’t using VSTS then you can still leverage this capability. A sample is shown below (thankfully we had no negative impact with this release!)

Release Annotation

Insights to your inbox

Super handy if you don’t want to go hunting for stats or you want to share aggregated stats with stakeholders.

App Insights Email

Heavy duty analytics

If the default experiences in the Azure Portal aren’t enough, then you can leverage the power of Azure Log Analytics to perform more detailed queries and drill into your data and build tables or graphs from the results.

A good example of this is the answer I provided to a question Troy posed on Twitter.

Each request will be captured along with useful metadata (in this case from the underlying .Net codebase) which allows us to do further querying and filtering on the data.

Here’s a sample of such a request (this one is a HTTP request to an API endpoint), with the metadata needed to help solve Troy’s question shown.

Sample HTTP Request

The trick is then to head over to the Log Analytics environment…

Open Analytics

.. and then drill into the data to provide you with your desired answer.

Analytics query

You can then tabulate or graph the output. The above is a really simple query – trust me, you can do far more complicated than this!

Failure drill-in

This view has recently improved and become far more interactive – you can easily identify common reasons for failures and drill right in; in my experience you can identify root cause within a matter of moments!

In HTTP applications you do get a bit of expected noise (things like 401, 403 and 404 errors) which can be annoying to sift through, particularly for REST-type APIs, but it’s a small price to pay for the power you get!

Failures View

Availability Checks, Health Alerts and Smart Detection

I’m not going into these in too much detail, but you can also set Alerts and health checks in Application Insights and the service will also do analysis of trends and alert you to items that may require your attention (even if you don’t have a specific rule set).

Custom Events, User Journeys and Cohorts

Like health checks I am not going to go through these in detail, but if this is the sort of insight you need, then it is possible to access it here too. If you need to log custom data in Application Insights you can do that too using Custom Events.

What are you waiting for?!

I can honestly say I would be hard pressed these days to build anything without including Application Insights in it, particularly if I won’t have direct access to the hosting environment.

Troubleshooting runtime issues becomes much easier with the details you can glean from walking request stacks as presented by Application Insights. I’ve isolated and fixed more than my fair share of runtime issues (mostly configuration related) without ever needing to try and reproduce locally because I could quickly tell via the telemetry where things were going wrong.

Happy days! 😎


Provide non-admin users with read-only access to Service Endpoints in VSTS

I am currently transitioning some work to another team in our business. Part of this transition has been to pre-configure various Service Endpoints in Visual Studio Team Services (VSTS) to provide a way for the new team to deploy into target Azure environments without the team necessarily having direct or privileged access into those Azure environments.

In this post I am going to look at how you can grant users access to these Service Endpoints without them being able to modify them. This post will also be useful if you’ve configured Service Endpoints (as an admin) and then others on the team (who are non-admins) are unable to see them.

Note that this advice applies to any Service Endpoint – not just Azure!

By default only users who are members of the following groups can see Service Endpoints:

– Project Admins
– Endpoint Admins
– Endpoint Creators.

It’s unlikely that you want all your team members to hold these roles, so let’s see how we can grant rights to use Service Endpoints without being an admin!

We’re going to complete this task with an existing Service Endpoint, but you should hopefully see how you can do the same at the time you set up a new Endpoint in future.

Open up your Team Project and in the top navigation mouse over the settings (cog) icon and from the context menu click “Services”.

Service Endpoints

Once the Endpoints page has loaded, select the Endpoint you wish to allow non-admin users to see.

Selected Endpoint

Now click on ‘Roles’ to display the currently assigned users and groups and their permissions (the current list will only contain users or groups at an ‘Administrators’ level).

Roles Screen

Now we’re in the right place to add our additional read-only users or groups!

Click on the ‘+ Add’ button and the Add user dialog is displayed. Ensure that the ‘Role’ is set to ‘User’ and then find the User or Group you want to assign this right to. In our demo below we are allowing the current project’s Contributors group to use Endpoints.

Add user dialog

Once you click the ‘Add’ button the user or group will be granted read-only rights to the Endpoint. This will allow them to find or use the Endpoint in Build or Release Management Definitions (like below).

Release Definition

Happy (secured) days! 😎


Azure AD B2C Custom Attributes: How to easily find their unique key value

When working with Azure Active Directory B2C you can create what are known as Custom Attributes which allow you to store data about users beyond the attributes (firstname, lastname, etc) that are available out-of-the-box.

When you want to work with these Custom Attributes in a solution you build you will need to know the unique key of the attribute in order to reference it.

What do I mean by this? Let’s take a quick look using an example.

Note that you will need to be a B2C Global Admin in order to perform some tasks covered in this post.

Creating Custom Attributes

These are created via the Azure Management Portal. In my sample I am going to add an attribute to hold a tier rating for a user (say, Gold, Silver and Bronze) called “TierRating”.

The video below shows how you can do this.

Find Attribute’s Unique Key Value

Now we have this Custom Attribute created we will want to use it in our solution. If you’re eagle-eyed you may find in the Portal that these Custom Attributes appear to be named ‘extension_AttributeName’ (i.e. ‘extension_TierRating’).

This won’t work in your solution though 🙂

When you create a Custom Attribute this is actually being done for you by a custom application called the “b2c-extensions-app” that is deployed to all B2C tenants at provisioning time.

Why am I telling you this? I am telling you this because it’s the key to determining the Custom Attribute’s unique key value 🙂

You will need the Application ID for the b2c-extensions-app, which you can find in the Portal as shown in the video below.

Using it in your code

Now we have this value (in our demo video the value is ‘bb10b272-0267-46f0-8b6f-4367e8b1b1e6’) we can start to interact with Custom Attributes in our code.

Firstly we need to drop the dashes so it becomes ‘bb10b272026746f08b6f4367e8b1b1e6’. We combine this with the “Name” value for the Attribute, along with a prefix of “extension_”.

So for our tier rating Custom Attribute the full key for it becomes ‘extension_bb10b272026746f08b6f4367e8b1b1e6_TierRating’.

A sample of how this key can be used is shown below.
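
The sketch below is illustrative PowerShell only (the tenant name, Graph access token and user object ID are placeholders) and reads the attribute back via the Azure AD Graph API:

# Placeholder values – swap in your tenant, a valid Graph token and a user object ID
$graphToken   = '<access token for graph.windows.net>'
$userObjectId = '00000000-0000-0000-0000-000000000000'
$attributeKey = 'extension_bb10b272026746f08b6f4367e8b1b1e6_TierRating'
$user = Invoke-RestMethod -Method Get `
    -Uri "https://graph.windows.net/mytenant.onmicrosoft.com/users/${userObjectId}?api-version=1.6" `
    -Headers @{ Authorization = "Bearer $graphToken" }
$user.$attributeKey   # e.g. 'Gold'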

This pattern is used for every Custom Attribute you create in this Directory.

So there we have it – the easiest way you can determine the actual unique key for a Custom Attribute!

Happy days 😎


Easy Release Versioning for .Net Projects using VSTS and TFS

Versioning. Here we are. Again.

Over the years I have always worked hard to make versioning a foundational piece of every CI / CD solution I’ve setup. Reliable, logical versioning becomes key to long-term maintenance and troubleshooting efforts, and whatever you can do to make it a “no-brainer” is worth it (your future self will thank you).

The move to .Net Core changed the way a few items work in the .Net world, including versioning, and besides, I am always looking for ways to make versioning easier.

So here’s my cheat-sheet for versioning your solutions. It won’t suit all application types, but for my use case (.Net Web Apps) it works just fine. It will work with VSTS and newer TFS versions too.

I haven’t tested on VB projects, but this should work for them just as easily as C#.

.Net Core: Setup Your Project File

Versioning has been simplified in the .Net Core world. Modify your csproj as follows:

<PropertyGroup>
  <Version Condition=" '$(BUILD_BUILDNUMBER)' == '' ">1.0.0.0</Version>
  <Version Condition=" '$(BUILD_BUILDNUMBER)' != '' ">$(BUILD_BUILDNUMBER)</Version>
</PropertyGroup>

If your file doesn’t have a version node, add the above. This tip comes from Stack Overflow, but I’ve modified it slightly.

The above setup will mean debugging locally will give you a version of 1.0.0.0, and in the event you build in a non-VSTS / TFS environment you will also end up with a 1.0.0.0 version. $(BUILD_BUILDNUMBER) is an environment variable set by Team Build, which will be updated at build time by VSTS or TFS.

.Net Framework: Add Custom Task

In the “old” .Net world we have to update the properties of the AssemblyInfo file that is a part of the project, specifically targeting File Version and Assembly Version.

There isn’t an in-built build Task to do this for you, and rather than hack together a script, why not use a great custom task from the marketplace (which also supports TFS)?

I’m using the “Assembly Info” task from Bleddyn Richards, primarily because it has the most recent updated date out of the similar tasks available, which means it’s hopefully getting plenty of love and care from the owner 🙂

Add the above Task to your build definition (make sure to do it before you build the Solution / project) and then set the version numbering as shown below.

VSTS Task Config - versioning

Setup Build Versioning

The above steps are great, but they will count for nothing (or cause a compile fail) if we don’t have a valid version number.

The default VSTS build version number format takes this format:

$(date:yyyyMMdd)$(rev:.r)

This results in a build number that looks like this:

20180201.1 (for the first build on February 1 2018).

This isn’t a valid .Net Version number, so we need to change it.

First, let’s add two Variables to our build definition: MajorVersion and MinorVersion.

You can set these to any valid integer value. These can be manually controlled over time as you determine the need to increment Major and Minor version numbers. Note you can make them whatever you like, keeping in mind the size restriction I mention below.

Build Variables

Now let’s change the Build Numbering scheme to use these variables, a specific date format, and the revision:

Number Format

$(MajorVersion).$(MinorVersion).$(date:yy)$(DayOfYear)$(rev:.r)

Which produces a build number that looks like this:

2.0.18037.1 (for first build on February 6 2018 for Major Version 2, Minor Version 0).

You can choose a format that works for you, with one proviso that each version segment must be less than 65,000, which sounds like a lot, until you consider that 20180201 (Feb 1, 2018) is, as an integer (20,180,201) larger than 65,000. Hence my decision to drop to using YY (if you’re reading this in the year 2065 I apologise for my shortsightedness).
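
If you want to sanity-check what the scheme produces on a given day, a couple of lines of PowerShell will do it (the Major and Minor values are placeholders, and this assumes $(DayOfYear) is zero-padded to three digits, as the example above suggests):

$now = Get-Date
'{0}.{1}.{2}{3:D3}.{4}' -f 2, 0, $now.ToString('yy'), $now.DayOfYear, 1
# e.g. 2.0.18037.1 on 6 February 2018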

The result of these changes will mean that you’ll have a lovely version number automatically written into your solution at build time. An example from a .Net Framework solution is shown below.

Properties Dialog

Happy Days 😎


Twitter on Linux in Windows Subsystem for Linux

First of all, tip of the hat to Geoff Huntley for putting this in my timeline to start off with :).

So how to get Rainbow Stream to run on Windows Subsystem for Linux (WSL)? Easily!

I’m running on the Slow Ring Insiders (currently on 17074), but hopefully these instructions will work for you.

Crack open a bash shell by running ‘bash’ on your Windows machine and then enter:

sudo apt-get install python-pip

sudo apt-get install python-dev libjpeg-dev libfreetype6 libfreetype6-dev zlib1g-dev

sudo pip install backports.functools_lru_cache

sudo pip install rainbowstream

rainbowstream -iot

Now you will enter an interactive console at which you will need to authorise Rainbow Stream to access your profile and act as a client.

The video below shows you the actions you need to take. Enjoy!
