Global Azure Bootcamp 2017 Session – .Net Core, Docker and Kubernetes

If you are attending my session and would like to undertake the exercise here’s what you’ll need to install locally, along with instructions on working with the code.

Pro-tip: As this is a demo consider using an Azure Region in North America where the compute cost per minute is lower than in Australia.

Prerequisites

Note that for both the Azure CLI and Kubernetes tools you might need to modify your PC’s PATH variable to include the paths to the ‘az’ and ‘kubectl’ commands.

On my install these ended up in:

az: C:\Users\simon\AppData\Roaming\Python\Python36\Scripts\
kubectl: c:\Program Files (x86)\

If you have any existing PowerShell or Command prompts open you will need to close and re-open to pick up the new path settings.

Readying your Docker-hosted App

When you compile your solution to deploy to Docker running on Azure, make sure you select the ‘Release’ configuration in Visual Studio. This will ensure the right entry point is created so your containers will start when deployed in Azure. If you run into issues, make sure you have this setting right!

If you compile the Demo2 solution it will produce a Docker image with the tag ‘1.0’. You can then compile the Demo3 solution which will produce a Docker image with the tag ‘1.1’. They can both live side-by-side in your local environment with no issues.

Log into your Azure Subscription

Open up PowerShell or a Command Prompt and log into your subscription.

az login

Note: if you have more than one Subscription you will need to use the az account command to select the right one.
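
For example, to list your Subscriptions and pick one (the Subscription name is a placeholder):

az account list --output table
az account set --subscription "YOUR_SUBSCRIPTION_NAME"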

Creating an Azure Container Service with Kubernetes

Before you do this step, make sure you have created your Azure Container Registry (ACR). The Azure Container Service has some logic built-in that will make it easier to pull images from ACR and avoid a bunch of cross-configuration. The ACR has to be in the same Subscription for this to work.
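
If you’d rather create the Registry from the CLI as well, something like the following should do it (names are placeholders and the SKU options may have changed since this was written):

az acr create --name YOUR_ACR --resource-group YOUR_RESOURCE_GROUP --sku Basic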

I chose to create an ACS instance using the Azure CLI because it allows me to control the Kubernetes cluster configuration better than the Portal.

The easiest way to get started is to follow the Kubernetes walk-through on the Microsoft Docs site.
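
The key command from that walk-through looks something like this (names are placeholders):

az acs create --orchestrator-type kubernetes --resource-group YOUR_RESOURCE_GROUP --name YOUR_CLUSTER_NAME --generate-ssh-keys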

Continue until you get to the “Create your first Kubernetes service” section and then stop.

Publishing to your Registry

As a starting point make sure you enable the admin user for your Registry so you can push images to it. You can do this via the Portal under “Access keys”.

Start a new PowerShell prompt and let’s make sure we’re good to go by seeing if our compiled .Net Core solution images are present.

docker images

REPOSITORY                          TAG      IMAGE ID       CREATED  SIZE
siliconvalve.gab2017.demowebcore    1.0      e0f32b05eb19   1m ago   317MB
siliconvalve.gab2017.demowebcore    1.1      a0732b0ead13   1m ago   317MB

Good!

Now let’s log into our Azure Registry.

docker login YOUR_ACR.azurecr.io -u ADMIN_USER -p PASSWORD
Login Succeeded

Let’s tag and push our local images to our Azure Container Registry. Note this will take a while as it publishes the full image, which in our case is 317MB. If you update the image in future only the differential will be re-published.

docker tag siliconvalve.gab2017.demowebcore:1.0 YOUR_ACR.azurecr.io/gab2017/siliconvalve.gab2017.demowebcore:1.0
docker tag siliconvalve.gab2017.demowebcore:1.1 YOUR_ACR.azurecr.io/gab2017/siliconvalve.gab2017.demowebcore:1.1
docker push YOUR_ACR.azurecr.io/gab2017/siliconvalve.gab2017.demowebcore:1.0
The push refers to a repository [YOUR_ACR.azurecr.io/gab2017/siliconvalve.gab2017.demowebcore]
f190fc6d75e4: Pushed
e64be9dd3979: Pushed
c300a19b03ee: Pushed
33f8c8c50fa7: Pushed
d856c3941aa0: Pushed
57b859e6bf6a: Pushed
d17d48b2382a: Pushed
1.0: digest: sha256:14cf48fbab5949b7ad1bf0b6945201ea10ca9eb2a26b06f37 size: 1787

Repeat the ‘docker push’ for your 1.1 image too.

At this point we could now push this image to any compatible Docker host and it would run.

Deploying to Azure Container Service

Now comes the really fun bit :).

If you inspect the GAB-Demo2 project you will find a Kubernetes deployment file, the contents of which are displayed below.
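
If you don’t have the solution handy, a minimal sketch of what that file contains is below – the names, ports and the ‘acr-admin’ image pull secret are my assumptions based on the commands used elsewhere in this post (you would create the secret with ‘kubectl create secret docker-registry’ using your ACR admin credentials):

apiVersion: v1
kind: Service
metadata:
  name: gabdemo
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: gabdemo
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gabdemo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: gabdemo
    spec:
      containers:
      - name: gabdemo
        image: YOUR_ACR.azurecr.io/gab2017/siliconvalve.gab2017.demowebcore:1.0
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: acr-admin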

If you update the Azure Container Registry path and insert an appropriate admin secret you now have a file that will deploy your docker image to a Kubernetes-managed Azure Container Service.

At your command line run:

kubectl create -f PATH_TO_FILE\gab-api-demo-deployment.yml
service "gabdemo" created
deployment "gabdemo" created

After a few minutes if you look in the Azure Portal you will find that a public load balancer has been created by Kubernetes which will allow you to hit the API definition at http://IP_OF_LOADBALANCER/swagger

You can also find this IP address by using the Kubernetes management UI, which we can get to using a secured tunnel to the Kubernetes management host (this step only works if you set up the ACS environment fully and downloaded the cluster credentials).

At your command prompt type:

az acs kubernetes browse --name=YOUR_CLUSTER_NAME --resource-group=YOUR_RESOURCE_GROUP
Proxy running on 127.0.0.1:8001/ui
Press CTRL+C to close the tunnel...
Starting to serve on 127.0.0.1:8001

A browser will pop open on the Kubernetes management portal and you can then open the Services view and see your published endpoints by editing the ‘gabdemo’ Service and viewing the public IP address at the very bottom of the dialog.

Kubernetes Service Details

Upgrade the Container image

For extra bonus points you can also upgrade the running container image by telling Kubernetes to modify the Service. You can do this either via the command line using kubectl, or you can edit the Service definition via the management web UI (shown below) and Kubernetes will upgrade the running container image. If you hit the Swagger page for the service you will see the API version has now incremented to 1.1!
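
If you go the kubectl route, the command looks something like the following (assuming the container inside the Deployment is also named ‘gabdemo’):

kubectl set image deployment/gabdemo gabdemo=YOUR_ACR.azurecr.io/gab2017/siliconvalve.gab2017.demowebcore:1.1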

Edit Service

You could also choose to roll back if you wanted – simply update the tag back to 1.0 and watch the API roll back.

So there we have it – modernising existing .Net Windows-hosted apps so they run on Docker Containers on Azure, managed by Kubernetes!


When to use Azure Load Balancer or Application Gateway

One thing Microsoft Azure is very good at is giving you choices – choices on how you host your workloads and how you let people connect to those workloads.

In this post I am going to take a quick look at two technologies that are related to high availability of solutions: Azure Load Balancer and Application Gateway.

Application Gateway is a bit of a dark horse for many people. Anyone coming to Azure has to get to grips with Azure Load Balancer (ALB) as a means to provide highly available solutions, but many don’t progress beyond this to look at Application Gateway (AG) – hopefully this post will change that!

The OSI model

A starting point to understand one of the key differences between the ALB and AG offerings is the OSI model which describes the logical layers required to implement computer networking. Wikipedia has a good introduction, which I don’t intend to reproduce here, but I’ll cover the important bits below.

TCP/IP and UDP are bundled together at Layer 4 in the OSI model. These underpin a range of higher level protocols such as HTTP which sit at the top of the stack at Layer 7. You will often see load balancers defined as Layer 4 or Layer 7, and, as you can guess, the traffic they can manage varies as a result.

Load balancing HTTP using a Layer 4 capable device works, though Cookie-based session affinity is out, while a pure Layer 7 solution has no way to handle TCP or UDP traffic, which sit below at Layer 4.

Clear as mud? Great!

You might ask what this means for us in this post.

I’m glad you asked 🙂 The first real difference between the Azure Load Balancer and Application Gateway is that an ALB works with traffic at Layer 4, while Application Gateway handles only Layer 7 traffic – specifically, within that, HTTP (including HTTPS and WebSockets).

Anything else to know?

There are a few other differences it is worth calling out. I will summarise as follows:

  • Load Balancer is free (unless you wish to use multiple Virtual IPs). Application Gateway is billed per-hour and has two tiers, depending on the features you need (with or without WAF).
  • Application Gateway supports SSL termination, URL-based routing, multi-site routing, Cookie-based session affinity and Web Application Firewall (WAF) features. Azure Load Balancer provides basic load balancing based on 2- or 5-tuple matches.
  • Load Balancer only supports endpoints hosted in Azure. Application Gateway can support any routable IP address.

It’s also worth pointing out that when you provision an Application Gateway you also get a transparent Load Balancer along for the ride. The Load Balancer is responsible for balancing traffic between the Application Gateway instances to ensure it remains highly available 🙂

Using either with VM Scale Sets (VMSS)

The default setup for a VMSS includes a Load Balancer. If you want to use an Application Gateway instead you can either leave the Load Balancer in place and put the Application Gateway in front of it, or you can use an ARM template similar to the following sample to swap out the Load Balancer for an Application Gateway instance instead.

https://github.com/Azure/azure-quickstart-templates/tree/master/201-vmss-windows-app-gateway

The key item to ensure is that the Network Interface Card (NIC) is configured for each VM in the Scale Set to be a part of the Backend Address Pool of the Application Gateway.
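
In ARM template terms the important piece sits on the Scale Set’s NIC configuration – a trimmed sketch (resource names and variables are my own assumptions) is below:

"networkInterfaceConfigurations": [
  {
    "name": "vmssNic",
    "properties": {
      "primary": true,
      "ipConfigurations": [
        {
          "name": "vmssIpConfig",
          "properties": {
            "subnet": { "id": "[variables('subnetId')]" },
            "applicationGatewayBackendAddressPools": [
              { "id": "[concat(variables('appGwId'), '/backendAddressPools/appGwBackendPool')]" }
            ]
          }
        }
      ]
    }
  }
]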

So there we have a quick overview of the Azure Load Balancer and Application Gateway offerings and when to consider one over the other.

Either the Azure Load Balancer overview or the Application Gateway introduction provides a good breakdown of where you should consider using one or the other.

Happy (highly available) days! 🙂


Secure your VSTS Release Management Azure VM deployments with NSGs and PowerShell

One of the neat features of VSTS’ Release Management capability is the ability to deploy to Virtual Machines hosted in Azure (amongst other environments), which I previously walked through setting up.

One thing that you need to configure when you use this deployment approach is an open TCP port to the Virtual Machines to allow remote access to PowerShell and WinRM on the target machines from VSTS.

In Azure this means we need to define a Network Security Group (NSG) inbound rule to allow the traffic (sample shown below). As we are unable to limit the source address (i.e. where VSTS Release Management will call from) we are stuck creating a rule with a Source of “Any” which is less than ideal, even with the connection being TLS-secured. This would probably give security teams a few palpitations when they look at it too!

Network Security Group

We might be able to determine a source address based on monitoring traffic, but there is no guarantee that the Release Management host won’t change at some point which would mean our rule blocks that traffic and our deployment breaks.

So how do we fix this in an automated way with VSTS Release Management and provide a secured environment?

Let’s take a look.

The Fix

The fix is actually quite straightforward it turns out.

As the first step you should go to the existing NSG and flip the inbound rule from “Allow” to “Deny”. This will stop the great unwashed masses from being able to hit TCP port 5986 on your Virtual Machines immediately.

As a side note… if you think nobody is looking for your VMs and open ports, try putting a VM up in Azure and leaving RDP (3389) open to “Any” and see how long it takes before you start seeing authentication failures in your Security event log due to account enumeration attempts.

Modify Project Being Deployed

We’re going to leverage an existing Release Management capability to solve this issue, but first we need to provide a custom PowerShell script that we can use to manipulate the NSG that contains the rule we are currently using to block inbound traffic.

This PowerShell script is just a simple wrapper that combines Azure PowerShell Cmdlets to allow us to a) read the NSG b) update the rule we need c) update the NSG, which commits the change back to Azure.
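
If you need to recreate that script, a minimal sketch using the AzureRM cmdlets of the day looks like this (the parameter names match the Script Arguments shown later in this post):

param (
    [string]$resourceGroupName,
    [string]$networkSecurityGroupName,
    [string]$securityRuleName,
    [string]$allowOrDeny,
    [int]$priority
)

# a) read the existing NSG
$nsg = Get-AzureRmNetworkSecurityGroup -Name $networkSecurityGroupName -ResourceGroupName $resourceGroupName

# b) update the rule we need
$rule = $nsg.SecurityRules | Where-Object { $_.Name -eq $securityRuleName }
$rule.Access = $allowOrDeny
$rule.Priority = $priority

# c) update the NSG, committing the change back to Azure
Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg | Out-Null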

I usually include this script in a Folder called “Deploy” in my project and set the build action to “Copy always”. As a result the file will be copied to the Artefacts folder at build time which means we have access to it in Release Management.

Project Setup

You should run a build with this included file so that it is available in your build artefacts.

Modify Release Management Definition

Note that in order to complete this step you must have a connection between VSTS and your target Azure Subscription already configured as a Service Endpoint. Typically this needs to be done by a user with sufficient rights in both VSTS and the Azure Subscription.

Now we are going to modify our existing Release Management definition to make use of this new script.

The way we are going to enable this is by using the existing Azure PowerShell Task that we have available in both Build and Release Management environments in VSTS.

I’ve shown a sample where I’ve added this Task to an existing Release Management definition.

Release Management Definition

There is a reason this Task is added twice – once to change the NSG rule to be “Allow” and then once, at the end, to switch it back to “Deny”. Ideally we want to do the “Allow” early in the process flow to allow time for the NSG to be updated prior to our RM deployment attempting to access the machine(s) remotely.

The Open NSG Task is configured as shown.

Allow Script

The Script Arguments should match those given in the sample script above. As a sample we might have:

-resourceGroupName MyTestResourceGroup -networkSecurityGroupName vnet01-nsg 
-securityRuleName custom-vsts-deployments -allowOrDeny Allow -priority 3010

The beauty of our script is that the Close NSG Task is effectively the same, but instead of “Allow” we put “Deny” which will switch the rule to blocking traffic!

Make sure you set the “Close” Task to “Always run”. This way if any other component in the Definition fails we will at least close up the NSG again.

Additionally, if you have a Resource Group Lock in place (and you should for all production workloads) this approach will still work because we are only modifying an existing rule, rather than trying to add / remove it each time.

That’s it!

You can now benefit from VSTS remote deployments while at the same time keeping your environment locked down.

Happy days 🙂


Inviting Microsoft Account users to your Azure AD-secured VSTS tenant

I’ve done a lot of external invite management for VSTS over the last few years, and generally without fail we’ll have issues getting everyone on-boarded easily. This blog post is a reference for me (and I guess you too) to understand the invite process and document the experience the invited user has.

There are two sections to this blog post:

1. Admin instructions to invite users.

2. Invited user instructions.

Select whichever one applies to you.

The starting point for this post is that the external user hasn’t yet been invited to your Azure AD tenant. The user doing the inviting is also not an Azure AD Global Admin, but does have rights in an Azure subscription.

The Invite to Azure AD

Log into an Azure subscription using your Azure AD account and select Subscriptions. Ideally this shouldn’t be a production tenant!

Select Subscription

I am going to start by inviting this user to my Azure tenant as a Reader-level user which means they will receive an Azure AD invite. I will later revoke this access once they have accepted my invite.

Click “Add” on the IAM blade for the Subscription.

Select Add

Ensure you set the role to “Reader” which provides no ability to execute changes.

Set Role

Now enter the user’s email address. Note you can add multiple email addresses if you want. Click the “Save” button to apply the change.

Enter Email

Once I click “Save” the portal will say it is inviting the user. A short while later the invitee will receive an invite email in their inbox. See later in the blog post for their experience.

Add Invited User to VSTS

Now the invited user is in your Azure AD tenant they will show up in the User Search Dialog in VSTS. You must be a VSTS Admin to manage users.

Log into your VSTS tenant and navigate to Users and then search for the newly added user and assign them the license you want them to use.

VSTS invite

Click “Send Invitation” which will be enabled once you select the invitee’s account from the drop-down. Note that VSTS won’t actually send this user an invite.

At this stage the user now has access to your VSTS tenant, but not any projects it contains – make sure you add them to some!

Let’s take a look and see what the invited user sees.

Invited User Experience

If I log in to the invited user’s Outlook.com mailbox I will see an Azure AD invite awaiting.

The invited user should click the “Get Started” button to accept the invite. Unless they complete this process they won’t have access to VSTS.

Invite email

This will open a web browser on the invited tenant’s redemption page that will be branded with any extended branding the Azure AD tenant has.

The user must click ‘Next’ on this screen to accept the invite.

Invite web experience

It will take a few moments to setup the Microsoft Account in the Azure AD tenant.

Adding user to tenant

Once done the user will end up at the default “My Apps” screen but will see nothing at this point as they have not been granted access to anything.

Empty My Apps screen

Invited User Accesses VSTS

The invited user can now navigate to your VSTS tenant in a browser – https://tenantname.visualstudio.com/

If they aren’t already logged into their Microsoft Account they will be prompted to login and then directed to VSTS.

As this is their first time logging in they will be asked to enter some information, which will be auto-populated but editable.

VSTS Invite

They then get dropped to the home page for VSTS and are ready to work. If you didn’t add them to any existing projects and haven’t granted them additional privileges they might see the screen below.

VSTS Invite

Make sure they bookmark your VSTS tenant and that they use their invited Microsoft Account each time they want to access it.

Login Experience for User

If the user logs out or their session times out they will be directed first to your Azure AD tenant login page, as this is what VSTS is configured to use when you attach an Azure AD tenant to it.

Azure AD tenant sign-in page

The invited user should enter their Microsoft Account into the email address box and when the username box loses focus they will be redirected to the Microsoft Account login screen.

Redirect to Microsoft Account sign-in

This step quite often catches people out as they aren’t expecting the redirect, particularly if they haven’t used Office 365 or similar systems.

sign-in-03

At the Microsoft Account login page (shown below) they enter their password and they will be directed back to VSTS.

MSA login page

Don’t forget!

If you’re the inviting Admin you can now remove the invited user as a reader from your Azure tenant.

If you want extra security, get the Microsoft Account users to turn on two-step verification, which will require them to enter a code to log in.

Happy coding!

Post credit-roll Admin bonus!

If you find out that some of the users you invited didn’t have a mailbox attached to their Microsoft Account and therefore didn’t get the original invite you can resend the invite. Log into your Azure tenant, open Azure Active Directory and then find the invited user.

Open their profile and click on the ‘Resend invitation’ button – it is greyed out but will work just fine :).

Re-invite a user


Per-environment config value tokenization for Azure Web Apps using VSTS Release Management

For the majority of the last ten years I’ve been working with delivery of solutions where build and deployment comes from some centralised location.

When Microsoft made InRelease part of TFS as Release Management, I couldn’t wait to use it. Unfortunately in its state at that time the learning curve was quite steep and the immediate value was outweighed by the effort to get up and running.

Roll forward to 2016 and we find Release Management as a modern, web-based feature of Visual Studio Team Services (VSTS). The cherry on the cake is that a lot of the learning curve has dropped away as a result.

In this post I’m going to look at how we can deploy a Web Deploy (or MS Deploy) packaged Web Application to an Azure Web Application and define different deployment environments with varying configurations.

Many people would apply configuration transformations at build time, but in my scenario I want to deploy the same compiled package to multiple environments without the need to recompile anything.

My Challenge

The build definition for my Web Application results in a package that allows it to be deployed to an Azure Web App by Web Deploy. The result is the web.config configuration file is in a zip file that is transferred to the server for deployment by Web Deploy.

Clearly at this point I don’t have access to the web.config file in the drop folder so I can’t transform it with Release Management. Or can I?!

Using Web Deploy Parameters

Thankfully the design of Web Deploy provides for the scenario I described above through use of either command-line arguments or a specially formatted input file that I will call the “SetParameters” file.

Given this is a first-class feature in the broader Microsoft developer toolkit, I’d expected that there would be a few Tasks in VSTS that I could use to get all of this up and running… I got close, but couldn’t quite get it functioning as I wanted.

Through the rest of this post I will walk you through the setup to get this going.

Note: I am going to assume you have setup Build and Release Management definitions in VSTS already. Your Build should package to deploy to an Azure Web App and the Release Management definition to deploy it.

VSTS Release Management Setup

The first thing to get all of this up and running is to add the Release Management Utilities extension to your subscription. This extension includes the Tokenizer Task which will be key to getting the configuration per-environment up and running.

You also need to define an “Environment” in Release Management for each deployment target you have, which will also be used as a container for environmental configuration items to replace at deployment time. A sample is shown below with two Environments defined.

Environments

We’ll come back to VSTS later, for now, let’s look at the project changes you need to make.

Source Project Changes

For the purpose of this exercise I’m just worrying about web.config changes.

First of all, you need to tokenise the settings you wish to transform. I have provided a sample below that shows how this looks in a web.config. The format of two underscores on either side of your token placeholder is required.
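
A sketch of that sample follows – the setting names are stand-ins (matching the Document DB example mentioned later in this post):

<configuration>
  <appSettings>
    <add key="DocDbEndpoint" value="__DocDbEndpoint__" />
    <add key="DocDbAuthKey" value="__DocDbAuthKey__" />
  </appSettings>
</configuration>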

The next item we need to do is to add a new XML file to our Visual Studio project at the root level. This file should be called “Parameters.xml” and I have included a sample below that shows what we need to add to it if we want to ensure we replace the tokens in the sample web.config above.

You’ll notice one additional item in the file below that isn’t related directly to the web.config above – the IIS Website name that will be used when deployed. I found if I didn’t include this the deployment would fail.
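
A sketch of that Parameters.xml, matching the hypothetical settings above, looks like this:

<parameters>
  <parameter name="IIS Web Application Name" defaultValue="MyWebApp" tags="IisApp">
    <parameterEntry kind="ProviderPath" scope="IisApp" match=".*" />
  </parameter>
  <parameter name="DocDbEndpoint" defaultValue="__DocDbEndpoint__">
    <parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/appSettings/add[@key='DocDbEndpoint']/@value" />
  </parameter>
  <parameter name="DocDbAuthKey" defaultValue="__DocDbAuthKey__">
    <parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/appSettings/add[@key='DocDbAuthKey']/@value" />
  </parameter>
</parameters>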

When you add this file, make sure to set the properties for it to a Build Action of “None” and Copy to Output Directory of “Do not copy”.

Note: if you haven’t already done so, you should run a Build so that you have Build Artifacts ready to select in a later step.

Add the Tokenizer to your Release Management Definition

We now need to return to VSTS’ web interface and modify our existing Release Management definition (or create a new one) to add the Tokenizer utility to the process.

You will need to repeat this so all your environments have the same setup. I’ve shown what my Test environment setup looks like below (note that I changed the default description of the Tokenizer Task).

Release Management Definition

Configuration of the Tokenizer is pretty straightforward at this point, especially if we’ve already run a build. Simply select the SetParameters.xml file your build already produced.

Tokenizer setting

Define values to replace Tokens

This is where we define the values that will be used to replace the tokens at deployment time.

Click on the three dots at the top right of the environment definition and from the menu select “Configuration variables…” as shown below.

Variable Definition

A dialog loads that allows us to define the values that will go into our web.config for this environment. The great thing you’ll note is that you can obfuscate sensitive details (in my example, the key to access the Document DB account). This is non-reversible too – you can’t “unhide” the value and see the plain-text version.

Token Values

We’re almost done!

Explicitly select SetParameters file for deployment

I’m using the 3.* (preview) version of the Deploy Azure App Service Release Management Task, which I have configured as shown.

App Service Task

At this point, if you create a new Release and deploy to the configured environment you will find that the deployed web.config contains the values you specified in VSTS and you will no longer need multiple builds to send the same package to multiple environments.

Happy Days! 🙂


Azure Functions: Build an ecommerce processor using Braintree’s API

In this blog I am continuing with my series covering useful scenarios for using Azure Functions – today I’m going to cover how you can process payments by using Functions with Braintree’s payment gateway services.

I’m not going to go into the details of setting up and configuring your Braintree account, but what I will say is the model you should be applying to make this scenario work is one where your Function will play the Server role as documented in Braintree’s configuration guide.

My sample below is pretty basic, and for the Function to be truly useful (and secure) to use you will need to consider a few things:

  1. Calculate the total monetary amount to charge elsewhere and pass it to the Function as an argument (my preference is via a message on a Service Bus Queue or Topic). Don’t do the calculation here – make the Function do precisely one thing: post a payment to Braintree and handle the response.
  2. The payment method token (or nonce) should also come from upstream from an end-user’s authorisation to pay. This is at the core of how Braintree securely processes payments and you should understand it by reading the Braintree documentation.
  3. Don’t, whatever you do, put your Braintree configuration details inline as plain-text. In all environments I always use Azure Key Vault for these sorts of details, coupled with my Key Vault client code*.

Note: I did get contacted by someone who advised that heavy workloads resulting in lots of calls to Key Vault will most likely result in port exhaustion on your Key Vault and your calling code receiving errors. You should consider this in your design – it’s not something I have had to work around just yet and I do have some ideas to solve in a relatively secure fashion which I hope to blog about in future.

Now we have the fundamentals let’s get into it!

Firstly we need to import the Braintree NuGet package that brings all the goodness we’ll need to create and send requests and process the responses. Add the following entry to your project.json file and when you save it the package will be restored.
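
The entry looks something like this (the package version is whatever is current for you):

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Braintree": "3.4.0"
      }
    }
  }
}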

Once we’ve done this we now have the power of the API at our fingertips and can craft requests as required.

In my simplified example below I am going to process a Transaction Sale for $10 (the currency will depend on your merchant account setup) and use a hardcoded Braintree Customer Identity that maps to an existing Customer entity that I’ve previously created in the Braintree Vault associated with my trial Merchant account.
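
A sketch of that Function body is below – the credentials and customer ID are obviously placeholders:

using Braintree;

public static void Run(string queueMessage, TraceWriter log)
{
    // don't do this in production - pull these details from Key Vault instead!
    var gateway = new BraintreeGateway
    {
        Environment = Braintree.Environment.SANDBOX,
        MerchantId = "YOUR_MERCHANT_ID",
        PublicKey = "YOUR_PUBLIC_KEY",
        PrivateKey = "YOUR_PRIVATE_KEY"
    };

    var request = new TransactionRequest
    {
        Amount = 10.00M,
        CustomerId = "EXISTING_CUSTOMER_ID", // hardcoded for demo purposes only
        Options = new TransactionOptionsRequest
        {
            SubmitForSettlement = true
        }
    };

    Result<Transaction> result = gateway.Transaction.Sale(request);

    if (result.IsSuccess())
    {
        log.Info($"Transaction {result.Target.Id} submitted for settlement.");
    }
    else
    {
        log.Error($"Transaction failed: {result.Message}");
    }
}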

This is a fairly convoluted example, as in reality you’d pass the paymentMethodToken to the Function as part of any data supplied and, as I noted above, you’d not leave your Merchant details lying around in plain-text (would you now?)

That folks, is pretty much all there is to it! Bake this into a set of Microservice Functions and pass in the total you wish to settle and off you go.

The Braintree SDK has a bunch of other service endpoints you can utilise to perform actions against other objects like Customers or creating recurring subscriptions so don’t limit your use solely to paying for things on checkout.

Happy days 🙂


Azure Functions: Send email using SendGrid

Prior to Azure Functions announcing their General Availability (GA) I had previously used SendGrid as an output binding in order to send email messages.

Since GA, however, the ability to use SendGrid remains undocumented (I assume to give the Functions team time to test and document the binding properly) and the old approach I was using no longer seems valid.

As I needed to use this feature I spent some time digging into getting this working with the GA release of Azure Functions (version ~1). Thankfully, as Functions is an abstraction over WebJobs, I had plenty of information on how to get this working right away thanks to the WebJobs documentation and extensibility :).

Here’s how you can get this working too:

1. Register your SendGrid API key in Application Settings: you must utilise the documented approach of setting your API key in an App Setting called “AzureWebJobsSendGridApiKey”. Without this your Function won’t be able to send mail successfully.

2. Import the SendGrid NuGet package into your Function by creating a project.json file that contains the following:
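
At the time of writing the 8.x package worked for me – a sketch (you may need to adjust the version):

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Sendgrid": "8.0.5"
      }
    }
  }
}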

3. Create an output binding on your function that will allow you to send the message without needing to create client code in your Function:
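
A function.json for a manually triggered Function with the SendGrid output binding might look like this (a sketch – your trigger will almost certainly differ):

{
  "bindings": [
    {
      "type": "manualTrigger",
      "name": "input",
      "direction": "in"
    },
    {
      "type": "sendGrid",
      "name": "message",
      "direction": "out"
    }
  ]
}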

4. Add a reference and using statement in your run.csx to ensure you have the right packages included. You can see this in the run.csx below that has all you need to create and send a simple email to a single recipient.
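
A minimal sketch of that run.csx, assuming the binding above and placeholder email addresses:

#r "SendGrid"

using SendGrid.Helpers.Mail;

public static void Run(string input, TraceWriter log, out Mail message)
{
    var from = new Email("sender@example.com");
    var to = new Email("recipient@example.com");
    var content = new Content("text/plain", "Sent from an Azure Function using the SendGrid output binding.");

    // the output binding picks this up and delivers it via SendGrid
    message = new Mail(from, "Hello from Azure Functions", to, content);
}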

If you want to do a lot more customisation of the email that is sent you can simply refer to the SendGrid C# Library on Github which covers features such as sending using templates.

Once the Functions team publishes an updated approach to using SendGrid I’ll make sure to link to it from here. In the meantime… happy mailing!


Azure Functions: Access KeyVault Secrets with a Cert-secured Service Principal

Azure Functions is one of those services in Azure that is seeing a massive amount of uptake. People are using it for so many things, some of which require access to sensitive information at runtime.

At time of writing this post there is a pending Feature Request for Functions to support storing configuration items in Azure KeyVault. If you can’t wait for that Feature to drop here’s how you can achieve this today.

Step 1: Create a KeyVault and Register Secrets

I’m not going to step through doing this in detail as the documentation for KeyVault is pretty good, especially for adding Secrets or Keys. For our purposes we are going to store a password in a Secret in KeyVault and have the most recent version of it be available from this URI:

https://mytestvault.vault.azure.net/secrets/remotepassword

Step 2: Setup a Cert-secured Service Principal in Azure AD

a. Generate a self-signed certificate

This certificate will be used for our Service Principal to authorise itself when calling into KeyVault. You’ll notice that I’m putting a -1 day “start of” validity period into this certificate. This allows us to deal with the infrastructure running at UTC (which my location isn’t) and avoid not being able to access the certificate until UTC matches our local timezone.
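
A sketch using the Windows 10 / Server 2016 version of the PKI cmdlets (subject, file name and password are placeholders):

$cert = New-SelfSignedCertificate -Subject "CN=funcskeyvault" `
          -CertStoreLocation "cert:\CurrentUser\My" `
          -KeySpec KeyExchange `
          -NotBefore (Get-Date).AddDays(-1) `
          -NotAfter (Get-Date).AddYears(1)

$password = ConvertTo-SecureString -String "YOUR_PFX_PASSWORD" -Force -AsPlainText

Export-PfxCertificate -Cert $cert -FilePath ".\funcskeyvault.pfx" -Password $password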

b. Create Service Principal with Cert Authentication

This step requires you to log into an Azure Subscription that is tied to the target Azure AD instance in which you wish to register the Service Principal. Your user must also have sufficient privileges to create new users in Azure AD – if it doesn’t this step will fail.
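
A sketch using the AzureRM cmdlets (display name and URIs are placeholders, and $cert is the certificate created above):

$certValue = [System.Convert]::ToBase64String($cert.GetRawCertData())

$app = New-AzureRmADApplication -DisplayName "FuncsKeyVault" `
         -HomePage "https://funcskeyvault.local" `
         -IdentifierUris "https://funcskeyvault.local" `
         -CertValue $certValue `
         -StartDate $cert.NotBefore -EndDate $cert.NotAfter

New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId

# grant the Service Principal read access to Secrets in our Vault
Set-AzureRmKeyVaultAccessPolicy -VaultName "mytestvault" `
  -ServicePrincipalName $app.ApplicationId `
  -PermissionsToSecrets get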

At this point we now have a Vault, a Secret, and a Service Principal that has permissions to read Secrets from our Vault.

Step 3: Add Cert to App Service

In order for our Function App(s) to utilise this Service Principal and its certificate to access KeyVault we need to upload the PFX file we created in 2.a above into the App Service in which our Functions live. This is just as you would do if this App Service was running a Web App but without the need to bind it to anything. The official Azure documentation on uploading certs is good so I won’t duplicate the instructions here.

Watch out – Gotcha!

Once you’ve uploaded your certificate you need to do one more thing to ensure that your Function code can read the certificate from the store. You do this by adding an Application Setting “WEBSITE_LOAD_CERTIFICATES” and either specifying just the thumbprint of your certificate or putting “*” to load any certificate held in the store.

Step 4: Function App KeyVault and Service Principal Setup

a. Nuget Packages

Accessing KeyVault with a Service Principal in Functions requires us to load some Nuget packages that contain the necessary logic to authenticate with Azure AD and to call KeyVault. We do this by adding the following to our Function App’s project.json.
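
Something like the following (versions are the ones that worked for me at the time – adjust as needed):

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Microsoft.Azure.KeyVault": "2.3.2",
        "Microsoft.IdentityModel.Clients.ActiveDirectory": "3.13.9"
      }
    }
  }
}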

b. KeyVault Client CSX
Now let’s go ahead and drop in our KeyVault “client” that wraps all code for accessing KeyVault in a single CSX (note that this is mostly inspired by other code that shows you how to do this for Web Apps).
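
A sketch of that CSX is below – it follows the common ADAL certificate + KeyVault client pattern (the method names are my own):

using System;
using System.Security.Cryptography.X509Certificates;
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

private static ClientAssertionCertificate _assertionCert;

public static string GetSecret(string secretUri, string clientId, string certThumbprint)
{
    // find the certificate we uploaded to the App Service
    var clientCert = FindCertificateByThumbprint(certThumbprint);
    _assertionCert = new ClientAssertionCertificate(clientId, clientCert);

    var client = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(GetAccessTokenAsync));
    return client.GetSecretAsync(secretUri).Result.Value;
}

private static X509Certificate2 FindCertificateByThumbprint(string thumbprint)
{
    // WEBSITE_LOAD_CERTIFICATES makes uploaded certs visible in CurrentUser\My
    var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
    try
    {
        store.Open(OpenFlags.ReadOnly);
        var certs = store.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false);
        return certs.Count == 0 ? null : certs[0];
    }
    finally
    {
        store.Close();
    }
}

private static async Task<string> GetAccessTokenAsync(string authority, string resource, string scope)
{
    // authenticate to Azure AD using the certificate as the client credential
    var context = new AuthenticationContext(authority, TokenCache.DefaultShared);
    var result = await context.AcquireTokenAsync(resource, _assertionCert);
    return result.AccessToken;
}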

Step 5: Use in a Function

As we’ve encapsulated everything to do with KeyVault into a CSX we can retrieve a secret from KeyVault in a Function using a single call once we’ve imported our client code.
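
For example, a run.csx might look like this (the client ID and thumbprint are placeholders you’d ideally read from App Settings):

#load "keyvault.csx"

public static void Run(string input, TraceWriter log)
{
    var password = GetSecret("https://mytestvault.vault.azure.net/secrets/remotepassword",
                             "YOUR_SERVICE_PRINCIPAL_CLIENT_ID",
                             "YOUR_CERTIFICATE_THUMBPRINT");

    log.Info("Retrieved the remote password from KeyVault.");
}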

Happy (Secure) Days!


Continuous Deployment of Windows Services using VSTS

I have to admit writing this post feels a bit “old skool”. Prior to the last week I can’t remember the last time I had to break out a Windows Service to solve anything. Regardless, for one cloud-based IaaS project I’m working on I needed a simple worker-type solution that was private and could post data to a private REST API hosted on the other end of an Azure VNet Peer.

While I could have solved this problem any number of ways I plumped for a Windows Service, primarily because it will be familiar to developers and administrators at the organisation I’m working with, but I figured if I’m going to have to deploy onto VMs I’m sure not deploying in an old-fashioned way! Luckily we’re already running in Azure and hosting on VSTS so I have access to all the tools I need!

Getting Setup

The setup for this process is very similar to the standard “Deploy to Azure VM” scenario that is very well covered in the official documentation and which I added some context to in a blog post earlier in the year.

Once you have the basics in place (it only takes a matter of minutes to prepare each machine) you can head back here to cover off the changes you need to make.

Note: this process is going to assume you have a Windows Service Project in Visual Studio 2015 that is being built using VSTS’s in-built build infrastructure. If you have other configurations you may need to take different steps to get this to work 🙂

Tweak build artefact output

First we need to make sure that the outputs from our build are stored as artefacts in VSTS. I didn’t use any form of installer packaging here so I needed to ensure my build outputs were all copied to the “drops” folder.

Here is my build definition which is pretty vanilla:

Build Process

The tweak I made was on the Visual Studio build step (step 2) where I defined an additional MSBuild Argument that set the OutputPath to be the VSTS build agent’s artifacts directory which will automatically be copied by the Publish Artifacts step:

Build Update
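
For reference, the argument I added looks something like this (your solution may need a slightly different form):

/p:OutputPath="$(build.artifactstagingdirectory)"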

If I look at a history entry for my CI build and select Artifacts I can see that my Windows Service binary and all its associated assemblies, config files (and importantly Deployment Script) are stored with the build.

Build Artefacts

Now we have the build in the right configuration let’s move on to the deployment.

Deploying a Service

This is actually easier than it used to be :). Many of us would remember the need to package the Windows Service into an MSI and then use InstallUtil.exe to do the install on deployment.

Fear not! You no longer need this approach for Windows Services!

PowerShell FTW!

Yes, that Swiss Army knife comes to the rescue again with the Get-Service, New-Service, Stop-Service and Start-Service Cmdlets.

We can combine these handy Cmdlets in our Deployment script to manage the installation of our Windows Service as shown below.
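
A sketch of such a deployment script is below – the service name and install path are assumptions, and the source path matches the C:\temp\ copy destination mentioned next:

$serviceName = "MyWindowsService"
$installPath = "C:\Services\MyWindowsService"
$binaryPath  = Join-Path $installPath "MyWindowsService.exe"

# stop the service if it's already installed so we can overwrite the binaries
$service = Get-Service -Name $serviceName -ErrorAction SilentlyContinue
if ($service) { Stop-Service -Name $serviceName }

# copy the new build outputs over the top of the old ones
New-Item -ItemType Directory -Force -Path $installPath | Out-Null
Copy-Item -Path "C:\temp\*" -Destination $installPath -Recurse -Force

# register the service on first deployment only
if (-not $service)
{
    New-Service -Name $serviceName -BinaryPathName $binaryPath -StartupType Automatic
}

Start-Service -Name $serviceName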

The Release Management definition remains unchanged – all we had to do was ensure our build outputs were available to copy from the ‘Drop’ folder on the build and that they are copied to C:\temp\ on the target VM(s). Our Deployment Script takes care of the rest!

That’s it! Next time your CI build passes your CD kicks in and your Windows Service will be updated on your target VMs!


AAD B2C Talk – Innovation Days 2016 Wrap

I recently spoke at the Innovation Days 2016 event held in Sydney on Azure AD B2C.

The presentation for my talk is available here:

https://1drv.ms/p/s!AqBI2LiKM4LHwNJvTxrXNAblpTBCJA

and you can find the sample code for the web and API apps here:

https://github.com/sjwaight/innovationdays2016/