
Understanding Tenants, Subscriptions, Regions and Geographies in Azure

If you are getting started with Azure you will come across a few key terms that are important to understand well. In this post I’m going to cover what I think are four of the key ones.

Tenant

A Tenant, as it relates to Azure, refers to a single instance of Azure Active Directory, or, as it is often called “Azure AD”. Azure AD is a key piece of Microsoft’s cloud platform as it provides a single place to manage users, groups and the permissions they hold in relation to applications published in Azure AD.

Key Microsoft applications that Azure AD provides access to include Office 365, Dynamics 365 and Azure. Yes, you read that right, Azure is treated as an ‘application’. You can also use Azure AD to control access to many other third-party applications such as Salesforce and even the AWS admin console. As an application developer you can register your own applications in Azure AD for the purpose of allowing users access.

Azure AD Tenants are globally unique and are scoped using a domain that ends with ‘onmicrosoft.com’ (e.g. myazuread.onmicrosoft.com), and each has a ‘Tenant ID’ in the form of a UUID/GUID. Some customers choose to connect their internal Active Directory environment to Azure AD to allow single or same sign-on for their staff, and will also use a custom domain instead of the default ‘onmicrosoft.com’.

When you access the Azure Portal, or leverage one of the command-line tools to manage Azure resources in a Subscription, you will always be authenticated at some point via the Azure AD Tenant associated with the Subscription you want to access. The actions you can take will depend on the Role you have been assigned in the target Subscription.

Finally, Azure AD Tenants can be associated with multiple Subscriptions (typically in larger organisations), but a Subscription can only ever be associated with a single Azure AD Tenant at any time.
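
You can see this relationship for yourself using the Azure CLI. A quick sketch (names shown are placeholders):

    # Sign in - you will be authenticated via an Azure AD Tenant
    az login

    # List your Subscriptions along with the Tenant each one belongs to
    az account list --query "[].{Name:name, SubscriptionId:id, TenantId:tenantId}" --output table

    # Switch the Subscription that subsequent commands target
    az account set --subscription "My Subscription Name"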

Dev Tip: if you want to develop an application that uses Azure AD but don’t have permissions to register applications in your company’s Azure AD Tenant (or you want a ‘developer’ Azure AD Tenant) you can choose to create a new Azure AD Tenant in the Azure Portal. Make sure in your application that you can easily change Azure AD Tenant details to allow you to redeploy as required. Azure AD has a free tier that should be suitable for most development purposes.

IT Pro Tip: you can change the display name for your Tenant – something I strongly recommend, particularly as Azure AD B2B will mean others will see your Directory name if they are invited and may be confused if the display name is something unclear. Note that you are *not* able to change the default onmicrosoft.com domain.

Subscription

A Subscription in Azure is a logical container into which any number of resources (Virtual Machines, Web Apps, Storage Accounts, etc) can be deployed. It can also be used for coarse-grained access control to these resources, though the correct approach these days is to leverage Role Based Access Control (RBAC) or Management Groups. All incurred costs of the resources contained in the Subscription will also roll up at this level (see a sample below).

Subscription costs view

As noted above, a Subscription is only ever associated with a single Azure AD Tenant at any time, though it is possible to grant users outside of this Tenant access. You can also choose to change the Azure AD Tenant for a Subscription. This feature is useful if you wish to transfer, say, a Pay-As-You-Go (PAYG) Subscription into an existing Enterprise Enrolment. Subscriptions have both a display name (which you can change) and a Subscription ID (UUID/GUID) which you can’t change.

Subscriptions are not tied to an Azure Region and as a result can contain resources from any number of Regions. This doesn’t mean that you will have access to all Regions, as some Geographies and Regions are restricted from use – we’ll talk more about this next.

Resources contained in a single Subscription but deployed to different Regions will still incur cross-Region costs (where applicable).

People sometimes use the word ‘Tenant’ instead of ‘Subscription’ or vice-versa. Hopefully you can now see what the difference is between the two.

Regions and Geographies

Azure differs from the other major cloud providers in its approach to providing services close to the customer. As a result, at the time of writing (August 2018), Azure offers 42 operational Regions with 12 more announced or under development.

A Region is a grouping of data centres that together form a deployment location for workloads. Apart from geo-deployed services like Azure AD or Azure Traffic Manager you will always be asked what Region you wish to deploy a workload to.

Regions are named based on a general geography rather than after exactly where the data centres are. So, for example, in Australia we have four Regions – Australia East, Australia Southeast, Australia Central 1 and Australia Central 2.
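
A quick way to see the full list of Region names is the Azure CLI:

    # List all Regions (locations) available to your Subscription
    az account list-locations --output table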

A Geography, as it relates to Azure, can be used to describe a specific market – typically a country (Australia), though sometimes a geographic region (Asia, Europe). Normally within a Geography you will find two Regions which will be paired to provide customers with high availability options. Can anyone spot the one Region that doesn’t have its pair in the same Geography?

There are a few special Regions that aren’t open to everyone – US Government Regions, the entire German Geography and China. In Australia, you must go through a whitelisting process to gain access to Australia Central 1 and 2.

When you replicate data or services between Regions you will pay an increased charge for data transfer between Regions and/or duplicated hosting costs in the secondary Region. Some services such as Azure Storage and Azure SQL Database provide geo-redundant options where you pay an incremental cost to have your data replicated to the secondary Region. In other cases you will need to design your own replication approach based on your application and its hosting infrastructure.
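
As an illustration, geo-redundancy for Azure Storage is simply a replication choice (SKU) you make when creating the account. A sketch with placeholder names:

    # Create a geo-redundant (GRS) Storage Account - data is replicated to the paired Region
    az storage account create \
        --name mygrsstorage \
        --resource-group my-resource-group \
        --location australiaeast \
        --sku Standard_GRS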

Once you have deployed a service to a Region you are unable to move it – you have to re-provision it if you need the primary location to be somewhere else.

As a final note, while there is a Regional availability model (replication of services between Regions), Microsoft has also introduced the concept of Availability Zones. Availability Zones are still being rolled out globally, and are more than just a logical overlay over Regions. Interesting times!

So there we have it, a quick overview of some of the key terms you should be familiar with when getting started with Azure. As always, if anything’s unclear or you have any questions feel free to drop a comment below.

😎


Empowering Australian Developers to do more with Azure

The first IT job I had was training mature age students at college how to use PCs. While I was doing this I also wrote and delivered a course on how to build sites for the (new at the time) World Wide Web.

My work on the WWW course got me noticed by a business in London that was building websites for their customers. After joining them, one of the largest projects I worked on was the first website for Marks & Spencer which we developed using Active Server Pages (ASP) and hosted on Windows NT 4. We even ran a celebrity text chat session for M&S using Exchange 5.5’s chat service!

Clearly there have been a few years (my son: “yeah, like 10 billion years”) between the above and today, and I’ve done a lot of things in the interim: development, operations, delivery management and community organisation. But at my core I’ve always been a developer.

If you’ve read this far I am sure you’ve figured out that today I’m not solving a technical problem for you. 🙂

So why am I reminiscing about times past? Well, really, I’m just calling back to where I started my career: helping people do more with technology, and particularly with new or unfamiliar technology.

Which leads me to the reason for this blog post.

I’m super excited to let everyone know that from August 2018 I’ll be joining Microsoft here in Australia as the Azure Pro Developer Lead, giving me the opportunity to return to my roots. I will be supporting developers in building their solutions on Azure using the platform’s wide range of innovative features, helping them move beyond Virtual Machines!

Azure enables rapid realisation of ideas you have and I want Australian developers to be empowered to deliver their ideas using Azure as their accelerator. I’m looking forward to starting and will see you at an event, at your office or over a coffee.

Happy Days 😎

Azure and Bit

Azure and Bit artwork from the talented Ashley McNamara. Thanks Ashley!

I also recently saw the below tweet, which resonated with my view on why Microsoft Azure is the best place for developers.


Are Free Tier Cloud Services Worth The Cost?

Over the years that I’ve been talking with public groups on cloud services, and Azure in particular, I will typically have at least one person in every group make a statement like this:

“Azure’s good, but the free tier isn’t as good as AWS.”

I’ve discussed this statement with groups enough times that I thought it would be good to capture my perspective on where Azure stands, provide some useful resources, and pose a question to those whose starting point is free services.

You want The Free? You can’t handle The Free!

You want the free?!

When people talk about how a free tier isn’t that useful, typically what they are saying translates into one of two scenarios:

  1. The timeframe the free tier is offered for is not long enough for the person to achieve an acceptable learning outcome based on their time investment;
     
  2. More commonly, the service limits are too low, meaning the person cannot achieve an acceptable learning outcome before their credit runs out (regardless of time).

The reality is that the further you move beyond basic scenarios (run a Virtual Machine, create a database), or the less continuous time you have to allocate to your cloud environment, the more likely it is you will receive minimal value from free tier services.

Effective use of free tiers

So how to minimise these outcomes?

  1. Be clear about what you want to achieve before you start a free tier subscription. If you don’t know *what* you want to do in advance you are likely to fritter away that free credit before you get to your eventual end goal. Additionally, if you know what you want to achieve then review the required cloud services you will use and determine if a free tier is going to provide you with sufficient resources to reach your goal.
     
  2. Start with pre-built environments or quickstarts – find labs or similar that give you access to existing environments. Attend events that include credits as part of attendance and use those to achieve a goal. Look at tutorials and samples to find automation scripts / templates that can get you up and running quickly (but remember the previous tip – if you try to provision a ten-node Kubernetes cluster, will that actually succeed in a free tier? Would a one-node cluster suffice to allow you to learn?)

All the cloud platforms will provide you with time-limited free tiers, with some services being offered as “always free” at certain low usage levels.

Azure has had free service trials or tiers in one way or another for some time. Traditionally, however, it hasn’t offered a 12-month period, though that has recently changed and there is now an extended 12-month free tier offering for Azure.

One Azure cloud… many ways to get ongoing credits

Where Azure does differ substantially from AWS in particular is in the number of offerings that get you access to Azure credits on an ongoing basis, lifting you out of having to use just free tier services:

  • Microsoft Developer Network (MSDN) Azure Benefits – available as an add-on to existing MSDN subscribers (note: your organisation might not have access to this benefit depending on your licensing). This is an ongoing benefit while you pay for an MSDN subscription.
  • Azure Starter for Students (formerly Dreamspark). This is an ongoing benefit while a student.
  • BizSpark Benefits – available to those who are leveraging the BizSpark programme for their business. This is an ongoing benefit while you are in the BizSpark programme.
  • Azure for non-profits – go through the process to prove your status and gain access to Azure Credits.
  • Microsoft Azure Passes – when Microsoft runs Azure training courses, attendees will typically be provided with Azure credits in the form of an Azure Pass. We gave these away at the Global Azure Bootcamp this year. These are time-limited offers (one to three months).

Investing in yourself or your idea

The reality of free tier services is they will only get you so far, whether the use you make of the cloud is to learn new concepts or to try an idea you have.

My take is this: if you aren’t prepared to invest your own money (the “I just want more free stuff” mindset) then you don’t put much value on your own education or idea.

If we had the cloud computing services we have now when the dotcom boom was happening we may well have seen a massively different outcome.

Startups wouldn’t have spent massive amounts of their funding on infrastructure and wasted months waiting for services to be provisioned before they even got to serving the first request.

Imagine if you had on-demand services when you were at school (maybe you still are) – access to these sorts of services would improve the quality of your education.

We are at a pivotal moment where we now have access to on-demand resources that a generation ago would have been unimaginable. If you are serious about an idea or personal development put your money where your brain is.

But I’m not an accountant!

Congratulations. Now you are! Didn’t hurt a bit either, did it?

It’s unavoidable for many of us that at some point it will come down to cost. I know there will be more than a few of you sitting there having previously paid a larger than expected cloud hosting bill. I bet you now manage those resources like a hawk. While this is a painful way to learn, you will have identified a key factor in how you design and run cloud native services.

Also, welcome to how businesses work – specifically how to control costs so they can remain viable. This is why your last request to the ops team for 10 servers was rejected, or why you had to finesse your design to fit into existing infrastructure constraints. 🙂

So, where to next?

I highly recommend spending time familiarising yourself with services in the cloud too – avoid the anti-patterns that are the most likely places you will unexpectedly spend more money than you planned.

You can find good examples of ways to configure services from the likes of Scott Hanselman and content like his “Penny Pinching in the Cloud” posts, or Troy Hunt’s posts on how “Have I been pwned” performs on Azure (pricing at the bottom of the post).

So, did I solve your problem? Make more of the Free? Unlikely I suspect.

Ultimately you need to consider that free tiers and services are designed as a taster, to get you thinking about how you could use those services for other things. While there are “always free” services, the reality is you are unlikely to build the next Atlassian with them, but I’m pretty sure you can use them to pass exams or to get educated on cloud technology.

Happy Days 😎


Provide non-admin users with read-only access to Service Endpoints in VSTS

I am currently transitioning some work to another team in our business. Part of this transition has been to pre-configure various Service Endpoints in Visual Studio Team Services (VSTS) to provide a way for the new team to deploy into target Azure environments without the team necessarily having direct or privileged access into those Azure environments.

In this post I am going to look at how you can grant users access to these Service Endpoints without them being able to modify them. This post will also be useful if you’ve configured Service Endpoints (as an admin) and then others on the team (who are non-admins) are unable to see them.

Note that this advice applies to any Service Endpoint – not just Azure!

By default only users who are members of the following groups can see Service Endpoints:

– Project Admins
– Endpoint Admins
– Endpoint Creators.

It’s unlikely that you want all your team members to hold these roles, so let’s see how we can grant rights to use Service Endpoints without being an admin!

We’re going to complete this task with an existing Service Endpoint, but you should hopefully see how you can do the same at the time you set up a new Endpoint in future.

Open up your Team Project and in the top navigation mouse over the settings (cog) icon and from the context menu click “Services”.

Service Endpoints

Once the Endpoints page has loaded, select the Endpoint you wish to allow non-admin users to see.

Selected Endpoint

Now click on ‘Roles’ to display the currently assigned users and groups and their permissions (the current list will only contain users or groups at an ‘Administrators’ level).

Roles Screen

Now we’re in the right place to add our additional read-only users or groups!

Click on the ‘+ Add’ button and the Add user dialog is displayed. Ensure that the ‘Role’ is set to ‘User’ and then find the User or Group you want to assign this right to. In our demo below we are allowing the current project’s Contributors group to use Endpoints.

Add user dialog

Once you click the ‘Add’ button the user or group will be granted read-only rights to the Endpoint. This will allow them to find or use the Endpoint in Build or Release Management Definitions (like below).

Release Definition

Happy (secured) days! 😎


Azure AD B2C Custom Attributes: How to easily find their unique key value

When working with Azure Active Directory B2C you can create what are known as Custom Attributes, which allow you to store data about users beyond the attributes (first name, last name, etc.) that are available out-of-the-box.

When you want to work with these Custom Attributes in a solution you build you will need to know the unique key of the attribute in order to reference it.

What do I mean by this? Let’s take a quick look using an example.

Note that you will need to be a B2C Global Admin in order to perform some tasks covered in this post.

Creating Custom Attributes

These are created via the Azure Management Portal. In my sample I am going to add an attribute to hold a tier rating for a user (say, Gold, Silver and Bronze) called “TierRating”.

The video below shows how you can do this.

Find Attribute’s Unique Key Value

Now that we have this Custom Attribute created we will want to use it in our solution. If you’re eagle-eyed you may find in the Portal that these Custom Attributes appear to be named ‘extension_AttributeName’ (e.g. ‘extension_TierRating’).

This won’t work in your solution though 🙂

When you create a Custom Attribute this is actually being done for you by a custom application called the “b2c-extensions-app” that is deployed to all B2C tenants at provisioning time.

Why am I telling you this? Because it’s the key to determining the Custom Attribute’s unique key value 🙂

You will need the Application ID for the b2c-extensions-app, which you can find in the Portal as shown in the video below.

Using it in your code

Now that we have this value (in our demo video the value is ‘bb10b272-0267-46f0-8b6f-4367e8b1b1e6’) we can start to interact with Custom Attributes in our code.

Firstly we need to drop the dashes so it becomes ‘bb10b272026746f08b6f4367e8b1b1e6’. We combine this with the “Name” value for the Attribute, along with a prefix of “extension_”.

So for our tier rating Custom Attribute the full key for it becomes ‘extension_bb10b272026746f08b6f4367e8b1b1e6_TierRating’.

A sample of how this key is used in our solution is shown below.
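
A minimal C# sketch of building the key – the Application ID shown is the demo value from the video, so substitute the value from your own Tenant:

    // Application ID of the "b2c-extensions-app" in your B2C Tenant (demo value shown)
    const string ExtensionsAppId = "bb10b272-0267-46f0-8b6f-4367e8b1b1e6";

    // Drop the dashes, then combine the "extension_" prefix with the Attribute's Name
    string tierRatingKey = $"extension_{ExtensionsAppId.Replace("-", string.Empty)}_TierRating";

    // tierRatingKey == "extension_bb10b272026746f08b6f4367e8b1b1e6_TierRating"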

This pattern is used for every Custom Attribute you create in this Directory.

So there we have it – the easiest way you can determine the actual unique key for a Custom Attribute!

Happy days 😎


Recommendations on using Terraform to manage Azure resources

If you’ve been working in the cloud infrastructure space for the last few years you can’t have missed the buzz around HashiCorp’s Terraform product. Terraform provides a declarative model for infrastructure provisioning that spans multiple cloud providers as well as on-premises services from the likes of VMware.

I’ve recently had the opportunity to use Terraform to do some Azure infrastructure provisioning so I thought I’d share some recommendations on using Terraform with Azure (as at January 2018). I’ll also preface this post by saying that I have only been provisioning Azure PaaS services (App Service, Cosmos DB, Traffic Manager, Storage and Application Insights) and haven’t used any IaaS components at all.

In the beginning

I needed to provide an easy way to provision around 30 inter-related services that together constitute the hosting environment for a single customer solution. Ideally I wanted a way to make it easy to re-provision these services as required.

I’ve used Azure Resource Manager (ARM) templates heavily in the past, but thought I could get some additional value from Terraform as it provides you with additional capabilities that aren’t present in ARM templates. As an example, right now you can’t provision Azure Storage Containers with ARM, but you can with Terraform.

I began, as I do with these sorts of templates, by incrementally defining resources and building the Terraform definition as I went. I got to the point where I decided to refactor some of the Terraform definitions to modularise the solution to hopefully make it a bit easier to understand and manage going forward.

When I did this refactor I also changed a bunch of resource naming schemes to better match my customer’s preferred standard. The net result of all this change was that I had a substantial amount of updates to be applied in my test lab that I had been incrementally updating as I went.

Now the fun begins

I ran ‘terraform plan’ which generated my execution plan (always make sure to provide an “out” parameter so you know ‘apply’ will match the plan exactly). I then ran ‘apply’ and left it running while I went to lunch.
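
For reference, that pairing of commands looks like this (‘tfplan’ is an arbitrary file name):

    terraform plan -out=tfplan
    terraform apply tfplan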

When I returned about 45 minutes later my ‘terraform apply’ was still running, seemingly stuck on destroying one of the resources.

A quick visual check in the Portal of the Azure Resource Group these resources were in suggested that everything I wanted provisioned had been provisioned successfully.

Given this state of affairs I Ctrl+C’d the job, to which Terraform advised me:

Interrupt received.
Please wait for Terraform to exit or data loss may occur.

So, I gave the job a few more minutes to gracefully exit, at which point I sent another Ctrl+C and the job exited with this heart-warming message:

Two interrupts received. Exiting immediately. Note that data loss may have occurred.

Out of interest I immediately ran ‘terraform plan’ to understand what Terraform thought was provisioned versus what actually was.

The net result? Terraform had no idea that anything was provisioned!

A look at the local state file showed it was effectively empty. I restored the backup state file which it turned out was actually of minimal use because the delta between the backup and what I had just applied was too great – the resulting plan looked like an Azure resource massacre about to happen!

What to do?

I thought at this point that I was using the tooling incorrectly – how could I so easily get into this state? If I was using this to manage a production environment I’d be dead in the water.

Through additional reading and speaking with others, I learned that this is a known long-term pain point with Terraform – lose your local state and you are in a world of pain. At this time, you can’t even easily rebuild this local state without having to write a bunch of Imports, which means you need to know what to import, and you lose tracking of elements like random string generation at the same time.

Recommendations and Observations

Out of this experience I have some recommendations and observations around how I see Terraform (in its current state) fitting into environmental management in Azure:

  • Use Resource / Resource Group locks (delete or read-only) always: this applies even outside of use of Terraform. This will stop you from accidentally changing important resources. While you can include the definition of resource locks in your Terraform definitions I’d recommend you leave them out. If you use a Contributor-level user to do your deployments Terraform will fail when it tries to lock Resource Groups.
  • Make smaller, more frequent changes: this equates to a smaller delta between what’s in your state, and what’s in the plan. This means if you do need to recover state from backup you will have less of a change to deal with.
  • Consider your use of Terraform features like the ‘random string provider’ – you could move these to be input parameters that you can generate outside of Terraform. This means you create a fixed set of inputs, so that even if you lose state you can be assured that creating resources with “random” name components will be consistent with your last successful execution.
  • Use Resource Groups with small sets of Resources: fewer resources to deal with in event of a failure.
  • Consider Terraform as an initial provisioning tool for production and a re-provisioning tool for all dev / test and low complexity environments.
  • Use Terraform to detect drift: if you deploy an environment with Terraform, then setup the same definition as a CI build that simply runs ‘terraform plan’ against the deployed environment, using the state you generated on initial deployment as an input. If you have any change (add / delete) as the result of the ‘plan’ then you can fail the build and alert your team to investigate accordingly.
  • Consider for Blue / Green Infrastructure deployments for production only: if you want to push completely fresh infrastructure each time then Terraform is a good tool to consider. The usability of this approach is determined by complexity of your environment and the mix of utility / non-utility services you are deploying. This can work well with a slower cadence of release (monthly or above), even if your environment is fairly complex.
  • Use Azure Storage account backing for your state file (key for Terraform Open Source users). You can do this by setting up an Azure Storage Account and then defining the following in each of your TF files:
    terraform {
      backend "azurerm" {
        storage_account_name = "myterraformstore"
        container_name       = "tfstate"
      }
    }
    

    and then when you execute the init step you provide the additional parameters:

    terraform init -backend-config="access_key=<STORAGE_ACCOUNT_KEY>" -backend-config="key=name.ofyour.tfstate"
    

    The shame here right now is that you don’t get the versioning that those using AWS S3 buckets have access to.

  • Always write an ‘Import’ script once you’ve provisioned key environmental components you can’t afford to lose – a sketch of what this can look like follows below.
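
Such an import script is really just a series of ‘terraform import’ calls mapping existing Azure resources back to your definitions. A minimal sketch with placeholder names and IDs:

    # Re-associate an existing Resource Group with its Terraform definition
    terraform import azurerm_resource_group.customer_rg \
        /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/customer-rg

    # Repeat for each resource you can't afford to lose, e.g. a Storage Account
    terraform import azurerm_storage_account.customer_store \
        /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/customer-rg/providers/Microsoft.Storage/storageAccounts/customerstore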

As a side note I notice that there is now an Azure Go SDK dependency for the Terraform Azure provider which is being maintained by Microsoft. I do wonder if this means that Terraform loses some of its appeal because new Azure features for Terraform will invariably be tied to the cadence and capabilities of the Go SDK which is generated against the official Microsoft Azure API. Will this become the way to block provider features that violate the Azure API definition? I guess time will tell.

As with all tools, Terraform has its strengths and weaknesses – hopefully as the product continues to mature we’ll see key features like re-build / import become part of the core value proposition (and not simply appear in the Enterprise version as a paid value add).


Use Azure Health to track active incidents in your Subscriptions

Yesterday afternoon while doing some work I ran into an issue in Azure. Initially I thought this issue was due to a bug in my (new) code and went to my usual debugging helper Application Insights to review what was going on.

The below graphs are a little old, but you can see a clear spike on the left of each graph, which is where we started seeing issues and which gave me a clue that something was not right!

App Insights views

Initially I thought this was a compute issue as the graphs are for a VM-hosted REST API (left) and a Functions-based backend (right).

At this point there was no service status indicating an issue so I dug a little deeper and reviewed the detailed Exception information from Application Insights and realised that the source of the problem was the underlying Service Bus and Event Hub features that we use to glue together our services.

You can see the increased error rate from the Service Bus Metrics view below.

Service Bus Metrics

While I was doing this an alert popped up in the Portal advising of a service incident, directing me to the Azure Service Health feature in order to view the full incident details and also to track it.

On the Azure Health page I could see an active incident and decided to try out the alerting feature to track this during a commute home.

I clicked on the Add Alert option and configured a new email-based alert. You can also push alerts into your preferred IT Service Management (ITSM) solution; we aren’t yet using an ITSM platform for this solution, but that would be our choice in future!

In Services I chose Service Bus and Event Hubs, and for Regions I selected the two Australian Regions. Note that I had to set up an Action Group as I hadn’t used the feature previously – in the screenshot below I am just reusing the one I previously set up.

Alerts Setup

A short while after saving the Alert configuration the recipients in the Action Group started to receive update emails containing the most recent status of the incident. A sample is shown below.

Notice Email

About 45 minutes after this alert we received a resolution notification.

The amount of time this easy setup saves our team is pretty amazing, and if you’re not using this feature already you should go and explore it in the Portal and set it up for your key Azure components.

What a great early Christmas present!

😎


Azure API Management: 200 OK response but no backend traffic

I’m noting this post down in the “if only someone had already made a big noise about this I might have saved some time” category.

The work I’m doing at present involves fronting some APIs with Azure API Management and then exposing them securely.

Just when I thought I was done today I was doing some final testing, and no matter what I did I couldn’t get my backend service to respond – I could clearly see no traffic hitting the backend.

After double-checking my policies and doing a few more tests (only a couple of hours!) I happened across this Stack Overflow question and its answer.

It turns out that I had, somewhere along the line, removed the “forward-request” policy from the Policy applying to all APIs published via API Management.

So how to fix? As Darrell says, find the offending Policy and add the missing item back.
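
For reference, the ‘forward-request’ policy lives in the backend section of the policy document, so after the fix the global policy should contain something like this minimal sketch:

    <policies>
        <inbound />
        <backend>
            <forward-request />
        </backend>
        <outbound />
    </policies>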

Edit Policy

When done it should look like the image below.

API Policy

… and your API calls will now work as expected and not just give you back 200 OK! 😎


Continuous Deployment for Docker with VSTS and Azure Container Registry

I’ve been watching with interest the growing maturity of Containers, and in particular their increasing penetration as a hosting and deployment artefact in Azure. While I’ve long believed them to be the next logical step for many developers, until recently they have had limited appeal to everyday developers as the tooling hasn’t been there, particularly in the Microsoft ecosystem.

Starting with Visual Studio 2015, and with the support of Docker for Windows, I started to see this stack as viable for many.

In my current engagement we are starting on new features and decided that we’d look to ASP.Net Core 2.0 to deliver our REST services and host them in Docker containers running in Azure’s Web App for Containers offering. We’re heavy users of Visual Studio Team Services and, given Microsoft’s focus on Docker, we didn’t see that there would be any blockers.

Our flow at high level is shown below.

Build Pipeline

1. Developer with Visual Studio 2017 and Docker for Windows for local dev/test
2. Checked into VSTS and built using VSTS Build
3. Container Image stored in Azure Container Registry (ACR)
4. Continuously deployed to Web Apps for Containers.

We hit a few sharp edges along the way, so I thought I’d cover off how we worked around them.

Pre-requisites

There are a few things you need to have in place before you can start to use the process covered in this blog. Rather than reproduce them here in detail, go and have a read of the following items and come back when you’re done.

  • Setting up a Service Principal to allow your VSTS environment to have access to your Azure Subscription(s), as documented by Donovan Brown.
  • Create an Azure Container Registry (ACR), from the official Azure Documentation. Hint here: don’t use the “Classic” option as it does not support Webhooks, which are required for Continuous Deployment from ACR. (A CLI sketch for this step follows below.)
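
If you prefer the CLI for the registry step, a quick sketch with placeholder names (Basic is one of the managed, Webhook-capable SKUs):

    # Create a managed Azure Container Registry
    az acr create --name myregistry --resource-group my-rg --sku Basic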

See you back here soon 🙂

Setting up your Visual Studio project

Before I dive into this, one cool item to note, is that you can add Docker support to existing Visual Studio projects, so if you’re interested in trying this out you can take a look at how you can add support to your current solution (note that it doesn’t magically support all project types… so if you’ve got that cool XAML or WinForms project… you’re out of luck for now).

Let’s get started!

In Visual Studio do a File > New > Project. As mentioned above, we’re building an ASP.Net Core REST API, so I went ahead and selected .Net Core and ASP.Net Core Web Application.

New Project - .Net Core

Once you’ve done this you get a selection of templates you can choose from – we selected Web API and ensured that we left Docker support on, and that it was on Linux (just saying that almost makes my head explode with how cool it is 😉 )

Web API with Docker support

At this stage we now have a baseline REST API with Docker support already available. You can run and debug locally via IIS Express or via Docker – give it a try :).

If you’ve not used this template before you might notice that there is an additional project in the solution that contains a series of Docker-related YAML files – for our purposes we aren’t going to touch these, but we do need to modify a couple of files included in our ASP.Net Core solution.

If we try to run a Docker build on VSTS using the supplied Dockerfile it will fail with an error similar to:

COPY failed: stat /var/lib/docker/tmp/docker-builder613328056/obj/Docker/publish: no such file or directory
/usr/bin/docker failed with return code: 1

Let’s fix this.

Add a new file to the project and name it “Dockerfile.CI” (or something similar) – it will appear as a sub-item of the existing Dockerfile. In this new file add the following, ensuring you update the ENTRYPOINT to point at your DLL.
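
A sketch of what this file can contain, following the two-stage pattern from Docker’s documentation for ASP.Net Core 2.0 (‘YourApi.dll’ is a placeholder – point the ENTRYPOINT at your project’s output DLL):

    # Build stage - restore and publish using an image containing the .Net Core SDK
    FROM microsoft/aspnetcore-build:2.0 AS build-env
    WORKDIR /app
    COPY . ./
    RUN dotnet restore
    RUN dotnet publish -c Release -o out

    # Runtime stage - copy the published output into the lightweight runtime image
    FROM microsoft/aspnetcore:2.0
    WORKDIR /app
    COPY --from=build-env /app/out .
    ENTRYPOINT ["dotnet", "YourApi.dll"]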

This Dockerfile is based on a sample from Docker’s official documentation and uses a Docker Container to run the build, before copying the results to the actual final Docker Image that contains your app code and the .Net Core runtime.

We have one more change to make. If we do just the above, the project will fail to build because the default dockerignore file is stopping the copying of pretty much all files to the Container we are using for the build. Let’s fix this one by updating the file to contain the following 🙂
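
One workable sketch is to replace the restrictive default with rules that only keep local build output out of the context:

    **/bin
    **/obj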

Now we have the necessary bits to get this up and running in VSTS.

VSTS build

This stage is pretty easy to get up and running now we have the updated files in our solution.

In VSTS create a new Build and select the Container template (right now it’s in preview, but works well).

Docker Build 01

On the next screen, select the “Hosted Linux” build agent (also now in preview, but works a treat). You need to select this so that you build a Linux-based Image, otherwise you will get a Windows Container which may limit your deployment options.

build container 02

We then need to update the Build Tasks to have the right details for the target ACR, and to build the solution using the “Dockerfile.CI” file we created earlier rather than the default Dockerfile. I also set a fixed name for the Image Name, primarily because the default selected by VSTS tends to be invalid. You could also consider changing the tag from $(Build.BuildId) to $(Build.BuildNumber), which is much easier to directly track in VSTS.

build container 03

Finally, update the Publish Image Task with the same ACR and Image naming scheme.

Running your build should generate an image that is registered in the target ACR as shown below.

ACR

Deploy to Web Apps for Containers

Once the Container Image is registered in ACR, you can theoretically deploy it to any container host (Azure Container Instances, Web Apps for Containers, Azure Container Services), but for this blog we’ll look at Web Apps for Containers.

When you create your new Web App for Containers instance, ensure you select Azure Container Registry as the source and that you select the correct Repository. If you have added the ‘latest’ tag to your built Images you can select that at setup, and later enable Continuous Deployment.

webappscontainers

The result will be that your custom Image is deployed into your Web Apps for Containers instance, which will be available on ports 80 and 443 for the world to use.

Happy days!

I’ve uploaded the sample project I used for this blog to Github – you can find it at: https://github.com/sjwaight/docker-dotnetcore-vsts-demo

Also, please feel free to leave any comments you have, and I am certainly interested in other ways to achieve this outcome as we considered Docker Compose with the YAML files but ran into issues at build time.


Microsoft Open Source Roadshow – Free training on Azure – Canberra and Sydney

Microsoft Open Source Roadshow

I’m excited to have the opportunity to share Azure’s powerful Open Source support with more developers in November.

Our first run of these sessions in August proved popular, so if you, or someone you know, wants to learn more they can sign up below.

We’ll cover language support (application and Azure SDK), OS support (Linux, BSD), datastores (MySQL, PostgreSQL, MongoDB), Continuous Deployment and, last but not least, Containers (Docker, Container Registry, Kubernetes, et al).

We’re starting off in Canberra this time, then Sydney, so if you’re interested here are the links to register:

  • Canberra – Wednesday 1 November 2017: Register
  • Sydney – Friday 10 November 2017: Register

We’re also running two days in New Zealand in November if you know anyone there who might want to come along.

If you have any questions you’d like to see answered at the days feel free to leave a comment.

I hope to see you there!
