Tag Archives: ASP.Net

Continuous Deployment for Docker with VSTS and Azure Container Registry

I’ve been watching with interest the growing maturity of Containers, and in particular their increasing penetration as a hosting and deployment artefact in Azure. While I’ve long believed them to be the next logical step for many developers, until recently they have had limited appeal to everyday developers because the tooling hasn’t been there, particularly in the Microsoft ecosystem.

With the release of Visual Studio 2015, and with the support of Docker for Windows, I started to see this stack as viable for many developers.

In my current engagement we are starting on new features and decided that we’d look to ASP.Net Core 2.0 to deliver our REST services and host them in Docker containers running in Azure’s Web App for Containers offering. We’re heavy users of Visual Studio Team Services and, given Microsoft’s focus on Docker, we didn’t see that there would be any blockers.

Our flow at high level is shown below.

Build Pipeline

1. Developer with Visual Studio 2017 and Docker for Windows for local dev/test
2. Checked into VSTS and built using VSTS Build
3. Container Image stored in Azure Container Registry (ACR)
4. Continuously deployed to Web Apps for Containers.

We hit a few sharp edges along the way, so I thought I’d cover off how we worked around them.

Pre-requisites

There are a few things you need to have in place before you can start to use the process covered in this blog. Rather than reproduce them here in detail, go and have a read of the following items and come back when you’re done.

  • Setting up a Service Principal to allow your VSTS environment to have access to your Azure Subscription(s), as documented by Donovan Brown (there’s also a quick CLI sketch just after this list).
  • Create an Azure Container Registry (ACR), from the official Azure Documentation. Hint here: don’t use the “Classic” option as it does not support Webhooks which are required for Continuous Deployment from ACR.
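
If you’d rather script the Service Principal creation than click through the portal, the Azure CLI route looks roughly like this (the display name is just a placeholder, and you’ll still need to plug the resulting credentials into a VSTS service endpoint):

az ad sp create-for-rbac --name "vsts-container-demo" --role Contributor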

See you back here soon 🙂

Setting up your Visual Studio project

Before I dive into this, one cool item to note is that you can add Docker support to existing Visual Studio projects, so if you’re interested in trying this out you can take a look at how to add support to your current solution (note that it doesn’t magically support all project types… so if you’ve got that cool XAML or WinForms project… you’re out of luck for now).

Let’s get started!

In Visual Studio do a File > New > Project. As mentioned above, we’re building an ASP.Net Core REST API, so I went ahead and selected .Net Core and ASP.Net Core Web Application.

New Project - .Net Core

Once you’ve done this you get a selection of templates you can choose from – we selected Web API and ensured that we left Docker support on, and that it was on Linux (just saying that almost makes my head explode with how cool it is 😉 )

Web API with Docker support

At this stage we now have a baseline REST API with Docker support already available. You can run and debug locally via IIS Express or via Docker – give it a try :).

If you’ve not used this template before you might notice that there is an additional project in the solution that contains a series of Docker-related YAML files – for our purposes we aren’t going to touch these, but we do need to modify a couple of files included in our ASP.Net Core solution.

If we try to run a Docker build on VSTS using the supplied Dockerfile it will fail with an error similar to:

COPY failed: stat /var/lib/docker/tmp/docker-builder613328056/obj/Docker/publish: no such file or directory
/usr/bin/docker failed with return code: 1

Let’s fix this.

Add a new file to the project and name it “Dockerfile.CI” (or something similar) – it will appear as a sub-item of the existing Dockerfile. In this new file add the following, ensuring you update the ENTRYPOINT to point at your DLL.

This Dockerfile is based on a sample from Docker’s official documentation and uses a Docker Container to run the build, before copying the results to the actual final Docker Image that contains your app code and the .Net Core runtime.
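
A minimal sketch of what such a multi-stage Dockerfile can look like is below – note the base image tags and the SampleApi.dll name are examples only, so substitute your own project details:

# Build stage: restore and publish inside a container that has the full SDK
FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /app

# Restore NuGet packages first so this layer can be cached between builds
COPY *.csproj ./
RUN dotnet restore

# Copy the remaining sources and publish a Release build
COPY . ./
RUN dotnet publish -c Release -o out

# Final image: just the published output on top of the .Net Core runtime
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build-env /app/out .
# Update this to point at your own DLL
ENTRYPOINT ["dotnet", "SampleApi.dll"]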

We have one more change to make. If we do just the above, the project will fail to build because the default .dockerignore file stops pretty much all files being copied into the Container we are using for the build. Let’s fix this one by updating the file to contain the following 🙂
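
As a rough example, an updated .dockerignore that only keeps local build output and tooling folders out of the build context might be as simple as this (adjust to suit your own solution):

# allow sources into the build context; exclude only what the container build doesn't need
bin/
obj/
.vs/
.git/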

Now we have the necessary bits to get this up and running in VSTS.

VSTS build

This stage is pretty easy to get up and running now that we have the updated files in our solution.

In VSTS create a new Build and select the Container template (right now it’s in preview, but works well).

Docker Build 01

On the next screen, select the “Hosted Linux” build agent (also now in preview, but works a treat). You need to select this so that you build a Linux-based Image, otherwise you will get a Windows Container which may limit your deployment options.

build container 02

We then need to update the Build Tasks to have the right details for the target ACR and to build the solution using the “Dockerfile.CI” file we created earlier, rather than the default Dockerfile. I also set a fixed Image Name, primarily because the default selected by VSTS tends to be invalid. You could also consider changing the tag from $(Build.BuildId) to $(Build.BuildNumber), which is much easier to directly track in VSTS.
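
Under the covers the build task ends up running something roughly equivalent to the following docker command (the registry and image names here are placeholders):

docker build -f Dockerfile.CI -t yourregistry.azurecr.io/sampleapi:$(Build.BuildNumber) .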

build container 03

Finally, update the Publish Image Task with the same ACR and Image naming scheme.

Running your build should generate an image that is registered in the target ACR as shown below.

ACR

Deploy to Web Apps for Containers

Once the Container Image is registered in ACR, you can theoretically deploy it to any container host (Azure Container Instances, Web Apps for Containers, Azure Container Services), but for this blog we’ll look at Web Apps for Containers.

When you create your new Web App for Containers instance, ensure you select Azure Container Registry as the source and that you select the correct Repository. If you have added the ‘latest’ tag to your built Images you can select that at setup, and later enable Continuous Deployment.
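
If you’d rather script this step than use the portal, something along these lines with the Azure CLI should give a similar result (all the names are placeholders, you need a Linux App Service Plan, and you’ll still need to supply your ACR credentials to the Web App):

az webapp create --resource-group demo-rg --plan demo-linux-plan --name sample-api-app --deployment-container-image-name yourregistry.azurecr.io/sampleapi:latest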

webappscontainers

The result will be that your custom Image is deployed into your Web Apps for Containers instance and will be available on ports 80 and 443 for the world to use.

Happy days!

I’ve uploaded the sample project I used for this blog to Github – you can find it at: https://github.com/sjwaight/docker-dotnetcore-vsts-demo

Also, please feel free to leave any comments you have, and I am certainly interested in other ways to achieve this outcome – we considered Docker Compose with the YAML files but ran into issues at build time.

Save Bytes, Your Sanity and Money

In this day of elastic, on-demand compute resources it can be easy to lose focus on how best to leverage a smaller footprint when it’s so easy to add capacity. Having spent many a year working on the web it’s interesting to see how development frameworks and web infrastructure have matured to better support developers in delivering scalable solutions for not much effort. Still, it goes without saying that older applications don’t easily benefit from more modern tooling, and even newer solutions sometimes fail to leverage these tools because the solution architects and developers just don’t know about them. In this blog post I’ll try to cover off some of these tools and techniques and provide background as to why they’re important.

Peak hour traffic

We’ve all driven on roads during peak hour – what a nightmare! A short trip can take substantially longer when the traffic is heavy. Processes like paying for tolls or going through traffic lights suddenly start to take exponentially longer, which has a knock-on effect on each individual joining the road (and so on). I’m pretty sure you can see the analogy here with the peaks in demand that websites often have, but, unlike on the road, the web has this problem two-fold because your request generates a response that has to return to your client (and suffer a similar fate).

At a very high level the keys to better performance on the web are:

  • ensure your web infrastructure takes the least amount of time to handle a request
  • make sure your responses are streamlined to be as small as possible
  • avoid forcing clients to make multiple round-trips to your infrastructure.

All requests (and responses) are not equal

This is subtle and not immediately obvious if you haven’t seen how hosts with different latencies can affect your website. You may have built a very capable platform to service a high volume of requests but you may not have considered the time it takes for those requests to be serviced.

What do I mean?

A practical example is probably best and is something you can visualise yourself using your favourite web browser. In Internet Explorer or Chrome open the developer tools by hitting F12 on your keyboard (in IE make sure to hit “Start Capturing” too) – if you’re using Firefox, Safari, et al… I’m sure you can figure it out ;-). Once open visit a website you know well and watch the list of resources that are loaded. Here I’m hitting Google’s Australia homepage.

I’m on a very low latency cable connection so I have a response in the milliseconds.

Network view in Internet Explorer

Network view in Google Chrome

This means that despite the Google homepage sending me almost 100 KB of data, it serviced my entire request in under half a second (I also got some pre-cached goodness thrown in, which makes the response quicker). The real question beyond this is: what is that time actually made up of? Let Chrome explain:

Request detail from Google Chrome

My client (browser) spent 5ms setting up the connection, 1ms sending my request (GET http://www.google.com.au/), 197ms waiting for Google to respond at all, and then 40ms receiving the response. If this was a secure connection there would be more setup as my client and the server do all the necessary encryption / decryption to secure the message transport.

As you can imagine, if I was on a high latency connection each one of these values could be substantially higher. The net result on Google’s infrastructure would be:

  • It takes longer to receive the full request from my client after connection initialisation
  • It takes longer to stream the full response from their infrastructure to my client.

Both of these mean my slower connection would use Google’s server resources for longer, thus stopping those resources from servicing other requests.

As you can see, this effectively limits the infrastructure to running at a lower capacity than it really could. It also demonstrates why load testing should use test agents running at different latencies, so you can realistically gauge what your capacity is.

Some things you can do

Given you have no control over how or where the requests will come from, there are a few things you can do to help reduce the impact that high-latency clients will have on your site.

  1. Reduce the number of requests or round trips: often overlooked, but increasingly easy to achieve. The ways you can achieve a reduction in requests include:
    1. Use a CDN for resources: Microsoft and Google both host jQuery (and various jQuery plugins) on their CDNs. You can leverage these today with minimal effort. Avoid issues with SSL requests by mapping the CDN using a src attribute similar to “//ajax.googleapis.com/ajax/libs/jquery/1.6.1/jquery.min.js” (without the http: prefix). Beyond jQuery push static images, CSS and other assets to utilise a CDN (regardless of provider) – cost should be no big issue for most scenarios.
    2. Bundle scripts: most modern sites make heavy use of JavaScript and, depending on how you build your site, you may have many separate source files. True, they may only be a few KB each, but every request a client makes has to go through a process similar to the above. Bundling refers to combining multiple JavaScript files into a single download. Bundling is natively supported in ASP.Net 4.5 and is available in earlier versions through third-party tooling for either runtime or at-build bundling (see the sketch after this list). Other platforms and technologies offer similar features.
    3. Use CSS Sprites: many moons ago each individual image referenced in CSS would be loaded as an individual asset from your server. While you can still do this, the obvious net effect is the need to request multiple assets from the server. CSS sprites combine multiple images into one image and then use offsets in CSS to show the right section of the sprite. The upside is that client-side caching means any image in that sprite will be served very quickly.
    4. Consider inline content: there I said it. Maybe include small snippets of CSS or JavaScript in the page itself. If it’s the only place it’s used why push it to another file and generate a second request for this page? Feeling brave? You could leverage the Data URI scheme for image or other binary data and have that inline too.
  2. Reduce the size of the resources you are serving using these approaches:
    1. Minification: make sure you minify your CSS and JavaScript. Most modern web frameworks will support this natively or via third-party tooling. It’s surprising how many people overlook this step and on top of that also don’t utilise the minified version of jQuery!
    2. Compress imagery: yes, yes, sounds like the pre-2000 web. Know what? It hasn’t changed. This does become increasingly difficult when you have user generated content (UGC) but even there you can provide server-side compression and resizing to avoid serving multi-MB pages!
    3. Use GZIP compression: there is a trade-off here – under load can your server cope with the compression demands? Does the web server you’re using support GZIP of dynamic content? This change, while typically an easy one (it’s on or off on the server) requires testing to ensure other parts of your infrastructure will support it properly.
  3. Ensure you service requests as quickly as possible – this is typically where most web developers have experience and where a lot of time is spent tuning resources such as databases and SANs to ensure that calls are as responsive as possible. This is a big topic all on its own so I’m not going to dive into it here!
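
To give the bundling point above a little more shape, this is roughly what a bundle registration looks like with ASP.Net 4.5’s System.Web.Optimization – the bundle names and file paths are examples only:

using System.Web.Optimization;

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // Combine (and minify) several script files into a single download
        bundles.Add(new ScriptBundle("~/bundles/site").Include(
            "~/Scripts/jquery-{version}.js",
            "~/Scripts/site.js"));

        // Do the same for stylesheets
        bundles.Add(new StyleBundle("~/Content/css").Include(
            "~/Content/site.css"));
    }
}

Call BundleConfig.RegisterBundles(BundleTable.Bundles) from Application_Start and emit the bundles in your views with Scripts.Render("~/bundles/site") and Styles.Render("~/Content/css") – with debug compilation switched off you’ll get one minified request per bundle instead of many.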

If you’re a bit lost as to where to start it can pay to use tools like YSlow from Yahoo! or PageSpeed from Google – these will give you clear guidance on areas to start working on. From there it’s a matter of determining if you need to make code or infrastructure changes (or both) to create a site that can scale to more traffic without necessarily needing to obtain more compute power.

Hope you’ve found this useful – if you have any tips, suggestions or corrections feel free to leave them in the comments below.

Easy Testing Of Your Web.Config Transformations

One of the powerful features of ASP.Net 4.0 was the introduction of web.config transformations that meant you could now do with ASP.Net out-of-the-box what you would have previously done with some form of custom XSLT transform in your build process. One thing that is not that easy is to test the outputs from the transformations.
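
As a quick refresher, a transform file is just a sparse copy of your web.config decorated with xdt attributes describing the changes to apply for a given build configuration. A trivial, purely illustrative web.Release.config might look like this:

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <!-- swap this setting's value when publishing the Release configuration -->
    <add key="Environment" value="Production"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
  <system.web>
    <!-- strip the debug attribute for Release builds -->
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>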

One option is the simple online web.config tester from the guys over at AppHarbor.  While that’s great, personally I don’t want to round-trip my web.config files over the Net just to test something I should be able to do locally. The result was that after some playing I found a way to test locally utilising msbuild with the right parameters.

The one proviso to this simple test working is that you have successfully compiled the code for your web application (either via the msbuild command line or inside Visual Studio).  This test will fail if the binaries or other inputs for your package aren’t available.

All you need to do is issue this command line:


MSBuild.exe YourProject.csproj /T:Package /P:DeployOnBuild=True;Configuration=YourConfiguration

You will now find in the ‘obj’ folder of the project you targeted a set of folders – if you dig through them you will find a “TransformWebConfig” sub-folder that will contain the output result of your transform.

Happy Days!

Updated!

New in Visual Studio 2012 is the ability to “Preview Transform” on configuration files that utilise the above technique.  Open your solution in Visual Studio, expand the transformation node of your config file, select the transform to review and choose “Preview Transform” from the menu.  Grab a look at screenshots either at Hanselman’s blog or here.

Safely Testing .Net Code That Does Email Delivery

As a .Net developer you will most likely have come across the need to create and send an SMTP (email) message as part of a solution you’ve built.  When under development you will have either stubbed out the mail delivery code or will have substituted a test email address for those of the final recipients in a live environment (did your mailbox get full?!).

This approach is simple and works pretty well under development, but you know one day someone will come to you with a production problem relating to mail delivery to multiple recipients, each receiving their own copy of the message.  How do you test this without needing multiple test mailboxes and without spamming the real recipients?

A few years back I learnt of a way to test mail delivery with real email addresses that can be performed locally on a development machine with minimal risk (note I didn’t say “no risk”) that email will actually be delivered to the intended recipients.  The great thing is you don’t need to understand a lot about networking or be a sysadmin god to get this working.

IIS To The Rescue

Yes, IIS.

In this case you don’t even need any third party software – just the web application server that most ASP.Net developers know (“and love” is probably pushing the relationship a little though I think).

First off, you will need to install the SMTP feature support for IIS on your local machine.  You can get instructions for IIS 7 from TechNet as well as for IIS 6.  If you’re on IIS Express, you’re out of luck – it only supports HTTP and HTTPS.

Once you have the SMTP Feature installed you will need to make one important change – set the SMTP server to use a drop folder.  The IIS SMTP process will simply drop files into a location you’ve selected and there they will sit – each file containing an email message.

To make this change open the IIS Manager and select the main server node.  You should see (screenshot below) an option for SMTP E-mail.

IIS Manager with SMTP Email Option Highlighted.

Double-click the SMTP E-mail option to open the settings dialog. Notice at the bottom the option labelled Store e-mail in pickup directory – you should select this and then select an appropriate location on disk.

SMTP Email Settings Page with Drop Folder Highlighted.

Right, that’s the hard bit done.

Run Teh Codez

Now that you have a safe place for your test mail to sit, you need to ensure that your code is configured to attempt delivery via the SMTP instance you just configured.  You can most likely achieve this by changing your application’s settings (they’re in an XML config file, right?) so that you use either “localhost” or “127.0.0.1” as the SMTP host name – you won’t need to change the port from the standard port 25.
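
If your code uses System.Net.Mail’s SmtpClient and reads its settings from configuration, the relevant section would look something like this sketch (adjust it to however your application sources its SMTP settings):

<system.net>
  <mailSettings>
    <!-- Point delivery at the local IIS SMTP instance configured above -->
    <smtp deliveryMethod="Network" from="test@example.com">
      <network host="localhost" port="25" />
    </smtp>
  </mailSettings>
</system.net>

A parameterless new SmtpClient() will pick these values up automatically, so no code changes are needed to redirect delivery.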

Now when you run your code you should find that the mail delivery folder you set will be populated with a range of files consisting of a GUID with a .EML extension – each of these is an individual email awaiting your eager eyes.

Lovely EML Files Ready To View.

The files are plain text so can be opened using Notepad or your favourite equivalent – you can view all the SMTP headers as well as the message body.  For extra goodness you can also open these files in Outlook Express (does anyone even use that any more?!) or Outlook as shown below.

Outlook Goodness – See HTML Email Body Content Easily.

I used this approach just recently to help debug some problems for a customer and I could do it using their real email addresses safe in the knowledge that I would not end up spamming them with junk test emails.

Hope you find this approach useful.

Update

I had a reader pass on a link to Mailtrap, a service which might be more useful if you’re looking to test with a large, diverse team.

