Category Archives: ASP.Net

How to update ASP.Net Forms Based Authentication to use Claims Based Authentication

Ah, the heady days of Visual Studio 2005 and the sparkly .Net 2.0 Framework with its newly minted Generics support. Who could forget them? For many, it seems, they are not so much recent history as an ongoing job to feed and maintain. That longevity is due, in part, to the .Net 3.0 and 3.5 updates leveraging the same CLR and BCL as the original .Net 2.0 release.

In this post I am going to do a walk through of how we can take an existing ASP.Net 2.0 WebForms application that’s using Forms Based Authentication (FBA) with Membership and Role Provider support and update it to utilise a more modern Claims Based Authentication approach based on Thinktecture IdentityServer v2.

There are two main reasons why you should be interested in making this transition: (1) to remove authentication logic entirely from your application’s codebase; (2) to allow you to share identity information with other applications to support Single Sign On (SSO).

Setting up Thinktecture IdentityServer v2

The first thing I’d recommend is that you set up a copy of the server we’ll use for Claims Based Authentication – download the most recent version. This is primarily because we can leverage the SSL certificate that is generated as part of the setup to secure our Forms Based Application as well. The good news is that the IdentityServer application is just an ASP.Net web application itself, so we can use IIS to host it for us. I set it up at https://localhost/idsrv/ and you’ll see that URL used throughout this post.

The 2.0 FBA-secured Application

For the purposes of this blog I am going to use an extremely basic WebForms project that has a sub-folder (~/Secured/) that is access controlled.  Note that I didn’t go back and install Visual Studio 2005 – I created a new WebForms project using Visual Studio 2013 and targeted the .Net 2.0 Framework.  You can download this project as a zip from Github.

The membership, role and profile database was setup simply by creating a new database on SQL Server by running this command (at the location denoted):

aspnet_regsql.exe -S {YOUR_SERVER} -E -A mrp -d {YOUR_DATABASE}

Make sure to run it from the right Framework folder (C:\Windows\Microsoft.NET\Framework\v2.0.50727\).

This gives us a set of tables in SQL Server (shown below) that are used to store user authentication, authorisation and profile information.

ASP.Net SQL Tables

The ASP.Net SQL backend is designed to support multiple applications (multi-tenant), so the important thing is to make sure we specify the applicationName attribute (“AuthDemoApp”) when defining our membership and role providers in our web.config, which looks something like this:
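A minimal sketch of that configuration – the connection string name (“AuthDemoDb”) is a placeholder, while the provider types are the stock ASP.Net SqlMembershipProvider and SqlRoleProvider:

<connectionStrings>
  <add name="AuthDemoDb"
       connectionString="Data Source={YOUR_SERVER};Initial Catalog={YOUR_DATABASE};Integrated Security=True" />
</connectionStrings>
<system.web>
  <membership defaultProvider="AuthDemoMembership">
    <providers>
      <!-- applicationName ties these records to our app in the shared database -->
      <add name="AuthDemoMembership"
           type="System.Web.Security.SqlMembershipProvider"
           connectionStringName="AuthDemoDb"
           applicationName="AuthDemoApp" />
    </providers>
  </membership>
  <roleManager enabled="true" defaultProvider="AuthDemoRoles">
    <providers>
      <add name="AuthDemoRoles"
           type="System.Web.Security.SqlRoleProvider"
           connectionStringName="AuthDemoDb"
           applicationName="AuthDemoApp" />
    </providers>
  </roleManager>
</system.web>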

For the purpose of this demo you can load the ‘AddUser.aspx’ page and then use the two buttons to create a new user account, then create a new role and assign it to that user. The results appear in the SQL Server tables as follows.

aspnet_Application

  • ApplicationName: AuthDemoApp
  • LoweredApplicationName: authdemoapp
  • ApplicationId: D0EBB6DF-45F6-40AD-A1EA-AEC9919CDFF4
  • Description: NULL

aspnet_Users (partial)

  • ApplicationId: D0EBB6DF-45F6-40AD-A1EA-AEC9919CDFF4
  • UserId: D747A14C-579C-4F6C-80BE-99414A823EDD
  • UserName: bob@smith.com
  • LoweredUserName: bob@smith.com

aspnet_Membership (partial)

  • ApplicationId: D0EBB6DF-45F6-40AD-A1EA-AEC9919CDFF4
  • UserId: D747A14C-579C-4F6C-80BE-99414A823EDD
  • Password: vdmWH7boQ0lY0zBmUYHWSN7j/q4=
  • Email: bob@smith.com

The important takeaway from the above tables is that our new Application is uniquely identified by the GUID D0EBB6DF-45F6-40AD-A1EA-AEC9919CDFF4, which ties everything else together. Additionally, we can infer that our user (bob@smith.com) could be granted access to other applications, because User D747A14C-579C-4F6C-80BE-99414A823EDD can be associated with any Application registered in future by way of the aspnet_Membership table.

You can log in by loading the web app and then clicking on the “Login” link which takes you to a login form.  Once logged in I am redirected to a secure page that displays which role the current user is in. We are using the standard LoginName and LoginStatus ASP.Net controls in the master page.

Upgrading to Claims Based Authentication

You can download an updated project package from Github as a zip if it helps follow this.

1. Update to the 4.5.1 Framework

Firstly we’re going to open our existing .Net 2.0 Web Application and change the target Framework to the most recent (4.5.1) – do this by right-clicking on the web project and selecting Properties. Then change the Framework as per below. I’m happy to admit that such a big jump will probably break a bunch of your custom code but in this demo we’re just focusing on updating the authentication aspects of your application.

Change Framework Version

2. Include the Claims Assemblies

As we’ve upgraded from .Net 2.0 Forms Based Authentication we’ll need to add some new assembly references to leverage claims properly in our application.

To this end you need to add the following to your web application:

  • System.IdentityModel
  • System.IdentityModel.Selectors
  • System.IdentityModel.Services

In addition to the above core assemblies, which you should find on your development machine already, you’ll need the System.IdentityModel.Tokens.ValidatingIssuerNameRegistry assembly, which you get by installing the “Microsoft Token Validation Extensions for Microsoft .Net Framework 4.5” NuGet package.
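If you’re using the Package Manager Console, that’s a one-liner (assuming the package ID is ValidatingIssuerNameRegistry, which is what this extension is published under on nuget.org):

PM> Install-Package ValidatingIssuerNameRegistry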

Note that prior to .Net 4.5 you had to leverage Windows Identity Foundation (WIF) to integrate claims authentication with your application – with 4.5 it’s now baked into the core framework though you still need to add references and install the nuget package above.

3. Update web.config

Rather than detail the changes to the web.config one-by-one, I’m going to link to a Gist that shows you the updated config file (based on the 2.0 one above). You’ll notice that the majority of the changes add IdentityModel configuration to establish trust with our Secure Token Service (STS). The one other scary item you’ll note is that we set the authentication mode to “None”!!!
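To give you a feel for the shape of those changes before you open the Gist, here’s a cut-down sketch – the realm, audience URI and certificate thumbprint are placeholders, and the STS endpoint assumes the default IdentityServer v2 WS-Federation path (~/issue/wsfed):

<system.web>
  <authentication mode="None" />
  <httpRuntime targetFramework="4.5" />
</system.web>
<system.identityModel>
  <identityConfiguration>
    <audienceUris>
      <add value="https://localhost/AuthDemoApp/" />
    </audienceUris>
    <!-- Validates that tokens really came from our trusted issuer -->
    <issuerNameRegistry type="System.IdentityModel.Tokens.ValidatingIssuerNameRegistry, System.IdentityModel.Tokens.ValidatingIssuerNameRegistry">
      <authority name="http://identityserver.v2.thinktecture.com/trust/claimsdemo">
        <keys>
          <add thumbprint="{YOUR_STS_CERT_THUMBPRINT}" />
        </keys>
        <validIssuers>
          <add name="http://identityserver.v2.thinktecture.com/trust/claimsdemo" />
        </validIssuers>
      </authority>
    </issuerNameRegistry>
  </identityConfiguration>
</system.identityModel>
<system.identityModel.services>
  <federationConfiguration>
    <cookieHandler requireSsl="true" />
    <!-- passiveRedirectEnabled sends unauthenticated users to the STS login page -->
    <wsFederation passiveRedirectEnabled="true"
                  issuer="https://localhost/idsrv/issue/wsfed"
                  realm="https://localhost/AuthDemoApp/"
                  requireHttps="true" />
  </federationConfiguration>
</system.identityModel.services>

(Both system.identityModel sections also need declaring under configSections, and the WSFederationAuthenticationModule and SessionAuthenticationModule registering under system.webServer/modules – the Gist has the full detail.)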

At this stage if you fire up the web application (you’ll need to do it over HTTPS) you’ll find if you try and browse the ~/Secured folder that you’ll be directed to the ThinkTecture IdentityServer login page.

Important note: If you don’t set httpRuntime to support .Net 4.5 (line 17 of the Gist above) you’ll get a YSOD on login with a request validation failure due to the way WS-Fed passes the necessary authentication information to your application.

4. Setup ThinkTecture IdentityServer Databases

Whew! If you’re still with me you’re going strong! Now that we have our web application ready for claims, let’s get our STS in working order as well (don’t worry if you’ve previously set it up – we can get it working as we need pretty easily).

As a first step make a backup of your existing ASP.Net SQL database for safety :).

The Thinktecture IdentityServer will utilise an existing Membership database if it can find one, and will automatically create its configuration database schema if one isn’t found on the target SQL Server. Let’s create an empty database called ‘IdentityServerConfiguration’ into which the IdentityServer can create its own schema.

Open up the location on disk that you installed the IdentityServer in and then open the ~/Configuration/connectionStrings.config file and:

  1. Point both databases at your SQL Server instance.
  2. Set the ‘ProviderDB’ connection string to utilise the same ASP.Net membership and role database as your existing web application.
  3. Set the ‘IdentityServerConfiguration’ connection string to point at the empty database we just created.

Your file should look like this:
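Something along these lines – the server and database names use the same placeholders as earlier:

<connectionStrings>
  <!-- Your existing ASP.Net membership/role database -->
  <add name="ProviderDB"
       connectionString="Data Source={YOUR_SERVER};Initial Catalog={YOUR_DATABASE};Integrated Security=True"
       providerName="System.Data.SqlClient" />
  <!-- The empty database IdentityServer will create its schema in -->
  <add name="IdentityServerConfiguration"
       connectionString="Data Source={YOUR_SERVER};Initial Catalog=IdentityServerConfiguration;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>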

Now when you visit the IdentityServer at https://localhost/idsrv/ you’ll be presented with the initial configuration screen. Go ahead and change the values as you wish – the important one in our case is the “Issuer URI”, which is used by the relying parties we set up to use this STS (hint: our web application already has this value in its web.config – http://identityserver.v2.thinktecture.com/trust/claimsdemo).

Also make sure you setup a default admin account for your STS! Your page will look like this:

thinktecture initial setup

5. Make Your Users Claims Users

Now that we’ve done all of the above the next bit is the trick to all of this :).

If you look at the aspnet_Applications table in your database you will find a new one listed that has an ApplicationName of “/” – this is your STS and is the key to this step.

Applications List

You have two choices at this point – either run some SQL to update all existing user and membership entries to map them to the STS ApplicationId, or create duplicate entries within the necessary tables so that the old user records remain unchanged. A sketch of the first approach follows.
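This is a sketch only, assuming the GUID from the tables shown earlier – substitute your own values:

-- Remap the demo application's user and membership records
-- to the STS application ("/").
DECLARE @OldAppId uniqueidentifier = 'D0EBB6DF-45F6-40AD-A1EA-AEC9919CDFF4';
DECLARE @StsAppId uniqueidentifier =
    (SELECT ApplicationId FROM dbo.aspnet_Applications
     WHERE LoweredApplicationName = '/');

UPDATE dbo.aspnet_Users SET ApplicationId = @StsAppId WHERE ApplicationId = @OldAppId;
UPDATE dbo.aspnet_Membership SET ApplicationId = @StsAppId WHERE ApplicationId = @OldAppId;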

Once you’ve done this you should be able to see that the https://localhost/idsrv/Admin/User page displays all the users that previously belonged to your forms-based application.

The final piece of the user puzzle is to add all your existing users to the “IdentityServerUsers” role. This can be achieved by writing SQL that simply adds entries to the aspnet_UsersInRoles table, mapping the appropriate RoleId to each of the UserIds you imported – something like the sketch below.
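A sketch of what that SQL might look like (the role and application lookups assume the names used in this post):

-- Add every user now mapped to the STS into the IdentityServerUsers role.
DECLARE @StsAppId uniqueidentifier =
    (SELECT ApplicationId FROM dbo.aspnet_Applications
     WHERE LoweredApplicationName = '/');
DECLARE @RoleId uniqueidentifier =
    (SELECT RoleId FROM dbo.aspnet_Roles
     WHERE ApplicationId = @StsAppId AND LoweredRoleName = 'identityserverusers');

INSERT INTO dbo.aspnet_UsersInRoles (UserId, RoleId)
SELECT u.UserId, @RoleId
FROM dbo.aspnet_Users u
WHERE u.ApplicationId = @StsAppId
  AND NOT EXISTS (SELECT 1 FROM dbo.aspnet_UsersInRoles uir
                  WHERE uir.UserId = u.UserId AND uir.RoleId = @RoleId);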

6. Define your Web Application as a Relying Party

Open up the https://localhost/idsrv/Admin/RP page on the IdentityServer as the admin user and define a new relying party (your web app) – details are shown below.

Relying Party Setup

Are we there yet?

Well, yes, we are. *Almost*.

If you’re leveraging the in-built role provider in your web application you will find that it no longer works. The easiest way to fix it is to shift the roles to your STS, after which everything will start working as expected. As your role database is already available to the STS, simply go through the same exercise of updating each Role in aspnet_Roles to be assigned to the IdentityServer ApplicationId – as sketched below.
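Reusing the variables from the earlier sketch, that could be as simple as:

UPDATE dbo.aspnet_Roles SET ApplicationId = @StsAppId WHERE ApplicationId = @OldAppId;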

… and, finally, there’s a little sign-in / sign-out magic you’ll need – if you take a peek at the Main.Master page you’ll see the changes that make the button work (you could easily wrap this in your own control to avoid putting code in the master page :)).
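For a feel of what that magic involves, here’s a sketch only – control and page names are hypothetical, and the WIF types live in System.IdentityModel.Services:

using System;
using System.IdentityModel.Services;

public partial class Main : System.Web.UI.MasterPage
{
    protected void SignInOut_Click(object sender, EventArgs e)
    {
        if (Request.IsAuthenticated)
        {
            // Drop the local session cookie and ask the STS to sign us out too.
            var fam = FederatedAuthentication.WSFederationAuthenticationModule;
            WSFederationAuthenticationModule.FederatedSignOut(
                new Uri(fam.Issuer), new Uri(fam.Realm));
        }
        else
        {
            // Requesting any secured resource triggers the passive
            // WS-Fed redirect to the IdentityServer login page.
            Response.Redirect("~/Secured/Default.aspx");
        }
    }
}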

So, there we are, I hope you find this useful and that you start your journey to moving your web applications to be claims aware.

HTH.


Save Bytes, Your Sanity and Money

In this day of elastic, on-demand compute resource it can be easy to lose focus on how best to leverage a smaller footprint when it’s so easy to add capacity. Having spent many a year working on the web it’s interesting to see how development frameworks and web infrastructure have matured to better support developers in delivering scalable solutions for not much effort. Still, it goes without saying that older applications don’t easily benefit from more modern tooling, and even newer solutions sometimes fail to leverage these tools because the solution architects and developers just don’t know about them. In this blog post I’ll try to cover off some of them and provide background as to why they’re important.

Peak hour traffic

We’ve all driven on roads during peak hour – what a nightmare! A short trip can take substantially longer when the traffic is heavy. Processes like paying for tolls or going through traffic lights suddenly start to take exponentially longer which has a knock-on effect to each individual joining the road (and so on). I’m pretty sure you can see the analogy here with the peaks in demand that websites often have, but, unlike on the road the web has this problem two-fold because your request generates a response that has to return to your client (and suffer a similar fate).

At a very high level the keys to better performance on the web are:

  • ensure your web infrastructure takes the least amount of time to handle a request
  • make sure your responses are streamlined to be as small as possible
  • avoid forcing clients to make multiple round-trips to your infrastructure.

All requests (and responses) are not equal

This is subtle and not immediately obvious if you haven’t seen how hosts with different latencies can affect your website. You may have built a very capable platform to service a high volume of requests but you may not have considered the time it takes for those requests to be serviced.

What do I mean?

A practical example is probably best and is something you can visualise yourself using your favourite web browser. In Internet Explorer or Chrome open the developer tools by hitting F12 on your keyboard (in IE make sure to hit “Start Capturing” too) – if you’re using Firefox, Safari, et al… I’m sure you can figure it out ;-). Once open visit a website you know well and watch the list of resources that are loaded. Here I’m hitting Google’s Australia homepage.

I’m on a very low latency cable connection so I have a response in the milliseconds.

Network view in Internet Explorer

Network view in Google Chrome

This means that despite the Google homepage sending me almost 100 KB of data it serviced my entire request in under half a second (I also got some pre-cached goodness thrown in, which made the response quicker). The real question beyond this is: what is that time actually made up of? Let Chrome explain:

Request detail from Google Chrome

My client (browser) spent 5ms setting up the connection, 1ms sending my request (GET http://www.google.com.au/), 197ms waiting for Google to respond at all, and then 40ms receiving the response. If this was a secure connection there would be more setup as my client and the server do all the necessary encryption / decryption to secure the message transport.

As you can imagine, if I was on a high latency connection each one of these values could be substantially higher. The net result on Google’s infrastructure would be:

  • It takes longer to receive the full request from my client after connection initialisation
  • It takes longer to stream the full response from their infrastructure to my client.

Both of these mean my slower connection would tie up Google’s server resources for longer, stopping those resources from servicing other requests.

As you can see, this effectively limits the infrastructure to run at a lower capacity than it really could, and it also demonstrates why load testing requires test agents running at different latencies so you can realistically gauge what your capacity is.

Some things you can do

Given you have no control over how or where the requests will come from, there are a few things you can do to help reduce the impact that high latency clients will have on your site.

  1. Reduce the number of requests or round trips: often overlooked, but becoming increasingly easy to achieve. The ways you can achieve a reduction in requests include:
    1. Use a CDN for resources: Microsoft and Google both host jQuery (and various jQuery plugins) on their CDNs. You can leverage these today with minimal effort. Avoid issues with SSL requests by mapping the CDN using a src attribute similar to “//ajax.googleapis.com/ajax/libs/jquery/1.6.1/jquery.min.js” (without the http: prefix). Beyond jQuery push static images, CSS and other assets to utilise a CDN (regardless of provider) – cost should be no big issue for most scenarios.
    2. Bundle scripts: most modern sites make heavy use of JavaScript, and depending on how you build your site you may have many separate source files. True, they may only be a few KB each, but each request a client makes goes through a process similar to the above. Bundling refers to combining multiple JavaScript files into a single download. Bundling is natively supported in ASP.Net 4.5 and is available in earlier versions through third-party tooling for either runtime or at-build bundling (see the sketch after this list). Other platforms and technologies offer similar features.
    3. Use CSS Sprites: many moons ago each individual image referenced in CSS would be requested as an individual asset from your server. While you can still do this, the obvious net effect is the need to request multiple assets from the server. CSS sprites combine multiple images into one image and then utilise offsets in CSS to show the right section of the sprite. The upside is that client-side caching means any image reference in that sprite will be serviced very quickly.
    4. Consider inline content: there I said it. Maybe include small snippets of CSS or JavaScript in the page itself. If it’s the only place it’s used why push it to another file and generate a second request for this page? Feeling brave? You could leverage the Data URI scheme for image or other binary data and have that inline too.
  2. Reduce the size of the resources you are serving using these approaches:
    1. Minification: make sure you minify your CSS and JavaScript. Most modern web frameworks will support this natively or via third-party tooling. It’s surprising how many people overlook this step and on top of that also don’t utilise the minified version of jQuery!
    2. Compress imagery: yes, yes, sounds like the pre-2000 web. Know what? It hasn’t changed. This does become increasingly difficult when you have user generated content (UGC) but even there you can provide server-side compression and resizing to avoid serving multi-MB pages!
    3. Use GZIP compression: there is a trade-off here – under load can your server cope with the compression demands? Does the web server you’re using support GZIP of dynamic content? This change, while typically an easy one (it’s on or off on the server) requires testing to ensure other parts of your infrastructure will support it properly.
  3. Ensure you service requests as quickly as possible – this is typically where most web developers have experience and where a lot of time is spent tuning resources such as databases and SANs to ensure that calls are as responsive as possible. This is a big topic all on its own so I’m not going to dive into it here!
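As an example of the bundling mentioned in point 1.2 above, here’s a minimal sketch using the System.Web.Optimization support that comes with ASP.Net 4.5 (via the Microsoft.AspNet.Web.Optimization NuGet package) – the file names and bundle paths are hypothetical:

using System.Web.Optimization;

public static class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // Combine (and minify) several script files into a single download.
        bundles.Add(new ScriptBundle("~/bundles/site").Include(
            "~/Scripts/jquery-1.6.1.js",
            "~/Scripts/site.js"));

        // CSS can be bundled and minified the same way.
        bundles.Add(new StyleBundle("~/Content/css").Include(
            "~/Content/site.css"));

        // Force bundling/minification even when debug="true".
        BundleTable.EnableOptimizations = true;
    }
}

Call BundleConfig.RegisterBundles(BundleTable.Bundles) from Application_Start and reference the bundle in your pages with Scripts.Render("~/bundles/site").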

If you’re a bit lost as to where to start it can pay to use tools like YSlow from Yahoo! or PageSpeed from Google – these will give you clear guidance on areas to start working on. From there it’s a matter of determining whether you need to make code or infrastructure changes (or both) to create a site that can scale to more traffic without necessarily needing more compute power.

Hope you’ve found this useful – if you have any tips, suggestions or corrections feel free to leave them in the comments below.


Easy Testing Of Your Web.Config Transformations

One of the powerful features of ASP.Net 4.0 was the introduction of web.config transformations, which meant you could now do out-of-the-box what you would previously have done with some form of custom XSLT transform in your build process. One thing that is not so easy is testing the outputs of those transformations.
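If you haven’t seen one before, a transform is just a small XML overlay – a hypothetical Web.Release.config that swaps a connection string might look like this:

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Replace the attributes of the entry whose name matches "MainDB" -->
    <add name="MainDB"
         connectionString="Data Source=prod-sql;Initial Catalog=MyApp;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>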

One option is the simple online web.config tester from the guys over at AppHarbor.  While that’s great, personally I don’t want to round-trip my web.config files over the Net just to test something I should be able to do locally. After some playing I found a way to test locally utilising msbuild with the right parameters.

The one proviso to this simple test working is that you have successfully compiled the code for your web application (either via the msbuild command-line or inside Visual Studio).  The test will fail if the binaries or other inputs for your package aren’t available.

All you need to do is issue this command line:


MSBuild.exe YourProject.csproj /T:Package /P:DeployOnBuild=True;Configuration=YourConfiguration

You will now find in the ‘obj’ folder of the project you targeted a set of folders – if you dig through them you will find a “TransformWebConfig” sub-folder that contains the output of your transform.

Happy Days!

Updated!

New in Visual Studio 2012 is the ability to “Preview Transform” on configuration files that utilise the above technique.  Open your solution in Visual Studio, expand the transformation node of your config file, select the transform to review and choose “Preview Transform” from the menu.  Grab a look at screenshots either at Hanselman’s blog or here.


Using Amazon SES for .Net Application Mail Delivery

Until March 2012 Amazon’s Simple Email Service (SES) had limited support for mail being sent via existing .Net code and the IIS SMTP virtual server.  Some recent changes mean this is now possible so in this post I’ll quickly cover how you can configure your existing apps to utilise SES.

If you don’t understand why you should be using SES for your applications then have a look at the Amazon SES FAQ. Before you start any of this configuration you need to ensure that you have created your SMTP credentials in the AWS console and that you have an appropriately validated sender address (or addresses).  Amazon is really strict here as they don’t want to get blacklisted as a spammer host.

IIS Virtual SMTP Server

Firstly, let’s look at how we can set up the SMTP server as a smart host that forwards mail on to SES for you.  This approach means you can configure all your applications to relay via IIS rather than talking directly to the SES SMTP interface.

1. Open up the IIS 6 Manager and select the SMTP Virtual Server you want to configure.

IIS Virtual SMTP Server.

2. Right-click on the server and select Properties.

3. In the Properties Window click on the Delivery tab.

4. On the Delivery tab click on the Outbound Security button on the bottom right.

5. In the Outbound Security dialog select “Basic Authentication” and enter your AWS SES Credentials.  Make sure you check the “TLS Encryption” box at the bottom left of the dialog.  Click OK. Your screen should look similar to this:

Delivery Setup.

6. Now open the Advanced Delivery dialog by clicking on the button.

7. Modify the dialog so it looks similar to the below.  I put the internal DNS name of my host here – the problem with this is that if you shut down your Instance the name will change and you’ll need to update this.  Click OK.

Advanced Delivery.

Now you should be ready to use this IIS SMTP Virtual Server as a relay for your applications to SES.  Make sure you set your AWS Security Groups up correctly and that you restrict which hosts can relay via your SMTP server.

Directly from .Net Code

Without utilising the Amazon AWS SDK for .Net you can also continue to send mail the way you always have – you will need to make the following change to your App.config or Web.config file.

<mailSettings>
      <!-- The from address must be a sender you have validated with SES. -->
      <smtp deliveryMethod="Network" from="validated@yourdomain.com">
          <!-- userName/password are your SES SMTP credentials,
               not your AWS access keys. -->
          <network defaultCredentials="false"
                   enableSsl="true"
                   host="email-smtp.us-east-1.amazonaws.com"
                   port="25"
                   userName="xxxxxxxx"
                   password="xxxxxxxx" />
      </smtp>
</mailSettings>

Thanks to the author of the March 11, 2012 post on this thread on the AWS support forums for the configuration file edits above.

With these two changes most “legacy” .Net applications should be able to utilise Amazon’s SES service for mail delivery.  Happy e-mailing!


Getting Web Deploy Working For Non-Admin Logins

There’s a lot of good information around online about how to get Web Deploy (a.k.a. msdeploy) working.  What most of the information tends not to cover is how to get it functioning for non-admin users.

In this post I’m going to cover the steps to go through to get a non-Admin Windows login working for deployments.

The Foundation

First of all, let’s get the basics out of the way.  This is the environment these instructions are applicable to:

  1. Windows Server 2008 R2 (with SP1).
  2. Web Role (IIS) Installed – make sure you have installed the Management Service (see below).
  3. Windows Firewall on but with an Inbound allow rule for TCP traffic on port 8172.
  4. You have downloaded Web Deploy.

Management Service Installed.

Now we have the main bits ready to go we need to setup Web Deploy.

Install and Configure Web Deploy

When you install Web Deploy you need to make sure all components are available.  Either select ‘Complete’ or ‘Custom’ when prompted for what to install.  You should find that the components to install look like the following.

What items from Web Deploy you need to select.

Once you have finished the installation you can verify the state of your configuration by reviewing your server and you should find:

1. A new local user called WDeployAdmin.

2. Two new services – Web Deployment Agent Service and the Web Management Service.

New Services Installed.

Add Windows Login

We’re going to be using a non-Admin user for our deployments, so let’s go ahead and add a new Standard Windows login (i.e. one that is not an Administrator).

Note: Username and password should be chosen with care – in some deployment scenarios your password (particularly) may cause issues if it has characters that cannot be included in XML without being escaped. A simple rule of thumb is to avoid &, < and >.

Tip: If you have authentication issues test using a simple password that has no special characters.

Configure Management Service

We need to configure the management service to allow remote connections and (in this instance) to only allow Windows credentials (the default).

Open up the IIS Manager on your server and ensure you have Features View on in the right pane.

Look for the Management Group (usually at the bottom) and then within that group select Management Service (see below).

Management Service Highlighted in Blue.

When this view opens you will most likely find the form is disabled – this is because the service is running and the configuration can’t be changed while it is.  If you look at the right pane you will see an option to Stop the service.

Make sure to check the ‘Enable remote connections’ option and to leave the ‘Windows credentials only’ selected (as below).  Now restart the service.

Management Service Configuration

Grant Windows Login IIS Manager Permissions

You can now grant the non-Admin user you created earlier the rights to manage sites on your IIS machine.

In the left pane of the IIS Manager select the site you wish to add your Windows login as a manager for (you will need to repeat for each site).

In the right pane you should see a Management group with two options (Configuration Editor and IIS Manager Permissions).  Open the IIS Manager Permissions view.

In the new view that opens on the right hand pane near the top you should see ‘Allow User…’ – click on it and a popup will appear.

From the popup you can select the Windows user you wish to add – click on the Select button and then search for the user you created previously.  Finally click OK on the two dialogs to return to the initial screen, where you will see a new entry for your user (sample below).

User View once granted access to deploy.

The Missing Link

I can almost guarantee you at this point that if you run the deployment it will fail.  This is something I spent a fair amount of time trying to troubleshoot and so I have this advice for you:

The non-Admin Windows login you granted IIS Manager Permissions to must be able to read / write to the root folder location that the IIS site is deployed to.
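As a sketch (the site path, server and username here are hypothetical – adjust to suit), you can grant those rights from an elevated command prompt:

icacls "C:\inetpub\wwwroot\MySite" /grant "MYSERVER\deployuser:(OI)(CI)M"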

Using this approach I’ve been able to get non-Admin users publishing successfully so the approach should work for you too.

May 2012 – Updated!

One important addition to add to all of the above.

When you set up Web Deploy it will create two local users on the host that have privileges to set up IIS sites and modify configuration files.  The logins are WDeployAdmin and WDeployConfigWriter.

If you find that after a period of time Web Deploy starts giving errors and not deploying, it is most likely due to the passwords for these users expiring and Windows setting the “user must change password at next logon” flag (assuming you left the default password policy in place on your Windows server).  Either set the passwords not to expire, or update them and clear the next-logon flag.


Fixing Packaging Of Web Projects On Your .Net Build Server

On my current project I’m running up the build and deployment environment and hit a roadblock that took me a little while to fathom.  Hopefully reading this might save you some time if you’re having this issue.

The Scenario

1. A build server that does not have Visual Studio installed but has an appropriate .Net SDK that allows you to compile projects successfully.

2. The MSDeploy package on the server – I get mine via the Web Platform Installer (or Web PI for short).

3. A project that you know has valid deployment settings – typically one you can build and package using Visual Studio locally.

4. When building on the server you compile (build) everything OK but the packaging fails silently.

If this sounds like your situation (or similar to it) read on to find a solution.

The Clue

It took me a while to work this out and to be honest the Google Gods were not much use to me (including Scott Guthrie’s blog on all this – see if you can find the follow-on blog on automating packaging he hints at).

I tried a range of things before I came across a post on Stack Overflow that pointed me in the right direction.  It refers to the much-maligned Microsoft.WebApplication.targets file that is installed along with Visual Studio but is gloriously missing when you build a clean server without Visual Studio.  You’ve probably come across that file before, because trying to build Web Application projects without it ends with nasty errors being emitted from MSBuild:

error MSB4019: The imported project “C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets” was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.

So you know how to fix that – you copy across the necessary files from a machine with Visual Studio and recreate that folder structure on your build server.  Done.

The Fix

The Stack Overflow post specifically mentions that understanding how the MSDeploy stuff works basically boils down to reading the contents of the  Microsoft.WebApplication.targets file.

Hang on, what’s that got to do with packaging?

So I cracked open the targets file and sure enough at one point it reads clearly (dodgy grammar and all):

<!-- This will be overwrite by ..\web\Microsoft.Web.Publishing.targets when $(UseWPP_CopyWebApplication) set to true -->

OK, so now I’m a bit surprised… didn’t MSDeploy lay down some MSBuild support for me? Nope.

At this stage I switched to my working development box and sure enough found a ‘Web’ folder sitting at the same level as the ‘WebApplications’ one.

Web Folder Highlighted in Yellow.

I zipped up the contents of this folder, copied it to my build server and placed it in the right location (C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\Web\).

After this change I re-ran my packaging build and found that the expected build steps and output (a zip and some manifest files) were produced.

So, Hollywood ending to this story then!

Hope it saves you some time.


A Subtle .Net Static Reference Type Gotcha

One of the fundamental concepts that any developer has to understand when developing .Net solutions is the difference between Reference and Value types.  There are a lot of technical discussions already online about the differences between the two, but the key concept to pull out of the MSDN description of Reference types is:

“Variables of reference types, referred to as objects, store references to the actual data.”

So, rather than hold the data directly, a Reference type holds a reference to where the data actually is.

/*
Value Type
*/
int someValue = 2;

// someOtherValue holds its own copy of the value 2
int someOtherValue = someValue;

/*
Reference Type
*/
MyCustomClass myClassOne = new MyCustomClass();

// myClassTwo holds a reference to the same object as myClassOne
MyCustomClass myClassTwo = myClassOne;

Too Much Static

The static keyword is one that you will come across and use.  A good description of how it affects classes and members can be found on MSDN.  One of the justifications for using static declarations is performance – “…a static class remains in memory for the lifetime of the application domain in which your program resides”.

Because of the persistent nature of static variables you will see them used as a poor man’s equivalent of the Application store in ASP.Net.  While quick and easy, this approach has the potential to cause issues – especially if you are unaware of the duration of the application domain lifetime and of how you utilise your static variable.  What do I mean by this?

Firstly, think about the application domain for your web application… that’s right, it’s the Application Pool that hosts your web app.

Secondly, think about the users of the web application… that’s right, they may be many different people using the code over many requests – all being served by one long running Application Pool.

So what?  Let us consider the following example.

// ContactList class
public static class ContactList
{
   public static List<FormEmail> Emails { get; private set; }

   public static ContactList()
   {
       Emails = SomeMethodToLoadEmails();
   }
}

Now let’s use this class in an ASP.Net application.

// Prepare Email List For Sending
public void PrepareEmail(FormType type, AdditionalLocation additionalLocation)
{
    FormEmail mailToList = ContactList.Emails.Find(x => x.EmailFormType == type);

    // If this is for NSW we need to add another email
    if(additionalLocation.Location == Location.NSW)
    {
        mailToList.Recipients.Add(additionalLocation.MailOption);
    }
    //...
}

But wait! What happens next time we run this code? Can you spot the problem?

The mailToList variable holds a reference to a FormEmail object that still lives inside the static ContactList.Emails list. When we call ‘Add’ at line 9 in the last code sample we actually add the entry to the Recipients list of that original, shared FormEmail.

The result is that, until the Application Pool is restarted, that shared FormEmail’s recipient list will keep growing with every request – which is not what was intended.

A Corrected Way

There are a couple of ways to resolve this issue.  One would be to remove the static nature of the ContactList class and its members, but that may not be the best way to go.

Depending on the object causing the issue you may be able to leverage an existing Copy() or Clone() method in the .Net Framework (ICloneable, for example).  Note that for custom classes you write you will need to provide your own implementation of the Copy() method.

A simple way to resolve the above is to modify the offending code as follows.

// Prepare Email List For Sending
public void PrepareEmail(FormType type, AdditionalLocation additionalLocation)
{
    FormEmail mailToList = ContactList.Emails.Find(x => x.EmailFormType == type);

    // Create a new local non-static instance and copy values across from the static one.
    FormEmail localList = new FormEmail
                              {
                                  Subject = mailToList.Subject,
                                  Template = mailToList.Template
                              };

    // add all recipients from the static instance to the local one
    localList.Recipients.AddRange(mailToList.Recipients);

    // If this is for NSW we need to add another email
    if(additionalLocation.Location == Location.NSW)
    {
        localList.Recipients.Add(additionalLocation.MailOption);
    }
    //...
}

So there you go, a very subtle issue in using static reference types in your ASP.Net (and WinForms, etc.) projects and how you can go about fixing it.

Hope this saves you some time.

Safely Testing .Net Code That Does Email Delivery

As a .Net developer you will most likely have come across the need to create and send an SMTP (email) message as part of a solution you’ve built.  When under development you will have either stubbed out the mail delivery code or will have substituted a test email address for those of the final recipients in a live environment (did your mailbox get full?!).

This approach is simple and works pretty well during development, but you know one day someone will come to you with a production problem relating to mail delivery to multiple recipients, with each receiving their own copy of the message.  How do you test this without needing multiple test mailboxes and without spamming the real recipients?

A few years back I learnt of a way to test mail delivery with real email addresses that can be performed locally on a development machine with minimal risk (note I didn’t say “no risk”) of email actually being delivered to the intended recipients.  The great thing is you don’t need to understand a lot about networking or be a sysadmin god to get this working.

IIS To The Rescue

Yes, IIS.

In this case you don’t even need any third party software – just the web application server that most ASP.Net developers know (“and love” is probably pushing the relationship a little though I think).

First off, you will need to install the SMTP feature support for IIS on your local machine.  You can get instructions for IIS 7 from TechNet as well as for IIS 6.  If you’re on IIS Express, you’re out of luck – it only supports HTTP and HTTPS.

Once you have the SMTP Feature installed you will need to make one important change – set the SMTP server to use a drop folder.  The IIS SMTP process will simply drop files into the location you’ve selected and there they will sit – each file containing an email message.

To make this change open the IIS Manager and select the main server node.  You should see (screenshot below) an option for SMTP E-mail.

IIS Manager with SMTP Email Option Highlighted.

Double-click the SMTP E-mail option to open the settings dialog. Notice at the bottom the option labelled Store e-mail in pickup directory – you should select this and then select an appropriate location on disk.

SMTP Email Settings Page with Drop Folder Highlighted.

Right, that’s the hard bit done.

Run Teh Codez

Now you have a safe place for your test mail to sit, you need to ensure that your code attempts delivery via the SMTP instance you just configured.  You can most likely achieve this by changing your application’s settings (they’re in an XML config file, right?) so that you use either “localhost” or “127.0.0.1” as the SMTP host name – you won’t need to change the port from the standard port 25.
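For a typical .Net application that’s just a tweak to the mailSettings section – a minimal sketch (the from address is a placeholder):

<system.net>
  <mailSettings>
    <smtp deliveryMethod="Network" from="anyone@example.com">
      <!-- Point at the local IIS SMTP instance configured above. -->
      <network host="localhost" port="25" />
    </smtp>
  </mailSettings>
</system.net>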

Now when you run your code you should find that the mail drop folder you configured is populated with files named with a GUID and a .EML extension – each of these is an individual email awaiting your eager eyes.

Lovely EML Files Ready To View.

The files are plain text so can be opened using Notepad or your favourite equivalent – you can view all the SMTP headers as well as the message body.  For extra goodness you can also open these files in Outlook Express (does anyone even use that any more?!) or Outlook as shown below.

Outlook Goodness – See HTML Email Body Content Easily.

I used this approach just recently to help debug some problems for a customer and I could do it using their real email addresses safe in the knowledge that I would not end up spamming them with junk test emails.

Hope you find this approach useful.

Update

I had a reader pass on a link to the Mailtrap service, which might be more useful if you’re looking to test with a large, diverse team.

