Tag Archives: AWS

The False Promise of Cloud Auto-scale

Go to the cloud because of its ability to scale elastically.

You’ve read that bit before, right?

If you’ve been involved in anything remotely cloud-related in the last few years you will almost certainly have come across the concept of on-demand or elastic scaling. Azure has it, AWS has it, OpenStack has it.  It’s one of the cornerstone pieces of any public cloud platform.

Interestingly it’s also one of the facets of the cloud that I see often misunderstood by developers and IT Pros alike.

In this post I’ll explore why auto-scale isn’t black magic and why your existing applications can’t always just automagically start using it.

What is “Elastic Scalability”?

Gartner (2009) defines it as follows:

“Scalable and Elastic: The service can scale capacity up or down as the consumer demands at the speed of full automation (which may be seconds for some services and hours for others). Elasticity is a trait of shared pools of resources. Scalability is a feature of the underlying infrastructure and software platforms. Elasticity is associated with not only scale but also an economic model that enables scaling in both directions in an automated fashion. This means that services scale on demand to add or remove resources as needed.”

Source: http://www.gartner.com/newsroom/id/1035013

Sounds pretty neat – based on demand you can utilise platform features to scale out your application automatically.  Notice I didn’t say scale up your application.  Have a read of this Wikipedia article if you need to understand the difference.

On Microsoft Azure, for example, we have some cool sliders and thresholds we can use to determine how we can scale out our deployed solutions.

Azure Auto Scale Screenshot

Scale this service based on CPU or on a schedule.

What’s Not to Understand?

In order to answer this we should examine how we’ve tended to build and operate applications in the on-premise world:

  • More instances of most software means more money for licenses. While you might get some cost relief for warm or cold standby you are going to have to pony up the cash if you want to run more than a single instance of most off-the-shelf software in warm or hot standby mode.
  • Organisations have shied away from multi-instance applications to avoid needing to patch and maintain additional operating systems and virtualisation hosts (how many “mega” web servers are running in your data centre that host many web sites?)
  • On-premise compute resources are finite (relative to the cloud).  Tight control of used resources leads to the outcome in the previous point – consolidation takes place because that hardware your company bought needs to handle the next 3 years of growth.
  • Designing and building an application that can run in a multi-instance configuration can be hard (how many web sites are you running that need “sticky session” support on a load balancer to work properly?)  Designing and building applications that are stateless at some level may be viewed by many as black magic!

The upshot of all the above points is that we have tended toward a “less is more” approach when building or operating solutions on premise.  The simplest mode of hosting the application that meets business availability needs is typically the one that gets chosen. Anything more is a luxury (or a complete pain to operate!)

So, How to Realise the Promise?

In order to fully leverage auto-scale capabilities we need to:

  • Adopt off-the-shelf software that provides a consumption-based licensing model. Thankfully in many cases we are already here – we can run many enterprise operating system, application and database software solutions using a pay-as-you-go (PAYG) scheme.  I can bring my own license if I’ve already paid for one too.  Those vendors who don’t offer this flexibility will eventually be left behind as it will become a competitive advantage for others in their field.
  • Leverage programmable infrastructure via automation and a culture shift to “DevOps” within our organisations.  Automation removes the need for manual completion of many operational tasks thus enabling auto-scale scenarios.  The new collaborative structure of DevOps empowers our operational teams to be more agile and to deliver more innovative solutions than they perhaps have done in the past.
  • Be clever about measuring what our minimum and maximum thresholds are for acceptable user experience.  Be prepared to set those CPU sliders lower or higher than you might otherwise have if you were doing the same thing on-premise.  Does the potential performance benefit of auto-scaling at a lower CPU utilisation level outweigh the marginal cost you pay, given that the platform will scale back as necessary?
  • Start building applications for the cloud.  If you’ve designed and built applications with many stateless components already then you may have little work to do.  If you haven’t then be prepared to deal with the technical debt to fix them (or start over).  Treat as many of your application’s components as you can as cattle and minimise the pets (read a definition that hopefully clears up that analogy).

So there we have a few things we need to think about when trying to realise the potential value of elastic scalability.  The bottom line is your organisation or team will need to invest time before moving to the cloud to truly benefit from auto-scale once there.  You should also be prepared to accept that some of what you build or operate may never be suitable for auto-scale, but that it could easily benefit from manual scale up or out as required (for example at end of month / quarter / year for batch processing).

HTH


Clean Backups Using Windows Server Backup and EBS Snapshots

One of the powerful features of AWS is the ability to snapshot any EBS volume you have operating. The challenge when utilising Windows 2008 R2 is that NTFS doesn’t support point-in-time snapshotting, which can result in an inconsistent EBS snapshot. You can still create a snapshot, but one or more files that were mid-change may not be accurate (or usable) in the resulting snapshot.

How to solve

The answer, it turns out, is fairly easy (if a little convoluted) – Windows Server Backup PLUS EBS snapshots.

In order for this to work you will need:

1. Windows Server Backup installed on your Instance (this does not require a reboot on install).

2. An appropriately sized EBS volume to support your intended backup (see the TechNet article on how to work out your allowance – remember you can always create a new volume later if you get the size wrong!)

3. Windows Task Scheduler (who needs Quartz or cron ;)).

4. Some Powershell Foo.

Volume Shadow Copy using the new EBS Volume

Once you have created your new EBS volume and attached it to the Instance you want to back up, open up Windows Server Backup and select what you want to back up (full, partial, custom – to be honest you shouldn’t need to do a “bare metal” backup).  Note that Windows Server Backup will delete *everything* on the target EBS volume and it will no longer show up in Windows Explorer – it will still be attached and show up in Disk Manager, but you can’t copy files to or from it.

The benefit of Windows Server Backup is that it will utilise the Windows Volume Shadow Copy Service (VSS), resulting in a clean copy of files – even those in-flight at the time of backup.  This gets around the inability to “freeze” NTFS at runtime.

Set your schedule and save the backup.

Snapshot the backup volume

Now you have a nice clean static copy of the filesystem you can go ahead and create an EBS snapshot.  The best way I found to schedule this was to use some PowerShell magic written by the guys over at messors.com.  I slightly modified their Daily Snapshot PowerShell script to snapshot a pre-defined Instance and Volume by calling the CreateSnapshotForInstance function that takes an Instance and Volume identifier.  The great thing with this script setup is it auto-manages the expiry of old snapshots so my S3 usage doesn’t just keep on growing.
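If you’d rather roll your own than use the messors.com script, a minimal sketch of the snapshot call itself might look like the following.  This assumes the AWS SDK for .Net is installed at its default path, the keys and volume identifier are placeholders you replace, and property names may differ slightly between SDK versions.

```powershell
# Sketch only - load the AWS SDK for .Net and snapshot a known EBS volume.
# The DLL path, keys and volume id below are placeholders.
Add-Type -Path "C:\Program Files (x86)\AWS SDK for .NET\bin\AWSSDK.dll"

$ec2 = [Amazon.AWSClientFactory]::CreateAmazonEC2Client("ACCESS_KEY", "SECRET_KEY")

$request = New-Object Amazon.EC2.Model.CreateSnapshotRequest
$request.VolumeId    = "vol-12345678"
$request.Description = "Nightly Windows Server Backup snapshot"

# Returns the new snapshot's identifier so you can log or email it.
$response = $ec2.CreateSnapshot($request)
$response.CreateSnapshotResult.Snapshot.SnapshotId
```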

I invoke this PowerShell script on success of the initial Windows Server Backup scheduled task using Task Scheduler – see this post at Binary Royale on how to do this (they send an email – I use the same approach to do a snapshot instead).

The great thing about the PowerShell script is that it will send you an email once completed – I have also set up mail notifications at completion of the Windows Server Backup (success, failure, disk full) so I know early when (or if) the backup fails.  Note that in Windows Server 2008 R2 you can send an email right from Windows Task Scheduler – on AWS you’ll need to have an IIS smart host up and running to support this though – see my earlier post on how to set up an IIS SES relay host.

Summary

The benefit of this approach is a consistent state for your backup contents as well as an easy way to do a controlled restore at a later date (“create volume from snapshot” in the AWS management console and then attach to an Instance).  Additionally you achieve a reasonably reliable backup for not much more than the cost of EBS I/O and S3 I/O and storage costs.

Hope you find this useful.


You Can See The Storm Early If You Watch The Clouds

Here we are just past the halfway mark of 2012 and it’s time to ask yourself “do I have any skin in the cloud game?”

Through 2011 and 2012 the relative strength of the Australian dollar has meant the cost of entry to the major cloud platforms has dropped significantly for Australian businesses.  KPMG estimates that if 75% of Australian businesses moved to cloud services, this would have a positive influence on Australia’s GDP to the tune of $3.32 billion annually!

I will admit to having been a cloud sceptic in the past – certainly of the Azure platform – but with the recent set of changes introduced by Microsoft I’d say that Azure is now mature enough that it can be considered a competitive option against Amazon Web Services for complex application build and host (certainly in the .Net space).

What’s Your Cloud Tier?

I will add some more criteria to my original question – are you engaged on a “Tier 1” public cloud platform?  I’d classify Tier 1 as any of the mature global players – Amazon Web Services, Microsoft Azure and Google App Engine are probably the top three here (there are others but I’m not going to try to rate / rank the multitude available…)

Beyond this I’d classify a range of local “Tier 2” providers that don’t compete on the same global scale as Tier 1 but offer similar sorts of options.  For the most part these providers tend to be not much more than highly virtualised traditional hosting businesses where you aren’t actually that far away from the bare metal.

Private Clouds

If you’re doing something like a Virtual Private Cloud (VPC) on a Tier 1 cloud platform then OK, you’re in the game.  If you built your own “Private Cloud” (whatever that is) or you’re running virtual machines in a Tier 2 provider then I’m afraid you can go and take a shower and head home.

My point here is that if you need to care about anything vaguely hardware related, or you don’t have global reach as an option on your platform, then you’re not truly in the cloud.

Architectural Change

Once upon a time we had to care about resource utilisation when we ran our applications in the shared context of a mainframe that had limited (and expensive) resources.  The commoditisation of compute resources over the last 30 years means we stopped caring about the cost of CPU, RAM, Disk and network resources (for the most part).  Virtualisation only added to this.

Also, we controlled entire platforms top-to-bottom so we could tune or tweak aspects of the platform to suit our demands.  Virtualisation took away some of this flexibility, but if you’ve ever spent any time managing VMware or other systems you’ll know that there’s a large number of tuning possibilities that make the virtualisation layer almost entirely transparent.

We have become lazy in our architectural practices.  We stopped needing to solve some challenges because we could assume them away based on tuning our resources. Guess what?  We don’t get that any more with the cloud.  We still get commoditised resources but we also share it with other tenants.  This means we do need to start solving these challenges again through better architectural design.

Examples of common cloud scenarios that we haven’t had to solve recently on our own platforms include:

  • Bandwidth constrained shared LAN segments.
  • I/O constrained disk access.
  • Transient component failure.
  • Dynamic scale up / scale down.
  • Pay-per-use for LAN traffic, disk access and other components.

If you’re not changing your architectural practices to take the above into your designs then you can also pack the bags and head home.

Additionally, if you’re a vendor and you aren’t making your licensing work for dynamic scale up / scale down you also lose the right to a spot on the team (and I have spoken to some vendors who aren’t supporting the cloud because it will gut their licensing model – not that they said it in so many words!)

In Summary

Ultimately the point I am trying to make is that if you haven’t been actively looking at how you can move to the cloud then you are already too late to gain any form of competitive advantage in moving to it.  You should immediately start looking at ways to utilise the cloud even if it is only via small-scale deployments that are not necessarily related to key parts of your business.

I know I haven’t touched here on the data privacy / jurisdiction issues that are obviously a big concern for most Australian businesses, but there are ways to work around those challenges in the way you design and build your solutions.  Also, it’s highly likely we will see at least one Tier 1 cloud provider here in Australia with a full offering prior to the end of 2013. You should be getting ready now.

Finally, despite my obvious advocacy for the cloud you should always be aware of shamen.
Love consultants!


Dr. Script or: How I Learned to Stop Worrying and Love PowerShell

PowerShell has been with us now since late 2006 but my experience is that widespread understanding and use of it is still very limited within the .Net developer community.  If you’re a Windows administrator, operator or release manager I’m sure you’re all over it.  If your job description doesn’t fit into one of those three groups and you’re not inclined to understand how your software will be deployed or operated then the chances are you don’t know much about PowerShell.

I have to say that PowerShell’s syntactic differences from the “standard” C# most developers would know are an easy reason to start disliking working with it (for example it’s not “==” for equals, it’s “-eq”, and null becomes $null).  I say to those developers: it’s worth persevering because you stand to gain a lot once you get your head around it.
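To illustrate the syntax gap, here’s a small sketch mapping a C# condition to its PowerShell equivalent:

```powershell
# C#:         if (name == null || count != 3) { ... }
# PowerShell: comparison operators are words and null is $null.
$name  = $null
$count = 3

if ($name -eq $null -or $count -ne 3) {
    'this branch runs because $name is null'
}

# Other common mappings: && becomes -and, || becomes -or, ! becomes -not
```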

If you need to deploy something to test or debug it, or you have an awkward environment to work with (I’m looking at you, SharePoint 2010), then you will save yourself a lot of time by scripting out as much manual stuff as you can.  Your release managers and ops teams will also love you if you can provide scripts that allow them to deploy without needing to perform a massive number of manual steps (if you ever want to reach the Continuous Deployment nirvana you can’t really avoid working with a scripting language).

I’m currently working on a project that includes the need for 84 IIS websites in a load balanced environment.  These sites follow a pattern – think I want to manually configure 84 IIS instances?  Right.  I have a PowerShell script that will set up each site: create a new local Windows login for the Application Pool, set up the Application Pool (including the user), create the website (including folders, a holding page and association with the Application Pool) and set the various host headers I need.  Finally it will grant my WebDeploy user rights to deploy to the site.  I’ll put a version of that script up in a future post.

On the same project we’ve used PowerShell and the Amazon Web Services .Net SDK to provide tooling for us to push database backups from SQL Server to S3 (what, no RDS yet for SQL Server????)  That’s another trick – you get full access to .Net in PowerShell simply by pulling in a reference to the right .Net assembly.
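As a rough illustration of that trick (the bucket, keys and file paths below are made up, and SDK class names may differ slightly between versions), pushing a backup file to S3 from PowerShell might look something like this:

```powershell
# Sketch only - reference the AWS SDK assembly and upload a SQL backup to S3.
Add-Type -Path "C:\Program Files (x86)\AWS SDK for .NET\bin\AWSSDK.dll"

$s3 = [Amazon.AWSClientFactory]::CreateAmazonS3Client("ACCESS_KEY", "SECRET_KEY")

$put = New-Object Amazon.S3.Model.PutObjectRequest
$put.BucketName = "my-backup-bucket"
$put.Key        = "backups/MyDb_{0:yyyyMMdd}.bak" -f (Get-Date)
$put.FilePath   = "D:\Backups\MyDb.bak"

# Full .Net API surface, driven from a script - no compilation required.
$s3.PutObject($put) | Out-Null
```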

Anyway, I could bang on about why you need to learn PowerShell if you haven’t, but I’ll pick up this thread in a future post when I provide a sample script for setting up IIS sites.

On a side note I read about the death of Adam Yauch (MCA) from the Beastie Boys on May 4th – he’d been battling cancer for the last three years and most certainly didn’t deserve to put down the mic at 47.  This one’s for him.


Using Amazon SES for .Net Application Mail Delivery

Until March 2012 Amazon’s Simple Email Service (SES) had limited support for mail sent via existing .Net code or the IIS SMTP virtual server.  Some recent changes mean both are now possible, so in this post I’ll quickly cover how you can configure your existing apps to utilise SES.

If you don’t understand why you should be using SES for your applications then have a look at the Amazon SES FAQ.  Before you start any of this configuration you need to ensure that you have created your SMTP credentials on the AWS console and that you have an appropriately validated sender address (or addresses).  Amazon is really strict here as they don’t want to get blacklisted as a spammer host.

IIS Virtual SMTP Server

Firstly, let’s look at how we can set up the SMTP server as a smart host that forwards mail on to SES for you.  This approach means you can configure all your applications to relay via IIS rather than talking directly to the SES SMTP interface.

1. Open up the IIS 6 Manager and select the SMTP Virtual Server you want to configure.

1.iis_virtual_smtp

2. Right-click on the server and select Properties.

3. In the Properties Window click on the Delivery tab.

4. On the Delivery tab click on the Outbound Security button on the bottom right.

5. In the Outbound Security dialog select “Basic Authentication” and enter your AWS SES Credentials.  Make sure you check the “TLS Encryption” box at the bottom left of the dialog.  Click OK. Your screen should look similar to this:

2.delivery_setup

6. Now open the Advanced Delivery dialog by clicking on the button.

7. Modify the dialog so it looks similar to the below.  I put the internal DNS name for my host here – the problem with this is that if you shut off your Instance the name will change and you need to update this.  Click OK.

3.advanced_delivery

Now you should be ready to use this IIS SMTP Virtual Server as a relay for your applications to SES.  Make sure you set your AWS Security Groups up correctly and that you are restricting which hosts can relay via your SMTP server.
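A quick way to smoke-test the relay from an allowed host is PowerShell’s built-in Send-MailMessage (the host name and addresses below are placeholders, and the sender must be an SES-validated address):

```powershell
# Sends one test message through the IIS smart host, which relays on to SES.
Send-MailMessage -SmtpServer "ip-10-0-0-5.ec2.internal" `
                 -From "validated@yourdomain.com" `
                 -To "you@yourdomain.com" `
                 -Subject "SES relay test" `
                 -Body "If you can read this, the smart host is relaying."
```

If the message never arrives, check the IIS SMTP queue folder and your Security Group rules before suspecting SES itself.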

Directly from .Net Code

Without utilising the Amazon AWS SDK for .Net you can also continue to send mail the way you always have – you will need to make the following change to your App.config or Web.config file.

<mailSettings>
      <smtp deliveryMethod="Network" from="validated@yourdomain.com">
          <network defaultCredentials="false"
                   enableSsl="true"
                   host="email-smtp.us-east-1.amazonaws.com"
                   port="25"
                   userName="xxxxxxxx"
                   password="xxxxxxxx" />
      </smtp>
</mailSettings>

Thanks to the author of the March 11, 2012 post on this thread on the AWS support forums for the configuration file edits above.

With these two changes most “legacy” .Net applications should be able to utilise Amazon’s SES service for mail delivery.  Happy e-mailing!


Amazon AWS Elastic Load Balancers and MSBuild – BFF

Our jobs take us to some interesting places sometimes – me, well, recently I’ve spent a fair amount of time stuck in the land of Amazon’s cloud offering AWS.

Right now I’m working on a release process based around MSBuild that can deploy to a farm of web servers at AWS.  As with any large online offering ours makes use of load balancing to provide a reliable and responsive service across a set of web servers.

Anyone who has managed deployment of a solution in this scenario is most likely familiar with this approach:

  1. Remove target web host from load balancing pool.
  2. Update content on the web host and test.
  3. Return web host to load balancing pool.
  4. Repeat for all web hosts.
  5. (Profit! No?!)

Fantastic Elastic and the SDK

In AWS-land load balancing is provided by the Elastic Load Balancing (ELB) service which, like many of the components that make up AWS, provides a nice programmatic API in a range of languages.

Being focused primarily on .Net we are happy to see good support for it in the form of the AWS SDK for .NET.  The AWS SDK provides a series of client proxies and strongly-typed objects that can be used to programmatically interface with pretty much anything your AWS environment is running.

Rather than dive into the detail on the SDK I’d recommend downloading a copy and taking a look through the samples they have – note that you will need an AWS environment in order to actually test out code but this doesn’t stop you from reviewing the code samples.

Build and Deploy

As mentioned above we are looking to do minimal manual intervention deployments and are leveraging MSBuild to build, package and deploy our solution.  One item that is missing in this process is a way to take a target machine out of the load balancer pool so we can deploy to it.

I spent some time reviewing existing custom MSBuild task libraries that provide AWS support but it looks like many of them are out-of-date and haven’t been touched since early 2011.  AWS is constantly changing so being able to keep up with all it has to offer would probably require some effort!

The result is that I decided to create a few custom tasks that I could use for registration / deregistration of EC2 Instances with one or more ELBs.

I’ve included a sample of a basic RegisterInstances custom task below to show you how you go about utilising the AWS SDK to register an Instance with an ELB.   Note that the code below works but that it’s not overly robust.

The things you need to know for this to work are:

  1. Your AWS Security Credentials (Access and Secret Keys).
  2. The names of the ELBs you want to register / deregister instances with.
  3. The names of the EC2 Instances to register / deregister.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.Build.Utilities;
using Microsoft.Build.Framework;
using Amazon.ElasticLoadBalancing;
using Amazon.ElasticLoadBalancing.Model;

namespace TheFarm.MsBuild.CustomTasks.Aws.Elb
{
    /// <summary>
    /// Register one or more EC2 Instance with one or more AWS Elastic Load Balancer.
    /// </summary>
    /// <remarks>Requires the AWS .Net SDK.</remarks>
    public class RegisterInstances : Task
    {
         /// <summary>
         /// Gets or sets the load balancer names.
         /// </summary>
         /// <value>
         /// The load balancer names.
         /// </value>
         /// <remarks>The account associated with the AWS Access Key must have created the load balancer(s).</remarks>
         [Required]
         public ITaskItem[] LoadBalancerNames { get; set; }

         /// <summary>
         /// Gets or sets the instance identifiers.
         /// </summary>
         /// <value>
         /// The instance identifiers.
         /// </value>
         [Required]
         public ITaskItem[] InstanceIdentifiers { get; set; }

         /// <summary>
         /// Gets or sets the AWS access key.
         /// </summary>
         /// <value>
         /// The aws access key.
         /// </value>
         [Required]
         public ITaskItem AwsAccessKey { get; set; }

         /// <summary>
         /// Gets or sets the AWS secret key.
         /// </summary>
         /// <value>
         /// The aws secret key.
         /// </value>
         [Required]
         public ITaskItem AwsSecretKey { get; set; }

         /// <summary>
         /// Gets or sets the Elastic Load Balancing service URL.
         /// </summary>
         /// <value>
         /// The ELB service URL.
         /// </value>
         /// <remarks>Will typically take the form: https://elasticloadbalancing.region.amazonaws.com</remarks>
         [Required]
         public ITaskItem ElbServiceUrl { get; set; }

         /// <summary>
         /// When overridden in a derived class, executes the task.
         /// </summary>
         /// <returns>
         /// true if the task successfully executed; otherwise, false.
         /// </returns>
         public override bool Execute()
         {
             try
             {
                  // throw away - to test for valid URI.
                  new Uri(ElbServiceUrl.ItemSpec);

                  var config = new AmazonElasticLoadBalancingConfig { ServiceURL = ElbServiceUrl.ItemSpec };

                  using (var elbClient = new AmazonElasticLoadBalancingClient(AwsAccessKey.ItemSpec, AwsSecretKey.ItemSpec, config))
                  {
                        foreach (var loadBalancer in LoadBalancerNames)
                        {
                              Log.LogMessage(MessageImportance.Normal, "Preparing to add Instances to Load Balancer with name '{0}'.", loadBalancer.ItemSpec);

                              var initialInstanceCount = DetermineInstanceCount(elbClient, loadBalancer);

                              var instances = PrepareInstances();

                              var registerResponse = RegisterInstancesWithLoadBalancer(elbClient, loadBalancer, instances);

                              ValidateInstanceRegistration(initialInstanceCount, instances, registerResponse);

                              DetermineInstanceCount(elbClient, loadBalancer);
                        }
                  }
             }
             catch (InvalidInstanceException iie)
             {
                   Log.LogError("One or more supplied instances was invalid: {0}", iie.Message);
             }
             catch (LoadBalancerNotFoundException lbe)
             {
                   Log.LogError("The supplied Load Balancer could not be found: {0}", lbe.Message);
             }
             catch (UriFormatException)
             {
                   Log.LogError("The supplied ELB service URL is not a valid URI. Please confirm that it is in the format 'scheme://aws.host.name'.");
             }

             return !Log.HasLoggedErrors;
        }

        /// <summary>
        /// Prepares the instances.
        /// </summary>
        /// <returns>List of Instance objects.</returns>
        private List<Instance> PrepareInstances()
        {
            var instances = new List<Instance>();

            foreach (var instance in InstanceIdentifiers)
            {
                Log.LogMessage(MessageImportance.Normal, "Adding Instance '{0}' to list.", instance.ItemSpec);

                instances.Add(new Instance { InstanceId = instance.ItemSpec });
            }
            return instances;
        }

        /// <summary>
        /// Registers the instances with load balancer.
        /// </summary>
        /// <param name="elbClient">The elb client.</param>
        /// <param name="loadBalancer">The load balancer.</param>
        /// <param name="instances">The instances.</param>
        /// <returns>RegisterInstancesWithLoadBalancerResponse containing response from AWS ELB.</returns>
        private RegisterInstancesWithLoadBalancerResponse RegisterInstancesWithLoadBalancer(AmazonElasticLoadBalancingClient elbClient, ITaskItem loadBalancer, List<Instance> instances)
        {
            var registerRequest = new RegisterInstancesWithLoadBalancerRequest { Instances = instances, LoadBalancerName = loadBalancer.ItemSpec };

            Log.LogMessage(MessageImportance.Normal, "Executing call to add {0} Instances to Load Balancer '{1}'.", instances.Count, loadBalancer.ItemSpec);

            return elbClient.RegisterInstancesWithLoadBalancer(registerRequest);
        }

        /// <summary>
        /// Validates the instance registration.
        /// </summary>
        /// <param name="initialInstanceCount">The initial instance count.</param>
        /// <param name="instances">The instances.</param>
        /// <param name="registerResponse">The register response.</param>
        private void ValidateInstanceRegistration(int initialInstanceCount, List<Instance> instances, RegisterInstancesWithLoadBalancerResponse registerResponse)
        {
            var postInstanceCount = registerResponse.RegisterInstancesWithLoadBalancerResult.Instances.Count();

            if (postInstanceCount != initialInstanceCount + instances.Count)
            {
                 Log.LogWarning("At least one Instance failed to register with the Load Balancer.");
            }
        }

        /// <summary>
        /// Determines the instance count.
        /// </summary>
        /// <param name="elbClient">The elb client.</param>
        /// <param name="loadBalancer">The load balancer.</param>
        /// <returns>integer containing the instance count.</returns>
        private int DetermineInstanceCount(AmazonElasticLoadBalancingClient elbClient, ITaskItem loadBalancer)
        {
             var response = elbClient.DescribeLoadBalancers(new DescribeLoadBalancersRequest { LoadBalancerNames = new List<string> { loadBalancer.ItemSpec } });

             var initialInstanceCount = response.DescribeLoadBalancersResult.LoadBalancerDescriptions[0].Instances.Count();

             Log.LogMessage(MessageImportance.Normal, "Load Balancer with name '{0}' reports {1} registered Instances.", loadBalancer.ItemSpec, initialInstanceCount);

             return initialInstanceCount;
        }
    }
}

So now we have a task compiled into an assembly we can reference that assembly in our build script and invoke the task using the following syntax:
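Referencing the assembly is done with a UsingTask element; a sketch, assuming the compiled assembly sits alongside the build script (adjust the AssemblyFile path to wherever you drop the DLL):

```xml
<UsingTask TaskName="TheFarm.MsBuild.CustomTasks.Aws.Elb.RegisterInstances"
           AssemblyFile="$(MSBuildThisFileDirectory)TheFarm.MsBuild.CustomTasks.dll" />
```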


    <RegisterInstances LoadBalancerNames="LoadBalancerName1" InstanceIdentifiers="i-SomeInstance1" AwsAccessKey="YourAccessKey" AwsSecretKey="YourSecretKey" ElbServiceUrl="https://elasticloadbalancing.your-region.amazonaws.com" />

That’s pretty much it.  There is one vagary to be aware of: the service call to deregister an Instance returns before the Instance is fully de-registered from the load balancer, so don’t key any deployment action directly off that return – perform additional checks to confirm that the Instance *is* no longer registered before deploying.
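One way to perform that check is a simple polling loop.  The sketch below is PowerShell reusing the same SDK types as the task above; the client creation, load balancer name and Instance id are placeholders you’d supply from your own deployment script:

```powershell
# Sketch only - after calling deregister, wait until the ELB stops
# reporting the Instance before deploying to it.
# Assumes $elbClient is an AmazonElasticLoadBalancingClient built as in the C# task.
$loadBalancerName = "LoadBalancerName1"
$instanceId       = "i-SomeInstance1"

do {
    Start-Sleep -Seconds 5

    $request = New-Object Amazon.ElasticLoadBalancing.Model.DescribeLoadBalancersRequest
    $request.LoadBalancerNames = New-Object 'System.Collections.Generic.List[string]'
    $request.LoadBalancerNames.Add($loadBalancerName)

    $response   = $elbClient.DescribeLoadBalancers($request)
    $registered = $response.DescribeLoadBalancersResult.LoadBalancerDescriptions[0].Instances |
        ForEach-Object { $_.InstanceId }
} while ($registered -contains $instanceId)
```

In production you would also want a timeout on the loop so a stuck deregistration fails the build rather than hanging it.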

I hope you’ve found this post useful in showing you what is possible when combining the AWS SDK with the extensibility of MSBuild.
