
Microsoft Open Source Roadshow – Free training on Azure – Auckland and Wellington!

Microsoft Open Source Roadshow

Hello New Zealand friends!

I’m really happy to share that we are bringing the Open Source Roadshows to Auckland and Wellington in November 2017!

We’ll cover language support (application and Azure SDK), OS support (Linux, BSD), Datastores (MySQL, PostgreSQL, MongoDB, SQL Server on Linux), Continuous Deployment and, last but not least, Containers (Docker, Container Registry, Kubernetes, et al).

If you’re interested in learning more, here are the links to register:

  • Auckland – Tuesday 7 November 2017: Register
  • Wellington – Wednesday 8 November 2017: Register

If you have any questions you’d like to see answered on the day, feel free to leave a comment.

I hope to see you there!


Speaking: Azure Functions at MUG Strasbourg – 28 September

I’m really excited about this opportunity to share the power of Azure with the developer and IT Pro community in France, which is soon to gain local Azure Regions in which to build its solutions.

If you live in the surrounding areas I’d love to see you there. More details available via Meetup.


Moving from Azure VMs to Azure VM Scale Sets – Runtime Instance Configuration

In my previous post I covered how you can move from deploying a solution to pre-provisioned Virtual Machines (VMs) in Azure to a process that allows you to create a custom VM Image that you deploy into VM Scale Sets (VMSS) in Azure.

As I alluded to in that post, one item we will need to take care of in order to truly move to a VMSS approach using a VM image is to remove any local static configuration data we might bake into our solution.

There is a range of options you can move to when going down this path, from solutions you build yourself through to running services such as HashiCorp’s Consul.

The environment I’m running in is fairly simple, so I decided to focus on a simple custom build. The remainder of this post covers the approach I’ve used to build a solution that works for me, and that might perhaps inspire you.

I am using an ASP.NET Web API as my example, but I am also using a similar pattern for Windows Services running on other VMSS instances – just the location where your startup code goes will differ.

The Starting Point

Back in February I blogged about how I was managing configuration of a Web API I was deploying using VSTS Release Management. In that post I covered how you can use the excellent Tokenization Task to populate a Web Deploy Parameters file, which is then used at deployment time to replace placeholders in an application’s web.config.

My sample web.config followed the shape sketched below.
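
A minimal sketch of what such a tokenised web.config might look like – the setting names here are illustrative placeholders, not the originals:

<configuration>
  <appSettings>
    <!-- Illustrative only: the __TOKEN__ placeholders are replaced at
         deployment time by Web Deploy using the tokenised Parameters file -->
    <add key="ServiceEndpointUrl" value="__ServiceEndpointUrl__" />
    <add key="AppInsightsKey" value="__AppInsightsKey__" />
  </appSettings>
</configuration>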

The problem with this approach when we shift to VM Images is that these values are baked into the VM Image that is our build output, and that image can in turn be deployed to any environment. I could work around this by building VM Images for each environment to which I deploy, but frankly that is less than ideal and breaks the idea of “one binary (immutable VM), many environments”.

The Solution

I didn’t really want to go down the route of service discovery using something like Consul, and I really only wanted to use Azure’s default networking setup. This networking requirement meant there was no custom private DNS I could use for some form of configuration service discovery based on hostname lookups.

…and, to be honest, with the PaaS services I have in Azure, I can build my own solution pretty easily.

The solution I landed on looks similar to the below.

  • Store runtime configuration in Cosmos DB and geo-replicate this information so it is highly available. Each VMSS setup gets its own configuration document which is identified by a key-service pair as the document ID.
  • Leverage a read-only Access Key for Cosmos DB because we won’t ever ask clients to update their own config!
  • Use Azure Key Vault to store the Cosmos DB Account and Access Key that can be used to read the actual configuration. Key Vault is regionally available by default so we’re good there too.
  • Configure an Azure AD Service Principal with access to Key Vault to allow our solution to connect to Key Vault.

Update July 2018: Microsoft has released Managed Service Identities as a way to handle delegated permissions between resources in Azure – I would strongly advise wrapping your head around this and leveraging MSI as the way to connect your VMSS instances to Key Vault (and potentially other resources).

I used a conventions-based approach to configuration, so that the whole process works from just the VMSS instance name and the service type requesting configuration. You can see this in the startup code below, where the URL used to access Key Vault and the Cosmos DB document ID are both built using the same convention.

The resulting changes to my Web API code (based on the earlier web.config sample) are shown below. This all occurs at application startup time.
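
A trimmed-down sketch of that startup logic looks something like this – the vault URL, secret names, database / collection names and document ID convention shown here are all illustrative assumptions, not my original code:

// Illustrative sketch of conventions-based startup configuration.
// All names (vault URL, secret names, document ID format) are assumptions.
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.KeyVault;

public static class RuntimeConfig
{
    public static async Task LoadAsync()
    {
        // VMSS computer names are "<prefix><6 char base-36 instance id>",
        // so stripping the trailing 6 characters yields the VMSS prefix.
        var hostName = Environment.MachineName.ToLowerInvariant();
        var vmssPrefix = hostName.Substring(0, hostName.Length - 6);
        var serviceType = "webapi"; // this host's role

        var keyVault = KeyVaultAuth.GetClient(); // see the auth snippet below

        // Convention: Secrets named "<prefix>-<service>-..." in a known vault.
        var cosmosUri = await keyVault.GetSecretAsync(
            $"https://myconfigvault.vault.azure.net/secrets/{vmssPrefix}-{serviceType}-cosmosuri");
        var cosmosKey = await keyVault.GetSecretAsync(
            $"https://myconfigvault.vault.azure.net/secrets/{vmssPrefix}-{serviceType}-cosmoskey");

        // Convention: one configuration document per VMSS / service pair.
        var docClient = new DocumentClient(new Uri(cosmosUri.Value), cosmosKey.Value);
        var configDoc = await docClient.ReadDocumentAsync(
            UriFactory.CreateDocumentUri("configdb", "serviceconfig",
                $"{vmssPrefix}-{serviceType}"));
        // ...push the values from configDoc into the application's config here.
    }
}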

I have also defined a default Application Insights account into which any instance can log should it have problems (which includes not being able to read its expected Application Insights key). This is important as it allows us to troubleshoot issues without needing to get access to the VMSS instances.

Here’s how we authorise our calls to Key Vault to retrieve our initial configuration Secrets (this is invoked from the startup code above).
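
As a minimal sketch, the authorisation uses the Service Principal’s Client ID and Secret with ADAL – the credential values below are placeholders, and in reality they would come from a secure bootstrap source rather than being hard-coded:

// Illustrative sketch: authenticating to Key Vault with an Azure AD
// Service Principal via ADAL. The credential values are placeholders.
using Microsoft.Azure.KeyVault;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public static class KeyVaultAuth
{
    private const string ClientId = "00000000-0000-0000-0000-000000000000";
    private const string ClientSecret = "<service principal secret>";

    public static KeyVaultClient GetClient()
    {
        return new KeyVaultClient(async (authority, resource, scope) =>
        {
            var context = new AuthenticationContext(authority);
            var credential = new ClientCredential(ClientId, ClientSecret);
            var result = await context.AcquireTokenAsync(resource, credential);
            return result.AccessToken;
        });
    }
}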

My goal was to make configuration easily manageable across multiple VMSS instances which requires some knowledge around how VMSS instance names are created.

The basic details are that they consist of a hostname prefix (based on what you input at VMSS creation time) that is appended with a base-36 (hexatrigesimal) value representing the actual instance. There’s a great blog post from Guy Bowerman at Microsoft that covers this in detail, so I won’t reproduce it here.

The final piece of the puzzle is the Cosmos DB configuration entry, a sketch of which I show below.
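
The document looks something like the following – everything apart from the ‘id’ convention is an illustrative placeholder:

{
  "id": "myapiprod-webapi",
  "environment": "production",
  "settings": {
    "ServiceEndpointUrl": "https://internal-api.example.com/",
    "AppInsightsKey": "00000000-0000-0000-0000-000000000000"
  }
}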

The ‘id’ field maps to the VMSS instance prefix that is determined at runtime based on the name you used when creating the VMSS. We strip the trailing 6 characters to remove the unique component of each VMSS instance hostname.

The outcome of the three components (code changes, Key Vault and Cosmos DB) is that I can quickly add or remove VMSS groups in configuration, change where their configuration data is stored by updating the Key Vault Secrets, and even update running VMSS instances by changing the configuration settings and then forcing a restart on the VMSS instances, causing them to re-read configuration.

Is the above the only or best way to do this? Absolutely not 🙂

I’d like to think it’s a good way that might inspire you to build something similar or better 🙂

Interestingly, in getting to this stage I’ve also realised there might be some value in considering moving this solution to Service Fabric in future, though I am more inclined to shift to Containers running under the control of an orchestrator like Kubernetes.

What are your thoughts?

Until the next post!


Moving from Azure VMs to Azure VM Scale Sets – VM Image Build

I have previously blogged about using Visual Studio Team Services (VSTS) to securely build and deploy solutions to Virtual Machines running in Azure.

In this and the following posts I am going to take the existing build process I have and modify it so I can make use of VM Scale Sets to host my API solution. This switch is to allow the API to scale under load.

My current setup is very much fit for purpose for the limited trial it’s been used in, but I know I’ll see at least 150 times the traffic when I am running at full scale in production, and while my trial environment barely scratches the surface in terms of consumed resources, I don’t want to have to capacity plan to the nth degree for production.

Shifting to VM Scale Sets with autoscale enabled will help me greatly in this respect!

Current State of Affairs

Let’s refresh ourselves with what is already in place.

Build

My existing build is fairly straightforward – we version the code (using a PowerShell script), restore packages, build the solution and then finally make sure all our artifacts are available for use by the Release Management process.

Existing build process

The output of this build is a Web Deploy package along with a PowerShell DSC module that configures the deployment on the target VM.

Release Management

I am using multiple Environments for Release Management to manage transformations of the Web Deploy Parameters file along with the Azure Subscription / Resource Group being deployed to. The Tasks in each Environment are the same though.

My Release Management Tasks (as shown below) open the NSG to allow DSC remote connections from VSTS, transform the Web Deploy Parameters file, find the VMs in a particular Azure Resource Group, copy the deployment package to each VM, run the DSC script to install the solution, before finally closing the NSG again to stop the unwashed masses from prying into my environment.

Existing release process

All good so far?

What’s the goal?

The goal is to make the minimum amount of changes to existing VSTS and deployment artifacts while moving to VM Scale Sets… sounds like an interesting challenge, so let’s go!

Converting the Build

The good news is that we can retain the majority of our existing Build definition.

Here are the items we do need to update.

Provisioning PowerShell

The old deployment approach leveraged PowerShell Desired State Configuration (DSC) to configure the target VM and deploy the custom code. The DSC script to achieve this is shown below.
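
A trimmed-down sketch of the general shape of that configuration follows – the feature, package and path names are assumptions, not my original script:

# Illustrative sketch only: a DSC configuration of the general shape
# described above. Resource, package and path names are assumptions.
Configuration WebApiDeploy
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node "localhost"
    {
        WindowsFeature WebServer
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }

        Script DeployWebDeployPackage
        {
            GetScript  = { @{ Result = "DeployWebDeployPackage" } }
            TestScript = { Test-Path "C:\inetpub\wwwroot\MyApi\web.config" }
            SetScript  = {
                # Sync the Web Deploy package using the transformed Parameters file
                & "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" `
                    -verb:sync `
                    -source:package="C:\deploy\MyApi.zip" `
                    -dest:auto `
                    -setParamFile:"C:\deploy\MyApi.SetParameters.xml"
            }
            DependsOn  = "[WindowsFeature]WebServer"
        }
    }
}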

The challenge with the above PowerShell is that it assumes the target VM has been configured to allow WinRM / DSC to run. This presents some challenges in our updated approach of creating a VM Image, so I redeveloped the script so it doesn’t require the use of DSC. The result is shown below.
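
Again as a sketch, the plain PowerShell equivalent can be run directly by Packer without any WinRM / DSC pre-configuration (the same illustrative path names are assumed):

# Illustrative sketch only: the same deployment steps as the DSC
# configuration above, expressed as plain PowerShell for Packer to run.
Install-WindowsFeature -Name Web-Server -IncludeManagementTools

& "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" `
    -verb:sync `
    -source:package="C:\deploy\MyApi.zip" `
    -dest:auto `
    -setParamFile:"C:\deploy\MyApi.SetParameters.xml"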

As an aside, we could also drop the use of the Parameters file here too. As we’ll see in another post, we need to make the VM Image stateless, so any local web.config changes that are environment-specific are problematic and are best excluded from the resulting image.

Network Security Group Script

In the new model, which prepares a VM Image, we no longer need the Azure PowerShell script that opens / closes the Network Security Group (NSG) on deployment, so it’s removed in the new process.

No more use of Release Management

As the result of our Build is now a VM Image, we no longer need to leverage Release Management either, making our overall process much simpler.

The New Build

The new Build definition is shown below – you will notice the above changes have been applied, along with the addition of two new Tasks. The aspect of this I am most happy about is that our core build remains mostly unchanged – we only had to add two Tasks and change one PowerShell script to make this work.

New build process

Let’s look at the new Tasks.

Build Machine Image

This Task utilises Packer from HashiCorp to prepare a generalised Windows VM image that we can use in a VM Scale Set.

The key item to note is that you need an Azure Subscription in which a temporary VM and the final generalised VHD can be created, so that Packer can build the baseline image for you.

New Build Packer Task

You will notice we are using the existing artifacts staging directory as the source of our configuration PowerShell (DeployVmSnap.ps1), which is used by Packer to configure the host once the VM has been created from an Azure Gallery Image.

The other important item here is the use of the output field. This will contain the fully qualified URL in blob storage where the resulting packed image will reside. We can use this in our next step.
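
For reference, a minimal, hand-built Packer template along the lines of what this Task drives might look like the following sketch – the credentials, image details and names are all placeholders, not my actual template:

{
  "builders": [{
    "type": "azure-arm",
    "client_id": "<service principal id>",
    "client_secret": "<service principal secret>",
    "subscription_id": "<subscription id>",
    "tenant_id": "<tenant id>",
    "os_type": "Windows",
    "image_publisher": "MicrosoftWindowsServer",
    "image_offer": "WindowsServer",
    "image_sku": "2016-Datacenter",
    "communicator": "winrm",
    "winrm_use_ssl": true,
    "winrm_insecure": true,
    "winrm_username": "packer",
    "location": "Australia Southeast",
    "vm_size": "Standard_D2_v2",
    "resource_group_name": "<staging resource group>",
    "storage_account": "<staging storage account>",
    "capture_container_name": "images",
    "capture_name_prefix": "webapi"
  }],
  "provisioners": [
    { "type": "powershell", "script": "DeployVmSnap.ps1" },
    { "type": "powershell",
      "inline": ["& $env:SystemRoot\\System32\\Sysprep\\Sysprep.exe /oobe /generalize /quiet /quit"] }
  ]
}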

Create VM Image Registration

The last Task I’ve added invokes an Azure PowerShell script – which is just a PowerShell script, but one executed with the necessary environmental configuration to allow me to run cmdlets that interact with Azure’s APIs.

The result of the previous Packer-based Task is a VHD sitting in a Blob Storage account. While we can use this in various scenarios, I am interested in ensuring it is visible in the Azure Portal and also in allowing it to be used in VM Scale Sets that utilise Managed Disks.

The PowerShell script is shown below.
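
A minimal sketch of such a script, using the AzureRM cmdlets to register the generalised VHD as a Managed VM Image (the parameter names are assumptions):

# Illustrative sketch: register a generalised VHD as a Managed VM Image.
param(
    [Parameter(Mandatory = $true)] [string] $VhdUri,   # output from the Packer Task
    [Parameter(Mandatory = $true)] [string] $ResourceGroupName,
    [Parameter(Mandatory = $true)] [string] $ImageName,
    [Parameter(Mandatory = $true)] [string] $Location
)

$imageConfig = New-AzureRmImageConfig -Location $Location
$imageConfig = Set-AzureRmImageOsDisk -Image $imageConfig `
                   -OsType Windows -OsState Generalized -BlobUri $VhdUri
New-AzureRmImage -ImageName $ImageName `
                 -ResourceGroupName $ResourceGroupName -Image $imageConfig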

…and here is how it is used in the Task in VSTS…

New build VM Image

You can see how we have utilised the Packer Task’s output parameter as an input into this Task (it’s in the “Script Arguments” box at the bottom of the Task).

The Result

Once we have this configured and running, the result is a nice crisp VM Image that can be used in a VM Scale Set. The below screenshot shows how this looks in my environment – I put the Azure Storage Account where the VHDs live, along with the VM Image registrations, in the same Resource Group for cleaner management.

Build Output

There are still some outstanding items we need to deal with, specifically: configuration management (our VM Image has to be stateless) and VM Scale Set creation using the Image. We will deal with these two items in the following posts.

For now I hope you have a good grasp on how you can convert an existing VSTS build that deploys to existing VMs to one that produces a generalised VM Image that you can use either for new VMs or in VM Scale Sets.

Until the next post.

🙂

Want to see how I dealt with instance configuration? Then have a read of my next post in this series.


Microsoft Open Source Roadshow – Free training on Open Source and Azure

Microsoft Open Source Roadshow

In early August I’ll be running a couple of free training days covering how developers who work in the Open Source space can bring their solutions to Microsoft’s Azure public cloud.

We’ll cover language support (application and Azure SDK), OS support (Linux, BSD), Datastores (MySQL, PostgreSQL, MongoDB), Continuous Deployment and, last but not least, Containers (Docker, Container Registry, Kubernetes, et al).

We’re starting off in Sydney and Melbourne, so if you’re interested, here are the links to register:

  • Sydney – Monday 7 August 2017: Register
  • Melbourne – Friday 11 August 2017: Register

If you have any questions you’d like to see answered on the day, feel free to leave a comment.

I hope to see you there!


Deploy a PHP site to Azure Web Apps using Dropbox

I’ve been having some good fun getting into the nitty gritty of Azure’s Open Source support and keep coming across some amazing things.

If you want to move away from those legacy hosting businesses and want a simple method to deploy static or dynamic websites, then this is worth a look.

The sample PHP site I used for this demonstration can be cloned from GitHub here: https://github.com/banago/simple-php-website

The video has no sound, but it should be easy enough to follow.

It’s so simple even your dog could do it.



AAD B2C Talk – Innovation Days 2016 Wrap

I recently spoke about Azure AD B2C at the Innovation Days 2016 event held in Sydney.

The presentation for my talk is available here:

https://1drv.ms/p/s!AqBI2LiKM4LHwNJvTxrXNAblpTBCJA

and you can find the sample code for the web and API apps here:

https://github.com/sjwaight/innovationdays2016/

Why You Should Care About Containers

The release this week of Windows Server 2016 Technical Preview 3, which includes the first public release of Microsoft’s Docker-compatible container implementation, is sure to bring additional focus onto an already hot topic.

I was going to write a long introductory post about why containers matter, but Azure CTO Mark Russinovich beat me to it with his great post this week over on the Azure site.

Instead, here’s a TL;DR summary on containers and the Windows announcement this week.

  • A Container isn’t a Virtual Machine – it’s a Virtual Operating System (see the short sketch after this list). Low-level services are provided by a Container Host which manages resource allocation (CPU / RAM / Disk). Some smarts around filesystem use mean a Container can effectively share most of the underlying Container Host’s filesystem and only needs to track delta changes to files.
  • Containers are not just a cloud computing thing: they can run anywhere you can run a Linux or Windows server.
  • Containers are, however, well suited to cloud computing because they offer:
    • faster startup times (they aren’t an entire OS on their own)
    • easier duplication and snapshotting (no need to track an entire VM any more)
    • higher density of hosting (though noisy neighbour control still needs solving)
    • easier portability: anywhere you have a compatible Container Host you can run your Container. The underlying virtualisation platform no longer matters, just the OS.
  • Docker is supported on all major tier-one public clouds: Azure, AWS, GCP, Bluemix and SoftLayer.
  • A Linux Container can’t run on a Windows Host (and vice versa): the Container Host shares its filesystem with a Container so it’s not possible to mix and match them!
  • Containers are well suited to use in microservices architectures where a Container hosts a single service.
  • Docker isn’t the only Container tech about (but it is the best known and considered the most mature), and we can hold out hope of good interoperability in the Container space (for now) thanks to the Open Container Initiative (OCI).
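
To make the “Virtual Operating System” idea a bit more concrete, here is a minimal, illustrative Docker example – the image and site names are placeholders:

# Dockerfile - each instruction adds a filesystem layer on top of the
# shared base image, so our Container image stores only the delta.
FROM nginx
COPY site/ /usr/share/nginx/html/

# Build and run - startup takes seconds as there is no full OS to boot.
docker build -t mysite .
docker run -d -p 8080:80 mysite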

Containers offer great value for DevOps-type custom software development and delivery, but can also work for standard off-the-shelf software too. I fully expect we will see Microsoft offer Containers for specific roles for their various server products.

As an example, for Exchange Server you may see Containers available for each Exchange role: Mailbox, Client Access (CAS), Hub Transport (Bridgehead), Unified Messaging and Edge Transport. You apply minimal configuration to the Container but can immediately deploy it into an existing or new Exchange environment. I would imagine this would make a great deal of sense to the teams running Office 365 given the number of instances of these they would have to run.

So, there we have it – hopefully an easily digestible intro and summary of all things Containers. If you want to play with the latest Windows Server release you can spin up a copy in Azure. If you don’t have a subscription, sign up for a trial. Alternatively, Docker offers some good introductory resources and training is available in Australia*.

HTH.

* Disclaimer: Cevo is a sister company of my employer Kloud Solutions.

Tagged , , , , ,

Setting Instance Level Public IPs on Azure VMs

Since October 2014 it has been possible to add a public IP address to a virtual machine in Azure so that it can be directly connected to by clients on the internet. This bypasses the load balancing in Azure and is primarily designed for those scenarios where you need to test a host without the load balancer, or you are deploying a technology that may require a connection type that isn’t suited to Azure’s Load Balancing technology.

This is all great, but the current implementation provides you with dynamic IP addresses only, which is not great unless you can wrap a DNS CNAME over the top of them. Reading the Instance-Level Public IP (ILPIP) documentation suggested that a custom FQDN was generated for an ILPIP, but for the life of me I couldn’t get it to work!

I went around in circles a bit based on the documentation Microsoft supplies as it looked like all I needed to do was to call the Set-AzurePublicIP Cmdlet and the Azure fabric would take care of the rest… but no such luck!

Get-AzureVM -ServiceName svc01 -Name vm01 | `
Set-AzurePublicIP -PublicIPName vm01ip -IdleTimeoutInMinutes 4 | `
Update-AzureVM

When I did a Get-AzureVM after the above I got the following output – note that I did get a public IP, but no hostname to go along with it!

DeploymentName              : svc01
Name                        : vm01
Label                       :
VM                          : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.PersistentVM
InstanceStatus              : ReadyRole
IpAddress                   : 10.0.0.5
InstanceStateDetails        :
PowerState                  : Started
InstanceErrorCode           :
InstanceFaultDomain         : 1
InstanceName                : vm01
InstanceUpgradeDomain       : 1
InstanceSize                : Small
HostName                    : vm01
AvailabilitySetName         : asn01
DNSName                     : http://svc01.cloudapp.net/
Status                      : ReadyRole
GuestAgentStatus            : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.GuestAgentStatus
ResourceExtensionStatusList : {Microsoft.Compute.BGInfo}
PublicIPAddress             : 191.239.XX.XX
PublicIPName                : vm01ip
PublicIPDomainNameLabel     :
PublicIPFqdns               : {}
NetworkInterfaces           : {}
VirtualNetworkName          : Group demo01
ServiceName                 : svc01
OperationDescription        : Get-AzureVM
OperationId                 : 62fdb5b28dccb3xx7ede3yyy18c0454
OperationStatus             : OK

Aaarggh!

The Solution

It turns out, after a little experimentation, that all you have to do to get this to work is supply a value to an undocumented parameter DomainNameLabel for the Set-AzurePublicIP Cmdlet.

Note: there is also no way to achieve this at time of writing via the Azure web portals – you have to use PowerShell to get this configured.

Let’s try our call from above again, this time with the right arguments!

Get-AzureVM -ServiceName svc01 -Name vm01 | `
Set-AzurePublicIP -PublicIPName vm01ip `
   -IdleTimeoutInMinutes 4 -DomainNameLabel vm01ilpip | `
Update-AzureVM

Success!!

DeploymentName              : svc01
Name                        : vm01
Label                       :
VM                          : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.PersistentVM
InstanceStatus              : ReadyRole
IpAddress                   : 10.0.0.5
InstanceStateDetails        :
PowerState                  : Started
InstanceErrorCode           :
InstanceFaultDomain         : 1
InstanceName                : vm01
InstanceUpgradeDomain       : 1
InstanceSize                : Small
HostName                    : vm01
AvailabilitySetName         : asn01
DNSName                     : http://svc01.cloudapp.net/
Status                      : ReadyRole
GuestAgentStatus            : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.GuestAgentStatus
ResourceExtensionStatusList : {Microsoft.Compute.BGInfo}
PublicIPAddress             : 191.239.XX.XX
PublicIPName                : vm01ip
PublicIPDomainNameLabel     : vm01ilpip
PublicIPFqdns               : {vm01ilpip.svc01.cloudapp.net , vm01ilpip.0.svc01.cloudapp.net}
NetworkInterfaces           : {}
VirtualNetworkName          : Group demo01
ServiceName                 : svc01
OperationDescription        : Get-AzureVM
OperationId                 : 62fdb5b28dccb3xx7ede3yyy18c0454
OperationStatus             : OK

Now that I have this information I can set up DNS CNAMEs against the PublicIPFqdns and use DNS to manage the inevitable IP address changes between instance recycles. Happy days!
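
For example, a zone file entry pointing a vanity name at the ILPIP FQDN might look like this (the vanity hostname is illustrative):

vm01.example.com.    300    IN    CNAME    vm01ilpip.svc01.cloudapp.net.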


Microsoft Ignite 2015 Event Review

Frank Sinatra sang “My Kind of Town (Chicago is)” and Ol’ Blue Eyes certainly knew a great town when he saw one!

The first ever Microsoft Ignite was held just this past week in Chicago at the McCormick Place Convention Centre (the largest in North America) and I was lucky enough to attend with the other 22,000+ attendees!

Ignite’s been a bit of an interesting event this time around as it has replaced a bunch of product-specific North American conferences such as MEC and Lync Conference, and it seemed to attract overflow from people who missed out on tickets to Build the week before. I think a lot of attendees seemed a little unsure about what Ignite actually was – is it for IT Pros or Developers, or both? More on this later!

Let me share my experience with you.

Firstly, as you might guess from my introduction, Ignite was huge – 22,000+ attendees, 4.5 days and a session catalogue that easily ran to 100+ sessions (I haven’t counted, but I’m sure someone has the full number and that my estimate is way, way too low). The Expo floor itself was massive, with Microsoft product teams taking substantial floor space and being available and open to talk and take feedback.

The sheer scale of this event lead to some fairly interesting experiences…

Eating

I think everyone got used to being herded to the first open food buffet where breakfast and lunch were served. Obviously humans will head to the nearest table, but I’m pretty sure by day 5 everyone was a little over the phrase ‘keep moving down the line to the first open table’ (followed closely by ‘food this way!’). It was generally done very politely though.

Food variation was pretty good and the serving style meant you avoided large servings, though some offerings were, errr, not what I’m used to (but I gave some a go in the name of international relations).

The red velvet cake was pretty amazing. I can’t pick a top meal (mainly because I don’t remember them all), but overall the food gets a thumbs up.

Moving Around

The distances needing to be travelled between sessions sometimes resulted in needing to almost sprint between them. Using one speaker’s joke: your Fitbit thanks you.

The size of McCormick Place meant that travelling between two sessions in the allotted gap (typically 15 minutes) could be a challenge. Couple this with a crowd unfamiliar with the location and all sorts of mayhem ensues. I would say by day three the chaos had settled down as people started to get familiar with locations (or were back at the hotel with a hangover).

If you wanted to have a meaningful discussion with anyone in the Expo you would effectively forgo a session or lunch depending on which was more important to you :).

💡 Pro-tip: learn the locations / map before you go as there are a lot of signs in the centre that may not make much sense at first.

Getting Out

McCormick Place is a substantial distance from downtown Chicago, which presented some challenges. Shuttle buses picked up and dropped off during morning and evening periods, but not in the middle of the day. If you needed anything in the middle of the day it was via taxi. The Chicago Metra train runs through here, but appears to be so infrequent that it’s not that useful.

On Tuesday evening many social events had been organised by various product teams and vendors which were mostly held downtown. Trying to make these immediately after the end of the day was tricky as shuttle buses to hotels filled very quickly and a massive taxi queue formed.

For me this meant an hour-long walk to my first event, essentially missing most of it!

The second event, also downtown, was a bit more of a success though 🙂

Did I mention the Queues?

For…

  • Toilets: I can now appreciate what major events are like for women who usually end up queuing for the toilet. Many of the breakout sessions were held near toilets that were woefully inadequate for the volume of people (particularly if you’re serving the same people free coffee and drinks…)

    💡 Pro-tip: there is a set of massive gents’ toilets located behind the Connie’s Pizza on North Level 2. Patently I didn’t go searching for the Ladies…

  • Swag: yep, you could tell the cool giveaways or prizes on the Expo floor simply by looking at the length of the queue.
  • Food: small ones at breakfast and lunch, some unending ones for the Attendee Celebration (hands up if you actually got a hot dog?!)

    💡 Pro-tip: at the Celebration find the least popular food that you still like. Best one for me was the steamed pork and vegetable buns, though there are only so many you can eat.

  • Transport: as I already hinted at above – depending on time of day you could end up in a substantial queue to get on a bus or taxi.

    💡 Pro-tip: take a room in a hotel a fair distance away (less people) and also walk a little if you need a taxi and flag one down.

Session Content

I don’t come from an IT Pro background and I don’t have an alignment with a particular product such as Exchange, so for me Ignite consisted of Azure-focused content, some SharePoint development for Office 365 and custom Azure application development using Node. I got a lot of useful insights at the event, so it hit the mark for me – the union of IT Pro and Developer competencies being driven by public cloud technology was great to see!

I have the feeling quite a few attendees were those who missed out on entrance to Build the week before, and I suspect many of them may have found a lack of compelling content (unless they were SharePoint developers). I also felt that a lot of content advertised as level 300 was more like level 200, though there were some good sessions that got the depth just right. I’m not sure if this issue arose because of the diverse range of roles expected to attend (admins, developers, managers and C-levels), which meant content was written to the lowest common denominator.

Finding suitable sessions was also a bit of a challenge given the volume available. While the online session builder (and mobile app) was certainly useful, I did spend a bit of time just scrolling through things, and I would say the repeated sessions were probably unnecessary. I certainly missed a couple of sessions I would have liked to attend (though I can catch up on Channel 9), primarily because I missed them in the schedule completely.

I hope for 2016 some work is done on the content to:

  • Make it easier to build a schedule in advance – the web schedule builder was less than ideal
  • Increase the technical depth of sessions, or clearly demarcate content aimed only at architect or C-level attendees
  • Have presenters who can present. There were some sessions I went to that were trainwrecks – granted, in a conference this size maybe that happens… but I just had the feeling that some speakers had no training or prep time for their sessions
  • Reduce or remove repeated sessions.

💡 Pro-tip: make sure to get the mobile application for Ignite (and that you have it connected to the Internet). It really was the most useful thing to have at the event!

Ignite The Future

As I noted above, this was the first year Ignite was held (and also the first in Chicago). During the 2015 conference Microsoft announced that it would be back in Chicago for 2016.

Should you go? Absolutely!

Some tweaks to the event (granted, some fairly large ones) should help make it smoother next time round – and I’ve seen the Microsoft Global Events team actively taking feedback on board elsewhere online.

The Ignite brand is also here to stay – I have it on good authority that TechEd as a brand is effectively “done” and Ignite will be taking over. Witness the first change: Ignite New Zealand.

Chicago’s certainly my type of town!

PS – make sure to check out what’s on when you’re in town…
