Category Archives: Cloud

AAD B2C Talk – Innovation Days 2016 Wrap

I recently spoke at the Innovation Days 2016 event held in Sydney on Azure AD B2C.

The presentation for my talk is available here:

https://1drv.ms/p/s!AqBI2LiKM4LHwNJvTxrXNAblpTBCJA

and you can find the sample code for the web and API apps here:

https://github.com/sjwaight/innovationdays2016/

Why You Should Care About Containers

The release this week of the Windows Server 2016 Technical Preview 3, which includes the first public release of Microsoft's Docker-compatible container implementation, is sure to bring additional focus onto an already hot topic.

I was going to write a long introductory post about why containers matter, but Azure CTO Mark Russinovich beat me to it with his great post this week over on the Azure site.

Instead, here’s a TL;DR summary on containers and the Windows announcement this week.

  • A Container isn't a Virtual Machine – it's a Virtual Operating System. Low-level services are provided by a Container Host which manages resource allocation (CPU / RAM / Disk). Some smarts around filesystem use mean a Container can effectively share most of the underlying Container Host's filesystem and only needs to track delta changes to files.
  • Containers are not just a cloud computing thing: they can run anywhere you can run a Linux or Windows server.
  • Containers are, however, well suited to cloud computing because they offer:
    • faster startup times (they aren’t an entire OS on their own)
    • easier duplication and snapshotting (no need to track an entire VM any more)
    • higher density of hosting (though noisy neighbour control still needs solving)
    • easier portability: anywhere you have a compatible Container Host you can run your Container. The underlying virtualisation platform no longer matters, just the OS.
  • Docker is supported on all major tier-one public clouds: Azure, AWS, GCP, Bluemix and SoftLayer.
  • A Linux Container can’t run on a Windows Host (and vice versa): the Container Host shares its filesystem with a Container so it’s not possible to mix and match them!
  • Containers are well suited to use in microservices architectures where a Container hosts a single service (a minimal example follows this list).
  • Docker isn’t the only Container tech about (but is the best known and considered most mature) and we can hold out hope of good interoperability in the Container space (for now) thanks to the Open Container Initiative (OCI).
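
To make the "one service per Container" idea a little more concrete, here is a minimal, hypothetical illustration using the Docker CLI on a Linux Container Host – nginx is simply a stand-in for whatever single service you'd run:

# pull a small web server image from the public Docker Hub registry
docker pull nginx

# run it as a single-service container, publishing port 80 on the
# Container Host through to port 80 inside the container
docker run -d --name web01 -p 80:80 nginx

# because the container shares the host's kernel it starts in seconds
docker ps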

Containers offer great value for DevOps-type custom software development and delivery, but can also work for standard off-the-shelf software too. I fully expect we will see Microsoft offer Containers for specific roles for their various server products.

As an example, for Exchange Server you may see Containers available for each Exchange role: Mailbox, Client Access (CAS), Hub Transport (Bridgehead), Unified Messaging and Edge Transport. You apply minimal configuration to the Container but can immediately deploy it into an existing or new Exchange environment. I would imagine this would make a great deal of sense to the teams running Office 365 given the number of instances of these they would have to run.

So, there we have it – hopefully an easily digestible intro and summary of all things Containers. If you want to play with the latest Windows Server release you can spin up a copy in Azure; if you don't have a subscription, sign up for a trial. Alternatively, Docker offers some good introductory resources and training is available in Australia*.

HTH.

* Disclaimer: Cevo is a sister company of my employer Kloud Solutions.


Setting Instance Level Public IPs on Azure VMs

Since October 2014 it has been possible to add a public IP address to a virtual machine in Azure so that it can be directly connected to by clients on the internet. This bypasses the load balancing in Azure and is primarily designed for those scenarios where you need to test a host without the load balancer, or you are deploying a technology that may require a connection type that isn’t suited to Azure’s Load Balancing technology.

This is all great, but the current implementation provides you with dynamic IP addresses only, which is not great unless you can wrap a DNS CNAME over the top of them. Reading the ILPIP documentation suggested that a custom FQDN was generated for an ILPIP, but for the life of me I couldn’t get it to work!

I went around in circles a bit based on the documentation Microsoft supplies as it looked like all I needed to do was to call the Set-AzurePublicIP Cmdlet and the Azure fabric would take care of the rest… but no such luck!

Get-AzureVM -ServiceName svc01 -Name vm01 | `
Set-AzurePublicIP -PublicIPName vm01ip -IdleTimeoutInMinutes 4 | `
Update-AzureVM

When I did a Get-AzureVM after the above I got the following output – note that I did get a public IP, but no hostname to go along with it!

DeploymentName              : svc01
Name                        : vm01
Label                       :
VM                          : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.PersistentVM
InstanceStatus              : ReadyRole
IpAddress                   : 10.0.0.5
InstanceStateDetails        :
PowerState                  : Started
InstanceErrorCode           :
InstanceFaultDomain         : 1
InstanceName                : vm01
InstanceUpgradeDomain       : 1
InstanceSize                : Small
HostName                    : vm01
AvailabilitySetName         : asn01
DNSName                     : http://svc01.cloudapp.net/
Status                      : ReadyRole
GuestAgentStatus            : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.GuestAgentStatus
ResourceExtensionStatusList : {Microsoft.Compute.BGInfo}
PublicIPAddress             : 191.239.XX.XX
PublicIPName                : vm01ip
PublicIPDomainNameLabel     :
PublicIPFqdns               : {}
NetworkInterfaces           : {}
VirtualNetworkName          : Group demo01
ServiceName                 : svc01
OperationDescription        : Get-AzureVM
OperationId                 : 62fdb5b28dccb3xx7ede3yyy18c0454
OperationStatus             : OK

Aaarggh!

The Solution

It turns out, after a little experimentation, that all you have to do to get this to work is to supply a value to an undocumented parameter, DomainNameLabel, for the Set-AzurePublicIP Cmdlet.

Note: there is also no way to achieve this at the time of writing via the Azure web portals – you have to use PowerShell to get this configured.

Let's try our call from above again, with the right arguments this time!

Get-AzureVM -ServiceName svc01 -Name vm01 | `
Set-AzurePublicIP -PublicIPName vm01ip `
   -IdleTimeoutInMinutes 4 -DomainNameLabel vm01ilpip | `
Update-AzureVM

Success!!

DeploymentName              : svc01
Name                        : vm01
Label                       :
VM                          : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.PersistentVM
InstanceStatus              : ReadyRole
IpAddress                   : 10.0.0.5
InstanceStateDetails        :
PowerState                  : Started
InstanceErrorCode           :
InstanceFaultDomain         : 1
InstanceName                : vm01
InstanceUpgradeDomain       : 1
InstanceSize                : Small
HostName                    : vm01
AvailabilitySetName         : asn01
DNSName                     : http://svc01.cloudapp.net/
Status                      : ReadyRole
GuestAgentStatus            : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.GuestAgentStatus
ResourceExtensionStatusList : {Microsoft.Compute.BGInfo}
PublicIPAddress             : 191.239.XX.XX
PublicIPName                : vm01ip
PublicIPDomainNameLabel     : vm01ilpip
PublicIPFqdns               : {vm01ilpip.svc01.cloudapp.net , vm01ilpip.0.svc01.cloudapp.net}
NetworkInterfaces           : {}
VirtualNetworkName          : Group demo01
ServiceName                 : svc01
OperationDescription        : Get-AzureVM
OperationId                 : 62fdb5b28dccb3xx7ede3yyy18c0454
OperationStatus             : OK

Now that I have this information I can set up DNS CNAMEs against the PublicIPFqdns and use DNS to manage the inevitable IP address change between instance recycles. Happy days!
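
For example, here's a quick check (using the placeholder names from above) that the generated FQDN resolves before you point a CNAME at it – Resolve-DnsName ships with Windows 8 / Server 2012 and later:

# confirm the ILPIP FQDN resolves to the public IP reported by Get-AzureVM
Resolve-DnsName -Name vm01ilpip.svc01.cloudapp.net -Type A

# the record you then create at your DNS provider looks something like:
# vm01.example.com  CNAME  vm01ilpip.svc01.cloudapp.net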


Microsoft Ignite 2015 Event Review

Frank Sinatra sang “My Kind of Town (Chicago is)” and Ol’ Blue Eyes certainly knew a great town when he saw one!

The first ever Microsoft Ignite was held just this past week in Chicago at the McCormick Place Convention Centre (the largest in North America) and I was lucky enough to attend with the other 22,000+ attendees!

Ignite’s been a bit of an interesting event this time around as it has replaced a bunch of product-specific North American conferences such as MEC and Lync Conference, and it seemed to attract overflow from people who missed out on tickets to Build the week before. I think a lot of attendees seemed a little unsure about what Ignite actually was – is it for IT Pros or Developers, or both? More on this later!

Let me share my experience with you.

Firstly, as you might guess from my introduction, Ignite was huge – 22,000+ attendees, 4.5 days and a session catalogue that easily ran to 100+ sessions (I haven't counted, but I'm sure someone has the full number and that my estimate is way, way too low). The Expo floor itself was massive, with Microsoft product teams taking substantial floor space and being available and open to talk and take feedback.

The sheer scale of this event led to some fairly interesting experiences…

Eating

I think everyone got used to being herded to the first open food buffet where breakfast and lunch were served. Obviously humans will head to the nearest table, but I’m pretty sure by day 5 everyone was a little over the phrase ‘keep moving down the line to the first open table’ (followed closely by ‘food this way!’). It was generally done very politely though.

Food variation was pretty good and the serving style meant you avoided large servings, though some offerings were, errr, not what I'm used to (but I gave some a go in the name of international relations).

The red velvet cake was pretty amazing. I can’t pick a top meal (mainly because I don’t remember them all), but overall the food gets a thumbs up.

Moving Around

The distances needing to be travelled between sessions sometimes resulted in needing to almost sprint between them. Using one speaker’s joke: your Fitbit thanks you.

The size of McCormick Place meant that getting between two sessions in the gap between them (typically 15 minutes) could be a challenge. Couple this with a crowd who are unfamiliar with the location and all sorts of mayhem ensues. I would say by day three the chaos had settled down as people started to get familiar with locations (or were back at the hotel with a hangover).

If you wanted to have a meaningful discussion with anyone in the Expo you would effectively forgo a session or lunch depending on which was more important to you :).

💡 Pro-tip: learn the locations / map before you go as there are a lot of signs in the centre that may not make much sense at first.

Getting Out

McCormick Place is a substantial distance from downtown Chicago which presented some challenges. Shuttle buses picked up and dropped off during morning and evening periods, but not in the middle of the day. If you needed anything in the middle of the day it was via taxi. The Chicago Metra train runs through here, but appears to be so infrequent that it’s not that useful.

On Tuesday evening many social events had been organised by various product teams and vendors which were mostly held downtown. Trying to make these immediately after the end of the day was tricky as shuttle buses to hotels filled very quickly and a massive taxi queue formed.

For me this meant an hour-long walk to my first event, essentially missing most of it!

The second event, also downtown, was a bit more of a success though 🙂

Did I mention the Queues?

For…

  • Toilets: I can now appreciate what major events are like for women who usually end up queuing for the toilet. Many of the breakout sessions were held near toilets that were woefully inadequate for the volume of people (particularly if you’re serving the same people free coffee and drinks…)

    💡 Pro-tip: there is a set of massive gents' toilets located behind Connie's Pizza on North Level 2. Patently I didn't go searching for the Ladies…

  • Swag: yep, you could tell the cool giveaways or prizes on the Expo floor simply by looking at the length of the queue.
  • Food: small ones at breakfast and lunch, some unending ones for the Attendee Celebration (hands up if you actually got a hot dog?!)

    💡 Pro-tip: at the Celebration find the least popular food that you still like. Best one for me was the steamed pork and vegetable buns, though there are only so many you can eat.

  • Transport: as I already hinted at above – depending on time of day you could end up in a substantial queue to get on a bus or taxi.

    💡 Pro-tip: take a room in a hotel a fair distance away (fewer people), and if you need a taxi walk a little way from the venue and flag one down.

Session Content

I don’t come from an IT Pro background and I don’t have an alignment with a particular product such as Exchange, so for me Ignite consisted of Azure-focused content, some SharePoint development for Office 365 and custom Azure application development using Node. I got a lot of useful insights at the event so it hit the mark for me – the union of IT Pro and Developer competencies is being driven by public cloud technology so it was great!

I have the feeling quite a few attendees were those who missed out on entrance to Build the week before, and I suspect many of them found a lack of compelling content (unless they were SharePoint developers). I also felt that a lot of content advertised as level 300 was more like level 200, though there were some good sessions that got the depth just right. I'm not sure if this issue is because of the diverse range of roles expected to attend (admins, developers, managers and C-levels), which meant content was written to the lowest common denominator.

Finding suitable sessions was also a bit of a challenge given the volume available. While the online session builder (and mobile app) was certainly useful I did spend a bit of time just scrolling through things, and I would say the repeated sessions were probably also unnecessary. I certainly missed a couple of sessions I would have liked to attend (though I can catch up on Channel 9), primarily because I missed them in the schedule completely.

I hope for 2016 some work is done on the content to:

  • Make it easier to build a schedule in advance – the web schedule builder was less than ideal
  • Increase the technical depth of sessions, or clearly demarcate content aimed only at architect or C-level attendees
  • Have presenters who can present. There were some sessions I went to that were trainwrecks – granted, in a conference this size maybe that happens… but I just had the feeling here that some speakers had no training or prep time for their sessions
  • Reduce or remove repeated sessions.

💡 Pro-tip: make sure to get the mobile application for Ignite (and that you have it connected to the Internet). It really was the most useful thing to have at the event!

Ignite The Future

As I noted above, this was the first year Ignite was held (and also the first in Chicago). During the 2015 conference Microsoft announced that the conference would be back in Chicago for 2016.

Should you go? Absolutely!

Some tweaks to the event (granted, some fairly large ones) should help make it smoother next time round – and I've seen the Microsoft Global Events team actively taking feedback on board elsewhere online.

The Ignite brand is also here to stay – I have it on good authority that TechEd as a brand is effectively “done” and Ignite will be taking over. Witness the first change: Ignite New Zealand.

Chicago’s certainly my type of town!

PS – make sure to check out what’s on when you’re in town…


Get Started with Docker on Azure

The most important part of this whole post is that you need to know that the whale in the Docker logo is officially named “Moby Dock”. Once you know that you can probably bluff your way through at least an introductory session on Docker :).

It’s been hard to miss the increasing presence of Docker, particularly if you work in cloud technology. Each of the major cloud providers has raced to provide container services (Azure, AWS, GCE and IBM) and these platforms see benefits in the higher density hosting they can achieve with minimal changes to existing infrastructure.

In this post I'm going to look at first steps to getting Docker running in Azure. There are other posts around that cover this, but there are a few gotchas along the way that I will cover off here.

First You Need a Beard

Anyone worth their take home pay who works with *nix needs to grow a beard. Not one of those hipstery-type things you see on bare-ankled fixie riders. No – a real beard.

While Microsoft works on adding Docker support in the next Windows Server release you are, for the most part, stuck using a Linux variant to host and manage your Docker containers.

The Azure Cross-Platform Command-Line Interface teases you with the ability to create Docker hosts from a Windows-based computer, but ultimately you’ll have a much easier experience running it all from a Linux environment (even if you do download the xplat-cli there anyway).

If you do try to set things up using a Windows machine you'll have to do a little dancing to get certificates set up (see my answer on this stackoverflow post). This is shortly followed by the realisation that you then can't manage the host you just created from another machine without first getting those nice certificates onto it – too much work if you ask me :).

While we're on Docker and Windows let's talk a little about boot2docker. This is designed to provide an easy way to get started with Docker and, while it's a great idea (especially for Windows users), you will have problems if you are already running Hyper-V, because boot2docker uses VirtualBox, which won't run if Hyper-V is installed.

So Linux it is then!

Management Machine

Firstly, let's set up a Linux host that will be our Docker management host. For this post we'll use a CentOS 7 host (I've avoided using Ubuntu because there are some challenges installing and using node.js, which is required for the Azure xplat CLI).

Once this machine is up and running we can SSH into it and install the required packages. Note that you’ll need to run this script as a root-equivalent user.
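
The original install script isn't reproduced here, but as a rough sketch it amounts to something like the following on CentOS 7 – treat the exact package names (EPEL for node.js and npm, the azure-cli npm package for the xplat CLI, plus git and the Docker client) as assumptions to adapt for your environment:

# run as a root-equivalent user
[sw@sw1 ~]$ sudo yum install -y epel-release
[sw@sw1 ~]$ sudo yum install -y nodejs npm git docker

# the Azure cross-platform CLI is distributed as an npm package
[sw@sw1 ~]$ sudo npm install -g azure-cli

# quick sanity checks
[sw@sw1 ~]$ azure --version
[sw@sw1 ~]$ docker --version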

Now that we have the bits needed to manage the Docker environment, we can build an actual Docker container host and then an image to run on it.

Docker Container Host

On Azure the easiest way to get going with Docker is to use the cross-platform CLI's Docker features.

As a non-root user on our management Linux box we can run the following commands to get our Docker host up and running. I'm using an Organisational Account here so I don't need to download any settings files.

# will prompt for username and password
[sw@sw1 ~]$ azure login

# set mode to service management
[sw@sw1 ~]$ azure config mode asm

# get the list of Ubuntu images - select one for the next command
[sw@sw1 ~]$ azure vm image list | grep Ubuntu-14_04

# setup the host - replace placeholders
[sw@sw1 ~]$ azure vm docker create -e 22 -l "West US" {dockerhost} "b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20141125-en-us-30GB" {linxuser} {linxpwd}

At this point we now have a new Azure VM up and running that has all the necessary Docker bits installed for us. If we look at the VM’s entry in the Azure Portal we can see that ports 22 and 4243 are already open for us. We can go ahead and test that everything’s good. Don’t forget to substitute your hostname!

[sw@sw1 ~]$ docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 info

Deploy an Image to a Container

We have our baseline infrastructure ready to rock, so let's go ahead and deploy an image to it. For the purpose of this post we are going to use the wordpress-nginx image that can be built using the configuration in this Github repository.

On our management host we can run the following commands to build the image from the Dockerfile contained in the Git repository.

[sw@sw1 ~]$ git clone https://github.com/eugeneware/docker-wordpress-nginx.git

[sw@sw1 ~]$ docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 build -t="docker-wordpress-nginx" docker-wordpress-nginx/

Note: you need to make sure you run this as the user who set up the Docker container host and that you do it in that user's home directory. This is because the certificates generated by the container host setup are stored in the user's home folder in a directory called .docker. Also, expect this process to take a reasonable amount of time because it's having to pull down a lot of data!

Once our image build is finished we can verify that it is on the Docker host by issuing this command:

[sw@sw1 ~]$ docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 images

Let’s create a new containerised version of the image and map the HTTP port out so we can access it from elsewhere in the world (we’re going to map port 80 to port 80). I’m also going to supply a friendly name for the container so I can easily reference it going forward (if I didn’t do this I’d get a nice long random string I’d need to use each time).

[sw@sw1 ~]$  docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 create -p 80:80 --name="dwn01" docker-wordpress-nginx

Now that we have created this we can start the container and it will happily run until we stop it :).

[sw@sw1 ~]$ docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 start dwn01

If we return to the VM management section in the Azure Management Portal and add an Endpoint to map to port 80 on our Docker container host, we can then open up our WordPress setup page in a web browser and configure WordPress.
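
If you'd rather avoid the portal, the same endpoint can also be added with the ASM-mode xplat CLI we installed earlier (mapping public port 80 to port 80 on the container host – treat the exact syntax as a sketch to verify against your CLI version):

# add a public endpoint (load balancer port 80 -> VM port 80) to the Docker host
[sw@sw1 ~]$ azure vm endpoint create {dockerhost} 80 80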

If we simply stop the container we will lose any changes to the running environment. Docker provides us with the ‘commit’ command to rectify this. Let’s go ahead and save our state:

[sw@sw1 ~]$ docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 commit dwn01 sw/dwn01

and then we can stop the Container.

[sw@sw1 ~]$ docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 stop dwn01

We now have a preserved state container along with the original unchanged one. If we want to move this container to another platform that supports Docker we could also do that, or we could repeat all our changes based on the original unchanged container.
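
As a hypothetical illustration of that portability, the committed image can be saved down to a tarball on the management host and loaded onto any other compatible Docker host later ({anotherhost} is just a placeholder):

# save the committed image from our Azure Docker host to a local tarball
[sw@sw1 ~]$ docker --tls -H tcp://{dockerhost}.cloudapp.net:4243 save -o dwn01.tar sw/dwn01

# ...and load it onto another Docker host when needed
[sw@sw1 ~]$ docker --tls -H tcp://{anotherhost}.cloudapp.net:4243 load -i dwn01.tar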

This has been a very brief overview of Docker on Azure – hopefully it will get you started with the basics and comfortable with the mechanics of setting up and managing Docker.


Microsoft Azure: 2014 Year in Review

What a massive year it’s been for Microsoft’s Azure public cloud platform. Running the Azure Sydney User Group this year has been great fun and seeing the growing local interest has been fantastic.

The focus from Microsoft has really changed in this space and has been clearly signalled with the change in name of Azure from Windows Azure to Microsoft Azure during the year and an increasingly broad set of non-Microsoft services offered on it.

2015 promises to be another big year, but let’s look back at what happened during 2014 with Azure.


January

The year got off to a fairly quiet start, but as we’ll see, it soon ramped up.

Preview

Nothing new hit preview this month – everything went straight to GA, so see below!

Generally Available

  • Websites:
    • staged publishing support
    • Always On support *
    • more frequent metric updates and monitoring alerts
  • SQL Database: new metrics and alerts
  • Mobile Services: SenchaTouch support
  • Cloud Services: A8 and A9 machine sizes now supported.

* If you’re using New Relic there are some known issues with this feature.

Other News

The Azure platform received PCI-DSS compliance validation and introduced reduced pricing rates for storage and storage transactions.


February and March

The headline item in this period was the launch of the Japan Geography with Japan East (Saitama Prefecture) and West (Osaka Prefecture) providing that market with in-country services. Also during this period we had the following announcements and launches:

Preview

Generally Available

Other News

Local gamers were unhappy not to have a local Xbox server platform to run on. Who knew it was such an issue having lag and big ping times 😉

Can we haz l0c4l serverz?


April

The big change this month was the change in name for Azure. Guaranteeing a million-and-one outdated websites, slides and documents in one swoop, the service name was changed from Windows Azure to Microsoft Azure. Just for fun there is no “official” logo, just text-based branding.

This change was a subtle nod to Azure’s ability to run Infrastructure-as-a-Service (IaaS) workloads on platforms other than Windows – something it had been doing for quite some time when this change was made.

Preview

  • Newly designed management portal
  • Mobile services: documented offline support and role-based Azure AD authentication
  • Resource Manager via PowerShell
  • SQL Database: active geo-replication (read replicas); self-service restore; 500GB support; 99.95% SLA
  • Media Services: secure delivery and Office 365 Video Portal.

Generally Available

  • Azure SDK 2.3: increased Visual Studio support – create VMs using Server Explorer
  • Autoscale – Virtual Machines, Cloud Services, Web Sites and Mobile Services
  • Azure AD Premium – Multi-factor Authentication (MFA) and security reporting
  • Websites: SSL bundled; Java support; Web Hosting Plans; Available in SE Asia
  • Web Jobs SDK
  • Media Services: Live Streaming; Partnerships for Content Management and Analytics (Ooyala) and Live Ingest (iStreamPlanet)
  • Basic Tier introduction: lower cost for dev/test scenarios. Applies to VMs and Websites
  • Puppet and Chef support on Azure VMs via VM Agent Extensions
  • Scheduler Service
  • Read Access Geo Redundant Storage (RA-GRS).

May and June

The pace from the first quarter of the year carried over into these two months! The standout amongst the range of announcements in this period was the launch of the API Management service, which was the result of the October 2013 acquisition of Apiphany.

Preview

  • Azure API Management – publish, manage and secure your existing REST APIs
  • Azure File Service (SMB shares) – even use on Linux VMs
  • BizTalk Hybrid Connections – on-prem connects without the secops guys 😉
  • Redis Cache support – now the preferred caching platform in Azure
  • RemoteApp – Lay down common Apps on demand
  • Site Recovery – backup your on-prem VMs to Azure
  • Secure VMs using security extensions from Microsoft, Symantec and McAfee
  • Internal Load Balancing for VMs and Cloud Services
  • HDInsights: Apache HBASE and Hadoop 3.1
  • Azure Machine Learning (or as I like to call it “Skynet”).

Generally Available

  • ExpressRoute – WAN and DC cross-connects
  • Multi-connection Virtual Networks (VNET) and VNET-to-VNET connections
  • Public IP Address Reservation (IPv4 shortage anyone?)
  • Traffic Manager: use Azure and non-Azure (“external”) endpoints
  • A8 and A9 VM support – lots of everything (8 / 16 cores – 7 GB RAM per core)
  • Storage Import/Export service – check region availability!

Other News

MSDN subscribers gained the ability to deploy Windows 7 and 8 images onto Azure VMs for dev/test scenarios and Enterprise Agreement (EA) customers were given the ability to purchase add-ons via the Azure Store which had previously not been possible.

We also learned about the scarcity of IPv4 addresses, with some US-based services being issued IPv4 addresses previously assigned to South America – causing many LOLs for service admins out there who suddenly found their services appearing to be in Brazil!


July and August

This period’s summary: Ice Bucket Challenge.

Preview

  • Event Hubs: capture data from all the Internet connected things!
  • Redis cache: in more places and sizes
  • Preview management portal: manage Azure SQL Database
  • DocumentDB
  • Azure Search.

Generally Available


September

No single announcement jumps out so I was going to put a picture of a kitten here but I thought you might want to see this (even if it is from 2012).

Preview

  • Role-based access control (RBAC) for Azure management in preview portal only
  • Resource Tagging support: filter by tag – useful for billing and ops
  • Azure SQL Database – Elastic Scale preview. Replaces Federations model
  • DocumentDB – enhanced management tooling and metrics
  • Azure Automation – AD auth; PowerShell converter; Runbook gallery and scheduling
  • Media Services – Live Streaming and DRM, faster encoding and indexer.

Generally Available

  • ‘D’ Series VMs: 60% faster CPU, more RAM and local SSD disk
  • Redis Cache: recommended cache solution in Azure. 250MB – 53GB! support
  • Site Recovery: on-prem DR with Azure – Win / Linux
  • Notification Hubs: Baidu Push (China)
  • Virtual Machines: instance-level public IPs (no NAT/PAT)
  • Azure SQL Database: three new service tiers and hourly billing
  • API Management: added OAuth support and REST Management API
  • Websites: VNet support, “scalable CMS” with WordPress and backups improvements
  • Management Services Alerts.

October and November

Pretty hard to go past this news in terms of ‘most outstanding announcement’ for these two months, especially for those of us in Australia!

Preview

  • ‘G’ Series VMs – (“Godzilla” VM) more CPU/RAM/SSD than any VM in any cloud *
  • Premium Storage – SSD-based with more than 50k IOPS *
  • Marketplace changes – CoreOS and Cloudera
  • Increased focus on Docker including portal support
  • Cloud Platform System (CPS) from Dell.
  • Batch: parallel task coordination
  • Data Factory: build data processing pipelines
  • Stream Analytics: analyse your Event Hubs data.

* Announced but not yet in public preview.

Generally Available

  • Australia Geography launches!
  • Network Security Groups
  • Multi-NIC Support in VMs (VM size dependent)
  • Forced Tunnelling (route traffic back on-prem)
  • ExpressRoute:
    • Cross-Subscription Sharing
    • Multi-connect to an Azure VNET
  • Bigger Azure Virtual Gateways
  • Ops Logging for Gateways and ExpressRoute
  • More control over Gateway encryption
  • Azure Load Balancer Source IP Affinity (“Sticky Sessions”)
  • Nested Traffic Manager Profiles
  • Preview Portal: Internal Load Balancing and Instance / Reserved IP Management
  • Automation Service: PowerShell Service Orchestration
  • Microsoft Antimalware Extension on VMs and Cloud Services (for free)
  • Many more VM Extensions available (PowerShell DSC / Octopus Deploy Tentacle)
  • Event Hubs: ingest more messages; SLA-backed.

Other News

We always have this vision of large-scale services being relatively immune to wide-ranging outages, yet all the main cloud platforms have regular challenges resulting in service disruptions of some variety.

On November 18 (or 19 depending on your timezone) Azure had one of these events, causing a disruption across many of its Regions affecting Storage and VMs.

The final Root Cause Analysis (RCA) shows the sorts of challenges involved in running platforms of this size.


December

You can almost hear the drawing of the breath before the Azure team starts 2015…

Preview

  • Premium Storage
  • Azure SQL Database: better feature parity with SQL 2014 and better large DB support.
  • Search: management via portal, multi-lingual support.
  • DocumentDB: better management via portal.
  • Azure Data Factory: integration with Machine Learning.

Generally Available

  • RemoteApp: run desktop apps anywhere
  • Azure SQL Database: new auditing features
  • Live Media Streaming: access the same platform as used at the World Cup and Olympics
  • Site Recovery: supported without SCVMM being deployed
  • Active Directory: App Proxy and password write-back enabled
  • Mobile Services: Offline Sync Managed SDK
  • HDInsight: Cluster customisation.

Other News

Another big announcement for the Australian cloud market was the news that from early 2015 Microsoft would be offering Office 365 and CRM Online from within Australia’s borders. What a great time to be working in this market!


There we have it! What a year! I haven’t detailed every single announcement to come out from the Azure team (this post would easily be twice as long), but if you think I’ve missed anything important leave a comment and I’ll update the post.

Simon.


Manage Azure Resources Using Tags

I’m having fun playing with the new Azure Resource Manager Tags feature.  So much so I blogged about it over on Kloud’s blog.  Check it out!

http://blog.kloud.com.au/2014/10/14/manage-azure-resources-using-tags/


Before that Pizza-as-a-Service diagram there was Pizza Party

Look, I’m not even going to reproduce the diagram here. I know you’ve seen it. Everybody’s seen it. Goodness knows my tweet stream has been full of it for the best part of the last month.

Just in case you haven’t seen it: http://lmgtfy.com/?q=pizza-as-a-service

In doing my prep for my upcoming talk at TechEd Australia I came across this gem from April 2004 (yep, that’s over 10 years ago folks!) that shows how a public API can have a positive upside to any business, even if the usage is not strictly that which was intended.

The back story is that some guys worked out how to directly call Domino’s online ordering backend web service at the time without needing to drive it all through a web interface.

This small example really demonstrates the power of a public API and how people will take it and use it in ways you had not intended, but in ways which will have a positive impact on your business.

What’s the bet that CompSci dorm rooms all over America in 2004 were happily ordering their delivered pizzas from a Linux command prompt?!

Enjoy.

Check out the source code up on GitHub!

https://github.com/coryarcangel/Pizza-Party-0.1.b


Azure Portal access for identical Microsoft and Organisational Accounts after federation.

People are making the right choice in federating their Office 365-created Azure Active Directory with their Azure subscriptions, thus allowing their users to log in to Office 365, Azure and other Microsoft services using the same set of credentials. This also provides a centralised place to manage all user accounts.

In some cases, however, organisations have previously mandated that staff create Microsoft Accounts (formerly Live ID) that match their corporate email addresses so they can easily identify those users in their Azure subscription or other services such as Visual Studio Online.

As I previously blogged on PAL licenses and Office 365, you will start to have login challenges once you start exposing your Azure AD and (typically) using ADFS, because the Microsoft Account login service stops being authoritative and users will be automatically redirected to your ADFS login page based on the email address they enter.

The following workaround is suggested if you need to unblock someone in this scenario:

  1. Make sure the user is logged out of all accounts (Office 365 and Microsoft)
  2. They then navigate to https://manage.windowsazure.com/
  3. When prompted put in a valid Microsoft Account login (say, jsmith7787@live.com). This redirects the user to the Microsoft Account login page.
  4. Enter the Microsoft Account actually required (i.e. johnsmith@example.com) and password, and log in.

This scenario doesn’t currently apply on the Office 365 login page because you can choose to swap the login type you are using by clicking on the “Sign in with a Microsoft account” link.

You should be looking at migrating away from Microsoft Accounts that use organisational email addresses and instead start investing in converting users to Azure AD. This will certainly be the case with the current changes happening with Visual Studio Online.

June 24: I’m talking about Azure HDInsight at Sydney ALT.NET

My colleague Jibin Johnson and I will be talking about Microsoft's cloud Big Data story based on Azure HDInsight and Power BI.

Come along and see how Microsoft is making use of the Power of the Elephant in the Cloud!

http://sydney.ozalt.net/2014/06/june-24-big-data.html
