Category Archives: PowerShell

Creating Azure AD B2C Service Principals with PowerShell

I’ve been lucky enough over the last few months to be working on some cool consumer-facing solutions with one of my customers. A big part of the work we’ve been doing is building Minimum Viable Product (MVP) solutions that allow us to quickly test concepts in-market using stable, production-ready technologies.

As these are consumer solutions, the Azure Active Directory (AAD) B2C service was an obvious choice for identity management, made even more so by AAD B2C’s ability to act as a source-of-truth for consumer identity and profile information across a portfolio of applications and services.

AAD B2C and Graph API

The AAD B2C schema is extensible, which allows you to add custom attributes to an identity. Some of these extension attributes you may wish the user to manage themselves (e.g. a mobile phone number), and some may be system-managed or remotely-sourced values associated with the identity (e.g. a Salesforce ContactID) that a user may never see or edit.

When we have attributes that the user doesn’t necessarily manage themselves, or we wish to do some other processing that isn’t part of the AAD B2C Policy framework, we need to use the Graph API to programmatically access AAD B2C identities.

The AAD B2C team has a good overview document on how to use the Graph API with AAD B2C, but I ran into an issue creating a Service Principal for my Graph API code because I used an Azure AD (Enterprise) identity to create and manage my B2C instance. As I suspect this is how the majority of instances are created, I thought I would document my solution here.

Background

I have a demo AAD B2C tenant set up, shown below, and you can clearly see my Kloud identity (creator / admin of the tenant) is sourced from “Microsoft Azure AD (other directory)”.

Admin user from another directory

Note that with this user I am still able to manage identities contained in the B2C directory via the web UI, but where I run into issues is with PowerShell, as we will see.

As you can see in the AAD B2C post referenced earlier, I need to use the Azure AD PowerShell module to set up a Service Principal. Firstly, let’s connect:

Connect-MsolService

At the prompt I enter my admin credentials (simon.waight@kloud.com.au) and am connected.

You can probably already spot the issue… there is no way to pass a TenantId to this command – the context is entirely based on the user’s User Principal Name (UPN).

When I run:

Get-MsolDomain

all I see is the verified domains attached to my home tenant:

Home tenant domains

… and my B2C domain isn’t one… so… no luck 😦

I read on through the documentation and looked at the PowerShell Cmdlets, and found what I thought would be my solution – the ability to specify a Tenant ID on the New-MsolServicePrincipal Cmdlet, as shown:

New-MsolServicePrincipal -DisplayName "Demo AAD B2C Graph Client" `
                         -TenantId bc1ec9c8-xxxx-xxxx-xxxx-e10e3ee114a8 `
                         -Type Password -Value "notmypassword"

I promptly received an error message advising me that I was not authorised to make changes in the specified tenant 🙂

The Solution

It’s actually pretty straightforward – create a local administrative account in the AAD B2C directory and use this to authenticate when using PowerShell.

Add user step 1


Add user step 2


AAD B2C with extra admin

Once you have done this, make sure to log into the Azure Portal using this new user (localadmin@simondemob2c.onmicrosoft.com in my example) and reset their password. If you are using the new AAD PowerShell module that supports modern authentication, you can do this in-line at login time.

Note: in order for MFA to work for this user at the PowerShell command prompt you should install the preview AAD module that supports modern authentication.
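
If you’re not using MFA on the new account you can also pass its credentials directly when connecting, which is handy for scripting – a quick sketch using my example local admin:

# Prompt for the localadmin@simondemob2c.onmicrosoft.com credentials
$cred = Get-Credential
Connect-MsolService -Credential $cred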

If I now run

Get-MsolDomain

I see the B2C directory I expect:

B2C tenant domains

I am now able to create the Service Principal I need for my Graph API client too:

New-MsolServicePrincipal -DisplayName "Demo AAD B2C Graph Client" `
                         -Type Password -Value "notmypassword"

returns the expected result of creating a Service Principal I can use for my Graph client.
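
To quickly prove the new Service Principal out, here’s a rough sketch of the kind of calls my Graph client makes against the Azure AD Graph API. The client ID is a placeholder – you’d substitute the AppPrincipalId returned by New-MsolServicePrincipal and the key you supplied via -Value:

# Placeholders - substitute your B2C tenant, the AppPrincipalId returned
# by New-MsolServicePrincipal and the key you supplied via -Value
$tenant = "simondemob2c.onmicrosoft.com"
$clientId = "00000000-0000-0000-0000-000000000000"
$clientSecret = "notmypassword"

# Request a token for the Azure AD Graph API using the client credentials grant
$token = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenant/oauth2/token" `
    -Body @{
        grant_type    = "client_credentials"
        client_id     = $clientId
        client_secret = $clientSecret
        resource      = "https://graph.windows.net"
    }

# List the users held in the B2C directory
Invoke-RestMethod -Method Get `
    -Uri "https://graph.windows.net/$tenant/users?api-version=1.6" `
    -Headers @{ Authorization = "Bearer $($token.access_token)" }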

Happy Days!



Using Active Directory Security Groups to Grant Permissions to Azure Resources

Reblogged from the Kloud Blog.

The introduction of the Azure Resource Manager platform in Azure continues to expose new possibilities for managing your deployed resources.

One scenario that you may not be aware of is the ability to use scoped RBAC role assignments to grant limited rights to Azure AD-based users and groups.

We know Azure provides us with many built-in RBAC roles, but it may not be immediately obvious that you can control their assignment scope.

What do I mean by this?

Simply that each RBAC role (including custom ones you create) can be used at various levels within Azure starting at the Subscription level (i.e. applies to anything in the Subscription) down to a Resource (i.e. applies just to one particular resource such as a Storage Account). Role assignments are also cascading – if I assign “Owner” rights to a User or Group at the Subscription level then they have that role…
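
As a taste of what the full post covers, a scoped assignment with the AzureRM cmdlets might look something like the sketch below – the group, subscription ID, resource group and storage account names are all hypothetical:

# Hypothetical names throughout - substitute your own
$group = Get-AzureRmADGroup -SearchString "Storage Admins"

# Grant the group rights over a single storage account only
New-AzureRmRoleAssignment -ObjectId $group.Id `
    -RoleDefinitionName "Storage Account Contributor" `
    -Scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demo-rg/providers/Microsoft.Storage/storageAccounts/demostorage01"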



Easy Debugging of PowerShell DSC for Azure Virtual Machines

I’ve been doing a lot of PowerShell DSC on Azure VMs recently, so I thought I’d share my experience in debugging custom DSC Modules when working in Azure.

My blog entry is over on the Kloud blog so head on over and have a read.


Setting Instance Level Public IPs on Azure VMs

Since October 2014 it has been possible to add a public IP address to a virtual machine in Azure so that it can be directly connected to by clients on the internet. This bypasses the load balancing in Azure and is primarily designed for those scenarios where you need to test a host without the load balancer, or you are deploying a technology that may require a connection type that isn’t suited to Azure’s Load Balancing technology.

This is all great, but the current implementation provides you with dynamic IP addresses only, which is of limited use unless you can wrap a DNS CNAME over the top of them. The ILPIP documentation suggested that a custom FQDN was generated for an ILPIP, but for the life of me I couldn’t get it to work!

I went around in circles a bit based on the documentation Microsoft supplies, as it looked like all I needed to do was call the Set-AzurePublicIP Cmdlet and the Azure fabric would take care of the rest… but no such luck!

Get-AzureVM -ServiceName svc01 -Name vm01 | `
Set-AzurePublicIP -PublicIPName vm01ip -IdleTimeoutInMinutes 4 | `
Update-AzureVM

When I did a Get-AzureVM after the above I got the following output – note that I did get a public IP, but no hostname to go along with it!

DeploymentName              : svc01
Name                        : vm01
Label                       :
VM                          : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.PersistentVM
InstanceStatus              : ReadyRole
IpAddress                   : 10.0.0.5
InstanceStateDetails        :
PowerState                  : Started
InstanceErrorCode           :
InstanceFaultDomain         : 1
InstanceName                : vm01
InstanceUpgradeDomain       : 1
InstanceSize                : Small
HostName                    : vm01
AvailabilitySetName         : asn01
DNSName                     : http://svc01.cloudapp.net/
Status                      : ReadyRole
GuestAgentStatus            : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.GuestAgentStatus
ResourceExtensionStatusList : {Microsoft.Compute.BGInfo}
PublicIPAddress             : 191.239.XX.XX
PublicIPName                : vm01ip
PublicIPDomainNameLabel     :
PublicIPFqdns               : {}
NetworkInterfaces           : {}
VirtualNetworkName          : Group demo01
ServiceName                 : svc01
OperationDescription        : Get-AzureVM
OperationId                 : 62fdb5b28dccb3xx7ede3yyy18c0454
OperationStatus             : OK

Aaarggh!

The Solution

It turns out, after a little experimentation, that all you have to do to get this to work is supply a value to the undocumented DomainNameLabel parameter of the Set-AzurePublicIP Cmdlet.

Note: there is also no way to achieve this at the time of writing via the Azure web portals – you have to use PowerShell to get this configured.

Let’s try our call from above again, with the right arguments this time!

Get-AzureVM -ServiceName svc01 -Name vm01 | `
Set-AzurePublicIP -PublicIPName vm01ip `
   -IdleTimeoutInMinutes 4 -DomainNameLabel vm01ilpip | `
Update-AzureVM

Success!!

DeploymentName              : svc01
Name                        : vm01
Label                       :
VM                          : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.PersistentVM
InstanceStatus              : ReadyRole
IpAddress                   : 10.0.0.5
InstanceStateDetails        :
PowerState                  : Started
InstanceErrorCode           :
InstanceFaultDomain         : 1
InstanceName                : vm01
InstanceUpgradeDomain       : 1
InstanceSize                : Small
HostName                    : vm01
AvailabilitySetName         : asn01
DNSName                     : http://svc01.cloudapp.net/
Status                      : ReadyRole
GuestAgentStatus            : Microsoft.WindowsAzure.Commands.ServiceManagement.Model.GuestAgentStatus
ResourceExtensionStatusList : {Microsoft.Compute.BGInfo}
PublicIPAddress             : 191.239.XX.XX
PublicIPName                : vm01ip
PublicIPDomainNameLabel     : vm01ilpip
PublicIPFqdns               : {vm01ilpip.svc01.cloudapp.net , vm01ilpip.0.svc01.cloudapp.net}
NetworkInterfaces           : {}
VirtualNetworkName          : Group demo01
ServiceName                 : svc01
OperationDescription        : Get-AzureVM
OperationId                 : 62fdb5b28dccb3xx7ede3yyy18c0454
OperationStatus             : OK

Now that I have this information I can set up DNS CNAMEs against the PublicIPFqdns and use DNS to manage the inevitable IP address change between instance recycles. Happy days!
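
If you’re scripting those DNS updates, the generated FQDNs are available straight off the VM object returned by Get-AzureVM – a quick sketch using the same service and VM names as above:

# Grab the ILPIP FQDNs so they can be fed into your DNS tooling
$vm = Get-AzureVM -ServiceName svc01 -Name vm01
$vm.PublicIPFqdns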


Microsoft Ignite 2015 Event Review

Frank Sinatra sang “My Kind of Town (Chicago is)” and Ol’ Blue Eyes certainly knew a great town when he saw one!

The first ever Microsoft Ignite was held just this past week in Chicago at the McCormick Place Convention Centre (the largest in North America) and I was lucky enough to attend with the other 22,000+ attendees!

Ignite’s been a bit of an interesting event this time around as it has replaced a bunch of product-specific North American conferences such as MEC and Lync Conference, and it seemed to attract overflow from people who missed out on tickets to Build the week before. I think a lot of attendees seemed a little unsure about what Ignite actually was – is it for IT Pros or Developers, or both? More on this later!

Let me share my experience with you.

Firstly, as you might guess from my introduction, Ignite was huge – 22,000+ attendees, 4.5 days and a session catalogue easily running to 100+ sessions (I haven’t counted, but I’m sure someone has the full number and that my estimate is way, way too low). The Expo floor itself was massive, with Microsoft product teams taking substantial floor space and being available and open to talk and take feedback.

The sheer scale of this event led to some fairly interesting experiences…

Eating

I think everyone got used to being herded to the first open food buffet where breakfast and lunch were served. Obviously humans will head to the nearest table, but I’m pretty sure by day 5 everyone was a little over the phrase ‘keep moving down the line to the first open table’ (followed closely by ‘food this way!’). It was generally done very politely though.

Food variation was pretty good and the serving style meant you avoided large servings, though some offerings were, errr, not what I’m used to (but I gave some a go in the name of international relations).

The red velvet cake was pretty amazing. I can’t pick a top meal (mainly because I don’t remember them all), but overall the food gets a thumbs up.

Moving Around

The distances to be travelled between sessions sometimes meant almost sprinting between them. To borrow one speaker’s joke: your Fitbit thanks you.

The size of McCormick Place meant that travelling between two sessions in the typical 15-minute break could be a challenge. Couple this with a crowd unfamiliar with the location and all sorts of mayhem ensues. I would say by day three the chaos had settled down as people started to get familiar with locations (or were back at the hotel with a hangover).

If you wanted to have a meaningful discussion with anyone in the Expo you would effectively forgo a session or lunch depending on which was more important to you :).

💡 Pro-tip: learn the locations / map before you go as there are a lot of signs in the centre that may not make much sense at first.

Getting Out

McCormick Place is a substantial distance from downtown Chicago which presented some challenges. Shuttle buses picked up and dropped off during morning and evening periods, but not in the middle of the day. If you needed anything in the middle of the day it was via taxi. The Chicago Metra train runs through here, but appears to be so infrequent that it’s not that useful.

On Tuesday evening many social events had been organised by various product teams and vendors, mostly held downtown. Trying to make these immediately after the end of the day was tricky as shuttle buses to hotels filled very quickly and a massive taxi queue formed.

For me this meant an hour-long walk to my first event, essentially missing most of it!

The second event, also downtown, was a bit more of a success though 🙂

Did I mention the Queues?

For…

  • Toilets: I can now appreciate what major events are like for women who usually end up queuing for the toilet. Many of the breakout sessions were held near toilets that were woefully inadequate for the volume of people (particularly if you’re serving the same people free coffee and drinks…)

    💡 Pro-tip: there is a set of massive gents’ toilets located behind Connie’s Pizza on North Level 2. Patently I didn’t go searching for the Ladies…

  • Swag: yep, you could tell the cool giveaways or prizes on the Expo floor simply by looking at the length of the queue.
  • Food: small ones at breakfast and lunch, some unending ones for the Attendee Celebration (hands up if you actually got a hot dog?!)

    💡 Pro-tip: at the Celebration find the least popular food that you still like. Best one for me was the steamed pork and vegetable buns, though there are only so many you can eat.

  • Transport: as I already hinted at above – depending on time of day you could end up in a substantial queue to get on a bus or taxi.

    💡 Pro-tip: take a room in a hotel a fair distance away (fewer people) and also walk a little if you need a taxi and flag one down.

Session Content

I don’t come from an IT Pro background and I don’t have an alignment with a particular product such as Exchange, so for me Ignite consisted of Azure-focused content, some SharePoint development for Office 365 and custom Azure application development using Node. I got a lot of useful insights at the event, so it hit the mark for me – public cloud technology is driving the union of IT Pro and Developer competencies, which made it a great fit!

I have the feeling quite a few attendees were those who missed out on entry to Build the week before, and I suspect many of them found a lack of compelling content (unless they were SharePoint developers). I also felt that a lot of content advertised as level 300 was more like level 200, though there were some good sessions that got the depth just right. I’m not sure if this is because of the diverse range of roles expected to attend (admins, developers, managers and C-levels), which meant content was written to the lowest common denominator.

Finding suitable sessions was a bit of a challenge too given the volume available. While the online session builder (and mobile app) was certainly useful, I did spend a bit of time just scrolling through things, and I would say the repeated sessions were probably also unnecessary. I certainly missed a couple of sessions I would have liked to attend (though I can catch up on Channel 9), primarily because I missed them in the schedule completely.

I hope for 2016 some work is done on the content to:

  • Make it easier to build a schedule in advance – the web schedule builder was less than ideal
  • Increase the technical depth of sessions, or clearly demarcate content aimed only at architect or C-level attendees
  • Have presenters who can present. There were some sessions I went to that were trainwrecks – granted in a conference this size maybe that happens… but I just had the feeling here that some speakers had no training or prep time for their sessions
  • Reduce or remove repeated sessions.

💡 Pro-tip: make sure to get the mobile application for Ignite (and that you have it connected to the Internet). It really was the most useful thing to have at the event!

Ignite The Future

As I noted above, this was the first year Ignite was held (and also the first in Chicago). During the 2015 conference Microsoft announced that the conference would be back in Chicago for 2016.

Should you go? Absolutely!

Some tweaks to the event (granted, some fairly large ones) should help make it smoother next time round – and I’ve seen the Microsoft Global Events team actively taking feedback on board elsewhere online.

The Ignite brand is also here to stay – I have it on good authority that TechEd as a brand is effectively “done” and Ignite will be taking over. Witness the first change: Ignite New Zealand.

Chicago’s certainly my type of town!

PS – make sure to check out what’s on when you’re in town…


Microsoft Azure: 2014 Year in Review

What a massive year it’s been for Microsoft’s Azure public cloud platform. Running the Azure Sydney User Group this year has been great fun and seeing the growing local interest has been fantastic.

The focus from Microsoft has really changed in this space, as clearly signalled by the renaming of Windows Azure to Microsoft Azure during the year and by an increasingly broad set of non-Microsoft services offered on the platform.

2015 promises to be another big year, but let’s look back at what happened during 2014 with Azure.


January

The year got off to a fairly quiet start, but as we’ll see, it soon ramped up.

Preview

Nothing was in preview this month – everything went straight to General Availability, so see below!

Generally Available

  • Websites:
    • staged publishing support
    • Always On support *
    • more frequent metric updates and monitoring alerts
  • SQL Database: new metrics and alerts
  • Mobile Services: SenchaTouch support
  • Cloud Services: A8 and A9 machine sizes now supported.

* If you’re using New Relic there are some known issues with this feature.

Other News

The Azure platform received PCI-DSS compliance validation and introduced reduced pricing rates for storage and storage transactions.


February and March

The headline item in this period was the launch of the Japan Geography with Japan East (Saitama Prefecture) and West (Osaka Prefecture) providing that market with in-country services. Also during this period we had the following announcements and launches:

Preview

Generally Available

Other News

Local gamers were unhappy not to have a local Xbox server platform to run on. Who knew it was such an issue having lag and big ping times 😉

Can we haz l0c4l serverz?


April

The big change this month was the change in name for Azure. Guaranteeing a million-and-one outdated websites, slides and documents in one swoop, the service name was changed from Windows Azure to Microsoft Azure. Just for fun there is no “official” logo, just text-based branding.

This change was a subtle nod to Azure’s ability to run Infrastructure-as-a-Service (IaaS) workloads on platforms other than Windows – something it had been doing for quite some time when this change was made.

Preview

  • Newly designed management portal
  • Mobile services: documented offline support and role-based Azure AD authentication
  • Resource Manager via PowerShell
  • SQL Database: active geo-replication (read replicas); self-service restore; 500GB support; 99.95% SLA
  • Media Services: secure delivery and Office 365 Video Portal.

Generally Available

  • Azure SDK 2.3: increased Visual Studio support – create VMs using Server Explorer
  • Autoscale – Virtual Machines, Cloud Services, Web Sites and Mobile Services
  • Azure AD Premium – Multi-factor Authentication (MFA) and security reporting
  • Websites: SSL bundled; Java support; Web Hosting Plans; Available in SE Asia
  • Web Jobs SDK
  • Media Services: Live Streaming; Partnerships for Content Management and Analytics (Ooyala) and Live Ingest (iStreamPlanet)
  • Basic Tier introduction: lower cost for dev/test scenarios. Applies to VMs and Websites
  • Puppet and Chef support on Azure VMs via VM Agent Extensions
  • Scheduler Service
  • Read Access Geo Redundant Storage (RA-GRS).

May and June

The pace from the first quarter of the year carried over into these two months! The stand-out amongst the range of announcements in this period was the launch of the API Management service, the result of the October 2013 acquisition of Apiphany.

Preview

  • Azure API Management – publish, manage and secure your existing REST APIs
  • Azure File Service (SMB shares) – even use on Linux VMs
  • BizTalk Hybrid Connections – on-prem connects without the secops guys 😉
  • Redis Cache support – now the preferred caching platform in Azure
  • RemoteApp – Lay down common Apps on demand
  • Site Recovery – backup your on-prem VMs to Azure
  • Secure VMs using security extensions from Microsoft, Symantec and McAfee
  • Internal Load Balancing for VMs and Cloud Services
  • HDInsights: Apache HBASE and Hadoop 3.1
  • Azure Machine Learning (or as I like to call it “Skynet”).

Generally Available

  • ExpressRoute – WAN and DC cross-connects
  • Multi-connection Virtual Networks (VNET) and VNET-to-VNET connections
  • Public IP Address Reservation (IPv4 shortage anyone?)
  • Traffic Manager: use Azure and non-Azure (“external”) endpoints
  • A8 and A9 VM support – lots of everything (8 / 16 cores – 7 GB RAM per core)
  • Storage Import/Export service – check region availability!

Other News

MSDN subscribers gained the ability to deploy Windows 7 and 8 images onto Azure VMs for dev/test scenarios, and Enterprise Agreement (EA) customers were given the ability to purchase add-ons via the Azure Store, which had previously not been possible.

We also learned about the scarcity of IPv4 addresses, with some US-based services being issued IPv4 addresses assigned to South America – causing many LOLs for service admins out there who thought their services were in Brazil!


July and August

This period’s summary: Ice Bucket Challenge.

Preview

  • Event Hubs: capture data from all the Internet connected things!
  • Redis cache: in more places and sizes
  • Preview management portal: manage Azure SQL Database
  • DocumentDB
  • Azure Search.

Generally Available


September

No single announcement jumps out so I was going to put a picture of a kitten here but I thought you might want to see this (even if it is from 2012).

Preview

  • Role-based access control (RBAC) for Azure management in preview portal only
  • Resource Tagging support: filter by tag – useful for billing and ops
  • Azure SQL Database – Elastic Scale preview. Replaces Federations model
  • DocumentDB – enhanced management tooling and metrics
  • Azure Automation – AD auth; PowerShell converter; Runbook gallery and scheduling
  • Media Services – Live Streaming and DRM, faster encoding and indexer.

Generally Available

  • ‘D’ Series VMs: 60% faster CPU, more RAM and local SSD disk
  • Redis Cache: recommended cache solution in Azure. 250MB – 53GB! support
  • Site Recovery: on-prem DR with Azure – Win / Linux
  • Notification Hubs: Baidu Push (China)
  • Virtual Machines: instance-level public IPs (no NAT/PAT)
  • Azure SQL Database: three new service tiers and hourly billing
  • API Management: added OAuth support and REST Management API
  • Websites: VNet support, “scalable CMS” with WordPress and backups improvements
  • Management Services Alerts.

October and November

Pretty hard to go past this news in terms of ‘most outstanding announcement’ for these two months, especially for those of us in Australia!

Preview

  • ‘G’ Series VMs – (“Godzilla” VM) more CPU/RAM/SSD than any VM in any cloud *
  • Premium Storage – SSD-based with more than 50k IOPS *
  • Marketplace changes – CoreOS and Cloudera
  • Increased focus on Docker including portal support
  • Cloud Platform System (CPS) from Dell.
  • Batch: parallel task coordination
  • Data Factory: build data processing pipelines
  • Stream Analytics: analyse your Event Hubs data.

* Announced but not yet in public preview.

Generally Available

  • Australia Geography launches!
  • Network Security Groups
  • Multi-NIC Support in VMs (VM size dependent)
  • Forced Tunnelling (route traffic back on-prem)
  • ExpressRoute:
    • Cross-Subscription Sharing
    • Multi-connect to an Azure VNET
  • Bigger Azure Virtual Gateways
  • Ops Logging for Gateways and ExpressRoute
  • More control over Gateway encryption
  • Azure Load Balancer Source IP Affinity (“Sticky Sessions”)
  • Nested Traffic Manager Profiles
  • Preview Portal: Internal Load Balancing and Instance / Reserved IP Management
  • Automation Service: PowerShell Service Orchestration
  • Microsoft Antimalware Extension on VMs and Cloud Services (for free)
  • Many more VM Extensions available (PowerShell DSC / Octopus Deploy Tentacle)
  • Event Hubs: ingest more messages; SLA-backed.

Other News

We always have this vision of large-scale services being relatively immune to wide-ranging outages, yet all the main cloud platforms have regular challenges resulting in service disruptions of some variety.

On November 18 (or 19 depending on your timezone) Azure had one of these events, causing a disruption across many of its Regions affecting Storage and VMs.

The final Root Cause Analysis (RCA) shows the sorts of challenges involved in running platforms of this size.


December

You can almost hear the drawing of the breath before the Azure team starts 2015…

Preview

  • Premium Storage
  • Azure SQL Database: better feature parity with SQL 2014 and better large DB support.
  • Search: management via portal, multi-lingual support.
  • DocumentDB: better management via portal.
  • Azure Data Factory: integration with Machine Learning.

Generally Available

  • RemoteApp: run desktop apps anywhere
  • Azure SQL Database: new auditing features
  • Live Media Streaming: access the same platform as used at the World Cup and Olympics
  • Site Recovery: supported without SCVMM being deployed
  • Active Directory: App Proxy and password write-back enabled
  • Mobile Services: Offline Sync Managed SDK
  • HDInsight: Cluster customisation.

Other News

Another big announcement for the Australian cloud market was the news that from early 2015 Microsoft would be offering Office 365 and CRM Online from within Australia’s borders. What a great time to be working in this market!


There we have it! What a year! I haven’t detailed every single announcement to come out from the Azure team (this post would easily be twice as long), but if you think I’ve missed anything important leave a comment and I’ll update the post.

Simon.


How to add a Site-to-Site VPN to an Azure Virtual Network after setup.

I have recently been working on a couple of engagements that involve utilising the site-to-site connectivity features of Windows Azure Virtual Networks. On one engagement we went through the setup of the Virtual Network early on, before the customer’s network team had gotten involved, and we just skipped the setup of the site-to-site connection at network creation time because we could come back and define it later.

What we found, however, was that once the Virtual Network was set up and populated with DNS servers and Virtual Machines we were unable to change the site-to-site settings using the Azure Management Portal – they were greyed out. We defined the Local Network within the Management Portal but were unable to associate it with the Virtual Network.

A key thing to note before you read on – if you haven’t defined any subnets on your Virtual Network and your address space is already fully in use, you can stop reading now: you will not be able to add a VPN connection to your environment. This is because the Azure Gateway solution requires its own subnet (it’s essentially a set of specially configured Windows VMs), and if you don’t have a subnet with no addresses in use then you will need to start over.

If the above paragraph doesn’t apply to you – read on.

  1. Set up your Local Network (on-prem / DC) via the Azure Management Portal by clicking +NEW in the bottom left of screen and selecting Network Services > Virtual Network > Add Local Network. Fill in all the necessary details.
  2. Open up PowerShell and make sure you have initialised your environment so it will work with your Azure Subscription.
  3. Retrieve the Virtual Network Configuration for the network you wish to add the gateway to.
    Get-AzureVNetConfig -ExportToFile "c:\temp\MyAzNets.netcfg"
    
  4. This produces an XML file you can edit.  You should see something similar to the below (we had already provisioned a “vpnsubnet” that was reserved for future VPN use).
    <?xml version="1.0" encoding="utf-8"?>
    <NetworkConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
      <VirtualNetworkConfiguration>
        <Dns>
          <DnsServers>
            <DnsServer name="POCDNS01" IPAddress="10.16.210.132" />
            <DnsServer name="POCDNS02" IPAddress="10.16.210.133" />
          </DnsServers>
        </Dns>
        <LocalNetworkSites>
          <LocalNetworkSite name="OnPremiseNetwork">
            <AddressSpace>
              <AddressPrefix>10.150.0.0/14</AddressPrefix>
              <AddressPrefix>10.154.0.0/15</AddressPrefix>
            </AddressSpace>
            <VPNGatewayAddress>95.11.12.124</VPNGatewayAddress>
          </LocalNetworkSite>
        </LocalNetworkSites>
        <VirtualNetworkSites>
          <VirtualNetworkSite name="pocvnet" AffinityGroup="pocvnet-ag">
            <AddressSpace>
              <AddressPrefix>10.16.210.0/23</AddressPrefix>
            </AddressSpace>
            <Subnets>
              <Subnet name="appsubnet">
                <AddressPrefix>10.16.210.0/25</AddressPrefix>
              </Subnet>
              <Subnet name="dcsubnet">
                <AddressPrefix>10.16.210.128/25</AddressPrefix>
              </Subnet>
              <Subnet name="ressubnet">
                <AddressPrefix>10.16.211.0/25</AddressPrefix>
              </Subnet>
              <Subnet name="vpnsubnet">
                <AddressPrefix>10.16.211.128/25</AddressPrefix>
              </Subnet>
            </Subnets>
            <DnsServersRef>
              <DnsServerRef name="POCDNS01" />
              <DnsServerRef name="POCDNS02" />
            </DnsServersRef>
          </VirtualNetworkSite>
        </VirtualNetworkSites>
      </VirtualNetworkConfiguration>
    </NetworkConfiguration>
    
  5. Firstly I thought I could just add my Gateway details to the VirtualNetworkSite element and re-submit, but when I submitted the file back to Azure I received this response:

    Set-AzureVNetConfig : “An exception occurred when calling the ServiceManagement API. HTTP Status Code: 400. Service Management Error Code: BadRequest. Message: Missing subnet referenced ‘GatewaySubnet’ in virtual network ‘pocvnet’.. Operation Tracking ID:”

    Oooooohhh, I see – the Gateway expects a reserved subnet named “GatewaySubnet” to exist. Let’s modify the file: rename the existing “vpnsubnet” to “GatewaySubnet” and add a Gateway element referencing the Local Network site, as shown below.

    <?xml version="1.0" encoding="utf-8"?>
    <NetworkConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
      <VirtualNetworkConfiguration>
        <Dns>
          <DnsServers>
            <DnsServer name="POCDNS01" IPAddress="10.16.210.132" />
            <DnsServer name="POCDNS02" IPAddress="10.16.210.133" />
          </DnsServers>
        </Dns>
        <LocalNetworkSites>
          <LocalNetworkSite name="OnPremiseNetwork">
            <AddressSpace>
              <AddressPrefix>10.150.0.0/14</AddressPrefix>
              <AddressPrefix>10.154.0.0/15</AddressPrefix>
            </AddressSpace>
            <VPNGatewayAddress>95.11.12.124</VPNGatewayAddress>
          </LocalNetworkSite>
        </LocalNetworkSites>
        <VirtualNetworkSites>
          <VirtualNetworkSite name="pocvnet" AffinityGroup="pocvnet-ag">
            <Gateway profile="Small">
              <ConnectionsToLocalNetwork>
                <LocalNetworkSiteRef name="OnPremiseNetwork" />
              </ConnectionsToLocalNetwork>
            </Gateway>
            <AddressSpace>
              <AddressPrefix>10.16.210.0/23</AddressPrefix>
            </AddressSpace>
            <Subnets>
              <Subnet name="appsubnet">
                <AddressPrefix>10.16.210.0/25</AddressPrefix>
              </Subnet>
              <Subnet name="dcsubnet">
                <AddressPrefix>10.16.210.128/25</AddressPrefix>
              </Subnet>
              <Subnet name="ressubnet">
                <AddressPrefix>10.16.211.0/25</AddressPrefix>
              </Subnet>
              <Subnet name="GatewaySubnet">
                <AddressPrefix>10.16.211.128/25</AddressPrefix>
              </Subnet>
            </Subnets>
            <DnsServersRef>
              <DnsServerRef name="POCDNS01" />
              <DnsServerRef name="POCDNS02" />
            </DnsServersRef>
          </VirtualNetworkSite>
        </VirtualNetworkSites>
      </VirtualNetworkConfiguration>
    </NetworkConfiguration>
    
  6. Now that the file is updated let’s resubmit it back to Azure for action:
    Set-AzureVNetConfig -ConfigurationPath "c:\temp\MyAzNets.netcfg"
    

That was it! If you switch back to your Azure Portal you’ll see some changes – the “GatewaySubnet” magically transforms into simply “Gateway” in the UI and your on-premise Local Network will be associated with site-to-site connectivity. After a period you will start to see changes in your Virtual Network as Azure provisions the VPN endpoints and starts to spin up instances and connect to the on-premise gateway.
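
If you’d rather watch progress from PowerShell than the portal, the gateway Cmdlets can help – a small sketch assuming the network names from my example configuration:

# Check on the gateway as Azure provisions it
Get-AzureVNetGateway -VNetName "pocvnet"

# Once provisioned, retrieve the shared key to hand to your network team
Get-AzureVNetGatewayKey -VNetName "pocvnet" -LocalNetworkSiteName "OnPremiseNetwork"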

Azure PowerShell Cmdlets FTW!


Feature Request: Generate PowerShell Scriptlets From the Azure Management Portal

I was at a customer the other day wrapping up a successful project delivered on Windows Azure when one of the customer’s team suggested a great idea – why not generate PowerShell scriptlets from actions completed within the Azure Management Portal so that you can re-issue the command at any future point (or in a different Azure Location)? As a result of this great feedback I’ve created this feature request on UserVoice. Go ahead and upvote, please!

http://www.mygreatwindowsazureidea.com/forums/216665-scripting-and-command-line-tools/suggestions/4242681-generate-powershell-script-from-actions-user-takes

November 2013 Update: I’ve given direct feedback to the Azure automation team – let’s see where it goes!


Secure Remote Management Studio access to SQL Server on Azure IaaS

If you have ever provisioned a SQL Server instance running on Azure IaaS and not used a VPN solution, you will find that by default you are unable to connect to it from a local Management Studio instance. By default all Virtual Machines are wrapped by a Cloud Service, which behaves to a degree like ingress Security Groups do on AWS. In this blog post I’ll show you how you can open up a connection and then connect securely to it using SQL Authentication.

Note: making this change effectively opens your SQL Server up to traffic from the Internet, albeit on a non-standard TCP port. If you don’t want this you should consider using an Azure Virtual Network and a VPN to protect connections between SQL Server and a known location or device. Alternatively you could set up a bastion or jump host that you first RDP to before connecting to SQL Server.

Updated: The release of the Azure SDK 2.0 introduces the concept of ACL on exposed endpoints and the 2.1 SDK exposes the setting of these values via PowerShell (see Set-AzureAclConfig). Awesome!

When you provision a new Virtual Machine it will be created with two default TCP endpoints: Remote Desktop (RDP) and PowerShell. As a first step we need to open access to port 1433 – we can do this using one of the following two methods:

1. Via the Azure Management Portal:

  • Click on Virtual Machines in the left navigation and select your shiny new SQL VM.
  • Click on ENDPOINTS in the top navigation.  You should see a view similar to below:

Azure VM Endpoints

  • Now click the + ADD button at the bottom and select “Add Endpoint”.
  • On the Add Endpoint page complete:
    • Name: SQL TDS
    • Protocol: TCP
    • Public Port: Random number > 49152 (and not already in use on this Cloud Service or VM)
    • Private Port: 1433.
    • Click the tick to save the new endpoint.

2. Via PowerShell with the Azure PowerShell module. Note that you will need to set up your PowerShell environment to know about and connect to your Azure Subscription. More information on this topic can be found on MSDN.

Get-AzureVM -ServiceName "yourcloudservice" -Name "yourvmhostname" |
Add-AzureEndpoint -Name "SQL TDS" -Protocol tcp -LocalPort 1433 -PublicPort 57153 |
Update-AzureVM
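
And if you’re on the 2.1 SDK mentioned in the update above, a rough sketch of locking the new endpoint down to a known address range might look like the following (the office subnet is obviously a placeholder):

# Hypothetical office address range - substitute your own
$acl = New-AzureAclConfig
Set-AzureAclConfig -AddRule -ACL $acl -Action Permit `
    -RemoteSubnet "203.0.113.0/24" -Order 100 -Description "Office only"

Get-AzureVM -ServiceName "yourcloudservice" -Name "yourvmhostname" |
Set-AzureEndpoint -Name "SQL TDS" -Protocol tcp -LocalPort 1433 -PublicPort 57153 -ACL $acl |
Update-AzureVM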

Now that you have completed the above you can connect to your SQL Server using Management Studio and encrypt the connection. Open Management Studio and in the Connect to Server dialog make the following changes:

1. Under “Server name” put the Cloud Service public Virtual IP (VIP) address of your VM (find it on the Dashboard screen for the VM), a comma, and then the Public Port you mapped previously. Your resulting input should look like “123.123.123.123,57153”.

SQL Connection Dialog

2. Click on the Options>> button and on the Connection Properties tab select “Encrypt connection”.

SQL Connection Dialog

3. Finally we need to tell Management Studio to trust the SSL certificate on the server. Click on the “Additional Connection Parameters” tab and enter “TrustServerCertificate=true”. If you don’t do this you will get an error and be unable to connect using encryption.

You should find that you can now connect to the VM.
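
If you’re connecting from a script rather than Management Studio, the same settings translate directly to connection string keywords – a minimal sketch with a placeholder VIP, port and credentials:

# Placeholder VIP, port and credentials - substitute your own
$connectionString = "Server=tcp:123.123.123.123,57153;Database=master;" +
                    "User ID=sqladmin;Password=YourStrongPasswordHere;" +
                    "Encrypt=True;TrustServerCertificate=True"
$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$connection.Open()
$connection.ServerVersion   # quick check that we're talking to the server
$connection.Close()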

I had a look to see if you could use Windows Firewall to restrict the traffic coming into your SQL Server by remote IP, but at first glance it looks like it’s not possible due to the NAT occurring at the cloud service interface. I haven’t had time to inspect the TCP traffic to see what’s coming into the host, but I suspect you can probably create a firewall rule to protect your machine – though as I said up front, use a VPN and Virtual Network if you really want to be protected.


HTH.


SharePoint Online 2013 ALM Practices

SharePoint has always been a bit of a challenge when it comes to structured ALM and developer practices, something Microsoft partially addressed with the release of SharePoint and Visual Studio 2010. Deploying and building solutions for SharePoint 2013 pretty much retains most of the IP from 2010, with the noted deprecation of Sandboxed Solutions (this means they’ll be gone in SharePoint vNext).

As part of the project I’m leading at Kloud at the moment we are rebuilding an intranet to run on SharePoint Online 2013, so I wanted to share some of the Application Lifecycle Management (ALM) processes we’ve been using.

Packaging

Most of the work we have been doing to date has leveraged existing features within the SharePoint core – we have, however, spent time utilising the Visual Studio 2012 SharePoint templates to package our customisations so they can be moved between multiple environments. SharePoint Online still provides support for Sandboxed Solutions and we’ve found that they provide a convenient way to deploy elements that are not developed as Apps. Designer packages can also be exported and edited in Visual Studio and produce a re-deployable package (which result in Sandboxed Solutions).

PowerShell

At the time of writing, the number of PowerShell Cmdlets for managing SharePoint Online is substantially smaller than the number available for on-premises deployments. If you need to modify any element below a Site Collection you are pretty much forced to write custom tooling or perform the tasks manually – we have made a call in some cases to build tooling using the Client Side Object Model (CSOM) and in others to perform tasks manually.
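
To give you a feel for what that tooling looks like, here’s a minimal CSOM-from-PowerShell sketch – it assumes the SharePoint 2013 client assemblies are installed in their usual location and the site URL is a placeholder:

# Load the CSOM assemblies (paths assume the SharePoint 2013 client components are installed)
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll"
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"

# Placeholder site URL - substitute your own tenant
$ctx = New-Object Microsoft.SharePoint.Client.ClientContext("https://yourtenant.sharepoint.com/sites/intranet")
$cred = Get-Credential
$ctx.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($cred.UserName, $cred.Password)

# Read a property below the Site Collection level that the Cmdlets can't reach
$web = $ctx.Web
$ctx.Load($web)
$ctx.ExecuteQuery()
$web.Title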

Development Environment

Microsoft has invested some time in the developer experience around SharePoint Online and now provides free access to an “Office 365 Developer Site” which gives you a single-license Office 365 environment in which to develop solutions. The General Availability of Office 365 Wave 15 (the 2013 suite) sees these sites only being available to businesses holding enterprise (E3 or E4) licenses. Anyone else will need to utilise a 30-day trial tenant.

We have had each team member set up their own site and develop solutions locally prior to rolling them into our main deployment. Packaging and deployment is obviously key here as we need to be able to keep the developer instances in sync with each other, and the easiest way to achieve that is with WSPs that can be redeployed as required.

One other thing we have done around development is to utilise an on-premise setup in a VM to provide developers with a more rapid development experience in some cases (and more transparent troubleshooting). As you mostly stick to the SharePoint CSOM, a lot of your development these days resides in JavaScript, which means you shouldn’t hit any snags from relying on on-premise / full-trust features in your delivered solutions.

Note that the Office 365 Developer Site is a single-license environment which means you can’t do multi-user testing or content targeting. That’s where test environments come into play!

Test Environment

The best way to achieve a more structured ALM approach with Office 365 is to leverage an intermediate test environment – the easiest way for anyone to achieve this is to register for a trial Office 365 tenant. While only technically available for 30 days, this still provides you with the ability to test prior to deploying to your production environment.

Once everything is tested and good to go into production you’re already in a position to know the steps involved in deployment!

As you can see – it’s still not a perfect world for SharePoint ALM, but with a little work you can get to a point where you are at least starting to enforce a little rigour around build and deployment.

Hope this helps!

