Unsurprisingly, I think the way Cloud Computing is transforming the IT industry is also leading to easier ways to learn and develop skills around the Cloud. In this post I'm going to give a rundown of what I think some of the best ways are to start dipping your toe into this space if you haven't already.
Sign up for a free trial
This is easy AND low cost. Turn up to the sign-up page of most major players and you'll get free or low-cost services for a limited period. Sure, you couldn't start the next Facebook at this level, but it will give you enough to start to learn what's on offer. You can run VMs, deploy solutions, utilise IaaS, PaaS and SaaS offerings and generally kick the tyres of the features of each. At the time of writing these are:
- Amazon Web Services (AWS) Free Usage Tier – 12 months: http://aws.amazon.com/free/
- Google App Engine – each app gets a portion of free resource usage:
- Rackspace Cloud – no published information on any form of free resource usage tier.
- Windows Azure Free Trial – 3 months:
- Office 365 – depending on your license level you can get a free developer site set up. If you don't qualify then a 30-day trial is available.
Learn the APIs and use the SDKs
Amazon, Azure, Google, Office 365 and Rackspace each offer some form of remote programmable API (typically presented as REST endpoints). If you're going to move into the Cloud from traditional hosting or system development practices then starting to learn about programmable infrastructure is a must. Getting to grips with the APIs available means leveraging the existing documentation:
- Amazon Web Services: Pretty much every AWS component has its own REST API – the best thing to do is utilise the available documentation to identify the REST APIs you want to use:
- Google App Engine: Again, documentation is your start:
- Rackspace Cloud:
- Windows Azure: Like AWS each component of Azure provides a REST API:
- Office 365: As this platform is a combined offering, each of the products on offer (AD, Exchange, Lync and SharePoint) provides various APIs that can continue to be used in a relatively consistent way with their on-premise equivalents. Documentation is spread about a bit on MSDN but here's a taster:
- Exchange Web Services:
- Lync via Unified Communications Web API (UCWA):
- SharePoint REST API:
If you aren’t a fan of working so close to the wire you can always leverage one of the associated SDKs in the language of your choice:
- Amazon Web Services: .Net, Java, Node.js, PHP, Python and Ruby. Mobile: iOS and Android.
- Google App Engine: Java, Python and Go.
- Rackspace Cloud: .Net, Java, PHP, Python and Ruby.
- Windows Azure: .Net, Java, Node.js, PHP, Python and Ruby. Mobile: Windows Phone (C#), iOS and Android.
The great thing about having .Net support is you can then leverage those SDKs directly in PowerShell and automate a lot of items via scripting.
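To give a flavour of what that looks like, here's a minimal PowerShell sketch that lists the S3 buckets in an AWS account via the AWS SDK for .Net. This is illustrative only – the assembly path, factory method and credential values are assumptions that will vary with your SDK version and install location.

# Load the AWS SDK for .Net assembly (adjust the path to your install)
Add-Type -Path "C:\Program Files (x86)\AWS SDK for .NET\bin\AWSSDK.dll"

# Illustrative credentials - substitute your own access and secret keys
$accessKey = "YOUR_ACCESS_KEY"
$secretKey = "YOUR_SECRET_KEY"

# Create an S3 client and enumerate the buckets in the account
$s3 = [Amazon.AWSClientFactory]::CreateAmazonS3Client($accessKey, $secretKey)
$response = $s3.ListBuckets()
$response.Buckets | ForEach-Object { $_.BucketName }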
Developer Tool Support
While having an SDK is fine there’s also a need to support developers within whatever IDE they happen to be using. Luckily you get support here too:
- Amazon Web Services: Toolkit for Visual Studio (also on NuGet) or for Eclipse:
- Google App Engine: Eclipse:
- Rackspace Cloud: there is a plugin for Visual Studio 2010 but the page is offline at the time of writing and there's no news on it being updated to support Visual Studio 2012.
- Windows Azure: Visual Studio can build Cloud Applications using project templates and there is also support for Eclipse:
Source Control and Release Management
The final piece of the puzzle, and one not necessarily tied to the individual Cloud providers, is where to put your source code and how to deploy it.
- Amazon Web Services: You can leverage Elastic Beanstalk for deployment purposes (this is a part of the Visual Studio and Eclipse toolkits).
- Google App Engine: Depending on language you have a few options for auto-deploying applications using command-line tools from build scripts. Eclipse tooling (covered above) also provides deployment capabilities.
- Rackspace Cloud: no publicly available information on build and deploy.
- Windows Azure: You can leverage deployment capabilities out of Visual Studio (probably not the best solution though) or utilise the in-built Azure platform support to deploy from a range of hosted source control providers such as BitBucket (Git or Mercurial), Codeplex, Dropbox (yes, I know), GitHub or TFS. A really strong showing here from the Azure platform!
So, there we have it – probably one of the most link-heavy posts you’ll ever come across – hopefully the links will stay valid for a while yet! If you spot anything that’s dead or that is just plain wrong leave me a comment.
SharePoint has always been a bit of a challenge when it comes to structured ALM and developer practices, which is something Microsoft partially addressed with the release of SharePoint 2010 and Visual Studio 2010. Building and deploying solutions for SharePoint 2013 pretty much retains most of the IP from 2010, with the noted deprecation of Sandboxed Solutions (this means they'll be gone in SharePoint vNext).
As part of the project I'm leading at Kloud at the moment we are rebuilding an intranet to run on SharePoint Online 2013, so I wanted to share some of the Application Lifecycle Management (ALM) processes we've been using.
Most of the work we have been doing to date has leveraged existing features within the SharePoint core – we have, however, spent time utilising the Visual Studio 2012 SharePoint templates to package our customisations so they can be moved between multiple environments. SharePoint Online still provides support for Sandboxed Solutions and we've found that they provide a convenient way to deploy elements that are not developed as Apps. Designer packages can also be exported and edited in Visual Studio to produce a re-deployable package (which results in a Sandboxed Solution).
At the time of writing, the number of PowerShell cmdlets for managing SharePoint Online is substantially smaller than for on-premise. If you need to modify any element below a Site Collection you are pretty much forced to write custom tooling or perform the tasks manually – we have made a call in some cases to build tooling using the Client Side Object Model (CSOM) and in others to perform tasks manually.
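As an example of that tooling approach, here's a minimal CSOM sketch in PowerShell that connects to a SharePoint Online site and lists the lists in a web. It's illustrative only – the assembly paths, site URL and account are assumptions, it needs the SharePoint client assemblies available locally, and PowerShell 3.0 or later.

# Load the SharePoint client assemblies (adjust paths to where you've placed them)
Add-Type -Path "C:\Libs\Microsoft.SharePoint.Client.dll"
Add-Type -Path "C:\Libs\Microsoft.SharePoint.Client.Runtime.dll"

# Illustrative site and account details
$siteUrl  = "https://contoso.sharepoint.com/sites/intranet"
$username = "admin@contoso.onmicrosoft.com"
$password = Read-Host -AsSecureString "Password"

# Connect using SharePoint Online credentials
$ctx = New-Object Microsoft.SharePoint.Client.ClientContext($siteUrl)
$ctx.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($username, $password)

# Retrieve and display the lists in the web - the kind of sub-Site Collection task the cmdlets don't cover
$web = $ctx.Web
$ctx.Load($web.Lists)
$ctx.ExecuteQuery()
$web.Lists | ForEach-Object { $_.Title }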
Microsoft has invested some time in the developer experience around SharePoint Online and now provides you with free access to an "Office 365 Developer Site" which gives you a single-license Office 365 environment in which to develop solutions. The General Availability of Office 365 Wave 15 (the 2013 suite) sees these sites only being available to businesses holding enterprise (E3 or E4) licenses. Anyone else will need to utilise a 30-day trial tenant.
We have had each team member set up their own site and develop solutions locally prior to rolling them into our main deployment. Packaging and deployment is obviously key here as we need to be able to keep the developer instances in sync with each other, and the easiest way to achieve that is with WSPs that can be redeployed as required.
Note that the Office 365 Developer Site is a single-license environment which means you can’t do multi-user testing or content targeting. That’s where test environments come into play!
The best way to achieve a more structured ALM approach with Office 365 is to leverage an intermediate test environment. The easiest way for anyone to achieve this is to register for a trial Office 365 tenant – while only technically available for 30 days, this still provides you with the ability to test prior to deploying to your production environment.
Once everything is tested and good to go into production you’re already in a position to know the steps involved in deployment!
As you can see – it’s still not a perfect world for SharePoint ALM, but with a little work you can get to a point where you are at least starting to enforce a little rigour around build and deployment.
Hope this helps!
- Provision an Office 365 Developer Site. (Note: no longer valid with GA of Wave 15).
- Provision a Developer Site using your existing Office 365 subscription.
- View the SharePoint Online PowerShell cmdlets.
The increasing adoption of Office 365 is driving a lot of traditional development on the SharePoint platform online. As you might expect there are some big differences between on-premise and cloud in the ways you achieve customisation and implement features.
Traditionally, timer jobs played a large part in the way background services could be implemented in SharePoint. You will find that timer jobs are absent in SharePoint Online and that the alternative is to leverage the workflow capabilities of SharePoint to achieve the same sort of outcome.
A fairly typical scenario for timed jobs is to poll external services for some form of information to be cached locally in SharePoint. The good news is that the standard Call HTTP Web Service Action of SharePoint 2013 workflows executes the same in Office 365 as it does on-premise.
There is a blog post and demo on MSDN that you can use to test this out for yourself.
Gotcha: and a fairly big (and non-obvious) one: this workflow Action can only handle calls to web services that return responses of type text/html, text/plain or application/json. You will find you are unable to accept and process text/xml responses, and because the Action will only pipe the response from a web service call into a Dictionary object you can't even do any string manipulation foo on the result if it's not one of the three accepted response types!
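If you're unsure what a service you plan to call actually returns, a quick way to check the Content-Type up front is the sketch below – a hedged example that needs PowerShell 3.0 or later, and the URL is illustrative only.

# Check the Content-Type header a web service returns before wiring it into a workflow
$response = Invoke-WebRequest -Uri "https://api.example.com/data" -UseBasicParsing
$response.Headers["Content-Type"]   # must be text/html, text/plain or application/json for the Action to cope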
Hope this post saves you some time!
Like a lot of people who've worked heavily with TFS, you may not have spent much time working with Git or any of its DVCS brethren.
Firstly, a few key things:
1. Read and absorb the tutorial on how best to work with Git from the guys over at Atlassian.
2. Install the Visual Studio 2012 Update 2 (currently in CTP, possibly in RTM by the time you read this).
http://www.microsoft.com/en-us/download/details.aspx?id=36539 (grab just vsupdate_KB2707250.exe)
3. Install the Git Tools for Visual Studio http://visualstudiogallery.msdn.microsoft.com/abafc7d6-dcaa-40f4-8a5e-d6724bdb980c
4. Install the most recent Git client software from http://git-scm.com/downloads
5. Set your default Visual Studio Source Control provider to be “Microsoft Git Provider”.
6. Setup an account on Team Foundation Service (https://tfs.visualstudio.com/), or if you’re lucky enough maybe you can even do this with your on-premise TFS instance now…
7. Make sure you enable and set alternative credentials in your TFS profile:
8. Setup a project that uses Git for source control.
At this stage you have a couple of options – you can clone the repository using Visual Studio’s Git support
OR you can do it right from the commandline using the standard Git tooling (make sure you’re at a good location on disk when you run this command):
git clone https://thesimpsons.visualstudio.com/defaultcollection/_git/bart milhouse
Cloning into 'milhouse'...
Username for 'https://thesimpsons.visualstudio.com/': homer
Password for 'https://thesimpsons.visualstudio.com/':
Warning: You appear to have cloned an empty repository.
I tend to set up a project directory hierarchy early on, and with Git support in Visual Studio I'd say it's even more important as you don't have a Source Control Explorer view of the world and Visual Studio can quickly create a mess when adding lots of projects or solution elements. The challenge is that (as of writing) Git won't track empty folders, and the easiest workaround is to create your folder structure and drop an empty file into each folder.
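If you have PowerShell handy, a quick way to do that is the sketch below – it assumes you've already created the folder hierarchy under the repository root and simply drops a placeholder file into every folder (I've called it .gitkeep, but the name is arbitrary). The commit itself still needs to happen at the Git command line, as below.

# From the root of the local repository: put an empty placeholder file in every folder so Git will track it
Get-ChildItem -Recurse | Where-Object { $_.PSIsContainer } | ForEach-Object {
    New-Item -ItemType File -Path (Join-Path $_.FullName ".gitkeep") -Force | Out-Null
}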
Now this is where Visual Studio's Git tools won't help you – they have no concept of files / folders held outside of Visual Studio solutions, so you will need to use the Git tools at the commandline to effect this change. Once you have your hierarchy set up with empty files in each folder, at a command prompt change into the root of your local repository and then do the following.
git add -A
git commit -m "Hmmmm donuts."
Now, at this point, if you issue “git push” you may experience a problem and receive this message:
No refs in common and none specified; doing nothing.
Perhaps you should specify a branch such as ‘master’.
Which, apart from being pretty good English (if we ignore 'refs'), is pretty damn useless.
How to fix? Like this:
git push origin master
This explicitly pushes your local master branch to the remote, and your newly populated hierarchy should be pushed to TFS, er Git, er TFS. You get the idea. Then the others on your team are able to clone the repository (or perform a pull) and will receive the updates.
Update: A big gotcha that I've found, and one that results in a subtle issue, is this: if you have a project with spaces in its title (e.g. "Big Web") then Git happily URL-encodes that and will write the folder to disk as "Big%20Web", which is all fine and dandy until you try to compile anything in Visual Studio. Then you'll start getting CS0006 compilation errors (unable to find metadata files). The fix is to override the target folder when cloning the repository so the folder is validly named (in my example above this is what checks out the "bart" project into the local "milhouse" folder).
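By way of a hedged example (reusing the hypothetical "Big Web" project name above – the URL is illustrative), specifying the target folder explicitly avoids the URL-encoded name on disk:

git clone https://thesimpsons.visualstudio.com/defaultcollection/_git/Big%20Web BigWeb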
In this day of elastic on-demand compute resource it can be easy to lose focus on how best to leverage a smaller footprint when it's so easy to add capacity. Having spent many a year working on the web it's interesting to see how development frameworks and web infrastructure have matured to better support developers in delivering scalable solutions for not much effort. Still, it goes without saying that older applications don't easily benefit from more modern tooling, and even newer solutions sometimes fail to leverage these tools because the solution architects and developers just don't know about them. In this blog post I'll try to cover off some of them and provide background as to why they're important.
Peak hour traffic
We’ve all driven on roads during peak hour – what a nightmare! A short trip can take substantially longer when the traffic is heavy. Processes like paying for tolls or going through traffic lights suddenly start to take exponentially longer which has a knock-on effect to each individual joining the road (and so on). I’m pretty sure you can see the analogy here with the peaks in demand that websites often have, but, unlike on the road the web has this problem two-fold because your request generates a response that has to return to your client (and suffer a similar fate).
At a very high level the keys to better performance on the web are:
- ensure your web infrastructure takes the least amount of time to handle a request
- make sure your responses are streamlined to be as small as possible
- avoid forcing clients to make multiple round-trips to your infrastructure.
All requests (and responses) are not equal
This is subtle and not immediately obvious if you haven't seen how clients with different latencies can affect your website. You may have built a very capable platform to service a high volume of requests but you may not have considered the time it takes for those requests to be serviced.
What do I mean?
A practical example is probably best and is something you can visualise yourself using your favourite web browser. In Internet Explorer or Chrome open the developer tools by hitting F12 on your keyboard (in IE make sure to hit "Start Capturing" too) – if you're using Firefox, Safari, et al. I'm sure you can figure it out. Once open, visit a website you know well and watch the list of resources that are loaded. Here I'm hitting Google's Australia homepage.
I'm on a very low-latency cable connection so I get a response in milliseconds.
This means that despite the Google homepage sending me almost 100 KB of data it serviced my entire request in under half a second (I also got some pre-cached goodness thrown in, which makes the response quicker still). The real question beyond this is: what is that time actually made up of? Let Chrome explain:
My client (browser) spent 5ms setting up the connection, 1ms sending my request (GET), 197ms waiting for Google to respond at all, and then 40ms receiving the response. If this was a secure connection there would be more setup as my client and the server do all the necessary encryption / decryption to secure the message transport.
As you can imagine, if I was on a high latency connection each one of these values could be substantially higher. The net result on Google’s infrastructure would be:
- It takes longer to receive the full request from my client after connection initialisation
- It takes longer to stream the full response from their infrastructure to my client.
Both of which mean my slower connection would use Google's server resources for longer, thus stopping those resources from servicing another request.
As you can see this effectively limits the infrastructure to running at lower capacity than it really could, and it also demonstrates why load testing requires test agents running at different latencies so you can realistically gauge what your capacity is.
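If you want a rough feel for this from your own machine, the PowerShell sketch below times a single request and reports the payload size – illustrative only (it needs PowerShell 3.0 or later and the URL is just an example), and nowhere near a substitute for proper load testing from agents at different latencies.

# Time a single request and report how many bytes came back
$url = "http://www.google.com.au/"
$timer = [System.Diagnostics.Stopwatch]::StartNew()
$response = Invoke-WebRequest -Uri $url -UseBasicParsing
$timer.Stop()
"{0} ms, {1} bytes" -f $timer.ElapsedMilliseconds, $response.RawContentLength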
Some things you can do
Given you have no control over how or where the requests will come from, there are a few things you can do to help reduce the impact that high-latency clients will have on your site.
- Reduce the number of requests or round trips: often overlooked but increasingly easy to achieve. The ways you can reduce requests include:
- Use a CDN for resources: Microsoft and Google both host jQuery (and various jQuery plugins) on their CDNs. You can leverage these today with minimal effort. Avoid issues with SSL requests by mapping the CDN using a src attribute similar to "//ajax.googleapis.com/ajax/libs/jquery/1.6.1/jquery.min.js" (without the http: prefix). Beyond jQuery, push static images, CSS and other assets to a CDN (regardless of provider) – cost should be no big issue for most scenarios.
- Use CSS Sprites: many moons ago each individual image reference in CSS would be loaded as an individual request to your server. While you can still do this, the obvious net effect is the need to request multiple assets from the server. CSS sprites combine multiple images into one image and then utilise offsets in CSS to show the right section of the sprite. The upside is also client-side caching: any image reference in that sprite will be serviced very quickly.
- Reduce the size of the resources you are serving using these approaches:
- Compress imagery: yes, yes, sounds like the pre-2000 web. Know what? It hasn’t changed. This does become increasingly difficult when you have user generated content (UGC) but even there you can provide server-side compression and resizing to avoid serving multi-MB pages!
- Use GZIP compression: there is a trade-off here – under load, can your server cope with the compression demands? Does the web server you're using support GZIP of dynamic content? This change, while typically an easy one (it's on or off on the server), requires testing to ensure other parts of your infrastructure will support it properly (see the quick check after this list).
- Ensure you service requests as quickly as possible – this is typically where most web developers have experience and where a lot of time is spent tuning resources such as databases and SANs to ensure that calls are as responsive as possible. This is a big topic all on its own so I’m not going to dive into it here!
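On the GZIP point above, a quick way to confirm whether a server is actually compressing its responses is the sketch below – a hedged example using .Net's HttpWebRequest from PowerShell, with an illustrative URL.

# Ask for GZIP and check whether the server honoured it
$request = [System.Net.WebRequest]::Create("http://www.example.com/")
$request.Headers.Add("Accept-Encoding", "gzip")
$response = $request.GetResponse()
$response.Headers["Content-Encoding"]   # expect "gzip" if compression is enabled
$response.Close()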
If you're a bit lost as to where to start it can pay to use tools like YSlow from Yahoo! or PageSpeed from Google – these will give you clear guidance on areas to start working on. From there it's a matter of determining whether you need to make code or infrastructure changes (or both) to create a site that can scale to more traffic without necessarily needing to obtain more compute power.
Hope you’ve found this useful – if you have any tips, suggestions or corrections feel free to leave them in the comments below.
I saw a LinkedIn post today from Matt Barrie (head of freelancer.com) in which he calls for the end of the ACS due to its questionable relevancy and the way in which it operates its Skills Assessment program. You should have a read.
So why am I blogging about it?
One item called out in the post is the sub-heading attached to each of the specialisations "Network & Communications", "Software Engineer" and "Electrical & Electronic Engineering". Sure, they're simplifications and questionably accurate, but for most people of any age they are as detailed as needed to provide a flavour of what each is about.
I think that as technology professionals and as citizens in a very tech-savvy world we assume the detail of what we do for a job can be explained to and comprehended by most of the population around us. While this may be true to a degree, I bet if you asked most non-tech people you know to explain what you do for a job (without you first giving them a detailed refresher) they may explain it as:
“Writes software for computers”
“Looks after networking at company X”
“Helps people access the internet”.
Sure, they’re not detailed and they most likely fail to capture in any way the complexity of your job. But they are the way those people have understood what you do for a job and most likely reflect how most people would interpret it.
As professionals we should not be insulted by this.
A key role for the ACS is to help drive the next generation of professionals into our businesses, whatever their background. The ACS simplifies the descriptions because it must – you cannot explain a new concept in fewer characters than a Tweet.
Here’s a challenge – come up with a sub-heading that describes those specialisations in terms that the general public can understand, that captures the job’s main purpose and that fits in that space. Then Tweet it to the ACS.