Build an Azure Functions Extension in 20 minutes at Sydney Serverless Meetup March 2023
By Simon Waight (Mastodon: @simonwaight)
On Wednesday, March 8, I spoke at the Sydney Serverless Meetup on how you can quickly build an Azure Function Extension and in what situations you should consider doing so.
Scenario
For the purposes of my talk I used a simple scenario: reading a text file from an AWS S3 bucket and displaying its contents in a web browser. The browser invokes an HTTP API exposed by an Azure Function using an HTTP Trigger.
First Implementation
I started by showing one way to achieve a working solution, which is honestly how most people would go about implementing it: simply add the AWS .NET SDK to the Azure Function project, create a new `AmazonS3Client` and download the intended file from S3 directly. In fact, I merely re-used the AWS sample that shows how to do this in .NET. The diagram below shows the full flow.
This is the full implementation of the `PullS3File` Function HTTP API.
```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

namespace samplefunc01
{
    public static class PullS3File
    {
        private static string bucketName = Environment.GetEnvironmentVariable("bucketName");
        private static string keyName = Environment.GetEnvironmentVariable("keyName");
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.GetBySystemName(Environment.GetEnvironmentVariable("regionName"));
        private static IAmazonS3 client;

        [FunctionName("PullS3File")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", Route = null)] HttpRequest req,
            ILogger log)
        {
            log.LogInformation("C# HTTP trigger function received a request.");

            client = new AmazonS3Client(bucketRegion);

            string responseMessage = await ReadObjectDataAsync(bucketName, keyName, log);

            return new OkObjectResult(responseMessage);
        }

        /// <summary>
        /// Read an object's data from an Amazon S3 bucket.
        ///
        /// Essentially the .NET sample from here: https://docs.aws.amazon.com/AmazonS3/latest/userguide/download-objects.html
        /// </summary>
        /// <param name="bucketName">The name of the bucket containing the object.</param>
        /// <param name="keyName">The name of the object.</param>
        /// <param name="log">Logger</param>
        /// <returns>Object data</returns>
        private static async Task<string> ReadObjectDataAsync(string bucketName, string keyName, ILogger log)
        {
            string responseBody = "Failed to read remote file.";
            try
            {
                GetObjectRequest request = new GetObjectRequest
                {
                    BucketName = bucketName,
                    Key = keyName
                };
                using (GetObjectResponse response = await client.GetObjectAsync(request))
                using (Stream responseStream = response.ResponseStream)
                using (StreamReader reader = new StreamReader(responseStream))
                {
                    string contentType = response.Headers["Content-Type"];
                    log.LogInformation("Content type: {0}", contentType);
                    responseBody = reader.ReadToEnd();
                }
            }
            catch (AmazonS3Exception e)
            {
                // If bucket or object does not exist
                log.LogError("Error encountered. Message:'{0}' when reading object", e.Message);
            }
            catch (Exception e)
            {
                log.LogError("Unknown error encountered on server. Message:'{0}' when reading object", e.Message);
            }
            return responseBody;
        }
    }
}
```
You can find the fully working sample on GitHub, including the setup necessary to run it in a Codespace. All you need are AWS credentials (kept as Codespace Secrets) to authorise the AWS SDK. The Azure Function runs locally and doesn't require an Azure Subscription, though you will need to enable the Azurite blob storage emulator (hint: click the "[Azurite blob storage]" text at the bottom of VS Code / the Codespace and wait a few seconds for it to start).
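For reference, local runs like this typically pull their settings from a `local.settings.json` along the following lines. The setting names match the ones the Function reads (`bucketName`, `keyName`, `regionName`); the values shown are placeholders, and `UseDevelopmentStorage=true` points the Functions host at Azurite. The AWS credentials themselves are not stored here: Codespace Secrets surface as environment variables (such as the standard `AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY` pair) that the AWS SDK's default credential chain picks up.

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "bucketName": "my-sample-bucket",
    "keyName": "hello.txt",
    "regionName": "ap-southeast-2"
  }
}
```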
While there is nothing inherently wrong about this approach, I do think there are some questions we could ask:
- How do we declaratively define the file to download?
- Can we abstract AWS primitives away from developers?
- Shouldn’t we write less code with serverless? "Code less"?
The main issue for me is the inevitable duplication of code. As a one-off solution, what we have above is fine. But what if we wanted to reuse this capability across multiple Functions, or expose it to developers who don't (and shouldn't) need to know anything about the AWS ecosystem?
Target Sample
What if our implementation looked like this?
```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Siliconvalve.AwsS3Extension;
using Siliconvalve.AwsS3Extension.Model;

namespace samplefunc02
{
    public static class PullS3FileExtension
    {
        [FunctionName("PullS3FileExtension")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
            [S3TextFile(BucketName = "%bucketName%", FileKeyName = "%keyName%", AwsRegionName = "%regionName%")] AwsTextFile s3FileContents,
            ILogger log)
        {
            log.LogInformation("C# HTTP trigger function received a request.");
            return new OkObjectResult(s3FileContents.Content);
        }
    }
}
```
Wait. Where did all the AWS S3 code go? I'm glad you asked! 🙂
You might have spotted the `S3TextFile` attribute. It's the secret here. This is an Azure Function Extension Binding that I've written. In C#, Bindings are implemented as Attributes, which is a super handy language construct as you can see.

The code also uses a Plain Old C# Object (POCO) in the form of `AwsTextFile` to represent the return value from the binding.
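To illustrate the shape of these two pieces, here is a minimal sketch (not the actual code from my repo): the `[Binding]` and `[AutoResolve]` attributes come from `Microsoft.Azure.WebJobs.Description`, and `[AutoResolve]` is what lets the `%settingName%` syntax resolve values from app settings. Only the `Content` property is confirmed by the sample above; everything else here is assumed.

```csharp
using System;
using Microsoft.Azure.WebJobs.Description;

// Minimal sketch of a custom input binding attribute.
// [Binding] marks this as a Functions binding; [AutoResolve]
// enables %appSetting% resolution for each property.
[Binding]
[AttributeUsage(AttributeTargets.Parameter)]
public sealed class S3TextFileAttribute : Attribute
{
    [AutoResolve]
    public string BucketName { get; set; }

    [AutoResolve]
    public string FileKeyName { get; set; }

    [AutoResolve]
    public string AwsRegionName { get; set; }
}

// POCO handed to the Function by the binding. The sample above
// only uses Content; any other members are up to the extension author.
public class AwsTextFile
{
    public string Content { get; set; }
}
```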
Here's how our implementation now looks.
I am hoping you are already seeing the value to a developer in taking this approach. I have removed all the AWS logic I previously had, and now all my developers need to do is define a few properties and they are ready to go (the execution environment still needs the right AWS credentials to be available, but the developers don't even need to know what those values are). I am still required to add the AWS SDK, as it is a dependency of the Extension I've developed, but my developers don't need to interact with any of the primitives in that library.
The complete, functional, implementation is on GitHub and can happily run in a Codespace.
The Custom Extension
I'm not going to unpack the entire Extension here, even though there isn't actually much code in my implementation. I previously blogged about building complex extensions if you want to read that post, and, like the two sample Azure Functions, you will find the Extension's code in a GitHub repository ready for you to fork and play with. This is probably the best way to understand how it hangs together.
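To give a feel for the moving parts (see the repo for the real code), a WebJobs input binding is wired up via an `IExtensionConfigProvider`. The sketch below is an assumption about the shape rather than a copy of my implementation: `AddBindingRule` and `BindToInput` are the standard extension APIs from the WebJobs SDK, and the actual S3 download logic would live inside the converter.

```csharp
using System;
using Microsoft.Azure.WebJobs.Description;
using Microsoft.Azure.WebJobs.Host.Config;

// Sketch: registers the S3TextFile binding with the Functions runtime.
[Extension("AwsS3")]
public class S3TextFileExtensionConfigProvider : IExtensionConfigProvider
{
    public void Initialize(ExtensionConfigContext context)
    {
        // Bind [S3TextFile(...)] parameters to an AwsTextFile input value.
        // In the real extension the converter calls the AWS SDK to
        // download the object from S3 using the attribute's properties.
        context.AddBindingRule<S3TextFileAttribute>()
               .BindToInput<AwsTextFile>(attr => DownloadFile(attr));
    }

    private static AwsTextFile DownloadFile(S3TextFileAttribute attr)
    {
        // Placeholder for the AmazonS3Client GetObjectAsync call.
        throw new NotImplementedException();
    }
}
```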
For the purpose of this talk I simply took the .NET assembly (DLL) and added it manually as a reference to the second implementation. In a production scenario you would package your Extension as a NuGet package, which makes it easy to distribute and update, and ensures dependencies such as the AWS .NET SDK are also installed for developers.
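As an illustration of what that packaging might involve (the package id here matches the namespace from the sample; the target framework and version numbers are placeholders, not taken from my repo), the Extension's project file could declare something like:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <PackageId>Siliconvalve.AwsS3Extension</PackageId>
    <Version>1.0.0</Version>
    <!-- Produce the .nupkg on every build -->
    <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
  </PropertyGroup>
  <ItemGroup>
    <!-- These dependencies flow to consumers via the NuGet package -->
    <PackageReference Include="Microsoft.Azure.WebJobs" Version="3.0.33" />
    <PackageReference Include="AWSSDK.S3" Version="3.7.305" />
  </ItemGroup>
</Project>
```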
Consuming Extensions in other languages
I briefly touched on this in my talk, but once you have written your extension in C# you can expose its logic to other Function runtime languages (Java, Node.js, Python) by writing and publishing an annotation package (in Java parlance) for the target language. This exposes the extension's triggers, bindings and objects to the consuming language.
Hopefully this has been a really useful exploration of what's possible using the extensibility model for Azure Functions. Along with Extensions, there is also a very well-defined language extension model (also known as a Handler) and the majority of the core Functions runtime, tools and Microsoft-maintained Extensions can all be found on GitHub as well.
Happy days! 😎