One of my gripes with the technological developments of the past couple of years is that the names bestowed upon them have not been great. Microservices seem to indicate they have something to do with size (spoiler alert: they don’t) and serverless architectures suggest they don’t need a server (spoiler alert: they do). But poor naming should not sour you on their usefulness. If nothing else, they are a reminder to never stop thinking for yourself.

That said, I don’t want to talk about the name for serverless architectures. I don’t even want to talk about anything related to serverless architecture. The serverless label is merely my bridge to what I do want to talk about: the portability of what is best described as “serverless computing”, often referred to as Functions.

While I might be oversimplifying things, I like to think of serverless computing as a runtime that you can simply drop some code into, and that code will be executed when certain criteria are met (like an incoming HTTP request). To validate this interpretation, I figured I’d put it to the test: write a bit of ‘serverless’ code and see how much work it would be to migrate it from one cloud to the next. In my case I wanted to write a .Net Core (2.1) Azure Function and move it over to an AWS .Net Core 2.1 Lambda Function… how hard could it be?

I set out to create my Functions and so, after figuring out what was needed to get everything working in Microsoft’s Visual Studio, I ended up with two ‘vanilla’ Functions, one from each platform.

The vanilla “HelloWorld” AWS Function:

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.Net.Http;
using Newtonsoft.Json;
using Amazon.Lambda.Core;
using Amazon.Lambda.APIGatewayEvents;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace HelloWorld
{
    public class Function
    {
        private static readonly HttpClient client = new HttpClient();

        private static async Task<string> GetCallingIP()
        {
            client.DefaultRequestHeaders.Accept.Clear();
            client.DefaultRequestHeaders.Add("User-Agent", "AWS Lambda .Net Client");

            var msg = await client.GetStringAsync("http://checkip.amazonaws.com/").ConfigureAwait(continueOnCapturedContext: false);

            return msg.Replace("\n", "");
        }

        public async Task<APIGatewayProxyResponse> FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
        {
            var location = await GetCallingIP();

            var body = new Dictionary<string, string>
            {
                { "message", "hello world" },
                { "location", location }
            };

            return new APIGatewayProxyResponse
            {
                Body = JsonConvert.SerializeObject(body),
                StatusCode = 200,
                Headers = new Dictionary<string, string> { { "Content-Type", "application/json" } }
            };
        }
    }
}

The vanilla Azure Function:

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

namespace FunctionAppExperiment
{
    public static class Function1
    {
        [FunctionName("Function1")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
            ILogger log)
        {
            log.LogInformation("C# HTTP trigger function processed a request.");

            string name = req.Query["name"];

            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            dynamic data = JsonConvert.DeserializeObject(requestBody);
            name = name ?? data?.name;

            return name != null
                ? (ActionResult)new OkObjectResult($"Hello, {name}")
                : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
        }
    }
}

Let’s examine what we have to work with. Those familiar with .Net Core will see that both use libraries specific to their respective platforms: the Lambda uses a couple of Amazon.Lambda.* packages and the Azure Function requires some Microsoft.Azure.* ones.

What these packages represent are the differences in how the respective runtimes interact with the code. Commonly referred to as triggers, these definitions describe how an incoming event is translated into a .Net Core object to work with. In both cases the incoming trigger is an HTTP event. On the Azure end this results in an HttpRequest object called req; on the AWS end you get the wonderfully named APIGatewayProxyRequest object called apigProxyEvent. These objects are similar in a lot of ways, but not similar enough to make code easily portable between them. So dropping the same code into these different runtimes will not work. Well, not without doing at least a little additional work.

To help us forward, the AWS Lambda code shows the way. While the Azure Function consists of a single literal function, the Lambda splits its code between handling the incoming trigger and the actual logic. This division is the key to properly migrating between platforms. Of course, the platforms involved need to run the same version of the language, but at the time of writing this seems to be the case: both Azure and AWS specifically support .Net Core 2.1, even though newer versions of .Net Core exist and Azure even offers a newer Functions runtime in preview. If migration is a concern, it seems wise to stick to the LTS (long-term support) release of the language you’re interested in.

The trick in the division comes down to what is basically a best practice already: separating the functional logic from its interface. The interface is the part that handles the interaction with the runtime; in this case, the APIGatewayProxyRequest and HttpRequest objects. The key is to take from these objects what you need to run your logic (like the appropriate HTTP headers and query parameters, but also environment-specific values like connection strings) and pass that data to a separate method. Once the logic finishes, its return values need to be processed by the interface part to meet the runtime’s requirements (an APIGatewayProxyResponse and an IActionResult for AWS and Azure respectively in this example). In doing so, migrating the logic becomes a true copy-and-paste affair.
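As a sketch of what that split could look like (the names GreetingLogic, BuildGreeting, AzureHandler and AwsHandler are my own inventions, not part of either SDK, and the two handlers are shown in one listing for brevity; in practice each would live in its own project referencing its platform’s packages):

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Amazon.Lambda.Core;
using Amazon.Lambda.APIGatewayEvents;

namespace PortableFunction
{
    // The portable part: plain .Net in, plain .Net out. No platform-specific types.
    public static class GreetingLogic
    {
        public static string BuildGreeting(string name)
        {
            return name != null
                ? $"Hello, {name}"
                : "Please pass a name on the query string";
        }
    }

    // Azure interface: extracts the input from HttpRequest, wraps the result in an IActionResult.
    public static class AzureHandler
    {
        [FunctionName("Greet")]
        public static IActionResult Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)] HttpRequest req)
        {
            string name = req.Query["name"];
            return new OkObjectResult(GreetingLogic.BuildGreeting(name));
        }
    }

    // AWS interface: extracts the input from APIGatewayProxyRequest, wraps the result
    // in an APIGatewayProxyResponse.
    public class AwsHandler
    {
        public APIGatewayProxyResponse FunctionHandler(APIGatewayProxyRequest apigProxyEvent, ILambdaContext context)
        {
            string name = null;
            apigProxyEvent.QueryStringParameters?.TryGetValue("name", out name);

            return new APIGatewayProxyResponse
            {
                Body = GreetingLogic.BuildGreeting(name),
                StatusCode = 200
            };
        }
    }
}
```

Migrating now means copying GreetingLogic over verbatim and rewriting only the handful of lines in the handler.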

In closing, the idea that you can take a bit of code and drop it into any old runtime isn’t true. There is still a specific runtime underneath, and you’ll have to do some work to make your bit of code run in it. But if you take care to properly split interface and logic, you should be able to do so with relative ease.
