In this article, we will create a Lambda function (.NET Core 3.1) and deploy it to AWS as a Docker container. At re:Invent in December 2020, AWS announced container image support for Lambda functions. This helps customers who already have container tooling in place for their development workflows. Lambda supports container images up to 10 GB in size, and AWS provides base images for Lambda development.
We will be using the dotnet CLI for creating, building and deploying the Lambda function. The Amazon.Lambda.Templates NuGet package provides templates to create Lambda functions. To install the Lambda templates, run
- dotnet new -i Amazon.Lambda.Templates
Verify the installation by running
- dotnet new --list
which will display a list of available templates.
Let us create a Lambda function which will be triggered by an S3 file upload event. We will use the lambda.S3 template to create the function.
- dotnet new lambda.S3 --name s3Listener
Open the project folder in VS Code. You will see src and test folders created for the Lambda function. The src folder contains a Function.cs file, which holds the function handler that reads the S3 file metadata from the event object.
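For reference, the handler generated by the lambda.S3 template looks roughly like the following. This is a simplified sketch, not the exact generated code (the template also includes error handling and constructor overloads): it reads the bucket name and object key from the first event record and fetches the object's metadata.

```csharp
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.S3Events;
using Amazon.S3;

namespace s3Listener
{
    public class Function
    {
        private readonly IAmazonS3 S3Client = new AmazonS3Client();

        // Invoked for each S3 event; returns the content type of the uploaded object.
        public async Task<string> FunctionHandler(S3Event evnt, ILambdaContext context)
        {
            var s3Event = evnt.Records?[0].S3;
            if (s3Event == null)
                return null;

            // Fetch metadata for the object that raised the event.
            var response = await this.S3Client.GetObjectMetadataAsync(
                s3Event.Bucket.Name, s3Event.Object.Key);
            return response.Headers.ContentType;
        }
    }
}
```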
We are going to deploy the Lambda function as a container, so we have to build a Docker image from the Lambda code. To create the image, I will add a Dockerfile to the src folder. We will use public.ecr.aws/lambda/dotnet:core3.1 as the base image. The base images contain the Amazon Linux operating system, the runtime for a programming language, dependencies and the Lambda Runtime Interface Client (RIC), which implements the Lambda Runtime API.
For building and publishing the Lambda code artefacts, we will use the mcr.microsoft.com/dotnet/sdk:3.1 image from the Microsoft repository. After the build and publish steps succeed, we will copy the publish output to the Lambda execution directory, i.e. /var/task. Finally, we set CMD to the function handler path, i.e. assembly name::namespace.class name::function name.
- FROM public.ecr.aws/lambda/dotnet:core3.1 AS base
-
- FROM mcr.microsoft.com/dotnet/sdk:3.1 AS build
- WORKDIR /src
- COPY ["s3Listener.csproj", "./"]
- RUN dotnet restore "s3Listener.csproj"
-
- COPY . .
- RUN dotnet build "s3Listener.csproj" --configuration Release --output /app/build
-
- FROM build AS publish
- RUN dotnet publish "s3Listener.csproj" \
- --configuration Release \
- --runtime linux-x64 \
- --self-contained false \
- --output /app/publish \
- -p:PublishReadyToRun=true
-
- FROM base AS final
- WORKDIR /var/task
- COPY --from=publish /app/publish .
- CMD ["s3Listener::s3Listener.Function::FunctionHandler"]
Build and Test
Let us build the image using Docker and run it to test. The Lambda Runtime Interface Client inside the container listens on port 8080, which we will map to port 9000 on the host. We will pass AWS credentials as environment variables; this is required because our code creates an S3 client, and the locally running container has no IAM role to draw credentials from.
- docker build -t s3listener .
- docker run -p 9000:8080 -e AWS_REGION="us-east-1" -e AWS_ACCESS_KEY_ID="AKIA***JNL" -e AWS_SECRET_ACCESS_KEY="FfDlcqZ7************" s3listener
Open another command prompt instance and run the following command to invoke the Lambda function. We pass an S3 event as the data parameter, which contains the S3 file metadata.
- curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" \
-   -d '{
-     "Records": [{
-       "s3": {
-         "bucket": { "name": "example-bucket" },
-         "object": { "key": "test/key" }
-       }
-     }]
-   }'
Deploy
If you see the desired output while testing, let us deploy the function to the AWS environment. Before deploying, set the package type to image in the aws-lambda-tools-defaults.json file. This file is used by the Lambda deployment command to get the deployment parameters.
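The relevant settings in aws-lambda-tools-defaults.json look something like the following. This is a minimal sketch; the exact keys generated by the template may differ, and the memory and timeout values here are illustrative. The important addition is package-type.

```json
{
  "region": "us-east-1",
  "function-name": "s3listener",
  "package-type": "image",
  "function-memory-size": 256,
  "function-timeout": 30
}
```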
- dotnet lambda deploy-function s3listener --function-role lambda-s3Listener
The command will:
- Build the Docker image.
- Create an ECR repository and push the image to it.
- Create the Lambda function using the image URI.
Great! We have deployed a .NET Core based Lambda function to AWS as a container. I will discuss more scenarios and solutions using AWS services in upcoming articles. Until then, stay tuned! :)