TEMPLATE: Deploy Lambda Function with SQS Trigger Using GitHub Actions

This project uses a GitHub Actions workflow to deploy the Lambda function to AWS. Depending on the branch the code was pushed to, the function is deployed either as a production or a stage version (function name suffix -prod or -stage) and to one of two regions. Supported branches are eu-prod, eu-stage, us-prod and us-stage; the eu-* branches deploy to eu-central-1 and the us-* branches to us-west-2.

Getting Started

  1. Create an AWS user for deployment or use an existing one, e.g. fp-lambda-sqs-worker. For the necessary permissions see AWS Deployment User.
  2. Add the secrets to the GitHub repo - for a list see Github Secrets.
  3. ECR Repository
    1. Create a new repository named according to the name in package.json
    2. Add permissions to the repository so that Lambda is able to pull the images (see: ECR Repository)
  4. Role for lambda function
    1. Create a role for the Lambda function or use an existing one, e.g. Lambda-SQS-DB-Worker. For the necessary permissions see Lambda Role.
    2. Enter the ARN of the Lambda role in the workflow (for specific instructions see Lambda Role)
  5. Push to GitHub on one of the supported branches.
  6. The workflow will run and...
    1. build the Docker image
    2. push it to ECR
    3. create or update the Lambda function
  7. After the deployment has finished you can create an SQS trigger for the Lambda function (see the sketch after this list). This is only necessary the first time, after the function is created; when a function is updated, its triggers persist.
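
One way to create the SQS trigger from step 7 is an event source mapping via the AWS CLI. The queue ARN, function name and region below are placeholders and have to be replaced with your own values:

aws lambda create-event-source-mapping \
  --region eu-central-1 \
  --function-name my-function-stage \
  --event-source-arn arn:aws:sqs:eu-central-1:123456789012:my-queue \
  --batch-size 10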

Project

TS Source

The TypeScript source code can be found in ./src. It is compiled into ./dist. The source can be compiled manually with npm run build. For development it can be helpful to compile in watch mode with npm run watch.
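
The Dockerfile at the end of this document sets the handler to dist/index.handler, so src/index.ts is expected to export a function named handler. A minimal sketch of such a handler (the SQS record handling shown here is only an assumption, adapt it to your use case):

// src/index.ts - hypothetical minimal handler
export const handler = async (event: { Records?: { body: string }[] }) => {
    // iterate over the SQS records contained in the event
    for (const record of event.Records ?? []) {
        console.log("received message:", record.body);
    }
    return { statusCode: 200 };
};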

Docker

The lambda function is built into a docker image during the deployment.

Build Docker Image

A docker image can be built with npm run docker:build. This compiles the code and builds an image named after the npm package. In theory this produces exactly the same image as the build process in the GitHub Actions workflow.
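
For reference, the docker:build script from package.json (reproduced at the end of this document) expands to a plain docker build; assuming the package name lambda-docker-image it is equivalent to:

docker build -t lambda-docker-image .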

Run Docker Image

Environment variables for the Lambda function, among other things, are provided through a docker.env file. Create one in the root of the project, for example:

MSSQL_PASSWORD=the_MSSQL-password
PG_PASSWORD=the_PG-password

Run Full Image

After the docker image is built it can be run with npm run docker:start. The container then exposes the runtime interface client on port 9000.

Run Image In Dev Mode

A development version can be run with npm run dev. This runs the base image of our Dockerfile and mounts the project root folder (incl. node_modules) into the Lambda task root. The entrypoint is overridden and set to node, which executes watch.js - a custom script that watches for file changes in the dist directory and restarts the runtime interface client when they occur. It is advised to run npm run watch simultaneously in another terminal to watch for changes in the TypeScript sources and automatically compile them into dist, which in turn triggers watch.js. To test your function, invoke it as described in Invoke Lambda Function.
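
For reference, the dev script from package.json (reproduced at the end of this document) boils down to running the Lambda base image with the project mounted read-only and the entrypoint overridden:

docker run -p 9000:8080 --env-file docker.env \
  -v $(pwd):/var/task:ro \
  --entrypoint node amazon/aws-lambda-nodejs watch.js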

Run Image In Dev Mode With Packages Being Installed

Running the image with node_modules mounted into the container like in "the normal dev mode" can lead to problems if the project uses packages that are compiled for a specific architecture. Running the image with npm run dev:installOnDocker does not mount the whole project folder into the Lambda task root but only specific files and directories. The important difference is that the node_modules folder is not mounted. Instead, npm install is executed after the container starts and before watch.js is launched. To do this the entrypoint is set to sh and the cmd to devSetup.sh. This method differs from the "full"/production image in that all packages are installed (incl. dev dependencies).

Mounted files and directories (all read-only, see the dev:installOnDocker script in package.json): ./dist, ./package.json, ./package-lock.json, ./config.json, ./devSetup.sh and ./watch.js.

Invoke Lambda Function

A function in a running container that exposes the runtime interface client on port 9000 (see: Run Full Image or Run Image In Dev Mode) can be invoked in the following ways:

# invoke with json object
npm run test -- '{"test":"me"}'

# invoke with file - the @ symbol indicates that a filename follows
npm run test -- @event.example.json

Resource Configuration

Examples and explanations regarding the different resources.

AWS Deployment User

A user with programmatic access is needed to deploy the function to Lambda. It is used in the GitHub workflow to create and update resources via the AWS CLI. The policy JSON of an example user can be found below. The access key ID and secret access key are provided to the workflow through Github Secrets.

Example IAM policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GetAuthorizationToken",
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AllowPush",
            "Effect": "Allow",
            "Action": [
                "ecr:PutImage",
                "ecr:InitiateLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchCheckLayerAvailability",
                "ecr:DescribeRepositories"
            ],
            "Resource": "arn:aws:ecr:eu-central-1:151879656186:repository/*"
        },
        {
            "Sid": "Lambda",
            "Effect": "Allow",
            "Action": [
                "lambda:AddPermission",
                "lambda:CreateFunction",
                "lambda:DeleteFunction",
                "lambda:GetFunction",
                "lambda:GetFunctionConfiguration",
                "lambda:ListTags",
                "lambda:RemovePermission",
                "lambda:TagResource",
                "lambda:UntagResource",
                "lambda:UpdateFunctionCode",
                "lambda:UpdateFunctionConfiguration"
            ],
            "Resource": [
                "arn:aws:lambda:*:*:function:*"
            ]
        },
        {
            "Sid": "Lambdalistfunctions",
            "Effect": "Allow",
            "Action": [
                "lambda:ListFunctions"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Sid": "IAM",
            "Effect": "Allow",
            "Action": [
                "iam:AttachRolePolicy",
                "iam:DetachRolePolicy",
                "iam:GetRole",
                "iam:TagRole"
            ],
            "Resource": [
                "arn:aws:iam::*:role/*"
            ]
        },
        {
            "Sid": "IAMPassRole",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": "lambda.amazonaws.com"
                }
            }
        }
    ]
}

Lambda Role

The Lambda function needs a role attached to it. The role needs at least some basic permissions. This is an example of a basic role that grants permission to create CloudWatch logs, poll from queues (i.e. get triggered by messages) and access a VPC to connect to a database. A sketch of creating such a role with the AWS CLI follows the list below.

AWS managed policies attached to role:

  • AWSLambdaBasicExecutionRole
  • AWSLambdaSQSQueueExecutionRole
  • AWSLambdaVPCAccessExecutionRole
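
As a rough sketch, such a role could be created and the managed policies attached with the AWS CLI as shown below. The role name matches the example above; the policy ARNs assume the usual service-role path of the AWS managed policies:

# trust policy so the Lambda service can assume the role
aws iam create-role \
  --role-name Lambda-SQS-DB-Worker \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'

# attach the AWS managed policies listed above
for POLICY in AWSLambdaBasicExecutionRole AWSLambdaSQSQueueExecutionRole AWSLambdaVPCAccessExecutionRole; do
  aws iam attach-role-policy \
    --role-name Lambda-SQS-DB-Worker \
    --policy-arn arn:aws:iam::aws:policy/service-role/$POLICY
done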

Replace the < LAMBDA ROLE ARN > placeholder shown in the snippet below in the workflow file:

- name: create function
  if: ${{ steps.check-func.outputs.function != steps.package-name.outputs.name-suff}}
  run: |
    aws lambda create-function \
    --region ${{steps.region.outputs.region}} \
    --function-name ${{steps.package-name.outputs.name-suff}} \
    --package-type Image \
    --code ImageUri=${{steps.build-image.outputs.image}} \
    --role < LAMBDA ROLE ARN > \
    --environment '{
      "Variables": {
        "PG_PASSWORD": "${{steps.env-vars.outputs.pgpass}}",
        "MSSQL_PASSWORD": "${{steps.env-vars.outputs.mssqlpass}}"
      }
    }'

Github Secrets

name                      default
------------------------  -------
AWS_ACCESS_KEY_ID         null
AWS_SECRET_ACCESS_KEY     null
EU_PROD_PG_PASSWORD       null
EU_PROD_MSSQL_PASSWORD    null
EU_STAGE_PG_PASSWORD      null
EU_STAGE_MSSQL_PASSWORD   null
US_PROD_PG_PASSWORD       null
US_PROD_MSSQL_PASSWORD    null
US_STAGE_PG_PASSWORD      null
US_STAGE_MSSQL_PASSWORD   null

NOTE: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are always required. For deploying to eu-stage only the EU_STAGE_* secrets are necessary; the same applies to the other branches respectively. If they are not set the function will still be deployed, but the env vars will be set to an empty string. The env vars for the function are always exposed as PG_PASSWORD and MSSQL_PASSWORD, no matter which branch has been pushed to.
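
One way to add these secrets, assuming the GitHub CLI is installed and authenticated for the repository (the values are placeholders):

gh secret set AWS_ACCESS_KEY_ID --body "AKIA..."
gh secret set AWS_SECRET_ACCESS_KEY --body "..."
gh secret set EU_STAGE_PG_PASSWORD --body "the_PG-password"
gh secret set EU_STAGE_MSSQL_PASSWORD --body "the_MSSQL-password"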

ECR Repository

Repository permission statement:

{
      "Sid": "LambdaECRImageRetrievalPolicy",
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": [
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ]
}

This allows all Lambda functions to pull images from this repository. For help on how to set a repository policy see the AWS User Guide.
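
Assuming the statement above is saved in a policy document (a JSON file with a Statement array) named ecr-lambda-policy.json, it could be applied with the AWS CLI like this - region, repository name and file name are placeholders:

aws ecr set-repository-policy \
  --region eu-central-1 \
  --repository-name lambda-docker-image \
  --policy-text file://ecr-lambda-policy.json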

Github Actions Workflow

on:
  push:
    branches:
      - eu-prod
      - eu-stage
      - us-prod
      - us-stage

jobs:
  build-deploy:
    runs-on: ubuntu-latest
    if: "! contains(github.event.head_commit.message, '[skip ci]')"
    steps:
      - name: get region
        id: region
        run: |
          REG=$([[ ${{github.ref}} == *"eu"* ]] && echo eu-central-1 || echo us-west-2)
          echo "::set-output name=region::$REG"
      - name: get suffix
        id: suffix
        run: |
          SUF=$([[ ${{github.ref}} == *"prod"* ]] && echo prod || echo stage)
          echo "::set-output name=suffix::$SUF"
      - name: get correct environment vars
        id: env-vars
        run: |
          PG_PASSWORD=$( \
            [[ ${{github.ref}} == *"eu-prod" ]] && echo ${{secrets.EU_PROD_PG_PASSWORD}} \
            || ([[ ${{github.ref}} == *"eu-stage" ]] && echo ${{secrets.EU_STAGE_PG_PASSWORD}} \
            || ([[ ${{github.ref}} == *"us-prod" ]] && echo ${{secrets.US_PROD_PG_PASSWORD}} \
            || ([[ ${{github.ref}} == *"us-stage" ]] && echo ${{secrets.US_STAGE_PG_PASSWORD}}))) \
          )
          echo "::set-output name=pgpass::$PG_PASSWORD"
          MSSQL_PASSWORD=$( \
            [[ ${{github.ref}} == *"eu-prod" ]] && echo ${{secrets.EU_PROD_MSSQL_PASSWORD}} \
            || ([[ ${{github.ref}} == *"eu-stage" ]] && echo ${{secrets.EU_STAGE_MSSQL_PASSWORD}} \
            || ([[ ${{github.ref}} == *"us-prod" ]] && echo ${{secrets.US_PROD_MSSQL_PASSWORD}} \
            || ([[ ${{github.ref}} == *"us-stage" ]] && echo ${{secrets.US_STAGE_MSSQL_PASSWORD}}))) \
          )
          echo "::set-output name=mssqlpass::$MSSQL_PASSWORD"
      - uses: actions/checkout@v2
      - name: get package name
        id: package-name
        run: |
          echo "::set-output name=name::$(jq '.name' package.json | sed 's/"//g')"
          echo "::set-output name=name-suff::$(jq '.name' package.json | sed 's/"//g')-${{steps.suffix.outputs.suffix}}"
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ steps.region.outputs.region }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: check if repository exists
        id: check-repo
        run: |
          REPOSITORY=$( \
            aws ecr describe-repositories \
            --region ${{steps.region.outputs.region}} \
            | jq '.repositories[].repositoryName | select(. == "${{steps.package-name.outputs.name}}")' \
            | sed 's/"//g' \
          )
          echo "::set-output name=repository::$REPOSITORY"
      - name: if repo does not exist print command to create
        if: ${{ steps.check-repo.outputs.repository != steps.package-name.outputs.name}}
        run: |
          echo '
          aws ecr create-repository \
            --region ${{steps.region.outputs.region}} \
            --repository-name "${{steps.package-name.outputs.name}}" \
            --image-scanning-configuration scanOnPush=true \
            --image-tag-mutability MUTABLE
          ' && exit 1
      - name: Build, tag, and push image to Amazon ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: ${{steps.package-name.outputs.name}}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          # Build a docker image and push it to ECR so that it can be deployed to Lambda.
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"
      - name: check if function exists
        id: check-func
        run: |
          FUNCTION=$( \
            aws lambda list-functions \
            --region ${{steps.region.outputs.region}} \
            | jq '.Functions[].FunctionName | select( . == "${{steps.package-name.outputs.name-suff}}")' \
            | sed 's/"//g' \
          )
          echo "::set-output name=function::$FUNCTION"
      - name: create function
        if: ${{ steps.check-func.outputs.function != steps.package-name.outputs.name-suff}}
        run: |
          aws lambda create-function \
            --region ${{steps.region.outputs.region}} \
            --function-name ${{steps.package-name.outputs.name-suff}} \
            --package-type Image \
            --code ImageUri=${{steps.build-image.outputs.image}} \
            --role arn:aws:iam::151879656186:role/Lambda-SQS-DB-Worker \
            --environment '{
              "Variables": {
                "PG_PASSWORD": "${{steps.env-vars.outputs.pgpass}}",
                "MSSQL_PASSWORD": "${{steps.env-vars.outputs.mssqlpass}}"
              }
            }'
      - name: update function code and environment vars
        if: ${{ steps.check-func.outputs.function == steps.package-name.outputs.name-suff}}
        run: |
          aws lambda update-function-code \
            --region ${{steps.region.outputs.region}} \
            --function-name ${{steps.package-name.outputs.name-suff}} \
            --image-uri ${{steps.build-image.outputs.image}}
          # the function code needs time to update before the configuration can be changed
          sleep 30
          aws lambda update-function-configuration \
            --region ${{steps.region.outputs.region}} \
            --function-name ${{steps.package-name.outputs.name-suff}} \
            --environment '{
              "Variables": {
                "PG_PASSWORD": "${{steps.env-vars.outputs.pgpass}}",
                "MSSQL_PASSWORD": "${{steps.env-vars.outputs.mssqlpass}}"
              }
            }'

Dockerfile

FROM node:14.18.1-alpine as builder
RUN mkdir /app
WORKDIR /app
ADD ./package.json /app/package.json
ADD ./package-lock.json /app/package-lock.json
RUN npm install
COPY src /app/src
COPY tsconfig.json /app/tsconfig.json
RUN npm run build
RUN rm -rf node_modules && npm install --production
FROM amazon/aws-lambda-nodejs:14.2021.10.14.13
COPY --from=builder /app/node_modules ${LAMBDA_TASK_ROOT}/node_modules
COPY --from=builder /app/package.json ${LAMBDA_TASK_ROOT}/package.json
COPY --from=builder /app/package-lock.json ${LAMBDA_TASK_ROOT}/package-lock.json
COPY --from=builder /app/dist ${LAMBDA_TASK_ROOT}/dist
COPY config.json ${LAMBDA_TASK_ROOT}/config.json
CMD ["dist/index.handler"]

package.json

{
  "name": "lambda-docker-image",
  "description": "A lambda function that is deployed as a docker image.",
  "version": "0.0.1",
  "private": true,
  "dependencies": {},
  "devDependencies": {
    "chokidar": "^3.5.2",
    "typescript": "^4.4.4"
  },
  "scripts": {
    "build": "tsc",
    "watch": "tsc -w",
    "docker:build": "docker build -t $npm_package_name .",
    "docker:start": "docker run --env-file docker.env -p 9000:8080 $npm_package_name",
    "dev:installOnDocker": "docker run -p 9000:8080 --env-file docker.env -v $(pwd)/dist:/var/task/dist:ro -v $(pwd)/package.json:/var/task/package.json:ro -v $(pwd)/package-lock.json:/var/task/package-lock.json:ro -v $(pwd)/config.json:/var/task/config.json:ro -v $(pwd)/devSetup.sh:/var/task/devSetup.sh:ro -v $(pwd)/watch.js:/var/task/watch.js:ro --entrypoint sh amazon/aws-lambda-nodejs devSetup.sh",
    "dev": "docker run -p 9000:8080 --env-file docker.env -v $(pwd):/var/task:ro --entrypoint node amazon/aws-lambda-nodejs watch.js",
    "test": "curl -XPOST \"http://localhost:9000/2015-03-31/functions/function/invocations\" -d"
  }
}

watch.js

const chokidar = require('chokidar');
const { spawn } = require('child_process');

(async () => {
    let lock = false;
    let rieProc = null;
    // start watching for file changes in dist
    chokidar.watch('dist/*.js', {ignoreInitial: true}).on('all', async (event, path) => {
        // don't trigger a restart if already restarting
        if (lock) return;
        // lock so other events are ignored during the restart
        lock = true;
        console.log(`Event '${event}' triggered restart.`);
        try {
            // send SIGTERM to rie
            rieProc.kill();
            // block until rie is terminated
            while (rieProc.exitCode === null) {
                console.log("stopping...")
                await sleep(5);
            }
        } catch (err) {
            // happens if rieProc is null, e.g. when initial events are not ignored
            if (err instanceof TypeError) {
                lock = false;
                return console.log("no process");
            }
            throw new Error(err);
        }
        // start rie again
        console.log("starting...");
        rieProc = spawnRci();
        // release the lock
        lock = false;
    });
    // initially start rie if not done by the file watcher
    if (rieProc === null) rieProc = spawnRci();
    // trap SIGINT if the script is run directly
    process.on('SIGINT', function() {
        console.log("Caught interrupt signal");
        rieProc.kill();
        process.exit();
    });
})()

async function sleep(ms) {
    return new Promise(res => setTimeout(() => res(), ms))
}

function spawnRci() {
    // spawn a rie process and attach listeners to log all stdout and stderr output
    const rci = spawn("/lambda-entrypoint.sh", ["dist/index.handler"])
    rci.stdout.on('data', (data) => {
        console.log(data.toString());
    });
    rci.stderr.on('data', (data) => {
        console.error(data.toString());
    });
    return rci;
}