@hermanbanken
Last active May 23, 2022 10:15

Convert from AWS Lambda to Cloud Run

It should not be too much work, but you gain a lot.

Cloud Run has some important advantages for I/O-bound workloads like a BFF (Backend for Frontend).

  1. The BFF is mostly idle, waiting for the database and API calls it issues to come back. During this time the instance could easily serve more requests, but AWS Lambda is fixed at concurrency=1. With Cloud Run, concurrency defaults to 80 and can be raised to 1000.
  2. In OCI container setups like Cloud Run it is much more common to run the whole BFF in one container (with Express/Koa) instead of a separate function per route (see the sketch right after this list). This makes it much easier to run locally and it is boring ™️.
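
As a rough illustration of point 2, the whole BFF can be mounted on one Express app in a single container (a minimal sketch with an assumed Express dependency; the actual server in this gist, shown further below, uses Node's plain http module and generated routes instead):

// Minimal sketch (not part of the gist): one container, one process, all routes.
import express from 'express';

const app = express();

// Hypothetical handlers, one per screen/action
app.get('/welcome', (_req, res) => res.send('welcome'));
app.get('/home', (_req, res) => res.send('home'));
app.post('/login', express.json(), (_req, res) => res.status(204).end());

// Cloud Run injects PORT; many concurrent requests share this single instance
app.listen(process.env.PORT || 3000);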

Together, these two points mean you can eliminate cold starts almost entirely: only a rollout or the first visit of the day would hit a cold start. With "minimum running instances" (gcloud run services update SERVICE --min-instances MIN-VALUE) you eliminate cold starts completely. With AWS Lambda "reserved capacity" this trick won't eliminate cold starts, because of concurrency=1: each request that comes in while the BFF is already waiting on a backend will be a cold start regardless. And if your workaround is to set the reserved capacity much higher, you really should reconsider your choice of serverless.

I'm curious what AWS is going to build as an answer to Cloud Run, and I'm curious about your opinion (leave a comment!). I think Cloud Run is superior for BFFs, and AWS Lambda is the wrong choice for I/O-heavy things like a BFF.

// build.ts — converts the SST-style route table into gen.routes.ts and bundles server.ts into out/server.js with esbuild
import {routes as sstRouteFormat} from './routes';
import esbuild from 'esbuild';
import {writeFileSync} from 'fs';
import {join} from 'path';

const routes = Object.entries(sstRouteFormat).map(([key, func]) => {
  // Keys look like "GET /welcome"; values like "src/routes/screen/welcome.main"
  const [, method, pathTemplate] =
    /^(GET|POST|PUT|DELETE|HEAD|ANY|OPTIONS) (.*)$/.exec(key) || [];
  if (!method || !pathTemplate) {
    throw new Error(`config err: invalid key ${key}`);
  }
  const dot = func.lastIndexOf('.');
  const fnFile = func.substring(0, dot);
  const fnName = func.substring(dot + 1);
  return {method, pathTemplate, fnFile, fnName};
});

const out = join(process.cwd(), 'out');

// Emit gen.routes.ts: an array of {entrypoint, method, pathTemplate, fnName}
writeFileSync(
  join(__dirname, 'gen.routes.ts'),
  `export const routes = [${routes.map(
    ({method, pathTemplate, fnFile, fnName}) =>
      `{
  entrypoint: require("${fnFile}"),
  ...(${JSON.stringify(
        {
          method,
          pathTemplate,
          fnName,
        },
        null,
        2,
      )})
}`,
  )} ];`,
);

// Bundle the server for the container image
esbuild.buildSync({
  outfile: join(out, 'server.js'),
  entryPoints: [join(__dirname, 'server.ts')],
  platform: 'node',
  bundle: true,
  sourcemap: true,
});
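
For reference, the generated gen.routes.ts ends up looking roughly like this (an illustrative excerpt derived from the first route entry; the real file is generated, not committed):

// gen.routes.ts (generated) — approximate output of the template above
export const routes = [{
  entrypoint: require("src/routes/screen/welcome"),
  ...({
    "method": "GET",
    "pathTemplate": "/welcome",
    "fnName": "main"
  })
}, /* ...one entry per route in routes.ts... */ ];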
# Deploy: push the locally built my-image and deploy the Cloud Run service
PROJECT=${PROJECT:-my-project}
API_URL=https://something.a.run.app # only known after first deploy! update then & deploy again
docker tag my-image eu.gcr.io/$PROJECT/my-image
docker push eu.gcr.io/$PROJECT/my-image
# Deploy
set -o xtrace
gcloud run deploy my-service \
  --project="$PROJECT" \
  --image=eu.gcr.io/$PROJECT/my-image \
  --region=europe-west4 \
  --concurrency=1000 \
  --allow-unauthenticated \
  --set-env-vars=API_URL=$API_URL \
  --service-account=my-sa \
  --update-secrets=SOME_SECRET=SOME_SECRET:latest,OTHER_SECRET=OTHER_SECRET:latest
# Dockerfile — minimal runtime image for the esbuild bundle
FROM node:alpine
COPY server.js /srv/www/
COPY server.js.map /srv/www/
ENV API_URL=http://example.com NODE_OPTIONS=--enable-source-maps
ENTRYPOINT [ "node", "/srv/www/server.js" ]
{
  "scripts": {
    "gcp:build": "ts-node build.ts",
    "gcp:run": "API_URL=${API_URL:-http://example.com} NODE_OPTIONS=--enable-source-maps node ./out/server.js",
    "gcp:docker": "docker build --platform=linux/amd64 -t my-image -f Dockerfile out"
  }
}
// routes.ts — SST-style route table: "METHOD /path" → "file.exportedFunction"
export const routes: Record<string, string> = {
  'GET /welcome': 'src/routes/screen/welcome.main',
  'GET /login': 'src/routes/screen/login.main',
  'GET /home': 'src/routes/screen/home.main',
  'GET /categories': 'src/routes/screen/categories.main',
  'POST /login': 'src/routes/action/login.main',
  'POST /debug/reset-basket': 'src/routes/debug/reset-basket-action.main',
} as const;
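
Each value points at a Lambda-style handler module exporting a main function. Purely for illustration (this file is not in the gist), a hypothetical src/routes/screen/welcome.ts could look like the following; the event and result shapes match what server.ts below constructs and expects (statusCode, headers, body):

// Hypothetical src/routes/screen/welcome.ts — an API Gateway-proxy-style handler
// that runs unchanged on AWS Lambda and behind the Cloud Run adapter below
export async function main(event: {
  headers: Record<string, string | string[] | undefined>;
  pathParameters: Record<string, string>;
  queryStringParameters: Record<string, string>;
  body?: string;
}) {
  return {
    statusCode: 200,
    headers: {'content-type': 'application/json'},
    body: JSON.stringify({screen: 'welcome', name: event.queryStringParameters.name}),
  };
}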
// server.ts — a plain Node HTTP server that adapts incoming requests to the
// AWS Lambda (API Gateway proxy) event shape and invokes the original handlers
import {createServer, IncomingMessage} from 'http';
import {routes} from './gen.routes';
import {URI} from 'uri-template-lite';
import {randomUUID} from 'crypto';

// Pre-compile a URI template matcher per route
const matchers = routes.map(
  route =>
    [new URI.Template(route.pathTemplate), route] as [
      URI.Template,
      typeof route,
    ],
);

createServer(async (req, res) => {
  console.log(req.socket.remoteAddress, req.method, req.url);
  try {
    // Prepare event
    const Url = new URL(req.url || '/', process.env.API_URL);
    const {match, route} = router(Url.pathname);
    const handler = route.entrypoint[route.fnName];
    const event = {
      headers: req.headers,
      body:
        req.method === 'POST' || req.method === 'PUT'
          ? await readBody(req)
          : undefined,
      isBase64Encoded: false,
      pathParameters: match,
      queryStringParameters: Object.fromEntries(Url.searchParams.entries()),
      requestContext: {requestId: randomUUID()},
    };
    // Invoke
    const resp = await handler(event);
    console.log(resp.statusCode, resp.headers);
    for (const key in resp.headers || {}) {
      res.setHeader(key, resp.headers[key]);
    }
    res.writeHead(resp.statusCode);
    res.write(resp.body || '');
    res.end();
  } catch (e) {
    console.error(e);
    if (HttpError.isHttpError(e)) {
      res.writeHead(e.code);
      res.end();
    } else {
      res.writeHead(500);
      res.write(`${e}`);
      res.end();
    }
  }
}).listen(process.env.PORT || '3000');

// First matching route wins; 404 if nothing matches
function router(url: string) {
  for (const [matcher, route] of matchers) {
    const match = matcher.match(url);
    if (match) {
      return {match, route};
    }
  }
  throw new HttpError(404);
}

// Collect the request body as a UTF-8 string
function readBody(req: IncomingMessage) {
  return new Promise<string>((resolve, reject) => {
    const buffers: Buffer[] = [];
    req.on('data', d => buffers.push(d));
    req.on('end', () => resolve(Buffer.concat(buffers).toString('utf-8')));
    req.on('error', reject);
  });
}

export class HttpError extends Error {
  constructor(public code: number) {
    super(`${code}`);
  }
  static isHttpError(e: unknown): e is HttpError {
    return typeof e === 'object' && e !== null && 'code' in e;
  }
}
# One-time setup: enable the required APIs, create secrets and a service account
PROJECT=${PROJECT:-your-project}
gcloud services enable secretmanager.googleapis.com --project="$PROJECT"
gcloud services enable containerregistry.googleapis.com --project="$PROJECT"
gcloud services enable cloudscheduler.googleapis.com --project="$PROJECT"
gcloud services enable cloudtasks.googleapis.com --project="$PROJECT"
gcloud services enable run.googleapis.com --project="$PROJECT"
gcloud secrets create SOME_SECRET --project="$PROJECT" && \
  echo $SOME_SECRET | gcloud secrets versions add SOME_SECRET --project="$PROJECT" --data-file=-
gcloud secrets create OTHER_SECRET --project="$PROJECT" && \
  echo $OTHER_SECRET | gcloud secrets versions add OTHER_SECRET --project="$PROJECT" --data-file=-
gcloud iam service-accounts create my-sa --project="$PROJECT"
gcloud secrets add-iam-policy-binding --project="$PROJECT" projects/$PROJECT/secrets/SOME_SECRET \
  --member serviceAccount:my-sa@$PROJECT.iam.gserviceaccount.com \
  --role roles/secretmanager.secretAccessor
gcloud secrets add-iam-policy-binding --project="$PROJECT" projects/$PROJECT/secrets/OTHER_SECRET \
  --member serviceAccount:my-sa@$PROJECT.iam.gserviceaccount.com \
  --role roles/secretmanager.secretAccessor