HTTP Signatures with AWS Lambda

HTTP Signatures are an alternative way to authenticate requests and verify responses without requiring any private identity information to be accessible to the authenticating service.

Instead of utilising a traditional bearer token or cookie, the client signs the relevant headers, and the body of the request if needed, which allows the client to show that they authorise the content.

When the service receives the request from the client, the service looks up the provided key identifier, and then uses said key to verify the provided signature.
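For example, a signed request might carry an Authorization header along these lines (the values are illustrative, and exactly which headers are covered by the signature is up to the client):

Authorization: Signature keyId="<key identifier>",algorithm="rsa-sha256",headers="(request-target) host date",signature="<base64 signature>"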

In this article we're going to go through both registering these keys, and authenticating requests using an authoriser function in AWS Lambda.

We're going to utilise the serverless aws-nodejs-ecma-script template for this tutorial.

You'll need an AWS account for this tutorial, and to have serverless installed and set up.

First, let's create our project:

$ serverless create --template aws-nodejs-ecma-script --path http-signatures-primer
Serverless: Generating boilerplate...
Serverless: Generating boilerplate in "http-signatures-primer"
 _______                             __
|   _   .-----.----.--.--.-----.----|  .-----.-----.-----.
|   |___|  -__|   _|  |  |  -__|   _|  |  -__|__ --|__ --|
|____   |_____|__|  \___/|_____|__| |__|_____|_____|_____|
|   |   |             The Serverless Application Framework
|       |                           serverless.com, v1.30.1
 -------'

Serverless: Successfully generated boilerplate for template: "aws-nodejs-ecma-script"
$ cd http-signatures-primer

We're going to need to install http-signature, which we will use to verify our signatures, as well as aws-sdk, which we will use to invoke S3 to read & write the provided public keys. We'll also use content-type to match content types.

$ npm install && npm install --save http-signature aws-sdk content-type

First, let's define the bucket that we will use to store our public keys. We'll do this by adding it as a resource at the end of our serverless.yml file:

resources:
  Resources:
    PublicKeysBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:provider.stage}-${self:provider.region}-${self:service.name}-public-keys

Now when we deploy our service, a new bucket will be created matching our stage, region, and service name (for example, dev-us-east-1-<service-name>-public-keys). This also means we won't end up with data from one stage or region clashing with another.

We could remove the region here, but for now we'll keep it straightforward and consistent.

Next we're going to replace the provider key with some better defaults so it's more obvious what's going on. There may be a newer version of Node.js available by the time you're reading this; if there is, feel free to use it. We'll also include an environment variable containing the name of our bucket, as well as IAM permissions so our service can access it:

provider:
  name: aws
  runtime: nodejs10.x
  stage: ${opt:stage, 'dev'}
  region: ${opt:region, 'us-east-1'}
  environment:
    AWS_S3_PUBLIC_KEYS_BUCKET:
      Ref: PublicKeysBucket
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:*"
      Resource:
        - "arn:aws:s3:::${self:provider.stage}-${self:provider.region}-${self:service.name}-public-keys"
        - "arn:aws:s3:::${self:provider.stage}-${self:provider.region}-${self:service.name}-public-keys/*"

Next, let's create a function that will authorise any incoming requests; we'll call this authorise.js.

We'll first import our required dependencies:

import { S3 } from "aws-sdk";
import { parseRequest, verifySignature } from "http-signature";

Then we'll need a way to find an associated public key. If we can't find one, we'll return undefined so we can handle it in the calling function.

Any AWS SDK client instance that we create within our AWS Lambda functions will automatically have credentials provided by the environment.

const s3 = new S3();

function getPublicKeyFromS3(keyIdentifier) {
  return (
    s3
      .getObject({
        Bucket: process.env.AWS_S3_PUBLIC_KEYS_BUCKET,
        Key: keyIdentifier
      })
      .promise()
      .then(({ Body }) => Body.toString("utf8"))
      // Just return undefined, we couldn't find it
      .catch(() => undefined)
  );
}
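For example, inside an async handler a lookup might look like this (the identifier value is just a placeholder):

// Resolves to the PEM contents, or undefined if no such key exists
const publicKey = await getPublicKeyFromS3("<key identifier>");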

We're also going to need a way to take the headers passed to our Lambda function and match the format they would normally be in for http.IncomingMessage, so that parseRequest can utilise them. For this, we need to lowercase all the available header keys:

function getHeaders(headers) {
  return Object.keys(headers).reduce((map, key) => {
    map[key.toLowerCase()] = headers[key];
    return map;
  }, {});
}
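As a quick illustration (the header values here are made up), this turns the mixed-case header names API Gateway hands us into the lowercase form Node's http module uses:

getHeaders({ Authorization: "Signature ...", Host: "example.com" });
// => { authorization: "Signature ...", host: "example.com" }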

Before we write our handler, we will need a function to generate an IAM policy for the request:

function generatePolicy(principalId, effect, resource, context) {
  const authResponse = {};
  authResponse.principalId = principalId;
  authResponse.context = context;
  if (effect && resource) {
    const policyDocument = {};
    policyDocument.Version = "2012-10-17";
    policyDocument.Statement = [];
    const statementOne = {};
    statementOne.Action = "execute-api:Invoke";
    statementOne.Effect = effect;
    statementOne.Resource = resource;
    policyDocument.Statement[0] = statementOne;
    authResponse.policyDocument = policyDocument;
  }
  return authResponse;
}
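For reference, a call such as generatePolicy(keyId, "Allow", event.methodArn, context) returns an object shaped like this (the resource ARN below is a made-up example):

{
  principalId: "<key identifier>",
  context: { /* whatever context we passed through */ },
  policyDocument: {
    Version: "2012-10-17",
    Statement: [
      {
        Action: "execute-api:Invoke",
        Effect: "Allow",
        Resource: "arn:aws:execute-api:us-east-1:123456789012:abc123/dev/GET/ping"
      }
    ]
  }
}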

Now we can utilise parseRequest to process the provided headers and extract the associated key identifier:

export const handler = async (event, context) => {
  const nodeLookingRequest = {
    headers: getHeaders(event.headers),
    url: event.requestContext.path,
    method: event.requestContext.httpMethod,
    httpVersion: event.requestContext.protocol.split("/")[1]
  };
  const parsedSignature = parseRequest(nodeLookingRequest);
  const publicKey = await getPublicKeyFromS3(parsedSignature.keyId);
  if (!publicKey) {
    throw new Error("Unauthorized");
  }
};

Now that we have a public key, we can verify the signature of the request inside our handler:

if (!verifySignature(parsedSignature, publicKey)) {
  throw new Error("Unauthorized");
}

Once we have verified everything is okay, we can finish up by returning a policy that allows the requested method:

return generatePolicy(parsedSignature.keyId, "Allow", event.methodArn, context);
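Putting those pieces together, the complete handler in authorise.js looks like this:

export const handler = async (event, context) => {
  // Reshape the API Gateway event into something resembling http.IncomingMessage
  const nodeLookingRequest = {
    headers: getHeaders(event.headers),
    url: event.requestContext.path,
    method: event.requestContext.httpMethod,
    httpVersion: event.requestContext.protocol.split("/")[1]
  };
  const parsedSignature = parseRequest(nodeLookingRequest);
  const publicKey = await getPublicKeyFromS3(parsedSignature.keyId);
  if (!publicKey) {
    throw new Error("Unauthorized");
  }
  if (!verifySignature(parsedSignature, publicKey)) {
    throw new Error("Unauthorized");
  }
  return generatePolicy(parsedSignature.keyId, "Allow", event.methodArn, context);
};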

That's our authoriser function ready. For it to be usable, we'll replace the functions key within our serverless.yml file with this:

functions:
  authorise:
    handler: authorise.handler

Now, any other function can utilise this authoriser to authenticate clients.

Next we'll need a function handler that accepts new public keys, which we will create within create-public-key.js.

One thing we haven't mentioned yet is how we come up with the key identifier. In our case we're going to take the SHA256 hash of the public key contents and use that; it creates a consistent and unique identifier for every public key.

Using the SHA256 hash of the file contents may be suboptimal, but I want to provide a straightforward way for browsers to generate the identifier. I would suggest looking into generating the fingerprint of each public key instead; maybe I am missing something and this is straightforward in the browser as well, if so, please do let me know!
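As an aside, here's a rough sketch of how a browser could derive the same identifier using the Web Crypto API (getKeyIdentifier and publicKeyPem are just illustrative names, and this isn't used anywhere else in this tutorial):

// Hash the exact PEM text that will be uploaded, then hex-encode the digest
async function getKeyIdentifier(publicKeyPem) {
  const data = new TextEncoder().encode(publicKeyPem);
  const digest = await crypto.subtle.digest("SHA-256", data);
  return Array.from(new Uint8Array(digest))
    .map(byte => byte.toString(16).padStart(2, "0"))
    .join("");
}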

Within this function we'll first define our dependencies:

import { S3 } from "aws-sdk";
import { parse as parseContentType } from "content-type";
import Crypto from "crypto";

Next we'll verify the content type, create a new identifier for our public key, and then upload the public key to S3.

We're also going to add an upper limit to our public key file size. In theory there is no limit to the possible size of keys, so we'll just cap it at 10,000 characters (matching the check in the code below) and let it be.

Feel free to select your own value for a maximum length.

const s3 = new S3();

export const handler = async event => {
  if (
    parseContentType(event.headers["Content-Type"]).type !==
    "application/x-pem-file"
  ) {
    throw new Error("Invalid content type");
  }
  if (!event.body) {
    throw new Error("No body provided");
  }
  if (event.body.length > 10000) {
    throw new Error("Public key too large");
  }
  const identifier = Crypto.createHash("sha256")
    .update(event.body, "utf8")
    .digest()
    .toString("hex");
  await s3
    .putObject({
      Bucket: process.env.AWS_S3_PUBLIC_KEYS_BUCKET,
      Key: identifier,
      Body: event.body
    })
    .promise();
  return {
    statusCode: 200,
    headers: {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Credentials": true,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      identifier
    })
  };
};

We just need to add this as a function in our serverless.yml file and allow it to accept an HTTP event, after our authorise function:

create-public-key:
  handler: create-public-key.handler
  events:
    - http:
        path: /public-key
        method: put
        cors:
          origin: "*"

This is now open to the world, allowing anyone to upload a public key, which is perfectly fine because without the matching private key, they still won't be able to authenticate.

Just as a note, there is no rate limiting on this function, so someone could throw hundreds of public keys at it and fill your S3 bucket with nonsense. I recommend looking into both rate limiting, and verifying that the provided public keys are actually valid public keys before storing them.

Next we'll need a function that we can test our authoriser with, so let's just create a simple handler in ping.js that responds with pong:

export const handler = async event => {
  return {
    statusCode: 200,
    headers: {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Credentials": true,
      "Content-Type": "text/plain"
    },
    body: "pong"
  };
};

Next we'll define our authoriser under a new top-level custom key, so we can reference it in our functions:

custom:
  authorise:
    name: authorise
    resultTtlInSeconds: 0
    type: request
    identitySource: ""

Now we can create our ping function and reference our authoriser:

ping:
  handler: ping.handler
  events:
    - http:
        path: /ping
        method: get
        cors:
          origin: "*"
        authorizer: ${self:custom.authorise}

Now, whenever we invoke GET /ping, our authorise function will be invoked first to authenticate the request.

We can try this by first deploying our service:

$ serverless deploy

The deployment will spit out a URL that you can use to invoke /public-key; we will use this in the next step.

You can test this by running the test script below, which will upload our public key and then make a signed request:

Don't worry! These keys are from the HTTP Signatures spec and should only be used for testing!

const https = require("https"),
  fs = require("fs"),
  { sign } = require("http-signature"),
  crypto = require("crypto");

const privateKey = `
-----BEGIN RSA PRIVATE KEY-----
MIICXgIBAAKBgQDCFENGw33yGihy92pDjZQhl0C36rPJj+CvfSC8+q28hxA161QF
NUd13wuCTUcq0Qd2qsBe/2hFyc2DCJJg0h1L78+6Z4UMR7EOcpfdUE9Hf3m/hs+F
UR45uBJeDK1HSFHD8bHKD6kv8FPGfJTotc+2xjJwoYi+1hqp1fIekaxsyQIDAQAB
AoGBAJR8ZkCUvx5kzv+utdl7T5MnordT1TvoXXJGXK7ZZ+UuvMNUCdN2QPc4sBiA
QWvLw1cSKt5DsKZ8UETpYPy8pPYnnDEz2dDYiaew9+xEpubyeW2oH4Zx71wqBtOK
kqwrXa/pzdpiucRRjk6vE6YY7EBBs/g7uanVpGibOVAEsqH1AkEA7DkjVH28WDUg
f1nqvfn2Kj6CT7nIcE3jGJsZZ7zlZmBmHFDONMLUrXR/Zm3pR5m0tCmBqa5RK95u
412jt1dPIwJBANJT3v8pnkth48bQo/fKel6uEYyboRtA5/uHuHkZ6FQF7OUkGogc
mSJluOdc5t6hI1VsLn0QZEjQZMEOWr+wKSMCQQCC4kXJEsHAve77oP6HtG/IiEn7
kpyUXRNvFsDE0czpJJBvL/aRFUJxuRK91jhjC68sA7NsKMGg5OXb5I5Jj36xAkEA
gIT7aFOYBFwGgQAQkWNKLvySgKbAZRTeLBacpHMuQdl1DfdntvAyqpAZ0lY0RKmW
G6aFKaqQfOXKCyWoUiVknQJAXrlgySFci/2ueKlIE1QqIiLSZ8V8OlpFLRnb1pzI
7U1yQXnTAEFYM560yJlzUpOb1V4cScGd365tiSMvxLOvTA==
-----END RSA PRIVATE KEY-----
`.trim();

const publicKey = `
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDCFENGw33yGihy92pDjZQhl0C3
6rPJj+CvfSC8+q28hxA161QFNUd13wuCTUcq0Qd2qsBe/2hFyc2DCJJg0h1L78+6
Z4UMR7EOcpfdUE9Hf3m/hs+FUR45uBJeDK1HSFHD8bHKD6kv8FPGfJTotc+2xjJw
oYi+1hqp1fIekaxsyQIDAQAB
-----END PUBLIC KEY-----
`.trim();

const identifier = crypto
  .createHash("sha256")
  .update(publicKey, "utf8")
  .digest()
  .toString("hex");

const putRequest = https.request(
  {
    host: "<your serverless host>",
    port: 443,
    path: "/dev/public-key",
    method: "PUT",
    headers: {
      "Content-Type": "application/x-pem-file"
    }
  },
  function(putResponse) {
    console.log(putResponse.statusCode);

    if (putResponse.statusCode !== 200) {
      return;
    }

    const pingRequest = https.request(
      {
        host: "<your serverless host>",
        port: 443,
        path: "/dev/ping",
        method: "GET"
      },
      function(pingResponse) {
        console.log(pingResponse.statusCode);
      }
    );

    sign(pingRequest, { key: privateKey, keyId: identifier });

    pingRequest.end();
  }
);

putRequest.write(publicKey);

putRequest.end();

In your console, you should see:

200
200

If you don't, you may want to look in the CloudWatch logs within the AWS Console. You may also want to wrap your handler function bodies in a try/catch and log the error (don't forget to re-throw it!).

Now we may want to use our authoriser function from different services. In your other service's serverless.yml definition, first define a variableSyntax key within provider:

variableSyntax: "\\${((?!AWS)[ ~:a-zA-Z0-9._'\",\\-\\/\\(\\)]+?)}"

And then you can reference the authoriser function by way of an ARN (you will need to replace http-signatures-primer with the service name you used):

custom:
  authorise:
    name: authorise
    arn:
      Fn::Sub: "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:http-signatures-primer-${self:provider.stage}-authorise"
    resultTtlInSeconds: 0
    type: request
    identitySource: ""

Thanks for reading! We'll be able to use what we learnt in this article in future articles to make requests from the browser with no identity setup.
