AWS ECS auth with AppRole and S3

I'm interested in having AWS ECS services authenticate to Vault (see issue #1298).

Here's an idea for implementing ECS auth using recent features like Vault's AppRole auth backend and per-task IAM roles in ECS.

I'm eager to hear if folks think it's a reasonable approach.

(Big caveat: I'm just learning Vault, so apologies in advance for my inevitable misunderstandings.)

How it works

When you create an IAM role, say, myProductionService, you create a corresponding Vault AppRole with the same name. You then publish both the AppRole role-id and a secret-id as JSON to an S3 path named after the role, say s3://my-secrets-bucket/approles/myProductionService.json.

(In our case, we can accomplish this using CloudFormation and a lambda-backed custom resource.)

When an ECS task starts with the myProductionService IAM role, the container reads the role-id and secret-id from S3, and exchanges them for a Vault token.

A periodic task (say, an AWS Lambda function) can rotate the secret in Vault and update the S3 file.

Locking down S3 access

The main goal of the threat model is to guarantee that only tasks running with the myProductionService IAM role can read the S3 path.

We can accomplish this with an S3 bucket policy that denies s3:GetObject to every principal except the myProductionService IAM role, using the aws:PrincipalArn condition key (which resolves to the role ARN for assumed-role sessions like ECS tasks):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::my-secrets-bucket/approles/myProductionService.json"],
      "Condition": {
        "StringNotEquals": {
          "aws:PrincipalArn": "arn:aws:iam::myaccountid:role/myProductionService"
        }
      }
    }
  ]
}

Unfortunately, we'd have to append a statement to this bucket policy for each new role and object, and bucket policies are limited to 20 KB. While a bucket policy works for a few roles, it doesn't scale to potentially many dynamically created ones.

This blog post describes some best practices for locking down S3 access.

In particular, we can:

  1. Use an S3 bucket policy to restrict access to a particular VPC endpoint and CIDR (see the sketch after this list)
  2. Require SSL & server-side encryption when uploading objects
  3. Consider using AWS KMS to encrypt the contents server-side, where myProductionService has permission to decrypt. This gives us additional audit logs in CloudTrail, at extra cost and complexity.
  4. Audit S3 access logs to check that only clients authenticated as myProductionService download the path.
  5. Use AWS Config to audit IAM policies to make sure that other IAM roles can't access "arn:aws:s3:::my-secrets-bucket/approles/*".

AWS Lambda

A nice side effect of this approach is that it's generic across other AWS services, like AWS Lambda. A Lambda function running in your VPC with the myProductionService IAM role could also get a Vault token.

Again, I'd love to hear what other people think about this. If it works well, I'd be glad to publish example tooling (such as the lambda function) as open source.

Cheers, Aaron

sirocode commented May 8, 2017

Hi, would you be able to share your Lambda code?
