@tlakomy
Last active January 10, 2022 01:06
Notes from Production Ready Serverless course with theburningmonk

Week 01

Lambda 101:

  • You can pin frequently used services to top bar in AWS Console
  • By default there's a limit of 1000 concurrent lambda executions; this can be raised with a support ticket. Some companies have this limit raised to tens of thousands of concurrent lambda executions.
  • By default you get 75GB of code storage (so up to 10 React apps, lol), which can also be raised
  • Keeping an eye on the Throttles graph is useful - we don't want our functions to be throttled
  • The ConcurrentExecutions graph is useful as well - to check whether we're approaching the limit
  • You can search for lambda functions by function name (adding prefixes helps!) or by tags, which are really useful
  • It's possible to use custom runtimes for Lambda (apart from Node, .NET, Python etc.) so if you really want to use Haskell you can do that
  • ARN is a unique identifier for a resource in AWS
  • Layers can be used to avoid re-including something (like an external SDK) in every single lambda function
  • You can get a memory limit of a function from within the function using the context object
  • A lambda function is allocated CPU proportional to the memory configured for it, so 128MB (the default) is not going to have a lot of CPU power
  • You get charged for the lambda execution time in 100ms blocks (so if your function takes 10ms to execute, you'll pay for 100ms)
  • You also pay for the amount of memory given to the lambda function
  • With more memory you get more consistent performance but you may end up paying more
  • It's possible to run Lambda functions on a custom VPC, which is useful when you need to work with RDS, EC2, containers etc.
  • When using a custom VPC you should create a dedicated subnet for lambda functions, so the function doesn't exhaust the subnet's IP addresses when it scales massively
  • You can use reserved concurrency to limit the number of concurrent invocations - for instance setting it to 1 ensures that at most one instance of the function runs at any given moment
  • Provisioned concurrency allows you to ensure that you always have a number of containers available (to ensure that you won't see cold starts)
  • You cannot provision concurrency for the unpublished $LATEST version - only for published versions and aliases
  • Provisioned concurrency is not free, you have to figure out what's cheaper for you
  • Creating versions allows you to have 'point-in-time' versions of your lambda functions
  • You can push failed async lambda function invocations to a dead letter queue (for instance SQS or SNS)
  • Database proxies are available in preview
  • With aliases you can split traffic between two versions of a Lambda function with weighted routing (for instance run version A 90% of the time and version B 10% of the time)
  • You can use lambda destinations to create simple lambda -> lambda workflows, for something more complicated you should use Step Functions
  • In the Monitoring tab it's useful to watch the async delivery metrics to make sure you're not missing any events sent to lambda destinations or the dead letter queue
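
The pricing bullets above (100ms billing blocks, paying for allocated memory) can be sketched in a few lines. The per-GB-second price below is illustrative only - check the current AWS pricing page for real numbers:

```python
import math

def billed_duration_ms(duration_ms: float, block_ms: int = 100) -> int:
    """Round execution time up to the nearest billing block,
    so a 10ms invocation is billed as a full 100ms."""
    return math.ceil(duration_ms / block_ms) * block_ms

def invocation_cost_usd(duration_ms: float, memory_mb: int,
                        price_per_gb_second: float = 0.0000166667) -> float:
    """Cost of one invocation: GB-seconds * price.
    More memory means more CPU and steadier performance,
    but also a higher per-block rate."""
    gb_seconds = (memory_mb / 1024) * (billed_duration_ms(duration_ms) / 1000)
    return gb_seconds * price_per_gb_second
```

Under this model `billed_duration_ms(10)` comes out to 100, which is why shaving a fast function from 40ms down to 20ms saves nothing - you pay for the same block either way.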

Serverless framework 101

  • Supports not only AWS but other cloud providers as well
  • You can set up environment variables to be used in the stack
  • You can define different events for a Lambda function that define different triggers for a Lambda function
  • By default SLS deploys through CloudFormation in two steps: it first creates a minimal stack (containing the S3 deployment bucket) and then updates it with the rest of the resources
  • It's going to automatically package the lambda function in a .zip file and upload it to S3
  • Serverless framework has different defaults for timeout and memory size than the AWS Console (6s/1024MB vs 3s/128MB)
  • New versions of the function are automatically created whenever you deploy
  • sls invoke local allows you to execute a lambda function locally, even before deployment
  • sls invoke executes a function that was already deployed, remotely
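
The bullets above map onto a minimal serverless.yml; the service name, function name and values here are made up for illustration:

```yaml
service: my-service

provider:
  name: aws
  runtime: nodejs14.x
  # Framework defaults differ from the console defaults
  memorySize: 256
  timeout: 10
  environment:
    STAGE: dev            # available to every function in the stack

functions:
  hello:
    handler: handler.hello
    events:               # each event is a separate trigger
      - http:
          path: hello
          method: get
```

`sls deploy` packages the handler code into a .zip, uploads it to S3 and deploys the CloudFormation stack; `sls invoke local --function hello` runs the function on your machine.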

Securing APIs with IAM

  • You generally want to have different levels of access for different APIs in your system (obviously not everything should be public)
  • One way to address that is to use usage plans + API keys. They are designed for rate limiting, not auth, and they allow the client to access the selected API at agreed upon request rates and quotas (like Google Maps API). The request rate and quota apply to all APIs and stages covered by the usage plan.
  • Another thing is to allow certain APIs to be accessed by your infrastructure only - by using IAM authorization
  • API Gateway also supports custom authorizers that you can build yourself
  • A VPC Endpoint allows you to securely connect your VPC to another service
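
For the IAM-authorization case, the Serverless Framework lets you set the authorizer directly on the http event (the function name here is illustrative):

```yaml
functions:
  internalOnly:
    handler: handler.internal
    events:
      - http:
          path: internal
          method: get
          # Callers must sign requests with AWS Signature V4
          # using IAM credentials that allow execute-api:Invoke
          authorizer: aws_iam
```

Requests to this endpoint then have to be SigV4-signed, which is what "accessed by your infrastructure only" means in practice.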

Cognito 101

  • We can think of Cognito as a collection of 3 different services: Cognito User Pools, Cognito Federated Identities and Cognito Sync
  • Cognito User Pools is a managed identity service (registration/email verification/password policies etc. etc.). After signing in, a user can access APIs on API Gateway that require sign in
  • Cognito Federated Identities - allows you to take an auth token issued by an auth provider and exchange it for a set of temporary AWS credentials
  • Cognito Sync - nobody uses it lmao, it syncs user data across multiple devices

Securing API with Cognito User Pools

  • In short - when a user registers, confirms their email etc. the client talks with Cognito User Pools, and after a successful sign-in, Cognito User Pools returns a JWT. This token is later used for authorization in API Gateway.
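
To see what API Gateway is actually checking, you can peek at the claims inside the JWT that Cognito returns. A minimal sketch - note it does not verify the signature (API Gateway verifies tokens against the User Pool's public keys, so this is only for client-side inspection):

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the middle (payload) segment of a JWT.
    WARNING: this does NOT verify the signature -
    never trust unverified claims on the server side."""
    payload_b64 = token.split(".")[1]
    # base64url encoding drops the '=' padding; restore it before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

Cognito ID tokens carry claims such as `sub` and `email`; API Gateway's Cognito authorizer rejects requests whose token fails the signature or expiry checks.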

API Gateway Best Practices
