Python READMEs

Continuous Integration / Continuous Delivery for CDK Applications

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.


This library includes a CodePipeline composite Action for deploying AWS CDK Applications.

This module is part of the AWS Cloud Development Kit project.

Limitations

The construct library in its current form has the following limitations:

  1. It can only deploy stacks that are hosted in the same AWS account and region as the CodePipeline.
  2. Stacks that make use of Assets cannot be deployed successfully.

Getting Started

In order to add the PipelineDeployStackAction to your CodePipeline, you need a CodePipeline artifact that contains the result of invoking cdk synth -o <dir> on your CDK App. You can, for example, achieve this using a CodeBuild project.

The example below defines a CDK App that contains 3 stacks:

  • CodePipelineStack manages the CodePipeline resources, and self-updates before deploying any other stack
  • ServiceStackA and ServiceStackB are service infrastructure stacks, and need to be deployed in this order
  ┏━━━━━━━━━━━━━━━━┓  ┏━━━━━━━━━━━━━━━━┓  ┏━━━━━━━━━━━━━━━━━┓  ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
  ┃     Source     ┃  ┃     Build      ┃  ┃  Self-Update    ┃  ┃             Deploy              ┃
  ┃                ┃  ┃                ┃  ┃                 ┃  ┃                                 ┃
  ┃ ┌────────────┐ ┃  ┃ ┌────────────┐ ┃  ┃ ┌─────────────┐ ┃  ┃ ┌─────────────┐ ┌─────────────┐ ┃
  ┃ │   GitHub   ┣━╋━━╋━▶ CodeBuild  ┣━╋━━╋━▶Deploy Stack ┣━╋━━╋━▶Deploy Stack ┣━▶Deploy Stack │ ┃
  ┃ │            │ ┃  ┃ │            │ ┃  ┃ │PipelineStack│ ┃  ┃ │ServiceStackA│ │ServiceStackB│ ┃
  ┃ └────────────┘ ┃  ┃ └────────────┘ ┃  ┃ └─────────────┘ ┃  ┃ └─────────────┘ └─────────────┘ ┃
  ┗━━━━━━━━━━━━━━━━┛  ┗━━━━━━━━━━━━━━━━┛  ┗━━━━━━━━━━━━━━━━━┛  ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

app.py

import aws_cdk.aws_codebuild as codebuild
import aws_cdk.aws_codepipeline as codepipeline
import aws_cdk.aws_codepipeline_actions as codepipeline_actions
import aws_cdk.aws_iam as iam
import aws_cdk.core as cdk
import aws_cdk.app_delivery as cicd

app = cdk.App()

# We define a stack that contains the CodePipeline
pipeline_stack = cdk.Stack(app, "PipelineStack")
pipeline = codepipeline.Pipeline(pipeline_stack, "CodePipeline",
    # Mutating a CodePipeline can cause the currently propagating state to be
    # "lost". Ensure we re-run the latest change through the pipeline after it's
    # been mutated so we're sure the latest state is fully deployed through.
    restart_execution_on_update=True
)

# Configure the CodePipeline source - where your CDK App's source code is hosted
source_output = codepipeline.Artifact()
source = codepipeline_actions.GitHubSourceAction(
    action_name="GitHub",
    output=source_output
    # (owner, repo and oauth_token for your GitHub repository are also required here)
)
pipeline.add_stage(
    stage_name="source",
    actions=[source]
)

project = codebuild.PipelineProject(pipeline_stack, "CodeBuild")
synthesized_app = codepipeline.Artifact()
build_action = codepipeline_actions.CodeBuildAction(
    action_name="CodeBuild",
    project=project,
    input=source_output,
    outputs=[synthesized_app]
)
pipeline.add_stage(
    stage_name="build",
    actions=[build_action]
)

# Optionally, self-update the pipeline stack
self_update_stage = pipeline.add_stage(stage_name="SelfUpdate")
self_update_stage.add_action(cicd.PipelineDeployStackAction(
    stack=pipeline_stack,
    input=synthesized_app,
    admin_permissions=True
))

# Now add our service stacks
deploy_stage = pipeline.add_stage(stage_name="Deploy")
service_stack_a = MyServiceStackA(app, "ServiceStackA")
# Add actions to deploy the stacks in the deploy stage:
deploy_service_a_action = cicd.PipelineDeployStackAction(
    stack=service_stack_a,
    input=synthesized_app,
    # See the note below for details about this option.
    admin_permissions=False
)
deploy_stage.add_action(deploy_service_a_action)
# Add the necessary permissions for your service deploy action. This role is
# passed to CloudFormation and needs the permissions necessary to deploy the
# stack. Alternatively, you can enable [Administrator](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html#jf_administrator) permissions above,
# but users should understand the privileged nature of this role.
deploy_service_a_action.add_to_role_policy(iam.PolicyStatement(
    actions=["service:SomeAction"],
    resources=[my_resource.my_resource_arn]
))

service_stack_b = MyServiceStackB(app, "ServiceStackB")
deploy_stage.add_action(cicd.PipelineDeployStackAction(
    stack=service_stack_b,
    input=synthesized_app,
    create_change_set_run_order=998,
    admin_permissions=True
))

buildspec.yml

The repository can contain a file at the root level named buildspec.yml, or you can in-line the buildspec. Note that buildspec.yaml is not compatible.
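
For example, the CodeBuild project from the pipeline example above could in-line a minimal buildspec like this (the commands are illustrative for a Python CDK app and assume the CDK CLI is available in the build image):

project = codebuild.PipelineProject(pipeline_stack, "CodeBuild",
    build_spec=codebuild.BuildSpec.from_object({
        "version": "0.2",
        "phases": {
            "install": {"commands": ["pip install -r requirements.txt"]},
            "build": {"commands": ["cdk synth -o dist"]}
        },
        "artifacts": {"base-directory": "dist", "files": ["**/*"]}
    })
)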

For example, a TypeScript or Javascript CDK App can add the following buildspec.yml at the root of the repository:

version: 0.2
phases:
  install:
    commands:
      # Installs the npm dependencies as defined by the `package.json` file
      # present in the root directory of the package
      # (`cdk init app --language=typescript` would have created one for you)
      - npm install
  build:
    commands:
      # Builds the CDK App so it can be synthesized
      - npm run build
      # Synthesizes the CDK App and puts the resulting artifacts into `dist`
      - npm run cdk synth -- -o dist
artifacts:
  # The output artifact is all the files in the `dist` directory
  base-directory: dist
  files: '**/*'

The PipelineDeployStackAction expects its input to contain the result of synthesizing a CDK App using the cdk synth -o <directory> command.

Amazon API Gateway Construct Library

Stability: Stable


Amazon API Gateway is a fully managed service that makes it easy for developers to publish, maintain, monitor, and secure APIs at any scale. Create an API to access data, business logic, or functionality from your back-end services, such as applications running on Amazon Elastic Compute Cloud (Amazon EC2), code running on AWS Lambda, or any web application.

Defining APIs

APIs are defined as a hierarchy of resources and methods. addResource and addMethod can be used to build this hierarchy. The root resource is api.root.

For example, the following code defines an API that includes the following HTTP endpoints: ANY /, GET /books, POST /books, GET /books/{book_id}, DELETE /books/{book_id}.

api = apigateway.RestApi(self, "books-api")

api.root.add_method("ANY")

books = api.root.add_resource("books")
books.add_method("GET")
books.add_method("POST")

book = books.add_resource("{book_id}")
book.add_method("GET")
book.add_method("DELETE")

AWS Lambda-backed APIs

A very common practice is to use Amazon API Gateway with AWS Lambda as the backend integration. The LambdaRestApi construct makes it easy:

The following code defines a REST API that routes all requests to the specified AWS Lambda function:

backend = lambda.Function(...)
apigateway.LambdaRestApi(self, "myapi",
    handler=backend
)

You can also supply proxy: false, in which case you will have to explicitly define the API model:

backend = lambda.Function(...)
api = apigateway.LambdaRestApi(self, "myapi",
    handler=backend,
    proxy=False
)

items = api.root.add_resource("items")
items.add_method("GET")   # GET /items
items.add_method("POST")  # POST /items

item = items.add_resource("{item}")
item.add_method("GET")    # GET /items/{item}

# the default integration for methods is "handler", but one can
# customize this behavior per method or even a sub path.
item.add_method("DELETE", apigateway.HttpIntegration("http://amazon.com"))

Integration Targets

Methods are associated with backend integrations, which are invoked when this method is called. API Gateway supports the following integrations:

  • MockIntegration - can be used to test APIs. This is the default integration if one is not specified.
  • LambdaIntegration - can be used to invoke an AWS Lambda function.
  • AwsIntegration - can be used to invoke arbitrary AWS service APIs.
  • HttpIntegration - can be used to invoke HTTP endpoints.

The following example shows how to integrate the GET /book/{book_id} method to an AWS Lambda function:

get_book_handler = lambda.Function(...)
get_book_integration = apigateway.LambdaIntegration(get_book_handler)
book.add_method("GET", get_book_integration)

Integration options can optionally be specified:

get_book_integration = apigateway.LambdaIntegration(get_book_handler,
    content_handling=apigateway.ContentHandling.CONVERT_TO_TEXT, # convert to base64
    credentials_passthrough=True
)

Method options can optionally be specified when adding methods:

book.add_method("GET", get_book_integration,
    authorization_type=apigateway.AuthorizationType.IAM,
    api_key_required=True
)

The following example shows how to use an API Key with a usage plan:

hello = lambda.Function(self, "hello",
    runtime=lambda.Runtime.NODEJS_10_X,
    handler="hello.handler",
    code=lambda.Code.from_asset("lambda")
)

api = apigateway.RestApi(self, "hello-api")
integration = apigateway.LambdaIntegration(hello)

v1 = api.root.add_resource("v1")
echo = v1.add_resource("echo")
echo_method = echo.add_method("GET", integration, api_key_required=True)
key = api.add_api_key("ApiKey")

plan = api.add_usage_plan("UsagePlan",
    name="Easy",
    api_key=key
)

plan.add_api_stage(
    stage=api.deployment_stage,
    throttle=[{
        "method": echo_method,
        "throttle": {
            "rate_limit": 10,
            "burst_limit": 2
        }
    }]
)

Working with models

When you work with Lambda integrations that are not Proxy integrations, you have to define your models and mappings for the request, response, and integration.

hello = lambda.Function(self, "hello",
    runtime=lambda.Runtime.NODEJS_10_X,
    handler="hello.handler",
    code=lambda.Code.from_asset("lambda")
)

api = apigateway.RestApi(self, "hello-api")
resource = api.root.add_resource("v1")

You can define more parameters on the integration to tune the behavior of API Gateway:

import json

integration = apigateway.LambdaIntegration(hello,
    proxy=False,
    request_parameters={
        # You can define mapping parameters from your method to your integration
        # - Destination parameters (the key) are the integration parameters (used in mappings)
        # - Source parameters (the value) are the source request parameters or expressions
        # @see: https://docs.aws.amazon.com/apigateway/latest/developerguide/request-response-data-mappings.html
        "integration.request.querystring.who": "method.request.querystring.who"
    },
    allow_test_invoke=True,
    request_templates={
        # You can define a mapping that will build a payload for your integration, based
        # on the integration parameters that you have specified
        # Check: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html
        "application/json": json.dumps({"action": "sayHello", "poll_id": "$util.escapeJavaScript($input.params('who'))"})
    },
    # This parameter defines the behavior of the engine if no suitable response template is found
    passthrough_behavior=apigateway.PassthroughBehavior.NEVER,
    integration_responses=[{
        # Successful response from the Lambda function, no filter defined
        # - the selectionPattern filter only tests the error message
        # We will set the response status code to 200
        "status_code": "200",
        "response_templates": {
            # This template takes the "message" result from the Lambda function, and embeds it in a JSON response
            # Check https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html
            "application/json": json.dumps({"state": "ok", "greeting": "$util.escapeJavaScript($input.body)"})
        },
        "response_parameters": {
            # We can map response parameters
            # - Destination parameters (the key) are the response parameters (used in mappings)
            # - Source parameters (the value) are the integration response parameters or expressions
            "method.response.header.Content-Type": "'application/json'",
            "method.response.header.Access-Control-Allow-Origin": "'*'",
            "method.response.header.Access-Control-Allow-Credentials": "'true'"
        }
    }, {
        # For errors, we check if the error message is not empty, get the error data
        "selection_pattern": "(\n|.)+",
        # We will set the response status code to 400
        "status_code": "400",
        "response_templates": {
            "application/json": json.dumps({"state": "error", "message": "$util.escapeJavaScript($input.path('$.errorMessage'))"})
        },
        "response_parameters": {
            "method.response.header.Content-Type": "'application/json'",
            "method.response.header.Access-Control-Allow-Origin": "'*'",
            "method.response.header.Access-Control-Allow-Credentials": "'true'"
        }
    }]
)

You can define models for your responses (and requests):

# We define the JSON Schema for the transformed valid response
response_model = api.add_model("ResponseModel",
    content_type="application/json",
    model_name="ResponseModel",
    schema={
        "$schema": "http://json-schema.org/draft-04/schema#",
        "title": "pollResponse",
        "type": "object",
        "properties": {"state": {"type": "string"}, "greeting": {"type": "string"}}
    }
)

# We define the JSON Schema for the transformed error response
error_response_model = api.add_model("ErrorResponseModel",
    content_type="application/json",
    model_name="ErrorResponseModel",
    schema={
        "$schema": "http://json-schema.org/draft-04/schema#",
        "title": "errorResponse",
        "type": "object",
        "properties": {"state": {"type": "string"}, "message": {"type": "string"}}
    }
)

And reference them in your method definition:

# If you want to define parameter mappings for the request, you need a validator
validator = api.add_request_validator("DefaultValidator",
    validate_request_body=False,
    validate_request_parameters=True
)
resource.add_method("GET", integration,
    # We can mark the parameters as required
    request_parameters={
        "method.request.querystring.who": True
    },
    # We need to set the validator for ensuring they are passed
    request_validator=validator,
    method_responses=[{
        # Successful response from the integration
        "status_code": "200",
        # Define what parameters are allowed or not
        "response_parameters": {
            "method.response.header.Content-Type": True,
            "method.response.header.Access-Control-Allow-Origin": True,
            "method.response.header.Access-Control-Allow-Credentials": True
        },
        # Validate the schema on the response
        "response_models": {
            "application/json": response_model
        }
    }, {
        # Same thing for the error responses
        "status_code": "400",
        "response_parameters": {
            "method.response.header.Content-Type": True,
            "method.response.header.Access-Control-Allow-Origin": True,
            "method.response.header.Access-Control-Allow-Credentials": True
        },
        "response_models": {
            "application/json": error_response_model
        }
    }]
)

Default Integration and Method Options

The defaultIntegration and defaultMethodOptions properties can be used to configure a default integration at any resource level. These options will be used when defining methods under this resource (recursively) with undefined integration or options.

If not defined, the default integration is MockIntegration. See reference documentation for default method options.

The following example defines the booksBackend integration as a default integration. This means that all API methods that do not explicitly define an integration will be routed to this AWS Lambda function.

books_backend = apigateway.LambdaIntegration(...)
api = apigateway.RestApi(self, "books",
    default_integration=books_backend
)

books = api.root.add_resource("books")
books.add_method("GET")   # integrated with `booksBackend`
books.add_method("POST")  # integrated with `booksBackend`

book = books.add_resource("{book_id}")
book.add_method("GET")
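
The defaultMethodOptions property works the same way. A minimal sketch that also requires IAM authorization on every method by default (the authorization type here is just an illustrative choice):

api = apigateway.RestApi(self, "books",
    default_integration=books_backend,
    default_method_options={
        "authorization_type": apigateway.AuthorizationType.IAM
    }
)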

Proxy Routes

The addProxy method can be used to install a greedy {proxy+} resource on a path. By default, this also installs an "ANY" method:

proxy = resource.add_proxy(
    default_integration=LambdaIntegration(handler),

    # "false" will require explicitly adding methods on the `proxy` resource
    any_method=True
)
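
If anyMethod is set to false, you add the methods you want on the returned proxy resource yourself. A minimal sketch, assuming the same resource and handler as above:

proxy = resource.add_proxy(
    default_integration=LambdaIntegration(handler),
    any_method=False
)
proxy.add_method("GET")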

Deployments

By default, the RestApi construct will automatically create an API Gateway Deployment and a "prod" Stage which represent the API configuration you defined in your CDK app. This means that when you deploy your app, your API will have open access from the internet via the stage URL.

The URL of your API can be obtained from the attribute restApi.url, and is also exported as an Output from your stack, so it's printed when you cdk deploy your app:

$ cdk deploy
...
books.booksapiEndpointE230E8D5 = https://6lyktd4lpk.execute-api.us-east-1.amazonaws.com/prod/

To disable this behavior, you can set { deploy: false } when creating your API. This means that the API will not be deployed and a stage will not be created for it. You will need to manually define apigateway.Deployment and apigateway.Stage resources.
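
A minimal sketch of what that could look like (the stage name here is illustrative):

api = apigateway.RestApi(self, "books", deploy=False)
api.root.add_method("ANY")

deployment = apigateway.Deployment(self, "Deployment", api=api)
apigateway.Stage(self, "beta",
    deployment=deployment,
    stage_name="beta"
)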

Use the deployOptions property to customize the deployment options of your API.

The following example will configure API Gateway to emit logs and data traces to AWS CloudWatch for all API calls:

By default, an IAM role will be created and associated with API Gateway to allow it to write logs and metrics to AWS CloudWatch unless cloudWatchRole is set to false.

api = apigateway.RestApi(self, "books",
    deploy_options={
        "logging_level": apigateway.MethodLoggingLevel.INFO,
        "data_trace_enabled": True
    }
)

Deeper dive: invalidation of deployments

API Gateway deployments are an immutable snapshot of the API. This means that we want to automatically create a new deployment resource every time the API model defined in our CDK app changes.

In order to achieve that, the AWS CloudFormation logical ID of the AWS::ApiGateway::Deployment resource is dynamically calculated by hashing the API configuration (resources, methods). This means that when the configuration changes (i.e. a resource or method are added, configuration is changed), a new logical ID will be assigned to the deployment resource. This will cause CloudFormation to create a new deployment resource.

By default, old deployments are deleted. You can set retainDeployments: true to allow users to manually revert the stage to an old deployment.
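
A minimal sketch of enabling that flag on the same books API as above:

api = apigateway.RestApi(self, "books",
    retain_deployments=True
)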

Custom Domains

To associate an API with a custom domain, use the domainName configuration when you define your API:

api = apigw.RestApi(self, "MyDomain",
    domain_name={
        "domain_name": "example.com",
        "certificate": acm_certificate_for_example_com
    }
)

This will define a DomainName resource for you, along with a BasePathMapping from the root of the domain to the deployment stage of the API. This is a common setup.

To route domain traffic to an API Gateway API, use Amazon Route 53 to create an alias record. An alias record is a Route 53 extension to DNS. It's similar to a CNAME record, but you can create an alias record both for the root domain, such as example.com, and for subdomains, such as www.example.com. (You can create CNAME records only for subdomains.)

route53.ARecord(self, "CustomDomainAliasRecord",
    zone=hosted_zone_for_example_com,
    target=route53.AddressRecordTarget.from_alias(route53_targets.ApiGateway(api))
)

You can also define a DomainName resource directly in order to customize the default behavior:

apigw.DomainName(self, "custom-domain",
    domain_name="example.com",
    certificate=acm_certificate_for_example_com,
    endpoint_type=apigw.EndpointType.EDGE
)

Once you have a domain, you can map base paths of the domain to APIs. The following example will map the URL https://example.com/go-to-api1 to the api1 API and https://example.com/boom to the api2 API.

domain.add_base_path_mapping(api1, base_path="go-to-api1")
domain.add_base_path_mapping(api2, base_path="boom")

NOTE: currently, the mapping will always be assigned to the API's deploymentStage, which is automatically assigned to the latest API deployment. Raise a GitHub issue if you require more granular control over mapping base paths to stages.

If you don't specify basePath, all URLs under this domain will be mapped to the API, and you won't be able to map another API to the same domain:

domain.add_base_path_mapping(api)

This can also be achieved through the mapping configuration when defining the domain as demonstrated above.

If you wish to set up this domain with an Amazon Route53 alias, use the route53_targets.ApiGatewayDomain:

route53.ARecord(self, "CustomDomainAliasRecord",
    zone=hosted_zone_for_example_com,
    target=route53.AddressRecordTarget.from_alias(route53_targets.ApiGatewayDomain(domain_name))
)

This module is part of the AWS Cloud Development Kit project.

AWS Auto Scaling Construct Library

Stability: Stable


Application AutoScaling is used to configure autoscaling for all services other than scaling EC2 instances. For example, you will use this to scale ECS tasks, DynamoDB capacity, Spot Fleet sizes and more.

As a CDK user, you will probably not have to interact with this library directly; instead, it will be used by other construct libraries to offer AutoScaling features for their own constructs.

This document will describe the general autoscaling features and concepts; your particular service may offer only a subset of these.

AutoScaling basics

Resources can offer one or more attributes to autoscale, typically representing some capacity dimension of the underlying service. For example, a DynamoDB Table offers autoscaling of the read and write capacity of the table proper and its Global Secondary Indexes, an ECS Service offers autoscaling of its task count, an RDS Aurora cluster offers scaling of its replica count, and so on.

When you enable autoscaling for an attribute, you specify a minimum and a maximum value for the capacity. AutoScaling policies that respond to metrics will never go higher or lower than the indicated capacity (but scheduled scaling actions might, see below).

There are three ways to scale your capacity:

  • In response to a metric (also known as step scaling); for example, you might want to scale out if the CPU usage across your cluster starts to rise, and scale in when it drops again.
  • By trying to keep a certain metric around a given value (also known as target tracking scaling); you might want to automatically scale out and in to keep your CPU usage around 50%.
  • On a schedule; you might want to organize your scaling around traffic flows you expect, by scaling out in the morning and scaling in in the evening.

The general pattern of autoscaling will look like this:

capacity = resource.auto_scale_capacity(
    min_capacity=5,
    max_capacity=100
)

# Enable a type of metric scaling and/or schedule scaling
capacity.scale_on_metric(...)
capacity.scale_to_track_metric(...)
capacity.scale_on_schedule(...)

Step Scaling

This type of scaling scales in and out in deterministic steps that you configure, in response to metric values. For example, your scaling strategy to scale in response to CPU usage might look like this:

 Scaling        -1          (no change)          +1       +3
            │        │                       │        │        │
            ├────────┼───────────────────────┼────────┼────────┤
            │        │                       │        │        │
CPU usage   0%      10%                     50%       70%     100%

(Note that this is not necessarily a recommended scaling strategy, but it's a possible one. You will have to determine what thresholds are right for you).

You would configure it like this:

capacity.scale_on_metric("ScaleToCPU",
    metric=service.metric_cpu_utilization(),
    scaling_steps=[
        {"upper": 10, "change": -1},
        {"lower": 50, "change": +1},
        {"lower": 70, "change": +3}
    ],

    # Change this to AdjustmentType.PERCENT_CHANGE_IN_CAPACITY to interpret the
    # 'change' numbers above as percentages instead of capacity counts.
    adjustment_type=autoscaling.AdjustmentType.CHANGE_IN_CAPACITY
)

The AutoScaling construct library will create the required CloudWatch alarms and AutoScaling policies for you.

Target Tracking Scaling

This type of scaling scales in and out in order to keep a metric (typically representing utilization) around a value you prefer. This type of scaling is typically heavily service-dependent in what metric you can use, and so different services will have different methods here to set up target tracking scaling.

The following example configures the read capacity of a DynamoDB table to be around 60% utilization:

read_capacity = table.auto_scale_read_capacity(
    min_capacity=10,
    max_capacity=1000
)
read_capacity.scale_on_utilization(
    target_utilization_percent=60
)

Scheduled Scaling

This type of scaling is used to change capacities based on time. It works by changing the minCapacity and maxCapacity of the attribute, and so can be used for two purposes:

  • Scale in and out on a schedule by setting the minCapacity high or the maxCapacity low.
  • Still allow the regular scaling actions to do their job, but restrict the range they can scale over (by setting both minCapacity and maxCapacity but changing their range over time).

The following schedule expressions can be used:

  • at(yyyy-mm-ddThh:mm:ss) -- scale at a particular moment in time
  • rate(value unit) -- scale every minute/hour/day
  • cron(mm hh dd mm dow) -- scale on arbitrary schedules

Of these, the cron expression is the most useful but also the most complicated. The Schedule class has a cron method to help build cron expressions.

The following example scales the fleet out in the morning, and lets natural scaling take over at night:

capacity = resource.auto_scale_capacity(
    min_capacity=1,
    max_capacity=50
)

capacity.scale_on_schedule("PrescaleInTheMorning",
    schedule=autoscaling.Schedule.cron(hour="8", minute="0"),
    min_capacity=20
)

capacity.scale_on_schedule("AllowDownscalingAtNight",
    schedule=autoscaling.Schedule.cron(hour="20", minute="0"),
    min_capacity=1
)

AWS App Mesh Construct Library

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.


AWS App Mesh is a service mesh based on the Envoy proxy that makes it easy to monitor and control microservices. App Mesh standardizes how your microservices communicate, giving you end-to-end visibility and helping to ensure high-availability for your applications.

App Mesh gives you consistent visibility and network traffic controls for every microservice in an application.

App Mesh supports microservice applications that use service discovery naming for their components. To use App Mesh, you must have an existing application running on AWS Fargate, Amazon ECS, Amazon EKS, Kubernetes on AWS, or Amazon EC2.

For further information on AWS App Mesh, visit the AWS Docs for AppMesh.

Create the App and Stack

app = cdk.App()
stack = cdk.Stack(app, "stack")

Creating the Mesh

A service mesh is a logical boundary for network traffic between the services that reside within it.

After you create your service mesh, you can create virtual services, virtual nodes, virtual routers, and routes to distribute traffic between the applications in your mesh.

The following example creates the AppMesh service mesh with the default egress filter of DROP_ALL. See the AWS docs for more info on egress filters.

mesh = Mesh(stack, "AppMesh",
    name="myAwsmMesh"
)

The mesh can also be created with the "ALLOW_ALL" egress filter by overwriting the property.

mesh = Mesh(stack, "AppMesh",
    name="myAwsmMesh",
    mesh_spec={
        "egress_filter": appmesh.MeshFilterType.ALLOW_ALL
    }
)

Adding VirtualRouters

The Mesh needs VirtualRouters as logical units to route traffic to VirtualNodes.

Virtual routers handle traffic for one or more virtual services within your mesh. After you create a virtual router, you can create and associate routes for your virtual router that direct incoming requests to different virtual nodes.

router = mesh.add_virtual_router("router",
    port_mappings=[{
        "port": 8081,
        "protocol": appmesh.Protocol.HTTP
    }]
)

The router can also be created using the constructor and passing in the mesh instead of calling the addVirtualRouter() method for the mesh.

mesh = Mesh(stack, "AppMesh",
    name="myAwsmMesh",
    mesh_spec={
        "egress_filter": appmesh.MeshFilterType.ALLOW_ALL
    }
)

router = appmesh.VirtualRouter(stack, "router",
    mesh=mesh, # mesh is a required property when creating a router directly
    port_mappings=[{
        "port": 8081,
        "protocol": appmesh.Protocol.HTTP
    }]
)

The listener protocol can be either HTTP or TCP.

The same pattern applies to all constructs within the appmesh library: for any mesh.addXYZ() method, the corresponding constructor can be used instead. This is particularly useful when cross-stack resources are required, for example when the mesh is created as part of an infrastructure stack while resources such as nodes are kept in the application stack.

Adding VirtualService

A virtual service is an abstraction of a real service that is provided by a virtual node directly or indirectly by means of a virtual router. Dependent services call your virtual service by its virtualServiceName, and those requests are routed to the virtual node or virtual router that is specified as the provider for the virtual service.

We recommend that you use the service discovery name of the real service that you're targeting (such as my-service.default.svc.cluster.local).

When creating a virtual service:

  • If you want the virtual service to spread traffic across multiple virtual nodes, specify a Virtual router.
  • If you want the virtual service to reach a virtual node directly, without a virtual router, specify a Virtual node.

Adding a virtual router as the provider:

mesh.add_virtual_service("virtual-service",
    virtual_router=router,
    virtual_service_name="my-service.default.svc.cluster.local"
)

Adding a virtual node as the provider:

mesh.add_virtual_service("virtual-service",
    virtual_node=node,
    virtual_service_name="my-service.default.svc.cluster.local"
)

Note that only one of virtualNode or virtualRouter may be specified.

Adding a VirtualNode

A virtual node acts as a logical pointer to a particular task group, such as an Amazon ECS service or a Kubernetes deployment.

Virtual node logical diagram

When you create a virtual node, you must specify the DNS service discovery hostname for your task group. Any inbound traffic that your virtual node expects should be specified as a listener. Any outbound traffic that your virtual node expects to reach should be specified as a backend.

The response metadata for your new virtual node contains the Amazon Resource Name (ARN) that is associated with the virtual node. Set this value (either the full ARN or the truncated resource name) as the APPMESH_VIRTUAL_NODE_NAME environment variable for your task group's Envoy proxy container in your task definition or pod spec. For example, the value could be mesh/default/virtualNode/simpleapp. This is then mapped to the node.id and node.cluster Envoy parameters.

Note If you require your Envoy stats or tracing to use a different name, you can override the node.cluster value that is set by APPMESH_VIRTUAL_NODE_NAME with the APPMESH_VIRTUAL_NODE_CLUSTER environment variable.

vpc = ec2.Vpc(stack, "vpc")
namespace = cloudmap.PrivateDnsNamespace(stack, "test-namespace",
    vpc=vpc,
    name="domain.local"
)

node = mesh.add_virtual_node("virtual-node",
    hostname="node-a",
    namespace=namespace,
    listeners={
        "port_mappings": [{
            "port": 8081,
            "protocol": appmesh.Protocol.HTTP
        }],
        "health_checks": [{
            "healthy_threshold": 3,
            "interval_millis": 5000, # minimum
            "path": "/health-check-path",
            "port": 8080,
            "protocol": appmesh.Protocol.HTTP,
            "timeout_millis": 2000, # minimum
            "unhealthy_threshold": 2
        }]
    }
)

Create a VirtualNode with the constructor and add tags.

node = appmesh.VirtualNode(stack, "node",
    mesh=mesh,
    hostname="node-1",
    namespace=namespace,
    listener={
        "port_mappings": [{
            "port": 8080,
            "protocol": appmesh.Protocol.HTTP
        }],
        "health_checks": [{
            "healthy_threshold": 3,
            "interval_millis": 5000, # min
            "path": "/ping",
            "port": 8080,
            "protocol": appmesh.Protocol.HTTP,
            "timeout_millis": 2000, # min
            "unhealthy_threshold": 2
        }]
    }
)

node.node.apply(cdk.Tag("Environment", "Dev"))

The listeners property can be left blank and added later with the mesh.addListeners() method. The healthcheck property is optional, but if a listener is specified, portMappings must contain at least one entry.

Adding a Route

A route is associated with a virtual router, and it's used to match requests for a virtual router and distribute traffic accordingly to its associated virtual nodes.

You can use the prefix parameter in your route specification for path-based routing of requests. For example, if your virtual service name is my-service.local and you want the route to match requests to my-service.local/metrics, your prefix should be /metrics.

If your route matches a request, you can distribute traffic to one or more target virtual nodes with relative weighting.

router.add_route("route",
    route_targets=[{
        "virtual_node": virtual_node,
        "weight": 1
    }],
    prefix="/path-to-app",
    is_http_route=True
)

Add a single route with multiple targets and split traffic 50/50:

router.add_route("route",
    route_targets=[{
        "virtual_node": virtual_node,
        "weight": 50
    }, {
        "virtual_node": virtual_node2,
        "weight": 50
    }],
    prefix="/path-to-app",
    is_http_route=True
)

Multiple routes may also be added at once to different applications or targets.

ratings_router.add_routes(["route1", "route2"], [
    {
        "route_targets": [{
            "virtual_node": virtual_node,
            "weight": 1
        }],
        "prefix": "/path-to-app",
        "is_http_route": True
    },
    {
        "route_targets": [{
            "virtual_node": virtual_node2,
            "weight": 1
        }],
        "prefix": "/path-to-app2",
        "is_http_route": True
    }
])

The number of route ids and route targets must match as each route needs to have a unique name per router.

Amazon EC2 Auto Scaling Construct Library

Stability: Stable


This module is part of the AWS Cloud Development Kit project.

Fleet

Auto Scaling Group

An AutoScalingGroup represents a number of instances on which you run your code. You pick the size of the fleet, the instance type and the OS image:

import aws_cdk.aws_autoscaling as autoscaling
import aws_cdk.aws_ec2 as ec2

autoscaling.AutoScalingGroup(self, "ASG",
    vpc=vpc,
    instance_type=ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.MICRO),
    machine_image=ec2.AmazonLinuxImage()
)

NOTE: AutoScalingGroup has a property called allowAllOutbound (allowing the instances to contact the internet), which is set to true by default. Be sure to set this to false if you don't want your instances to be able to start arbitrary connections.
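
A minimal sketch of turning that flag off, using the same fleet as above:

autoscaling.AutoScalingGroup(self, "ASG",
    vpc=vpc,
    instance_type=ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.MICRO),
    machine_image=ec2.AmazonLinuxImage(),
    allow_all_outbound=False
)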

Machine Images (AMIs)

AMIs control the OS that gets launched when you start your EC2 instance. The EC2 library contains constructs to select the AMI you want to use.

Depending on the type of AMI, you select it in a different way.

The latest version of Amazon Linux and Microsoft Windows images are selectable by instantiating one of these classes:

# Pick a Windows edition to use
windows = ec2.WindowsImage(ec2.WindowsVersion.WINDOWS_SERVER_2019_ENGLISH_FULL_BASE)

# Pick the right Amazon Linux edition. All arguments shown are optional
# and will default to these values when omitted.
amzn_linux = ec2.AmazonLinuxImage(
    generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX,
    edition=ec2.AmazonLinuxEdition.STANDARD,
    virtualization=ec2.AmazonLinuxVirt.HVM,
    storage=ec2.AmazonLinuxStorage.GENERAL_PURPOSE
)

# For other custom (Linux) images, instantiate a `GenericLinuxImage` with
# a map giving the AMI to use for each region:

linux = ec2.GenericLinuxImage({
    "us-east-1": "ami-97785bed",
    "eu-west-1": "ami-12345678"
})

NOTE: The Amazon Linux images selected will be cached in your cdk.json, so that your AutoScalingGroups don't automatically change out from under you when you're making unrelated changes. To update to the latest version of Amazon Linux, remove the cache entry from the context section of your cdk.json.

We will add command-line options to make this step easier in the future.

AutoScaling Instance Counts

AutoScalingGroups make it possible to raise and lower the number of instances in the group, in response to (or in advance of) changes in workload.

When you create your AutoScalingGroup, you specify a minCapacity and a maxCapacity. AutoScaling policies that respond to metrics will never go higher or lower than the indicated capacity (but scheduled scaling actions might, see below).

There are three ways to scale your capacity:

  • In response to a metric (also known as step scaling); for example, you might want to scale out if the CPU usage across your cluster starts to rise, and scale in when it drops again.
  • By trying to keep a certain metric around a given value (also known as target tracking scaling); you might want to automatically scale out and in to keep your CPU usage around 50%.
  • On a schedule; you might want to organize your scaling around traffic flows you expect, by scaling out in the morning and scaling in in the evening.

The general pattern of autoscaling will look like this:

auto_scaling_group = autoscaling.AutoScalingGroup(self, "ASG",
    min_capacity=5,
    max_capacity=100
)

# Step scaling
auto_scaling_group.scale_on_metric(...)

# Target tracking scaling
auto_scaling_group.scale_on_cpu_utilization(...)
auto_scaling_group.scale_on_incoming_bytes(...)
auto_scaling_group.scale_on_outgoing_bytes(...)
auto_scaling_group.scale_on_request_count(...)
auto_scaling_group.scale_to_track_metric(...)

# Scheduled scaling
auto_scaling_group.scale_on_schedule(...)

Step Scaling

This type of scaling scales in and out in deterministic steps that you configure, in response to metric values. For example, your scaling strategy to scale in response to a metric that represents your average worker pool usage might look like this:

 Scaling        -1          (no change)          +1       +3
            │        │                       │        │        │
            ├────────┼───────────────────────┼────────┼────────┤
            │        │                       │        │        │
Worker use  0%      10%                     50%       70%     100%

(Note that this is not necessarily a recommended scaling strategy, but it's a possible one. You will have to determine what thresholds are right for you).

Note that in order to set up this scaling strategy, you will have to emit a metric representing your worker utilization from your instances. After that, you would configure the scaling something like this:

import aws_cdk.aws_cloudwatch as cloudwatch

worker_utilization_metric = cloudwatch.Metric(
    namespace="MyService",
    metric_name="WorkerUtilization"
)

auto_scaling_group.scale_on_metric("ScaleToCPU",
    metric=worker_utilization_metric,
    scaling_steps=[
        {"upper": 10, "change": -1},
        {"lower": 50, "change": +1},
        {"lower": 70, "change": +3}
    ],

    # Change this to AdjustmentType.PERCENT_CHANGE_IN_CAPACITY to interpret the
    # 'change' numbers above as percentages instead of capacity counts.
    adjustment_type=autoscaling.AdjustmentType.CHANGE_IN_CAPACITY
)

The AutoScaling construct library will create the required CloudWatch alarms and AutoScaling policies for you.

Target Tracking Scaling

This type of scaling scales in and out in order to keep a metric around a value you prefer. There are four types of predefined metrics you can track, or you can choose to track a custom metric. If you do choose to track a custom metric, be aware that the metric has to represent instance utilization in some way (AutoScaling will scale out if the metric is higher than the target, and scale in if the metric is lower than the target).

If you configure multiple target tracking policies, AutoScaling will use the one that yields the highest capacity.

The following example scales to keep the CPU usage of your instances around 50% utilization:

auto_scaling_group.scale_on_cpu_utilization("KeepSpareCPU",
    target_utilization_percent=50
)

To scale on average network traffic in and out of your instances:

auto_scaling_group.scale_on_incoming_bytes("LimitIngressPerInstance",
    target_bytes_per_second=10 * 1024 * 1024
)
auto_scaling_group.scale_on_outgoing_bytes("LimitEgressPerInstance",
    target_bytes_per_second=10 * 1024 * 1024
)

To scale on the average request count per instance (only works for AutoScalingGroups that have been attached to Application Load Balancers):

auto_scaling_group.scale_on_request_count("LimitRPS",
    target_requests_per_second=1000
)

Scheduled Scaling

This type of scaling is used to change capacities based on time. It works by changing minCapacity, maxCapacity and desiredCapacity of the AutoScalingGroup, and so can be used for two purposes:

  • Scale in and out on a schedule by setting the minCapacity high or the maxCapacity low.
  • Still allow the regular scaling actions to do their job, but restrict the range they can scale over (by setting both minCapacity and maxCapacity but changing their range over time).

A schedule is expressed as a cron expression. The Schedule class has a cron method to help build cron expressions.

The following example scales the fleet out in the morning, going back to natural scaling (all the way down to 1 instance if necessary) at night:

auto_scaling_group.scale_on_schedule("PrescaleInTheMorning",
    schedule=autoscaling.Schedule.cron(hour="8", minute="0"),
    min_capacity=20
)

auto_scaling_group.scale_on_schedule("AllowDownscalingAtNight",
    schedule=autoscaling.Schedule.cron(hour="20", minute="0"),
    min_capacity=1
)

Allowing Connections

See the documentation of the @aws-cdk/aws-ec2 package for more information about allowing connections between resources backed by instances.

Future work

  • CloudWatch Events (impossible to add currently as the AutoScalingGroup ARN is necessary to make this rule and this cannot be accessed from CloudFormation).

Amazon Certificate Manager Construct Library

Stability: Stable


This package provides Constructs for provisioning and referencing certificates which can be used in CloudFront and ELB.

The following requests a certificate for a given domain:

cert = certmgr.Certificate(self, "Certificate",
    domain_name="example.com"
)

After requesting a certificate, you will need to prove that you own the domain in question before the certificate will be granted. The CloudFormation deployment will wait until this verification process has been completed.

Because of this wait time, it's better to provision your certificates either in a separate stack from your main service, or provision them manually and import them into your CDK application.

The CDK also provides a custom resource which can be used for automatic validation if the DNS records for the domain are managed through Route53 (see below).

Email validation

Email-validated certificates (the default) are validated by receiving an email on one of a number of predefined domains and following the instructions in the email.

See Validate with Email in the Amazon Certificate Manager User Guide.

DNS validation

DNS-validated certificates are validated by configuring appropriate DNS records for your domain.

See Validate with DNS in the Amazon Certificate Manager User Guide.

Automatic DNS-validated certificates using Route53

The DnsValidatedCertificate class provides a Custom Resource by which you can request a TLS certificate from AWS Certificate Manager that is automatically validated using a cryptographically secure DNS record. For this to work, there must be a Route 53 public zone that is responsible for serving records under the Domain Name of the requested certificate. For example, if you request a certificate for www.example.com, there must be a Route 53 public zone example.com that provides authoritative records for the domain.

Example:

hosted_zone = route53.HostedZone.from_lookup(self, "HostedZone",
    domain_name="example.com",
    private_zone=False
)

certificate = certmgr.DnsValidatedCertificate(self, "TestCertificate",
    domain_name="test.example.com",
    hosted_zone=hosted_zone
)

Importing

If you want to import an existing certificate, you can do so from its ARN:

arn = "arn:aws:..."
certificate = Certificate.from_certificate_arn(self, "Certificate", arn)

Sharing between Stacks

To share the certificate between stacks in the same CDK application, simply pass the Certificate object between the stacks.
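
A minimal sketch of that pattern (the stack names and the certificate attribute are illustrative):

import aws_cdk.core as core
import aws_cdk.aws_certificatemanager as certmgr

class CertStack(core.Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)
        self.certificate = certmgr.Certificate(self, "Certificate",
            domain_name="example.com"
        )

class ServiceStack(core.Stack):
    def __init__(self, scope, id, *, certificate, **kwargs):
        super().__init__(scope, id, **kwargs)
        # use `certificate` here, e.g. on a load balancer listener or a CloudFront distribution

app = core.App()
cert_stack = CertStack(app, "CertStack")
ServiceStack(app, "ServiceStack", certificate=cert_stack.certificate)
app.synth()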

AWS CloudFormation Construct Library

Stability: Stable


This module is part of the AWS Cloud Development Kit project.

Custom Resources

Custom Resources are CloudFormation resources that are implemented by arbitrary user code. They can do arbitrary lookups or modifications during a CloudFormation synthesis run.

You will typically use AWS Lambda to implement a Construct as a Custom Resource (though SNS topics can be used as well). Your Lambda function will be sent a CREATE, UPDATE or DELETE message, depending on the CloudFormation life cycle. It will perform whatever actions it needs to, and then return any number of output values which will be available as attributes of your Construct. In turn, those can be used as input to other Constructs in your model.

In general, consumers of your Construct will not need to care whether it is implemented in terms of other CloudFormation resources or as a custom resource.

Note: when implementing your Custom Resource using a Lambda, use a SingletonLambda so that even if your custom resource is instantiated multiple times, the Lambda will only get uploaded once.

Example

The following shows an example of declaring a Custom Resource that copies files into an S3 bucket during deployment (the implementation of the actual Lambda handler is elided for brevity).

import aws_cdk.aws_lambda as lambda_
from aws_cdk.core import Construct, Duration
from aws_cdk.aws_cloudformation import CustomResource, CustomResourceProvider

class CopyOperation(Construct):
    def __init__(self, parent, name, *, source_bucket, target_bucket):
        super().__init__(parent, name)

        lambda_provider = lambda_.SingletonFunction(self, "Provider",
            uuid="f7d4f730-4ee1-11e8-9c2d-fa7ae01bbebc",
            runtime=lambda_.Runtime.PYTHON_3_7,
            code=lambda_.Code.from_asset("../copy-handler"),
            handler="index.handler",
            timeout=Duration.seconds(60)
        )

        CustomResource(self, "Resource",
            # `lambda` is a reserved word in Python, so jsii exposes it as `lambda_`
            provider=CustomResourceProvider.lambda_(lambda_provider),
            properties={
                "source_bucket_arn": source_bucket.bucket_arn,
                "target_bucket_arn": target_bucket.bucket_arn
            }
        )
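
Using the construct from a stack could then look something like this (the bucket and construct names are illustrative):

import aws_cdk.aws_s3 as s3
from aws_cdk.core import Stack

class MyStack(Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)

        source_bucket = s3.Bucket(self, "SourceBucket")
        target_bucket = s3.Bucket(self, "TargetBucket")

        CopyOperation(self, "CopyFiles",
            source_bucket=source_bucket,
            target_bucket=target_bucket
        )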

The aws-cdk-examples repository has examples for adding custom resources.

References

See the AWS CloudFormation documentation on custom resources for details on how to write them.

Amazon CloudFront Construct Library

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.


A CloudFront construct - for setting up the AWS CDN with ease!

Example usage:

source_bucket = Bucket(self, "Bucket")

distribution = CloudFrontWebDistribution(self, "MyDistribution",
    origin_configs=[{
        "s3_origin_source": {
            "s3_bucket_source": source_bucket
        },
        "behaviors": [{"is_default_behavior": True}]
    }]
)

AWS CloudTrail Construct Library

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.


Add a CloudTrail construct - for ease of setting up CloudTrail logging in your account

Example usage:

import aws_cdk.aws_cloudtrail as cloudtrail

trail = cloudtrail.Trail(self, "CloudTrail")

You can instantiate the CloudTrail construct with no arguments - this will by default:

  • Create a new S3 Bucket and associated Policy that allows CloudTrail to write to it
  • Create a CloudTrail with the following configuration:
      • Logging Enabled
      • Log file validation enabled
      • Multi Region set to true
      • Global Service Events set to true
      • The created S3 bucket
      • CloudWatch Logging Disabled
      • No SNS configuration
      • No tags
      • No fixed name

You can override any of these properties using the CloudTrailProps configuration object.

For example, to log to CloudWatch Logs:

import aws_cdk.aws_cloudtrail as cloudtrail

trail = cloudtrail.Trail(self, "CloudTrail",
    send_to_cloud_watch_logs=True
)

This creates the same setup as above - but also logs events to a created CloudWatch Log stream. By default, the created log group has a retention period of 365 Days, but this is also configurable.

For using CloudTrail event selector to log specific S3 events, you can use the CloudTrailProps configuration object. Example:

import aws_cdk.aws_cloudtrail as cloudtrail

trail = cloudtrail.Trail(self, "MyAmazingCloudTrail")

# Adds an event selector to the bucket magic-bucket.
# By default, this includes management events and all operations (Read + Write)
trail.add_s3_event_selector(["arn:aws:s3:::magic-bucket/"])

# Adds an event selector to the bucket foo, with a specific configuration
trail.add_s3_event_selector(["arn:aws:s3:::foo/"],
    include_management_events=False,
    read_write_type=cloudtrail.ReadWriteType.ALL
)

Amazon CloudWatch Construct Library

Stability: Stable


Metric objects represent a metric that is emitted by AWS services or your own application, such as CPUUsage, FailureCount or Bandwidth.

Metric objects can be constructed directly or are exposed by resources as attributes. Resources that expose metrics will have functions that look like metricXxx() which will return a Metric object, initialized with defaults that make sense.

For example, lambda.Function objects have the fn.metricErrors() method, which represents the amount of errors reported by that Lambda function:

errors = fn.metric_errors()

Aggregation

To graph or alarm on metrics you must aggregate them first, using a function like Average or a percentile function like P99. By default, most Metric objects returned by CDK libraries will be configured as Average over 300 seconds (5 minutes). The exception is if the metric represents a count of discrete events, such as failures. In that case, the Metric object will be configured as Sum over 300 seconds, i.e. it represents the number of times that event occurred over the time period.

If you want to change the default aggregation of the Metric object (for example, the function or the period), you can do so by passing additional parameters to the metric function call:

minute_error_rate = fn.metric_errors(
    statistic="avg",
    period=Duration.minutes(1),
    label="Lambda failure rate"
)

This function also allows changing the metric label or color (which will be useful when embedding them in graphs, see below).

Rates versus Sums

The reason for using Sum to count discrete events is that some events are emitted as either 0 or 1 (for example Errors for a Lambda) and some are only emitted as 1 (for example NumberOfMessagesPublished for an SNS topic).

In case 0-metrics are emitted, it makes sense to take the Average of this metric: the result will be the fraction of errors over all executions.

If 0-metrics are not emitted, the Average will always be equal to 1, and not be very useful.

In order to simplify the mental model of Metric objects, we default to aggregating using Sum, which will be the same for both metrics types. If you happen to know the Metric you want to alarm on makes sense as a rate (Average) you can always choose to change the statistic.

Alarms

Alarms can be created on metrics in one of two ways. Either create an Alarm object, passing the Metric object to set the alarm on:

Alarm(self, "Alarm",
    metric=fn.metric_errors(),
    threshold=100,
    evaluation_periods=2
)

Alternatively, you can call metric.createAlarm():

fn.metric_errors().create_alarm(self, "Alarm",
    threshold=100,
    evaluation_periods=2
)

The most important properties to set when creating an Alarm are the following (see the sketch after this list):

  • threshold: the value to compare the metric against.
  • comparisonOperator: the comparison operation to use, defaults to metric >= threshold.
  • evaluationPeriods: how many consecutive periods the metric has to be breaching the threshold for the alarm to trigger.
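
For example, a minimal sketch that sets these explicitly (ComparisonOperator comes from the aws_cdk.aws_cloudwatch module, and fn is the Lambda function from the earlier examples):

fn.metric_errors().create_alarm(self, "StrictAlarm",
    threshold=100,
    evaluation_periods=2,
    # GREATER_THAN_OR_EQUAL_TO_THRESHOLD is the default; shown here explicitly
    comparison_operator=ComparisonOperator.GREATER_THAN_OR_EQUAL_TO_THRESHOLD
)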

Dashboards

Dashboards are sets of Widgets stored server-side which can be accessed quickly from the AWS console. Available widgets are graphs of a metric over time, the current value of a metric, or a static piece of Markdown which explains what the graphs mean.

The following widgets are available:

  • GraphWidget -- shows any number of metrics on both the left and right vertical axes.
  • AlarmWidget -- shows the graph and alarm line for a single alarm.
  • SingleValueWidget -- shows the current value of a set of metrics.
  • TextWidget -- shows some static Markdown.

Graph widget

A graph widget can display any number of metrics on either the left or right vertical axis:

dashboard.add_widgets(GraphWidget(
    title="Executions vs error rate",

    left=[execution_count_metric],

    right=[error_count_metric.with_(  # `with` is a Python keyword, hence `with_`
        statistic="average",
        label="Error rate",
        color="00FF00"
    )]
))

Alarm widget

An alarm widget shows the graph and the alarm line of a single alarm:

dashboard.add_widgets(AlarmWidget(
    title="Errors",
    alarm=error_alarm
))

Single value widget

A single-value widget shows the latest value of a set of metrics (as opposed to a graph of the value over time):

dashboard.add_widgets(SingleValueWidget(
    metrics=[visitor_count, purchase_count]
))

Text widget

A text widget shows an arbitrary piece of MarkDown. Use this to add explanations to your dashboard:

dashboard.add_widgets(TextWidget(
    markdown="# Key Performance Indicators"
))

Dashboard Layout

The widgets on a dashboard are visually laid out in a grid that is 24 columns wide. Normally you specify X and Y coordinates for the widgets on a Dashboard, but because this is inconvenient to do manually, the library contains a simple layout system to help you lay out your dashboards the way you want them to.

Widgets have a width and height property, and they will be automatically laid out either horizontally or vertically stacked to fill out the available space.

Widgets are added to a Dashboard by calling add(widget1, widget2, ...). Widgets given in the same call will be laid out horizontally. Widgets given in different calls will be laid out vertically. To make more complex layouts, you can use the following widgets to pack widgets together in different ways (see the sketch after this list):

  • Column: stack two or more widgets vertically.
  • Row: lay out two or more widgets horizontally.
  • Spacer: take up empty space.
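
A layout sketch; the widget variable names are assumptions standing in for widgets like the ones defined above:

dashboard.add_widgets(
    Row(
        # a vertical stack of two widgets...
        Column(text_widget, alarm_widget),
        # ...placed next to a third widget
        graph_widget
    )
)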

AWS CodeBuild Construct Library---

Stability: Stable


AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. You can get started quickly by using prepackaged build environments, or you can create custom build environments that use your own build tools. With CodeBuild, you are charged by the minute for the compute resources you use.

Installation

Install the module:

$ npm i @aws-cdk/aws-codebuild

Import it into your code:

import aws_cdk.aws_codebuild as codebuild

The codebuild.Project construct represents a build project resource. See the reference documentation for a comprehensive list of initialization properties, methods and attributes.

Source

Build projects are usually associated with a source, which is specified via the source property which accepts a class that extends the Source abstract base class. The default is to have no source associated with the build project; the buildSpec option is required in that case.

Here's a CodeBuild project with no source which simply prints Hello, CodeBuild!:

codebuild.Project(self, "MyProject",
    build_spec=codebuild.BuildSpec.from_object(
        version="0.2",
        phases={
            "build": {
                "commands": ["echo \"Hello, CodeBuild!\""
                ]
            }
        }
    )
)

CodeCommitSource

Use an AWS CodeCommit repository as the source of this build:

import aws_cdk.aws_codebuild as codebuild
import aws_cdk.aws_codecommit as codecommit

repository = codecommit.Repository(self, "MyRepo", repository_name="foo")
codebuild.Project(self, "MyFirstCodeCommitProject",
    source=codebuild.Source.code_commit(repository=repository)
)

S3Source

Create a CodeBuild project with an S3 bucket as the source:

import aws_cdk.aws_codebuild as codebuild
import aws_cdk.aws_s3 as s3

bucket = s3.Bucket(self, "MyBucket")
codebuild.Project(self, "MyProject",
    source=codebuild.Source.s3(
        bucket=bucket,
        path="path/to/file.zip"
    )
)

GitHubSource and GitHubEnterpriseSource

These source types can be used to build code from a GitHub repository. Example:

git_hub_source = codebuild.Source.git_hub(
    owner="awslabs",
    repo="aws-cdk",
    webhook=True, # optional, default: true if `webhookFilters` were provided, false otherwise
    webhook_filters=[
        codebuild.FilterGroup.in_event_of(codebuild.EventAction.PUSH).and_branch_is("master")
    ]
)

To provide GitHub credentials, please either go to AWS CodeBuild Console to connect or call ImportSourceCredentials to persist your personal access token. Example:

aws codebuild import-source-credentials --server-type GITHUB --auth-type PERSONAL_ACCESS_TOKEN --token <token_value>

BitBucketSource

This source type can be used to build code from a BitBucket repository.

bb_source = codebuild.Source.bit_bucket(
    owner="owner",
    repo="repo"
)

CodePipeline

To add a CodeBuild Project as an Action to CodePipeline, use the PipelineProject class instead of Project. It's a simple class that doesn't allow you to specify sources, secondarySources, artifacts or secondaryArtifacts, as these are handled by setting input and output CodePipeline Artifact instances on the Action, instead of setting them on the Project.

project = codebuild.PipelineProject(self, "Project")

For more details, see the readme of the @aws-cdk/aws-codepipeline package.

Caching

You can save time when your project builds by using a cache. A cache can store reusable pieces of your build environment and use them across multiple builds. Your build project can use one of two types of caching: Amazon S3 or local. In general, S3 caching is a good option for small and intermediate build artifacts that are more expensive to build than to download. Local caching is a good option for large intermediate build artifacts because the cache is immediately available on the build host.

S3 Caching

With S3 caching, the cache is stored in an S3 bucket which is available from multiple hosts.

codebuild.Project(self, "Project",
    source=codebuild.Source.bit_bucket(
        owner="awslabs",
        repo="aws-cdk"
    ),
    cache=codebuild.Cache.bucket(Bucket(self, "Bucket"))
)

Local Caching

With local caching, the cache is stored on the CodeBuild instance itself. This is simple, cheap and fast, but CodeBuild cannot guarantee that the same instance is reused, and hence cannot guarantee cache hits. For example, when a build caches files locally and two subsequent builds start at the same time afterwards, only one of those builds will get the cache. Three different cache modes are supported, which can be turned on individually; they are combined in the example after this list.

  • LocalCacheMode.Source caches Git metadata for primary and secondary sources.
  • LocalCacheMode.DockerLayer caches existing Docker layers.
  • LocalCacheMode.Custom caches directories you specify in the buildspec file.

codebuild.Project(self, "Project",
    source=codebuild.Source.git_hub_enterprise(
        https_clone_url="https://my-github-enterprise.com/owner/repo"
    ),

    # Enable Docker AND custom caching
    cache=codebuild.Cache.local(LocalCacheMode.DockerLayer, LocalCacheMode.Custom)
)

Environment

By default, projects use a small instance with an Ubuntu 18.04 image. You can use the environment property to customize the build environment (the options below are combined in the sketch after this list):

  • buildImage defines the Docker image used. See Images below for details on how to define build images.
  • computeType defines the instance type used for the build.
  • privileged can be set to true to allow privileged access.
  • environmentVariables can be set at this level (and also at the project level).
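
Putting these options together could look like the following sketch (the environment variable name and value are just illustrations):

codebuild.Project(self, "MyCustomizedProject",
    build_spec=codebuild.BuildSpec.from_object(version="0.2"),
    environment={
        "build_image": codebuild.LinuxBuildImage.STANDARD_2_0,
        "compute_type": codebuild.ComputeType.LARGE,
        "privileged": True,
        "environment_variables": {
            "STAGE": codebuild.BuildEnvironmentVariable(value="prod")
        }
    }
)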

Images

The CodeBuild library supports both Linux and Windows images via the LinuxBuildImage and WindowsBuildImage classes, respectively.

You can either specify one of the predefined Windows/Linux images by using one of the constants such as WindowsBuildImage.WIN_SERVER_CORE_2016_BASE or LinuxBuildImage.UBUNTU_14_04_RUBY_2_5_1.

Alternatively, you can specify a custom image using one of the static methods on XxxBuildImage:

  • Use .fromDockerRegistry(image[, { secretsManagerCredentials }]) to reference an image in any public or private Docker registry.
  • Use .fromEcrRepository(repo[, tag]) to reference an image available in an ECR repository.
  • Use .fromAsset(directory) to use an image created from a local asset.

The following example shows how to define an image from a Docker asset:

environment={
    "build_image": codebuild.LinuxBuildImage.from_asset(self, "MyImage",
        directory=os.path.join(os.path.dirname(__file__), "demo-image")  # requires: import os
    )
}

The following example shows how to define an image from an ECR repository:

environment={
    "build_image": codebuild.LinuxBuildImage.from_ecr_repository(ecr_repository, "v1.0")
}

The following example shows how to define an image from a private docker registry:

environment={
    "build_image": codebuild.LinuxBuildImage.from_docker_registry("my-registry/my-repo",
        secrets_manager_credentials=secrets
    )
}

Events

CodeBuild projects can be used either as a source for events or be triggered by events via an event rule.

Using Project as an event target

The @aws-cdk/aws-events-targets.CodeBuildProject class allows using an AWS CodeBuild project as an AWS CloudWatch event rule target:

# start build when a commit is pushed
import aws_cdk.aws_events_targets as targets

code_commit_repository.on_commit("OnCommit", targets.CodeBuildProject(project))

Using Project as an event source

To define Amazon CloudWatch event rules for build projects, use one of the onXxx methods:

rule = project.on_state_change("BuildStateChange",
    target=targets.LambdaFunction(fn)
)

Secondary sources and artifacts

CodeBuild Projects can get their sources from multiple places, and produce multiple outputs. For example:

project = codebuild.Project(self, "MyProject",
    secondary_sources=[
        codebuild.Source.code_commit(
            identifier="source2",
            repository=repo
        )
    ],
    secondary_artifacts=[
        codebuild.Artifacts.s3(
            identifier="artifact2",
            bucket=bucket,
            path="some/path",
            name="file.zip"
        )
    ]
)

Note that the identifier property is required for both secondary sources and artifacts.

The contents of the secondary source are available to the build under the directory specified by the CODEBUILD_SRC_DIR_<identifier> environment variable (so, CODEBUILD_SRC_DIR_source2 in the above case).

The secondary artifacts have their own section in the buildspec, under the regular artifacts one. Each secondary artifact has its own section, beginning with its identifier.

So, a buildspec for the above Project could look something like this:

project = codebuild.Project(self, "MyProject",
    # secondary sources and artifacts as above...
    build_spec=codebuild.BuildSpec.from_object(
        version="0.2",
        phases={
            "build": {
                "commands": ["cd $CODEBUILD_SRC_DIR_source2", "touch output2.txt"]
            }
        },
        artifacts={
            "secondary-artifacts": {
                "artifact2": {
                    "base-directory": "$CODEBUILD_SRC_DIR_source2",
                    "files": ["output2.txt"]
                }
            }
        }
    )
)

Definition of VPC configuration in CodeBuild Project

Typically, resources in a VPC are not accessible by AWS CodeBuild. To enable access, you must provide additional VPC-specific configuration information as part of your CodeBuild project configuration. This includes the VPC ID, the VPC subnet IDs, and the VPC security group IDs. VPC-enabled builds are then able to access resources inside your VPC.

For further information, see https://docs.aws.amazon.com/codebuild/latest/userguide/vpc-support.html

Use Cases

VPC connectivity from AWS CodeBuild builds makes it possible to:

  • Run integration tests from your build against data in an Amazon RDS database that's isolated on a private subnet.
  • Query data in an Amazon ElastiCache cluster directly from tests.
  • Interact with internal web services hosted on Amazon EC2, Amazon ECS, or services that use internal Elastic Load Balancing.
  • Retrieve dependencies from self-hosted, internal artifact repositories, such as PyPI for Python, Maven for Java, and npm for Node.js.
  • Access objects in an Amazon S3 bucket configured to allow access through an Amazon VPC endpoint only.
  • Query external web services that require fixed IP addresses through the Elastic IP address of the NAT gateway or NAT instance associated with your subnet(s).

Your builds can access any resource that's hosted in your VPC.

Enable Amazon VPC Access in your CodeBuild Projects

Pass the VPC when defining your Project, then make sure to give the CodeBuild's security group the right permissions to access the resources that it needs by using the connections object.

For example:

vpc = ec2.Vpc(self, "MyVPC")
project = codebuild.Project(self, "MyProject",
    vpc=vpc,
    build_spec=codebuild.BuildSpec.from_object()
)

project.connections.allow_to(load_balancer, ec2.Port.tcp(443))

AWS CodeCommit Construct Library---

Stability: Stable


AWS CodeCommit is a version control service that enables you to privately store and manage Git repositories in the AWS cloud.

For further information on CodeCommit, see the AWS CodeCommit documentation.

To add a CodeCommit Repository to your stack:

import aws_cdk.aws_codecommit as codecommit

repo = codecommit.Repository(self, "Repository",
    repository_name="MyRepositoryName",
    description="Some description."
)

To add an Amazon SNS trigger to your repository:

# trigger is established for all repository actions on all branches by default.
repo.notify("arn:aws:sns:*:123456789012:my_topic")

Events

CodeCommit repositories emit Amazon CloudWatch events for certain activities. Use the repo.onXxx methods to define rules that trigger on these events and invoke targets as a result:

# starts a CodeBuild project when a commit is pushed to the "master" branch of the repo
repo.on_commit("CommitToMaster",
    target=targets.CodeBuildProject(project),
    branches=["master"]
)

# publishes a message to an Amazon SNS topic when a comment is made on a pull request
rule = repo.on_comment_on_pull_request("CommentOnPullRequest",
    target=targets.SnsTopic(my_topic)
)

AWS CodeDeploy Construct Library---

Stability: Stable


AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services.

The CDK currently supports Amazon EC2, on-premise and AWS Lambda applications.

EC2/on-premise Applications

To create a new CodeDeploy Application that deploys to EC2/on-premise instances:

import aws_cdk.aws_codedeploy as codedeploy

application = codedeploy.ServerApplication(self, "CodeDeployApplication",
    application_name="MyApplication"
)

To import an already existing Application:

application = codedeploy.ServerApplication.from_server_application_name(self, "ExistingCodeDeployApplication", "MyExistingApplication")

EC2/on-premise Deployment Groups

To create a new CodeDeploy Deployment Group that deploys to EC2/on-premise instances:

deployment_group = codedeploy.ServerDeploymentGroup(self, "CodeDeployDeploymentGroup",
    application=application,
    deployment_group_name="MyDeploymentGroup",
    auto_scaling_groups=[asg1, asg2],
    # adds User Data that installs the CodeDeploy agent on your auto-scaling groups' hosts
    # default: true
    install_agent=True,
    # adds EC2 instances matching tags
    ec2_instance_tags=codedeploy.InstanceTagSet({
        # any instance with tags satisfying
        # key1=v1 or key1=v2 or key2 (any value) or value v3 (any key)
        # will match this group
        "key1": ["v1", "v2"],
        "key2": [],
        "": ["v3"]
    }),
    # adds on-premise instances matching tags
    on_premise_instance_tags=codedeploy.InstanceTagSet(
        {"key1": ["v1", "v2"]},
        {"key2": ["v3"]}
    ),
    # CloudWatch alarms
    alarms=[
        cloudwatch.Alarm()
    ],
    # whether to ignore failure to fetch the status of alarms from CloudWatch
    # default: false
    ignore_poll_alarms_failure=False,
    # auto-rollback configuration
    auto_rollback={
        "failed_deployment": True, # default: true
        "stopped_deployment": True, # default: false
        "deployment_in_alarm": True
    }
)

All properties are optional - if you don't provide an Application, one will be automatically created.

To import an already existing Deployment Group:

deployment_group = codedeploy.ServerDeploymentGroup.from_server_deployment_group_attributes(self, "ExistingCodeDeployDeploymentGroup",
    application=application,
    deployment_group_name="MyExistingDeploymentGroup"
)

Load balancers

You can specify a load balancer with the loadBalancer property when creating a Deployment Group.

LoadBalancer is an abstract class with static factory methods that allow you to create instances of it from various sources.

With Classic Elastic Load Balancer, you provide it directly:

import aws_cdk.aws_elasticloadbalancing as lb

elb = lb.LoadBalancer(self, "ELB")
elb.add_target()
elb.add_listener()

deployment_group = codedeploy.ServerDeploymentGroup(self, "DeploymentGroup",
    load_balancer=codedeploy.LoadBalancer.classic(elb)
)

With Application Load Balancer or Network Load Balancer, you provide a Target Group as the load balancer:

import aws_cdk.aws_elasticloadbalancingv2 as lbv2

alb = lbv2.ApplicationLoadBalancer(self, "ALB")
listener = alb.add_listener("Listener")
target_group = listener.add_targets("Fleet")

deployment_group = codedeploy.ServerDeploymentGroup(self, "DeploymentGroup",
    load_balancer=codedeploy.LoadBalancer.application(target_group)
)

Deployment Configurations

You can also pass a Deployment Configuration when creating the Deployment Group:

deployment_group = codedeploy.ServerDeploymentGroup(self, "CodeDeployDeploymentGroup",
    deployment_config=codedeploy.ServerDeploymentConfig.ALL_AT_ONCE
)

The default Deployment Configuration is ServerDeploymentConfig.ONE_AT_A_TIME.

You can also create a custom Deployment Configuration:

deployment_config = codedeploy.ServerDeploymentConfig(self, "DeploymentConfiguration",
    deployment_config_name="MyDeploymentConfiguration", # optional property
    # one of these is required, but both cannot be specified at the same time
    min_healthy_host_count=2,
    min_healthy_host_percentage=75
)

Or import an existing one:

deployment_config = codedeploy.ServerDeploymentConfig.from_server_deployment_config_name(self, "ExistingDeploymentConfiguration", "MyExistingDeploymentConfiguration")

Lambda Applications

To create a new CodeDeploy Application that deploys to a Lambda function:

import aws_cdk.aws_codedeploy as codedeploy

application = codedeploy.LambdaApplication(self, "CodeDeployApplication",
    application_name="MyApplication"
)

To import an already existing Application:

application = codedeploy.LambdaApplication.from_lambda_application_name(self, "ExistingCodeDeployApplication", "MyExistingApplication")

Lambda Deployment Groups

To enable traffic shifting deployments for Lambda functions, CodeDeploy uses Lambda Aliases, which can balance incoming traffic between two different versions of your function. Before deployment, the alias sends 100% of invokes to the version used in production. When you publish a new version of the function to your stack, CodeDeploy will send a small percentage of traffic to the new version, monitor, and validate before shifting 100% of traffic to the new version.

To create a new CodeDeploy Deployment Group that deploys to a Lambda function:

import aws_cdk.aws_codedeploy as codedeploy
import aws_cdk.aws_lambda as lambda_

my_application = codedeploy.LambdaApplication()
func = lambda_.Function()
version = func.add_version("1")
version1_alias = lambda_.Alias(self, "alias",
    alias_name="prod",
    version=version
)

deployment_group = codedeploy.LambdaDeploymentGroup(stack, "BlueGreenDeployment",
    application=my_application, # optional property: one will be created for you if not provided
    alias=version1_alias,
    deployment_config=codedeploy.LambdaDeploymentConfig.LINEAR_10PERCENT_EVERY_1MINUTE
)

In order to deploy a new version of this function:

  1. Increment the version, e.g. version = func.add_version("2") (see the sketch after this list).
  2. Re-deploy the stack (this will trigger a deployment).
  3. Monitor the CodeDeploy deployment as traffic shifts between the versions.
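
Step 1 as code, a sketch of the change to the example above (only the version string changes; the alias keeps its construct ID and name):

version = func.add_version("2")   # was: func.add_version("1")
version1_alias = lambda_.Alias(self, "alias",
    alias_name="prod",
    version=version
)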

Rollbacks and Alarms

CodeDeploy will roll back if the deployment fails. You can optionally trigger a rollback when one or more alarms are in a failed state:

deployment_group = codedeploy.LambdaDeploymentGroup(stack, "BlueGreenDeployment",
    alias=alias,
    deployment_config=codedeploy.LambdaDeploymentConfig.LINEAR_10PERCENT_EVERY_1MINUTE,
    alarms=[
        # pass some alarms when constructing the deployment group
        cloudwatch.Alarm(stack, "Errors",
            comparison_operator=cloudwatch.ComparisonOperator.GREATER_THAN_THRESHOLD,
            threshold=1,
            evaluation_periods=1,
            metric=alias.metric_errors()
        )
    ]
)

# or add alarms to an existing group
deployment_group.add_alarm(cloudwatch.Alarm(stack, "BlueGreenErrors",
    comparison_operator=cloudwatch.ComparisonOperator.GREATER_THAN_THRESHOLD,
    threshold=1,
    evaluation_periods=1,
    metric=blue_green_alias.metric_errors()
))

Pre and Post Hooks

CodeDeploy allows you to run an arbitrary Lambda function before traffic shifting actually starts (PreTraffic Hook) and after it completes (PostTraffic Hook). With either hook, you have the opportunity to run logic that determines whether the deployment must succeed or fail. For example, with PreTraffic hook you could run integration tests against the newly created Lambda version (but not serving traffic). With PostTraffic hook, you could run end-to-end validation checks.

warm_up_user_cache = lambda_.Function()
end_to_end_validation = lambda_.Function()

# pass a hook when creating the deployment group
deployment_group = codedeploy.LambdaDeploymentGroup(stack, "BlueGreenDeployment",
    alias=alias,
    deployment_config=codedeploy.LambdaDeploymentConfig.LINEAR_10PERCENT_EVERY_1MINUTE,
    pre_hook=warm_up_user_cache
)

# or configure one on an existing deployment group
deployment_group.on_post_hook(end_to_end_validation)

Import an existing Deployment Group

To import an already existing Deployment Group:

deployment_group = codedeploy.LambdaDeploymentGroup.from_lambda_deployment_group_attributes(self, "ExistingCodeDeployDeploymentGroup",
    application=application,
    deployment_group_name="MyExistingDeploymentGroup"
)

AWS CodePipeline Actions---

Stability: Stable


This package contains Actions that can be used in a CodePipeline.

import aws_cdk.aws_codepipeline as codepipeline
import aws_cdk.aws_codepipeline_actions as codepipeline_actions

Sources

AWS CodeCommit

To use a CodeCommit Repository in a CodePipeline:

import aws_cdk.aws_codecommit as codecommit

repo = codecommit.Repository(self, "Repo")

pipeline = codepipeline.Pipeline(self, "MyPipeline",
    pipeline_name="MyPipeline"
)
source_output = codepipeline.Artifact()
source_action = codepipeline_actions.CodeCommitSourceAction(
    action_name="CodeCommit",
    repository=repo,
    output=source_output
)
pipeline.add_stage(
    stage_name="Source",
    actions=[source_action]
)

GitHub

To use GitHub as the source of a CodePipeline:

# Read the secret from Secrets Manager
source_output = codepipeline.Artifact()
source_action = codepipeline_actions.GitHubSourceAction(
    action_name="GitHub_Source",
    owner="awslabs",
    repo="aws-cdk",
    oauth_token=cdk.SecretValue.secrets_manager("my-github-token"),
    output=source_output,
    branch="develop", # default: 'master'
    trigger=codepipeline_actions.GitHubTrigger.POLL
)
pipeline.add_stage(
    stage_name="Source",
    actions=[source_action]
)

AWS S3

To use an S3 Bucket as a source in CodePipeline:

import aws_cdk.aws_s3 as s3

source_bucket = s3.Bucket(self, "MyBucket",
    versioned=True
)

pipeline = codepipeline.Pipeline(self, "MyPipeline")
source_output = codepipeline.Artifact()
source_action = codepipeline_actions.S3SourceAction(
    action_name="S3Source",
    bucket=source_bucket,
    bucket_key="path/to/file.zip",
    output=source_output
)
pipeline.add_stage(
    stage_name="Source",
    actions=[source_action]
)

By default, the Pipeline will poll the Bucket to detect changes. You can change that behavior to use CloudWatch Events by setting the trigger property to S3Trigger.EVENTS (it's S3Trigger.POLL by default). If you do that, make sure the source Bucket is part of an AWS CloudTrail Trail - otherwise, the CloudWatch Events will not be emitted, and your Pipeline will not react to changes in the Bucket. You can do it through the CDK:

import aws_cdk.aws_cloudtrail as cloudtrail

key = "some/key.zip"
trail = cloudtrail.Trail(self, "CloudTrail")
trail.add_s3_event_selector([source_bucket.arn_for_objects(key)],
    read_write_type=cloudtrail.ReadWriteType.WRITE_ONLY
)
source_action = codepipeline_actions.S3SourceAction(
    action_name="S3Source",
    bucket_key=key,
    bucket=source_bucket,
    output=source_output,
    trigger=codepipeline_actions.S3Trigger.EVENTS
)

AWS ECR

To use an ECR Repository as a source in a Pipeline:

import aws_cdk.aws_ecr as ecr

pipeline = codepipeline.Pipeline(self, "MyPipeline")
source_output = codepipeline.Artifact()
source_action = codepipeline_actions.EcrSourceAction(
    action_name="ECR",
    repository=ecr_repository,
    image_tag="some-tag", # optional, default: 'latest'
    output=source_output
)
pipeline.add_stage(
    stage_name="Source",
    actions=[source_action]
)

Build & test

AWS CodeBuild

Example of a CodeBuild Project used in a Pipeline, alongside CodeCommit:

import aws_cdk.aws_codebuild as codebuild
import aws_cdk.aws_codecommit as codecommit

repository = codecommit.Repository(self, "MyRepository",
    repository_name="MyRepository"
)
project = codebuild.PipelineProject(self, "MyProject")

source_output = codepipeline.Artifact()
source_action = codepipeline_actions.CodeCommitSourceAction(
    action_name="CodeCommit",
    repository=repository,
    output=source_output
)
build_action = codepipeline_actions.CodeBuildAction(
    action_name="CodeBuild",
    project=project,
    input=source_output,
    outputs=[codepipeline.Artifact()]
)

codepipeline.Pipeline(self, "MyPipeline",
    stages=[{
        "stage_name": "Source",
        "actions": [source_action]
    }, {
        "stage_name": "Build",
        "actions": [build_action]
    }
    ]
)

The default category of the CodeBuild Action is Build; if you want a Test Action instead, override the type property:

test_action = codepipeline_actions.CodeBuildAction(
    action_name="IntegrationTest",
    project=project,
    input=source_output,
    type=codepipeline_actions.CodeBuildActionType.TEST
)

Multiple inputs and outputs

When you want to have multiple inputs and/or outputs for a Project used in a Pipeline, instead of using the secondarySources and secondaryArtifacts properties of the Project class, you need to use the extraInputs and extraOutputs properties of the CodeBuild CodePipeline Actions. Example:

source_output1 = codepipeline.Artifact()
source_action1 = codepipeline_actions.CodeCommitSourceAction(
    action_name="Source1",
    repository=repository1,
    output=source_output1
)
source_output2 = codepipeline.Artifact("source2")
source_action2 = codepipeline_actions.CodeCommitSourceAction(
    action_name="Source2",
    repository=repository2,
    output=source_output2
)

build_action = codepipeline_actions.CodeBuildAction(
    action_name="Build",
    project=project,
    input=source_output1,
    extra_inputs=[source_output2
    ],
    outputs=[
        codepipeline.Artifact("artifact1"), # for better buildspec readability - see below
        codepipeline.Artifact("artifact2")
    ]
)

Note: when a CodeBuild Action in a Pipeline has more than one output, it only uses the secondary-artifacts field of the buildspec, never the primary output specification directly under artifacts. Because of that, it pays to explicitly name all output artifacts of that Action, like we did above, so that you know what name to use in the buildspec.

Example buildspec for the above project:

project = codebuild.PipelineProject(self, "MyProject",
    build_spec=codebuild.BuildSpec.from_object(
        version="0.2",
        phases={
            "build": {
                "commands": []
            }
        },
        artifacts={
            "secondary-artifacts": {
                "artifact1": {},
                "artifact2": {}
            }
        }
    )
)

Jenkins

In order to use Jenkins Actions in the Pipeline, you first need to create a JenkinsProvider:

jenkins_provider = codepipeline_actions.JenkinsProvider(self, "JenkinsProvider",
    provider_name="MyJenkinsProvider",
    server_url="http://my-jenkins.com:8080",
    version="2"
)

If you've registered a Jenkins provider in a different CDK app, or outside the CDK (in the CodePipeline AWS Console, for example), you can import it:

jenkins_provider = codepipeline_actions.JenkinsProvider.from_jenkins_provider_attributes(self, "JenkinsProvider",
    provider_name="MyJenkinsProvider",
    server_url="http://my-jenkins.com:8080",
    version="2"
)

Note that a Jenkins provider (identified by the tuple of provider name, category (build/test) and version) must always be registered in the given account, in the given AWS region, before it can be used in CodePipeline.

With a JenkinsProvider, we can create a Jenkins Action:

build_action = codepipeline_actions.JenkinsAction(
    action_name="JenkinsBuild",
    jenkins_provider=jenkins_provider,
    project_name="MyProject",
    type=codepipeline_actions.JenkinsActionType.BUILD
)

Deploy

AWS CloudFormation

This module contains Actions that allow you to deploy to CloudFormation from AWS CodePipeline.

For example, the following code fragment defines a pipeline that automatically deploys a CloudFormation template directly from a CodeCommit repository, with a manual approval step in between to confirm the changes:

# Source stage: read from repository
repo = codecommit.Repository(stack, "TemplateRepo",
    repository_name="template-repo"
)
source_output = codepipeline.Artifact("SourceArtifact")
source = cpactions.CodeCommitSourceAction(
    action_name="Source",
    repository=repo,
    output=source_output,
    trigger=cpactions.CodeCommitTrigger.POLL
)
source_stage = {
    "stage_name": "Source",
    "actions": [source]
}

# Deployment stage: create and deploy changeset with manual approval
stack_name = "OurStack"
change_set_name = "StagedChangeSet"

prod_stage = {
    "stage_name": "Deploy",
    "actions": [
        cpactions.CloudFormationCreateReplaceChangeSetAction(
            action_name="PrepareChanges",
            stack_name=stack_name,
            change_set_name=change_set_name,
            admin_permissions=True,
            template_path=source_output.at_path("template.yaml"),
            run_order=1
        ),
        cpactions.ManualApprovalAction(
            action_name="ApproveChanges",
            run_order=2
        ),
        cpactions.CloudFormationExecuteChangeSetAction(
            action_name="ExecuteChanges",
            stack_name=stack_name,
            change_set_name=change_set_name,
            run_order=3
        )
    ]
}

codepipeline.Pipeline(stack, "Pipeline",
    stages=[source_stage, prod_stage
    ]
)

See the AWS documentation for more details about using CloudFormation in CodePipeline.

Actions defined by this package

This package defines the following actions (a small example of the delete action follows the list):

  • CloudFormationCreateUpdateStackAction - Deploy a CloudFormation template directly from the pipeline. The indicated stack is created, or updated if it already exists. If the stack is in a failure state, deployment will fail (unless replaceOnFailure is set to true, in which case it will be destroyed and recreated).
  • CloudFormationDeleteStackAction - Delete the stack with the given name.
  • CloudFormationCreateReplaceChangeSetAction - Prepare a change set to be applied later. You will typically use change sets if you want to manually verify the changes that are being staged, or if you want to separate the people (or system) preparing the changes from the people (or system) applying the changes.
  • CloudFormationExecuteChangeSetAction - Execute a change set prepared previously.
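
Most of these actions appear in examples elsewhere in this README. As a small additional sketch, a CloudFormationDeleteStackAction that removes the stack created above could look like this (the action name and run order are assumptions):

cpactions.CloudFormationDeleteStackAction(
    action_name="DeleteStagingStack",
    stack_name=stack_name,
    admin_permissions=True,
    run_order=4
)
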
Lambda deployed through CodePipeline

If you want to deploy your Lambda through CodePipeline, and you don't use assets (for example, because your CDK code and Lambda code are separate), you can use a special Lambda Code class, CfnParametersCode. Note that your Lambda must be in a different Stack than your Pipeline. The Lambda itself will be deployed, alongside the entire Stack it belongs to, using a CloudFormation CodePipeline Action. Example:

lambda_stack = cdk.Stack(app, "LambdaStack")
lambda_code = lambda_.Code.from_cfn_parameters()
lambda_.Function(lambda_stack, "Lambda",
    code=lambda_code,
    handler="index.handler",
    runtime=lambda_.Runtime.NODEJS_8_10
)
# other resources that your Lambda needs, added to the lambdaStack...

pipeline_stack = cdk.Stack(app, "PipelineStack")
pipeline = codepipeline.Pipeline(pipeline_stack, "Pipeline")

# add the source code repository containing this code to your Pipeline,
# and the source code of the Lambda Function, if they're separate
cdk_source_output = codepipeline.Artifact()
cdk_source_action = codepipeline_actions.CodeCommitSourceAction(
    repository=codecommit.Repository(pipeline_stack, "CdkCodeRepo",
        repository_name="CdkCodeRepo"
    ),
    action_name="CdkCode_Source",
    output=cdk_source_output
)
lambda_source_output = codepipeline.Artifact()
lambda_source_action = codepipeline_actions.CodeCommitSourceAction(
    repository=codecommit.Repository(pipeline_stack, "LambdaCodeRepo",
        repository_name="LambdaCodeRepo"
    ),
    action_name="LambdaCode_Source",
    output=lambda_source_output
)
pipeline.add_stage(
    stage_name="Source",
    actions=[cdk_source_action, lambda_source_action]
)

# synthesize the Lambda CDK template, using CodeBuild
# the below values are just examples, assuming your CDK code is in TypeScript/JavaScript -
# adjust the build environment and/or commands accordingly
cdk_build_project = codebuild.Project(pipeline_stack, "CdkBuildProject",
    environment={
        "build_image": codebuild.LinuxBuildImage.UBUNTU_14_04_NODEJS_10_1_0
    },
    build_spec=codebuild.BuildSpec.from_object(
        version="0.2",
        phases={
            "install": {
                "commands": "npm install"
            },
            "build": {
                "commands": ["npm run build", "npm run cdk synth LambdaStack -- -o ."
                ]
            }
        },
        artifacts={
            "files": "LambdaStack.template.yaml"
        }
    )
)
cdk_build_output = codepipeline.Artifact()
cdk_build_action = codepipeline_actions.CodeBuildAction(
    action_name="CDK_Build",
    project=cdk_build_project,
    input=cdk_source_output,
    outputs=[cdk_build_output]
)

# build your Lambda code, using CodeBuild
# again, this example assumes your Lambda is written in TypeScript/JavaScript -
# make sure to adjust the build environment and/or commands if they don't match your specific situation
lambda_build_project = codebuild.Project(pipeline_stack, "LambdaBuildProject",
    environment={
        "build_image": codebuild.LinuxBuildImage.UBUNTU_14_04_NODEJS_10_1_0
    },
    build_spec=codebuild.BuildSpec.from_object(
        version="0.2",
        phases={
            "install": {
                "commands": "npm install"
            },
            "build": {
                "commands": "npm run build"
            }
        },
        artifacts={
            "files": ["index.js", "node_modules/**/*"
            ]
        }
    )
)
lambda_build_output = codepipeline.Artifact()
lambda_build_action = codepipeline_actions.CodeBuildAction(
    action_name="Lambda_Build",
    project=lambda_build_project,
    input=lambda_source_output,
    outputs=[lambda_build_output]
)

pipeline.add_stage(
    stage_name="Build",
    actions=[cdk_build_action, lambda_build_action]
)

# finally, deploy your Lambda Stack
pipeline.add_stage(
    stage_name="Deploy",
    actions=[
        codepipeline_actions.CloudFormationCreateUpdateStackAction(
            action_name="Lambda_CFN_Deploy",
            template_path=cdk_build_output.at_path("LambdaStack.template.yaml"),
            stack_name="LambdaStackDeployedName",
            admin_permissions=True,
            parameter_overrides=lambda_code.assign(lambda_build_output.s3_location),
            extra_inputs=[lambda_build_output]
        )
    ]
)

Cross-account actions

If you want to update stacks in a different account, pass the account property when creating the action:

codepipeline_actions.CloudFormationCreateUpdateStackAction(
    # ...
    account="123456789012"
)

This will create a new stack, called <PipelineStackName>-support-123456789012, in your App, that will contain the role that the pipeline will assume in account 123456789012 before executing this action. This support stack will automatically be deployed before the stack containing the pipeline.

You can also pass a role explicitly when creating the action - in that case, the account property is ignored, and the action will operate in the same account the role belongs to:

from aws_cdk.core import PhysicalName

# in stack for account 123456789012...
action_role = iam.Role(other_account_stack, "ActionRole",
    assumed_by=iam.AccountPrincipal(pipeline_account),
    # the role has to have a physical name set
    role_name=PhysicalName.GENERATE_IF_NEEDED
)

# in the pipeline stack...
codepipeline_actions.CloudFormationCreateUpdateStackAction(
    # ...
    role=action_role
)

AWS CodeDeploy

Server deployments

To use CodeDeploy for EC2/on-premise deployments in a Pipeline:

import aws_cdk.aws_codedeploy as codedeploy

pipeline = codepipeline.Pipeline(self, "MyPipeline",
    pipeline_name="MyPipeline"
)

# add the source and build Stages to the Pipeline...

deploy_action = codepipeline_actions.CodeDeployServerDeployAction(
    action_name="CodeDeploy",
    input=build_output,
    deployment_group=deployment_group
)
pipeline.add_stage(
    stage_name="Deploy",
    actions=[deploy_action]
)

Lambda deployments

To use CodeDeploy for blue-green Lambda deployments in a Pipeline:

lambda_code = lambda_.Code.from_cfn_parameters()
func = lambda_.Function(lambda_stack, "Lambda",
    code=lambda_code,
    handler="index.handler",
    runtime=lambda_.Runtime.NODEJS_8_10
)
# used to make sure each CDK synthesis produces a different Version
version = func.add_version("NewVersion")
alias = lambda_.Alias(lambda_stack, "LambdaAlias",
    alias_name="Prod",
    version=version
)

codedeploy.LambdaDeploymentGroup(lambda_stack, "DeploymentGroup",
    alias=alias,
    deployment_config=codedeploy.LambdaDeploymentConfig.LINEAR_10PERCENT_EVERY_1MINUTE
)

Then, you need to create your Pipeline Stack, where you will define your Pipeline, and deploy the lambdaStack using a CloudFormation CodePipeline Action (see above for a complete example).

ECS

CodePipeline can deploy an ECS service. The deploy Action receives one input Artifact which contains the image definition file:

deploy_stage = pipeline.add_stage(
    stage_name="Deploy",
    actions=[
        codepipeline_actions.EcsDeployAction(
            action_name="DeployAction",
            service=service,
            # if your file is called imagedefinitions.json,
            # use the `input` property,
            # and leave out the `imageFile` property
            input=build_output,
            # if your file name is _not_ imagedefinitions.json,
            # use the `imageFile` property,
            # and leave out the `input` property
            image_file=build_output.at_path("imageDef.json")
        )
    ]
)

AWS S3

To use an S3 Bucket as a deployment target in CodePipeline:

target_bucket = s3.Bucket(self, "MyBucket")

pipeline = codepipeline.Pipeline(self, "MyPipeline")
deploy_action = codepipeline_actions.S3DeployAction(
    action_name="S3Deploy",
    bucket=target_bucket,
    input=source_output
)
deploy_stage = pipeline.add_stage(
    stage_name="Deploy",
    actions=[deploy_action]
)

Alexa Skill

You can deploy to Alexa using CodePipeline with the following Action:

# Read the secrets from Secrets Manager
client_id = cdk.SecretValue.secrets_manager("AlexaClientId")
client_secret = cdk.SecretValue.secrets_manager("AlexaClientSecret")
refresh_token = cdk.SecretValue.secrets_manager("AlexaRefreshToken")

# Add deploy action
codepipeline_actions.AlexaSkillDeployAction(
    action_name="DeploySkill",
    run_order=1,
    input=source_output,
    client_id=client_id.to_string(),
    client_secret=client_secret,
    refresh_token=refresh_token,
    skill_id="amzn1.ask.skill.12345678-1234-1234-1234-123456789012"
)

If you need manifest overrides you can specify them as parameterOverridesArtifact in the action:

import aws_cdk.aws_cloudformation as cloudformation

# Deploy some CFN change set and store output
execute_output = codepipeline.Artifact("CloudFormation")
execute_change_set_action = codepipeline_actions.CloudFormationExecuteChangeSetAction(
    action_name="ExecuteChangesTest",
    run_order=2,
    stack_name=stack_name,
    change_set_name=change_set_name,
    output_file_name="overrides.json",
    output=execute_output
)

# Provide CFN output as manifest overrides
codepipeline_actions.AlexaSkillDeployAction(
    action_name="DeploySkill",
    run_order=1,
    input=source_output,
    parameter_overrides_artifact=execute_output,
    client_id=client_id.to_string(),
    client_secret=client_secret,
    refresh_token=refresh_token,
    skill_id="amzn1.ask.skill.12345678-1234-1234-1234-123456789012"
)

Approve & invoke

Manual approval Action

This package contains an Action that stops the Pipeline until someone manually clicks the approve button:

manual_approval_action = codepipeline_actions.ManualApprovalAction(
    action_name="Approve",
    notification_topic=sns.Topic(self, "Topic"), # optional
    notify_emails=["some_email@example.com"
    ], # optional
    additional_information="additional info"
)
approve_stage.add_action(manual_approval_action)

If the notificationTopic has not been provided, but notifyEmails were, a new SNS Topic will be created (and accessible through the notificationTopic property of the Action).
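
For example, once the Action has been added to the Pipeline, you could work with that automatically-created Topic (a sketch; some_role stands for an IAM role defined elsewhere in your app):

topic = manual_approval_action.notification_topic
topic.grant_publish(some_role)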

AWS Lambda

This module contains an Action that allows you to invoke a Lambda function in a Pipeline:

import aws_cdk.aws_lambda as lambda_

pipeline = codepipeline.Pipeline(self, "MyPipeline")
lambda_action = codepipeline_actions.LambdaInvokeAction(
    action_name="Lambda",
    lambda_=fn
)
pipeline.add_stage(
    stage_name="Lambda",
    actions=[lambda_action]
)

The Lambda Action can have up to 5 inputs, and up to 5 outputs:

lambda_action = codepipeline_actions.LambdaInvokeAction(
    action_name="Lambda",
    inputs=[source_output, build_output
    ],
    outputs=[
        codepipeline.Artifact("Out1"),
        codepipeline.Artifact("Out2")
    ],
    lambda_=fn
)

See the AWS documentation on how to write a Lambda function invoked from CodePipeline.
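
The invoked function also has to report its result back to CodePipeline. A minimal handler sketch using boto3 (outside the scope of this module) could look like this:

import boto3

codepipeline_client = boto3.client("codepipeline")

def handler(event, context):
    # CodePipeline passes the job details in the event payload
    job_id = event["CodePipeline.job"]["id"]
    try:
        # ... do the actual work here ...
        codepipeline_client.put_job_success_result(jobId=job_id)
    except Exception as e:
        codepipeline_client.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(e)}
        )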

AWS CodePipeline Construct Library---

Stability: Stable


Pipeline

To construct an empty Pipeline:

import aws_cdk.aws_codepipeline as codepipeline

pipeline = codepipeline.Pipeline(self, "MyFirstPipeline")

To give the Pipeline a nice, human-readable name:

pipeline = codepipeline.Pipeline(self, "MyFirstPipeline",
    pipeline_name="MyPipeline"
)

Stages

You can provide Stages when creating the Pipeline:

pipeline = codepipeline.Pipeline(self, "MyFirstPipeline",
    stages=[{
        "stage_name": "Source",
        "actions": []
    }
    ]
)

Or append a Stage to an existing Pipeline:

source_stage = pipeline.add_stage(
    stage_name="Source",
    actions=[]
)

You can insert the new Stage at an arbitrary point in the Pipeline:

some_stage = pipeline.add_stage(
    stage_name="SomeStage",
    placement={
        # note: you can only specify one of 'rightBefore' and 'justAfter'
        "just_after": another_stage
    }
)

Actions

Actions live in a separate package, @aws-cdk/aws-codepipeline-actions.

To add an Action to a Stage, you can provide it when creating the Stage, in the actions property, or you can use the IStage.addAction() method to mutate an existing Stage:

source_stage.add_action(some_action)

Cross-region CodePipelines

You can also use the cross-region feature to deploy resources (currently, only CloudFormation Stacks are supported) into a different region than your Pipeline is in.

It works like this:

pipeline = codepipeline.Pipeline(self, "MyFirstPipeline",
    # ...
    cross_region_replication_buckets={
        # note that a physical name of the replication Bucket must be known at synthesis time
        "us-west-1": s3.Bucket.from_bucket_attributes(self, "UsWest1ReplicationBucket",
            bucket_name="my-us-west-1-replication-bucket",
            # optional KMS key
            encryption_key=kms.Key.from_key_arn(self, "UsWest1ReplicationKey", "arn:aws:kms:us-west-1:123456789012:key/1234-5678-9012")
        )
    }
)

# later in the code...
codepipeline_actions.CloudFormationCreateUpdateStackAction(
    action_name="CFN_US_West_1",
    # ...
    region="us-west-1"
)

This way, the CFN_US_West_1 Action will operate in the us-west-1 region, regardless of which region your Pipeline is in.

If you don't provide a bucket for a region (other than the Pipeline's region) that you're using for an Action, there will be a new Stack, called <nameOfYourPipelineStack>-support-<region>, defined for you, containing a replication Bucket. This new Stack will depend on your Pipeline Stack, so deploying the Pipeline Stack will deploy the support Stack(s) first. Example:

$ cdk ls
MyMainStack
MyMainStack-support-us-west-1
$ cdk deploy MyMainStack
# output of cdk deploy here...

See the AWS docs here for more information on cross-region CodePipelines.

Creating an encrypted replication bucket

If you're passing a replication bucket created in a different stack, like this:

replication_stack = Stack(app, "ReplicationStack",
    env={
        "region": "us-west-1"
    }
)
key = kms.Key(replication_stack, "ReplicationKey")
replication_bucket = s3.Bucket(replication_stack, "ReplicationBucket",
    # like was said above - replication buckets need a set physical name
    bucket_name=PhysicalName.GENERATE_IF_NEEDED,
    encryption_key=key
)

# later...
codepipeline.Pipeline(pipeline_stack, "Pipeline",
    cross_region_replication_buckets={
        "us-west-1": replication_bucket
    }
)

When trying to encrypt it (and note that if any of the cross-region actions happen to be cross-account as well, the bucket has to be encrypted - otherwise the pipeline will fail at runtime), you cannot use a key directly - KMS keys don't have physical names, and so you can't reference them across environments.

In this case, you need to use an alias in place of the key when creating the bucket:

key = kms.Key(replication_stack, "ReplicationKey")
alias = kms.Alias(replication_stack, "ReplicationAlias",
    # aliasName is required
    alias_name=PhysicalName.GENERATE_IF_NEEDED,
    target_key=key
)
replication_bucket = s3.Bucket(replication_stack, "ReplicationBucket",
    bucket_name=PhysicalName.GENERATE_IF_NEEDED,
    encryption_key=alias
)

Events

Using a pipeline as an event target

A pipeline can be used as a target for a CloudWatch event rule:

import aws_cdk.aws_events_targets as targets
import aws_cdk.aws_events as events

# kick off the pipeline every day
rule = events.Rule(self, "Daily",
    schedule=events.Schedule.rate(Duration.days(1))
)

rule.add_target(targets.CodePipeline(pipeline))

When a pipeline is used as an event target, the "codepipeline:StartPipelineExecution" permission is granted to the AWS CloudWatch Events service.

Event sources

Pipelines emit CloudWatch events. To define event rules for events emitted by the pipeline, stages or action, use the onXxx methods on the respective construct:

my_pipeline.on_state_change("MyPipelineStateChange", target)
my_stage.on_state_change("MyStageStateChange", target)
my_action.on_state_change("MyActionStateChange", target)

AWS Config Construct Library---

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.


This module is part of the AWS Cloud Development Kit project.

Supported:

  • Config rules

Not supported:

  • Configuration recorder
  • Delivery channel
  • Aggregation

Rules

AWS managed rules

To set up a managed rule, define a ManagedRule and specify its identifier:

ManagedRule(self, "AccessKeysRotated",
    identifier="ACCESS_KEYS_ROTATED"
)

Available identifiers and parameters are listed in the List of AWS Config Managed Rules.

Higher level constructs for managed rules are available, see Managed Rules. Prefer to use those constructs when available (PRs welcome to add more of those).
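
Rule parameters and the evaluation frequency can be passed alongside the identifier. A sketch (the parameter name is taken from the ACCESS_KEYS_ROTATED rule; verify it against the managed rules list for your own rule):

ManagedRule(self, "AccessKeysRotatedEvery60Days",
    identifier="ACCESS_KEYS_ROTATED",
    input_parameters={"maxAccessKeyAge": 60},
    maximum_execution_frequency=MaximumExecutionFrequency.TWELVE_HOURS
)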

Custom rules

To set up a custom rule, define a CustomRule and specify the Lambda Function to run and the trigger types:

CustomRule(self, "CustomRule",
    lambda_function=my_fn,
    configuration_changes=True,
    periodic=True
)

Restricting the scope

By default rules are triggered by changes to all resources. Use the scopeToResource(), scopeToResources() or scopeToTag() methods to restrict the scope of both managed and custom rules:

ssh_rule = ManagedRule(self, "SSH",
    identifier="INCOMING_SSH_DISABLED"
)

# Restrict to a specific security group
ssh_rule.scope_to_resource("AWS::EC2::SecurityGroup", "sg-1234567890abcdefgh")

custom_rule = CustomRule(self, "CustomRule",
    lambda_function=my_fn,
    configuration_changes=True
)

# Restrict to a specific tag
custom_rule.scope_to_tag("Cost Center", "MyApp")

Only one type of scope restriction can be added to a rule (the last call to scopeToXxx() sets the scope).

Events

To define Amazon CloudWatch event rules, use the onComplianceChange() or onReEvaluationStatus() methods:

rule = CloudFormationStackDriftDetectionCheck(self, "Drift")
rule.on_compliance_change("TopicEvent",
    target=targets.SnsTopic(topic)
)

Example

Creating custom and managed rules with scope restriction and events:

# A custom rule that runs on configuration changes of EC2 instances
fn = lambda.Function(self, "CustomFunction",
    code=lambda.AssetCode.from_inline("exports.handler = (event) => console.log(event);"),
    handler="index.handler",
    runtime=lambda.Runtime.NODEJS_8_10
)

custom_rule = config.CustomRule(self, "Custom",
    configuration_changes=True,
    lambda_function=fn
)

custom_rule.scope_to_resource("AWS::EC2::Instance")

# A rule to detect stacks drifts
drift_rule = config.CloudFormationStackDriftDetectionCheck(self, "Drift")

# Topic for compliance events
compliance_topic = sns.Topic(self, "ComplianceTopic")

# Send notification on compliance change
drift_rule.on_compliance_change("ComplianceChange",
    target=targets.SnsTopic(compliance_topic)
)

@aws-cdk/aws-dynamodb-global---

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.


Global Tables builds upon DynamoDB’s global footprint to provide you with a fully managed, multi-region, and multi-master database that provides fast, local, read and write performance for massively scaled, global applications. Global Tables replicates your Amazon DynamoDB tables automatically across your choice of AWS regions.

Here is a minimal deployable Global DynamoDB tables definition:

from aws_cdk.aws_dynamodb import AttributeType
from aws_cdk.aws_dynamodb_global import GlobalTable
from aws_cdk.core import App

app = App()
GlobalTable(app, "globdynamodb",
    partition_key={"name": "hashKey", "type": AttributeType.STRING},
    table_name="GlobalTable",
    regions=["us-east-1", "us-east-2", "us-west-2"]
)
app.synth()

Implementation Notes

AWS Global DynamoDB Tables is an odd case currently. The way this package works:

  • Creates a DynamoDB table in a separate stack in each DynamoDBGlobalStackProps.region specified
  • Deploys a CFN Custom Resource to your stack's specified region that calls a Lambda function which runs the AWS CLI to call createGlobalTable()

Notes

GlobalTable() will set dynamoProps.stream to be NEW_AND_OLD_IMAGES since this is a required attribute for AWS Global DynamoDB tables to work. The package will throw an error if any other stream specification is set in DynamoDBGlobalStackProps.

Amazon DynamoDB Construct Library---

Stability: Stable


Here is a minimal deployable DynamoDB table definition:

import aws_cdk.aws_dynamodb as dynamodb

table = dynamodb.Table(self, "Table",
    partition_key={"name": "id", "type": dynamodb.AttributeType.STRING}
)

Keys

When a table is defined, you must define its schema using the partitionKey (required) and sortKey (optional) properties.
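
For example, a table with both a partition key and a sort key (the attribute names are just placeholders):

table = dynamodb.Table(self, "OrdersTable",
    partition_key={"name": "customerId", "type": dynamodb.AttributeType.STRING},
    sort_key={"name": "createdAt", "type": dynamodb.AttributeType.NUMBER}
)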

Billing Mode

DynamoDB supports two billing modes:

  • PROVISIONED - the default mode where the table and global secondary indexes have configured read and write capacity.
  • PAY_PER_REQUEST - on-demand pricing and scaling. You only pay for what you use and there is no read and write capacity for the table or its global secondary indexes.

import aws_cdk.aws_dynamodb as dynamodb

table = dynamodb.Table(self, "Table",
    partition_key={"name": "id", "type": dynamodb.AttributeType.STRING},
    billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST
)

Further reading: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.

Configure AutoScaling for your table

You can have DynamoDB automatically raise and lower the read and write capacities of your table by setting up autoscaling. You can use this either to keep your tables at a desired utilization level, or to scale up and down at preconfigured times of the day:

Auto-scaling is only relevant for tables with the billing mode, PROVISIONED.

read_scaling = table.auto_scale_read_capacity(min_capacity=1, max_capacity=50)

read_scaling.scale_on_utilization(
    target_utilization_percent=50
)

read_scaling.scale_on_schedule("ScaleUpInTheMorning",
    schedule=appscaling.Schedule.cron(hour="8", minute="0"),
    min_capacity=20
)

read_scaling.scale_on_schedule("ScaleDownAtNight",
    schedule=appscaling.Schedule.cron(hour="20", minute="0"),
    max_capacity=20
)

Further reading: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html https://aws.amazon.com/blogs/database/how-to-use-aws-cloudformation-to-configure-auto-scaling-for-amazon-dynamodb-tables-and-indexes/

Amazon DynamoDB Global Tables

Please see the @aws-cdk/aws-dynamodb-global package.

Amazon EC2 Construct Library---

Stability: Stable


The @aws-cdk/aws-ec2 package contains primitives for setting up networking and instances.

VPC

Most projects need a Virtual Private Cloud to provide security by means of network partitioning. This is achieved by creating an instance of Vpc:

import aws_cdk.aws_ec2 as ec2

vpc = ec2.Vpc(self, "VPC")

All default constructs require EC2 instances to be launched inside a VPC, so you should generally start by defining a VPC whenever you need to launch instances for your project.

Subnet Types

A VPC consists of one or more subnets that instances can be placed into. CDK distinguishes three different subnet types:

  • Public - public subnets connect directly to the Internet using an Internet Gateway. If you want your instances to have a public IP address and be directly reachable from the Internet, you must place them in a public subnet.
  • Private - instances in private subnets are not directly routable from the Internet, and connect out to the Internet via a NAT gateway. By default, a NAT gateway is created in every public subnet for maximum availability. Be aware that you will be charged for NAT gateways.
  • Isolated - isolated subnets do not route from or to the Internet, and as such do not require NAT gateways. They can only connect to or be connected to from other instances in the same VPC.

A default VPC configuration will create public and private subnets, but not isolated subnets. See Advanced Subnet Configuration below for information on how to change the default subnet configuration.

Constructs using the VPC will "launch instances" (or more accurately, create Elastic Network Interfaces) into one or more of the subnets. They all accept a property called subnetSelection (sometimes called vpcSubnets) to allow you to select in what subnet to place the ENIs, usually defaulting to private subnets if the property is omitted.

If you would like to save on the cost of NAT gateways, you can use isolated subnets instead of private subnets (as described in Advanced Subnet Configuration). If you need private instances to have internet connectivity, another option is to reduce the number of NAT gateways created by setting the natGateways property to a lower value (the default is one NAT gateway per availability zone). Be aware that this may have availability implications for your application.

Read more about subnets.
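
For example, a minimal sketch that provisions a single NAT gateway for the whole VPC instead of one per Availability Zone (all other settings are left at their defaults):

vpc = ec2.Vpc(self, "TheVPC",
    # A single shared NAT gateway instead of one per AZ saves cost,
    # at the expense of availability
    nat_gateways=1
)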

Control over availability zones

By default, a VPC will spread over at most 3 Availability Zones available to it. To change the number of Availability Zones that the VPC will spread over, specify the maxAzs property when defining it.

The number of Availability Zones that are available depends on the region and account of the Stack containing the VPC. If the region and account are specified on the Stack, the CLI will look up the existing Availability Zones and get an accurate count. If region and account are not specified, the stack could be deployed anywhere and it will have to make a safe choice, limiting itself to 2 Availability Zones.

Therefore, to get the VPC to spread over 3 or more availability zones, you must specify the environment where the stack will be deployed.
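
As a sketch, assuming a hypothetical account ID and region of your own, this could look like:

import aws_cdk.core as cdk
import aws_cdk.aws_ec2 as ec2

app = cdk.App()
stack = cdk.Stack(app, "VpcStack",
    # An explicit account/region lets the CLI look up the real AZ count
    env={"account": "123456789012", "region": "eu-west-1"}
)

vpc = ec2.Vpc(stack, "TheVPC",
    # Spread the VPC over up to 3 Availability Zones
    max_azs=3
)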

Advanced Subnet Configuration

If the default VPC configuration (public and private subnets spanning the size of the VPC) doesn't suffice for you, you can configure what subnets to create by specifying the subnetConfiguration property. It allows you to configure the number and size of all subnets. Specifying an advanced subnet configuration could look like this:

vpc = ec2.Vpc(self, "TheVPC",
    # 'cidr' configures the IP range and size of the entire VPC.
    # The IP space will be divided over the configured subnets.
    cidr="10.0.0.0/21",

    # 'max_azs' configures the maximum number of availability zones to use
    max_azs=3,

    # 'subnet_configuration' specifies the "subnet groups" to create.
    # Every subnet group will have a subnet for each AZ, so this
    # configuration will create `3 groups × 3 AZs = 9` subnets.
    subnet_configuration=[
        ec2.SubnetConfiguration(
            # 'subnet_type' controls Internet access, as described above.
            subnet_type=ec2.SubnetType.PUBLIC,

            # 'name' is used to name this particular subnet group. You will have to
            # use the name for subnet selection if you have more than one subnet
            # group of the same type.
            name="Ingress",

            # 'cidr_mask' specifies the size of the IP range of the individual
            # subnets in the group. Each of the subnets in this group will contain
            # `2^(32 address bits - 24 subnet bits) - 2 reserved addresses = 254`
            # usable IP addresses.
            #
            # If 'cidr_mask' is left out the available address space is evenly
            # divided across the remaining subnet groups.
            cidr_mask=24
        ),
        ec2.SubnetConfiguration(
            cidr_mask=24,
            name="Application",
            subnet_type=ec2.SubnetType.PRIVATE
        ),
        ec2.SubnetConfiguration(
            cidr_mask=28,
            name="Database",
            subnet_type=ec2.SubnetType.ISOLATED,

            # 'reserved' can be used to reserve IP address space. No resources will
            # be created for this subnet, but the IP range will be kept available for
            # future creation of this subnet, or even for future subdivision.
            reserved=True
        )
    ]
)

The example above is just one possible configuration; you can use the constructs above to implement many other network configurations.

The Vpc from the above configuration in a Region with three availability zones will be the following:

Subnet Name          Type      IP Block      AZ   Features
IngressSubnet1       PUBLIC    10.0.0.0/24   #1   NAT Gateway
IngressSubnet2       PUBLIC    10.0.1.0/24   #2   NAT Gateway
IngressSubnet3       PUBLIC    10.0.2.0/24   #3   NAT Gateway
ApplicationSubnet1   PRIVATE   10.0.3.0/24   #1   Route to NAT in IngressSubnet1
ApplicationSubnet2   PRIVATE   10.0.4.0/24   #2   Route to NAT in IngressSubnet2
ApplicationSubnet3   PRIVATE   10.0.5.0/24   #3   Route to NAT in IngressSubnet3
DatabaseSubnet1      ISOLATED  10.0.6.0/28   #1   Only routes within the VPC
DatabaseSubnet2      ISOLATED  10.0.6.16/28  #2   Only routes within the VPC
DatabaseSubnet3      ISOLATED  10.0.6.32/28  #3   Only routes within the VPC

Reserving subnet IP space

There are situations where the IP space for a subnet or a number of subnets needs to be reserved. This is useful when subnets need to be added after the VPC is originally deployed, without causing IP renumbering for the existing subnets. The IP space for a subnet can be reserved by setting the reserved subnetConfiguration property to true, as shown below:

import aws_cdk.aws_ec2 as ec2

vpc = ec2.Vpc(self, "TheVPC",
    nat_gateways=1,
    subnet_configuration=[
        ec2.SubnetConfiguration(
            cidr_mask=26,
            name="Public",
            subnet_type=ec2.SubnetType.PUBLIC
        ),
        ec2.SubnetConfiguration(
            cidr_mask=26,
            name="Application1",
            subnet_type=ec2.SubnetType.PRIVATE
        ),
        ec2.SubnetConfiguration(
            cidr_mask=26,
            name="Application2",
            subnet_type=ec2.SubnetType.PRIVATE,
            # Not provisioned, but its IP range is kept available
            reserved=True
        ),
        ec2.SubnetConfiguration(
            cidr_mask=27,
            name="Database",
            subnet_type=ec2.SubnetType.ISOLATED
        )
    ]
)

In the example above, the subnet for Application2 is not actually provisioned but its IP space is still reserved. If in the future this subnet needs to be provisioned, then the reserved: true property should be removed. Reserving parts of the IP space prevents the other subnets from getting renumbered.

Sharing VPCs between stacks

If you are creating multiple Stacks inside the same CDK application, you can reuse a VPC defined in one Stack in another by simply passing the VPC instance around:

#
# Stack1 creates the VPC
#
class Stack1(cdk.Stack):

    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)

        self.vpc = ec2.Vpc(self, "VPC")

#
# Stack2 consumes the VPC
#
class Stack2(cdk.Stack):
    def __init__(self, scope, id, *, vpc, **kwargs):
        # The VPC is a plain constructor argument; it is not passed on to cdk.Stack
        super().__init__(scope, id, **kwargs)

        # Pass the VPC to a construct that needs it
        ConstructThatTakesAVpc(self, "Construct",
            vpc=vpc
        )

stack1 = Stack1(app, "Stack1")
stack2 = Stack2(app, "Stack2",
    vpc=stack1.vpc
)

Importing an existing VPC

If your VPC is created outside your CDK app, you can use Vpc.fromLookup(). The CDK CLI will search for the specified VPC in the stack's region and account, and import the subnet configuration. Looking up can be done by VPC ID, but more flexibly by searching for a specific tag on the VPC.

The import does assume that the VPC will be symmetric, i.e. that there are subnet groups that have a subnet in every Availability Zone that the VPC spreads over. VPCs with other layouts cannot currently be imported, and will either lead to an error on import, or when another construct tries to access the subnets.

Subnet types will be determined from the aws-cdk:subnet-type tag on the subnet if it exists, or the presence of a route to an Internet Gateway otherwise. Subnet names will be determined from the aws-cdk:subnet-name tag on the subnet if it exists, or will mirror the subnet type otherwise (i.e. a public subnet will have the name "Public").

Here's how Vpc.fromLookup() can be used:

vpc = ec2.Vpc.from_lookup(stack, "VPC",
    # This imports the default VPC but you can also
    # specify a 'vpcName' or 'tags'.
    is_default=True
)
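
A lookup by tag, assuming your VPC carries a hypothetical Environment=prod tag, could look like this:

prod_vpc = ec2.Vpc.from_lookup(stack, "ProdVPC",
    tags={"Environment": "prod"}
)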

Allowing Connections

In AWS, all network traffic in and out of Elastic Network Interfaces (ENIs) is controlled by Security Groups. You can think of Security Groups as a firewall with a set of rules. By default, Security Groups allow no incoming (ingress) traffic and all outgoing (egress) traffic. You can add ingress rules to them to allow incoming traffic streams. To exert fine-grained control over egress traffic, set allowAllOutbound: false on the SecurityGroup, after which you can add egress traffic rules.

You can manipulate Security Groups directly:

my_security_group = ec2.SecurityGroup(self, "SecurityGroup",
    vpc=vpc,
    description="Allow ssh access to ec2 instances",
    allow_all_outbound=True
)
my_security_group.add_ingress_rule(ec2.Peer.any_ipv4(), ec2.Port.tcp(22), "allow ssh access from the world")
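
If you need fine-grained control over egress as well, a minimal sketch with allowAllOutbound disabled could look like this (the port number is just an example):

controlled_sg = ec2.SecurityGroup(self, "ControlledSecurityGroup",
    vpc=vpc,
    description="No outbound traffic except HTTPS",
    allow_all_outbound=False
)
controlled_sg.add_egress_rule(ec2.Peer.any_ipv4(), ec2.Port.tcp(443), "allow outbound HTTPS")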

All constructs that create ENIs on your behalf (typically constructs that create EC2 instances or other VPC-connected resources) have security groups automatically assigned. Those constructs have an attribute called connections, which is an object that makes it convenient to update the security groups. If you want to allow connections between two constructs that have security groups, you have to add an Egress rule to one Security Group, and an Ingress rule to the other. The connections object will automatically take care of this for you:

# Allow connections from anywhere
load_balancer.connections.allow_from_any_ipv4(ec2.Port.tcp(443), "Allow inbound HTTPS")

# The same, but an explicit IP address
load_balancer.connections.allow_from(ec2.Peer.ipv4("1.2.3.4/32"), ec2.Port.tcp(443), "Allow inbound HTTPS")

# Allow connection between AutoScalingGroups
app_fleet.connections.allow_to(db_fleet, ec2.Port.tcp(443), "App can call database")

Connection Peers

There are various classes that implement the connection peer part:

# Simple connection peers
peer = ec2.Peer.ipv4("10.0.0.0/16")
peer = ec2.Peer.any_ipv4()
peer = ec2.Peer.ipv6("::0/0")
peer = ec2.Peer.any_ipv6()
peer = ec2.Peer.prefix_list("pl-12345")
fleet.connections.allow_to(peer, ec2.Port.tcp(443), "Allow outbound HTTPS")

Any object that has a security group can itself be used as a connection peer:

# These automatically create appropriate ingress and egress rules in both security groups
fleet1.connections.allow_to(fleet2, ec2.Port.tcp(80), "Allow between fleets")

fleet.connections.allow_from_any_ipv4(ec2.Port.tcp(80), "Allow from load balancer")

Port Ranges

The connections that are allowed are specified by port ranges. A number of classes provide the connection specifier:

ec2.Port.tcp(80)
ec2.Port.tcp_range(60000, 65535)
ec2.Port.all_tcp()
ec2.Port.all_traffic()

NOTE: This set is not complete yet; for example, there is no library support for ICMP at the moment. However, you can write your own classes to implement those.

Default Ports

Some Constructs have default ports associated with them. For example, the listener of a load balancer does (it's the public port), or instances of an RDS database (it's the port the database is accepting connections on).

If the object you're calling the peering method on has a default port associated with it, you can call allowDefaultPortFrom() and omit the port specifier. If the argument has an associated default port, call allowDefaultPortTo().

For example:

# Port implicit in listener
listener.connections.allow_default_port_from_any_ipv4("Allow public")

# Port implicit in peer
fleet.connections.allow_default_port_to(rds_database, "Fleet can access database")

Machine Images (AMIs)

AMIs control the OS that gets launched when you start your EC2 instance. The EC2 library contains constructs to select the AMI you want to use.

Depending on the type of AMI, you select it in a different way.

The latest version of Amazon Linux and Microsoft Windows images are selectable by instantiating one of these classes:

# Pick a Windows edition to use
windows = ec2.WindowsImage(ec2.WindowsVersion.WINDOWS_SERVER_2019_ENGLISH_FULL_BASE)

# Pick the right Amazon Linux edition. All arguments shown are optional
# and will default to these values when omitted.
amzn_linux = ec2.AmazonLinuxImage(
    generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX,
    edition=ec2.AmazonLinuxEdition.STANDARD,
    virtualization=ec2.AmazonLinuxVirt.HVM,
    storage=ec2.AmazonLinuxStorage.GENERAL_PURPOSE
)

# For other custom (Linux) images, instantiate a `GenericLinuxImage` with
# a map giving the AMI to use for each region:

linux = ec2.GenericLinuxImage({
    "us-east-1": "ami-97785bed",
    "eu-west-1": "ami-12345678"
})

# For other custom (Windows) images, instantiate a `GenericWindowsImage` with
# a map giving the AMI to use for each region:

generic_windows = ec2.GenericWindowsImage({
    "us-east-1": "ami-97785bed",
    "eu-west-1": "ami-12345678"
})

NOTE: The Amazon Linux images selected will be cached in your cdk.json, so that your AutoScalingGroups don't automatically change out from under you when you're making unrelated changes. To update to the latest version of Amazon Linux, remove the cache entry from the context section of your cdk.json.

We will add command-line options to make this step easier in the future.

VPN connections to a VPC

Create your VPC with VPN connections by specifying the vpnConnections props (keys are construct ids):

vpc = ec2.Vpc(stack, "MyVpc",
    vpn_connections={
        # Dynamic routing (BGP)
        "dynamic": ec2.VpnConnectionOptions(
            ip="1.2.3.4"
        ),
        # Static routing
        "static": ec2.VpnConnectionOptions(
            ip="4.5.6.7",
            static_routes=["192.168.10.0/24", "192.168.20.0/24"]
        )
    }
)

To create a VPC that can accept VPN connections, set vpnGateway to true:

vpc = ec2.Vpc(stack, "MyVpc",
    vpn_gateway=True
)

VPN connections can then be added:

vpc.add_vpn_connection("Dynamic",
    ip="1.2.3.4"
)

Routes will be propagated on the route tables associated with the private subnets.

VPN connections expose metrics (cloudwatch.Metric) across all tunnels in the account/region and per connection:

# Across all tunnels in the account/region
all_data_out = ec2.VpnConnection.metric_all_tunnel_data_out()

# For a specific vpn connection
vpn_connection = vpc.add_vpn_connection("Dynamic",
    ip="1.2.3.4"
)
state = vpn_connection.metric_tunnel_state()

VPC endpoints

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components that allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.

import aws_cdk.aws_iam as iam

# Add gateway endpoints when creating the VPC
vpc = ec2.Vpc(self, "MyVpc",
    gateway_endpoints={
        "S3": ec2.GatewayVpcEndpointOptions(
            service=ec2.GatewayVpcEndpointAwsService.S3
        )
    }
)

# Alternatively, gateway endpoints can be added on the VPC
dynamo_db_endpoint = vpc.add_gateway_endpoint("DynamoDbEndpoint",
    service=ec2.GatewayVpcEndpointAwsService.DYNAMODB
)

# This allows customizing the endpoint policy
dynamo_db_endpoint.add_to_policy(
    iam.PolicyStatement(
        # Restrict to listing and describing tables
        principals=[iam.AnyPrincipal()],
        actions=["dynamodb:DescribeTable", "dynamodb:ListTables"],
        resources=["*"]
    )
)

# Add an interface endpoint
ecr_docker_endpoint = vpc.add_interface_endpoint("EcrDockerEndpoint",
    service=ec2.InterfaceVpcEndpointAwsService.ECR_DOCKER
)

# When working with an interface endpoint, use the connections object to
# allow traffic to flow to the endpoint.
ecr_docker_endpoint.connections.allow_default_port_from_any_ipv4()

Bastion Hosts

A bastion host functions as an instance used to access servers and resources in a VPC without opening up the complete VPC on a network level. You can access bastion hosts using a standard SSH connection targeting port 22 on the host. As an alternative, you can use the SSH connection feature of AWS Systems Manager Session Manager, which does not need an opened security group. (https://aws.amazon.com/about-aws/whats-new/2019/07/session-manager-launches-tunneling-support-for-ssh-and-scp/)

A default bastion host for use via SSM can be configured like:

host = ec2.BastionHostLinux(self, "BastionHost", vpc=vpc)

If you want to connect from the internet using SSH, you need to place the host into a public subnet. You can then configure allowed source hosts.

host = ec2.BastionHostLinux(self, "BastionHost",
    vpc=vpc,
    subnet_selection=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC)
)
host.allow_ssh_access_from(ec2.Peer.ipv4("1.2.3.4/32"))

As there are no SSH public keys deployed on this machine, you need to use EC2 Instance Connect with the command aws ec2-instance-connect send-ssh-public-key to provide your SSH public key.

AWS CDK Docker Image Assets---

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.


This module allows bundling Docker images as assets.

Images are built from a local Docker context directory (with a Dockerfile), uploaded to ECR by the CDK toolkit and/or your app's CI-CD pipeline, and can be naturally referenced in your CDK app.

import os
from aws_cdk.aws_ecr_assets import DockerImageAsset

asset = DockerImageAsset(self, "MyBuildImage",
    directory=os.path.join(os.path.dirname(__file__), "my-image")
)

The directory my-image must include a Dockerfile.

This will instruct the toolkit to build a Docker image from my-image, push it to an AWS ECR repository and wire the name of the repository as CloudFormation parameters to your stack.

Use asset.imageUri to reference the image (it includes both the ECR image URL and tag).
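
For example, a minimal sketch that surfaces the URI as a stack output (assuming aws_cdk.core is imported as cdk):

cdk.CfnOutput(self, "ImageUri",
    # Full ECR image URI including the tag
    value=asset.image_uri
)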

You can optionally pass build args to the docker build command by specifying the buildArgs property:

asset = DockerImageAsset(self, "MyBuildImage",
    directory=os.path.join(os.path.dirname(__file__), "my-image"),
    build_args={
        "HTTP_PROXY": "http://10.20.30.2:1234"
    }
)

Pull Permissions

Depending on the consumer of your image asset, you will need to make sure the principal has permissions to pull the image.

In most cases, you should use the asset.repository.grantPull(principal) method. This will modify the IAM policy of the principal to allow it to pull images from this repository.

If the pulling principal is not in the same account or is an AWS service that doesn't assume a role in your account (e.g. AWS CodeBuild), pull permissions must be granted on the resource policy (and not on the principal's policy). To do that, you can use asset.repository.addToResourcePolicy(statement) to grant the desired principal the following permissions: "ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage" and "ecr:BatchCheckLayerAvailability".
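
As a sketch, with some_role standing in for a role of your own and 111122223333 for a hypothetical external account:

import aws_cdk.aws_iam as iam

# Same-account principal: grant on the principal's policy
asset.repository.grant_pull(some_role)

# Cross-account or service principal: grant on the repository's resource policy
asset.repository.add_to_resource_policy(iam.PolicyStatement(
    principals=[iam.AccountPrincipal("111122223333")],  # hypothetical account
    actions=["ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage", "ecr:BatchCheckLayerAvailability"]
))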

Amazon ECR Construct Library---

Stability: Stable


This package contains constructs for working with Amazon Elastic Container Registry.

Repositories

Define a repository by creating a new instance of Repository. A repository holds multiple versions of a single container image.

repository = ecr.Repository(self, "Repository")

Automatically clean up repositories

You can set life cycle rules to automatically clean up old images from your repository. The first life cycle rule that matches an image will be applied against that image. For example, the following deletes images older than 30 days, while keeping all images tagged with prod (note that the order is important here):

repository.add_lifecycle_rule(tag_prefix_list=["prod"], max_image_count=9999)
repository.add_lifecycle_rule(max_image_age=cdk.Duration.days(30))

CDK Construct library for higher-level ECS Constructs---

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.


This library provides higher-level Amazon ECS constructs which follow common architectural patterns. It contains:

  • Application Load Balanced Services
  • Network Load Balanced Services
  • Queue Processing Services
  • Scheduled Tasks (cron jobs)

Application Load Balanced Services

To define an Amazon ECS service that is behind an application load balancer, instantiate one of the following:

  • ApplicationLoadBalancedEc2Service
load_balanced_ecs_service = ecs_patterns.ApplicationLoadBalancedEc2Service(stack, "Service",
    cluster=cluster,
    memory_limit_mi_b=1024,
    image=ecs.ContainerImage.from_registry("test"),
    desired_count=2,
    environment={
        "TEST_ENVIRONMENT_VARIABLE1": "test environment variable 1 value",
        "TEST_ENVIRONMENT_VARIABLE2": "test environment variable 2 value"
    }
)
  • ApplicationLoadBalancedFargateService
load_balanced_fargate_service = ecs_patterns.ApplicationLoadBalancedFargateService(stack, "Service",
    cluster=cluster,
    memory_limit_mi_b=1024,
    cpu=512,
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample")
)

Instead of providing a cluster you can specify a VPC, and the CDK will create a new ECS cluster. If you deploy multiple services, the CDK will only create one cluster per VPC.

You can omit cluster and vpc to let CDK create a new VPC with two AZs and create a cluster inside this VPC.
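
For example, a sketch that passes a vpc (assumed to be defined elsewhere) instead of a cluster:

load_balanced_fargate_service = ecs_patterns.ApplicationLoadBalancedFargateService(stack, "ServiceInVpc",
    # A cluster is created for you inside this VPC
    vpc=vpc,
    memory_limit_mi_b=1024,
    cpu=512,
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample")
)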

Network Load Balanced Services

To define an Amazon ECS service that is behind a network load balancer, instantiate one of the following:

  • NetworkLoadBalancedEc2Service
load_balanced_ecs_service = ecs_patterns.NetworkLoadBalancedEc2Service(stack, "Service",
    cluster=cluster,
    memory_limit_mi_b=1024,
    image=ecs.ContainerImage.from_registry("test"),
    desired_count=2,
    environment={
        "TEST_ENVIRONMENT_VARIABLE1": "test environment variable 1 value",
        "TEST_ENVIRONMENT_VARIABLE2": "test environment variable 2 value"
    }
)
  • NetworkLoadBalancedFargateService
load_balanced_fargate_service = ecs_patterns.NetworkLoadBalancedFargateService(stack, "Service",
    cluster=cluster,
    memory_limit_mi_b=1024,
    cpu=512,
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample")
)

The CDK will create a new Amazon ECS cluster if you specify a VPC and omit cluster. If you deploy multiple services the CDK will only create one cluster per VPC.

If cluster and vpc are omitted, the CDK creates a new VPC with subnets in two Availability Zones and a cluster within this VPC.

Queue Processing Services

To define a service that creates a queue and reads from that queue, instantiate one of the following:

  • QueueProcessingEc2Service
queue_processing_ec2_service = QueueProcessingEc2Service(stack, "Service",
    cluster=cluster,
    memory_limit_mi_b=1024,
    image=ecs.ContainerImage.from_registry("test"),
    command=["-c", "4", "amazon.com"],
    enable_logging=False,
    desired_task_count=2,
    environment={
        "TEST_ENVIRONMENT_VARIABLE1": "test environment variable 1 value",
        "TEST_ENVIRONMENT_VARIABLE2": "test environment variable 2 value"
    },
    queue=queue,
    max_scaling_capacity=5
)
  • QueueProcessingFargateService
queue_processing_fargate_service = QueueProcessingFargateService(stack, "Service",
    cluster=cluster,
    memory_limit_mi_b=512,
    image=ecs.ContainerImage.from_registry("test"),
    command=["-c", "4", "amazon.com"],
    enable_logging=False,
    desired_task_count=2,
    environment={
        "TEST_ENVIRONMENT_VARIABLE1": "test environment variable 1 value",
        "TEST_ENVIRONMENT_VARIABLE2": "test environment variable 2 value"
    },
    queue=queue,
    max_scaling_capacity=5
)

Scheduled Tasks

To define a task that runs periodically, instantiate a ScheduledEc2Task:

# Instantiate an Amazon EC2 Task to run at a scheduled interval
ecs_scheduled_task = ScheduledEc2Task(self, "ScheduledTask",
    cluster=cluster,
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
    schedule_expression="rate(1 minute)",
    environment=[{"name": "TRIGGER", "value": "CloudWatch Events"}],
    memory_limit_mi_b=256
)

Amazon ECS Construct Library---

Stability: Stable


This package contains constructs for working with Amazon Elastic Container Service (Amazon ECS).

Amazon ECS is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers on a cluster of Amazon EC2 instances.

For further information on Amazon ECS, see the Amazon ECS documentation

The following example creates an Amazon ECS cluster, adds capacity to it, and instantiates the Amazon ECS Service with an automatic load balancer.

import aws_cdk.aws_ecs as ecs

# Create an ECS cluster
cluster = ecs.Cluster(self, "Cluster",
    vpc=vpc
)

# Add capacity to it
cluster.add_capacity("DefaultAutoScalingGroupCapacity",
    instance_type=ec2.InstanceType("t2.xlarge"),
    desired_capacity=3
)

# Instantiate an Amazon ECS Service
ecs_service = ecs.Ec2Service(self, "Service",
    cluster=cluster,
    memory_limit_mi_b=512,
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample")
)

For a set of constructs defining common ECS architectural patterns, see the @aws-cdk/aws-ecs-patterns package.

AWS Fargate vs Amazon ECS

There are two sets of constructs in this library; one to run tasks on Amazon ECS and one to run tasks on AWS Fargate.

  • Use the Ec2TaskDefinition and Ec2Service constructs to run tasks on Amazon EC2 instances running in your account.
  • Use the FargateTaskDefinition and FargateService constructs to run tasks on instances that are managed for you by AWS.

Here are the main differences:

  • Amazon EC2: instances are under your control. Complete control of task to host allocation. Required to specify at least a memory reservation or limit for every container. Can use Host, Bridge and AwsVpc networking modes. Can attach Classic Load Balancer. Can share volumes between container and host.
  • AWS Fargate: tasks run on AWS-managed instances, and AWS manages task to host allocation for you. Requires specification of memory and CPU sizes at the task definition level. Only supports the AwsVpc networking mode and Application/Network Load Balancers. Only the AWS log driver is supported. Many host features are not supported, such as adding kernel capabilities and mounting host devices/volumes inside the container.

For more information on Amazon EC2 vs AWS Fargate and networking see the AWS Documentation: AWS Fargate and Task Networking.

Clusters

A Cluster defines the infrastructure to run your tasks on. You can run many tasks on a single cluster.

The following code creates a cluster that can run AWS Fargate tasks:

cluster = ecs.Cluster(self, "Cluster",
    vpc=vpc
)

To use tasks with Amazon EC2 launch-type, you have to add capacity to the cluster in order for tasks to be scheduled on your instances. Typically, you add an AutoScalingGroup with instances running the latest Amazon ECS-optimized AMI to the cluster. There is a method to build and add such an AutoScalingGroup automatically, or you can supply a customized AutoScalingGroup that you construct yourself. It's possible to add multiple AutoScalingGroups with various instance types.

The following example creates an Amazon ECS cluster and adds capacity to it:

cluster = ecs.Cluster(self, "Cluster",
    vpc=vpc
)

# Either add default capacity
cluster.add_capacity("DefaultAutoScalingGroupCapacity",
    instance_type=ec2.InstanceType("t2.xlarge"),
    desired_capacity=3
)

# Or add customized capacity. Be sure to start the Amazon ECS-optimized AMI.
auto_scaling_group = autoscaling.AutoScalingGroup(self, "ASG",
    vpc=vpc,
    instance_type=ec2.InstanceType("t2.xlarge"),
    machine_image=ecs.EcsOptimizedImage.amazon_linux(),
    # Or use the Amazon ECS-Optimized Amazon Linux 2 AMI
    # machine_image=ecs.EcsOptimizedImage.amazon_linux2(),
    desired_capacity=3
)

cluster.add_auto_scaling_group(auto_scaling_group)

If you omit the property vpc, the construct will create a new VPC with two AZs.
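
For example:

# No `vpc` given: the construct creates a new VPC with 2 AZs
cluster = ecs.Cluster(self, "ClusterWithOwnVpc")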

Task definitions

A task definition describes what a single copy of a task should look like. A task definition has one or more containers; typically, it has one main container (the default container is the first one that's added to the task definition, and it is marked essential) and optionally some supporting containers which are used to support the main container, doing things like uploading logs or metrics to monitoring services.

To run a task or service with Amazon EC2 launch type, use the Ec2TaskDefinition. For AWS Fargate tasks/services, use the FargateTaskDefinition. These classes provide a simplified API that only contains the properties relevant to that specific launch type.

For a FargateTaskDefinition, specify the task size (memoryLimitMiB and cpu):

fargate_task_definition = ecs.FargateTaskDefinition(self, "TaskDef",
    memory_limit_mi_b=512,
    cpu=256
)

To add containers to a task definition, call addContainer():

container = fargate_task_definition.add_container("WebContainer",
    # Use an image from DockerHub
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample")
)

For a Ec2TaskDefinition:

ec2_task_definition = ecs.Ec2TaskDefinition(self, "TaskDef",
    network_mode=ecs.NetworkMode.BRIDGE
)

container = ec2_task_definition.add_container("WebContainer",
    # Use an image from DockerHub
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
    memory_limit_mi_b=1024
)

You can specify container properties when you add them to the task definition, or with various methods, e.g.:

container.add_port_mappings(
    ecs.PortMapping(container_port=3000)
)

To use a TaskDefinition that can be used with either Amazon EC2 or AWS Fargate launch types, use the TaskDefinition construct.

When creating a task definition you have to specify what kind of tasks you intend to run: Amazon EC2, AWS Fargate, or both. The following example uses both:

task_definition = ecs.TaskDefinition(self, "TaskDef",
    memory_mi_b="512",
    cpu="256",
    network_mode=ecs.NetworkMode.AWS_VPC,
    compatibility=ecs.Compatibility.EC2_AND_FARGATE
)

Images

Images supply the software that runs inside the container. Images can be obtained from DockerHub or ECR repositories, or built directly from a local Dockerfile.

  • ecs.ContainerImage.fromRegistry(imageName): use a public image.
  • ecs.ContainerImage.fromRegistry(imageName, { credentials: mySecret }): use a private image that requires credentials.
  • ecs.ContainerImage.fromEcrRepository(repo, tag): use the given ECR repository as the image to start. If no tag is provided, "latest" is assumed.
  • ecs.ContainerImage.fromAsset('./image'): build and upload an image directly from a Dockerfile in your source directory.
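
For instance, the ECR and asset variants above translate to Python as follows (repo is assumed to be an ecr.Repository defined elsewhere):

# From an existing ECR repository; "latest" is assumed when no tag is given
image_from_ecr = ecs.ContainerImage.from_ecr_repository(repo)

# Built and uploaded from a local Dockerfile in ./image
image_from_asset = ecs.ContainerImage.from_asset("./image")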

Environment variables

To pass environment variables to the container, use the environment and secrets props.

task_definition.add_container("container",
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
    memory_limit_mi_b=1024,
    environment={# clear text, not for sensitive data
        "STAGE": "prod"},
    secrets={# Retrieved from AWS Secrets Manager or AWS Systems Manager Parameter Store at container start-up.
        "SECRET": ecs.Secret.from_secrets_manager(secret),
        "PARAMETER": ecs.Secret.from_ssm_parameter(parameter)}
)

The task execution role is automatically granted read permissions on the secrets/parameters.

Service

A Service instantiates a TaskDefinition on a Cluster a given number of times, optionally associating them with a load balancer. If a task fails, Amazon ECS automatically restarts the task.

# `task_definition` is one of the task definitions defined in the sections above

service = ecs.FargateService(self, "Service",
    cluster=cluster,
    task_definition=task_definition,
    desired_count=5
)

Include a load balancer

Services are load balancing targets and can be directly attached to load balancers:

import aws_cdk.aws_elasticloadbalancingv2 as elbv2

service = ecs.FargateService(self, "Service",
    cluster=cluster,
    task_definition=task_definition
)

lb = elbv2.ApplicationLoadBalancer(self, "LB", vpc=vpc, internet_facing=True)
listener = lb.add_listener("Listener", port=80)
target = listener.add_targets("ECS",
    port=80,
    targets=[service]
)

There are two higher-level constructs available which include a load balancer for you that can be found in the aws-ecs-patterns module:

  • LoadBalancedFargateService
  • LoadBalancedEc2Service

Task Auto-Scaling

You can configure the task count of a service to match demand. Task auto-scaling is configured by calling autoScaleTaskCount():

scaling = service.auto_scale_task_count(max_capacity=10)
scaling.scale_on_cpu_utilization("CpuScaling",
    target_utilization_percent=50
)

scaling.scale_on_request_count("RequestScaling",
    requests_per_target=10000,
    target_group=target
)

Task auto-scaling is powered by Application Auto-Scaling. See that section for details.

Instance Auto-Scaling

If you're running on AWS Fargate, AWS manages the physical machines that your containers are running on for you. If you're running an Amazon ECS cluster however, your Amazon EC2 instances might fill up as your number of Tasks goes up.

To avoid placement errors, configure auto-scaling for your Amazon EC2 instance group so that your instance count scales with demand. To keep your Amazon EC2 instances halfway loaded, scaling up to a maximum of 30 instances if required:

auto_scaling_group = cluster.add_capacity("DefaultAutoScalingGroup",
    instance_type=ec2.InstanceType("t2.xlarge"),
    min_capacity=3,
    max_capacity=30,
    desired_capacity=3,

    # Give instances 5 minutes to drain running tasks when an instance is
    # terminated. This is the default, turn this off by specifying 0 or
    # change the timeout up to 900 seconds.
    task_drain_time=cdk.Duration.seconds(300)
)

auto_scaling_group.scale_on_cpu_utilization("KeepCpuHalfwayLoaded",
    target_utilization_percent=50
)

See the @aws-cdk/aws-autoscaling library for more autoscaling options you can configure on your instances.

Integration with CloudWatch Events

To start an Amazon ECS task on an Amazon EC2-backed Cluster, instantiate an @aws-cdk/aws-events-targets.EcsTask instead of an Ec2Service:

import os

import aws_cdk.aws_events as events
import aws_cdk.aws_events_targets as targets

# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_asset(os.path.join(os.path.dirname(__file__), "..", "eventhandler-image")),
    memory_limit_mi_b=256,
    logging=ecs.AwsLogDriver(stream_prefix="EventDemo")
)

# A Rule that describes the event trigger (in this case a scheduled run)
rule = events.Rule(self, "Rule",
    schedule=events.Schedule.expression("rate(1 min)")
)

# Pass an environment variable to the container 'TheContainer' in the task
rule.add_target(targets.EcsTask(
    cluster=cluster,
    task_definition=task_definition,
    task_count=1,
    container_overrides=[targets.ContainerOverride(
        container_name="TheContainer",
        environment=[targets.TaskEnvironmentVariable(
            name="I_WAS_TRIGGERED",
            value="From CloudWatch Events"
        )]
    )]
))

Log Drivers

Currently Supported Log Drivers:

  • awslogs
  • fluentd
  • gelf
  • journald
  • json-file
  • splunk
  • syslog

awslogs Log Driver

# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mi_b=256,
    logging=ecs.LogDrivers.aws_logs(stream_prefix="EventDemo")
)

fluentd Log Driver

# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mi_b=256,
    logging=ecs.LogDrivers.fluentd()
)

gelf Log Driver

# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mi_b=256,
    logging=ecs.LogDrivers.gelf()
)

journald Log Driver

# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mi_b=256,
    logging=ecs.LogDrivers.journald()
)

json-file Log Driver

# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mi_b=256,
    logging=ecs.LogDrivers.json_file()
)

splunk Log Driver

# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mi_b=256,
    logging=ecs.LogDrivers.splunk()
)

syslog Log Driver

# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mi_b=256,
    logging=ecs.LogDrivers.syslog()
)

Generic Log Driver

A generic log driver object exists to provide a lower level abstraction of the log driver configuration.

# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mi_b=256,
    logging=ecs.GenericLogDriver(
        log_driver="fluentd",
        options={
            "tag": "example-tag"
        }
    )
)

Amazon EKS Construct Library---

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.


This construct library allows you to define Amazon Elastic Container Service for Kubernetes (EKS) clusters programmatically. This library also supports programmatically defining Kubernetes resource manifests within EKS clusters.

This example defines an Amazon EKS cluster with the following configuration:

  • 2x m5.large instances (this instance type suits most common use-cases, and is good value for money)
  • Dedicated VPC with default configuration (see ec2.Vpc)
  • A Kubernetes pod with a container based on the paulbouwer/hello-kubernetes image.
cluster = eks.Cluster(self, "hello-eks")

cluster.add_resource("mypod", {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "mypod"},
    "spec": {
        "containers": [{
            "name": "hello",
            "image": "paulbouwer/hello-kubernetes:1.5",
            "ports": [{"containerPort": 8080}]
        }]
    }
})

NOTE: in order to determine the default AMI for Amazon EKS instances, the eks.Cluster resource must be defined within a stack that is configured with an explicit env.region. See Environments in the AWS CDK Developer Guide for more details.

Here is a complete sample.

Capacity

By default, eks.Cluster is created with 2x m5.large instances.

eks.Cluster(self, "cluster-two-m5-large")

The quantity and instance type for the default capacity can be specified through the defaultCapacity and defaultCapacityInstance props:

eks.Cluster(self, "cluster",
    default_capacity=10,
    default_capacity_instance=ec2.InstanceType("m2.xlarge")
)

To disable the default capacity, simply set defaultCapacity to 0:

eks.Cluster(self, "cluster-with-no-capacity", default_capacity=0)

The cluster.defaultCapacity property will reference the AutoScalingGroup resource for the default capacity. It will be undefined if defaultCapacity is set to 0:

cluster = eks.Cluster(self, "my-cluster")
cluster.default_capacity.scale_on_cpu_utilization("up",
    target_utilization_percent=80
)

You can add customized capacity through cluster.addCapacity() or cluster.addAutoScalingGroup():

cluster.add_capacity("frontend-nodes",
    instance_type=ec2.InstanceType("t2.medium"),
    desired_capacity=3,
    vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC)
)

Spot Capacity

If spotPrice is specified, the capacity will be purchased from spot instances:

cluster.add_capacity("spot",
    spot_price="0.1094",
    instance_type=ec2.InstanceType("t3.large"),
    max_capacity=10
)

Spot instance nodes will be labeled with lifecycle=Ec2Spot and tainted with PreferNoSchedule.

The Spot Termination Handler DaemonSet will be installed on these nodes. The termination handler leverages EC2 Spot Instance Termination Notices to gracefully stop all pods running on spot nodes that are about to be terminated.

Bootstrapping

When adding capacity, you can specify options for /etc/eks/bootstrap.sh, which is responsible for associating the node to the EKS cluster. For example, you can use kubeletExtraArgs to add custom node labels or taints.

# Add capacity with custom bootstrap options
cluster.add_capacity("spot",
    instance_type=ec2.InstanceType("t3.large"),
    desired_capacity=2,
    bootstrap_options={
        "kubelet_extra_args": "--node-labels foo=bar,goo=far",
        "aws_api_retry_attempts": 5
    }
)

To disable bootstrapping altogether (i.e. to fully customize user-data), set bootstrapEnabled to false when you add the capacity.
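
A minimal sketch of that (the instance type and count here are just examples):

cluster.add_capacity("custom-bootstrap",
    instance_type=ec2.InstanceType("t3.large"),
    desired_capacity=2,
    # Skip /etc/eks/bootstrap.sh; supply your own user-data instead
    bootstrap_enabled=False
)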

Masters Role

The Amazon EKS construct library allows you to specify an IAM role that will be granted system:masters privileges on your cluster.

Without specifying a mastersRole, you will not be able to interact manually with the cluster.

The following example defines an IAM role that can be assumed by all users in the account and shows how to use the mastersRole property to map this role to the Kubernetes system:masters group:

# first define the role
cluster_admin = iam.Role(self, "AdminRole",
    assumed_by=iam.AccountRootPrincipal()
)

# now define the cluster and map role to "masters" RBAC group
eks.Cluster(self, "Cluster",
    masters_role=cluster_admin
)

When you cdk deploy this CDK app, you will notice that an output will be printed with the update-kubeconfig command.

Something like this:

Outputs:
eks-integ-defaults.ClusterConfigCommand43AAE40F = aws eks update-kubeconfig --name cluster-ba7c166b-c4f3-421c-bf8a-6812e4036a33 --role-arn arn:aws:iam::112233445566:role/eks-integ-defaults-Role1ABCC5F0-1EFK2W5ZJD98Y

Copy & paste the "aws eks update-kubeconfig ..." command to your shell in order to connect to your EKS cluster with the "masters" role.

Now, given AWS CLI is configured to use AWS credentials for a user that is trusted by the masters role, you should be able to interact with your cluster through kubectl (the above example will trust all users in the account).

For example:

$ aws eks update-kubeconfig --name cluster-ba7c166b-c4f3-421c-bf8a-6812e4036a33 --role-arn arn:aws:iam::112233445566:role/eks-integ-defaults-Role1ABCC5F0-1EFK2W5ZJD98Y
Added new context arn:aws:eks:eu-west-2:112233445566:cluster/cluster-ba7c166b-c4f3-421c-bf8a-6812e4036a33 to /Users/boom/.kube/config

$ kubectl get nodes # list all nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-147-66.eu-west-2.compute.internal    Ready    <none>   21m   v1.13.7-eks-c57ff8
ip-10-0-169-151.eu-west-2.compute.internal   Ready    <none>   21m   v1.13.7-eks-c57ff8

$ kubectl get all -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
pod/aws-node-fpmwv             1/1     Running   0          21m
pod/aws-node-m9htf             1/1     Running   0          21m
pod/coredns-5cb4fb54c7-q222j   1/1     Running   0          23m
pod/coredns-5cb4fb54c7-v9nxx   1/1     Running   0          23m
pod/kube-proxy-d4jrh           1/1     Running   0          21m
pod/kube-proxy-q7hh7           1/1     Running   0          21m

NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
service/kube-dns   ClusterIP   172.20.0.10   <none>        53/UDP,53/TCP   23m

NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/aws-node     2         2         2       2            2           <none>          23m
daemonset.apps/kube-proxy   2         2         2       2            2           <none>          23m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   2/2     2            2           23m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-5cb4fb54c7   2         2         2       23m

For your convenience, an AWS CloudFormation output will automatically be included in your template and will be printed when running cdk deploy.

NOTE: if the cluster is configured with kubectlEnabled: false, it will be created with the role/user that created the AWS CloudFormation stack. See Kubectl Support for details.

Kubernetes Resources

The KubernetesResource construct or cluster.addResource method can be used to apply Kubernetes resource manifests to this cluster.

The following examples will deploy the paulbouwer/hello-kubernetes service on the cluster:

app_label = {"app": "hello-kubernetes"}

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "hello-kubernetes"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": app_label},
        "template": {
            "metadata": {"labels": app_label},
            "spec": {
                "containers": [{
                    "name": "hello-kubernetes",
                    "image": "paulbouwer/hello-kubernetes:1.5",
                    "ports": [{"containerPort": 8080}]
                }]
            }
        }
    }
}

service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "hello-kubernetes"},
    "spec": {
        "type": "LoadBalancer",
        "ports": [{"port": 80, "targetPort": 8080}],
        "selector": app_label
    }
}

# option 1: use a construct
KubernetesResource(self, "hello-kub",
    cluster=cluster,
    manifest=[deployment, service]
)

# or, option2: use `addResource`
cluster.add_resource("hello-kub", service, deployment)

Kubernetes resources are implemented as CloudFormation resources in the CDK. This means that if the resource is deleted from your code (or the stack is deleted), the next cdk deploy will issue a kubectl delete command and the Kubernetes resources will be deleted.

AWS IAM Mapping

As described in the Amazon EKS User Guide, you can map AWS IAM users and roles to Kubernetes Role-based access control (RBAC).

The Amazon EKS construct manages the aws-auth ConfigMap Kubernetes resource on your behalf and exposes an API through the cluster.awsAuth for mapping users, roles and accounts.

Furthermore, when auto-scaling capacity is added to the cluster (through cluster.addCapacity or cluster.addAutoScalingGroup), the IAM instance role of the auto-scaling group will be automatically mapped to RBAC so nodes can connect to the cluster. No manual mapping is required any longer.

NOTE: cluster.awsAuth will throw an error if your cluster is created with kubectlEnabled: false.

For example, let's say you want to grant an IAM user administrative privileges on your cluster:

admin_user = iam.User(self, "Admin")
cluster.aws_auth.add_user_mapping(admin_user, groups=["system:masters"])

A convenience method for mapping a role to the system:masters group is also available:

cluster.aws_auth.add_masters_role(role)

Node ssh Access

If you want to be able to SSH into your worker nodes, you must already have an SSH key pair in the region you're deploying to and pass its name, and you must be able to connect to the hosts (meaning they must have a public IP and you should be allowed to connect to them on port 22):

asg = cluster.add_capacity("Nodes",
    instance_type=ec2.InstanceType("t2.medium"),
    vpc_subnets={"subnet_type": ec2.SubnetType.PUBLIC},
    key_name="my-key-name"
)

# Replace with desired IP
asg.connections.allow_from(ec2.Peer.ipv4("1.2.3.4/32"), ec2.Port.tcp(22))

If you want to SSH into nodes in a private subnet, you should set up a bastion host in a public subnet. That setup is recommended, but is unfortunately beyond the scope of this documentation.

kubectl Support

When you create an Amazon EKS cluster, the IAM entity user or role, such as a federated user that creates the cluster, is automatically granted system:masters permissions in the cluster's RBAC configuration.

In order to allow programmatically defining Kubernetes resources in your AWS CDK app and provisioning them through AWS CloudFormation, we will need to assume this "masters" role every time we want to issue kubectl operations against your cluster.

At the moment, the AWS::EKS::Cluster AWS CloudFormation resource does not support this behavior, so in order to support "programmatic kubectl", such as applying manifests and mapping IAM roles from within your CDK application, the Amazon EKS construct library uses a custom resource for provisioning the cluster. This custom resource is executed with an IAM role that we can then use to issue kubectl commands.

The default behavior of this library is to use this custom resource in order to retain programmatic control over the cluster. In other words: to allow you to define Kubernetes resources in your CDK code instead of having to manage your Kubernetes applications through a separate system.

One of the implications of this design is that, by default, the user who provisioned the AWS CloudFormation stack (executed cdk deploy) will not have administrative privileges on the EKS cluster. In addition:

  1. Additional resources will be synthesized into your template (the AWS Lambda function, the role and policy).
  2. As described in Interacting with Your Cluster, if you wish to be able to manually interact with your cluster, you will need to map an IAM role or user to the system:masters group. This can be either done by specifying a mastersRole when the cluster is defined, calling cluster.awsAuth.addMastersRole or explicitly mapping an IAM role or IAM user to the relevant Kubernetes RBAC groups using cluster.addRoleMapping and/or cluster.addUserMapping.

If you wish to disable the programmatic kubectl behavior and use the standard AWS::EKS::Cluster resource, you can specify kubectlEnabled: false when you define the cluster:

eks.Cluster(self, "cluster",
    kubectl_enabled=False
)

Take care: a change in this property will cause the cluster to be destroyed and a new cluster to be created.

When kubectl is disabled, you should be aware of the following:

  1. When you log in to your cluster, you don't need to specify --role-arn as long as you are using the same user that created the cluster.
  2. As described in the Amazon EKS User Guide, you will need to manually edit the aws-auth ConfigMap when you add capacity in order to map the IAM instance role to RBAC to allow nodes to join the cluster.
  3. Any eks.Cluster APIs that depend on programmatic kubectl support will fail with an error: cluster.addResource, cluster.awsAuth, props.mastersRole.

Roadmap

  • AutoScaling (combine EC2 and Kubernetes scaling)

Amazon Elastic Load Balancing Construct Library---

Stability: Stable


The @aws-cdk/aws-elasticloadbalancing package provides constructs for configuring classic load balancers.

Configuring a Load Balancer

Load balancers send traffic to one or more AutoScalingGroups. Create a load balancer, set up listeners and a health check, and supply the fleet(s) you want to load balance to in the targets property.

import aws_cdk.aws_elasticloadbalancing as elb

lb = elb.LoadBalancer(self, "LB",
    vpc=vpc,
    internet_facing=True,
    health_check=elb.HealthCheck(
        port=80
    )
)

lb.add_target(my_auto_scaling_group)
lb.add_listener(
    external_port=80
)

The load balancer allows all connections by default. If you want to change that, pass the allowConnectionsFrom property while setting up the listener:

lb.add_listener(
    external_port=80,
    allow_connections_from=[my_security_group]
)

Amazon Elastic Load Balancing V2 Construct Library---

Stability: Stable


The @aws-cdk/aws-elasticloadbalancingv2 package provides constructs for configuring application and network load balancers.

For more information, see the AWS documentation for Application Load Balancers and Network Load Balancers.

Defining an Application Load Balancer

You define an application load balancer by creating an instance of ApplicationLoadBalancer, adding a Listener to the load balancer and adding Targets to the Listener:

import aws_cdk.aws_ec2 as ec2
import aws_cdk.aws_elasticloadbalancingv2 as elbv2
import aws_cdk.aws_autoscaling as autoscaling

# ...

vpc = ec2.Vpc(...)

# Create the load balancer in a VPC. 'internetFacing' is 'false'
# by default, which creates an internal load balancer.
lb = elbv2.ApplicationLoadBalancer(self, "LB",
    vpc=vpc,
    internet_facing=True
)

# Add a listener and open up the load balancer's security group
# to the world. 'open' is the default, set this to 'false'
# and use `listener.connections` if you want to be selective
# about who can access the listener.
listener = lb.add_listener("Listener",
    port=80,
    open=True
)

# Create an AutoScaling group and add it as a load balancing
# target to the listener.
asg = autoscaling.AutoScalingGroup(...)
listener.add_targets("ApplicationFleet",
    port=8080,
    targets=[asg]
)

The security groups of the load balancer and the target are automatically updated to allow the network traffic.

Use the addFixedResponse() method to add fixed response rules on the listener:

listener.add_fixed_response("Fixed",
    path_pattern="/ok",
    content_type=elbv2.ContentType.TEXT_PLAIN,
    message_body="OK",
    status_code="200"
)

Conditions

It's possible to route traffic to targets based on conditions in the incoming HTTP request. Path- and host-based conditions are supported. For example, the following will route requests to the indicated AutoScalingGroup only if the requested host in the request is example.com:

listener.add_targets("Example.Com Fleet",
    priority=10,
    host_header="example.com",
    port=8080,
    targets=[asg]
)

priority is a required field when you add targets with conditions. The lowest number wins.

Every listener must have at least one target without conditions.
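
For example, a catch-all fleet without conditions could be registered alongside the conditional rule above (default_asg is assumed to be another AutoScalingGroup):

# Matches every request that doesn't match a conditional rule
listener.add_targets("DefaultFleet",
    port=8080,
    targets=[default_asg]
)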

Defining a Network Load Balancer

Network Load Balancers are defined in a similar way to Application Load Balancers:

import aws_cdk.aws_ec2 as ec2
import aws_cdk.aws_elasticloadbalancingv2 as elbv2
import aws_cdk.aws_autoscaling as autoscaling

# Create the load balancer in a VPC. 'internetFacing' is 'false'
# by default, which creates an internal load balancer.
lb = elbv2.NetworkLoadBalancer(self, "LB",
    vpc=vpc,
    internet_facing=True
)

# Add a listener on a particular port.
listener = lb.add_listener("Listener",
    port=443
)

# Add targets on a particular port.
listener.add_targets("AppFleet",
    port=443,
    targets=[asg]
)

One thing to keep in mind is that network load balancers do not have security groups, and no automatic security group configuration is done for you. You will have to configure the security groups of the target yourself to allow traffic by clients and/or load balancer instances, depending on your target types. See Target Groups for your Network Load Balancers and Register targets with your Target Group for more information.
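
For example, a sketch that opens the target's security group to client traffic on the listener port (asg being the AutoScalingGroup used as a target):

# NLBs pass traffic through to the targets, so allow the clients themselves
asg.connections.allow_from_any_ipv4(ec2.Port.tcp(443), "Allow NLB traffic from clients")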

Targets and Target Groups

Application and Network Load Balancers organize load balancing targets in Target Groups. If you add your balancing targets (such as AutoScalingGroups, ECS services or individual instances) to your listener directly, the appropriate TargetGroup will be automatically created for you.

If you need more control over the Target Groups created, create an instance of ApplicationTargetGroup or NetworkTargetGroup, add the members you desire, and add it to the listener by calling addTargetGroups instead of addTargets.

addTargets() will always return the Target Group it just created for you:

group = listener.add_targets("AppFleet",
    port=443,
    targets=[asg1]
)

group.add_target(asg2)
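
A sketch of the explicit form, reusing vpc and asg1 from above:

explicit_group = elbv2.ApplicationTargetGroup(self, "ExplicitGroup",
    vpc=vpc,
    port=443,
    targets=[asg1]
)

listener.add_target_groups("ExplicitGroups",
    target_groups=[explicit_group]
)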

Using Lambda Targets

To use a Lambda Function as a target, use the integration class in the @aws-cdk/aws-elasticloadbalancingv2-targets package:

import aws_cdk.aws_lambda as lambda
import aws_cdk.aws_elasticloadbalancingv2 as elbv2
import aws_cdk.aws_elasticloadbalancingv2_targets as targets

lambda_function = lambda.Function(...)
load_balancer = elbv2.ApplicationLoadBalancer(...)

listener = load_balancer.add_listener("Listener", port=80)
listener.add_targets("Targets",
    targets=[targets.LambdaTarget(lambda_function)]
)

Only a single Lambda function can be added to a single listener rule.

Configuring Health Checks

Health checks are configured upon creation of a target group:

listener.add_targets("AppFleet",
    port=8080,
    targets=[asg],
    health_check={
        "path": "/ping",
        "interval": cdk.Duration.minutes(1)
    }
)

The health check can also be configured after creation by calling configureHealthCheck() on the created object.
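
For example, a minimal sketch (group is assumed to be the Target Group returned by an earlier addTargets() call):

group.configure_health_check(
    path="/ping",
    interval=cdk.Duration.minutes(1)
)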

No attempts are made to configure security groups for the port you're configuring a health check for, but if the health check is on the same port you're routing traffic to, the security group already allows the traffic. If not, you will have to configure the security groups appropriately:

listener.add_targets("AppFleet",
    port=8080,
    targets=[asg],
    health_check={
        "port": "8088"
    }
)

listener.connections.allow_from(lb, ec2.Port.tcp(8088))

Protocol for Load Balancer Targets

Constructs that want to be a load balancer target should implement IApplicationLoadBalancerTarget and/or INetworkLoadBalancerTarget, and provide an implementation for the function attachToXxxTargetGroup(), which can call functions on the load balancer and should return metadata about the load balancing target:

# Sketch of what an implementation typically looks like:
def attach_to_application_target_group(self, target_group):
    # Optionally register with the target group so the load balancer's
    # security group is opened up to this target
    target_group.register_connectable(...)
    return {
        "target_type": elbv2.TargetType.INSTANCE, # or elbv2.TargetType.IP
        "target_json": {"id": ..., "port": ...}
    }

targetType should be one of Instance or Ip. If the target can be directly added to the target group, targetJson should contain the id of the target (either instance ID or IP address depending on the type) and optionally a port or availabilityZone override.

Application load balancer targets can call registerConnectable() on the target group to register themselves for addition to the load balancer's security group rules.

If your load balancer target requires that the TargetGroup has been associated with a LoadBalancer before registration can happen (as is the case for ECS Services, for example), take a resource dependency on targetGroup.loadBalancerDependency() as follows:

# Make sure that the listener has been created, and so the TargetGroup
# has been associated with the LoadBalancer, before 'resource' is created.
resource.add_dependency(target_group.load_balancer_dependency())

Amazon CloudWatch Events Construct Library---

Stability: Stable


Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in AWS resources. For example, an AWS CodePipeline emits the State Change event when the pipeline changes its state.

  • Events: An event indicates a change in your AWS environment. AWS resources can generate events when their state changes. For example, Amazon EC2 generates an event when the state of an EC2 instance changes from pending to running, and Amazon EC2 Auto Scaling generates events when it launches or terminates instances. AWS CloudTrail publishes events when you make API calls. You can generate custom application-level events and publish them to CloudWatch Events. You can also set up scheduled events that are generated on a periodic basis. For a list of services that generate events, and sample events from each service, see CloudWatch Events Event Examples From Each Supported Service.
  • Targets: A target processes events. Targets can include Amazon EC2 instances, AWS Lambda functions, Kinesis streams, Amazon ECS tasks, Step Functions state machines, Amazon SNS topics, Amazon SQS queues, and built-in targets. A target receives events in JSON format.
  • Rules: A rule matches incoming events and routes them to targets for processing. A single rule can route to multiple targets, all of which are processed in parallel. Rules are not processed in a particular order. This enables different parts of an organization to look for and process the events that are of interest to them. A rule can customize the JSON sent to the target, by passing only certain parts or by overwriting it with a constant.

The Rule construct defines a CloudWatch events rule which monitors an event based on an event pattern and invokes event targets when the pattern is matched against a triggered event. Event targets are objects that implement the IRuleTarget interface.

Normally, you will use one of the source.onXxx(name[, target[, options]]) -> Rule methods on the event source to define an event rule associated with the specific activity. You can add targets either via props, or by calling rule.addTarget.

For example, to define a rule that triggers a CodeBuild project build when a commit is pushed to the "master" branch of a CodeCommit repository:

on_commit_rule = repo.on_commit("OnCommit",
    target=targets.CodeBuildProject(project),
    branches=["master"]
)

You can add additional targets, with an optional input transformer, using eventRule.addTarget(target[, input]).

For example, the following adds an SNS topic target which formats a human-readable message for the commit:

on_commit_rule.add_target(targets.SnsTopic(topic,
    message=events.RuleTargetInput.from_text(f"A commit was pushed to the repository {codecommit.ReferenceEvent.repositoryName} on branch {codecommit.ReferenceEvent.referenceName}")
))

Event Targets

The @aws-cdk/aws-events-targets module includes classes that implement the IRuleTarget interface for various AWS services.

The following targets are supported:

  • targets.CodeBuildProject: Start an AWS CodeBuild build
  • targets.CodePipeline: Start an AWS CodePipeline pipeline execution
  • targets.EcsTask: Start a task on an Amazon ECS cluster
  • targets.LambdaFunction: Invoke an AWS Lambda function
  • targets.SnsTopic: Publish into an SNS topic
  • targets.SqsQueue: Send a message to an Amazon SQS Queue
  • targets.SfnStateMachine: Trigger an AWS Step Functions state machine
  • targets.AwsApi: Make an AWS API call

Cross-account targets

It's possible to have the source of the event and a target in separate AWS accounts:

from aws_cdk.core import App, Stack
import aws_cdk.aws_codebuild as codebuild
import aws_cdk.aws_codecommit as codecommit
import aws_cdk.aws_events_targets as targets

app = App()

stack1 = Stack(app, "Stack1", env={"account": account1, "region": "us-east-1"})
repo = codecommit.Repository(stack1, "Repository",
    repository_name="MyRepository"  # repositoryName is required; any name works
)

stack2 = Stack(app, "Stack2", env={"account": account2, "region": "us-east-1"})
project = codebuild.Project(stack2, "Project")

repo.on_commit("OnCommit",
    target=targets.CodeBuildProject(project)
)

In this situation, the CDK will wire the 2 accounts together:

  • It will generate a rule in the source stack with the event bus of the target account as the target
  • It will generate a rule in the target stack, with the provided target
  • It will generate a separate stack that gives the source account permissions to publish events to the event bus of the target account in the given region, and make sure it's deployed before the source stack

Note: while events can span multiple accounts, they cannot span different regions (that is a CloudWatch, not CDK, limitation).

For more information, see the AWS documentation on cross-account events.

AWS Glue Construct Library---

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.


This module is part of the AWS Cloud Development Kit project.

Database

A Database is a logical grouping of Tables in the Glue Catalog.

glue.Database(stack, "MyDatabase",
    database_name="my_database"
)

By default, an S3 bucket is created and the Database is stored under s3://<bucket-name>/, but you can manually specify another location:

glue.Database(stack, "MyDatabase",
    database_name="my_database",
    location_uri="s3://explicit-bucket/some-path/"
)

Table

A Glue table describes a table of data in S3: its structure (column names and types), location of data (S3 objects with a common prefix in a S3 bucket), and format for the files (Json, Avro, Parquet, etc.):

glue.Table(stack, "MyTable",
    database=my_database,
    table_name="my_table",
    columns=[{
        "name": "col1",
        "type": glue.Schema.string
    }, {
        "name": "col2",
        "type": glue.Schema.array(Schema.string),
        "comment": "col2 is an array of strings"
    }],
    data_format=glue.DataFormat.Json
)

By default, an S3 bucket will be created to store the table's data, but you can manually pass the bucket and s3Prefix:

glue.Table(stack, "MyTable",
    bucket=my_bucket,
    s3_prefix="my-table/", ...
)

Partitions

To improve query performance, a table can specify partitionKeys on which data is stored and queried separately. For example, you might partition a table by year and month to optimize queries based on a time window:

glue.Table(stack, "MyTable",
    database=my_database,
    table_name="my_table",
    columns=[{
        "name": "col1",
        "type": glue.Schema.string
    }],
    partition_keys=[{
        "name": "year",
        "type": glue.Schema.smallint
    }, {
        "name": "month",
        "type": glue.Schema.smallint
    }],
    data_format=glue.DataFormat.Json
)

You can enable encryption on a Table's data:

  • Unencrypted - files are not encrypted. The default encryption setting.
  • S3Managed - Server side encryption (SSE-S3) with an Amazon S3-managed key.
glue.Table(stack, "MyTable",
    encryption=glue.TableEncryption.S3Managed, ...
)
  • Kms - Server-side encryption (SSE-KMS) with an AWS KMS Key managed by the account owner.
# KMS key is created automatically
glue.Table(stack, "MyTable",
    encryption=glue.TableEncryption.Kms, ...
)

# with an explicit KMS key
glue.Table(stack, "MyTable",
    encryption=glue.TableEncryption.Kms,
    encryption_key=kms.Key(stack, "MyKey"), ...
)
  • KmsManaged - Server-side encryption (SSE-KMS), like Kms, except with an AWS KMS Key managed by the AWS Key Management Service.
glue.Table(stack, "MyTable",
    encryption=glue.TableEncryption.KmsManaged, ...
)
  • ClientSideKms - Client-side encryption (CSE-KMS) with an AWS KMS Key managed by the account owner.
# KMS key is created automatically
glue.Table(stack, "MyTable",
    encryption=glue.TableEncryption.ClientSideKms, ...
)

# with an explicit KMS key
glue.Table(stack, "MyTable",
    encryption=glue.TableEncryption.ClientSideKms,
    encryption_key=kms.Key(stack, "MyKey"), ...
)

Note: you cannot provide a Bucket when creating the Table if you wish to use server-side encryption (Kms, KmsManaged or S3Managed).

Types

A table's schema is a collection of columns, each of which has a name and a type. Types are recursive structures, consisting of primitive and complex types:

glue.Table(stack, "MyTable",
    columns=[{
        "name": "primitive_column",
        "type": glue.Schema.string
    }, {
        "name": "array_column",
        "type": glue.Schema.array(glue.Schema.integer),
        "comment": "array<integer>"
    }, {
        "name": "map_column",
        "type": glue.Schema.map(glue.Schema.string, glue.Schema.timestamp),
        "comment": "map<string,string>"
    }, {
        "name": "struct_column",
        "type": glue.Schema.struct([
            name="nested_column",
            type=glue.Schema.date,
            comment="nested comment"
        ]),
        "comment": "struct<nested_column:date COMMENT 'nested comment'>"
    }], ...
)

Primitive

Numeric:

  • bigint
  • float
  • integer
  • smallint
  • tinyint

Date and Time:

  • date
  • timestamp

String Types:

  • string
  • decimal
  • char
  • varchar

Misc:

  • boolean
  • binary

Complex

  • array - array of some other type
  • map - map of some primitive key type to any value type.
  • struct - nested structure containing individually named and typed columns.

AWS Identity and Access Management Construct Library---

Stability: Stable


Define a role and add permissions to it. This will automatically create and attach an IAM policy to the role:

role = Role(self, "MyRole",
    assumed_by=ServicePrincipal("sns.amazonaws.com")
)

role.add_to_policy(PolicyStatement(
    resources=["*"],
    actions=["lambda:InvokeFunction"]
))

Define a policy and attach it to groups, users and roles. Note that it is possible to attach the policy either by calling xxx.attachInlinePolicy(policy) or policy.attachToXxx(xxx).

user = User(self, "MyUser", password=SecretValue.plain_text("1234"))
group = Group(self, "MyGroup")

policy = Policy(self, "MyPolicy")
policy.attach_to_user(user)
group.attach_inline_policy(policy)

Managed policies can be attached using xxx.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName(policyName)):

group = Group(self, "MyGroup")
group.add_managed_policy(ManagedPolicy.from_aws_managed_policy_name("AdministratorAccess"))

Granting permissions to resources

Many of the AWS CDK resources have grant* methods that allow you to grant other resources access to that resource. As an example, the following code gives a Lambda function write permissions (Put, Update, Delete) to a DynamoDB table.

fn = lambda.Function(...)
table = dynamodb.Table(...)

table.grant_write_data(fn)

The more generic grant method allows you to give specific permissions to a resource:

fn = lambda.Function(...)
table = dynamodb.Table(...)

table.grant(fn, "dynamodb:PutItem")

The grant* methods accept an IGrantable object. This interface is implemented by IAM principal resources (groups, users and roles) and resources that assume a role, such as a Lambda function, an EC2 instance or a CodeBuild project.

You can find which grant* methods exist for a resource in the AWS CDK API Reference.

Configuring an ExternalId

If you need to create roles that will be assumed by 3rd parties, it is generally a good idea to require an ExternalId to assume them. Configuring an ExternalId works like this:

role = iam.Role(self, "MyRole",
    assumed_by=iam.AccountPrincipal("123456789012"),
    external_ids=["SUPPLY-ME"]
)

Principals vs Identities

When we say Principal, we mean an entity you grant permissions to. This entity can be an AWS Service, a Role, or something more abstract such as "all users in this account" or even "all users in this organization". An Identity is an IAM resource representing a single IAM entity that can have a policy attached: a Role, User, or Group.

IAM Principals

When defining policy statements as part of an AssumeRole policy or as part of a resource policy, statements would usually refer to a specific IAM principal under Principal.

IAM principals are modeled as classes that derive from the iam.PolicyPrincipal abstract class. Principal objects include a principal type (string) and value (array of strings), an optional set of conditions, and the action that this principal requires when it is used in an assume role policy document.

To add a principal to a policy statement you can either use the abstract statement.addPrincipal or one of the concrete addXxxPrincipal methods:

  • addAwsPrincipal, addArnPrincipal or new ArnPrincipal(arn) for { "AWS": arn }
  • addAwsAccountPrincipal or new AccountPrincipal(accountId) for { "AWS": account-arn }
  • addServicePrincipal or new ServicePrincipal(service) for { "Service": service }
  • addAccountRootPrincipal or new AccountRootPrincipal() for { "AWS": { "Ref: "AWS::AccountId" } }
  • addCanonicalUserPrincipal or new CanonicalUserPrincipal(id) for { "CanonicalUser": id }
  • addFederatedPrincipal or new FederatedPrincipal(federated, conditions, assumeAction) for { "Federated": arn } and a set of optional conditions and the assume role action to use.
  • addAnyPrincipal or new AnyPrincipal for { "AWS": "*" }

If multiple principals are added to the policy statement, they will be merged together:

statement = PolicyStatement()
statement.add_service_principal("cloudwatch.amazonaws.com")
statement.add_service_principal("ec2.amazonaws.com")
statement.add_arn_principal("arn:aws:boom:boom")

Will result in:

{
  "Principal": {
    "Service": [ "cloudwatch.amazonaws.com", "ec2.amazonaws.com" ],
    "AWS": "arn:aws:boom:boom"
  }
}

The CompositePrincipal class can also be used to define complex principals, for example:

role = iam.Role(self, "MyRole",
    assumed_by=iam.CompositePrincipal(
        iam.ServicePrincipal("ec2.amazonaws.com"),
        iam.AccountPrincipal("1818188181818187272"))
)

Features

  • Policy name uniqueness is enforced. If two policies by the same name are attached to the same principal, the attachment will fail.
  • Policy names are not required - the CDK logical ID will be used and ensured to be unique.

Amazon Kinesis Construct Library---

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.


Define an unencrypted Kinesis stream.

Stream(self, "MyFirstStream")

Encryption

Define a KMS-encrypted stream:

stream = Stream(self, "MyEncryptedStream",
    encryption=StreamEncryption.Kms
)

# you can access the encryption key:
assert isinstance(stream.encryption_key, kms.Key)

You can also supply your own key:

my_kms_key = kms.Key(self, "MyKey")

stream = Stream(self, "MyEncryptedStream",
    encryption=StreamEncryption.Kms,
    encryption_key=my_kms_key
)

assert stream.encryption_key is my_kms_key

AWS Key Management Service Construct Library---

Stability: Stable


Define a KMS key:

import aws_cdk.aws_kms as kms

kms.Key(self, "MyKey",
    enable_key_rotation=True
)

Add a couple of aliases:

key = kms.Key(self, "MyKey")
key.add_alias("alias/foo")
key.add_alias("alias/bar")

Sharing keys between stacks

To use a KMS key in a different stack in the same CDK application, pass the construct to the other stack:

#
# Stack that defines the key
#
class KeyStack(cdk.Stack):

    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)
        self.key = kms.Key(self, "MyKey", removal_policy=RemovalPolicy.DESTROY)

#
# Stack that uses the key
#
class UseStack(cdk.Stack):
    def __init__(self, scope, id, *, key):
        super().__init__(scope, id)

        # Use the IKey object here.
        kms.Alias(self, "Alias",
            alias_name="alias/foo",
            target_key=key
        )

key_stack = KeyStack(app, "KeyStack")
UseStack(app, "UseStack", key=key_stack.key)

Importing existing keys

To use a KMS key that is not defined in this CDK app, but is created through other means, use Key.fromKeyArn(parent, name, ref):

my_key_imported = kms.Key.from_key_arn(self, "MyImportedKey", "arn:aws:...")

# you can do stuff with this imported key.
my_key_imported.add_alias("alias/foo")

Note that a call to .addToPolicy(statement) on myKeyImported will not have an effect on the key's policy because it is not owned by your stack. The call will be a no-op.

AWS Lambda Event Sources---

Stability: Stable


This module includes classes that allow using various AWS services as event sources for AWS Lambda via the high-level lambda.addEventSource(source) API.

NOTE: In most cases, it is also possible to use the resource APIs to invoke an AWS Lambda function. This library provides a uniform API for all Lambda event sources regardless of the underlying mechanism they use.

SQS

Amazon Simple Queue Service (Amazon SQS) allows you to build asynchronous workflows. For more information about Amazon SQS, see Amazon Simple Queue Service. You can configure AWS Lambda to poll for these messages as they arrive and then pass the event to a Lambda function invocation. To view a sample event, see Amazon SQS Event.

To set up Amazon Simple Queue Service as an event source for AWS Lambda, you first create or update an Amazon SQS queue and select custom values for the queue parameters. The following parameters will impact Amazon SQS's polling behavior:

  • visibilityTimeout: May impact the period between retries.
  • receiveMessageWaitTime: Will determine long poll duration. The default value is 20 seconds.
import aws_cdk.aws_sqs as sqs
from aws_cdk.aws_lambda_event_sources import SqsEventSource
from aws_cdk.core import Duration

queue = sqs.Queue(self, "MyQueue",
    visibility_timeout=Duration.seconds(30), # default,
    receive_message_wait_time=Duration.seconds(20)
)

lambda.add_event_source(SqsEventSource(queue,
    batch_size=10
))

S3

You can write Lambda functions to process S3 bucket events, such as the object-created or object-deleted events. For example, when a user uploads a photo to a bucket, you might want Amazon S3 to invoke your Lambda function so that it reads the image and creates a thumbnail for the photo.

You can use the bucket notification configuration feature in Amazon S3 to configure the event source mapping, identifying the bucket events that you want Amazon S3 to publish and which Lambda function to invoke.

import aws_cdk.aws_s3 as s3
from aws_cdk.aws_lambda_event_sources import S3EventSource

bucket = s3.Bucket(...)

lambda.add_event_source(S3EventSource(bucket,
    events=[s3.EventType.OBJECT_CREATED, s3.EventType.OBJECT_DELETED],
    filters=[{"prefix": "subdir/"}]
))

SNS

You can write Lambda functions to process Amazon Simple Notification Service notifications. When a message is published to an Amazon SNS topic, the service can invoke your Lambda function by passing the message payload as a parameter. Your Lambda function code can then process the event, for example publish the message to other Amazon SNS topics, or send the message to other AWS services.

This also enables you to trigger a Lambda function in response to Amazon CloudWatch alarms and other AWS services that use Amazon SNS.

For an example event, see Appendix: Message and JSON Formats and Amazon SNS Sample Event. For an example use case, see Using AWS Lambda with Amazon SNS from Different Accounts.

import aws_cdk.aws_sns as sns
from aws_cdk.aws_lambda_event_sources import SnsEventSource

topic = sns.Topic(...)

lambda.add_event_source(SnsEventSource(topic))

When a user calls the SNS Publish API on a topic that your Lambda function is subscribed to, Amazon SNS will call Lambda to invoke your function asynchronously. Lambda will then return a delivery status. If there was an error calling Lambda, Amazon SNS will retry invoking the Lambda function up to three times. After three tries, if Amazon SNS still could not successfully invoke the Lambda function, then Amazon SNS will send a delivery status failure message to CloudWatch.

DynamoDB Streams

You can write Lambda functions to process change events from a DynamoDB Table. An event is emitted to a DynamoDB stream (if configured) whenever a write (Put, Delete, Update) operation is performed against the table. See Using AWS Lambda with Amazon DynamoDB for more information.

To process events with a Lambda function, first create or update a DynamoDB table and enable a stream specification. Then, create a DynamoEventSource and add it to your Lambda function. The following parameters will impact Amazon DynamoDB's polling behavior:

  • batchSize: Determines how many records are buffered before invoking your lambda function - could impact your function's memory usage (if too high) and ability to keep up with incoming data velocity (if too low).
  • startingPosition: Will determine where to begin consumption, either at the most recent ('LATEST') record or the oldest record ('TRIM_HORIZON'). 'TRIM_HORIZON' will ensure you process all available data, while 'LATEST' will ignore all records that arrived prior to attaching the event source.
import aws_cdk.aws_dynamodb as dynamodb
import aws_cdk.aws_lambda as lambda
from aws_cdk.aws_lambda_event_sources import DynamoEventSource

# Make sure the table has a DynamoDB stream configured
table = dynamodb.Table(...,
    partition_key=...,
    stream=dynamodb.StreamViewType.NewImage
)

fn = lambda.Function(...)

fn.add_event_source(DynamoEventSource(table,
    starting_position=lambda.StartingPosition.TrimHorizon
))

Kinesis

You can write Lambda functions to process streaming data in Amazon Kinesis Streams. For more information about Amazon Kinesis, see Amazon Kinesis Service. To view a sample event, see Amazon Kinesis Event.

To set up Amazon Kinesis as an event source for AWS Lambda, you first create or update an Amazon Kinesis stream and select custom values for the event source parameters. The following parameters will impact Amazon Kinesis's polling behavior:

  • batchSize: Determines how many records are buffered before invoking your lambda function - could impact your function's memory usage (if too high) and ability to keep up with incoming data velocity (if too low).
  • startingPosition: Will determine where to begin consumption, either at the most recent ('LATEST') record or the oldest record ('TRIM_HORIZON'). 'TRIM_HORIZON' will ensure you process all available data, while 'LATEST' will ignore all records that arrived prior to attaching the event source.
import aws_cdk.aws_lambda as lambda
import aws_cdk.aws_kinesis as kinesis
from aws_cdk.aws_lambda_event_sources import KinesisEventSource

stream = kinesis.Stream(self, "MyStream")

my_function.add_event_source(KinesisEventSource(stream,
    batch_size=100, # default
    starting_position=lambda.StartingPosition.TrimHorizon
))

Roadmap

Eventually, this module will support all the event sources described under Supported Event Sources in the AWS Lambda Developer Guide.

AWS Lambda Construct Library---

Stability: Stable


This construct library allows you to define AWS Lambda Functions.

import aws_cdk.aws_lambda as lambda
import path as path

fn = lambda.Function(self, "MyFunction",
    runtime=lambda.Runtime.NODEJS_10_X,
    handler="index.handler",
    code=lambda.Code.from_asset(path.join(__dirname, "lambda-handler"))
)

Handler Code

The lambda.Code class includes static convenience methods for various types of runtime code.

  • lambda.Code.fromBucket(bucket, key[, objectVersion]) - specify an S3 object that contains the archive of your runtime code.
  • lambda.Code.fromInline(code) - inline the handler code as a string. This is limited to supported runtimes and the code cannot exceed 4KiB (see the sketch after this list).
  • lambda.Code.fromAsset(path) - specify a directory or a .zip file in the local filesystem which will be zipped and uploaded to S3 before deployment.
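
As a rough sketch, the inline and S3 variants look like this (the runtime, handler string and the bucket/"handler.zip" key are illustrative placeholders, not part of the original examples):

# Inline code; only for runtimes that support it, and limited to 4KiB
inline_fn = lambda.Function(self, "InlineFunction",
    runtime=lambda.Runtime.NODEJS_10_X,
    handler="index.handler",
    code=lambda.Code.from_inline("exports.handler = function(event, ctx, cb) { return cb(null, \"hi\"); }")
)

# Code packaged as a .zip object in an existing S3 bucket
bucket_fn = lambda.Function(self, "BucketFunction",
    runtime=lambda.Runtime.NODEJS_10_X,
    handler="index.handler",
    code=lambda.Code.from_bucket(bucket, "handler.zip")
)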

The following example shows how to define a Python function and deploy the code from the local directory my-lambda-handler to it:

lambda.Function(self, "MyLambda",
    code=lambda.Code.from_asset(path.join(__dirname, "my-lambda-handler")),
    handler="index.main",
    runtime=lambda.Runtime.PYTHON_3_6
)

When deploying a stack that contains this code, the directory will be zip archived and then uploaded to an S3 bucket, then the exact location of the S3 objects will be passed when the stack is deployed.

During synthesis, the CDK expects to find a directory on disk at the asset directory specified. Note that we are referencing the asset directory relative to our CDK project directory. This is especially important when we want to share this construct through a library. Different programming languages will have different techniques for bundling resources into libraries.

Layers

The lambda.LayerVersion class can be used to define Lambda layers and manage granting permissions to other AWS accounts or organizations.

layer = lambda.LayerVersion(stack, "MyLayer",
    code=lambda.Code.from_asset(path.join(__dirname, "layer-code")),
    compatible_runtimes=[lambda.Runtime.NODEJS_8_10],
    license="Apache-2.0",
    description="A layer to test the L2 construct"
)

# To grant usage by other AWS accounts
layer.add_permission("remote-account-grant", account_id=aws_account_id)

# To grant usage to all accounts in some AWS Organization
# layer.grantUsage({ accountId: '*', organizationId });

lambda.Function(stack, "MyLayeredLambda",
    code=lambda.InlineCode("foo"),
    handler="index.handler",
    runtime=lambda.Runtime.NODEJS_8_10,
    layers=[layer]
)

Event Rule Target

You can use an AWS Lambda function as a target for an Amazon CloudWatch event rule:

import aws_cdk.aws_events_targets as targets
rule.add_target(targets.LambdaFunction(my_function))

Event Sources

AWS Lambda supports a variety of event sources.

In most cases, it is possible to trigger a function as a result of an event by using one of the add<Event>Notification methods on the source construct. For example, the s3.Bucket construct has an onEvent method which can be used to trigger a Lambda when an event, such as PutObject, occurs on an S3 bucket.

An alternative way to add event sources to a function is to use function.addEventSource(source). This method accepts an IEventSource object. The module @aws-cdk/aws-lambda-event-sources includes classes for the various event sources supported by AWS Lambda.

For example, the following code adds an SQS queue as an event source for a function:

from aws_cdk.aws_lambda_event_sources import SqsEventSource
fn.add_event_source(SqsEventSource(queue))

The following code adds an S3 bucket notification as an event source:

from aws_cdk.aws_lambda_event_sources import S3EventSource
fn.add_event_source(S3EventSource(bucket,
    events=[s3.EventType.OBJECT_CREATED, s3.EventType.OBJECT_DELETED],
    filters=[{"prefix": "subdir/"}]
))

See the documentation for the @aws-cdk/aws-lambda-event-sources module for more details.

Lambda with DLQ

A dead-letter queue can be automatically created for a Lambda function by setting the deadLetterQueueEnabled: true configuration.

import aws_cdk.aws_lambda as lambda

fn = lambda.Function(self, "MyFunction",
    runtime=lambda.Runtime.NODEJS_8_10,
    handler="index.handler",
    code=lambda.Code.from_inline("exports.handler = function(event, ctx, cb) { return cb(null, \"hi\"); }"),
    dead_letter_queue_enabled=True
)

It is also possible to provide a dead-letter queue instead of getting a new queue created:

import aws_cdk.aws_lambda as lambda
import aws_cdk.aws_sqs as sqs

dlq = sqs.Queue(self, "DLQ")
fn = lambda.Function(self, "MyFunction",
    runtime=lambda.Runtime.NODEJS_8_10,
    handler="index.handler",
    code=lambda.Code.from_inline("exports.handler = function(event, ctx, cb) { return cb(null, \"hi\"); }"),
    dead_letter_queue=dlq
)

See the AWS documentation to learn more about AWS Lambdas and DLQs.

Lambda with X-Ray Tracing

import aws_cdk.aws_lambda as lambda

fn = lambda.Function(self, "MyFunction",
    runtime=lambda.Runtime.NODEJS_8_10,
    handler="index.handler",
    code=lambda.Code.from_inline("exports.handler = function(event, ctx, cb) { return cb(null, \"hi\"); }"),
    tracing=lambda.Tracing.ACTIVE
)

See the AWS documentation to learn more about AWS Lambda's X-Ray support.

Lambda with Reserved Concurrent Executions

import aws_cdk.aws_lambda as lambda

fn = lambda.Function(self, "MyFunction",
    runtime=lambda.Runtime.NODEJS_8_10,
    handler="index.handler",
    code=lambda.Code.from_inline("exports.handler = function(event, ctx, cb) { return cb(null, \"hi\"); }"),
    reserved_concurrent_executions=100
)

See the AWS documentation managing concurrency.

Amazon CloudWatch Logs Construct Library---

Stability: Stable


This library supplies constructs for working with CloudWatch Logs.

Log Groups/Streams

The basic unit of CloudWatch is a Log Group. Every log group typically has the same kind of data logged to it, in the same format. If there are multiple applications or services logging into the Log Group, each of them creates a new Log Stream.

Every log operation creates a "log event", which can consist of a simple string or a single-line JSON object. JSON objects have the advantage that they afford more filtering abilities (see below).

The only configurable attribute for log streams is the retention period, which determines how long events are kept in the log stream before they expire and are deleted.

The default retention period if not supplied is 2 years, but it can be set to one of the values in the RetentionDays enum to configure a different retention period (including infinite retention).

# Configure log group for short retention
log_group = LogGroup(stack, "LogGroup",
    retention=RetentionDays.ONE_WEEK
)
# Configure log group for infinite retention
log_group = LogGroup(stack, "LogGroup",
    retention=Infinity
)

Subscriptions and Destinations

Log events matching a particular filter can be sent to either a Lambda function or a Kinesis stream.

If the Kinesis stream lives in a different account, a CrossAccountDestination object needs to be added in the destination account which will act as a proxy for the remote Kinesis stream. This object is automatically created for you if you use the CDK Kinesis library.

Create a SubscriptionFilter, initialize it with an appropriate Pattern (see below) and supply the intended destination:

fn = lambda.Function(self, "Lambda", ...)
log_group = LogGroup(self, "LogGroup", ...)

SubscriptionFilter(self, "Subscription",
    log_group=log_group,
    destination=fn,
    filter_pattern=FilterPattern.all_terms("ERROR", "MainThread")
)

Metric Filters

CloudWatch Logs can extract and emit metrics based on a textual log stream. Depending on your needs, this may be a more convenient way of generating metrics for your application than making calls to CloudWatch Metrics yourself.

A MetricFilter either emits a fixed number every time it sees a log event matching a particular pattern (see below), or extracts a number from the log event and uses that as the metric value.

Example:

MetricFilter(self, "MetricFilter",
    log_group=log_group,
    metric_namespace="MyApp",
    metric_name="Latency",
    filter_pattern=FilterPattern.exists("$.latency"),
    metric_value="$.latency"
)

Remember that if you want to use a value from the log event as the metric value, you must mention it in your pattern somewhere.

A very simple MetricFilter can be created by using the logGroup.extractMetric() helper function:

log_group.extract_metric("$.jsonField", "Namespace", "MetricName")

This will extract the value of jsonField wherever it occurs in JSON-structured log records in the LogGroup, and emit it to CloudWatch Metrics under the name Namespace/MetricName.

Patterns

Patterns describe which log events match a subscription or metric filter. There are three types of patterns:

  • Text patterns
  • JSON patterns
  • Space-delimited table patterns

All patterns are constructed by using static functions on the FilterPattern class.

In addition to the patterns above, the following special patterns exist:

  • FilterPattern.allEvents(): matches all log events.
  • FilterPattern.literal(string): if you already know what pattern expression to use, this function takes a string and will use that as the log pattern. For more information, see the Filter and Pattern Syntax.
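
For example, both can be used directly when constructing a filter (a minimal sketch):

# Match every log event
everything = FilterPattern.all_events()

# Pass a raw CloudWatch Logs pattern expression through unchanged
errors = FilterPattern.literal("\"ERROR\"")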

Text Patterns

Text patterns match if the literal strings appear in the text form of the log line.

  • FilterPattern.allTerms(term, term, ...): matches if all of the given terms (substrings) appear in the log event.
  • FilterPattern.anyTerm(term, term, ...): matches if any of the given terms (substrings) appear in the log event.
  • FilterPattern.anyGroup([term, term, ...], [term, term, ...], ...): matches if all of the terms in any of the groups (specified as arrays) match. This is an OR match.

Examples:

# Search for lines that contain both "ERROR" and "MainThread"
pattern1 = FilterPattern.all_terms("ERROR", "MainThread")

# Search for lines that either contain both "ERROR" and "MainThread", or
# both "WARN" and "Deadlock".
pattern2 = FilterPattern.any_group(["ERROR", "MainThread"], ["WARN", "Deadlock"])

JSON Patterns

JSON patterns apply if the log event is the JSON representation of an object (without any other characters, so it cannot include a prefix such as timestamp or log level). JSON patterns can make comparisons on the values inside the fields.

  • Strings: the comparison operators allowed for strings are = and !=. String values can start or end with a * wildcard.
  • Numbers: the comparison operators allowed for numbers are =, !=, <, <=, >, >=.

Fields in the JSON structure are identified by denoting the complete object as $ and then descending into it, such as $.field or $.list[0].field.

  • FilterPattern.stringValue(field, comparison, string): matches if the given field compares as indicated with the given string value.
  • FilterPattern.numberValue(field, comparison, number): matches if the given field compares as indicated with the given numerical value.
  • FilterPattern.isNull(field): matches if the given field exists and has the value null.
  • FilterPattern.notExists(field): matches if the given field is not in the JSON structure.
  • FilterPattern.exists(field): matches if the given field is in the JSON structure.
  • FilterPattern.booleanValue(field, boolean): matches if the given field is exactly the given boolean value.
  • FilterPattern.all(jsonPattern, jsonPattern, ...): matches if all of the given JSON patterns match. This makes an AND combination of the given patterns.
  • FilterPattern.any(jsonPattern, jsonPattern, ...): matches if any of the given JSON patterns match. This makes an OR combination of the given patterns.

Example:

# Search for all events where the component field is equal to
# "HttpServer" and either error is true or the latency is higher
# than 1000.
pattern = FilterPattern.all(
    FilterPattern.string_value("$.component", "=", "HttpServer"),
    FilterPattern.any(
        FilterPattern.boolean_value("$.error", True),
        FilterPattern.number_value("$.latency", ">", 1000)))

Space-delimited table patterns

If the log events are rows of a space-delimited table, this pattern can be used to identify the columns in that structure and add conditions on any of them. The canonical example where you would apply this type of pattern is Apache server logs.

Text that is surrounded by "..." quotes or [...] square brackets will be treated as one column.

  • FilterPattern.spaceDelimited(column, column, ...): construct a SpaceDelimitedTextPattern object with the indicated columns. The columns map one-by-one to the columns found in the log event. The string "..." may be used to specify an arbitrary number of unnamed columns anywhere in the name list (but may only be specified once).

After constructing a SpaceDelimitedTextPattern, you can use the following two members to add restrictions:

  • pattern.whereString(field, comparison, string): add a string condition. The rules are the same as for JSON patterns.
  • pattern.whereNumber(field, comparison, number): add a numerical condition. The rules are the same as for JSON patterns.

Multiple restrictions can be added on the same column; they must all apply.

Example:

# Search for all events where the component is "HttpServer" and the
# result code is not equal to 200.
pattern = FilterPattern.space_delimited("time", "component", "...", "result_code", "latency").where_string("component", "=", "HttpServer").where_number("result_code", "!=", 200)

Amazon Relational Database Service Construct Library---

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.


Starting a Clustered Database

To set up a clustered database (like Aurora), define a DatabaseCluster. You must always launch a database in a VPC. Use the vpcSubnets attribute to control whether your instances will be launched privately or publicly:

cluster = DatabaseCluster(self, "Database",
    engine=DatabaseClusterEngine.AURORA,
    master_user={
        "username": "admin"
    },
    instance_props={
        "instance_type": ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.SMALL),
        "vpc_subnets": {
            "subnet_type": ec2.SubnetType.PUBLIC
        },
        "vpc": vpc
    }
)

By default, the master password will be generated and stored in AWS Secrets Manager.

Your cluster will be empty by default. To add a default database upon construction, specify the defaultDatabaseName attribute.
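
For example, a sketch of the cluster above with a default database added (the database name is illustrative):

cluster = DatabaseCluster(self, "Database",
    engine=DatabaseClusterEngine.AURORA,
    default_database_name="demos",
    master_user={
        "username": "admin"
    },
    instance_props={
        "instance_type": ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.SMALL),
        "vpc": vpc
    }
)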

Starting an Instance Database

To set up an instance database, define a DatabaseInstance. You must always launch a database in a VPC. Use the vpcSubnets attribute to control whether your instances will be launched privately or publicly:

instance = DatabaseInstance(stack, "Instance",
    engine=rds.DatabaseInstanceEngine.ORACLE_SE1,
    instance_class=ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.SMALL),
    master_username="syscdk",
    vpc=vpc
)

By default, the master password will be generated and stored in AWS Secrets Manager.

Use DatabaseInstanceFromSnapshot and DatabaseInstanceReadReplica to create an instance from snapshot or a source database respectively:

DatabaseInstanceFromSnapshot(stack, "Instance",
    snapshot_identifier="my-snapshot",
    engine=rds.DatabaseInstanceEngine.POSTGRES,
    instance_class=ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.LARGE),
    vpc=vpc
)

DatabaseInstanceReadReplica(stack, "ReadReplica",
    source_database_instance=source_instance,
    engine=rds.DatabaseInstanceEngine.POSTGRES,
    instance_class=ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.LARGE),
    vpc=vpc
)

Creating a "production" Oracle database instance with option and parameter groups:

# Set open cursors with parameter group
parameter_group = rds.ParameterGroup(self, "ParameterGroup",
    family="oracle-se1-11.2",
    parameters={
        "open_cursors": "2500"
    }
)

Add XMLDB and OEM with option group

option_group = rds.OptionGroup(self, "OptionGroup",
    engine=rds.DatabaseInstanceEngine.ORACLE_SE1,
    major_engine_version="11.2",
    configurations=[{
        "name": "XMLDB"
    }, {
        "name": "OEM",
        "port": 1158,
        "vpc": vpc
    }
    ]
)

# Allow connections to OEM
option_group.option_connections.OEM.connections.allow_default_port_from_any_ipv4()

# Database instance with production values
instance = rds.DatabaseInstance(self, "Instance",
    engine=rds.DatabaseInstanceEngine.ORACLE_SE1,
    license_model=rds.LicenseModel.BRING_YOUR_OWN_LICENSE,
    instance_class=ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.MEDIUM),
    multi_az=True,
    storage_type=rds.StorageType.IO1,
    master_username="syscdk",
    vpc=vpc,
    database_name="ORCL",
    storage_encrypted=True,
    backup_retention=cdk.Duration.days(7),
    monitoring_interval=cdk.Duration.seconds(60),
    enable_performance_insights=True,
    cloudwatch_logs_exports=["trace", "audit", "alert", "listener"
    ],
    cloudwatch_logs_retention=logs.RetentionDays.ONE_MONTH,
    auto_minor_version_upgrade=False,
    option_group=option_group,
    parameter_group=parameter_group
)

# Allow connections on default port from any IPV4
instance.connections.allow_default_port_from_any_ipv4()

# Rotate the master user password every 30 days
instance.add_rotation_single_user("Rotation")

# Add alarm for high CPU
cloudwatch.Alarm(self, "HighCPU",
    metric=instance.metric_cpu_utilization(),
    threshold=90,
    evaluation_periods=1
)

# Trigger Lambda function on instance availability events
fn = lambda.Function(self, "Function",
    code=lambda.Code.from_inline("exports.handler = (event) => console.log(event);"),
    handler="index.handler",
    runtime=lambda.Runtime.NODEJS_8_10
)

availability_rule = instance.on_event("Availability", target=targets.LambdaFunction(fn))
availability_rule.add_event_pattern(
    detail={
        "EventCategories": ["availability"
        ]
    }
)

Instance events

To define Amazon CloudWatch event rules for database instances, use the onEvent method:

rule = instance.on_event("InstanceEvent", target=targets.LambdaFunction(fn))

Connecting

To control who can access the cluster or instance, use the .connections attribute. RDS databases have a default port, so you don't need to specify the port:

cluster.connections.allow_from_any_ipv4("Open to the world")

The endpoints to access your database cluster will be available as the .clusterEndpoint and .readerEndpoint attributes:

write_address = cluster.cluster_endpoint.socket_address

For an instance database:

address = instance.instance_endpoint.socket_address

Rotating master password

When the master password is generated and stored in AWS Secrets Manager, it can be rotated automatically:

cluster = rds.DatabaseCluster(stack, "Database",
    engine=rds.DatabaseClusterEngine.AURORA,
    master_user={
        "username": "admin"
    },
    instance_props={
        "instance_type": ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.SMALL),
        "vpc": vpc
    }
)

cluster.add_rotation_single_user("Rotation")

Rotation of the master password is also supported for an existing cluster:

SecretRotation(stack, "Rotation",
    secret=imported_secret,
    application=SecretRotationApplication.ORACLE_ROTATION_SINGLE_USER,
    target=imported_cluster, # or importedInstance
    vpc=imported_vpc
)

The importedSecret must be a JSON string with the following format:

{
  "engine": "<required: database engine>",
  "host": "<required: instance host name>",
  "username": "<required: username>",
  "password": "<required: password>",
  "dbname": "<optional: database name>",
  "port": "<optional: if not specified, default port will be used>"
}

Metrics

Database instances expose metrics (cloudwatch.Metric):

# The number of database connections in use (average over 5 minutes)
db_connections = instance.metric_database_connections()

# The average amount of time taken per disk I/O operation (average over 1 minute)
read_latency = instance.metric("ReadLatency", statistic="Average", period_sec=60)

Route53 Patterns for the CDK Route53 Library---

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.


This library contains commonly used patterns for Route53.

HTTPS Redirect

This construct allows creating a simple domainA -> domainB redirect using CloudFront and S3. You can specify multiple domains to be redirected.

HttpsRedirect(stack, "Redirect",
    record_names=["foo.example.com"],
    target_domain="bar.example.com",
    zone=HostedZone.from_hosted_zone_attributes(stack, "HostedZone",
        hosted_zone_id="ID",
        zone_name="example.com"
    )
)

See the documentation of @aws-cdk/aws-route53-patterns for more information.

Route53 Alias Record Targets for the CDK Route53 Library---

Stability: Stable


This library contains Route53 Alias Record targets for:

  • API Gateway custom domains

    route53.ARecord(self, "AliasRecord",
        zone=zone,
        target=route53.RecordTarget.from_alias(alias.ApiGateway(rest_api))
    )
  • CloudFront distributions

    route53.ARecord(self, "AliasRecord",
        zone=zone,
        target=route53.RecordTarget.from_alias(alias.CloudFrontTarget(distribution))
    )
  • S3 Bucket WebSite

    route53.ARecord(self, "AliasRecord",
        zone=zone,
        target=route53.RecordTarget.from_alias(alias.BucketWebsiteTarget(bucket))
    )
  • ELBv2 load balancers

    route53.ARecord(self, "AliasRecord",
        zone=zone,
        target=route53.RecordTarget.from_alias(alias.LoadBalancerTarget(elbv2))
    )
  • Classic load balancers

    route53.ARecord(self, "AliasRecord",
        zone=zone,
        target=route53.RecordTarget.from_alias(alias.ClassicLoadBalancerTarget(elb))
    )

See the documentation of @aws-cdk/aws-route53 for more information.

Amazon Route53 Construct Library---

Stability: Stable


To add a public hosted zone:

import aws_cdk.aws_route53 as route53

route53.PublicHostedZone(self, "HostedZone",
    zone_name="fully.qualified.domain.com"
)

To add a private hosted zone, use PrivateHostedZone. Note that enableDnsHostnames and enableDnsSupport must have been enabled for the VPC you're configuring for private hosted zones.

import aws_cdk.aws_ec2 as ec2
import aws_cdk.aws_route53 as route53

vpc = ec2.Vpc(self, "VPC")

zone = route53.PrivateHostedZone(self, "HostedZone",
    zone_name="fully.qualified.domain.com",
    vpc=vpc
)

Additional VPCs can be added with zone.addVpc().
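
For example (a sketch; other_vpc stands for any additional ec2.IVpc):

zone.add_vpc(other_vpc)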

Adding Records

To add a TXT record to your zone:

import aws_cdk.aws_route53 as route53

route53.TxtRecord(self, "TXTRecord",
    zone=my_zone,
    record_name="_foo", # If the name ends with a ".", it will be used as-is;
    # if it ends with a "." followed by the zone name, a trailing "." will be added automatically;
    # otherwise, a ".", the zone name, and a trailing "." will be added automatically.
    # Defaults to zone root if not specified.
    values=["Bar!", "Baz?"],
    ttl=Duration.minutes(90)
)

To add an A record to your zone:

import aws_cdk.aws_route53 as route53

route53.ARecord(self, "ARecord",
    zone=my_zone,
    target=route53.AddressRecordTarget.from_ip_addresses("1.2.3.4", "5.6.7.8")
)

To add an AAAA record pointing to a CloudFront distribution:

import aws_cdk.aws_route53 as route53
import aws_cdk.aws_route53_targets as targets

route53.AaaaRecord(self, "Alias",
    zone=my_zone,
    target=route53.AddressRecordTarget.from_alias(targets.CloudFrontTarget(distribution))
)

Constructs are available for A, AAAA, CAA, CNAME, MX, NS, SRV and TXT records.

Use the CaaAmazonRecord construct to easily restrict certificate authorities allowed to issue certificates for a domain to Amazon only.
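
For example, a minimal sketch (assuming my_zone as above; the id string is illustrative):

route53.CaaAmazonRecord(self, "AmazonOnlyCaa",
    zone=my_zone
)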

Adding records to existing hosted zones

If you know the ID and Name of a Hosted Zone, you can import it directly:

zone = HostedZone.from_hosted_zone_attributes(self, "MyZone",
    zone_name="example.com",
    hosted_zone_id="ZOJJZC49E0EPZ"
)

If you don't know the ID of a Hosted Zone, you can use the HostedZone.fromLookup to discover and import it:

HostedZone.from_lookup(self, "MyZone",
    domain_name="example.com"
)

AWS CDK Assets---

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.


Assets are local files or directories which are needed by a CDK app. A common example is a directory which contains the handler code for a Lambda function, but assets can represent any artifact that is needed for the app's operation.

When deploying a CDK app that includes constructs with assets, the CDK toolkit will first upload all the assets to S3, and only then deploy the stacks. The S3 locations of the uploaded assets will be passed in as CloudFormation Parameters to the relevant stacks.

The following example defines a directory asset which is archived as a .zip file and uploaded to S3 during deployment.

asset = assets.Asset(self, "SampleAsset",
    path=path.join(__dirname, "sample-asset-directory")
)

The following example defines a file asset, which is uploaded as-is to an S3 bucket during deployment.

asset = assets.Asset(self, "SampleAsset",
    path=path.join(__dirname, "file-asset.txt")
)

Attributes

Asset constructs expose the following deploy-time attributes:

  • s3BucketName - the name of the bucket the asset was uploaded to
  • s3ObjectKey - the S3 object key of the uploaded asset
  • s3Url - the S3 URL of the uploaded asset

In the following example, the various asset attributes are exported as stack outputs:

asset = assets.Asset(self, "SampleAsset",
    path=path.join(__dirname, "sample-asset-directory")
)

cdk.CfnOutput(self, "S3BucketName", value=asset.s3_bucket_name)
cdk.CfnOutput(self, "S3ObjectKey", value=asset.s3_object_key)
cdk.CfnOutput(self, "S3URL", value=asset.s3_url)

Permissions

IAM roles, users or groups which need to read assets at runtime should be granted IAM permissions. To do that, use the asset.grantRead(principal) method:

The following example grants an IAM group read permissions on an asset:

group = iam.Group(self, "MyUserGroup")
asset.grant_read(group)

How does it work?

When an asset is defined in a construct, a construct metadata entry aws:cdk:asset is emitted with instructions on where to find the asset and what type of packaging to perform (zip or file). Furthermore, the synthesized CloudFormation template will also include two CloudFormation parameters: one for the asset's bucket and one for the asset S3 key. Those parameters are used to reference the deploy-time values of the asset (using { Ref: "Param" }).

Then, when the stack is deployed, the toolkit will package the asset (i.e. zip the directory), calculate an MD5 hash of the contents and will render an S3 key for this asset within the toolkit's asset store. If the file doesn't exist in the asset store, it is uploaded during deployment.

The toolkit's asset store is an S3 bucket created by the toolkit for each environment the toolkit operates in (environment = account + region).

Now, when the toolkit deploys the stack, it will set the relevant CloudFormation Parameters to point to the actual bucket and key for each asset.

CloudFormation Resource Metadata

NOTE: This section is relevant for authors of AWS Resource Constructs.

In certain situations, it is desirable for tools to be able to know that a certain CloudFormation resource is using a local asset. For example, SAM CLI can be used to invoke AWS Lambda functions locally for debugging purposes.

To enable such use cases, external tools will consult a set of metadata entries on AWS CloudFormation resources:

  • aws:asset:path points to the local path of the asset.
  • aws:asset:property is the name of the resource property where the asset is used

Using these two metadata entries, tools will be able to identify that assets are used by a certain resource, and enable advanced local experiences.

To add these metadata entries to a resource, use the asset.addResourceMetadata(resource, property) method.
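
For example, a sketch (the resource variable and the "Code" property name are illustrative; use the CfnResource and property that actually consume the asset):

asset.add_resource_metadata(resource, "Code")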

See aws/aws-cdk#1432 for more details

AWS S3 Deployment Construct Library---

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.



This library allows populating an S3 bucket with the contents of a .zip file from another S3 bucket or from local disk.

The following example defines a publicly accessible S3 bucket with web hosting enabled and populates it from a local directory on disk.

website_bucket = s3.Bucket(self, "WebsiteBucket",
    website_index_document="index.html",
    public_read_access=True
)

s3deploy.BucketDeployment(self, "DeployWebsite",
    source=s3deploy.Source.asset("./website-dist"),
    destination_bucket=website_bucket,
    destination_key_prefix="web/static"
)

This is what happens under the hood:

  1. When this stack is deployed (either via cdk deploy or via CI/CD), the contents of the local website-dist directory will be archived and uploaded to an intermediary assets bucket.
  2. The BucketDeployment construct synthesizes a custom CloudFormation resource of type Custom::CDKBucketDeployment into the template. The source bucket/key is set to point to the assets bucket.
  3. The custom resource downloads the .zip archive, extracts it and issues aws s3 sync --delete against the destination bucket (in this case websiteBucket).

Supported sources

The following source types are supported for bucket deployments:

  • Local .zip file: s3deploy.Source.asset('/path/to/local/file.zip')
  • Local directory: s3deploy.Source.asset('/path/to/local/directory')
  • Another bucket: s3deploy.Source.bucket(bucket, zipObjectKey)

Retain on Delete

By default, the contents of the destination bucket will be deleted when the BucketDeployment resource is removed from the stack or when the destination is changed. You can use the option retainOnDelete: true to disable this behavior, in which case the contents will be retained.
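
A minimal sketch, reusing the website_bucket from the example above (the construct ID is illustrative):

s3deploy.BucketDeployment(self, "DeployWebsiteRetained",
    source=s3deploy.Source.asset("./website-dist"),
    destination_bucket=website_bucket,
    retain_on_delete=True
)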

CloudFront Invalidation

You can provide a CloudFront distribution and optional paths to invalidate after the bucket deployment finishes.

bucket = s3.Bucket(self, "Destination")

# INCORRECT
distribution = cloudfront.CloudFrontWebDistribution(self, "Distribution",
    origin_configs=[{
        "s3_origin_source": {
            "s3_bucket_source": bucket
        },
        "behaviors": [{"is_default_behavior": True}]
    }
    ]
)

s3deploy.BucketDeployment(self, "DeployWithInvalidation",
    source=s3deploy.Source.asset("./website-dist"),
    destination_bucket=bucket,
    distribution=distribution,
    distribution_paths=["/images/*.png"]
)

Notes

  • This library uses an AWS CloudFormation custom resource which is about 10MiB in size. The code of this resource is bundled with this library.
  • AWS Lambda execution time is limited to 15 minutes, which limits the amount of data that can be deployed into the bucket in a single deployment.
  • When the BucketDeployment is removed from the stack, the contents are retained in the destination bucket (#952).
  • Bucket deployment only happens during stack create/update. This means that if you wish to update the contents of the destination, you will need to change the source S3 key (or bucket), so that the resource will be updated. This is in line with best practices. If you use local disk assets, this will happen automatically whenever you modify the asset, since the S3 key is based on a hash of the asset contents.

Development

The custom resource is implemented in Python 3.6 in order to be able to leverage the AWS CLI for aws s3 sync. The code is under lambda/src and unit tests are under lambda/test.

This package requires Python 3.6 during build time in order to create the custom resource Lambda bundle and test it. It also relies on a few bash scripts, so might be tricky to build on Windows.

Roadmap

  • Support "progressive" mode (no --delete) (#953)
  • Support "blue/green" deployments (#954)

Amazon S3 Construct Library---

Stability: Stable


Define an unencrypted S3 bucket.

Bucket(self, "MyFirstBucket")

Bucket constructs expose the following deploy-time attributes:

  • bucketArn - the ARN of the bucket (i.e. arn:aws:s3:::bucket_name)
  • bucketName - the name of the bucket (i.e. bucket_name)
  • bucketWebsiteUrl - the Website URL of the bucket (i.e. http://bucket_name.s3-website-us-west-1.amazonaws.com)
  • bucketDomainName - the URL of the bucket (i.e. bucket_name.s3.amazonaws.com)
  • bucketDualStackDomainName - the dual-stack URL of the bucket (i.e. bucket_name.s3.dualstack.eu-west-1.amazonaws.com)
  • bucketRegionalDomainName - the regional URL of the bucket (i.e. bucket_name.s3.eu-west-1.amazonaws.com)
  • arnForObjects(pattern) - the ARN of an object or objects within the bucket (i.e. arn:aws:s3:::bucket_name/exampleobject.png or arn:aws:s3:::bucket_name/Development/*)
  • urlForObject(key) - the URL of an object within the bucket (i.e. https://s3.cn-north-1.amazonaws.com.cn/china-bucket/mykey)

Encryption

Define a KMS-encrypted bucket:

bucket = Bucket(self, "MyUnencryptedBucket",
    encryption=BucketEncryption.KMS
)

# you can access the encryption key:
assert isinstance(bucket.encryption_key, kms.Key)

You can also supply your own key:

my_kms_key = kms.Key(self, "MyKey")

bucket = Bucket(self, "MyEncryptedBucket",
    encryption=BucketEncryption.KMS,
    encryption_key=my_kms_key
)

assert bucket.encryption_key == my_kms_key

Use BucketEncryption.KMS_MANAGED to use the S3-managed KMS key:

bucket = Bucket(self, "Buck",
    encryption=BucketEncryption.KMS_MANAGED
)

assert bucket.encryption_key is None

Permissions

A bucket policy will be automatically created for the bucket upon the first call to addToResourcePolicy(statement):

bucket = Bucket(self, "MyBucket")
bucket.add_to_resource_policy(iam.PolicyStatement(
    actions=["s3:GetObject"],
    resources=[bucket.arn_for_objects("file.txt")],
    principals=[iam.AccountRootPrincipal()]
))

Most of the time, you won't have to manipulate the bucket policy directly. Instead, buckets have "grant" methods that can be called to give prepackaged sets of permissions to other resources. For example:

# "lambda" is a reserved word in Python, so alias the module:
# import aws_cdk.aws_lambda as lambda_
fn = lambda_.Function(self, "Lambda", ...)

bucket = Bucket(self, "MyBucket")
bucket.grant_read_write(fn)

This will give the Lambda's execution role permissions to read and write from the bucket.

Sharing buckets between stacks

To use a bucket in a different stack in the same CDK application, pass the object to the other stack:

#
# Stack that defines the bucket
#
class Producer(cdk.Stack):

    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)

        bucket = s3.Bucket(self, "MyBucket",
            removal_policy=cdk.RemovalPolicy.DESTROY
        )
        self.my_bucket = bucket

#
# Stack that consumes the bucket
#
class Consumer(cdk.Stack):
    def __init__(self, scope, id, *, user_bucket, **kwargs):
        super().__init__(scope, id, **kwargs)

        user = iam.User(self, "MyUser")
        user_bucket.grant_read_write(user)

producer = Producer(app, "ProducerStack")
Consumer(app, "ConsumerStack", user_bucket=producer.my_bucket)

Importing existing buckets

To import an existing bucket into your CDK application, use the Bucket.fromBucketAttributes factory method. This method accepts BucketAttributes which describes the properties of an already existing bucket:

bucket = Bucket.from_bucket_attributes(self, "ImportedBucket",
    bucket_arn="arn:aws:s3:::my-bucket"
)

# now you can just call methods on the bucket
bucket.grant_read_write(user)

Alternatively, short-hand factories are available as Bucket.fromBucketName and Bucket.fromBucketArn, which will derive all bucket attributes from the bucket name or ARN respectively:

by_name = Bucket.from_bucket_name(self, "BucketByName", "my-bucket")
by_arn = Bucket.from_bucket_arn(self, "BucketByArn", "arn:aws:s3:::my-bucket")

Bucket Notifications

The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket as described under S3 Bucket Notifications of the S3 Developer Guide.

To subscribe for bucket notifications, use the bucket.addEventNotification method. The bucket.addObjectCreatedNotification and bucket.addObjectRemovedNotification can also be used for these common use cases.

The following example will subscribe an SNS topic to be notified of all s3:ObjectCreated:* events:

import aws_cdk.aws_s3_notifications as s3n

my_topic = sns.Topic(self, "MyTopic")
bucket.add_event_notification(s3.EventType.OBJECT_CREATED, s3n.SnsDestination(my_topic))

This call will also ensure that the topic policy can accept notifications for this specific bucket.
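
The shorthand helpers mentioned above can be used for the same purpose. A minimal sketch:

bucket.add_object_created_notification(s3n.SnsDestination(my_topic))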

Supported S3 notification targets are exposed by the @aws-cdk/aws-s3-notifications package.

It is also possible to specify S3 object key filters when subscribing. The following example will notify myQueue when objects with the foo/ prefix and the .jpg suffix are removed from the bucket.

bucket.add_event_notification(s3.EventType.OBJECT_REMOVED,
    s3n.SqsDestination(my_queue), prefix="foo/", suffix=".jpg")

Block Public Access

Use blockPublicAccess to specify block public access settings on the bucket.

Enable all block public access settings:

bucket = Bucket(self, "MyBlockedBucket",
    block_public_access=BlockPublicAccess.BLOCK_ALL
)

Block and ignore public ACLs:

bucket = Bucket(self, "MyBlockedBucket",
    block_public_access=BlockPublicAccess.BLOCK_ACLS
)

Alternatively, specify the settings manually:

bucket = Bucket(self, "MyBlockedBucket",
    block_public_access=BlockPublicAccess(block_public_policy=True)
)

When blockPublicPolicy is set to true, grantPublicRead() throws an error.

Website redirection

You can use the following two properties to specify the bucket redirection policy. Please note that they cannot both be applied to the same bucket.

Static redirection

You can statically redirect to a given bucket URL or any other host name with websiteRedirect:

bucket = Bucket(self, "MyRedirectedBucket",
    website_redirect={"host_name": "www.example.com"}
)

Routing rules

Alternatively, you can also define multiple websiteRoutingRules, to define complex, conditional redirections:

bucket = Bucket(self, "MyRedirectedBucket",
    website_routing_rules=[RoutingRule(
        host_name="www.example.com",
        http_redirect_code="302",
        protocol=RedirectProtocol.HTTPS,
        replace_key=ReplaceKey.prefix_with("test/"),
        condition=RoutingRuleCondition(
            http_error_code_returned_equals="200",
            key_prefix_equals="prefix"
        )
    )]
)

AWS Serverless Application Model Construct Library---

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.


This module includes low-level constructs that synthesize into AWS::Serverless resources.

import aws_cdk.aws_sam as sam

Related

The @aws-cdk/aws-apigateway and @aws-cdk/aws-lambda modules include constructs that can be used to work with Amazon API Gateway and AWS Lambda.

AWS Secrets Manager Construct Library---

Stability: Stable


import aws_cdk.aws_secretsmanager as secretsmanager

Create a new Secret in a Stack

In order to have SecretsManager generate a new secret value automatically, you can get started with the following:

# Default secret
secret = secretsmanager.Secret(self, "Secret")
secret.grant_read(role)

iam.User(self, "User",
    password=secret.secret_value
)

# Templated secret (requires "import json")
templated_secret = secretsmanager.Secret(self, "TemplatedSecret",
    generate_secret_string=secretsmanager.SecretStringGenerator(
        secret_string_template=json.dumps({"username": "user"}),
        generate_string_key="password"
    )
)

iam.User(self, "OtherUser",
    user_name=templated_secret.secret_value_from_json("username").to_string(),
    password=templated_secret.secret_value_from_json("password")
)

The Secret construct does not allow specifying the SecretString property of the AWS::SecretsManager::Secret resource (as this will almost always lead to the secret being surfaced in plain text and possibly committed to your source control).

If you need to use a pre-existing secret, the recommended way is to manually provision the secret in AWS SecretsManager and use the Secret.fromSecretArn or Secret.fromSecretAttributes method to make it available in your CDK Application:

secret = secretsmanager.Secret.from_secret_attributes(scope, "ImportedSecret",
    secret_arn="arn:aws:secretsmanager:<region>:<account-id-number>:secret:<secret-name>-<random-6-characters>",
    # If the secret is encrypted using a KMS-hosted CMK, either import or reference that key:
    encryption_key=encryption_key
)

SecretsManager secret values can only be used in a select set of properties. For the list of properties, see the CloudFormation Dynamic References documentation.

Rotating a Secret

A rotation schedule can be added to a Secret:

fn = lambda_.Function(...)  # "lambda" is reserved in Python; assumes import aws_cdk.aws_lambda as lambda_
secret = secretsmanager.Secret(self, "Secret")

secret.add_rotation_schedule("RotationSchedule",
    rotation_lambda=fn,
    automatically_after=Duration.days(15)
)

See Overview of the Lambda Rotation Function on how to implement a Lambda Rotation Function.

For RDS credentials rotation, see aws-rds.

Amazon ECS Service Discovery Construct Library---

Stability: Stable


This module is part of the AWS Cloud Development Kit project.

This package contains constructs for working with AWS Cloud Map

AWS Cloud Map is a fully managed service that you can use to create and maintain a map of the backend services and resources that your applications depend on.

For further information on AWS Cloud Map, see the AWS Cloud Map documentation

HTTP Namespace Example

The following example creates an AWS Cloud Map namespace that supports API calls, creates a service in that namespace, and registers an instance to it:

import aws_cdk.core as cdk
import aws_cdk.aws_servicediscovery as servicediscovery

app = cdk.App()
stack = cdk.Stack(app, "aws-servicediscovery-integ")

namespace = servicediscovery.HttpNamespace(stack, "MyNamespace",
    name="covfefe"
)

service1 = namespace.create_service("NonIpService",
    description="service registering non-ip instances"
)

service1.register_non_ip_instance("NonIpInstance",
    custom_attributes={"arn": "arn:aws:s3:::mybucket"}
)

service2 = namespace.create_service("IpService",
    description="service registering ip instances",
    health_check={
        "type": servicediscovery.HealthCheckType.HTTP,
        "resource_path": "/check"
    }
)

service2.register_ip_instance("IpInstance",
    ipv4="54.239.25.192"
)

app.synth()

Private DNS Namespace Example

The following example creates an AWS Cloud Map namespace that supports both API calls and DNS queries within a vpc, creates a service in that namespace, and registers a loadbalancer as an instance:

import aws_cdk.aws_ec2 as ec2
import aws_cdk.aws_elasticloadbalancingv2 as elbv2
import aws_cdk.core as cdk
import aws_cdk.aws_servicediscovery as servicediscovery

app = cdk.App()
stack = cdk.Stack(app, "aws-servicediscovery-integ")

vpc = ec2.Vpc(stack, "Vpc", max_azs=2)

namespace = servicediscovery.PrivateDnsNamespace(stack, "Namespace",
    name="boobar.com",
    vpc=vpc
)

service = namespace.create_service("Service",
    dns_record_type=servicediscovery.DnsRecordType.A_AAAA,
    dns_ttl=cdk.Duration.seconds(30),
    load_balancer=True
)

loadbalancer = elbv2.ApplicationLoadBalancer(stack, "LB", vpc=vpc, internet_facing=True)

service.register_load_balancer("Loadbalancer", loadbalancer)

app.synth()

Public DNS Namespace Example

The following example creates an AWS Cloud Map namespace that supports both API calls and public DNS queries, creates a service in that namespace, and registers an IP instance:

import aws_cdk.core as cdk
import aws_cdk.aws_servicediscovery as servicediscovery

app = cdk.App()
stack = cdk.Stack(app, "aws-servicediscovery-integ")

namespace = servicediscovery.PublicDnsNamespace(stack, "Namespace",
    name="foobar.com"
)

service = namespace.create_service("Service",
    name="foo",
    dns_record_type=servicediscovery.DnsRecordType.A,
    dns_ttl=cdk.Duration.seconds(30),
    health_check=servicediscovery.HealthCheckConfig(
        type=servicediscovery.HealthCheckType.HTTPS,
        resource_path="/healthcheck",
        failure_threshold=2
    )
)

service.register_ip_instance("IpInstance",
    ipv4="54.239.25.192",
    port=443
)

app.synth()

For DNS namespaces, you can also register instances to services with CNAME records:

import aws_cdk.core as cdk
import aws_cdk.aws_servicediscovery as servicediscovery

app = cdk.App()
stack = cdk.Stack(app, "aws-servicediscovery-integ")

namespace = servicediscovery.PublicDnsNamespace(stack, "Namespace",
    name="foobar.com"
)

service = namespace.create_service("Service",
    name="foo",
    dns_record_type=servicediscovery.DnsRecordType.CNAME,
    dns_ttl=cdk.Duration.seconds(30)
)

service.register_cname_instance("CnameInstance",
    instance_cname="service.pizza"
)

app.synth()

Amazon Simple Email Service Construct Library---

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.


This module is part of the AWS Cloud Development Kit project.

Email receiving

Create a receipt rule set with rules and actions:

bucket = s3.Bucket(stack, "Bucket")
topic = sns.Topic(stack, "Topic")

ses.ReceiptRuleSet(stack, "RuleSet",
    rules=[
        ses.ReceiptRuleOptions(
            recipients=["hello@aws.com"],
            actions=[
                ses.ReceiptRuleAddHeaderAction(
                    name="X-Special-Header",
                    value="aws"
                ),
                ses.ReceiptRuleS3Action(
                    bucket=bucket,
                    object_key_prefix="emails/",
                    topic=topic
                )
            ]
        ),
        ses.ReceiptRuleOptions(
            recipients=["aws.com"],
            actions=[
                ses.ReceiptRuleSnsAction(
                    topic=topic
                )
            ]
        )
    ]
)

Alternatively, rules can be added to a rule set:

rule_set = ses.ReceiptRuleSet(self, "RuleSet")

aws_rule = rule_set.add_rule("Aws",
    recipients=["aws.com"]
)

And actions to rules:

aws_rule.add_action(
    ses.ReceiptRuleSnsAction(
        topic=topic
    ))

When using addRule, the new rule is added after the last added rule unless after is specified.
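
For example, a minimal sketch that places a new rule right after an existing one by passing after (rule IDs and recipients are illustrative):

first_rule = rule_set.add_rule("First")

second_rule = rule_set.add_rule("Second",
    recipients=["second@aws.com"],
    after=first_rule
)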

More actions

Drop spams

A rule to drop spam can be added by setting dropSpam to true:

ses.ReceiptRuleSet(self, "RuleSet",
    drop_spam=True
)

This will add a rule at the top of the rule set with a Lambda action that stops processing messages that have at least one spam indicator. See Lambda Function Examples.

Receipt filter

Create a receipt filter:

ses.ReceiptFilter(self, "Filter",
    ip="1.2.3.4/16"
)

A white list filter is also available:

ses.WhiteListReceiptFilter(self, "WhiteList",
    ips=["10.0.0.0/16", "1.2.3.4/16"
    ]
)

This will first create a block all filter and then create allow filters for the listed ip addresses.

Amazon Simple Notification Service Construct Library---

Stability: Stable


Add an SNS Topic to your stack:

import aws_cdk.aws_sns as sns

topic = sns.Topic(self, "Topic",
    display_name="Customer subscription topic"
)

Subscriptions

Various subscriptions can be added to the topic by calling the .addSubscription(...) method on the topic. It accepts a subscription object, default implementations of which can be found in the @aws-cdk/aws-sns-subscriptions package:

Add an HTTPS Subscription to your topic:

import aws_cdk.aws_sns_subscriptions as subs

my_topic = sns.Topic(self, "MyTopic")

my_topic.add_subscription(subs.UrlSubscription("https://foobar.com/"))

Subscribe a queue to the topic:

my_topic.add_subscription(subs.SqsSubscription(queue))

Note that subscriptions of queues in different accounts need to be manually confirmed by reading the initial message from the queue and visiting the link found in it.

Filter policy

A filter policy can be specified when subscribing an endpoint to a topic.

Example with a Lambda subscription:

my_topic = sns.Topic(self, "MyTopic")
# "lambda" is a reserved word in Python; assumes "import aws_cdk.aws_lambda as lambda_"
fn = lambda_.Function(self, "Function", ...)

# Lambda should receive only messages matching the following conditions on attributes:
# color: 'red' or 'orange' or begins with 'bl'
# size: anything but 'small' or 'medium'
# price: between 100 and 200 or greater than 300
# store: attribute must be present
my_topic.add_subscription(subs.LambdaSubscription(fn,
    filter_policy={
        "color": sns.SubscriptionFilter.string_filter(
            whitelist=["red", "orange"],
            match_prefixes=["bl"]
        ),
        "size": sns.SubscriptionFilter.string_filter(
            blacklist=["small", "medium"]
        ),
        "price": sns.SubscriptionFilter.numeric_filter(
            between={"start": 100, "stop": 200},
            greater_than=300
        ),
        "store": sns.SubscriptionFilter.exists_filter()
    }
))

CloudWatch Event Rule Target

SNS topics can be used as targets for CloudWatch event rules.

Use the @aws-cdk/aws-events-targets.SnsTopicTarget:

import aws_cdk.aws_events_targets as targets

code_commit_repository.on_commit(targets.SnsTopicTarget(my_topic))

This will result in adding a target to the event rule and will also modify the topic resource policy to allow CloudWatch events to publish to the topic.

Amazon Simple Queue Service Construct Library---

Stability: Stable


Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.

Installation

Import to your project:

import aws_cdk.aws_sqs as sqs

Basic usage

Here's how to add a basic queue to your application:

sqs.Queue(self, "Queue")

Encryption

If you want to encrypt the queue contents, set the encryption property. You can have the messages encrypted with a key that SQS manages for you, or a key that you can manage yourself.

# Use managed key
sqs.Queue(self, "Queue",
    encryption=QueueEncryption.KMS_MANAGED
)

# Use custom key
my_key = kms.Key(self, "Key")

sqs.Queue(self, "Queue",
    encryption=QueueEncryption.KMS,
    encryption_master_key=my_key
)

First-In-First-Out (FIFO) queues

FIFO queues give guarantees on the order in which messages are dequeued, and have additional features in order to help guarantee exactly-once processing. For more information, see the SQS manual. Note that FIFO queues are not available in all AWS regions.

A queue can be made a FIFO queue by either setting fifo: true, giving it a name which ends in ".fifo", or enabling content-based deduplication (which requires FIFO queues).
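
For example, a minimal sketch of an explicitly configured FIFO queue (the construct ID and queue name are illustrative):

sqs.Queue(self, "FifoQueue",
    queue_name="my-queue.fifo",
    fifo=True,
    content_based_deduplication=True
)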

AWS Systems Manager Construct Library---

Stability: Stable


This module is part of the AWS Cloud Development Kit project.

Installation

Install the module:

$ pip install aws-cdk.aws-ssm

Import it into your code:

import aws_cdk.aws_ssm as ssm

Using existing SSM Parameters in your CDK app

You can reference existing SSM Parameter Store values that you want to use in your CDK app by using ssm.StringParameter.from_string_parameter_attributes (or from_secure_string_parameter_attributes for SecureString parameters):

# Retrieve the latest value of the non-secret parameter
# with name "/My/Public/Parameter".
string_value = ssm.StringParameter.from_string_parameter_attributes(self, "MyValue",
    parameter_name="/My/Public/Parameter"
).string_value

# Retrieve a specific version of the secret (SecureString) parameter.
# 'version' is always required.
secret_value = ssm.StringParameter.from_secure_string_parameter_attributes(self, "MySecureValue",
    parameter_name="/My/Secret/Parameter",
    version=5
)

Creating new SSM Parameters in your CDK app

You can create either ssm.StringParameter or ssm.StringListParameter in a CDK app. These are public (not secret) values. Parameters of type SecureString cannot be created directly from a CDK application; if you want to provision secrets automatically, use Secrets Manager Secrets (see the @aws-cdk/aws-secretsmanager package).

# Create a new SSM Parameter holding a String
param = ssm.StringParameter(stack, "StringParameter",
    # description: 'Some user-friendly description',
    # name: 'ParameterName',
    string_value="Initial parameter value"
)

# Grant read access to some Role
param.grant_read(role)

# Create a new SSM Parameter holding a StringList
list_parameter = ssm.StringListParameter(stack, "StringListParameter",
    # description: 'Some user-friendly description',
    # name: 'ParameterName',
    string_list_value=["Initial parameter value A", "Initial parameter value B"]
)

When specifying an allowedPattern, the values provided as string literals are validated against the pattern and an exception is raised if a value provided does not comply.
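
For example, a minimal sketch of a parameter whose literal value must be numeric (the name and pattern are illustrative):

ssm.StringParameter(stack, "NumericParameter",
    allowed_pattern="^\\d+$",
    string_value="42"
)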

AWS Step Functions Construct Library---

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.


The @aws-cdk/aws-stepfunctions package contains constructs for building serverless workflows using objects. Use this in conjunction with the @aws-cdk/aws-stepfunctions-tasks package, which contains classes used to call other AWS services.

Defining a workflow looks like this (for the Step Functions Job Poller example):

Python example

import aws_cdk.aws_stepfunctions as sfn
import aws_cdk.aws_stepfunctions_tasks as tasks

# "lambda" is a reserved word in Python; assumes "import aws_cdk.aws_lambda as lambda_"
submit_lambda = lambda_.Function(self, "SubmitLambda", ...)
get_status_lambda = lambda_.Function(self, "CheckLambda", ...)

submit_job = sfn.Task(self, "Submit Job",
    task=tasks.InvokeFunction(submit_lambda),
    # Put Lambda's result here in the execution's state object
    result_path="$.guid"
)

wait_x = sfn.Wait(self, "Wait X Seconds",
    duration=sfn.WaitDuration.seconds_path("$.wait_time")
)

get_status = sfn.Task(self, "Get Job Status",
    task=tasks.InvokeFunction(get_status_lambda),
    # Pass just the field named "guid" into the Lambda, put the
    # Lambda's result in a field called "status"
    input_path="$.guid",
    result_path="$.status"
)

job_failed = sfn.Fail(self, "Job Failed",
    cause="AWS Batch Job Failed",
    error="DescribeJob returned FAILED"
)

final_status = sfn.Task(self, "Get Final Job Status",
    task=tasks.InvokeFunction(get_status_lambda),
    # Use "guid" field as input, output of the Lambda becomes the
    # entire state machine output.
    input_path="$.guid"
)

definition = (submit_job
    .next(wait_x)
    .next(get_status)
    .next(sfn.Choice(self, "Job Complete?")
        .when(sfn.Condition.string_equals("$.status", "FAILED"), job_failed)
        .when(sfn.Condition.string_equals("$.status", "SUCCEEDED"), final_status)
        .otherwise(wait_x)))

sfn.StateMachine(self, "StateMachine",
    definition=definition,
    timeout=Duration.minutes(5)
)

State Machine

A stepfunctions.StateMachine is a resource that takes a state machine definition. The definition is specified by its start state, and encompasses all states reachable from the start state:

start_state = stepfunctions.Pass(self, "StartState")

stepfunctions.StateMachine(self, "StateMachine",
    definition=start_state
)

State machines execute using an IAM Role, which will automatically have all permissions added that are required to make all state machine tasks execute properly (for example, permissions to invoke any Lambda functions you add to your workflow). A role will be created by default, but you can supply an existing one as well.
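
A minimal sketch of supplying an existing role (my_role is assumed to be an iam.IRole defined elsewhere):

stepfunctions.StateMachine(self, "StateMachineWithRole",
    definition=start_state,
    role=my_role
)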

Amazon States Language

This library comes with a set of classes that model the Amazon States Language. The following State classes are supported:

  • Task
  • Pass
  • Wait
  • Choice
  • Parallel
  • Succeed
  • Fail

An arbitrary JSON object (specified at execution start) is passed from state to state and transformed during the execution of the workflow. For more information, see the States Language spec.

Task

A Task represents some work that needs to be done. The exact work to be done is determined by a class that implements IStepFunctionsTask, a collection of which can be found in the @aws-cdk/aws-stepfunctions-tasks package. A couple of the tasks available are:

  • tasks.InvokeActivity -- start an Activity (Activities represent a work queue that you poll on a compute fleet you manage yourself)
  • tasks.InvokeFunction -- invoke a Lambda function with function ARN
  • tasks.RunLambdaTask -- call Lambda as integrated service with magic ARN
  • tasks.PublishToTopic -- publish a message to an SNS topic
  • tasks.SendToQueue -- send a message to an SQS queue
  • tasks.RunEcsFargateTask / tasks.RunEcsEc2Task -- run a container task, depending on the type of capacity.
  • tasks.SagemakerTrainTask -- run a SageMaker training job
  • tasks.SagemakerTransformTask -- run a SageMaker transform job
  • tasks.StartExecution -- call StartExecution to a state machine of Step Functions

Except for tasks.InvokeActivity and tasks.InvokeFunction, the service integration pattern (integrationPattern) should be supplied as a parameter when you want to call integrated services within a Task state. The default value is FIRE_AND_FORGET.

Task parameters from the state json

Many tasks take parameters. The values for those can either be supplied directly in the workflow definition (by specifying their values), or at runtime by passing a value obtained from the static functions on Data, such as Data.stringAt().

In the latter case, the value is taken from the indicated location in the state JSON, similar to (for example) inputPath.

Lambda example - InvokeFunction

task = sfn.Task(self, "Invoke1",
    task=tasks.InvokeFunction(my_lambda),
    input_path="$.input",
    timeout=Duration.minutes(5)
)

# Add a retry policy
task.add_retry(
    interval=Duration.seconds(5),
    max_attempts=10
)

# Add an error handler
task.add_catch(error_handler_state)

# Set the next state
task.next(next_state)

Lambda example - RunLambdaTask

task = sfn.Task(stack, "Invoke2",
    task=tasks.RunLambdaTask(my_lambda,
        integration_pattern=sfn.ServiceIntegrationPattern.WAIT_FOR_TASK_TOKEN,
        payload={
            "token": sfn.Context.task_token
        }
    )
)

SNS example

import aws_cdk.aws_sns as sns

# ...

topic = sns.Topic(self, "Topic")

# Use a field from the execution data as message.
task1 = sfn.Task(self, "Publish1",
    task=tasks.PublishToTopic(topic,
        integration_pattern=sfn.ServiceIntegrationPattern.FIRE_AND_FORGET,
        message=TaskInput.from_data_at("$.state.message")
    )
)

# Combine a field from the execution data with
# a literal object.
task2 = sfn.Task(self, "Publish2",
    task=tasks.PublishToTopic(topic,
        message=TaskInput.from_object({
            "field1": "somedata",
            "field2": Data.string_at("$.field2")
        })
    )
)

SQS example

import aws_cdk.aws_sqs as sqs

# ...

queue = sqs.Queue(self, "Queue")

# Use a field from the execution data as message.
task1 = sfn.Task(self, "Send1",
    task=tasks.SendToQueue(queue,
        message_body=TaskInput.from_data_at("$.message"),
        # Only for FIFO queues
        message_group_id="1234"
    )
)

# Combine a field from the execution data with
# a literal object.
task2 = sfn.Task(self, "Send2",
    task=tasks.SendToQueue(queue,
        message_body=TaskInput.from_object({
            "field1": "somedata",
            "field2": Data.string_at("$.field2")
        }),
        # Only for FIFO queues
        message_group_id="1234"
    )
)

ECS example

import aws_cdk.aws_ecs as ecs

# See examples in ECS library for initialization of 'cluster' and 'taskDefinition'

fargate_task = tasks.RunEcsFargateTask(
    cluster=cluster,
    task_definition=task_definition,
    container_overrides=[tasks.ContainerOverride(
        container_name="TheContainer",
        environment=[tasks.TaskEnvironmentVariable(
            name="CONTAINER_INPUT",
            value=Data.string_at("$.valueFromStateData")
        )]
    )]
)

fargate_task.connections.allow_to_default_port(rds_cluster, "Read the database")

task = sfn.Task(self, "CallFargate",
    task=fargate_task
)

SageMaker Transform example

transform_job = tasks.SagemakerTransformTask(
    transform_job_name="MyTransformJob",
    model_name="MyModelName",
    role=role,
    transform_input=tasks.TransformInput(
        transform_data_source=tasks.TransformDataSource(
            s3_data_source=tasks.TransformS3DataSource(
                s3_uri="s3://inputbucket/train",
                s3_data_type=tasks.S3DataType.S3_PREFIX
            )
        )
    ),
    transform_output=tasks.TransformOutput(
        s3_output_path="s3://outputbucket/TransformJobOutputPath"
    ),
    transform_resources=tasks.TransformResources(
        instance_count=1,
        instance_type=ec2.InstanceType.of(ec2.InstanceClass.M4, ec2.InstanceSize.XLARGE)
    )
)

task = sfn.Task(self, "Batch Inference",
    task=transform_job
)

Step Functions example

# Define a state machine with one Pass state
child = sfn.StateMachine(stack, "ChildStateMachine",
    definition=sfn.Chain.start(sfn.Pass(stack, "PassState"))
)

# Include the state machine in a Task state with callback pattern
task = sfn.Task(stack, "ChildTask",
    task=tasks.ExecuteStateMachine(child,
        integration_pattern=sfn.ServiceIntegrationPattern.WAIT_FOR_TASK_TOKEN,
        input={
            "token": sfn.Context.task_token,
            "foo": "bar"
        },
        name="MyExecutionName"
    )
)

# Define a second state machine with the Task state above
sfn.StateMachine(stack, "ParentStateMachine",
    definition=task
)

Pass

A Pass state does no work, but it can optionally transform the execution's JSON state.

# Makes the current JSON state { ..., "subObject": { "hello": "world" } }
pass_state = stepfunctions.Pass(self, "Add Hello World",
    result={"hello": "world"},
    result_path="$.subObject"
)

# Set the next state ("pass" is a reserved word in Python, hence pass_state)
pass_state.next(next_state)

Wait

A Wait state waits for a given number of seconds, or until the current time hits a particular time. The time to wait may be taken from the execution's JSON state.

# Wait until it's the time mentioned in the state object's "triggerTime"
# field.
wait = stepfunctions.Wait(self, "Wait For Trigger Time",
    duration=stepfunctions.WaitDuration.timestamp_path("$.triggerTime")
)

# Set the next state
wait.next(start_the_work)

Choice

A Choice state can take a different path through the workflow based on the values in the execution's JSON state:

choice = stepfunctions.Choice(self, "Did it work?")

# Add conditions with .when()
choice.when(stepfunctions.Condition.string_equals("$.status", "SUCCESS"), success_state)
choice.when(stepfunctions.Condition.number_greater_than("$.attempts", 5), failure_state)

# Use .otherwise() to indicate what should be done if none of the conditions match
choice.otherwise(try_again_state)

If you want to temporarily branch your workflow based on a condition, but have all branches come together and continue as one (similar to how an if ... then ... else works in a programming language), use the .afterwards() method:

choice = stepfunctions.Choice(self, "What color is it?")
choice.when(stepfunctions.Condition.string_equals("$.color", "BLUE"), handle_blue_item)
choice.when(stepfunctions.Condition.string_equals("$.color", "RED"), handle_red_item)
choice.otherwise(handle_other_item_color)

# Use .afterwards() to join all possible paths back together and continue
choice.afterwards().next(ship_the_item)

If your Choice doesn't have an otherwise() and none of the conditions match the JSON state, a NoChoiceMatched error will be thrown. Wrap the state machine in a Parallel state if you want to catch and recover from this.

Parallel

A Parallel state executes one or more subworkflows in parallel. It can also be used to catch and recover from errors in subworkflows.

parallel = stepfunctions.Parallel(self, "Do the work in parallel")

# Add branches to be executed in parallel
parallel.branch(ship_item)
parallel.branch(send_invoice)
parallel.branch(restock)

# Retry the whole workflow if something goes wrong
parallel.add_retry(max_attempts=1)

# How to recover from errors
parallel.add_catch(send_failure_notification)

# What to do in case everything succeeded
parallel.next(close_order)

Succeed

Reaching a Succeed state terminates the state machine execution with a successful status.

success = stepfunctions.Succeed(self, "We did it!")

Fail

Reaching a Fail state terminates the state machine execution with a failure status. The fail state should report the reason for the failure. Failures can be caught by encompassing Parallel states.

failure = stepfunctions.Fail(self, "Fail",
    error="WorkflowFailure",
    cause="Something went wrong"
)

Task Chaining

To make defining workflows as convenient (and readable in a top-to-bottom way) as writing regular programs, it is possible to chain most method invocations. In particular, the .next() method can be repeated. The result of a series of .next() calls is called a Chain, and can be used when defining the jump targets of Choice.on or Parallel.branch:

definition = (step1
    .next(step2)
    .next(choice
        .when(condition1, step3.next(step4).next(step5))
        .otherwise(step6)
        .afterwards())
    .next(parallel
        .branch(step7.next(step8))
        .branch(step9.next(step10)))
    .next(finish))

stepfunctions.StateMachine(self, "StateMachine",
    definition=definition
)

If you don't like the visual look of starting a chain directly off the first step, you can use Chain.start:

definition = stepfunctions.Chain.start(step1).next(step2).next(step3)

State Machine Fragments

It is possible to define reusable (or abstracted) mini-state machines by defining a construct that implements IChainable, which requires you to define two fields:

  • startState: State, representing the entry point into this state machine.
  • endStates: INextable[], representing the (one or more) states that outgoing transitions will be added to if you chain onto the fragment.

Since states will be named after their construct IDs, you may need to prefix the IDs of states if you plan to instantiate the same state machine fragment multiples times (otherwise all states in every instantiation would have the same name).

The class StateMachineFragment contains some helper functions (like prefixStates()) to make it easier for you to do this. If you define your state machine as a subclass of this, it will be convenient to use:

class MyJob(stepfunctions.StateMachineFragment):

    def __init__(self, parent, id, *, job_flavor):
        super().__init__(parent, id)

        first = stepfunctions.Task(self, "First", ...)
        # ...
        last = stepfunctions.Task(self, "Last", ...)

        self.start_state = first
        self.end_states = [last]

# Do 3 different variants of MyJob in parallel
stepfunctions.Parallel(self, "All jobs") \
    .branch(MyJob(self, "Quick", job_flavor="quick").prefix_states()) \
    .branch(MyJob(self, "Medium", job_flavor="medium").prefix_states()) \
    .branch(MyJob(self, "Slow", job_flavor="slow").prefix_states())

Activity

Activities represent work that is done on some non-Lambda worker pool. The Step Functions workflow will submit work to this Activity, and a worker pool that you run yourself, probably on EC2, will pull jobs from the Activity and submit the results of individual jobs back.

You need the ARN to do so, so if you use Activities be sure to pass the Activity ARN into your worker pool:

activity = stepfunctions.Activity(self, "Activity")

# Read this CloudFormation Output from your application and use it to poll for work on
# the activity.
cdk.CfnOutput(self, "ActivityArn", value=activity.activity_arn)

Metrics

Task objects expose various metrics on the execution of that particular task. For example, to create an alarm on a particular task failing:

cloudwatch.Alarm(self, "TaskAlarm",
    metric=task.metric_failed(),
    threshold=1,
    evaluation_periods=1
)

There are also metrics on the complete state machine:

cloudwatch.Alarm(self, "StateMachineAlarm",
    metric=state_machine.metric_failed(),
    threshold=1,
    evaluation_periods=1
)

And there are metrics on the capacity of all state machines in your account:

cloudwatch.Alarm(self, "ThrottledAlarm",
    metric=StateTransitionMetrics.metric_throttled_events(),
    threshold=10,
    evaluation_periods=2
)

Future work

Contributions welcome:

  • A single LambdaTask class that is both a Lambda and a Task in one might make for a nice API.
  • Expression parser for Conditions.
  • Simulate state machines in unit tests.

AWS Cloud Development Kit Core Library---

Stability: Stable


This library includes the basic building blocks of the AWS Cloud Development Kit (AWS CDK). It defines the core classes that are used in the rest of the AWS Construct Library.

See the AWS CDK Developer Guide for information of most of the capabilities of this library. The rest of this README will only cover topics not already covered in the Developer Guide.

Durations

To make specifications of time intervals unambiguous, a single class called Duration is used throughout the AWS Construct Library by all constructs that take a time interval as a parameter (be it for a timeout, a rate, or something else).

An instance of Duration is constructed by using one of the static factory methods on it:

Duration.seconds(300)   # 5 minutes
Duration.minutes(5)     # 5 minutes
Duration.hours(1)       # 1 hour
Duration.days(7)        # 7 days
Duration.parse("PT5M")  # 5 minutes, from an ISO 8601 duration

Secrets

To help avoid accidental storage of secrets as plain text, we use the SecretValue type to represent secrets. Any construct that takes a value that should be a secret (such as a password or an access key) will take a parameter of type SecretValue.

The best practice is to store secrets in AWS Secrets Manager and reference them using SecretValue.secretsManager:

secret = SecretValue.secrets_manager("secretId",
    json_field="password", # optional: key of a JSON field to retrieve (defaults to all content),
    version_id="id", # optional: id of the version (default AWSCURRENT)
    version_stage="stage"
)

Using AWS Secrets Manager is the recommended way to reference secrets in a CDK app. SecretValue also supports the following secret sources:

  • SecretValue.plainText(secret): stores the secret as plain text in your app and the resulting template (not recommended).
  • SecretValue.ssmSecure(param, version): refers to a secret stored as a SecureString in the SSM Parameter Store.
  • SecretValue.cfnParameter(param): refers to a secret passed through a CloudFormation parameter (must have NoEcho: true).
  • SecretValue.cfnDynamicReference(dynref): refers to a secret described by a CloudFormation dynamic reference (used by ssmSecure and secretsManager).
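
A minimal sketch of the plain-text and SSM sources listed above (the values are illustrative):

# Not recommended: the value ends up in plain text in the synthesized template
plain = SecretValue.plain_text("not-a-real-secret")

# SecureString SSM parameter, referenced by name and version
ssm_secure = SecretValue.ssm_secure("/My/Secret/Parameter", "1")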

ARN manipulation

Sometimes you will need to put together or pick apart Amazon Resource Names (ARNs). The functions stack.formatArn() and stack.parseArn() exist for this purpose.

formatArn() can be used to build an ARN from components. It will automatically use the region and account of the stack you're calling it on:

# Builds "arn:<PARTITION>:lambda:<REGION>:<ACCOUNT>:function:MyFunction"
stack.format_arn(
    service="lambda",
    resource="function",
    sep=":",
    resource_name="MyFunction"
)

parseArn() can be used to get a single component from an ARN. parseArn() will correctly deal with both literal ARNs and deploy-time values (tokens), but in case of a deploy-time value be aware that the result will be another deploy-time value which cannot be inspected in the CDK application.

# Extracts the function name out of an AWS Lambda Function ARN
arn_components = stack.parse_arn(arn, ":")
function_name = arn_components.resource_name

Note that depending on the service, the resource separator can be either : or /, and the resource name can be either the 6th or 7th component in the ARN. When using these functions, you will need to know the format of the ARN you are dealing with.

For an exhaustive list of ARN formats used in AWS, see AWS ARNs and Namespaces in the AWS General Reference.

Dependencies### Construct Dependencies

Sometimes AWS resources depend on other resources, and the creation of one resource must be completed before the next one can be started.

In general, CloudFormation will correctly infer the dependency relationship between resources based on the property values that are used. In the cases where it doesn't, the AWS Construct Library will add the dependency relationship for you.

If you need to add an ordering dependency that is not automatically inferred, you do so by adding a dependency relationship using constructA.node.addDependency(constructB). This will add a dependency relationship between all resources in the scope of constructA and all resources in the scope of constructB.

If you want a single object to represent a set of constructs that are not necessarily in the same scope, you can use a ConcreteDependable. The following creates a single object that represents a dependency on two constructs, constructB and constructC:

# Declare the dependable object
b_and_c = ConcreteDependable()
b_and_c.add(construct_b)
b_and_c.add(construct_c)

# Take the dependency
construct_a.node.add_dependency(b_and_c)

Stack Dependencies

Two different stack instances can have a dependency on one another. This happens when a resource from one stack is referenced in another stack. In that case, CDK records the cross-stack referencing of resources, automatically produces the right CloudFormation primitives, and adds a dependency between the two stacks. You can also manually add a dependency between two stacks by using the stackA.addDependency(stackB) method.
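
For example, a minimal sketch, assuming an app with two stacks (the names are illustrative):

stack_a = Stack(app, "StackA")
stack_b = Stack(app, "StackB")

# Ensure StackB is deployed before StackA
stack_a.add_dependency(stack_b)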

A stack dependency has the following implications:

  • Cyclic dependencies are not allowed, so if stackA is using resources from stackB, the reverse is not possible anymore.

  • Stacks with dependencies between them are treated specially by the CDK toolkit:

    • If stackA depends on stackB, running cdk deploy stackA will also automatically deploy stackB.
    • stackB's deployment will be performed before stackA's deployment.

AWS CloudFormation features

A CDK stack synthesizes to an AWS CloudFormation Template. This section explains how this module allows users to access low-level CloudFormation features when needed.

Stack Outputs

CloudFormation stack outputs and exports are created using the CfnOutput class:

CfnOutput(self, "OutputName",
    value=bucket.bucket_name,
    description="The name of an S3 bucket", # Optional
    export_name="Global.BucketName"
)

Parameters

CloudFormation templates support the use of Parameters to customize a template. They enable CloudFormation users to input custom values to a template each time a stack is created or updated. While the CDK design philosophy favors using build-time parameterization, users may need to use CloudFormation in a number of cases (for example, when migrating an existing stack to the AWS CDK).

Template parameters can be added to a stack by using the CfnParameter class:

CfnParameter(self, "MyParameter",
    type="Number",
    default=1337
)

The value of parameters can then be obtained using one of the value methods. As parameters are only resolved at deployment time, the values obtained are placeholder tokens for the real value (Token.isUnresolved() would return true for those):

param = CfnParameter(self, "ParameterName")

# If the parameter is a String
param.value_as_string

# If the parameter is a Number
param.value_as_number

# If the parameter is a List
param.value_as_list

Pseudo Parameters

CloudFormation supports a number of pseudo parameters, which resolve to useful values at deployment time. CloudFormation pseudo parameters can be obtained from static members of the Aws class.

It is generally recommended to access pseudo parameters from the scope's stack instead, which guarantees that the values produced are scoped to the designated stack; this is essential in cases where resources are shared cross-stack:

# "this" is the current construct
stack = Stack.of(self)

stack.account    # Returns the AWS::AccountId for this stack (or the literal value if known)
stack.region     # Returns the AWS::Region for this stack (or the literal value if known)
stack.partition  # Returns the AWS::Partition for this stack

Resource Options

CloudFormation resources can also specify resource attributes. The CfnResource class allows accessing those through the cfnOptions property:

raw_bucket = s3.CfnBucket(self, "Bucket")
# -or-
raw_bucket = bucket.node.default_child

# then
raw_bucket.cfn_options.condition = CfnCondition(self, "EnableBucket")
raw_bucket.cfn_options.metadata = {
    "metadata_key": "MetadataValue"
}

Resource dependencies (the DependsOn attribute) are modified using the cfnResource.addDependsOn method:

resource_a = CfnResource(self, "ResourceA")
resource_b = CfnResource(self, "ResourceB")

resource_b.add_depends_on(resource_a)

Intrinsic Functions and Condition Expressions

CloudFormation supports intrinsic functions. These functions can be accessed from the Fn class, which provides type-safe methods for each intrinsic function as well as condition expressions:

# To use Fn::Base64
Fn.base64("SGVsbG8gQ0RLIQo=")

# To compose condition expressions:
environment_parameter = CfnParameter(self, "Environment")
Fn.condition_and(
    # The "Environment" CloudFormation template parameter evaluates to "Production"
    Fn.condition_equals("Production", environment_parameter),
    # The AWS::Region pseudo-parameter value is NOT equal to "us-east-1"
    Fn.condition_not(Fn.condition_equals("us-east-1", Aws.REGION)))

When working with deploy-time values (those for which Token.isUnresolved returns true), idiomatic conditionals from the programming language cannot be used (the value will not be known until deployment time). When conditional logic needs to be expressed with un-resolved values, it is necessary to use CloudFormation conditions by means of the CfnCondition class:

environment_parameter = CfnParameter(self, "Environment")
is_prod = CfnCondition(self, "IsProduction",
    expression=Fn.condition_equals("Production", environment_parameter)
)

# Configuration value that is a different string based on IsProduction
stage = Fn.condition_if(is_prod.logical_id, "Beta", "Prod").to_string()

# Make Bucket creation condition to IsProduction by accessing
# and overriding the CloudFormation resource
bucket = s3.Bucket(self, "Bucket")
cfn_bucket = bucket.node.default_child
cfn_bucket.cfn_options.condition = is_prod

Mappings

CloudFormation mappings are created and queried using the CfnMapping class:

mapping = CfnMapping(self, "MappingTable",
    mapping={
        "regionName": {
            "us-east-1": "US East (N. Virginia)",
            "us-east-2": "US East (Ohio)"
        }
    }
)

mapping.find_in_map("regionName", Aws.REGION)

Dynamic References

CloudFormation supports dynamically resolving values for SSM parameters (including secure strings) and Secrets Manager. Encoding such references is done using the CfnDynamicReference class:

CfnDynamicReference(self, "SecureStringValue",
    service=CfnDynamicReferenceService.SECRETS_MANAGER,
    reference_key="secret-id:secret-string:json-key:version-stage:version-id"
)

Template Options & Transform

CloudFormation templates support a number of options, including which Macros or Transforms to use when deploying the stack. Those can be configured using the stack.templateOptions property:

stack = Stack(app, "StackName")

stack.template_options.description = "This will appear in the AWS console"
stack.template_options.transform = "AWS::Serverless"
stack.template_options.metadata = {
    "metadata_key": "MetadataValue"
}

Emitting Raw Resources

The CfnResource class allows emitting arbitrary entries in the Resources section of the CloudFormation template.

CfnResource(self, "ResourceId",
    type="AWS::S3::Bucket",
    properties={
        "BucketName": "bucket-name"
    }
)

As for any other resource, the logical ID in the CloudFormation template will be generated by the AWS CDK, but the type and properties will be copied verbatim in the synthesized template.

Including raw CloudFormation template fragments

When migrating a CloudFormation stack to the AWS CDK, it can be useful to include fragments of an existing template verbatim in the synthesized template. This can be achieved using the CfnInclude class.

CfnInclude(self, "ID",
    template={
        "Resources": {
            "Bucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": "my-shiny-bucket"
                }
            }
        }
    }
)

CDK Custom Resources---

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.


This module is part of the AWS Cloud Development Kit project.

AWS Custom Resource

Sometimes a single API call can fill the gap in the CloudFormation coverage. In this case you can use the AwsCustomResource construct. This construct creates a custom resource that can be customized to make specific API calls for the CREATE, UPDATE and DELETE events. Additionally, data returned by the API call can be extracted and used in other constructs/resources (creating a real CloudFormation dependency using Fn::GetAtt under the hood).

The physical id of the custom resource can be specified or derived from the data returned by the API call.

The AwsCustomResource uses the AWS SDK for JavaScript. Services, actions and parameters can be found in the API documentation.

Path to data must be specified using a dot notation, e.g. to get the string value of the Title attribute for the first item returned by dynamodb.query it should be Items.0.Title.S.

Examples

Verify a domain with SES:

verify_domain_identity = AwsCustomResource(self, "VerifyDomainIdentity",
    on_create={
        "service": "SES",
        "action": "verifyDomainIdentity",
        "parameters": {
            "Domain": "example.com"
        },
        "physical_resource_id_path": "VerificationToken"
    }
)

route53.TxtRecord(zone, "SESVerificationRecord",
    record_name="_amazonses.example.com",
    record_value=verify_domain_identity.get_data("VerificationToken")
)

Get the latest version of a secure SSM parameter:

import time

get_parameter = AwsCustomResource(self, "GetParameter",
    on_update={  # will also be called for a CREATE event
        "service": "SSM",
        "action": "getParameter",
        "parameters": {
            "Name": "my-parameter",
            "WithDecryption": True
        },
        # use a fresh physical id on every update so the latest value is always fetched
        "physical_resource_id": str(time.time())
    }
)

# Use the value in another construct with
get_parameter.get_data("Parameter.Value")

IAM policy statements required to make the API calls are derived from the calls and allow by default the actions to be made on all resources (*). You can restrict the permissions by specifying your own list of statements with the policyStatements prop.
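
For example, a minimal sketch that scopes the permissions down to a single SSM parameter (the ARN, names and the iam alias for aws_cdk.aws_iam are illustrative):

AwsCustomResource(self, "GetParameterRestricted",
    on_update={
        "service": "SSM",
        "action": "getParameter",
        "parameters": {"Name": "my-parameter"},
        "physical_resource_id": "my-parameter"
    },
    policy_statements=[iam.PolicyStatement(
        actions=["ssm:GetParameter"],
        resources=["arn:aws:ssm:us-east-1:123456789012:parameter/my-parameter"]
    )]
)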

Chained API calls can be achieved by creating dependencies:

# INCORRECT
aws_custom1 = AwsCustomResource(self, "API1",
    on_create={
        "service": "...",
        "action": "...",
        "physical_resource_id": "..."
    }
)

aws_custom2 = AwsCustomResource(self, "API2",
    on_create={
        "service": "...",
        "action": "...",
        "parameters": {
            "text": aws_custom1.get_data("Items.0.text")
        },
        "physical_resource_id": "..."
    }
)

AWS Region-Specific Information Directory---

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.


Usage

Some information used in CDK Applications differs from one AWS region to another, such as service principals used in IAM policies, S3 static website endpoints, ...

The RegionInfo class

The library offers a simple interface to obtain region specific information in the form of the RegionInfo class. This is the preferred way to interact with the regional information database:

from aws_cdk.region_info import RegionInfo

# Get the information for "eu-west-1":
region = RegionInfo.get("eu-west-1")

# Access attributes:
region.s3_static_website_endpoint  # s3-website.eu-west-1.amazonaws.com
region.service_principal("logs.amazonaws.com")

The RegionInfo layer is built on top of the Low-Level API, which is described below and can be used to register additional data, including user-defined facts that are not available through the RegionInfo interface.

Low-Level API

This library offers a primitive database of such information so that CDK constructs can easily access regional information. The FactName class provides a list of known fact names, which can then be used with the RegionInfo to retrieve a particular value:

import aws_cdk.region_info as region_info

code_deploy_principal = region_info.Fact.find("us-east-1", region_info.FactName.service_principal("codedeploy.amazonaws.com"))
# => codedeploy.us-east-1.amazonaws.com

static_website = region_info.Fact.find("ap-northeast-1", region_info.FactName.S3_STATIC_WEBSITE_ENDPOINT)

Supplying new or missing information

As new regions are released, it might happen that a particular fact you need is missing from the library. In such cases, the Fact.register method can be used to inject the missing fact into the database:

region_info.Fact.register(
    region="bermuda-triangle-1",
    name=region_info.FactName.service_principal("s3.amazonaws.com"),
    value="s3-website.bermuda-triangle-1.nowhere.com"
)

Overriding incorrect information

In the event information provided by the library is incorrect, it can be overridden using the same Fact.register method demonstrated above, simply adding an extra boolean argument:

region_info.Fact.register(
    region="us-east-1",
    name=region_info.FactName.service_principal("service.amazonaws.com"),
    value="the-correct-principal.amazonaws.com",
    allow_replacing=True
)

If you happen to have stumbled upon incorrect data built into this library, it is always a good idea to report your findings in a GitHub issue, so we can fix it for everyone else!


This module is part of the AWS Cloud Development Kit project.
