This document describes how to use docker-compose to start a container with Firehose and S3 locally, seed it, and connect to it from .NET Core.
Below is the docker-compose file for Firehose and S3, with added comments. It uses version 3.7 of the compose file format.
NB: Running both Firehose and S3 in the same container reduces the work needed to get them talking to each other.
version: "3.7"
services:
  firehoses3:
    build:
      context: ../Infrastructure/Docker/Firehose/ # Location of the Dockerfile
    environment:
      - SERVICES=firehose,s3 # List of AWS services to start, separated by a comma ","
      - AWS_ACCESS_KEY_ID=foo # AWS access key - I use "foo" for all services
      - AWS_SECRET_ACCESS_KEY=bar # AWS secret key - I use "bar" for all services
      - DEFAULT_REGION=ap-southeast-2 # AWS region
      - DEBUG=1 # Switch on LocalStack debug logs
    ports:
      - "4573:4573" # Firehose port to expose externally
      - "4572:4572" # S3 port to expose externally
Below is the Dockerfile used to build the image, with added comments.
# Image to use. I found version 0.9.6 to be very stable
FROM localstack/localstack:0.9.6
# Firehose port to expose externally
EXPOSE 4573
# Copy in the seed scripts that create the delivery stream and bucket. The directory below should be in the same directory as the Dockerfile
COPY ./docker-entrypoint-initaws.d /docker-entrypoint-initaws.d
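For reference, the build context could be laid out as below. s3-config.json is the delivery stream definition shown next; create-firehose.sh is just an example name for the seed script:

Firehose/
├── Dockerfile
└── docker-entrypoint-initaws.d/
    ├── create-firehose.sh
    └── s3-config.json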
The following definition creates a delivery stream that writes to an S3 bucket called test-bucket. The stream type is DirectPut, so producers write straight to Firehose and no KinesisStreamSourceConfiguration is needed. Save it as s3-config.json in the docker-entrypoint-initaws.d directory - the seed script below references it by that name. All the options are documented here:
https://docs.aws.amazon.com/cli/latest/reference/firehose/create-delivery-stream.html
{
  "DeliveryStreamName": "firehose-to-s3",
  "DeliveryStreamType": "DirectPut",
  "S3DestinationConfiguration": {
    "RoleARN": "arn:aws:iam:local",
    "BucketARN": "arn:aws:s3:::test-bucket",
    "Prefix": "events",
    "ErrorOutputPrefix": "",
    "CompressionFormat": "UNCOMPRESSED"
  }
}
LocalStack will attempt to run every file in the docker-entrypoint-initaws.d directory on startup*
Create a shell script in that directory with the following contents:
#!/bin/sh

# Create the Firehose delivery stream from the definition above
aws --endpoint-url=http://localhost:4573 --region $DEFAULT_REGION firehose create-delivery-stream --cli-input-json file:///docker-entrypoint-initaws.d/s3-config.json

# Create an S3 bucket called test-bucket
aws --endpoint-url=http://localhost:4572 s3 mb s3://test-bucket

# Set test-bucket to public-read so anything can read from it
aws --endpoint-url=http://localhost:4572 s3api put-bucket-acl --bucket test-bucket --acl public-read

# Dump the bucket ACL back out so you can check the setup
aws --endpoint-url=http://localhost:4572 s3api get-bucket-acl --bucket test-bucket
* Make sure your scripts are saved with Unix line endings (LF) - Windows line endings will break them inside the Linux container. Add the following to your .gitattributes to enforce this:
*.sh text eol=lf
You can now start Firehose and S3 using:
docker-compose --file docker-compose.yml up --build
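Once the container is up, you can sanity-check the seed script from your host using the aws CLI (assuming it is installed and configured with the same "foo"/"bar" credentials):

# The delivery stream created by the seed script should be listed
aws --endpoint-url=http://localhost:4573 --region ap-southeast-2 firehose list-delivery-streams

# The bucket should exist; delivered records will appear under the "events" prefix
aws --endpoint-url=http://localhost:4572 s3 ls s3://test-bucket --recursive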
By default, all Amazon SDK clients try to connect to the real AWS endpoints in the cloud. You will need to pass in a service URL to point the Firehose client at the container instead:
// Requires: using Amazon.KinesisFirehose; using Amazon.Runtime;
// "options", "credentials" and "region" come from your application configuration
services.AddSingleton<IAmazonKinesisFirehose>(provider =>
{
    if (string.IsNullOrEmpty(options.FirehoseServiceUrl))
    {
        // No service URL configured - connect to real AWS
        return new AmazonKinesisFirehoseClient(credentials, region);
    }

    // LocalStack support - "foo"/"bar" match the keys set in the compose file
    var config = new AmazonKinesisFirehoseConfig
    {
        ServiceURL = options.FirehoseServiceUrl
    };
    return new AmazonKinesisFirehoseClient(new BasicAWSCredentials("foo", "bar"), config);
});
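This pattern keeps production untouched: when FirehoseServiceUrl is not configured the client falls back to real AWS, while locally you would set it to http://localhost:4573 to hit the container.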
The same applies to the S3 client, which additionally needs path-style addressing to work against the container:
// Requires: using Amazon.Runtime; using Amazon.S3;
services.AddSingleton<IAmazonS3>(provider => new AmazonS3Client(
    new BasicAWSCredentials(credentials.AccessKey, credentials.SecretKey),
    new AmazonS3Config
    {
        ServiceURL = config.ServiceUrl,
        ForcePathStyle = true, // Don't try to use the bucket name as part of the host name
        UseHttp = true // Just use HTTP for testing
    }));
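To verify the whole pipeline end to end, you can resolve both clients and push a record through. The sketch below is illustrative - FirehoseSmokeTest is my own name, not part of the setup above - but PutRecordAsync and ListObjectsV2Async are standard SDK calls, and the stream and bucket names match the seed script:

using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Amazon.KinesisFirehose;
using Amazon.KinesisFirehose.Model;
using Amazon.S3;
using Amazon.S3.Model;

public class FirehoseSmokeTest
{
    private readonly IAmazonKinesisFirehose _firehose;
    private readonly IAmazonS3 _s3;

    public FirehoseSmokeTest(IAmazonKinesisFirehose firehose, IAmazonS3 s3)
    {
        _firehose = firehose;
        _s3 = s3;
    }

    public async Task RunAsync()
    {
        // Write a record to the delivery stream created by the seed script
        await _firehose.PutRecordAsync(new PutRecordRequest
        {
            DeliveryStreamName = "firehose-to-s3",
            Record = new Record
            {
                Data = new MemoryStream(Encoding.UTF8.GetBytes("{\"event\":\"test\"}"))
            }
        });

        // List what has landed in the bucket under the stream's prefix
        var objects = await _s3.ListObjectsV2Async(new ListObjectsV2Request
        {
            BucketName = "test-bucket",
            Prefix = "events"
        });
        foreach (var obj in objects.S3Objects)
        {
            Console.WriteLine($"{obj.Key} ({obj.Size} bytes)");
        }
    }
}

Note that Firehose buffers records before flushing them to S3, so the object may take a few seconds to appear under the events prefix.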