@sats17 · Last active March 4, 2024 17:06
############################################################################################################
# This Gist collection contains all LocalStack-related examples
############################################################################################################
version: '3.8'
services:
  redis:
    container_name: "redis"
    image: redis:alpine
    ports:
      - "6379:6379" # To connect to Redis from your local cmd use: docker exec -it redis redis-cli
  localstack:
    container_name: "localstack" # Container name in your Docker
    image: localstack/localstack:latest # Pulls the latest version of LocalStack
    #image: localstack/localstack-full:latest # Full image, supports the Web UI
    ports:
      - "4566:4566" # Default edge port forward
      - "9200:4571" # Elasticsearch port forward
      #- "8080:8080" # Web UI port forward
    environment:
      - SERVICES=es,s3,ec2,dynamodb,elasticache,sqs # AWS services that you want in your LocalStack
      - DEBUG=1 # Debug level: 1 to enable logs, 0 to disable
      - START_WEB=0 # Flag to control whether the Web UI should be started in Docker
      - LAMBDA_REMOTE_DOCKER=0
      - DATA_DIR=/tmp/localstack/data # Local directory for saving persistent data (example: ES storage)
      - DEFAULT_REGION=us-east-1
    volumes:
      - './.localstack:/tmp/localstack'
      - '/var/run/docker.sock:/var/run/docker.sock'
1) S3 connection:
AmazonS3ClientBuilder
    .standard()
    .withEndpointConfiguration(new EndpointConfiguration("http://localhost:4566", "us-east-1")) // LocalStack endpoint configuration
    .withCredentials(new DefaultAWSCredentialsProviderChain())
    .withPathStyleAccessEnabled(true) // Disable virtual-host-style addressing and use path-style S3 bucket access
    .build();
2) SQS connection:
AmazonSQSClientBuilder.standard()
    .withEndpointConfiguration(new EndpointConfiguration("http://localhost:4566", "us-east-1"))
    .withCredentials(new DefaultAWSCredentialsProviderChain())
    .build();
3) SNS connection:
AmazonSNSClientBuilder.standard()
    .withEndpointConfiguration(new EndpointConfiguration("http://localhost:4566", "us-east-1"))
    .build();
1) Create an Elasticsearch domain -
# aws --endpoint-url=http://localhost:4566 es create-elasticsearch-domain --domain-name dev-domain --elasticsearch-version 6.8.1
(Note: it takes around 5-10 minutes for the domain to come up in LocalStack)
2) Check whether the domain is up (default container port is 4571, forwarded to 9200 above) -
# curl -X GET "http://localhost:9200"
3) Create a new index in your domain -
# curl -X PUT "localhost:9200/newindex?pretty"
4) List all indices in your domain -
# curl -X GET "http://localhost:9200/_cat/indices?pretty"
5) List all Elasticsearch domains -
# aws --endpoint-url=http://localhost:4566 es list-domain-names
6) Delete a domain from Elasticsearch -
# aws --endpoint-url=http://localhost:4566 es delete-elasticsearch-domain --domain-name dev-domain
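The index created in step 3 uses default settings. If you want explicit settings and mappings, the PUT call can carry a JSON body (`curl -X PUT "localhost:9200/newindex?pretty" -H "Content-Type: application/json" -d @index.json`). A minimal sketch of such a body — the field names here are illustrative, and note that Elasticsearch 6.x still requires the mapping type level (`_doc`):

```json
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  },
  "mappings": {
    "_doc": {
      "properties": {
        "title": { "type": "text" },
        "createdAt": { "type": "date" }
      }
    }
  }
}
```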
1) Create an S3 bucket -
# aws --endpoint-url=http://localhost:4566 s3 mb s3://my-test-bucket
2) List S3 buckets -
# aws --endpoint-url="http://localhost:4566" s3 ls
3) Upload files to an S3 bucket -
# aws --endpoint-url="http://localhost:4566" s3 sync "myfiles" s3://my-test-bucket
4) List files from the AWS CLI -
# aws --endpoint-url="http://localhost:4566" s3 ls s3://my-test-bucket
5) Access a file via URL (file name here is download.jpg) -
# http://localhost:4566/my-test-bucket/download.jpg
6) Delete an object from a bucket -
# aws --endpoint-url=http://localhost:4566 s3api delete-object --bucket <bucket-name> --key <file-name.format>
7) Create a bucket notification configuration -
# aws --endpoint-url=http://localhost:4566 s3api put-bucket-notification-configuration --bucket <bucket-name> --notification-configuration file://<configuration-file-name>.json
8) Get the bucket notification configuration -
# aws --endpoint-url=http://localhost:4566 s3api get-bucket-notification-configuration --bucket <bucket-name>
9) Set a bucket policy -
# aws --endpoint-url=http://localhost:4566 s3api put-bucket-policy --bucket <bucket-name> --policy file://<policy-file>.json
10) Get the bucket policy -
# aws --endpoint-url=http://localhost:4566 s3api get-bucket-policy --bucket <bucket-name>
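The notification-configuration file referenced in the put-bucket-notification-configuration step follows the standard s3api JSON shape. A minimal sketch that delivers object-created events to an SQS queue — the queue ARN and Id are assumptions (LocalStack uses the dummy account id 000000000000):

```json
{
  "QueueConfigurations": [
    {
      "Id": "send-to-sqs",
      "QueueArn": "arn:aws:sqs:us-east-1:000000000000:my-test-queue",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
```

The queue must exist before you apply the configuration, otherwise the call is rejected.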
1) Create an SNS topic -
# aws --endpoint-url=http://localhost:4566 sns create-topic --name my-test-topic
2) List all SNS topics -
# aws --endpoint-url=http://localhost:4566 sns list-topics
3) List subscriptions -
# aws --endpoint-url=http://localhost:4566 sns list-subscriptions
4) Publish a message -
# aws --endpoint-url=http://localhost:4566 sns publish --topic-arn "arn:aws:sns:us-east-1:000000000000:ingest-topic" --message file://message.txt --message-attributes file://attributes.json
-- message.txt
my message to publish
-- attributes.json
{
  "key": {
    "DataType": "String",
    "StringValue": "value"
  }
}
1) Create a queue -
# aws --endpoint-url=http://localhost:4566 sqs create-queue --queue-name my-test-queue
2) Get the queue URL -
# aws --endpoint-url=http://localhost:4566 sqs get-queue-url --queue-name <queue-name>
3) List queues -
# aws --endpoint-url=http://localhost:4566 sqs list-queues
4) Send a message -
# aws --endpoint-url=http://localhost:4566 sqs send-message --queue-url <queue-url> --message-body <message>
5) Receive a message -
# aws --endpoint-url=http://localhost:4566 sqs receive-message --queue-url <queue-url>
6) Purge a queue -
# aws --endpoint-url=http://localhost:4566 sqs purge-queue --queue-url <queue-url>
7) Delete a queue -
# aws --endpoint-url=http://localhost:4566 sqs delete-queue --queue-url <queue-url>
8) Set queue attributes -
# aws --endpoint-url=http://localhost:4566 sqs set-queue-attributes --queue-url <queue-url> --attributes file://<file-name>.json
9) Get queue attributes -
# aws --endpoint-url=http://localhost:4566 sqs get-queue-attributes --queue-url <queue-url> --attribute-names All
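The attributes file referenced in the set-queue-attributes step is a flat JSON map; per the SQS API, every value is a string even when it is numeric. A small sketch with commonly tuned attributes (the values are illustrative):

```json
{
  "VisibilityTimeout": "60",
  "MessageRetentionPeriod": "86400",
  "ReceiveMessageWaitTimeSeconds": "10"
}
```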
provider "aws" {
  region                      = "us-east-1"
  access_key                  = "test"
  secret_key                  = "test"
  skip_credentials_validation = true
  skip_requesting_account_id  = true
  skip_metadata_api_check     = true
  s3_force_path_style         = true

  # Starting with LocalStack version 0.11.0, all APIs are exposed via a single edge service,
  # which is accessible on http://localhost:4566 by default
  endpoints {
    s3  = "http://localhost:4566"
    sqs = "http://localhost:4566"
  }
}
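With the provider above pointed at LocalStack, ordinary Terraform resources work unchanged — `terraform init` and `terraform apply` create them against the edge port instead of AWS. A minimal sketch (the resource and bucket/queue names are illustrative; this assumes an AWS provider version that still supports `s3_force_path_style`, i.e. v3.x or earlier):

```hcl
resource "aws_s3_bucket" "test" {
  bucket = "my-test-bucket"
}

resource "aws_sqs_queue" "test" {
  name = "my-test-queue"
}
```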
1) DynamoDB -
import AWS = require("aws-sdk");

let options = {
    apiVersion: '2012-08-10',
    region: 'us-east-1',
};
options['endpoint'] = new AWS.Endpoint("http://localhost:4566");
console.log(options);

const dynamoDB = new AWS.DynamoDB(options);
dynamoDB.listTables({Limit: 10}, function(err, data) {
    if (err) {
        console.log("Error", err.code);
    } else {
        console.log("Table names are ", data.TableNames);
    }
});