@NikiforovAll
Last active April 4, 2022 12:45
Postgres + RabbitMQ + Seq

Developer Setup

This repository is intended for bootstrapping a developer environment.

version: "3.5"

services:
  postgres:
    image: postgres:14.0
    hostname: postgres
    volumes:
      - postgresdata-nikiforovall:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: postgres
    ports:
      - "5432:5432"
    networks:
      - nikiforovall-network
  rabbitmq:
    image: rabbitmq:3.8-management
    # container_name: rabbitmq
    hostname: rabbitmq
    volumes:
      - rabbitmqdata-nikiforovall:/var/lib/rabbitmq
    ports:
      - "15672:15672"
      - "15692:15692"
      - "5672:5672"
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=admin
    networks:
      - nikiforovall-network
  seq:
    image: datalust/seq:2021.2
    container_name: seq
    restart: always
    environment:
      - ACCEPT_EULA=Y
    ports:
      - "5341:80"
    networks:
      - nikiforovall-network
    # network_mode: host
    volumes:
      - seq-nikiforovall:/data

networks:
  nikiforovall-network:
    name: nikiforovall-network

volumes:
  rabbitmqdata-nikiforovall:
    driver: local
  postgresdata-nikiforovall:
    driver: local
  seq-nikiforovall:
    driver: local
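A typical lifecycle for the stack above. The file name docker-compose.yml is an assumption here; the helper script at the bottom of this gist uses docker-compose-local-infrastructure.yml, so substitute whatever you saved the compose file as.

```shell
# Bring the stack up, inspect it, and tear it down.
# The compose file name is an assumption; pass your own with -f.
compose="docker-compose -f docker-compose.yml"

if command -v docker-compose >/dev/null 2>&1 && [ -f docker-compose.yml ]; then
    $compose up -d     # start all services in the background
    $compose ps        # verify container state and port mappings
    $compose logs seq  # dump logs for a single service
    $compose down      # stop and remove containers (named volumes are kept)
fi
```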
Docker images and configurations for running the following services:
*** Message queue ***
- RabbitMQ
*** Database ***
- MongoDB
- SQL Server 2017
- PostgreSQL
- InfluxDB
*** Cache ***
- Redis
*** Service discovery ***
- Consul
*** Secret storage ***
- Vault
*** Monitoring ***
- Grafana
- Prometheus
*** Logging ***
- Seq
- Elasticsearch
- Kibana
- Logstash
*** Tracing ***
- Jaeger
===========================================================================================================================
*** MESSAGE QUEUE ***
===========================================================================================================================
===========================================================================================================================
RABBITMQ
===========================================================================================================================
--- Docker ---
docker run --name rabbitmq -d -p 5672:5672 -p 15672:15672 --hostname rabbitmq rabbitmq:3-management
--- Optional ---
-v /tmp/rabbitmq:/var/lib/rabbitmq
-e RABBITMQ_DEFAULT_USER=user -e RABBITMQ_DEFAULT_PASS=secret
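A quick way to confirm the broker came up is the management API on port 15672. This sketch assumes the default guest/guest credentials (which RabbitMQ only accepts from localhost); if you set RABBITMQ_DEFAULT_USER/RABBITMQ_DEFAULT_PASS, use those values instead.

```shell
# Probe the RabbitMQ management API; prints "up" or "down" either way.
status="$(curl -fsS -u guest:guest http://localhost:15672/api/overview \
    >/dev/null 2>&1 && echo up || echo down)"
echo "rabbitmq: $status"
```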
===========================================================================================================================
*** DATABASE ***
===========================================================================================================================
===========================================================================================================================
MONGODB
===========================================================================================================================
--- Docker ---
docker run --name mongo -d -p 27017:27017 mongo:4
--- Optional ---
-v mongo:/data/db
===========================================================================================================================
MONGO EXPRESS
===========================================================================================================================
--- Docker ---
docker run --name mongo-express -d -p 8081:8081 mongo-express
--- Optional ---
-e ME_CONFIG_MONGODB_ADMINUSERNAME=user -e ME_CONFIG_MONGODB_ADMINPASSWORD=secret
--network dshop-network
===========================================================================================================================
SQL SERVER 2017
===========================================================================================================================
--- Docker ---
docker run --name mssql -d -p 1433:1433 -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Abcd1234!' microsoft/mssql-server-linux:2017-latest
--- Optional ---
-v /tmp/mssql:/var/opt/mssql
===========================================================================================================================
POSTGRESQL
===========================================================================================================================
--- Docker ---
docker run --name postgres -d -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=secret postgres
--- Optional ---
-v /tmp/postgresql:/var/lib/postgresql/data
===========================================================================================================================
INFLUXDB
===========================================================================================================================
--- Docker ---
docker run --name influxdb -d -p 8086:8086 influxdb
--- Optional ---
-v /tmp/influxdb:/var/lib/influxdb
===========================================================================================================================
*** CACHE ***
===========================================================================================================================
===========================================================================================================================
REDIS
===========================================================================================================================
--- Docker ---
docker run --name redis -d -p 6379:6379 redis
--- Optional ---
-v /tmp/redis/:/data
===========================================================================================================================
REDIS CLI
===========================================================================================================================
--- Docker ---
docker run -it --rm redis redis-cli -h redis -p 6379
--- Optional ---
--network dshop-network
===========================================================================================================================
*** SERVICE DISCOVERY ***
===========================================================================================================================
===========================================================================================================================
CONSUL
===========================================================================================================================
--- Docker ---
docker run --name consul -d -p 8500:8500 consul
--- Optional ---
-v /tmp/consul/data:/consul/data
-v /tmp/consul/config:/consul/config
config.json:
{
  "datacenter": "dc1",
  "log_level": "DEBUG",
  "server": true,
  "ui": true,
  "ports": {
    "dns": 9600,
    "http": 9500,
    "https": -1,
    "serf_lan": 9301,
    "serf_wan": 9302,
    "server": 9300
  }
}
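Consul will fail to start on a malformed config, so it's worth validating the JSON before mounting it with -v. A minimal check with Python's stdlib (the /tmp path is just an example):

```shell
# Write the config above and verify it parses as JSON.
cat > /tmp/consul-config.json <<'EOF'
{
  "datacenter": "dc1",
  "log_level": "DEBUG",
  "server": true,
  "ui": true,
  "ports": {
    "dns": 9600,
    "http": 9500,
    "https": -1,
    "serf_lan": 9301,
    "serf_wan": 9302,
    "server": 9300
  }
}
EOF
python3 -m json.tool /tmp/consul-config.json >/dev/null && echo "config: valid"
```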
===========================================================================================================================
*** MONITORING ***
===========================================================================================================================
===========================================================================================================================
GRAFANA
===========================================================================================================================
--- Docker ---
docker run --name grafana -d -p 3000:3000 grafana/grafana
--- Optional ---
-v /tmp/grafana:/var/lib/grafana
===========================================================================================================================
PROMETHEUS
===========================================================================================================================
--- Docker ---
docker run --name prometheus -d -p 9090:9090 -v /tmp/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
prometheus.yml:
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).
  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first.rules"
  # - "second.rules"
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    static_configs:
      - targets: ['docker.for.mac.localhost:9090']
  - job_name: 'api'
    metrics_path: '/metrics-text'
    static_configs:
      - targets: ['docker.for.mac.localhost:5000']
===========================================================================================================================
*** LOGGING ***
===========================================================================================================================
===========================================================================================================================
SEQ
===========================================================================================================================
--- Docker ---
docker run --name seq -d -p 5341:80 -e ACCEPT_EULA=Y datalust/seq
--- Optional ---
-v /tmp/seq:/data
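With the 5341:80 mapping above, Seq's HTTP ingestion endpoint is reachable on localhost:5341. A smoke test that posts a single CLEF-formatted event (the timestamp and message text are arbitrary):

```shell
# Post one structured event to Seq; prints "sent" or "unreachable" either way.
payload='{"@t":"2022-04-04T12:00:00Z","@mt":"Hello from curl"}'
result="$(curl -fsS -X POST 'http://localhost:5341/api/events/raw?clef' \
    -H 'Content-Type: application/vnd.serilog.clef' \
    --data "$payload" >/dev/null 2>&1 && echo sent || echo unreachable)"
echo "seq: $result"
```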
===========================================================================================================================
ELASTICSEARCH
===========================================================================================================================
# The vm.max_map_count kernel setting has to be at least 262144 for production use.
# Linux:
grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144
# macOS:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
sysctl -w vm.max_map_count=262144
# Windows
docker-machine ssh
sudo sysctl -w vm.max_map_count=262144
--- Docker ---
docker run --name elasticsearch -p 9200:9200 -p 9300:9300 docker.elastic.co/elasticsearch/elasticsearch:6.4.0
--- Optional ---
-e "discovery.type=single-node"
-v /tmp/elasticsearch:/usr/share/elasticsearch/data
===========================================================================================================================
KIBANA
===========================================================================================================================
--- Docker ---
docker run --name kibana -d -p 5601:5601 docker.elastic.co/kibana/kibana:6.4.0
--- Optional ---
-v /tmp/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
kibana.yml:
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200
===========================================================================================================================
LOGSTASH
===========================================================================================================================
--- Docker ---
docker run --name logstash -d -p 5044:5044 docker.elastic.co/logstash/logstash:6.4.0
===========================================================================================================================
JAEGER
===========================================================================================================================
--- Docker ---
docker run --name jaeger -d -p 5775:5775/udp -p 6831:6831/udp -p 6832:6832/udp -p 5778:5778 -p 16686:16686 -p 14268:14268 -p 9411:9411 -e COLLECTOR_ZIPKIN_HTTP_PORT=9411 jaegertracing/all-in-one
===========================================================================================================================
HELPER SCRIPT (start/stop/info)
===========================================================================================================================
#!/bin/bash
SCRIPTPATH="$( cd "$(dirname "$0")" >/dev/null 2>&1 ; pwd -P )"
DOCKER_COMPOSE_FILE="$SCRIPTPATH/../docker-compose-local-infrastructure.yml"

usage() {
    echo -e "Usage: \n${0##*/} start \n${0##*/} stop \n${0##*/} info"
}

if [ -z "$1" ]; then
    usage
    exit 1
fi

if [ "$1" == "start" ]; then
    docker-compose -f "$DOCKER_COMPOSE_FILE" up --build -d "${@:2}"
    STATUS=$?
    if [ $STATUS -eq 0 ]; then
        echo -e "\nContainers starting in background \nFor log info: ${0##*/} info"
    else
        echo -e "\nFailed starting containers"
    fi
elif [ "$1" == "stop" ]; then
    docker-compose -f "$DOCKER_COMPOSE_FILE" down
    STATUS=$?
    if [ $STATUS -eq 0 ]; then
        echo -e "\nContainers successfully stopped"
    else
        echo -e "\nFailed stopping containers"
    fi
elif [ "$1" == "info" ]; then
    docker-compose -f "$DOCKER_COMPOSE_FILE" logs --follow "${@:2}"
else
    usage
    exit 1
fi
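Note that `start` returns as soon as the containers are launched, not when the services inside them are ready to accept connections. A small readiness helper using bash's /dev/tcp (the host and port below come from the compose file; the retry count is arbitrary):

```shell
# Poll a TCP port until it accepts connections or the retries run out.
wait_for_port() {
    local host=$1 port=$2 tries=${3:-10} i
    for ((i = 1; i <= tries; i++)); do
        if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
            echo "up"
            return 0
        fi
        sleep 1
    done
    echo "down"
    return 1
}

wait_for_port localhost 5432 5 || echo "postgres not ready yet"
```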