@graveca
Last active June 26, 2023 20:43
Docker

  • Hazelcast
  • MongoDB
  • Redis
  • Cassandra
  • Docker
  • Linux
  • Grafana
  • Splunk
  • Prometheus

Apache Cassandra

Cassandra cluster.

About

An open source NoSQL distributed database

Setup

docker run --name cassandrasrv -d cassandra:latest

docker container attach cassandrasrv
docker container start cassandrasrv

docker exec -it cassandrasrv /bin/sh

Container

Port: 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp

# add to network cassnet
# forward port from host

docker network create cassnet
docker run --name casssrv -p 9042:9042 --network cassnet -d cassandra:latest
docker inspect casssrv | less
docker network inspect cassnet
lsof -i -P > l.l; grep 9042 l.l

Cluster

TODO

Data

TODO

Management App

DataStax OpsCenter is only available for Enterprise versions.

docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' casssrv
# 172.18.0.2 9042
docker run -e DS_LICENSE=accept -d -p 8888:8888 -p 61620:61620 --network cassnet --name cassops datastax/dse-opscenter:6.5.1
# license issue

Console

docker run --name cassshell -it --network cassnet --rm cassandra cqlsh casssrv

help
show host
show version
#[cqlsh 6.0.0 | Cassandra 4.0.1 | CQL spec 3.4.5 | Native protocol v5]

describe tables
desc keyspaces
use <keyspace>
use system_schema;
select keyspace_name,table_name from tables where keyspace_name = 'system';
select * from system.local;

create keyspace people 
with replication = {'class': 'SimpleStrategy', 'replication_factor' : 1};
use people;
create table team (name text PRIMARY KEY, age int);
insert into people.team (name, age) values ('andrew',29);
insert into people.team (name, age) values ('ian',30);
insert into people.team (name, age) values ('avinash',31);
select * from people.team;

describe people.team;
describe cluster;

Java

Use in Spring Boot with @Cacheable

implementation "com.datastax.oss:java-driver-core:4.13.0"

try (CqlSession session = CqlSession.builder().build()) {
    session.execute("select * from people.team")
        .all().forEach(row ->
            LOG.info("Row: {}", row.getFormattedContents()));
} catch (Exception e) {
    LOG.error("Query failed", e);
}
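
CqlSession.builder().build() with no explicit contact point reads the driver's configuration from the classpath. A minimal application.conf for driver 4.x might look like this (a sketch; 'datacenter1' is the default datacenter name for the standard Cassandra Docker image):

```hocon
# src/main/resources/application.conf
datastax-java-driver {
  # contact point matches the port published with -p 9042:9042
  basic.contact-points = [ "127.0.0.1:9042" ]
  # local datacenter is mandatory in driver 4.x when contact points are set
  basic.local-datacenter = "datacenter1"
}
```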

Summary

Cassandra is a NoSQL distributed database:

  • No support for JOINs
  • Limited security controls, e.g. only native password authentication.
| Feature | Status | Comment |
| --- | --- | --- |
| Distributed Cache | | Document collections |
| Distributed Locks | | |
| Distributed Queues | | |
| Distributed Execution | | |
| Command line client | | |
| Query language | | Like SQL |
| Management app | 🆗 | Enterprise versions only |
| Spring Boot / Java API | | |
| Open Source | | Atlas is not |
| Docker standard image | | |
| Clustering / Replicas | | |
| Strong consistency | | Eventual consistency |

docker

Run

docker run -d -p 80:80 docker/getting-started
http://localhost/tutorial/

docker image ls
docker history mongo

docker container list --all
docker container rename clever_franklin mydocker
docker container restart mydocker
docker container stop mydocker
docker container restart mydocker
docker container attach mydocker
docker exec -it mydocker /bin/sh
docker inspect mydocker | less
docker ps -a
docker logs mydocker
docker logs mydocker --follow
docker system prune
docker events

history -1000 | grep "docker run"

// ipaddress
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' cname

Editing

Container arguments can only be set with docker run at create time. E.g. see Prometheus.

You can view the config and edit it; however, edits do not persist or take effect:

docker inspect myprom

docker commit myprom myprom2        
docker run --name myprom2 -p 8090:9090 -td myprom2       

docker run --rm -it -v /var/lib/docker:/var/lib/docker alpine vi $(docker inspect --format='/var/lib/docker/containers/{{.Id}}/config.v2.json' myprom)

Networks

Put containers on same network as others

docker network create cassnet
docker network inspect cassnet

docker network create angrave
docker network ls

docker network create mynet

Clusters

Clustering containers to work as a group.

  • Docker swarm. Running docker as a cluster orchestration.
  • Docker stack. Commands to perform cluster orchestration on a swarm.
  • Docker compose. Configuration for a cluster. Define and run multi-container apps.

docker stack

https://docs.docker.com/engine/reference/commandline/stack/

docker stack deploy --compose-file docker-stack.yml mystack

Uses redis to keep a hit counter. https://docs.docker.com/engine/swarm/stack-deploy/

Firstly start the swarm and a local image registry

docker swarm init
docker service create --name registry --publish published=5000,target=5000 registry:2
docker service ls
curl http://localhost:5000/v2/

Now create a stack

cd stackdemo
ls -1
Dockerfile
app.py
docker-compose.yml
requirements.txt
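
The stack in this walkthrough (from the Docker stack-deploy docs) pairs a Flask web service with Redis; the docker-compose.yml can be sketched as follows (image name and ports assumed from the docs example):

```yaml
version: "3.9"
services:
  web:
    # tagged against the local registry started above so the swarm can pull it
    image: 127.0.0.1:5000/stackdemo
    build: .
    ports:
      - "8000:8000"
  redis:
    image: redis:alpine
```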

docker-compose up -d
docker-compose up -d --build 
docker-compose ls

curl http://localhost:8000

docker-compose ps
NAME                COMMAND                  SERVICE             STATUS              PORTS
stackdemo-redis-1   "docker-entrypoint.s…"   redis               running             6379/tcp
stackdemo-web-1     "python app.py"          web                 running             0.0.0.0:8000->8000/tcp

docker-compose down --volumes

Push image generated to the registry

docker-compose push
docker image ls

Now use that stack image in the swarm

docker stack deploy --compose-file docker-compose.yml stackdemo
docker stack services stackdemo

docker stack rm stackdemo
docker service rm registry
docker swarm leave --force

grafana

Run Grafana at http://localhost:3000

https://grafana.com/docs/grafana/latest/installation/docker/

docker run -d -p 3000:3000 --name mygraf grafana/grafana-oss:8.5.4
docker run -d -p 3000:3000 --name mygraf grafana/grafana-oss:9.3.1

docker container stop mygraf
docker container rm mygraf

// env flags must come before the image name
docker run -d -p 3000:3000 --name mygraf \
  -e "GF_INSTALL_PLUGINS=grafana-clock-panel,grafana-simple-json-datasource" \
  grafana/grafana-oss:8.5.4

/usr/bin/open -a "/Applications/Google Chrome.app" 'http://localhost:3000'
// admin/admin change to admin2
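
Datasources can also be provisioned at container start by mounting a file under /etc/grafana/provisioning/datasources (a sketch, assuming a Prometheus reachable from the container at localhost:9090):

```yaml
# prometheus.yaml — mount with: -v $(pwd)/prometheus.yaml:/etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    url: http://localhost:9090
    access: proxy
    isDefault: true
```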

hazelcast

Hazelcast cluster running locally in Docker.

About

In-memory computing platform.

Install

docker run --name hazelsrv -d hazelcast/hazelcast:5.1-SNAPSHOT-slim
docker container attach hazelsrv
docker container start hazelsrv
docker exec -it hazelsrv /bin/sh

Config

  • Port 5701
  • Use version 5.1-SNAPSHOT-slim
  • Run as root so it can access the brentford mac
--add-host 'brentford.local:192.168.1.103'
--name hazelsrv
-d
--user root
--network host

// running as root so can access brentford mac
docker run -d --name hazelsrv --user root --add-host 'brentford:192.168.1.103' -p 5701:5701 hazelcast/hazelcast:5.1-SNAPSHOT-slim

// get ip address for client: 172.17.0.3
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' hazelsrv

// TODO. Multi node cluster:
docker run --name hazel1 --user root --add-host 'brentford:192.168.1.103' -e HZ_NETWORK_PUBLICADDRESS=brentford:5701 -p 5701:5701 hazelcast/hazelcast:5.1-SNAPSHOT-slim
docker run --name hazel2 --network host --add-host 'brentford.local:192.168.1.103' -d -e HZ_NETWORK_PUBLICADDRESS=brentford:5702 -p 5702:5701 hazelcast/hazelcast:5.1-SNAPSHOT-slim

// TODO data
/opt/hazelcast $ find config/
config/hazelcast-docker.xml

Management App

Run Management Center in another Docker container and access it at http://localhost:8080

docker run --name hazelman \
    -e MC_INIT_CMD="./mc-conf.sh cluster add -H=/data -ma 172.17.0.3:5701 -cn dev" \
    -p 8080:8080 hazelcast/management-center:latest
   
docker run --name hazelman --rm -p 8080:8080 hazelcast/management-center:latest

// find ip address for client: 172.17.0.3
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' hazelsrv

Console

Need to set up the client XML config and mount it in a new Docker container.

cat /Users/andrew/mnt/hazelcast-client.xml 
<hazelcast-client xmlns="http://www.hazelcast.com/schema/client-config"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xsi:schemaLocation="http://www.hazelcast.com/schema/client-config
                  http://www.hazelcast.com/schema/client-config/hazelcast-client-config-4.0.xsd">
    <network>
        <cluster-members>
            <address>172.17.0.3</address>
        </cluster-members>
    </network>
</hazelcast-client>

docker run --name hazelcmd \
    -v /Users/andrew/mnt:/mnt --rm -it hazelcast/hazelcast:latest \
    java -Dhazelcast.client.config=/mnt/hazelcast-client.xml \
    -cp "/opt/hazelcast/lib/*" com.hazelcast.client.console.ClientConsoleApp

Console commands:

help
who

instances
ns default

ns map
m.put hello world
m.get hello
m.stats

ns exe
ns executeOnMembers hello

Java

Logs go to stdout; log4j logging config is on the node.

Use in Spring Boot with @Cacheable

implementation "com.hazelcast:hazelcast:${hazelVersion}"

HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
Cluster cluster = Hazelcast.newHazelcastInstance(config).getCluster();

Quick Start

Welcome back. Run on brentford with: [June 2023]

docker run -d --name hazelsrv --user root --add-host 'brentford:192.168.1.103' -p 5701:5701 hazelcast/hazelcast:5.1-SNAPSHOT-slim
docker run -d --name hazelman --rm -p 8080:8080 hazelcast/management-center:5.1-SNAPSHOT-slim
/usr/bin/open -a "/Applications/Google Chrome.app" 'http://localhost:8080'

Summary

Hazelcast is a fully featured distributed computing environment, with key/value maps and other distributed structures.

| Feature | Status | Comment |
| --- | --- | --- |
| Distributed Cache | | API |
| Distributed Locks | | API |
| Distributed Queues | | API |
| Distributed Execution | | API |
| Command line client | | |
| Query language | | Predicates |
| Management app | | Management Center |
| Spring Boot / Java API | | |
| Open Source | | Github |
| Docker standard image | | |
| Clustering / Replicas | | |
| Strong consistency | | Eventual consistency |

linux

Run linux using alpine distro

docker run -it --name alpine --rm alpine /bin/sh
docker run -it --name alpine --user nobody --rm alpine /bin/sh
docker run -it --name alpine -d --rm alpine /bin/sh
docker container attach alpine
% CTRL-P CTRL-Q // detach and keep container running

docker run debian echo hello

ping host.docker.internal
cat /etc/hosts
uname -a
id
ps awww 
ls
sudo lsof -i -P | grep LISTEN | grep :$PORT
lsof -i -P | grep 27017

MongoDB

MongoDB cluster running locally in Docker.

About

Cross-platform document-oriented database program.

Install

docker run --name mongosrv -d mongo:latest

docker container attach mongosrv
docker container start mongosrv

docker exec -it mongosrv /bin/sh

Config

Port 27017

--name mongosrv
-p 27017:27017
-d 

// not using
--add-host 'brentford.local:192.168.1.103'
--user root // not needed as runs as mongodb
--network host // if used, port publishing is ignored. Not needed
--restart unless-stopped 
--replSet rs0 // for replicas

docker run --name mongosrv -p 27017:27017 -d mongo:latest

From another container you will need a custom network

docker network create mongo-network
docker run -d --network mongo-network --name example-mongo mongo:latest

This works as of Jan 27:

server:
docker run --name mongosrv -p 27017:27017 -d mongo:latest

compass and java:
mongodb://localhost:27017

can be seen on mac listening:
mongodb://localhost:27017

Cluster

TODO

--config <filename> on a mounted drive on host

// TODO not added config yet
docker run --name some-mongo -v /my/custom:/etc/mongo -d mongo --config /etc/mongo/mongod.conf
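
A minimal mongod.conf for that mount might look like this (a sketch; option names are standard mongod YAML config, and the replSetName matches the --replSet note above):

```yaml
# /my/custom/mongod.conf
storage:
  dbPath: /data/db
net:
  port: 27017
  bindIp: 0.0.0.0   # listen on all interfaces inside the container
replication:
  replSetName: rs0
```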

Data

TODO

Management App

Compass

Connect to your Docker instance at mongodb://localhost:27017

Console

// open shell client in another docker
docker run --name mongoclient --network host -it --rm mongo mongo test

db
show dbs
show collections
version()
exit

use myNewDatabase
db.getCollection("myCollection").find()
db.myCollection.find().count()
db.myCollection.find().pretty()
db.myStuff.find()

db.myCollection.insertOne( { x: 1 } );
db.myCollection.insertOne( { x: 42 } );

// find value by id
db.myStuff.find({"name":"andrew"})

// where qty does not exist
db.myCollection.find({qty:{$exists:false}})

// where qty does exist and value in set
db.myCollection.find({x:{$exists:true, $in:[20,42]}})

// where x is a number
db.myCollection.find({x:{$type:"number"}})

Java

Use in Spring Boot with @Cacheable

implementation 'org.springframework.boot:spring-boot-starter-data-mongodb'

String uri = "mongodb://localhost:27017";
MongoClient client = MongoClients.create(uri);
MongoDatabase database = client.getDatabase("myNewDatabase");
MongoCollection<Document> stuff = database.getCollection("myCollection");
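
With the spring-boot-starter-data-mongodb dependency, Spring Boot can also auto-configure the client from application.properties (a sketch; the database name is the one created in the console above):

```properties
# application.properties
spring.data.mongodb.uri=mongodb://localhost:27017/myNewDatabase
```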

Conclusions

MongoDB is a document/JSON store with rich query language for inspecting unstructured data.

MongoDB organises unstructured data using Collections, Documents, and Fields. A collection can hold differently structured JSON documents. Although a schema is not enforced, using one is recommended.

MongoDB uses 'eventual consistency' across clusters.

| Feature | Status | Comment |
| --- | --- | --- |
| Distributed Cache | | Document collections |
| Distributed Locks | | |
| Distributed Queues | | |
| Distributed Execution | | |
| Command line client | | |
| Query language | | JSON/Document focussed |
| Management app | | Compass |
| Spring Boot / Java API | | |
| Open Source | 🆗 | Atlas is not |
| Docker standard image | | |
| Clustering / Replicas | | |
| Strong consistency | | Eventual consistency |

Issues

  • Documents have a 16MB max size
  • No support for JOIN-style queries

Prometheus

Run Prometheus at http://localhost:9090

About

Publish stats from Java into Prometheus and then query.

Setup

docker run --name myprom -d -p 9090:9090 prom/prometheus

Config

Note that root can ping host.docker.internal but nobody cannot. Prometheus is set up to run as nobody, so change the user to root.

Send in hostname as brentford.

Send in web.enable-lifecycle so can reload config into running process with curl.

docker container stop myprom
docker container rm myprom

// -d daemon run in background
// -p ports published host:container
// -u username 0/root
// -h hostname
// -v bind volume
// --network connect to network
// --name container name
// --add-host add host to ip mapping

docker run -d --user root -p 9090:9090 -h promhost -v /var/lib/docker:/var/lib/docker --name myprom --add-host 'brentford:192.168.1.103' prom/prometheus 
// --network mynet or host doesn't work
// docker run -d --user root -p 9090:9090 -h promhost -v /var/lib/docker:/var/lib/docker --name myprom --add-host 'brentford:192.168.1.103' prom/prometheus --config.file=/etc/prometheus/prometheus.yml --storage.tsdb.path=/prometheus --web.console.libraries=/usr/share/prometheus/console_libraries --web.console.templates=/usr/share/prometheus/consoles --web.enable-lifecycle 
/usr/bin/open -a "/Applications/Google Chrome.app" 'http://localhost:9090'

docker inspect myprom
docker ps -a
docker exec -it myprom /bin/sh

Add new scrape points.

open CLI
  cd ..
  prometheus -h
  vi /etc/prometheus/prometheus.yml
  promtool check config /etc/prometheus/prometheus.yml

  http://localhost:8081/prometheus

- job_name: 'java'
  scrape_interval: 5s
  metrics_path: /prometheus
  static_configs:
    - targets: [ 'brentford:8081' ]
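
For context, that job sits under scrape_configs in /etc/prometheus/prometheus.yml; a minimal config might look like this (a sketch; the global interval and self-scrape job are the image defaults):

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  # Prometheus scraping its own metrics
  - job_name: 'prometheus'
    static_configs:
      - targets: [ 'localhost:9090' ]

  # Java app publishing metrics at brentford:8081/prometheus
  - job_name: 'java'
    scrape_interval: 5s
    metrics_path: /prometheus
    static_configs:
      - targets: [ 'brentford:8081' ]
```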

Usage

IntelliJ

Publish stats from Java and examine

URLs

Reload config:

curl -X POST http://localhost:9090/-/reload 

Queries:

promhttp_metric_handler_requests_total
rate(promhttp_metric_handler_requests_total{code="200"}[1m])
periodical_total

Quick Start

start docker
docker run -d -p 9090:9090 --network host --name myprom prom/prometheus 

Redis

Redis cluster running locally in Docker.

About

Real-time data platform.

Install

docker run --name redissrv -d redis:latest

docker container attach redissrv
docker container start redissrv

docker exec -it redissrv /bin/bash

Config

Port 6379

--name redissrv
-d 
-p 6379:6379
--hostname redishost

docker run --name redissrv --hostname redishost -d -p 6379:6379 redis:latest

docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' redissrv
// 172.17.0.5

Cluster

TODO

Data

TODO

//docker run -v /myredis/conf:/usr/local/etc/redis --name myredis redis redis-server /usr/local/etc/redis/redis.conf

Management App

Use RedisInsight on http://localhost:8001

docker run --name redisman \
    -v redisinsight:/db -p 8001:8001 redislabs/redisinsight:latest
    
Connect to 172.17.0.5:6379

Console

Note that Redis has 16 logical 'databases'; pick one at CLI startup with the -n argument (or with SELECT inside the CLI).

// get ip address from docker inspect
docker run -it --rm redis redis-cli -h 172.17.0.5 -p 6379 -n 1

commands
acl users
client info 
exit

config get databases
info keyspace
dbsize

set cat-count 10
exists key
get cat-count

keys *
type banana
mget apple banana

Java

Use in Spring Boot with @Cacheable

Jedis jedis = new Jedis();
jedis.set("events/city/rome", "32,15,223,828");
String cachedResponse = jedis.get("events/city/rome");
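
For @Cacheable, Spring Boot can auto-configure the Redis connection from application.properties (a sketch; Spring Boot 2.x property names, assuming spring-boot-starter-data-redis and spring-boot-starter-cache are on the classpath):

```properties
# application.properties
spring.redis.host=localhost
spring.redis.port=6379
spring.cache.type=redis
```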

Summary

Redis is a fast, real-time key/value store.

| Feature | Status | Comment |
| --- | --- | --- |
| Distributed Cache | | Key/value store |
| Distributed Locks | | |
| Distributed Queues | | |
| Distributed Execution | | In Redisson |
| Command line client | | |
| Query language | | Only get/put etc |
| Management app | | RedisInsight |
| Spring Boot / Java API | | |
| Open Source | 🆗 | Redis Enterprise is not |
| Docker standard image | | |
| Clustering / Replicas | | |
| Strong consistency | | Eventual consistency |

splunk

Run Splunk at http://localhost:8000

https://hub.docker.com/r/splunk/splunk/

// env flags must come before the image name
docker run -d -p 8000:8000 \
  -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=!!Splunk1234!!" \
  --name mysplunk splunk/splunk:latest

docker container stop mysplunk
docker container rm mysplunk

/usr/bin/open -a "/Applications/Google Chrome.app" 'http://localhost:8000'
// admin/!!Splunk1234!!