
@nook24
Last active June 29, 2023 05:58
openITCOCKPIT Docker Swarm


This is an example of how to run openITCOCKPIT within a Docker Swarm cluster. To share the volumes between the swarm nodes, NFS is used.

Create NFS share

apt-get install nfs-kernel-server

mkdir -p /opt/volumes/{mysql-data,grafana-data,graphite-data,naemon-var,naemon-var-local,naemon-config,oitc-frontend-src,oitc-webroot,oitc-maps,oitc-agent-cert,oitc-agent-etc,oitc-var,oitc-backups,oitc-import,oitc-styles,checkmk-etc,checkmk-var,checkmk-agents}
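Note that the brace expansion in the mkdir command above requires bash. A portable sketch that creates the same directory tree in plain sh (BASE defaults to a relative path here for illustration; on the NFS server it would be /opt/volumes):

```shell
# Create the volume directories one by one (POSIX sh, no brace expansion).
# BASE is an illustrative override; use BASE=/opt/volumes on the NFS server.
BASE="${BASE:-./volumes}"
for d in mysql-data grafana-data graphite-data naemon-var naemon-var-local \
         naemon-config oitc-frontend-src oitc-webroot oitc-maps \
         oitc-agent-cert oitc-agent-etc oitc-var oitc-backups oitc-import \
         oitc-styles checkmk-etc checkmk-var checkmk-agents; do
  mkdir -p "$BASE/$d"
done
```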

Edit the file /etc/exports

/opt/volumes 172.16.166.0/24(rw,sync,no_subtree_check,no_root_squash)

Apply the new configuration: exportfs -a

Prepare Docker Swarm Nodes

It is important that all Docker Swarm nodes have the NFS client installed: apt-get install nfs-common

Do not mount the NFS share. Docker will do this.
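For reference, "Docker will do this" means each named volume in the compose file carries NFS driver_opts, as in the volumes section of the example below; Docker mounts the export on whichever node schedules the task:

```yaml
# One volume definition from the compose file below. Docker mounts the
# NFS export itself on the node that runs the task.
volumes:
  mysql-data:
    driver_opts:
      type: "nfs"
      o: "addr=172.16.166.103,nolock,soft,rw"
      device: ":/opt/volumes/mysql-data"
```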

Enable NFS support for Graphite

Edit the file openitcockpit.env and set

CC_WHISPER_FALLOCATE_CREATE=0

Last, edit the default compose.yml so that the volumes use the NFS share. See the example below.

Start the containers

docker stack deploy --compose-file compose.yml openitcockpit
# For now, Portainer cannot use an env file when running in Swarm Mode.
# Therefore you need to specify the env vars for the containers directly.
# https://github.com/portainer/portainer/issues/6701
#
# You can use this as an example config, tested with Portainer-CE in a 3-node Docker Swarm cluster.
version: "3.9"
services:
  gearmand:
    image: openitcockpit/gearmand:latest
    ports:
      - "4730" # Gearman-Job-Server
    ulimits: # See: https://statusengine.org/tutorials/gearman-to-many-files/
      nofile:
        soft: 65536
        hard: 65536
    networks:
      - oitc-backend
    deploy:
      restart_policy:
        condition: on-failure
  redis:
    image: redis:latest
    ports:
      - "6379" # Redis-Server
    networks:
      - oitc-backend
    deploy:
      restart_policy:
        condition: on-failure
  mysql:
    image: mysql:latest
    environment:
      - MYSQL_ROOT_PASSWORD=root_password
      - MYSQL_DATABASE=openitcockpit
      - MYSQL_USER=openitcockpit
      - MYSQL_PASSWORD=secure
    ports:
      - "3306" # MySQL default port
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    networks:
      - oitc-backend
    volumes:
      - mysql-data:/var/lib/mysql
    command: mysqld --sql_mode="" --innodb-buffer-pool-size=256M --innodb-flush-log-at-trx-commit=2 --innodb-file-per-table=1 --innodb-flush-method=O_DIRECT --character-set-server=utf8mb4 --collation-server=utf8mb4_general_ci
    deploy:
      restart_policy:
        condition: on-failure
  victoria-metrics:
    image: openitcockpit/victoria-metrics:latest
    ports:
      - "8428" # victoria-metrics
    networks:
      - oitc-backend
    deploy:
      restart_policy:
        condition: on-failure
  carbon-c-relay:
    image: openitcockpit/carbon-c-relay:latest
    # Please notice: https://github.com/grobian/carbon-c-relay/issues/455
    ports:
      - "2003" # local graphite plaintext port
    networks:
      - oitc-backend
    deploy:
      restart_policy:
        condition: on-failure
  # You can add more carbon-cache instances if needed
  # Please notice: https://github.com/grobian/carbon-c-relay/issues/455
  carbon-cache1:
    image: openitcockpit/carbon-cache:latest
    environment:
      - CC_INSTANCE=1 # Make sure to adjust this
      - CC_WHISPER_FALLOCATE_CREATE=0 # Fix for NFS
    hostname: carbon-cache1
    volumes:
      - graphite-data:/var/lib/graphite/whisper
    networks:
      - oitc-backend
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
  # You can add more carbon-cache instances if needed
  # Please notice: https://github.com/grobian/carbon-c-relay/issues/455
  carbon-cache2:
    image: openitcockpit/carbon-cache:latest
    environment:
      - CC_INSTANCE=2 # Make sure to adjust this
      - CC_WHISPER_FALLOCATE_CREATE=0 # Fix for NFS
    hostname: carbon-cache2
    volumes:
      - graphite-data:/var/lib/graphite/whisper
    networks:
      - oitc-backend
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
  graphite-web:
    image: openitcockpit/graphite-web:latest
    ports:
      - "8080" # Graphite Web
    volumes:
      - graphite-data:/var/lib/graphite/whisper
    networks:
      - oitc-backend
    deploy:
      restart_policy:
        condition: on-failure
  grafana:
    image: openitcockpit/grafana:latest
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=bGsPMxURCjg4esgJ
    ports:
      - "3033" # Grafana
    networks:
      oitc-backend:
        aliases:
          - "grafana.docker"
    volumes:
      - grafana-data:/var/lib/grafana
    deploy:
      restart_policy:
        condition: on-failure
  naemon:
    image: openitcockpit/naemon:latest
    init: true
    ports:
      - "9001" # Supervisor
      - "9099" # Binaryd
    networks:
      - oitc-backend
    depends_on:
      - gearmand
    volumes:
      - naemon-var:/opt/openitc/nagios/var
      - naemon-var-local:/opt/openitc/nagios/var_local # Stuff like logfiles etc., we do not care about this
      - naemon-config:/opt/openitc/nagios/etc/config:ro
    deploy:
      restart_policy:
        condition: on-failure
  mod_gearman_worker:
    image: openitcockpit/mod_gearman_worker:latest
    init: true
    environment:
      - IS_WORKHORSE_CONTAINER=1 # Enables the CLI for sending notifications and to be able to execute check plugins like EVC
      - MYSQL_ROOT_PASSWORD=root_password
      - MYSQL_DATABASE=openitcockpit
      - MYSQL_USER=openitcockpit
      - MYSQL_PASSWORD=secure
    volumes:
      - naemon-var:/opt/openitc/nagios/var
      - oitc-agent-cert:/opt/openitc/agent:ro # TLS certificates of the openITCOCKPIT Monitoring Agent
      - oitc-agent-etc:/opt/openitc/receiver/etc:ro # Config file of the openITCOCKPIT Agent
      - oitc-frontend-src:/opt/openitc/frontend:ro # Shared source code of openITCOCKPIT so that this container can execute the EVC plugin and default notification scripts
    networks:
      - oitc-backend
    depends_on:
      - gearmand
    deploy:
      replicas: 1 # Increase this number if you need more mod_gearman worker containers
      restart_policy:
        condition: on-failure
  statusengine-worker:
    image: openitcockpit/statusengine-worker:latest
    init: true
    environment:
      - MYSQL_ROOT_PASSWORD=root_password
      - MYSQL_DATABASE=openitcockpit
      - MYSQL_USER=openitcockpit
      - MYSQL_PASSWORD=secure
    networks:
      - oitc-backend
    depends_on:
      - redis
      - mysql
      - gearmand
    deploy:
      restart_policy:
        condition: on-failure
  openitcockpit:
    image: openitcockpit/openitcockpit-ce:latest # Community Edition of openITCOCKPIT
    init: true
    environment:
      - MYSQL_ROOT_PASSWORD=root_password
      - MYSQL_DATABASE=openitcockpit
      - MYSQL_USER=openitcockpit
      - MYSQL_PASSWORD=secure
      - OITC_ADMIN_EMAIL=user@example.org
      - OITC_ADMIN_PASSWORD=asdf12
      - OITC_ADMIN_FIRSTNAME=John
      - OITC_ADMIN_LASTNAME=Doe
      - OITC_GRAFANA_ADMIN_PASSWORD=bGsPMxURCjg4esgJ
    ports:
      - "80:80" # HTTP
      - "443:443" # HTTPS
    networks:
      - oitc-backend
    volumes:
      - naemon-config:/opt/openitc/nagios/etc/config # Configuration files related to Naemon
      - naemon-var:/opt/openitc/nagios/var # The status.dat is required for the recurring downtimes cronjob
      - oitc-frontend-src:/opt/openitc/src_sharing/frontend # Frontend of openITCOCKPIT to be shared with the mod_gearman container so that the container can execute the EVC plugin
      - oitc-webroot:/opt/openitc/frontend/webroot # Webroot of openITCOCKPIT to keep images and share files with puppeteer
      - oitc-maps:/opt/openitc/frontend/plugins/MapModule/webroot/img # Images of the MapModule
      - oitc-agent-cert:/opt/openitc/agent # TLS certificates of the openITCOCKPIT Monitoring Agent
      - oitc-agent-etc:/opt/openitc/receiver/etc # Configuration for the openITCOCKPIT Monitoring Agent Check Receiver
      - oitc-var:/opt/openitc/var # A safe harbor to store .lock files
      - oitc-backups:/opt/openitc/nagios/backup # Automatically generated MySQL dumps
      #- oitc-import:/opt/openitc/frontend/plugins/ImportModule/webroot/files # Uploaded files of the ImportModule
      - oitc-styles:/opt/openitc/frontend/plugins/DesignModule/webroot/css # Custom styles of the DesignModule
      - checkmk-agents:/opt/openitc/check_mk/agents:ro
      - checkmk-etc:/opt/openitc/check_mk/etc/check_mk
      - checkmk-var:/opt/openitc/check_mk/var/check_mk
    depends_on:
      - redis
      - mysql
      - gearmand
    deploy:
      restart_policy:
        condition: on-failure
  puppeteer:
    image: openitcockpit/puppeteer:latest
    init: true
    ports:
      - "8084" # Puppeteer Web Server
    networks:
      - oitc-backend
    volumes:
      - oitc-webroot:/opt/openitc/frontend/webroot:ro
    deploy:
      restart_policy:
        condition: on-failure
  checkmk:
    image: openitcockpit/checkmk:latest
    init: true
    ports:
      - "1234" # Checkmk Wrapper HTTP API
    networks:
      - oitc-backend
    volumes:
      - naemon-var:/opt/openitc/nagios/var # oitc.cmd
      - checkmk-agents:/opt/omd/sites/nagios/version/share/check_mk/agents
      - checkmk-etc:/omd/sites/nagios/etc/check_mk
      - checkmk-var:/omd/sites/nagios/var/check_mk
    deploy:
      restart_policy:
        condition: on-failure
  # Docs about the postfix container can be found at
  # - https://hub.docker.com/r/boky/postfix
  # - https://github.com/bokysan/docker-postfix
  postfix:
    image: boky/postfix
    environment:
      - HOSTNAME=openitcockpit.itsm.love # Postfix myhostname
      - RELAYHOST=mailrelay.static.itsm.love # Host that relays all e-mails
      #- RELAYHOST_USERNAME= # An (optional) username for the relay server
      #- RELAYHOST_PASSWORD= # An (optional) login password for the relay server
      #- MYNETWORKS= # Networks allowed to relay (default: 127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16)
      #- ALLOWED_SENDER_DOMAINS= # Sender domains allowed to send e-mails
      - ALLOW_EMPTY_SENDER_DOMAINS=true # If set (e.g. "true"), $ALLOWED_SENDER_DOMAINS can be unset
      #- MASQUERADED_DOMAINS= # Domains where you want to masquerade internal hosts
    ports:
      - "587" # SMTP submission (postfix)
    networks:
      - oitc-backend
    deploy:
      restart_policy:
        condition: on-failure
networks:
  oitc-backend:
volumes:
  mysql-data:
    driver_opts:
      type: "nfs"
      o: "addr=172.16.166.103,nolock,soft,rw"
      device: ":/opt/volumes/mysql-data"
  grafana-data:
    driver_opts:
      type: "nfs"
      o: "addr=172.16.166.103,nolock,soft,rw"
      device: ":/opt/volumes/grafana-data"
  graphite-data:
    driver_opts:
      type: "nfs"
      o: "addr=172.16.166.103,nolock,soft,rw"
      device: ":/opt/volumes/graphite-data"
  naemon-var:
    driver_opts:
      type: "nfs"
      o: "addr=172.16.166.103,nolock,soft,rw"
      device: ":/opt/volumes/naemon-var"
  naemon-var-local:
    driver_opts:
      type: "nfs"
      o: "addr=172.16.166.103,nolock,soft,rw"
      device: ":/opt/volumes/naemon-var-local"
  naemon-config:
    driver_opts:
      type: "nfs"
      o: "addr=172.16.166.103,nolock,soft,rw"
      device: ":/opt/volumes/naemon-config"
  oitc-frontend-src:
    driver_opts:
      type: "nfs"
      o: "addr=172.16.166.103,nolock,soft,rw"
      device: ":/opt/volumes/oitc-frontend-src"
  oitc-webroot:
    driver_opts:
      type: "nfs"
      o: "addr=172.16.166.103,nolock,soft,rw"
      device: ":/opt/volumes/oitc-webroot"
  oitc-maps:
    driver_opts:
      type: "nfs"
      o: "addr=172.16.166.103,nolock,soft,rw"
      device: ":/opt/volumes/oitc-maps"
  oitc-agent-cert:
    driver_opts:
      type: "nfs"
      o: "addr=172.16.166.103,nolock,soft,rw"
      device: ":/opt/volumes/oitc-agent-cert"
  oitc-agent-etc:
    driver_opts:
      type: "nfs"
      o: "addr=172.16.166.103,nolock,soft,rw"
      device: ":/opt/volumes/oitc-agent-etc"
  oitc-var:
    driver_opts:
      type: "nfs"
      o: "addr=172.16.166.103,nolock,soft,rw"
      device: ":/opt/volumes/oitc-var"
  oitc-backups:
    driver_opts:
      type: "nfs"
      o: "addr=172.16.166.103,nolock,soft,rw"
      device: ":/opt/volumes/oitc-backups"
  oitc-styles:
    driver_opts:
      type: "nfs"
      o: "addr=172.16.166.103,nolock,soft,rw"
      device: ":/opt/volumes/oitc-styles"
  checkmk-etc:
    driver_opts:
      type: "nfs"
      o: "addr=172.16.166.103,nolock,soft,rw"
      device: ":/opt/volumes/checkmk-etc"
  checkmk-var:
    driver_opts:
      type: "nfs"
      o: "addr=172.16.166.103,nolock,soft,rw"
      device: ":/opt/volumes/checkmk-var"
  checkmk-agents:
    driver_opts:
      type: "nfs"
      o: "addr=172.16.166.103,nolock,soft,rw"
      device: ":/opt/volumes/checkmk-agents"
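The seventeen volume definitions above differ only in their name. If you need to regenerate or extend the volumes section, a small helper along these lines can emit it (emit_volume is a hypothetical helper written for this gist, not part of openITCOCKPIT):

```shell
# emit_volume NAME — print one NFS volume definition for compose.yml.
# NFS_ADDR defaults to the server address used in the example above.
emit_volume() {
  printf '  %s:\n' "$1"
  printf '    driver_opts:\n'
  printf '      type: "nfs"\n'
  printf '      o: "addr=%s,nolock,soft,rw"\n' "${NFS_ADDR:-172.16.166.103}"
  printf '      device: ":/opt/volumes/%s"\n' "$1"
}

echo "volumes:"
for v in mysql-data grafana-data graphite-data naemon-var naemon-var-local \
         naemon-config oitc-frontend-src oitc-webroot oitc-maps \
         oitc-agent-cert oitc-agent-etc oitc-var oitc-backups oitc-styles \
         checkmk-etc checkmk-var checkmk-agents; do
  emit_volume "$v"
done
```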
# If you use the plain Docker Swarm CLI, no changes (other than the volumes to be shared across the nodes) are required.
# The NFS volume configuration is identical to the volumes section shown above.