```yaml
version: '3.6'

services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    container_name: uptime-kuma
    volumes:
      - data:/app/data
    restart: always
    networks:
      - traefik-public
    deploy:
      placement:
        constraints:
          - node.labels.monitor == true
      labels:
        - traefik.enable=true
        - traefik.docker.network=traefik-public
        - traefik.constraint-label=traefik-public
        - traefik.http.routers.uptime-kuma-http.rule=Host(`monitor.${PRIMARY_DOMAIN?Variable not set}`)
        - traefik.http.routers.uptime-kuma-http.entrypoints=http
        - traefik.http.routers.uptime-kuma-http.middlewares=https-redirect
        - traefik.http.routers.uptime-kuma-https.rule=Host(`monitor.${PRIMARY_DOMAIN?Variable not set}`)
        - traefik.http.routers.uptime-kuma-https.entrypoints=https
        - traefik.http.routers.uptime-kuma-https.tls=true
        - traefik.http.routers.uptime-kuma-https.tls.certresolver=le
        - traefik.http.services.uptime-kuma.loadbalancer.server.port=3001

networks:
  traefik-public:
    external: true

volumes:
  data:
```
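For reference, a stack like this is deployed from a Swarm manager node with `docker stack deploy`. A minimal sketch (the stack name `monitor` is my assumption, not taken from the file), which also shows why the `${PRIMARY_DOMAIN?Variable not set}` syntax is useful:

```shell
# The compose file's ${PRIMARY_DOMAIN?Variable not set} syntax aborts
# with that message when the variable is missing, so export it first.
export PRIMARY_DOMAIN=example.com

# Same expansion the Traefik router rule uses (backticks escaped for the shell):
echo "Host(\`monitor.${PRIMARY_DOMAIN?Variable not set}\`)"
# prints: Host(`monitor.example.com`)

# Then deploy from a manager node ("monitor" is an assumed stack name):
# docker stack deploy -c docker-compose.yml monitor
```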
I'm not sure whether you created your Docker Swarm correctly. I also don't know whether using a folder for letsencrypt/acme.json works the way you did it. For me, I had to do it as explained in the previously mentioned articles, and I also had to change the permissions of the acme.json file to 600.
Furthermore, I do not see any placement constraints for Uptime Kuma, but I do see absolute volume paths, which will create the volumes on whichever worker the service happens to be deployed to. Maybe switch to named volumes so that they are created in the Docker volume folder (/var/lib/docker/volumes).
I would also try to use the production Let's Encrypt server instead of staging.
Sorry, but I cannot help more. Try to use the same prerequisites I did in my tutorial; then it should work. To learn how to set up a Docker Swarm environment correctly, check this article, which contains every step needed to set up a Docker Swarm in ~15 minutes: https://www.paulsblog.dev/docker-swarm-in-a-nutshell/
I followed your tutorial to create the swarm and deploy the stack. The permissions for acme.json are set to 600. Regarding the volumes: I want all nodes to share a common NFS volume, which is why I use /mnt/nfs/docker. All nodes have access to this share, and it is configured correctly (I verified this by logging into each node).
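Traefik's permission requirement is easy to verify: it logs an error and ignores the certificate resolver when acme.json is group- or world-readable. A quick sketch (the file path is illustrative):

```shell
# Create the ACME storage file and restrict it to the owner only,
# as Traefik requires (it rejects group/world-readable files).
touch acme.json
chmod 600 acme.json
stat -c '%a' acme.json
# prints: 600
```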
If I look at the README of the Uptime Kuma repository at https://github.com/louislam/uptime-kuma, it says that NFS is not supported. So please test without using NFS.
From Uptime Kuma:

> **Warning**
> File Systems like NFS (Network File System) are NOT supported. Please map to a local directory or volume.
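Since Uptime Kuma needs a local volume, a common workaround in a Swarm is to pin the service to a single node so its named volume always lands on the same host. A sketch of the idea; the label name `monitor` mirrors the constraint from the first compose file above, and the node name is up to you:

```yaml
# Pin uptime-kuma to the one node carrying the "monitor" label, so its
# local named volume is always created on that same host:
deploy:
  placement:
    constraints:
      - node.labels.monitor == true
```

The label itself is attached on a manager with `docker node update --label-add monitor=true <node-name>`.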
Still no good:
```yaml
version: '3.7'

services:
  kuma:
    image: louislam/uptime-kuma:latest
    networks:
      - traefik-public
    volumes:
      - data:/app/data
    deploy:
      labels:
        - traefik.enable=true
        - traefik.docker.network=traefik-public
        - traefik.constraint-label=traefik-public
        - traefik.http.routers.kuma-http.rule=Host(`time.bitsandbots.cc`)
        - traefik.http.routers.kuma-http.entrypoints=http
        - traefik.http.routers.kuma-http.middlewares=https-redirect
        - traefik.http.routers.kuma-https.rule=Host(`time.bitsandbots.cc`)
        - traefik.http.routers.kuma-https.entrypoints=https
        - traefik.http.routers.kuma-https.tls=true
        - traefik.http.routers.kuma-https.tls.certresolver=le
        - traefik.http.services.kuma.loadbalancer.server.port=3001

networks:
  traefik-public:
    external: true

volumes:
  data:
```
I just copied your docker-compose file, adjusted the URL to up.paulsblog.dev, and deployed it in my Docker Swarm. It worked out of the box. I guess you either did not follow my tutorial or have something broken in your server cluster setup. Sorry, I don't see any error in your compose file.
I will shut down Uptime Kuma in 10 minutes; it was just a quick test :)
The problem was with overlay networks in virtual machines running on ESXi. There seems to be a bug in the virtual network driver. In my case, disabling checksum offloading helped, and everything started to work as I needed:
`sudo ethtool -K eth0 rx off tx off`
After that it worked.
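Note that ethtool settings do not survive a reboot, so this workaround has to be reapplied at boot on every affected node. One way is a small systemd unit; this is only a sketch, and both the unit name and the interface `eth0` are assumptions to adjust for the actual NIC:

```ini
# /etc/systemd/system/disable-offload.service (hypothetical unit name)
[Unit]
Description=Disable checksum offloading on eth0 (ESXi overlay-network workaround)
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool -K eth0 rx off tx off

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now disable-offload.service` and repeat on each node.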
I tried with a simple nginx service, but worker 1 cannot access services running on any other worker or on the manager. Also, I cannot see all the containers attached to traefik-public; each node only shows the containers running on that particular node.