@Radiergummi
Last active March 18, 2024 19:54
RabbitMQ configuration
#!/usr/bin/env sh
export DOCKER_HOST=unix:///var/run/docker.sock
# Retrieve the list of hosts from the Docker Swarm (this needs to run
# within the Swarm).
# We use this list to configure RabbitMQ statically: on every node
# in the cluster, a RabbitMQ instance is running. The instances are
# configured to use the Swarm node hostname as their hostname, so
# we can assume every cluster host to be a RabbitMQ node, too!
# This is a bit of a hack, but unfortunately using the DNS
# discovery mechanism just isn't possible in Docker Swarm.
docker_hosts=$(docker node ls --format '{{.Hostname}}')
count=0
for host in ${docker_hosts}; do
  count=$((count + 1))
  echo "cluster_formation.classic_config.nodes.${count} = rabbit@rabbitmq-${host}" >> nodes.tmp.txt
done
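# With two hypothetical Swarm nodes named "node-1" and "node-2", the loop
# above would produce a nodes.tmp.txt containing:
#   cluster_formation.classic_config.nodes.1 = rabbit@rabbitmq-node-1
#   cluster_formation.classic_config.nodes.2 = rabbit@rabbitmq-node-2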
lead='^# BEGIN DOCKER NODES$'
tail='^# END DOCKER NODES$'
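# Replace everything between the "BEGIN DOCKER NODES" and "END DOCKER NODES"
# markers in rabbitmq.conf with the freshly generated node list, keeping the
# marker lines themselves so the block can be swapped out again on a re-run.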
sed -e "/${lead}/,/${tail}/{ /${lead}/{p; r nodes.tmp.txt" -e "}; /${tail}/p; d; }" rabbitmq.conf > rabbitmq.tmp.conf
mv rabbitmq.tmp.conf rabbitmq.conf
rm nodes.tmp.txt
# Add the magic OAuth values to the configuration file. Sadly,
# RabbitMQ doesn't have much in terms of secret loading, so this
# is the only way to get our secrets into the app.
echo "management.oauth_client_id = ${RABBITMQ_OAUTH_CLIENT_ID}" >> rabbitmq.conf
echo "management.oauth_provider_url = ${RABBITMQ_OAUTH_PROVIDER_URL}" >> rabbitmq.conf
echo "auth_oauth2.resource_server_id = ${RABBITMQ_OAUTH_RESOURCE_SERVER_ID}" >> rabbitmq.conf
echo "auth_oauth2.jwks_url = ${RABBITMQ_OAUTH_JWKS_URL}" >> rabbitmq.conf
# here, we build the rabbitmq metadata information as a json
# schema we can import during the cluster boot. this will ensure
# our desired user accounts, vhosts, and exchanges exist when the
# cluster is formed.
# see here for more information on the schema definitions:
# https://www.rabbitmq.com/definitions.html#import-on-boot
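# Tip: a running node can export its current state in this same schema with
# "rabbitmqctl export_definitions <file>", which is a handy way to bootstrap
# or cross-check the template below.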
RABBITMQ_VERSION="${RABBITMQ_VERSION:-3.11.9}"
RABBITMQ_PASSWORD_HASH=$(python bin/hash_rabbitmq_password.py "${RABBITMQ_PASSWORD}")
template='{"bindings":[],"exchanges":[],"global_parameters":[],"parameters":[],"permissions":[{"configure":".*","read":".*","user":"%s","vhost":"%s","write":".*"}],"policies":[],"queues":[],"rabbit_version":"%s","rabbitmq_version":"%s","topic_permissions":[],"users":[{"hashing_algorithm":"rabbit_password_hashing_sha256","limits":{},"name":"%s","password_hash":"%s","tags":["administrator"]}],"vhosts":[{"limits":[],"metadata":{"description":"default virtual host","tags":[]},"name":"%s"}]}'
rabbitmq_definitions=$(printf "${template}" \
  "${RABBITMQ_USER}" \
  "${RABBITMQ_VHOST}" \
  "${RABBITMQ_VERSION}" \
  "${RABBITMQ_VERSION}" \
  "${RABBITMQ_USER}" \
  "${RABBITMQ_PASSWORD_HASH}" \
  "${RABBITMQ_VHOST}")
printf '%s\n' "${rabbitmq_definitions}" > rabbitmq_definitions.json
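# Optional sanity check: fail the pipeline early if the rendered definitions
# are not valid JSON. This assumes Python is available in the build image,
# which it already is for the password-hashing step above.
python -m json.tool rabbitmq_definitions.json > /dev/null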
services:
  rabbitmq:
    image: "rabbitmq:${RABBITMQ_VERSION:-3-management}"
    hostname: "rabbitmq-{{.Node.Hostname}}"
    networks:
      - rabbitmq
    expose:
      - 5672  # AMQP
      - 15672 # Web UI
      - 15692 # Metrics
    volumes:
      - rabbitmq-data:/var/lib/rabbitmq
    environment:
      RABBITMQ_NODENAME: "rabbit@rabbitmq-{{.Node.Hostname}}"
    configs:
      - source: rabbitmq_config
        target: /etc/rabbitmq/rabbitmq.conf
        uid: "999"
        gid: "999"
        mode: 0600
    secrets:
      - source: rabbitmq_erlang_cookie
        target: /var/lib/rabbitmq/.erlang.cookie
        uid: "999"
        gid: "999"
        mode: 0600
      - source: rabbitmq_definitions
        target: /etc/rabbitmq/definitions.json
        uid: "999"
        gid: "999"
        mode: 0600
    ulimits:
      nofile:
        soft: 64000
        hard: 64000
    deploy:
      mode: global
      # Make sure to set the condition to "any": RabbitMQ has stopped with
      # exit code 0 in our cluster sometimes, leading to a partially-started
      # service.
      restart_policy:
        condition: any
      # Using this update config, a single instance will be started, then
      # the swarm waits for it to complete the health check, then starts
      # the next.
      # This way, the cluster can be safely formed at the first start.
      update_config:
        parallelism: 1
        monitor: 10s
        order: stop-first
    # The health check is what will be used by the "monitor" directive to
    # monitor service health.
    healthcheck:
      test: rabbitmq-diagnostics -q ping
      interval: 10s
      timeout: 3s
      retries: 30
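# A sketch of the top-level objects the service above references. The names
# (the rabbitmq network, the rabbitmq-data volume, and the rabbitmq_config,
# rabbitmq_erlang_cookie, and rabbitmq_definitions config/secrets) are taken
# from the service definition, but whether they live in files next to the
# stack file or are created externally depends on your deployment, so treat
# the "file:"/"external:" choices below as assumptions.
networks:
  rabbitmq:
    driver: overlay

volumes:
  rabbitmq-data:

configs:
  rabbitmq_config:
    file: ./rabbitmq.conf

secrets:
  rabbitmq_erlang_cookie:
    external: true
  rabbitmq_definitions:
    file: ./rabbitmq_definitions.json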
import base64
import hashlib
import os
import sys
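# Usage: python hash_rabbitmq_password.py <password>
# Implements the salted SHA-256 scheme RabbitMQ calls
# "rabbit_password_hashing_sha256", so the output can be used as the
# "password_hash" value in a definitions file.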
# The plaintext password to hash is passed as the first argument
password = sys.argv[1]
# 1. Generate a random 32-bit (4-byte) salt
salt = os.urandom(4)
# 2. Concatenate it with the UTF-8 representation of the plaintext password
tmp0 = salt + password.encode("utf-8")
# 3. Take the SHA-256 hash and get the raw digest bytes back
tmp1 = hashlib.sha256(tmp0).digest()
# 4. Prepend the salt to the digest
salted_hash = salt + tmp1
# 5. Base64-encode the result
pass_hash = base64.b64encode(salted_hash)
print(pass_hash.decode("utf-8")) # noqa T201
log.default.level = warning
# Sadly, this doesn't work due to limitations in the way RabbitMQ and Docker DNS
# work and communicate; RabbitMQ will perform an A lookup for the given hostname
# and then do a reverse lookup for all IPs in the result.
# The Docker DNS daemon can deliver the IPs of all RabbitMQ tasks, but the
# reverse lookup unfortunately won't return the hostname of the container, so
# RabbitMQ complains (rightfully) that it cannot find matching hosts.
# As we thus cannot use automatic discovery, we'll resort to figuring out the
# cluster nodes at deployment time.
#cluster_formation.peer_discovery_backend = rabbit_peer_discovery_dns
#cluster_formation.dns.hostname = tasks.rabbitmq
cluster_partition_handling = pause_minority
cluster_formation.peer_discovery_backend = classic_config
cluster_formation.discovery_retry_limit = 10
cluster_formation.discovery_retry_interval = 2000
cluster_name = example
# BEGIN DOCKER NODES
### Insert your Docker nodes here during deployment ###
# END DOCKER NODES
vm_memory_high_watermark.relative = 0.7
heartbeat = 3600
# Consumer timeout: how long a delivered message may stay unacknowledged
# before the broker gives up on the consumer, in milliseconds
# (31622400000 ms = 366 days).
consumer_timeout = 31622400000
loopback_users = none
# The definition file is built in the CI pipeline during the deployment step.
# This allows us to pull the credentials from the CI secrets.
definitions.local.path = /etc/rabbitmq/definitions.json
definitions.import_backend = local_filesystem
definitions.skip_if_unchanged = true
# Authentication settings
# We use a two-fold auth approach: RabbitMQ users for queue access, and OAuth2
# for human access to the management UI.
auth_backends.1 = rabbit_auth_backend_oauth2
auth_backends.2 = rabbit_auth_backend_internal
# OAuth stuff here
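# The OAuth-related settings (management.oauth_client_id,
# management.oauth_provider_url, auth_oauth2.resource_server_id, and
# auth_oauth2.jwks_url) are appended to this file by deployment.sh above,
# since their values come from CI secrets.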
@eazylaykzy commented Jul 3, 2023

@Radiergummi lovely compose file, would you please share your config? I have a RabbitMQ cluster set up with Consul as the service discovery, but I've been having issues with the discovery and have been looking at other solutions. I'd like to try dnsrr with Caddy in front of RabbitMQ.

Would be lovely to see what your definitions.json looks like too.

Thank you.

@Radiergummi (Author)

Hi @eazylaykzy, I just added the additional files to the gist. Sadly, it's not as straightforward as you'd probably wish for: we need a build step just to prepare the configuration file and render the definitions (the deployment.sh script above is only part of our full build pipeline, but it should contain everything necessary to get the config right).

Let me know if you have any other questions!

@eazylaykzy

Thank you @Radiergummi, you have a pretty neat definition there. Though it doesn't look straightforward, I believe I can get something from it. I use Ansible for configuration, so I will try to convert most of this to Ansible rules and see if everything works fine.

Working with Consul was pretty straightforward, as it only requires the Consul plugin, but after being up for about a week, the quorum began to act unusually, with conflicting nodes. I hope this will do the magic for me and solve my problem.

I will surely give you a status update, and let you know if I hit a roadblock along the way.

Thank you.
