@dasniko
Last active March 11, 2024 06:55
How to configure a Keycloak cluster properly (Quarkus edition)

Keycloak Cluster Configuration (How to)

This is a short, simple example of how to build a proper Keycloak cluster, using DNS_PING as the discovery protocol and an NGINX server as a reverse proxy.

If you prefer to use JDBC_PING, see @xgp's example gist here: https://gist.github.com/xgp/768eea11f92806b9c83f95902f7f8f80


Please also see my video about Keycloak Clustering: http://www.youtube.com/watch?v=P96VQkBBNxU
NOTE: The video covers JDBC_PING protocol and uses the legacy Keycloak Wildfly distribution!
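DNS_PING discovers cluster members by resolving all A/AAAA records of a service name - here the `keycloak` service, queried via `-Djgroups.dns.query=keycloak`. Docker's embedded DNS returns one record per replica because the service uses `endpoint_mode: dnsrr`. A minimal sketch of that lookup (using `localhost` as a stand-in for the service name so the snippet is runnable anywhere):

```python
import socket

def discover_members(service_name: str) -> list[str]:
    """Resolve all A/AAAA records for a service name, as DNS_PING does."""
    infos = socket.getaddrinfo(service_name, None, proto=socket.IPPROTO_TCP)
    # Behind a dnsrr endpoint, each replica shows up as its own record.
    return sorted({info[4][0] for info in infos})

# Inside the compose network you would query "keycloak"; "localhost" is
# used here only so the sketch works outside the cluster.
print(discover_members("localhost"))
```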

docker-compose.yml:

version: '3.8'
services:
  postgres:
    image: postgres:latest
    environment:
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: passw0rd
    volumes:
      - pg-data:/var/lib/postgresql/data
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    command: start-dev -Djgroups.dns.query=keycloak
    environment:
      KC_CACHE: ispn
      KC_CACHE_STACK: kubernetes
      KC_DB: postgres
      KC_DB_URL: jdbc:postgresql://postgres:5432/keycloak
      KC_DB_USERNAME: keycloak
      KC_DB_PASSWORD: passw0rd
      KC_PROXY: edge
      KC_HOSTNAME: localhost
      KC_HOSTNAME_PORT: '8000'
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: admin
    deploy:
      replicas: 2
      endpoint_mode: dnsrr
  lb:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "8000:8000"
volumes:
  pg-data:
    name: keycloak-demo-cluster-data
nginx.conf:

upstream backend {
    ip_hash;
    server keycloak-1:8080 fail_timeout=2s;
    server keycloak-2:8080 fail_timeout=2s;
}
server {
    listen 8000;
    server_name localhost;
    access_log off;
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_pass http://backend;
        proxy_connect_timeout 2s;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}
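The `ip_hash` directive pins each client IP to the same upstream, which keeps a user on one Keycloak node (helpful while session caches are still replicating). A rough illustration of the idea - not nginx's actual implementation, though like nginx it hashes only the first three octets of an IPv4 address, so clients in the same /24 land on the same backend:

```python
import hashlib

BACKENDS = ["keycloak-1:8080", "keycloak-2:8080"]  # names from nginx.conf

def pick_backend(client_ip: str) -> str:
    """Deterministically map a client IP to one backend, ip_hash-style."""
    # Hash the first three octets, as nginx's ip_hash does for IPv4.
    key = ".".join(client_ip.split(".")[:3])
    digest = hashlib.sha256(key.encode()).digest()
    return BACKENDS[digest[0] % len(BACKENDS)]

# The same client (and its /24 neighbors) always reaches the same node:
assert pick_backend("203.0.113.7") == pick_backend("203.0.113.99")
```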
@dasniko (Author) commented May 25, 2023

Not by default. You'll have to provide a custom Infinispan XML config file with the proper values - or with placeholders that are replaced by environment variable values at runtime.
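For illustration, such a fragment (file and property names are made up here, modeled on the distributed caches in Keycloak's default cache-ispn.xml) could set the number of owners via a placeholder that Infinispan resolves from a system property, falling back to a default of 2. The file is then passed to Keycloak with `--cache-config-file`, and the property e.g. via `JAVA_OPTS_APPEND=-Dcache.owners=3`:

```xml
<!-- conf/cache-ispn-custom.xml (hypothetical name) -->
<distributed-cache name="sessions" owners="${cache.owners:2}">
  <expiration lifespan="-1"/>
</distributed-cache>
```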

@zhandospdf commented May 25, 2023

@dasniko thanks for the reply, but I'm not sure I got it right. Was your answer about the cache configuration and how we can set the number of owners for a cache?

My original question was more about how we can query the total number of Keycloak nodes present in the cluster at runtime. Any clues about that?

P.S.
With Jboss I used to do it like this: /opt/jboss/keycloak/bin/jboss-cli.sh -c --command="/subsystem=jgroups/channel=ee:read-attribute(name=view)"

@dasniko (Author) commented May 25, 2023

@zhandospdf Ok, then I misunderstood your question.
AFAIK there's no ootb way to do this. Perhaps some custom java code could help here, but I don't have any details.

@1capedbaldy commented:
Hello, I am new to this. I tried this demo following your steps, but nginx failed with this error and exited:
nginx: [emerg] host not found in upstream "keycloak-1:8080" in /etc/nginx/conf.d/default.conf:2
Can you help me with this?

@dasniko (Author) commented Aug 3, 2023

Replace keycloak-1 (and keycloak-2) with the hostnames of the Keycloak containers as they are reachable from within the Docker network. Usually (by default) this is <name-of-the-folder>-<servicename>-<running-number-of-replica>.
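Alternatively (an untested sketch), nginx can resolve the compose service name through Docker's embedded DNS at request time instead of hardcoding container names, which also avoids the startup failure when an upstream host doesn't exist yet:

```nginx
# Replaces the static upstream block; 127.0.0.11 is Docker's embedded DNS.
resolver 127.0.0.11 valid=10s;

server {
    listen 8000;
    location / {
        set $upstream http://keycloak:8080;  # compose service name
        proxy_pass $upstream;                # re-resolved per request
    }
}
```

Note that this trades away the `ip_hash` stickiness of the static upstream block, so it's only a fit when clients don't need to be pinned to one node.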

@PreeteshShettigar commented Oct 6, 2023

I recently upgraded from Keycloak 18 to version 21.1.0. In my previous setup, Keycloak was running with the cache stack kubernetes across multiple pods. However, after the upgrade, I encountered an issue where JGroups was unable to locate the other nodes.

@samorganist commented:
Hello, does anybody know how to share the cache between hosts that run the same docker-compose setup, as in this example?
I have this configuration:
Server 1 & 2: running docker-compose
External database
Load balancer: switches between the two servers

The cache is only shared between the instances running on the same server, so if you have a solution for this, it would be very helpful to me.
