
@dasniko
Last active March 11, 2024 06:55
How to configure a Keycloak cluster properly (Quarkus edition)

Keycloak Cluster Configuration (How to)

This is a short and simple example of how to build a proper Keycloak cluster, using DNS_PING as the discovery protocol and an NGINX server as a reverse proxy.
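Conceptually, DNS_PING discovers cluster members by resolving the service name to the IP addresses of all replicas. A minimal sketch of that lookup in Python (using `localhost` purely for illustration; inside the Docker network the query would be for the service name `keycloak`):

```python
import socket

def resolve_members(service_name: str) -> list[str]:
    """Return the distinct IP addresses a service name resolves to,
    which is essentially what DNS_PING does to find cluster peers."""
    infos = socket.getaddrinfo(service_name, None, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# Inside the compose network, resolve_members("keycloak") would return
# the IPs of both replicas; here we only demonstrate the mechanism.
print(resolve_members("localhost"))
```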

If you prefer to use JDBC_PING, see @xgp's example gist here: https://gist.github.com/xgp/768eea11f92806b9c83f95902f7f8f80


Please also see my video about Keycloak clustering: http://www.youtube.com/watch?v=P96VQkBBNxU
NOTE: The video covers the JDBC_PING protocol and uses the legacy Keycloak WildFly distribution!

docker-compose.yml:

```yaml
version: '3.8'
services:
  postgres:
    image: postgres:latest
    environment:
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: passw0rd
    volumes:
      - pg-data:/var/lib/postgresql/data
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    command: start-dev -Djgroups.dns.query=keycloak
    environment:
      KC_CACHE: ispn
      KC_CACHE_STACK: kubernetes
      KC_DB: postgres
      KC_DB_URL: jdbc:postgresql://postgres:5432/keycloak
      KC_DB_USERNAME: keycloak
      KC_DB_PASSWORD: passw0rd
      KC_PROXY: edge
      KC_HOSTNAME: localhost
      KC_HOSTNAME_PORT: '8000'
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: admin
    deploy:
      replicas: 2
      endpoint_mode: dnsrr
  lb:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "8000:8000"
volumes:
  pg-data:
    name: keycloak-demo-cluster-data
```
nginx.conf:

```nginx
upstream backend {
    ip_hash;
    server keycloak-1:8080 fail_timeout=2s;
    server keycloak-2:8080 fail_timeout=2s;
}

server {
    listen 8000;
    server_name localhost;
    access_log off;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_pass http://backend;
        proxy_connect_timeout 2s;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}
```
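The `ip_hash` directive gives session stickiness: nginx hashes the client address (for IPv4, only the first three octets) to always pick the same backend for the same client. A simplified Python sketch of the idea (CRC32 is an illustrative stand-in, not nginx's actual hash function):

```python
import zlib

BACKENDS = ["keycloak-1:8080", "keycloak-2:8080"]

def pick_backend(client_ip: str, backends: list[str] = BACKENDS) -> str:
    """Simplified model of nginx's ip_hash backend selection."""
    # nginx's ip_hash considers only the first three octets of an IPv4
    # address, so all clients from the same /24 stick to one backend.
    key = ".".join(client_ip.split(".")[:3])
    return backends[zlib.crc32(key.encode()) % len(backends)]

# Clients from the same /24 network always land on the same node:
assert pick_backend("10.0.0.7") == pick_backend("10.0.0.99")
```

Stickiness matters less once the Infinispan caches replicate sessions across nodes, but it avoids unnecessary remote cache lookups.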
@dasniko commented Mar 12, 2022

If you prefer to use JDBC_PING, see @xgp's example gist here: https://gist.github.com/xgp/768eea11f92806b9c83f95902f7f8f80

@hookenful

@dasniko Hi there! How does DNS resolution work here? Why does the upstream contain "keycloak-1" and "keycloak-2"? Which rule enforces those hostnames?

@dasniko commented Aug 28, 2022

As this is being started with Docker Swarm mode (not stack deploy!), two replicas are started and the names are a concatenation of the service name and a running number, hence -1 and -2. Depending on the environment, the directory name might also be prepended, and/or _ might be used instead of - for concatenation.

@hookenful

Thanks. And what about this setting: -Djgroups.dns.query=keycloak? We are currently using stack deploy to deploy our services. According to your comment, this won't work?

@dasniko commented Sep 3, 2022

keycloak in the JGroups DNS query is the service name that is being registered, not the hostname(s) themselves.
Please refer to the Docker and Swarm docs for how this works.

@dasniko commented Sep 8, 2022

Of course not. If one is able (and willing) to read the introduction text, this becomes clear: this is only an example, and thus it is not meant for production use. Just start thinking about the requirements of your environment and adjust it to your needs.

@dasniko commented Sep 8, 2022

Depends on the philosophy. Technically yes, but if you are in the "zero trust" team, then you want to use HTTPS everywhere.

@hakimnorizman-work

Hi, does this method also work across two different servers?

@keepthemomentum

Hi @dasniko

I need your help configuring a Keycloak cluster. I'm using the custom stack for JDBC_PING to discover the instances, but it's not working.
I have a shared database and am running two separate Keycloak instances in Docker on two different EC2 instances. Do I need to set the IP address of at least one instance for it to discover the others?
Something like this, or in the jdbc-custom stack. Any help on this would be much appreciated.

```
# IP address of this host, please make sure this IP can be accessed by the other Keycloak instances
JGROUPS_DISCOVERY_EXTERNAL_IP=172.21.48.39
# protocol
JGROUPS_DISCOVERY_PROTOCOL=JDBC_PING
```

Thank you!

@mattiashem

> `command: start-dev -Djgroups.dns.query=keycloak`

Can you get it running in non-dev mode?

@mattiashem

This is what I ended up with to get it running in k8s in prod mode:

```yaml
- args:
  - start
  - --proxy
  - edge
  - --hostname-strict=false
  env:
  - name: KC_CACHE
    value: ispn
  - name: KC_CACHE_STACK
    value: kubernetes
  - name: DB_VENDOR
    value: mariadb
  - name: DB_ADDR
    value:
  - name: DB_PORT
    value: "3306"
  - name: DB_DATABASE
    value:
  - name: JDBC_PARAMS
    value: connectTimeout=30000
  - name: JAVA_OPTS
    value: -Djboss.as.management.blocking.timeout=30000
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: KEYCLOAK_ADMIN
    value:
  - name: KEYCLOAK_ADMIN_PASSWORD
    value:
  - name: KEYCLOAK_WELCOME_THEME
    value:
  - name: KEYCLOAK_DEFAULT_THEME
    value:
  - name: KC_HOSTNAME
    value:
  - name: KC_HTTP_ENABLED
    value: "true"
  - name: KC_HEALTH_ENABLED
```

@arjunarisang commented Feb 9, 2023

Hi @mattiashem, can you share full k8s yaml file? Do you use k8s ingress or dedicated nginx for reverse proxy?

@mattiashem

```yaml
kind: StatefulSet
metadata:
  annotations:
  labels:
    app.kubernetes.io/instance: stackcore
    app.kubernetes.io/name: keycloak
  name: keycloak
  namespace: mantiser
spec:
  podManagementPolicy: Parallel
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: keycloak
      app.kubernetes.io/name: keycloak
  serviceName: keycloak-headless
  template:
    metadata:
      annotations:
      labels:
        app.kubernetes.io/instance: keycloak
        app.kubernetes.io/name: keycloak
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app.kubernetes.io/component
                  operator: NotIn
                  values:
                  - test
                matchLabels:
                  app.kubernetes.io/instance: keycloak
                  app.kubernetes.io/name: keycloak
              topologyKey: failure-domain.beta.kubernetes.io/zone
            weight: 100
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app.kubernetes.io/component
                operator: NotIn
                values:
                - test
              matchLabels:
                app.kubernetes.io/instance: keycloak
                app.kubernetes.io/name: keycloak
            topologyKey: kubernetes.io/hostname
      containers:
      - args:
        - start
        - --proxy
        - edge
        - --hostname-strict=false
        env:
        - name: jgroups.dns.query
          value: keycloak-headless.mantiser.svc.cluster.local
        - name: KC_CACHE
          value: ispn
        - name: KC_CACHE_STACK
          value: kubernetes
        - name: DB_VENDOR
          value: mariadb
        - name: DB_ADDR
          value: mysql-admin-pxc.mysql-admin.svc
        - name: DB_PORT
          value: "3306"
        - name: DB_DATABASE
          value: mantiser_keycloak
        - name: JDBC_PARAMS
          value: connectTimeout=30000
        - name: JAVA_OPTS
          value: -Djboss.as.management.blocking.timeout=30000
        - name: PROXY_ADDRESS_FORWARDING
          value: "true"
        - name: KEYCLOAK_ADMIN
          value:
        - name: KEYCLOAK_ADMIN_PASSWORD
          value:
        - name: KEYCLOAK_WELCOME_THEME
          value: mantiser
        - name: KEYCLOAK_DEFAULT_THEME
          value: mantiser
        - name: KC_HOSTNAME
          value: auth.mantiser.com
        - name: KC_HTTP_ENABLED
          value: "true"
        - name: KC_HEALTH_ENABLED
          value: "true"
        envFrom:
        - secretRef:
            name: keycloak-db
        image: mantiser/keycloak:latest
        imagePullPolicy: IfNotPresent
        name: keycloak
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        - containerPort: 8443
          name: https
          protocol: TCP
        - containerPort: 9990
          name: http-management
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health/ready
            port: http
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        securityContext:
          runAsNonRoot: true
          runAsUser: 1000
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/jboss/startup-scripts/keycloak.cli
          name: startup
          readOnly: true
          subPath: keycloak.cli
        - mountPath: /etc/tls
          name: tls
          readOnly: true
      dnsPolicy: ClusterFirst
      enableServiceLinks: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1000
      serviceAccount: keycloak
      serviceAccountName: keycloak
      terminationGracePeriodSeconds: 60
      volumes:
      - name: tls
        secret:
          defaultMode: 420
          optional: false
          secretName: api-mantiser-tls
      - configMap:
          defaultMode: 365
          items:
          - key: keycloak.cli
            path: keycloak.cli
          name: keycloak-startup
        name: startup
  updateStrategy:
    type: RollingUpdate
```

This is a describe on the stateful set. I use Traefik ingress as a reverse proxy

@arjunarisang

Awesome! Thanks a lot @mattiashem

@germanllop

Any workaround with Docker Swarm on v21?

@mattiashem

I use this in docker compose, maybe it can help you in the right direction:

```yaml
mysql:
  image: mysql:5.7
  volumes:
    - ./mysql:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: keycloak
    MYSQL_USER: keycloak
    MYSQL_PASSWORD: password
keycloak:
  image: quay.io/keycloak/keycloak
  environment:
    DB_VENDOR: MYSQL
    DB_ADDR: mysql
    DB_DATABASE: keycloak
    DB_USER: keycloak
    DB_PASSWORD: password
    KEYCLOAK_ADMIN: admin
    KEYCLOAK_ADMIN_PASSWORD: Pa55w0rd
  ports:
    - 8080:8080
    - 9990:9990
  depends_on:
    - mysql
  command: start-dev
```

@germanllop

> I use this in docker compose, maybe it can help you in the right direction […]

Thanks, I've already made it work on Docker with a JDBC cluster.
I was wondering whether there is any workaround to use KC_CACHE_STACK: kubernetes on Docker Swarm. The replicas don't discover each other, and I can't specify static IPs because the replicas can spawn on different nodes; so there is the catch, and KC_CACHE_STACK: kubernetes doesn't find the other Keycloak replicas :(

@bdoublet91 commented Apr 26, 2023

Hi,
You can find what you want here (works for Keycloak 20.0.3): keycloak/keycloak#10210
The main problem for Docker Swarm is node discovery with containers that have multiple IP interfaces. There is also a problem with container healthchecks; it's possible, but harder than Kubernetes :)

@zhandospdf

Hello,
When using the JBoss-based Keycloak versions, it was possible to query the cluster size using jboss-cli.sh.
Does anyone know if it is possible to query (just to confirm) the cluster size in Quarkus-based Keycloak?

@dasniko commented May 25, 2023

Not by default. You'll have to provide your custom Infinispan XML config file with proper values - or, with placeholders to be replaced by environment variable values during runtime.
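For illustration, such a cache definition might look like this. This is a hedged sketch: the cache name matches Keycloak's default cache-ispn.xml, but the CACHE_OWNERS variable is made up for the example, and the exact placeholder syntax depends on your Infinispan version:

```xml
<!-- fragment of a custom conf/cache-ispn.xml;
     CACHE_OWNERS is a hypothetical environment variable -->
<distributed-cache name="sessions" owners="${env.CACHE_OWNERS:2}">
    <expiration lifespan="-1"/>
</distributed-cache>
```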

@zhandospdf commented May 25, 2023

@dasniko thanks for the reply, but I am not sure I got it right. Was your answer about the cache configuration and how we can set the number of owners for a cache?

My original question was more about how can we query the total number of Keycloak nodes present in the cluster at runtime. Any clues about that ?

P.S.
With JBoss I used to do it like this: `/opt/jboss/keycloak/bin/jboss-cli.sh -c --command="/subsystem=jgroups/channel=ee:read-attribute(name=view)"`

@dasniko commented May 25, 2023

@zhandospdf Ok, then I misunderstood your question.
AFAIK there's no out-of-the-box way to do this. Perhaps some custom Java code could help here, but I don't have any details.

@1capedbaldy

Hello. I am new to this. I tried this demo following your steps, but I got a problem: nginx encountered this error and exited:
`nginx: [emerg] host not found in upstream "keycloak-1:8080" in /etc/nginx/conf.d/default.conf:2`
Can you help me with this?

@dasniko commented Aug 3, 2023

Replace keycloak-1 (and keycloak-2) with the hostnames of the Keycloak containers as they are reachable from within the Docker network. Usually (by default) this is <name-of-the-folder>-<servicename>-<running-number-of-replica>.

@PreeteshShettigar commented Oct 6, 2023

I recently upgraded from Keycloak 18 to version 21.1.0. In my previous setup, Keycloak was running with the cache stack set to kubernetes across multiple pods. However, after the upgrade, I encountered an issue where JGroups was unable to locate the other nodes.

@samorganist

Hello, does anybody know how to share the cache between hosts that run the same docker-compose as in this example?
I have this config:
Server 1 & 2: running docker-compose
External database
Load balancer: switches between the two servers

The cache is shared only between the instances running on the same server, so if you have a solution for this it would be very helpful.
