@pedroigor
Last active August 5, 2022 14:00
Keycloak.X k8s spec
apiVersion: v1
kind: Service
metadata:
  name: keycloak-postgres
  labels:
    service: keycloak
    layer: security
spec:
  ports:
    - port: 5432
  selector:
    service: keycloak-postgres
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: keycloak-postgres
  labels:
    service: keycloak-postgres
    layer: security
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/storage/pv-keycloak-postgres
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: keycloak-postgres
  labels:
    service: keycloak
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      service: keycloak-postgres
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak-postgres
  labels:
    service: keycloak-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      service: keycloak-postgres
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: keycloak-postgres
    spec:
      containers:
        - image: postgres
          name: keycloak-postgres
          env:
            - name: POSTGRES_DB
              value: keycloak
            - name: POSTGRES_USER
              value: keycloak
            - name: POSTGRES_PASSWORD
              value: password
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-persistent-storage
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-persistent-storage
          persistentVolumeClaim:
            claimName: keycloak-postgres
---
kind: Service
apiVersion: v1
metadata:
  name: keycloak
  labels:
    service: keycloak
spec:
  ports:
    - port: 8443
      name: https
    - port: 8080
      name: http
  selector:
    service: keycloak
    layer: security
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  labels:
    service: keycloak
    layer: security
spec:
  selector:
    matchLabels:
      service: keycloak
      layer: security
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: keycloak
        layer: security
    spec:
      containers:
        - image: quay.io/keycloak/keycloak-x:16.1.0
          imagePullPolicy: IfNotPresent
          args: ["-Djgroups.dns.query=keycloak-jgroups-ping.keycloak.svc.cluster.local", "start", "--auto-build", "--cache-stack=kubernetes", "--db=postgres", "--db-url=jdbc:postgresql://keycloak-postgres/keycloak", "--db-username=keycloak", "--db-password=password", "--hostname=keycloak.apps.munerasoft.com", "--proxy=edge", "--spi-sticky-session-encoder-infinispan-should-attach-route=false", "--hostname-strict-https=false"]
          name: keycloak
          resources:
            limits:
              cpu: 3
              memory: 512Mi
            requests:
              cpu: 500m
              memory: 512Mi
          ports:
            - containerPort: 8443
            - containerPort: 8080
            - containerPort: 4444
            - containerPort: 8888
          env:
            - name: KEYCLOAK_ADMIN
              value: admin
            - name: KEYCLOAK_ADMIN_PASSWORD
              value: admin
            - name: JAVA_OPTS
              value: -Xms128m -Xmx128m -XX:MetaspaceSize=128M -XX:MaxMetaspaceSize=128m -XX:ParallelGCThreads=2 -XX:ConcGCThreads=2 -Djava.net.preferIPv4Stack=true -Djava.security.egd=file:/dev/./urandom -Xlog:gc* -XX:NewRatio=1 -XX:MaxGCPauseMillis=10 -Djgroups.dns.query=keycloak-jgroups-ping.keycloak.svc.cluster.local -Dquarkus.vertx.worker-pool-size=5 -Dquarkus.vertx.event-loops-pool-size=2
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  labels:
    service: keycloak
    layer: security
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "KC_SC"
    nginx.ingress.kubernetes.io/session-cookie-secure: "true"
    nginx.ingress.kubernetes.io/session-cookie-change-on-failure: "false"
    nginx.ingress.kubernetes.io/affinity-mode: "balanced"
spec:
  rules:
    - host: keycloak.apps.munerasoft.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: keycloak
                port:
                  number: 8080
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: keycloak
  name: keycloak-jgroups-ping
spec:
  clusterIP: None
  ports:
    - port: 4444
      name: ping
      protocol: TCP
      targetPort: 4444
  selector:
    service: keycloak
  sessionAffinity: None
  type: ClusterIP
@anish-dcruz

@pedroigor awesome, many thanks. I noticed that with two replicas, when one of the pods goes down, the replacement pod does not join the existing pod's cluster. There appears to be a problem with Infinispan connectivity when running multiple replicas.

Below are the logs from the third pod after I delete one of the existing pods.

Changes detected in configuration. Updating the server image.
Updating the configuration and installing your custom providers, if any. Please wait.
2022-04-07 23:09:11,577 INFO [io.quarkus.deployment.QuarkusAugmentor] (main) Quarkus augmentation completed in 12931ms
Server configuration updated and persisted. Run the following command to review the configuration:
kc.sh show-config
Next time you run the server, just run:
kc.sh start --hostname=auth.codekerala.local --proxy=edge --spi-sticky-session-encoder-infinispan-should-attach-route=false --hostname-strict-https=false
2022-04-07 23:09:17,103 INFO [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: FrontEnd: auth.codekerala.local, Strict HTTPS: false, Path: <request>, Strict BackChannel: false, Admin: <request>, Port: -1, Proxied: true
2022-04-07 23:09:18,583 WARN [org.infinispan.CONFIG] (keycloak-cache-init) ISPN000569: Unable to persist Infinispan internal caches as no global state enabled
2022-04-07 23:09:18,629 WARN [org.infinispan.PERSISTENCE] (keycloak-cache-init) ISPN000554: jboss-marshalling is deprecated and planned for removal
2022-04-07 23:09:18,678 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000556: Starting user marshaller 'org.infinispan.jboss.marshalling.core.JBossUserMarshaller'
2022-04-07 23:09:19,141 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000128: Infinispan version: Infinispan 'Triskaidekaphobia' 13.0.6.Final
2022-04-07 23:09:19,307 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000078: Starting JGroups channel `ISPN`
2022-04-07 23:09:21,454 INFO [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) keycloak-5cddf9f74d-q97xk-36026: no members discovered after 2004 ms: creating cluster as coordinator
2022-04-07 23:09:21,469 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000094: Received new cluster view for channel ISPN: [keycloak-5cddf9f74d-q97xk-36026|0] (1) [keycloak-5cddf9f74d-q97xk-36026]
2022-04-07 23:09:21,476 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000079: Channel `ISPN` local address is `keycloak-5cddf9f74d-q97xk-36026`, physical addresses are `[172.17.0.12:7800]`
2022-04-07 23:09:22,249 INFO [org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory] (main) Node name: keycloak-5cddf9f74d-q97xk-36026, Site name: null
2022-04-07 23:09:23,126 ERROR [org.keycloak.services] (main) KC-SERVICES0010: Failed to add user 'admin' to realm 'master': user with username exists
2022-04-07 23:09:23,264 INFO [io.quarkus] (main) Keycloak 17.0.1 on JVM (powered by Quarkus 2.7.5.Final) started in 11.453s. Listening on: http://0.0.0.0:8080
2022-04-07 23:09:23,264 INFO [io.quarkus] (main) Profile prod activated.
2022-04-07 23:09:23,265 INFO [io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm, jdbc-h2, jdbc-mariadb, jdbc-mssql, jdbc-mysql, jdbc-oracle, jdbc-postgresql, keycloak, narayana-jta, reactive-routes, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-health, smallrye-metrics, vault, vertx]

@anish-dcruz

@pedroigor After I updated -Djgroups.dns.query=keycloak-jgroups-ping.keycloak.svc.cluster.local to -Djgroups.dns.query=keycloak-jgroups-ping and changed the port in the keycloak-jgroups-ping Service to 7800, clustering worked.

I also noticed a 500 gateway error when one pod goes down, but it resolves itself after a few minutes.
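
For anyone else hitting this, a minimal sketch of the headless Service with those changes applied. Port 7800 matches the JGroups TCP bind port shown in the logs above, and the shorter DNS query assumes Keycloak runs in the same namespace as this Service:

# Sketch only: headless Service adjusted for JGroups discovery on 7800.
# Pair it with -Djgroups.dns.query=keycloak-jgroups-ping in the keycloak Deployment args.
apiVersion: v1
kind: Service
metadata:
  name: keycloak-jgroups-ping
  labels:
    service: keycloak
spec:
  clusterIP: None
  ports:
    - name: ping
      port: 7800
      protocol: TCP
      targetPort: 7800
  selector:
    service: keycloak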

Thanks

@samstride

@pedroigor, should we now be using the keycloak image instead of keycloak-x?

Also, when I tried the above YAML, it didn't fully work for me.

When I click on Admin in the initial UI, it seems to add :80 to the end of the hostname, i.e. https://some.host.com:80/admin/. I have TLS terminating externally.

Is there a setting I am missing?
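
For context, this is a sketch of the hostname-related args I am experimenting with in the keycloak container. The host is a placeholder and the --hostname-port value is my own assumption (Keycloak 17+ option names), not something from the gist above:

# Sketch only: hostname-related args with TLS terminating at the external proxy.
args:
  - "start"
  - "--auto-build"
  - "--proxy=edge"
  - "--hostname=some.host.com"        # placeholder hostname
  - "--hostname-strict-https=false"
  - "--hostname-port=443"             # assumption: pins the port used in generated URLs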

Thanks.
