Migrate Patroni PostgreSQL cluster to cloudnative-pg
Prerequisites
A stable network connection from the cnpg nodes to the Patroni cluster
The same PostgreSQL major version on both sides (for example 15.1; the cnpg image tag must correspond to it)
Empty files named custom.conf and override.conf inside the pgdata folder (the folder containing postgresql.conf) on all Patroni nodes
A user named exactly streaming_replica with the REPLICATION attribute (cnpg requires a user with that exact name to exist and will fail otherwise; a creation sketch follows the config below)
Patroni dynamic config:
postgresql:
  parameters:
    listen: "*"
    max_wal_senders: 5
    unix_socket_directories: "/controller/run"
  pg_hba:
    - host replication replicator 127.0.0.1/32 md5
    # ...other lines for patroni replication...
    - host replication streaming_replica 0.0.0.0/0 md5
    - host all all 127.0.0.1/32 md5
    - host all all 0.0.0.0/0 md5
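If the replication user does not exist yet, it can be created on the Patroni primary with something like the following (the password is a placeholder; it must match the secret the cnpg manifest will reference):

```sql
-- Run on the Patroni primary. The role name must be exactly streaming_replica;
-- the password here is a placeholder.
CREATE ROLE streaming_replica WITH LOGIN REPLICATION PASSWORD 'change-me';
```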
Creating cloudnative-pg cluster in replica mode
Configure the cnpg cluster to connect to Patroni via pg_basebackup, using that source database both for bootstrapping and as the replica-mode source (a minimal manifest sketch follows)
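A minimal sketch of such a manifest, assuming an illustrative cluster name, a Patroni endpoint reachable at patroni.example.com, and a secret patroni-replica-password holding the streaming_replica password (all three names are placeholders):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-migrated                # placeholder name
spec:
  instances: 3
  # Tag must match the Patroni PostgreSQL version (see prerequisites).
  imageName: ghcr.io/cloudnative-pg/postgresql:15.1
  bootstrap:
    pg_basebackup:
      source: patroni
  replica:
    enabled: true
    source: patroni
  externalClusters:
    - name: patroni
      connectionParameters:
        host: patroni.example.com       # placeholder endpoint
        user: streaming_replica
        dbname: postgres
      password:
        name: patroni-replica-password  # placeholder secret
        key: password
  storage:
    size: 10Gi
```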
After starting the cluster in replica mode, the initial bootstrap pod should succeed, but the first cnpg instance pod will then fail to start
Find the node the first pod is scheduled on, SSH into it, and cd into the directory that holds the pgdata volume (/var/lib/rancher/k3s/storage/pvc-*/pgdata when using the k3s local-path provisioner)
Inside the pgdata volume, edit postgresql.conf (the resulting lines are shown after this list):
change hba_file path from /var/lib/postgresql/15/main/pg_hba.conf to /var/lib/postgresql/data/pgdata/pg_hba.conf
change ident_file path from /var/lib/postgresql/15/main/pg_ident.conf to /var/lib/postgresql/data/pgdata/pg_ident.conf
add include 'custom.conf' at the end of the file
add include 'override.conf' at the end of the file
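After these edits, the relevant lines of postgresql.conf should read:

```
hba_file = '/var/lib/postgresql/data/pgdata/pg_hba.conf'
ident_file = '/var/lib/postgresql/data/pgdata/pg_ident.conf'
include 'custom.conf'
include 'override.conf'
```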
Restart the pod and it should start up correctly; afterwards all of the other pods will replicate from the first one as well
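Deleting the pod is enough to restart it: the operator recreates it against the same PVC, so the edits above survive (the pod name is a placeholder; the first instance is typically <cluster-name>-1):

```sh
kubectl delete pod cluster-migrated-1
```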
Using the new cnpg-cluster
Disable replica mode
Enable SuperUser access
Apply the updated Cluster YAML (see the sketch below) and wait for all instances to restart
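In the Cluster manifest from earlier, this amounts to two field changes (a sketch; enableSuperuserAccess tells the operator to create and maintain the superuser secret):

```yaml
spec:
  enableSuperuserAccess: true   # operator generates the <cluster-name>-superuser secret
  replica:
    enabled: false              # detaches from Patroni and promotes the designated primary
```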
Connect to the database using the generated superuser secret (use kubectl port-forward to forward the connection; a sketch follows)
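A sketch of one way to connect, reusing the placeholder cluster name (cnpg names the generated secret <cluster-name>-superuser and the read-write service <cluster-name>-rw):

```sh
# Read the generated superuser password.
kubectl get secret cluster-migrated-superuser \
  -o jsonpath='{.data.password}' | base64 -d; echo
# Forward the primary service to localhost, then connect with psql.
kubectl port-forward svc/cluster-migrated-rw 5432:5432 &
psql -h 127.0.0.1 -U postgres
```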
Run ALTER DATABASE template1 REFRESH COLLATION VERSION;
Run these two commands on ALL of the databases in the cluster, connecting to each database in turn, since REINDEX DATABASE only operates on the database you are connected to (a loop sketch follows the commands):
REINDEX DATABASE <db_name>;
ALTER DATABASE <db_name> REFRESH COLLATION VERSION;
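One way to cover every database is to loop over all connectable databases through the forwarded port from the previous step; a sketch:

```sh
for db in $(psql -h 127.0.0.1 -U postgres -Atc \
    "SELECT datname FROM pg_database WHERE datallowconn"); do
  psql -h 127.0.0.1 -U postgres -d "$db" \
    -c "REINDEX DATABASE \"$db\";" \
    -c "ALTER DATABASE \"$db\" REFRESH COLLATION VERSION;"
done
```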
Optionally, disable superuser access again if it is no longer needed