@perfectra1n
Last active November 19, 2023 22:59
Migrating to new major PostgreSQL version with CNPG

  1. Create new cluster with the monolith bootstrap above
  2. Cut over all Postgres services after it’s done initializing
  3. Wave goodbye and pray for the lost data in the 5 minutes it took to cut over (could’ve cut over earlier, but I was worried it was going to have a stroke when initializing, so I waited)

Create new cluster with the monolith bootstrap above

First, we need to create the new cluster that's going to replace the old one. The bootstrap section is the important part: it's what yoinks all the data from the previous cluster.

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster16
  labels:
    name: cluster16
    usecase: databases
  namespace: databases
spec:
  instances: 2
  primaryUpdateStrategy: unsupervised
  imageName: ghcr.io/cloudnative-pg/postgresql:16.1-1
  monitoring:
    enablePodMonitor: true

  postgresql:
    parameters:
      max_connections: "600"
      shared_buffers: "512MB"

  storage:
    size: 50Gi
    storageClass: local-path

  superuserSecret:
    name: postgres-creds
  enableSuperuserAccess: true
  # Note: Bootstrap is needed when recovering from an existing cnpg cluster
  bootstrap:
    initdb:
      import:
        type: monolith
        databases: ["*"]
        roles: ["*"]
        source:
          externalCluster: old-main-cluster

  externalClusters:
    - name: old-main-cluster
      connectionParameters:
        # Use the correct IP or host name for the source database
        host: 10.11.0.75
        user: postgres
        dbname: postgres
        sslmode: require
      password:
        name: postgres-creds
        key: password

  backup:
    barmanObjectStore:
      destinationPath: "s3://cloudnative-pg-backups"
      endpointURL: "https://s3.stuff.com"
      serverName: "cluster16"
      s3Credentials:
        accessKeyId:
          name: aws-creds
          key: MINIO_ACCESS_KEY
        secretAccessKey:
          name: aws-creds
          key: MINIO_SECRET_KEY
      wal:
        compression: gzip
        maxParallel: 8
      data:
        compression: gzip
    retentionPolicy: "30d"

---
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: cluster16
  namespace: databases
spec:
  schedule: "0 0 2 * * *"
  immediate: true
  backupOwnerReference: self
  cluster:
    name: cluster16
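The import can take a while on a big database, so it's worth keeping an eye on the new cluster while it bootstraps. A sketch of how I'd watch it; the `kubectl cnpg status` line assumes you have the cnpg kubectl plugin installed, and the plain `kubectl get` line works without it:

```shell
NS=databases
CLUSTER=cluster16

# Check the cluster's phase; re-run (or add -w) until it reports healthy
kubectl -n "$NS" get cluster "$CLUSTER"

# Richer status view, if the cnpg kubectl plugin is installed
kubectl cnpg status "$CLUSTER" -n "$NS"

# Peek at logs from the cluster's pods via the cnpg.io/cluster label
# (|| true tolerates the pods not existing yet while bootstrapping)
kubectl -n "$NS" logs -l cnpg.io/cluster="$CLUSTER" --tail=20 || true
```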

Now you have two choices here:

  1. Cut over all the services now, without waiting for the new cluster to be up (to try to minimize data loss?)
  2. Wait a few minutes, and potentially lose whatever data is written in that window (even my pretty active DB didn't have any new data over those few minutes)

Cut over all Postgres services after it’s done initializing

MAKE SURE TO ADD THE _**-rw**_ PART

Wherever

main-cluster-rw.databases.svc.newcluster.local

was used, it had to be changed to

cluster16-rw.databases.svc.newcluster.local

Make sure to change it in both the GitOps deployments and in the secrets. For any deployments that rely on those secrets, make sure to restart them as well.
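For the secrets piece, something like this works; the secret name `app-db-creds`, the key `DB_HOST`, and the deployment `my-app` are all hypothetical placeholders here — substitute whatever your apps actually read:

```shell
NS=databases
NEW_HOST="cluster16-rw.${NS}.svc.newcluster.local"

# Hypothetical secret/key names; substitute your own
kubectl -n "$NS" patch secret app-db-creds \
  --type merge -p "{\"stringData\":{\"DB_HOST\":\"${NEW_HOST}\"}}"

# Restart anything that consumes the secret so it picks up the new host
kubectl -n "$NS" rollout restart deployment my-app

echo "cut over to ${NEW_HOST}"
```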

Now you just commit / restart all the deployments that use Postgres, and pray 🙏
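Once everything restarts, a quick sanity check that you're really talking to the new 16.x cluster doesn't hurt. A sketch using the postgres-creds superuser secret from the manifest above (run this from somewhere that can reach the service, e.g. a pod with psql, or after a port-forward):

```shell
NS=databases
HOST="cluster16-rw.${NS}.svc.newcluster.local"

# Pull the superuser password from the postgres-creds secret defined above
PGPASSWORD="$(kubectl -n "$NS" get secret postgres-creds \
  -o jsonpath='{.data.password}' | base64 -d)"
export PGPASSWORD

# Should report PostgreSQL 16.x
if psql -h "$HOST" -U postgres -d postgres -c 'SELECT version();'; then
  echo "new cluster is serving on ${HOST}"
fi
```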
