Create an NFS share on an Ubuntu system (see: HowTo - Create NFS Share on Ubuntu).
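As a minimal sketch of the NFS side (assuming the export path /kubernetes-volumes used in the PersistentVolume config below; adjust names and options to your environment), the share can be set up roughly like this:

```shell
# On the NFS server (Ubuntu). Assumes the export path used later in the PV config.
sudo apt-get install -y nfs-kernel-server
sudo mkdir -p /kubernetes-volumes
sudo chown nobody:nogroup /kubernetes-volumes

# Export the directory; tighten the client spec (*) to your cluster subnet if possible
echo '/kubernetes-volumes *(rw,sync,no_subtree_check,no_root_squash)' | sudo tee -a /etc/exports

# Reload the export table and verify the export is active
sudo exportfs -ra
sudo exportfs -v
```

The no_root_squash option is only a convenience for testing; for anything beyond a lab setup, prefer mapping to a dedicated UID/GID instead.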
Create the YAML configuration required by Kubernetes for deploying PostgreSQL. The sections below together make up the full YAML config file.
Create a PersistentVolume that will be used by a PersistentVolumeClaim, which in turn will be used by PostgreSQL to store its data. The path in the config below is the path you added to /etc/exports on your NFS server, and the server is the address of the host on which the NFS server runs. Ensure the storageClassName is manual.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pgset-pv
  labels:
    app: pgset
spec:
  storageClassName: manual
  capacity:
    storage: 150M
  accessModes:
    - ReadWriteMany
  nfs:
    path: /kubernetes-volumes
    server: 172.16.17.10
  persistentVolumeReclaimPolicy: Retain
---
Create a PersistentVolumeClaim as below. Ensure the accessModes value is exactly the same as in the PersistentVolume, and the storageClassName is manual.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgset-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100M
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pgset-sa
---
apiVersion: v1
kind: Service
metadata:
  name: pgset
  labels:
    app: pgset
spec:
  # Comment1: For troubleshooting, if you need to connect to the PostgreSQL
  # cluster from outside the Kubernetes cluster, comment out the line
  # `clusterIP: None`, uncomment the line `type: NodePort`, and also
  # uncomment the line `nodePort: 30100` further below.
  clusterIP: None
  #type: NodePort
  ports:
  - port: 5432
    name: web
    #nodePort: 30100
  selector:
    app: pgset
---
apiVersion: v1
kind: Service
metadata:
  name: pgset-primary
  labels:
    name: pgset-primary
spec:
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
    nodePort: 0
  selector:
    name: pgset-primary
  type: ClusterIP
  #type: NodePort
  sessionAffinity: None
---
apiVersion: v1
kind: Service
metadata:
  name: pgset-replica
  labels:
    name: pgset-replica
spec:
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
    nodePort: 0
  selector:
    name: pgset-replica
  type: ClusterIP
  sessionAffinity: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pgset
spec:
  selector:
    matchLabels:
      app: pgset # has to match .spec.template.metadata.labels
  serviceName: pgset
  replicas: 2
  template:
    metadata:
      labels:
        app: pgset
        name: pgset-replica
    spec:
      serviceAccountName: pgset-sa
      securityContext:
        fsGroup: 26
      containers:
      - name: pgset
        image: crunchydata/crunchy-postgres:centos7-10.2-1.8.0
        ports:
        - containerPort: 5432
          name: postgres
        env:
        - name: PG_PRIMARY_USER
          value: primaryuser
        - name: PGHOST
          value: /tmp
        - name: PG_MODE
          value: set
        - name: PG_PRIMARY_HOST
          value: pgset-primary
        - name: PG_PRIMARY_PORT
          value: "5432"
        - name: PG_PRIMARY_PASSWORD
          value: password
        - name: PG_USER
          value: testuser
        - name: PG_PASSWORD
          value: password
        - name: PG_DATABASE
          value: userdb
        - name: PG_ROOT_PASSWORD
          value: password
        volumeMounts:
        - name: pgdata
          mountPath: /pgdata
          readOnly: false
      volumes:
      - name: pgdata
        persistentVolumeClaim:
          claimName: pgset-pvc
Assuming you saved the entire YAML as postgres-deployment.yaml, execute it as follows:
kubectl create -f postgres-deployment.yaml
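After creating the resources, you can watch the StatefulSet pods come up. With the StatefulSet named pgset and replicas set to 2 as above, Kubernetes will name the pods pgset-0 and pgset-1:

```shell
# Watch the StatefulSet pods (pgset-0, pgset-1) until both show Running
kubectl get pods -l app=pgset -w

# Confirm the StatefulSet reports 2/2 ready replicas
kubectl get statefulset pgset
```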
If you hit a problem, here are a few steps to follow to troubleshoot and find the cause.
- Find the Pod name for your PostgreSQL primary or replica instance:
  kubectl get po
- Check the logs of each pod to see what the problem might be:
  kubectl logs <pod name>
- If any log messages indicate a problem with the storage of the PostgreSQL files, the cause may be the PV or PVC. Check the status of both the PersistentVolume and the PersistentVolumeClaim; both should show as Bound. If not, the NFS share may not be allowing writes, so confirm that you have the proper access rights on the NFS server.
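The PV and PVC status checks above can be run as follows (the names pgset-pv and pgset-pvc come from the manifests in this document):

```shell
# STATUS should show Bound for both objects
kubectl get pv pgset-pv
kubectl get pvc pgset-pvc

# The Events section at the bottom of the output often pinpoints
# NFS mount or permission problems
kubectl describe pvc pgset-pvc
```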
- Maybe all is good but some database query is failing, and you want to peek into the database using a UI tool like pgAdmin or SQL Workbench/J. To temporarily connect to your database, follow the instructions in Comment1 in the section above that creates the Service for Postgres, just after the ServiceAccount is created.
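Once the Service has been switched to NodePort as described in Comment1, a connection from outside the cluster can be sketched like this (assuming nodePort 30100 from the Service comment, the testuser/userdb/password values from the StatefulSet env vars, and that <node-ip> is the address of any Kubernetes node):

```shell
# <node-ip> is the IP of any cluster node; 30100 is the nodePort from the Service.
# testuser / userdb / password come from PG_USER / PG_DATABASE / PG_PASSWORD above.
PGPASSWORD=password psql -h <node-ip> -p 30100 -U testuser userdb -c 'SELECT version();'
```

The same host/port/credentials work in pgAdmin or SQL Workbench/J. Remember to revert the Service to `clusterIP: None` when you are done troubleshooting.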