This gist just shares my own experience, so it might not be the best fit for you, but I hope it helps someone facing a similar problem in the future.
Our team had to shut down one of our GKE clusters, but several development workflows depended on our Nexus repository, which was exposed via a URI. So the team decided to migrate the entire Nexus repository from one GCP project to another and simply point the existing DNS record at the new IP address.
Migrating Nexus itself was easy because it was simply deployed as a pod, but migrating the persistent volume it depends on looked much harder.
I searched through some articles on the internet and found some useful solutions. Below are the sources and the steps I followed.
Hope this can help you.
- how-i-can-migrate-a-persistence-disk-from-one-project-to-another
- Using preexisting persistent disks as PersistentVolumes
I first tried to create an image directly from the existing disk, but it failed because the disk was still attached to an existing node pool.
So instead I took a snapshot of the disk while it was in use.
gcloud compute disks snapshot <your-diskname> --zone=<zone>
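If you don't pass --snapshot-names, gcloud generates a snapshot name for you, so list your snapshots afterwards to find the one that was just created:

gcloud compute snapshots list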
Next, I created a new disk from the snapshot above. I did this through the GCP console UI, but I'm sure you can do it with gcloud as well.
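For reference, I think the gcloud equivalent of that console step would look roughly like this (<restored-disk-name> and <snapshot-name> are just placeholders for the new disk and the snapshot from the previous step):

gcloud compute disks create <restored-disk-name> --source-snapshot=<snapshot-name> --zone=<zone>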
gcloud compute images create <your-image-name> --source-disk=<disk-above> --source-disk-zone=<zone>
The command output will include a URL for the image; you'll need it in the next step.
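If you lose that output, one way to look the URL up again is:

gcloud compute images describe <your-image-name> --format="value(selfLink)"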
gcloud config set project <your-dest-project>
gcloud compute disks create <disk-name-in-dest-project> --image=<image-url-above> --zone=<target-zone>
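To double-check that the disk really exists in the destination project before touching Kubernetes, you can describe it:

gcloud compute disks describe <disk-name-in-dest-project> --zone=<target-zone>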
Now register the migrated disk in the destination cluster as a PersistentVolume, together with a PersistentVolumeClaim that binds to it:

# migrated-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <pv name in your cluster> # <-- change as you need
spec:
  storageClassName: standard # <-- change as you need
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi # <-- change as you need
  claimRef:
    name: <pvc name in your cluster> # <-- change as you need
    namespace: default # <-- change as you need
  gcePersistentDisk:
    fsType: ext4 # <-- change as you need
    pdName: <disk-name-in-dest-project> # <-- the GCE disk created above, not the PV name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <pvc name in your cluster> # <-- change as you need
spec:
  storageClassName: standard # <-- change as you need
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi # <-- change as you need
kubectl apply -f ./migrated-pv.yaml
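Before redeploying Nexus, it's worth checking that the claim actually bound to the new volume, for example:

kubectl get pv
kubectl get pvc -n default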
Finally, point the Nexus Deployment at that claim. The init container just fixes the ownership of /nexus-data for the Nexus user (UID 200) before the main container starts.

# nexus.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nexus
  name: nexus
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nexus
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nexus
    spec:
      containers:
        - image: sonatype/nexus3
          imagePullPolicy: Always
          name: nexus
          ports:
            - containerPort: 8081
              protocol: TCP
          volumeMounts:
            - mountPath: /nexus-data
              name: nexus-data-volume
      initContainers:
        - command:
            - sh
            - -c
            - chown -R 200:200 /nexus-data
          image: busybox
          imagePullPolicy: Always
          name: volume-mount-hack
          volumeMounts:
            - mountPath: /nexus-data
              name: nexus-data-volume
      volumes:
        - name: nexus-data-volume
          persistentVolumeClaim:
            claimName: <your-pvc-name>
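Then apply the manifest (assuming you saved it as nexus.yaml) and wait for the rollout to finish:

kubectl apply -f ./nexus.yaml
kubectl rollout status deployment/nexus -n default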