GKE PV migration to another project

This gist just shares my experience, so it might not be the best fit for you, but I hope it helps someone facing a similar problem in the future.

Background

Our team had to shut down one of our GKE clusters, but some development workflows depended on our Nexus repository, which was exposed via a URI. So the team decided to migrate the entire Nexus repository from one GCP project to another and simply point the DNS record at the new IP address.

Migrating Nexus itself was easy because it was simply deployed as a pod, but migrating the persistent volume it depends on looked quite difficult.

I searched through some articles on the internet and found some useful solutions. Below are the steps I followed.

Hope this can help you.

1. Snapshot your disk

I tried to make an image of the existing disk directly but failed because the disk was in use by an existing node pool.

So I took a snapshot of the disk while it was in use.

gcloud compute disks snapshot <your-diskname> --zone=<zone>
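
By default, gcloud generates a name for the snapshot. If you want a predictable name to reference in the next step, the --snapshot-names flag should do it, and you can confirm the result afterwards (placeholder names, adjust to your setup):

gcloud compute disks snapshot <your-disk-name> --zone=<zone> --snapshot-names=<snapshot-name>
gcloud compute snapshots list --filter="name=<snapshot-name>"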

2. Make a disk from the snapshot

I created a disk from the snapshot above. I did this through the GCP console UI, but it can also be done with gcloud.
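
For reference, the gcloud equivalent should look roughly like this (I used the UI myself, so treat it as an untested sketch with placeholder names):

gcloud compute disks create <disk-from-snapshot> --source-snapshot=<snapshot-name> --zone=<zone>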

3. Make an image from the disk

gcloud compute images create <your-image-name> --source-disk=<disk-above> --source-disk-zone=<zone>

The command output includes a URI for the created image; you will need it in step 5.
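
If you lose that output, you can look the URI up again later via the image's selfLink field:

gcloud compute images describe <your-image-name> --format="value(selfLink)"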

4. Move to your destination project

gcloud config set project <your-dest-project>
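
You can double-check that the switch took effect before creating anything:

gcloud config get-value project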

5. Create a disk in your destination project

gcloud compute disks create <disk-name-in-dest-project> --image=<image-url-above> --zone=<target-zone>
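
Note that the image still lives in the source project, so the account creating the disk needs permission to use images across projects. Granting roles/compute.imageUser on the source project is one way to do that (a sketch; adjust the member to your own user or service account):

gcloud projects add-iam-policy-binding <source-project> --member="user:<your-account-email>" --role="roles/compute.imageUser"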

6. Write k8s PV and PVC manifest files for the disk

# migrated-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <pv name in your cluster> # <-- change as you need
spec:
  storageClassName: standard # <-- change as you need
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi # <-- change as you need
  claimRef:
    name: <pvc name in your cluster> # <-- change as you need
    namespace: default # <-- change as you need
  gcePersistentDisk:
    fsType: ext4 # <-- change as you need
    pdName: <disk-name-in-dest-project> # <-- the GCE disk name from step 5, not the PV name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <pvc name in your cluster> # <-- change as you need
spec:
  storageClassName: standard # <-- change as you need
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi # <-- change as you need
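
A note on the claimRef block: setting it on the PV reserves the volume for the named PVC, so when the PVC is created it binds to this pre-existing disk instead of having the standard StorageClass dynamically provision a fresh one. The PVC's storageClassName, access mode, and requested size still need to match the PV.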

7. Apply the k8s manifests above

kubectl apply -f ./migrated-pv.yaml
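
After applying, it is worth checking that the PVC actually bound to the migrated PV rather than a freshly provisioned one (STATUS should be Bound, and the PVC's VOLUME column should show your PV's name):

kubectl get pv <pv-name-in-your-cluster>
kubectl get pvc <pvc-name-in-your-cluster> --namespace=default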

8. Use it in the Deployment YAML

# nexus.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nexus
  name: nexus
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nexus
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nexus
    spec:
      containers:
      - image: sonatype/nexus3
        imagePullPolicy: Always
        name: nexus
        ports:
        - containerPort: 8081
          protocol: TCP
        volumeMounts:
        - mountPath: /nexus-data
          name: nexus-data-volume
      initContainers:
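      # the sonatype/nexus3 image runs Nexus as UID/GID 200, so chown the data volume before the main container starts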
      - command:
        - sh
        - -c
        - chown -R 200:200 /nexus-data
        image: busybox
        imagePullPolicy: Always
        name: volume-mount-hack
        volumeMounts:
        - mountPath: /nexus-data
          name: nexus-data-volume
      volumes:
      - name: nexus-data-volume
        persistentVolumeClaim:
          claimName: <your-pvc-name>
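
Finally, apply the Deployment and watch the rollout; once the pod is Running, the existing Nexus data from the migrated disk should show up under /nexus-data:

kubectl apply -f ./nexus.yaml
kubectl rollout status deployment/nexus --namespace=default

One caveat: with a ReadWriteOnce disk, a RollingUpdate with maxSurge: 1 can get stuck if the new pod lands on a different node while the old pod still holds the disk, so for a single-replica setup like this the Recreate strategy may be safer.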