@admun
Last active January 17, 2023 19:31
My journey migrating off nfs-client-provisioner, since it's deprecated and broken in k8s 1.20.4...

I recently upgraded my Rancher RKE cluster to 1.20.4 and found that nfs-client-provisioner is broken: it fails to create new PVCs, with the error "unexpected error getting claim reference: selfLink was empty, can't make reference".

After some Google searching I found this. However, there is no documentation on how to migrate off nfs-client-provisioner, even though it appears to fix the issue.

When asking around in #sig-storage, @thansen suggested giving democratic-csi a try; it has a crude yet simple implementation of nfs-client as a CSI driver.

Here's a capture of how I got it to work, with help from @thansen.

My setup: RKE cluster on k8s v1.20.4, 3x Fedora 33 nodes, managed by Rancher 2.5.6, NFS server on a Thecus N5550, nfs-client-provisioner used for dynamic PVC provisioning.

  • make sure nfs-utils is installed on each node
  • enable and start rpc-statd.service (see the command sketch after this list)
  • create a new NFS share for democratic-csi to use, so it will not interfere with existing mounts from nfs-client-provisioner (alternatively, using a different folder on the existing share works too)
  • add the repo to helm: helm repo add democratic-csi https://democratic-csi.github.io/charts/ then helm repo update
  • notes
    • tried to install from Rancher, but it failed, so I ended up running the install from the command line
    • tried to use controller setup option #2, but somehow the pod was not able to mount /nfs-storage from the local node (maybe an RKE issue)
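
For reference, the node prep boils down to a couple of commands. A minimal sketch, assuming Fedora's dnf and the same <my NFS server> placeholder used in the config below:

# on each node: install the NFS client tools
sudo dnf install -y nfs-utils

# enable and start rpc-statd (needed for NFS client file locking)
sudo systemctl enable --now rpc-statd.service

# sanity check: list the exports the NAS offers to this node
showmount -e <my NFS server>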

values.yaml was constructed with lots of help from @thansen, using controller setup option #1; for details see here:

csiDriver:
  name: "org.democratic-csi.nfs-client"
storageClasses:
# changed the name from nfs-client, so it runs in parallel with nfs-client-provisioner
- name: democratic-nfs-client
  defaultClass: true
# set to Retain, so the underlying data will be left alone when a PVC is deleted
  reclaimPolicy: Retain
  volumeBindingMode: Immediate
  allowVolumeExpansion: false
  parameters:
    fsType: nfs
# use nfsvers=4, since my NAS supports that
  mountOptions:
  - noatime
  - nfsvers=4
  secrets:
    provisioner-secret:
    controller-publish-secret:
    node-stage-secret:
    node-publish-secret:
    controller-expand-secret:
volumeSnapshotClasses: []
# only needs the controller and the CSI node client daemonset
controller:
  enabled: true
  externalResizer:
    enabled: false
  strategy: deployment
  hostNetwork: true
  hostIPC: true
# the controller manually mounts the remote NFS server on start, so it can dynamically create PVCs (the "crude" part)
  driver:
    securityContext:
      allowPrivilegeEscalation: true
      capabilities:
        add:
        - SYS_ADMIN
      privileged: true
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "mkdir -p /nfs-storage; mount <my NFS server>:/nfs-storage /nfs-storage"]
      preStop:
        exec:
          command: ["/bin/sh","-c","umount /nfs-storage"]
# config that tells the controller how to map and set up a PVC
driver:
  config:
    driver: nfs-client
    instance_id: <some random guid>
    nfs:
      shareHost: <my NFS server>
      shareBasePath: "/nfs-storage"
      controllerBasePath: "/nfs-storage"
# control the owner and permissions of directories created for PVCs
      dirPermissionsMode: "0700"
      dirPermissionsUser: 1000
      dirPermissionsGroup: 1000
  • install the chart: helm upgrade --install --values values.yaml --namespace democratic-csi --create-namespace --version 0.7.0 democratic-nfs-client democratic-csi/democratic-csi
  • some post-install verification (see the sketch after this list):
    • check from kubectl or Rancher that the controller and per-node client pods come up
    • shell into the controller to verify /nfs-storage is mounted and writable
    • add a volume to a deployment with the new democratic-nfs-client storage class
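
Roughly, the verification looked like the following. A sketch only: the deployment and container names are assumptions derived from my release name, so copy the real ones from your kubectl output:

# confirm the controller deployment and the per-node daemonset pods are running
kubectl -n democratic-csi get pods

# shell into the controller's driver container and check /nfs-storage is mounted and writable
# (deployment/container names below are examples)
kubectl -n democratic-csi exec -it deploy/democratic-nfs-client-democratic-csi-controller \
  -c csi-driver -- sh -c 'mount | grep /nfs-storage && touch /nfs-storage/.rwtest && rm /nfs-storage/.rwtest'

# create a throwaway PVC against the new storage class and check it binds
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: democratic-test
spec:
  storageClassName: democratic-nfs-client
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc democratic-test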
@jimliming

Thanks for taking the time to write this up! I'm in a similar situation.

Is the end result that you kept your nfs-client-provisioner in place for legacy PVCs, but you are creating new PVCs on the democratic-nfs-client storage class?

@admun (Author) commented Jan 27, 2022

For my old cluster, yes.

But then I had to rebuild the cluster, so I redid the PVCs and consolidated them under the same CSI driver.
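
Since both provisioners ultimately back each PV with a plain directory on the NFS share, the consolidation was mostly file copying. A sketch; the directory names below are made up, so check what each provisioner actually created on your share:

# on the NFS server, or any host with the share mounted:
# copy data from the old nfs-client-provisioner directory into the one
# backing the new democratic-csi PVC, preserving permissions/ownership
rsync -a /nfs-storage/default-mydata-pvc-0123/ /nfs-storage/pvc-4567/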

@jimliming

Makes sense, thanks again! 🍻
