@ShyamsundarR
Created October 4, 2019 19:59
---
apiVersion: v1
kind: PersistentVolume
metadata:
  # Can be anything, but has to match "volumeName" in the PVC below
  # Also should avoid conflicts with existing PV names in the namespace
  name: preprov-pv-cephfs-01
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 5Gi
  csi:
    driver: rook-ceph.cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: rook-ceph-csi
      namespace: rook-ceph
    volumeAttributes:
      clusterID: rook-ceph
      fsName: myfs
      # The key "staticVolume" states this is pre-provisioned
      # NOTE: This was 'preProvisionedVolume: "true"' in Ceph-CSI versions 1.0 and below
      staticVolume: "true"
      # Path of the PV on the CephFS filesystem
      rootPath: /staticpvs/pv-1
    # Can be anything, need not match the PV name or volumeName in the PVC
    # Retained as the same for simplicity and uniqueness
    volumeHandle: preprov-pv-cephfs-01
  # Reclaim policy must be "Retain", as deletion of
  # pre-provisioned volumes is not supported
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  claimRef:
    # Name should match the name of the PVC below
    name: csi-cephfs-pvc-preprov
    namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc-preprov
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  volumeName: preprov-pv-cephfs-01
---
apiVersion: v1
kind: Pod
metadata:
  name: csicephfs-preprov-demo-pod
spec:
  containers:
  - image: busybox
    name: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: mypvc
      mountPath: /mnt
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: csi-cephfs-pvc-preprov
@elinesterov

Indeed, manually changing the userKey inside csi-cephfs-preprov-secret to match the base64 value of the admin keyring fixed the PV mounting issue. However, this seems cumbersome: having to manually inspect one secret and update another to match is very fragile. And for the record, the decoded rook-ceph-admin-keyring looks like:

[client.admin]
        key = <the secret used to mount the cephfs volume>
        caps mds = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"
        caps mgr = "allow *"

which means it cannot be directly referenced from nodeStageSecretRef.

I am definitely missing something -- how does this nodeStageSecretRef relate to the existing secrets that Rook generated when provisioning Ceph?

Hm. Interesting! It still didn't work for me even with the right creds. I might have a different issue in my case.

@ztl8702

ztl8702 commented Jan 2, 2020

There is another step that I did (which might or might not be necessary):

After creating the PV using this gist, I exec'ed into the rook-toolbox Pod and mounted the CephFS filesystem root directly. Then I made a folder staticpvs/pv-1 inside the CephFS root to match rootPath in the PV manifest.
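
Roughly, that step looks like the following from inside the toolbox pod; this is only a sketch, and the monitor address, admin key, and mount point are placeholders/assumptions that will differ per cluster:

    # run inside the rook-ceph-tools (toolbox) pod; <mon-host> and <admin-key> are placeholders
    mkdir -p /tmp/cephfs-root
    mount -t ceph <mon-host>:6789:/ /tmp/cephfs-root -o name=admin,secret=<admin-key>
    # create the directory referenced by rootPath in the PV manifest
    mkdir -p /tmp/cephfs-root/staticpvs/pv-1
    umount /tmp/cephfs-root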

It still didn't work for me even with the right creds

Did you base64 encode the admin key when you modified the secret?


BTW, I just realised, since I am using Rook, that I should (conveniently) be using rook-csi-cephfs-provisioner and rook-csi-cephfs-node (both are created by Rook in the rook-ceph namespace) as provisionerSecretRef and nodeStageSecretRef respectively.

@ShyamsundarR
Author

ShyamsundarR commented Jan 2, 2020

BTW, I just realised, since I am using Rook, that I should (conveniently) be using rook-csi-cephfs-provisioner and rook-csi-cephfs-node (both are created by Rook in the rook-ceph namespace) as provisionerSecretRef and nodeStageSecretRef respectively.

The Rook-created secrets will not have the keys userID and userKey, so reusing a Rook-created secret as-is for static PVs is not feasible.
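
A quick way to confirm that is to list the data keys of the Rook-generated secret (assuming the rook-csi-cephfs-node secret in the rook-ceph namespace mentioned above):

    # dump the data keys present in the Rook-generated node secret
    kubectl -n rook-ceph get secret rook-csi-cephfs-node -o jsonpath='{.data}'; echo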

@ShyamsundarR
Author

The secrets being discussed here need to be of the form: https://gist.github.com/ShyamsundarR/1009b79966992470f588a277429ae2e1#file-pod-with-static-pv-yaml-L1-L10

IOW, a user key and value need to be specified, in exactly the same format as detailed here: https://github.com/ceph/ceph-csi/blob/3e656769b71a3c43d95f6875ed4934c82a8046e7/examples/cephfs/secret.yaml#L7-L10

If using stringData, then base64 encoding the userKey is not required (if using data instead, then base64 encoding it is required).
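
For illustration, a minimal sketch of such a secret, matching the nodeStageSecretRef (name rook-ceph-csi, namespace rook-ceph) used in the PV above; the userID value and the key placeholder are assumptions:

    apiVersion: v1
    kind: Secret
    metadata:
      name: rook-ceph-csi
      namespace: rook-ceph
    stringData:
      # Ceph client name without the "client." prefix (assumed to be admin here)
      userID: admin
      # plain-text key; no base64 encoding needed because stringData is used
      userKey: <ceph-client-key>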

@elinesterov

I just realized that I was using the wrong secret (something got screwed up when I switched between different deployments). I put in the correct one and everything seems to be working fine for me.

@ShyamsundarR
Author

The intention of a static PV is that the path mentioned as rootPath (here) already exists, and that the secret created to access this path is a Ceph userID/key with enough permissions to mount and perform the required IO on the mount.

So, if using a static PV, the rootPath should already be created on the backing CephFS volume.

Based on the discussions here so far by @elinesterov and @ztl8702, the above needs to be clarified as having been done; if so, please pass on the csi-cephfsplugin logs from the ceph-csi nodeplugin daemonset on the node where the PV mount is attempted.
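
For gathering those logs, something like the following should work (the namespace, label, and container name are assumptions based on common Rook/ceph-csi deployments):

    # find the csi-cephfsplugin (nodeplugin) pod running on the node where the mount was attempted
    kubectl -n rook-ceph get pods -l app=csi-cephfsplugin -o wide
    # fetch the plugin container logs from that pod
    kubectl -n rook-ceph logs <csi-cephfsplugin-pod> -c csi-cephfsplugin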

@ztl8702

ztl8702 commented Jan 2, 2020

@ShyamsundarR Thanks for the clarification. Manually creating a secret that matches the form using the client.admin key from ceph auth ls worked for me. Do you still want my logs?

I am just wondering if there is a way to more declaratively deploy a CephFS shared filesystem + mount a volume to a Pod that gives access at a fixed path within cephfs, with ceph-csi. The extra step to get / create a client key in Ceph and then create the correct secret in the Kubernetes API bugs me a little bit.
I believe this was previously possible with flexVolume or with provisionVolume: false?

@ShyamsundarR
Author

@ShyamsundarR Thanks for the clarification. Manually creating a secret that matches the form using the client.admin key from ceph auth ls worked for me. Do you still want my logs?

No, thank you.

I am just wondering if there is a way to more declaratively deploy a CephFS shared filesystem + mount a volume to a Pod that gives access at a fixed path within cephfs, with ceph-csi. The extra step to get / create a client key in Ceph and then create the correct secret in the Kubernetes API bugs me a little bit.

The step to create a client key and a secret is a choice for security-conscious setups. You could use a single secret across all static PVs that are created. This secret can be in any namespace, but the namespace needs to be reflected here.

The above reduces it to a single secret creation step.
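
As a sketch of that single step, a dedicated (non-admin) client restricted to the static PV paths could be created as below and its key placed in the secret; the client name, filesystem name, and path are assumptions:

    # create a Ceph client limited to the /staticpvs subtree of the "myfs" filesystem
    ceph fs authorize myfs client.staticpv /staticpvs rw
    # print the key to use as userKey in the Kubernetes secret
    ceph auth get-key client.staticpv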

I believe this was previously possible with flexVolume or with provisionVolume: false?

I am not aware of how flexVolume worked, so I will not comment on that.

With provisionVolume: false, the method of using static PVs was to request one dynamically, with the provisioner detecting it as a pre-provisioned PV and hence using the same secret as that used by the CSI plugins. The entire provisioning step (and hence de-provisioning) was superfluous, and the intention was to go with Kubernetes-based static PV definitions instead.
