---
apiVersion: v1
kind: PersistentVolume
metadata:
  # Can be anything, but has to match "volumeName" in the PVC below
  # PVs are cluster scoped, so also avoid conflicts with existing PV names
  name: preprov-pv-cephfs-01
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 5Gi
  csi:
    driver: rook-ceph.cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: rook-ceph-csi
      namespace: rook-ceph
    volumeAttributes:
      clusterID: rook-ceph
      fsName: myfs
      # The key "staticVolume" states this is pre-provisioned
      # NOTE: This was preProvisionedVolume: "true" in Ceph-CSI versions 1.0 and below
      staticVolume: "true"
      # Path of the PV on the CephFS filesystem
      rootPath: /staticpvs/pv-1
    # Can be anything, need not match the PV name or "volumeName" in the PVC
    # Retained as the same for simplicity and uniqueness
    volumeHandle: preprov-pv-cephfs-01
  # Reclaim policy must be "Retain", as
  # deletion of pre-provisioned volumes is not supported
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  claimRef:
    # Name should match the PVC name below (used as "claimName" in the Pod)
    name: csi-cephfs-pvc-preprov
    namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc-preprov
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  volumeName: preprov-pv-cephfs-01
---
apiVersion: v1
kind: Pod
metadata:
  name: csicephfs-preprov-demo-pod
spec:
  containers:
  - image: busybox
    name: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: mypvc
      mountPath: /mnt
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: csi-cephfs-pvc-preprov
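To try the above out, a minimal sketch of applying and checking the manifests (the file name below is only what this gist was saved as locally; adjust to your copy):

kubectl apply -f pod-with-static-pv.yaml

# The PV should bind to the PVC and the pod should come up with /mnt backed by CephFS
kubectl get pv preprov-pv-cephfs-01
kubectl -n default get pvc csi-cephfs-pvc-preprov
kubectl -n default exec csicephfs-preprov-demo-pod -- df -h /mnt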
@ztl8702
I don't know the details about users and passwords. There is a section in the docs related to this, but if you get to this point it works; otherwise you'll see an error related to credentials.
I see the same error, and when I dig into the logs (something similar to kubectl -n rook-ceph logs csi-cephfsplugin-kzp4v csi-cephfsplugin; you need to replace csi-cephfsplugin-kzp4v with the right pod name for your setup) I see the failing mount command:
mount [-t ceph 10.96.215.114:6789:/volumes/csi/csi-vol-811e84a6-2b74-11ea-90fb-5a7b60d4d7c3/ /var/lib/kubelet/plugins/kubernetes.io/csi/pv/preprov-pv-cephfs-01/globalmount -o name=admin,secretfile=/tmp/csi/keys/keyfile-446845825,mds_namespace=myfs]
From what I see in the log files, there is no directory /var/lib/kubelet/plugins/kubernetes.io/csi/pv/preprov-pv-cephfs-01/globalmount, so there is no place to mount CephFS on the node.
I looked at the node when a PV was created dynamically, and there is some component which creates the directory for the PV mount automatically.
So I tried creating the /var/lib/kubelet/plugins/kubernetes.io/csi/pv/preprov-pv-cephfs-01/globalmount directory on the node manually, but it didn't help, because something keeps removing these directories.
I guess I need to create an issue for this and dig into the corresponding components' code, but these are my first few days looking into Ceph, Rook and cephfs-csi, so I don't know many details about the architecture.
Feel free to create an issue, otherwise I'll create one tomorrow.
Indeed, manually changing the userKey inside csi-cephfs-preprov-secret to match the base64 value of the admin keyring fixed the PV mounting issue. However, this seems cumbersome: having to manually inspect one secret and update another secret to match is very fragile. And for the record, the decoded rook-ceph-admin-keyring looks like:
[client.admin]
  key = <the secret used to mount the cephfs volume>
  caps mds = "allow *"
  caps mon = "allow *"
  caps osd = "allow *"
  caps mgr = "allow *"
which means it cannot be directly referenced from nodeStageSecretRef.
I am definitely missing something -- how does this nodeStageSecretRef relate to the existing secrets that Rook generated when provisioning Ceph?
Hm, interesting! It still didn't work for me even with the right creds. I might have a different issue in my case.
There is another step that I did (which might or might not be necessary): after creating the PV using this gist, I exec'ed into the rook-toolbox Pod and mounted the CephFS filesystem root directly. Then I made a folder staticpvs/pv-1 inside the CephFS root to match rootPath in the PV manifest (a rough sketch of the commands is below).
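For reference, a rough sketch of that step; the toolbox deployment name, monitor address, and mount point are assumptions specific to my setup:

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash

# Inside the toolbox: mount the CephFS root and create the directory
# that rootPath in the PV points at (<mon-ip> is your monitor address)
mkdir -p /mnt/cephfs-root
mount -t ceph <mon-ip>:6789:/ /mnt/cephfs-root \
  -o name=admin,secret=$(ceph auth get-key client.admin),mds_namespace=myfs
mkdir -p /mnt/cephfs-root/staticpvs/pv-1
umount /mnt/cephfs-root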
It still didn't work for me even with the right creds
Did you base64 encode the admin key when you modified the secret?
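For reference, the encoding step I mean is just the following (the key string is a placeholder):

# Run where the ceph CLI works, e.g. the rook-toolbox pod
ceph auth get-key client.admin | base64
# or, with the key already in hand (note -n to avoid encoding a trailing newline):
echo -n 'AQD...replace-with-your-key...==' | base64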
BTW, I just realised, since I am using Rook, that I should (conveniently) be using rook-csi-cephfs-provisioner and rook-csi-cephfs-node (both are created by Rook in the rook-ceph namespace) as provisionerSecretRef and nodeStageSecretRef respectively.
The Rook-created secrets will not have the keys userID and userKey, so reusing a Rook-created secret as-is for static PVs is not feasible.
The secrets being discussed here need to be of the form: https://gist.github.com/ShyamsundarR/1009b79966992470f588a277429ae2e1#file-pod-with-static-pv-yaml-L1-L10
IOW, a userID and userKey need to be specified, in exactly the same format as detailed here: https://github.com/ceph/ceph-csi/blob/3e656769b71a3c43d95f6875ed4934c82a8046e7/examples/cephfs/secret.yaml#L7-L10
If using stringData, base64-encoding the userKey is not required (if using data instead, then base64-encoding it is required).
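For illustration, such a secret could look like the sketch below; the name and namespace just mirror the nodeStageSecretRef in the PV manifest above, and the userID/userKey values are placeholders:

apiVersion: v1
kind: Secret
metadata:
  # Must match nodeStageSecretRef.name/namespace in the PV
  name: rook-ceph-csi
  namespace: rook-ceph
stringData:
  # A Ceph user with enough caps to mount and do IO under rootPath
  userID: admin
  userKey: AQD...replace-with-the-ceph-key...==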
I just realized that I was using the wrong secret (something got screwed up when I switched between different deployments). I put the correct one in place and everything seems to be working fine for me.
The intention of a static PV is that the path mentioned as rootPath (here) already exists, and that the secret created to access this path holds a Ceph userID/key with enough capabilities to mount and perform the required IO on the mount.
So, if using a static PV, the rootPath should already be created on the backing CephFS volume.
Based on the discussion here so far by @elinesterov and @ztl8702, please confirm that the above has been done, and if so, pass on the csi-cephfsplugin logs from the ceph-csi nodeplugin daemonset, from the node where the PV mount is attempted.
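As an aside, for setups that would rather not hand out client.admin, a user scoped to just that path can be created; a sketch, assuming the filesystem and path from the manifest above and an illustrative user name:

# Run from the rook-toolbox: authorize a user for only the static PV path
ceph fs authorize myfs client.staticpv-user /staticpvs/pv-1 rw
# Print the key to place into userKey of the node-stage secret
ceph auth get-key client.staticpv-user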
@ShyamsundarR Thanks for the clarification. Manually creating a secret that matches that form, using the client.admin key from ceph auth ls, worked for me. Do you still want my logs?
I am just wondering if there is a way to more declaratively deploy a CephFS shared filesystem + mount a volume to a Pod that gives access to a fixed path within CephFS, with ceph-csi. The extra step to get / create a client key in Ceph and then create the correct secret in the Kubernetes API bugs me a little bit.
I believe this was previously possible with flexVolume or with provisionVolume: false?
@ShyamsundarR Thanks for the clarification. Manually creating a secret that matches that form, using the client.admin key from ceph auth ls, worked for me. Do you still want my logs?
No, thank you.
I am just wondering if there is a way to more declaratively deploy a CephFS shared filesystem + mount a volume to a Pod that gives access to a fixed path within CephFS, with ceph-csi. The extra step to get / create a client key in Ceph and then create the correct secret in the Kubernetes API bugs me a little bit.
The step to create a client key and a secret is a choice for security-conscious setups. You could use a single secret across all static PVs that are created. This secret can be in any namespace, but the namespace needs to be reflected here.
The above reduces it to a single secret creation step.
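For example, something along these lines (the secret name, namespace and user are whatever the PVs' nodeStageSecretRef points at):

# userKey is the raw key printed by "ceph auth get-key client.admin" (run in the toolbox)
kubectl -n rook-ceph create secret generic rook-ceph-csi \
  --from-literal=userID=admin \
  --from-literal=userKey='AQD...paste-the-key-here...=='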
I believe this was previously possible with flexVolume or with provisionVolume: false?
I am not aware of how flexVolume worked, so I will not comment on that.
With provisionVolume: false, the method of using static PVs was to request one dynamically, with the provisioner detecting it as a pre-provisioned PV and hence using the same secret as that used by the CSI plugins. The entire provisioning step (and hence de-provisioning) was superfluous, and the intention was to go with Kubernetes-based static PV definitions instead.