# RFC: Adding test cases to e2e/volumes.go
As we develop features and new volume plugins, e2e test coverage has not kept pace. This proposal aims to cover security context handling and more volume plugins.
## Existing test cases in e2e/volumes.go
A storage server Pod running one of Glusterfs, NFS, Ceph RBD, CephFS, iSCSI, or OpenStack Cinder exports a file share. The file share contains a sample file. The test passes if a client Pod can mount the file share and read the sample file.
Since the client Pod sets an SELinux label and fsGroup in its securityContext, securityContext support must be enabled. The server container runs in privileged mode, so the kubelet must allow privileged containers.
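For reference, the client Pod in the existing tests is shaped roughly like the following sketch (the name, image, SELinux level, gid, and NFS server address are illustrative placeholders, not the exact values used in e2e/volumes.go):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-client          # illustrative name
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c0,c1"        # illustrative SELinux level
    fsGroup: 1234              # illustrative gid
  containers:
  - name: client
    image: busybox             # illustrative image
    volumeMounts:
    - name: share
      mountPath: /mnt/share
  volumes:
  - name: share
    nfs:                       # one of the supported plugins
      server: 10.0.0.1         # illustrative server address
      path: /exports
```

The test then execs into the client container and checks that the sample file under the mount path is readable.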
## New test cases

Similar to the existing server Pods, we want to test other volume plugins such as AWS EBS, GCE PD, gitRepo, and possibly Fibre Channel and Azure File.
- Setup: in the server Pod (all plugins except NFS), if the file share's gid is unknown, set it to 100001 (`chown root:100001 /path/to/share`). In the client Pod, set supplementalGroups to the same gid.
- Expected result: the client Pod can read the sample file on the file share.
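The client Pod's securityContext for this case could be sketched as follows (only the relevant fields shown; the gid matches the setup above):

```yaml
spec:
  securityContext:
    supplementalGroups: [100001]   # matches the file share's gid
```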
- Setup: in the server Pod (all plugins except NFS), if the file share's gid is unknown, set it to 100001 (`chown root:100001 /path/to/share`). In the client Pod, set supplementalGroups to 100002.
- Expected result: the client Pod cannot read the sample file on the file share.
- Setup: in the server Pod (all plugins except NFS), if the file share's gid is unknown, set it to 100001 (`chown root:100001 /path/to/share`) and make the file share writable. In the client Pod, set supplementalGroups to 100001 and runAsUser to 100002; the client Pod writes a file to the file share.
- Expected result: the client Pod can write to the file share, and the written file's uid is 100002 and its gid is 100001 (or the file share's gid if it is known).
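A sketch of the client Pod's securityContext for this write test (only the relevant fields shown):

```yaml
spec:
  securityContext:
    runAsUser: 100002              # becomes the uid of the written file
    supplementalGroups: [100001]   # grants group access to the share
```

Because the process runs as uid 100002 with 100001 as a supplementary group, the new file's ownership should reflect both, which is what the expected result above checks.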
For block storage (iSCSI, Ceph RBD, AWS EBS, GCE PD, and OpenStack Cinder), verify that the volume has the gid specified in fsGroup:
- Setup: in the server Pod (all plugins except NFS), if the file share's gid is unknown, make the file share writable. In the client Pod, set fsGroup to 100001 and runAsUser to 100002; the client Pod writes a file to the file share.
- Expected result: the client Pod can write to the file share, and the written file's uid is 100002 and its gid is 100001 (or the file share's gid if it is known).
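The client Pod's securityContext for the fsGroup case could look like this (only the relevant fields shown):

```yaml
spec:
  securityContext:
    runAsUser: 100002   # becomes the uid of the written file
    fsGroup: 100001     # applied to the block volume by the kubelet
```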
- Setup: in the server Pod, make the file share writable. In the client Pod, set the seLinuxOptions level to `s0:c13,c2`; the client Pod writes a file to the file share.
- Expected result: the client Pod can write to the file share, and the written file's SELinux label is `s0:c13,c2`.
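The client Pod's securityContext for the SELinux case could be sketched as (only the relevant field shown):

```yaml
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c13,c2"   # label expected on files written to the share
```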