@rootfs
Last active April 29, 2016 16:11

# RFC: Adding test cases to e2e/volumes.go

Background

As we develop features and new volume plugins, the e2e test cases have not kept pace. This proposal aims to cover security context testing and more volume plugins.

Existing test cases in e2e/volumes.go

A storage server Pod running one of Glusterfs, NFS, Ceph RBD, CephFS, iSCSI, or OpenStack Cinder exports a file share. The file share contains a sample file. The test passes if a client Pod can mount the file share and read the sample file.

Since the client Pod sets an SELinux label and fsGroup in its securityContext, securityContext support must be enabled. The server container runs in privileged mode, so the kubelet must allow privileged containers.
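For reference, both the API server and the kubelet had to opt in to privileged containers; assuming the flag name current when this was written (Kubernetes ~1.2), the relevant switch is:

```
kube-apiserver --allow-privileged=true ...
kubelet --allow-privileged=true ...
```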

Proposed test cases

Support more volume plugins

Similar to the existing server Pods, we want to test other volume plugins such as AWS EBS, GCE PD, gitRepo, and possibly Fibre Channel and Azure File.

Test securityContext: supplementalGroups, runAsUser, and SELinux

Verify that a client Pod with the correct gid can access the volume

  • Setup: in the server Pod (not NFS), if the file share's gid is unknown, set it to 100001 (chown root:100001 /path/to/share). In the client Pod, set supplementalGroups to the same gid.
  • Expected result: the client Pod can read the sample file on the file share.
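A minimal client Pod sketch for this case (the Pod name, image, and the Glusterfs volume source are illustrative assumptions, not part of the proposal):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-client                # hypothetical name
spec:
  securityContext:
    supplementalGroups: [100001]     # matches the file share's gid
  containers:
  - name: client
    image: gcr.io/google_containers/busybox   # placeholder test image
    command: ["cat", "/mnt/share/sample"]     # read the sample file
    volumeMounts:
    - name: share
      mountPath: /mnt/share
  volumes:
  - name: share
    glusterfs:                       # any non-NFS plugin under test
      endpoints: glusterfs-cluster   # hypothetical endpoints object
      path: test_vol
```

The wrong-gid case below would use the same spec with supplementalGroups set to 100002, and the read is expected to fail.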

Verify that a client Pod with a wrong gid cannot access the volume

  • Setup: in the server Pod (not NFS), if the file share's gid is unknown, set it to 100001 (chown root:100001 /path/to/share). In the client Pod, set supplementalGroups to 100002.
  • Expected result: the client Pod cannot read the sample file on the file share.

Verify that files created by the client Pod on the volume have the expected uid/gid

  • Setup: in the server Pod (not NFS), if the file share's gid is unknown, set it to 100001 (chown root:100001 /path/to/share) and make the file share writable. In the client Pod, set supplementalGroups to 100001 and runAsUser to 100002; the client Pod writes a file to the file share.
  • Expected result: the client Pod can write to the file share, and the file's uid is 100002 and its gid is 100001 (or the file share's gid if it is known).
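The client-side securityContext for this case might look like the fragment below (a sketch; the image, paths, and commands are assumptions):

```yaml
spec:
  securityContext:
    runAsUser: 100002                # container processes run as uid 100002
    supplementalGroups: [100001]     # grants group access to the share
  containers:
  - name: client
    image: gcr.io/google_containers/busybox   # placeholder test image
    # write a file, then list numeric ownership; expect uid=100002 gid=100001
    command: ["sh", "-c", "echo data > /mnt/share/out && ls -ln /mnt/share/out"]
    volumeMounts:
    - name: share
      mountPath: /mnt/share
```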

For block storage (iSCSI, Ceph RBD, AWS EBS, GCE PD, and OpenStack Cinder), verify the volume has the gid specified in fsGroup

  • Setup: in the server Pod (not NFS), if the file share's gid is unknown, make the file share writable. In the client Pod, set fsGroup to 100001 and runAsUser to 100002; the client Pod writes a file to the file share.
  • Expected result: the client Pod can write to the file share, and the file's uid is 100002 and its gid is 100001 (or the file share's gid if it is known).
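For the block-storage case only the securityContext changes: fsGroup replaces supplementalGroups, since the kubelet applies fsGroup ownership when it manages the volume's filesystem. A sketch, with values from the setup above and placeholder names elsewhere:

```yaml
spec:
  securityContext:
    runAsUser: 100002
    fsGroup: 100001        # kubelet chowns the mounted volume to this gid
  containers:
  - name: client
    image: gcr.io/google_containers/busybox   # placeholder test image
    # write a file, then list numeric ownership; expect uid=100002 gid=100001
    command: ["sh", "-c", "echo data > /mnt/share/out && ls -ln /mnt/share/out"]
    volumeMounts:
    - name: share
      mountPath: /mnt/share
```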

Verify the directory has the expected SELinux label

  • Setup: in the server Pod (not NFS), make the file share writable. In the client Pod, set seLinuxOptions to "s0:c13,c2"; the client Pod writes a file to the file share.
  • Expected result: the client Pod can write to the file share, and the file's SELinux label is "s0:c13,c2".
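A sketch of the SELinux case (the level value comes from the setup above; other names are placeholders):

```yaml
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c13,c2"   # label applied to the client's processes and volume
  containers:
  - name: client
    image: gcr.io/google_containers/busybox   # placeholder test image
    # write a file, then show its SELinux context; expect level s0:c13,c2
    command: ["sh", "-c", "echo data > /mnt/share/out && ls -Z /mnt/share/out"]
    volumeMounts:
    - name: share
      mountPath: /mnt/share
```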