Kubernetes Persistent Volumes and Persistent Volume Claims

Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) are designed for managing storage resources in Kubernetes.

The following picture shows an overview of PVs and PVCs.

[Figure: The Overview of Persistent Volumes and Persistent Volume Claims]

From the picture you can see that:

  • PVs are created by cluster administrators and consumed by PVCs, which are created by developers.
  • A PV is like a mount configuration for a storage backend. Therefore, you can create different mount configurations for the same storage by creating multiple PVs.
  • A PV is a cluster-wide resource, which means it is accessible from all namespaces and its name must be unique across the whole cluster.
  • A PVC is a namespaced Kubernetes object, which means its name only needs to be unique within its namespace.
  • A PV can be exclusively bound to only one PVC. This one-to-one mapping lasts until the PVC is deleted.
  • A PV and its bound PVC build a bridge between the "clients" (Pods) and the real storage.

Provisioning Persistent Volumes

There are two ways to provision a PV: statically or dynamically.

"Static" PVs

A static PV is a PV manually created by a cluster administrator with the details of a specific storage backend. "Static" here means the PV must exist before it can be consumed by a PVC.

Here is an example of static PVs:

PV spec:

https://gist.github.com/008f480d3d6e6975e2a3ecfc8cc302bf
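The embedded gist contains the full spec. As a rough sketch based on the description below (the exact values, labels and access mode are assumptions), the nfs-pv PV could look like this:

```yaml
# Rough sketch of the nfs-pv static PV (values assumed from the article's description).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  labels:
    pv-name: nfs-pv            # assumed label, used by the PVC's LabelSelector
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce            # assumed access mode
  persistentVolumeReclaimPolicy: Retain
  mountOptions:                # mount options matching the /proc/mounts output shown below
    - vers=4.0
    - rsize=32768
    - wsize=32768
  nfs:
    server: 12.34.56.78
    path: /data
```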

PVC spec:

https://gist.github.com/eec669ae7da6d3408fe6d595d226a91c
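Similarly, a rough sketch of the nfs-pvc PVC (the selector and storage request are assumptions based on the description below):

```yaml
# Rough sketch of the nfs-pvc PVC (values assumed from the article's description).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteOnce            # must match the PV's access mode
  resources:
    requests:
      storage: 10Gi            # matches the PV's capacity
  storageClassName: ""         # bind only to statically provisioned PVs, no dynamic provisioning
  selector:
    matchLabels:
      pv-name: nfs-pv          # bind to the PV labeled above
```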

Pod spec:

https://gist.github.com/7c4ef4c05447e021064148d804474297
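And a rough sketch of the nfs-pod Pod, with the container, volume name and mount path taken from the explanation below:

```yaml
# Rough sketch of the nfs-pod Pod consuming the nfs-pvc PVC.
apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: data-dir
          mountPath: /usr/share/nginx/html   # where the NFS export ends up mounted
  volumes:
    - name: data-dir
      persistentVolumeClaim:
        claimName: nfs-pvc                   # the PVC defined above
```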

In this example, the nfs-pvc PVC finds and binds the nfs-pv PV via a LabelSelector. The nfs-pod Pod uses the nfs-pvc PVC to create a volume called data-dir and mounts that volume to the /usr/share/nginx/html directory in the nginx container. After these Kubernetes objects are created, ssh into the nginx container in the nfs-pod Pod and run the cat /proc/mounts command, and you will find an entry for the nfs-pv PV like: 12.34.56.78:/data /usr/share/nginx/html nfs4 vers=4.0,rsize=32768,wsize=32768,...,addr=12.34.56.78 0 0. This means the NFS export 12.34.56.78:/data specified in the nfs-pv PV is mounted to the /usr/share/nginx/html directory.

PV Types and Mount Options

Kubernetes currently supports many PV types, for example, NFS, CephFS, Glusterfs and GCEPersistentDisk. You can check this doc for more details.

In this example, the nfs-pv PV is created using the NFS PV type, pointing at the server 12.34.56.78 and the /data path. In addition, this PV specifies some extra mount options for the NFS server. Mount options are only supported by some PV types; you can check this doc for more details.

The Capacity of a PV

The capacity of a static PV is not a hard limit on the underlying storage. Instead, the actual capacity is fully controlled by the real storage. Suppose the NFS server in the example has 200Gi of storage space: the nfs-pv PV is able to use up all of the NFS server's space even though its capacity.storage is only 10Gi. The capacity setting of a static PV is normally just used for matching the storage request in the corresponding PVC.

Access Mode

There are three access modes for a PV:

  • ReadWriteOnce: a PV with ReadWriteOnce in its accessModes spec can be mounted as read-write by a single node. This means that (1) the PV can perform both read and write operations against the storage, and (2) the PV can only be mounted on a single node, so any Pod that wants to use this PV must be scheduled to that node as well.
  • ReadOnlyMany: a PV with ReadOnlyMany in its accessModes spec can be mounted as read-only by many nodes. Unlike ReadWriteOnce, ReadOnlyMany allows the PV to be mounted on many nodes, but it can only perform read operations against the real storage. Any write request will be denied in this case.
  • ReadWriteMany: a PV with ReadWriteMany in its accessModes spec can be mounted as read-write by many nodes. This means the PV can perform read and write operations on many nodes.

Different PV types have different supports for these three access modes. You can check this doc for more details.

You may notice that the PV's accessModes field is an array, which means it can have multiple access modes. Nevertheless, a PV can only be mounted using one access mode at a time, even if it has multiple access modes in its accessModes field. Therefore, instead of including multiple access modes in a single PV, it is recommended to give each PV one access mode and create separate PVs with different access modes for different use cases, as shown in the sketch below.
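For instance, a minimal sketch of this approach, assuming the same NFS export as in the earlier example:

```yaml
# Sketch: two PVs pointing at the same NFS export, each exposing a single access mode.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-rwo
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce            # for single-node read-write consumers
  nfs:
    server: 12.34.56.78
    path: /data
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-rom
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany             # for multi-node read-only consumers
  nfs:
    server: 12.34.56.78
    path: /data
```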

You may also notice that there are other attributes which can also affect access modes. Here is a simplified summary (the sketch after this list shows where each setting lives):

  • The readOnly attribute of a PV type (for example nfs.readOnly) is a storage-side setting. It controls whether the real storage is treated as read-only.
  • The accessModes field of a PV is a PV-side setting. It controls the access modes of the PV.
  • The accessModes field of a PVC has to match the PV that it wants to bind. A PV and a PVC build a bridge between the "client" and the real storage: the PV connects to the real storage while the PVC connects to the "client".
  • The readOnly attribute of a volumeMount is a "client"-side setting. It controls whether the mounted directory is read-only.
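Here is a minimal sketch showing where each of these settings lives (the names and values are assumptions, not a definitive spec):

```yaml
# PV side
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany             # PV-side setting: how the PV may be mounted
  nfs:
    server: 12.34.56.78
    path: /data
    readOnly: true             # storage-side setting: the real storage is read-only
---
# PVC side
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadOnlyMany             # must match the PV it wants to bind
  resources:
    requests:
      storage: 10Gi
---
# "Client" (Pod) side
apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: data-dir
          mountPath: /usr/share/nginx/html
          readOnly: true       # client-side setting: the mounted directory is read-only
  volumes:
    - name: data-dir
      persistentVolumeClaim:
        claimName: nfs-pvc
```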

Reclaim Policy

The persistentVolumeReclaimPolicy field specifies the reclaim policy for a PV, which can be either Delete or Retain. (Dynamically provisioned PVs default to Delete, while manually created PVs default to Retain.) You may want to set it to Retain and back up the data at a certain frequency if the data inside the storage that the PV connects to is really important.

Binding

The example above uses a LabelSelector (matchLabels on the pv-name label) to bind the nfs-pv PV and the nfs-pvc PVC together. You do not need to use a LabelSelector to establish the binding between PVs and PVCs if you want a more flexible way of binding. For example, without a LabelSelector, a PVC that requests storage == 10Gi and accessModes == [ReadWriteOnce] can be bound to any PV with storage >= 10Gi and accessModes == [ReadWriteOnce, ReadWriteMany].
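Such a PVC could be sketched like this (the name is hypothetical):

```yaml
# Sketch: a PVC without a LabelSelector; binding is driven by capacity and access modes.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: flexible-pvc           # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi            # any available PV offering at least 10Gi can satisfy this
  storageClassName: ""         # bind only to statically provisioned PVs
```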

"Dynamic" Persistent Volumes

Dynamic PVs are created on the fly by Kubernetes, triggered by the specification of a user's PVC. Dynamic provisioning is based on Storage Classes: a PVC must specify an existing StorageClass in order to get a dynamic PV created.

Storage Classes

A StorageClass is a Kubernetes object used to describe a class of storage. It uses fields like parameters, provisioner and reclaimPolicy to describe the details of the storage class that it represents. Let's take a look at GKE's default storage class, standard. Here is its spec:

https://gist.github.com/d1da33c8eb2c0d275a467b22f3293812
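The embedded gist has the real spec. As a rough sketch based on the explanation below (the exact field values are assumptions), it could look like this:

```yaml
# Rough sketch of GKE's default "standard" StorageClass (values assumed from the explanation below).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd  # in-tree GCE Persistent Disk provisioner
parameters:
  type: pd-standard                # standard GCEPersistentDisk as the storage medium
reclaimPolicy: Delete
volumeBindingMode: Immediate
```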

Explanation:

  • The metadata.name field is the name of the StorageClass. It has to be unique in the whole cluster.
  • The parameters field specifies the parameters for the real storage. For example, parameters.type == pd-standard means this storage class uses a standard GCEPersistentDisk as the storage medium. You can check this doc for more details about the parameters of Storage Classes.
  • The provisioner field specifies which volume plugin is used by the Storage Class to provision dynamic PVs. You can check this list for each provisioner's specification.
  • Like persistentVolumeReclaimPolicy, the reclaimPolicy field specifies the reclaim policy for the PVs created by the Storage Class. It can be either Delete (the default) or Retain.
  • The volumeBindingMode field controls when dynamic provisioning and volume binding happen. volumeBindingMode == Immediate means they happen as soon as the PVC is created, while volumeBindingMode == WaitForFirstConsumer means they are delayed until a Pod actually consumes the PVC.

A Use Case

This example uses dynamic provisioning to create storage resources for a ZooKeeper service. (Here I simplify the config for demo purposes. You can check this doc for more details about how to set up a ZooKeeper service with a StatefulSet in Kubernetes.)

StatefulSet Spec: https://gist.github.com/d9e5769435321f5cf006f51ae6e10239
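The embedded gist has the full spec. As a simplified sketch (the image name, mount path and labels are assumptions), the storage-related part could look like this:

```yaml
# Simplified sketch of a ZooKeeper StatefulSet using volumeClaimTemplates for dynamic provisioning.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: zk-headless
  replicas: 3
  selector:
    matchLabels:
      app: zk
  template:
    metadata:
      labels:
        app: zk
    spec:
      containers:
        - name: zookeeper
          image: zookeeper:3.5           # assumed image
          volumeMounts:
            - name: data
              mountPath: /var/lib/zookeeper
  volumeClaimTemplates:                  # one PVC named data-<pod> is created per Pod
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: standard       # triggers dynamic provisioning via the standard StorageClass
        resources:
          requests:
            storage: 10Gi                # a 10Gi GCEPersistentDisk backs each PV
```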

In this example, the volumeClaimTemplates field is used to do dynamic provisioning: a PVC is created for each Pod with the storage specification defined in the volumeClaimTemplates field. Then a PV is created by the standard Storage Class for each Pod and bound to that Pod's PVC, and a 10Gi GCEPersistentDisk is created to back each PV. Each PVC has the same accessModes and storage size as its corresponding PV, and the PV inherits its reclaimPolicy from the Storage Class.

"Updating" PVs & PVCs

Updating Static PVs

Sometimes you need to update some parameters, for example mount options, for a static PV whose storage is not dynamically provisioned. However, updating a PV that is being used may be blocked by Kubernetes. As mentioned above, a bound PV and PVC are like a bridge between clients and the real storage. Therefore, instead of updating the existing PV, you can create a new PV and PVC with the new settings you want, and then replace the old PVC with the new one in the Pod spec.

Updating Dynamic PVs

In Kubernetes v1.11 and later, a dynamic PV can be expanded (shrinking is not supported) by editing its bound PVC. This feature is well supported by many built-in volume providers, such as GCE-PD, AWS-EBS and GlusterFS. A cluster administrator can make this feature available to cluster users by setting allowVolumeExpansion == true in the configuration of the Storage Class. You can check this blog for more details.
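A minimal sketch of this workflow, assuming a GCE-PD backed Storage Class and a hypothetical PVC name:

```yaml
# Sketch: enabling volume expansion on a StorageClass, then growing a PVC.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
allowVolumeExpansion: true     # lets users expand PVCs that use this class
---
# To expand, edit the bound PVC's storage request (shrinking is not supported):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-zk-0              # hypothetical PVC name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 20Gi            # raised from 10Gi; the underlying PV is expanded to match
```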
