9.2. Use Storage in a Pod#
In this lab you’ll configure the Django pod to use the PersistentVolumeClaim
that you made in the last lab, 9.1. Request Storage for Your Application. Using a
PersistentVolumeClaim keeps your valuable user data separate from your pods,
so it survives when a pod is deleted or restarted.
Update the Pod Definition#
There are three fragments that you must add to your pod definition in
deployment/pod.yaml. The first one goes under the spec key and declares the
volume, so that the PersistentVolume bound to your claim is attached to the
node running the pod:

volumes:
  - name: db
    persistentVolumeClaim:
      claimName: mysite-data
The second fragment makes the volume available to a particular container. This
one is a sibling of the image, ports, and resources definitions under the
spec.containers[0] key:

volumeMounts:
  - name: db
    mountPath: /data
The third fragment also goes under the spec key. It sets the group ownership
of the volume’s files (fsGroup) so that the user we created in the Docker
image can write to the volume:

securityContext:
  fsGroup: 1000
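Putting the three fragments together, deployment/pod.yaml should look roughly like the sketch below. The image name, port, and resources are placeholders carried over from earlier labs; keep the values already in your file.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: django
spec:
  securityContext:
    fsGroup: 1000                  # match the group of the app user in the image
  containers:
    - name: django
      image: <your-django-image>   # placeholder: use your image from earlier labs
      ports:
        - containerPort: 8000      # assumed port: keep your existing value
      resources: {}                # keep your existing requests/limits
      volumeMounts:
        - name: db                 # must match the volume name below
          mountPath: /data
  volumes:
    - name: db
      persistentVolumeClaim:
        claimName: mysite-data     # the PVC from lab 9.1
```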
Redeploy the Pod#
You can’t add a volume mount to a running pod because most fields of a pod spec are immutable. Instead, delete the pod and recreate it:
$ kubectl delete pod/django
$ kubectl apply -f deployment/pod.yaml
Verify that the volume is present:
$ kubectl describe pod/django
...
Mounts:
  /data from db (rw)
  /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-679jh (ro)
...
Volumes:
  db:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mysite-data
    ReadOnly:   false
...
Check the PersistentVolume#
Now let’s check the PersistentVolumeClaim again:
$ kubectl get pvc
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
mysite-data   Bound    pvc-ddaae17c-60d1-4d21-b16b-a3e293ffb095   1Gi        RWO            standard-rwo   <unset>                 22m
It’s in the Bound state, so let’s look at the matching PersistentVolume:
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-ddaae17c-60d1-4d21-b16b-a3e293ffb095   1Gi        RWO            Delete           Bound    default/mysite-data   standard-rwo   <unset>                          11m
The PersistentVolume’s name is automatically generated from the claim. You can examine the resource with this command:
Note
Change the volume name to match the one shown in your output.
$ kubectl describe pv/pvc-ddaae17c-60d1-4d21-b16b-a3e293ffb095
Name:              pvc-ddaae17c-60d1-4d21-b16b-a3e293ffb095
Labels:            <none>
Annotations:       pv.kubernetes.io/provisioned-by: pd.csi.storage.gke.io
                   volume.kubernetes.io/provisioner-deletion-secret-name:
                   volume.kubernetes.io/provisioner-deletion-secret-namespace:
Finalizers:        [external-provisioner.volume.kubernetes.io/finalizer kubernetes.io/pv-protection external-attacher/pd-csi-storage-gke-io]
StorageClass:      standard-rwo
Status:            Bound
Claim:             default/mysite-data
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          1Gi
Node Affinity:
  Required Terms:
    Term 0:        topology.gke.io/zone in [us-central1-a]
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            pd.csi.storage.gke.io
    FSType:            ext4
    VolumeHandle:      projects/cis-92/zones/us-central1-a/disks/pvc-ddaae17c-60d1-4d21-b16b-a3e293ffb095
    ReadOnly:          false
    VolumeAttributes:  storage.kubernetes.io/csiProvisionerIdentity=1775381178985-4741-pd.csi.storage.gke.io
Events:            <none>
What provisioner was used?