Kubernetes Entry | Created: 10 Apr 2026 | Updated: 10 Apr 2026

Kubernetes Persistent Volumes, PVCs, and Storage

Imagine you deploy a file-upload API in Kubernetes. A user uploads a photo. The Pod saves it to disk. Then Kubernetes restarts that Pod — maybe the node ran low on memory, maybe a new version was deployed. When the Pod comes back up, the photo is gone. The container's local filesystem started fresh.

This is the fundamental challenge of storage in Kubernetes: containers are ephemeral by design. Everything written to a container's filesystem disappears when the container is removed. For databases, file stores, and any workload that must survive restarts, you need a different approach.

Kubernetes solves this with a three-layer storage model:

  1. PersistentVolume (PV) — the actual storage resource, provisioned by an admin or automatically.
  2. PersistentVolumeClaim (PVC) — a request for storage made by a developer or a workload.
  3. StorageClass — a blueprint that automates the creation of PVs on demand.

Understanding how these three pieces fit together is one of the most important skills for running real applications on Kubernetes.

Core Concepts

The Problem: Ephemeral Container Storage

Every container gets an empty, temporary filesystem when it starts. Think of it like a whiteboard that gets erased every time the container stops. This ephemeral nature is a feature — it keeps containers predictable and immutable — but it is a problem for any data that must outlive the container.

Kubernetes volumes are the first step toward persistence. A volume is a directory accessible to the containers in a Pod. However, a basic volume only lives as long as the Pod itself. When the Pod is deleted, the volume and its data disappear too.
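A minimal sketch of that Pod-scoped behaviour, using an emptyDir volume (the Pod name and image here are illustrative, not part of the lesson's example app). The directory is created when the Pod starts and deleted along with the Pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo               # hypothetical name, for illustration only
spec:
  containers:
    - name: worker
      image: busybox:1.36
      command: ["sh", "-c", "echo temp > /scratch/tmp.txt && sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      emptyDir: {}                 # lives exactly as long as the Pod does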

PersistentVolumes break this tie. A PV exists independently of any Pod. Data stored in a PV survives Pod restarts, Pod deletions, and even node failures.

PersistentVolume (PV) — The Storage Resource

A PersistentVolume is a piece of storage in the cluster that has been provisioned by a cluster administrator (or automatically by Kubernetes). It is a cluster-level resource — it does not belong to any namespace. Think of it as a hard drive that has been plugged into the cluster and made available for use.

A PV describes:

  1. Capacity — how much storage it provides (e.g., 10Gi).
  2. Access Mode — how many Pods can mount it, and in what way.
  3. Reclaim Policy — what happens to the data after the PVC that used it is deleted.
  4. Storage Backend — where the data physically lives (a cloud disk, an NFS share, a local path, etc.).

Developers do not create PVs directly in most production environments. They request storage via a PersistentVolumeClaim and let the cluster provision the PV automatically. But understanding the PV structure helps you understand what you are claiming.
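As a sketch, the four pieces above map directly onto fields in the PV spec. This hypothetical NFS-backed PV is for illustration only (the server address and export path are placeholders, not real endpoints):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv                          # hypothetical name
spec:
  capacity:
    storage: 10Gi                          # 1. Capacity
  accessModes:
    - ReadWriteMany                        # 2. Access Mode
  persistentVolumeReclaimPolicy: Retain    # 3. Reclaim Policy
  nfs:                                     # 4. Storage Backend
    server: nfs.example.internal           # placeholder server address
    path: /exports/shared                  # placeholder export path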

Access Modes — Who Can Mount the Volume?

The access mode defines how many nodes and Pods can mount a PV simultaneously:

Access Mode      | Short Name | Meaning                                                  | Typical Use Case
ReadWriteOnce    | RWO        | Mounted read-write by one node at a time                 | Database data directory, single-replica app
ReadOnlyMany     | ROX        | Mounted read-only by many nodes simultaneously           | Static assets, configuration files, shared binaries
ReadWriteMany    | RWX        | Mounted read-write by many nodes simultaneously          | Shared file storage (requires NFS or Azure Files / AWS EFS)
ReadWriteOncePod | RWOP       | Mounted read-write by exactly one Pod (Kubernetes 1.22+) | Strict single-writer scenarios

Important: Most cloud block storage (AWS EBS, GCP Persistent Disk, Azure Disk) only supports ReadWriteOnce. If your application needs multiple Pods to share the same writable volume, you must use a network filesystem such as NFS, Azure Files, or AWS EFS, which support ReadWriteMany.
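A PVC that needs shared writable storage would request ReadWriteMany and reference a file-based StorageClass. A sketch, assuming an AKS-style azurefile class (the class name and PVC name are assumptions; check kubectl get storageclass for what your cluster actually offers):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-assets-pvc        # hypothetical name
spec:
  storageClassName: azurefile    # assumed file-based class; varies per cluster
  accessModes:
    - ReadWriteMany              # requires a network filesystem backend
  resources:
    requests:
      storage: 5Gi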

Reclaim Policies — What Happens When You're Done?

When a PVC is deleted, Kubernetes needs to know what to do with the underlying PV and its data. This is controlled by the Reclaim Policy:

Policy  | Behaviour                                                                            | When to Use
Retain  | PV and data are kept. The PV moves to Released state and must be manually reclaimed. | Production databases — never auto-delete data.
Delete  | The PV and the underlying storage (e.g., cloud disk) are deleted automatically.      | Temporary scratch space, CI/CD runners, dev environments.
Recycle | Data is scrubbed (rm -rf /data/*) and the PV is made available again. Deprecated.    | Avoid — use Delete and reprovisioning instead.

For production workloads, always use Retain so that accidental PVC deletion does not result in data loss. You can always manually clean up later; you cannot un-delete a database.
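For dynamically provisioned volumes, the reclaim policy is inherited from the StorageClass rather than set on each PV. A sketch of a production-oriented class (the class name is illustrative, and the AWS EBS CSI provisioner is just one example; substitute your cluster's driver):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: prod-retain              # hypothetical name
provisioner: ebs.csi.aws.com     # example: AWS EBS CSI driver
reclaimPolicy: Retain            # PVs created from this class keep their data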

PVC Lifecycle — Binding

The relationship between a PVC and a PV follows a clear lifecycle with distinct phases:

  1. Pending — The PVC has been created but no matching PV has been found yet. Kubernetes is searching for a PV that satisfies the request (correct size, access mode, and storage class).
  2. Bound — A matching PV was found (or created dynamically). The PVC and PV are now linked one-to-one. The Pod can mount the volume.
  3. Released — The PVC has been deleted, but the PV still holds data and has not yet been reclaimed. Only applies when the reclaim policy is Retain.
  4. Failed — Automatic reclamation failed for the PV.

The binding is exclusive: once a PV is bound to a PVC, no other PVC can claim it — even if there is leftover capacity.
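Because binding is one-to-one, you can also steer it explicitly: a PVC may name the exact PV it wants via spec.volumeName, skipping the matching search. A minimal sketch reusing the uploads-pv name from later in this lesson (the PVC name is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pinned-pvc               # hypothetical name
spec:
  storageClassName: manual
  volumeName: uploads-pv         # bind to this specific PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi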

PersistentVolumeClaim (PVC) — The Storage Request

A PersistentVolumeClaim is how a developer or an application asks for storage. It is a namespace-scoped resource — unlike the PV itself. The developer does not need to know what storage backend is being used. They simply state their requirements: "I need 10Gi, writable by one node." Kubernetes finds or creates a suitable PV and binds it.

Think of the PV as a hotel room and the PVC as a reservation. The guest (your Pod) states what they need — a room with a double bed for one night — and the hotel matches them to an available room. The guest does not manage the plumbing or furniture inside the room; they just use it.

Once a PVC is bound to a PV, a Pod can use the PVC as a volume by referencing the PVC name in the Pod's volumes section.
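In outline, the reference chain goes PVC name, to a named volume, to a volumeMount. A minimal Pod sketch (the Pod name and image are illustrative; the claimName matches the PVC built later in this lesson):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo                 # hypothetical name
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data             # must match the volume name below
          mountPath: /data       # where the volume appears in the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: uploads-pvc   # the bound PVC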

StorageClass — Dynamic Provisioning

Manually creating a PV for every PVC does not scale. In production, most clusters use StorageClasses to provision storage automatically.

A StorageClass is a template that tells Kubernetes how to create a PV on demand. When a PVC references a StorageClass and no existing PV matches, Kubernetes calls the StorageClass's provisioner — a plugin that talks to the underlying storage system (AWS EBS, Azure Disk, GCP Persistent Disk, NFS, etc.) and creates the disk automatically.

With dynamic provisioning:

  1. Developers only create PVCs — the PV appears automatically.
  2. When the PVC is deleted, the PV (and the underlying cloud disk) can be deleted automatically too (policy: Delete).
  3. Different StorageClasses can offer different tiers: fast NVMe SSD, standard HDD, replicated, encrypted.

Most Kubernetes distributions and managed clusters ship with a default StorageClass. If a PVC does not specify a storageClassName, it uses the default one — so in many environments you never need to mention StorageClasses explicitly.
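The shortest possible dynamic PVC therefore omits storageClassName entirely and falls back to the default class. A sketch (the PVC name is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: default-class-pvc        # hypothetical name
spec:
  # no storageClassName: the cluster's default StorageClass is used
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi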

Static Provisioning vs Dynamic Provisioning

Aspect               | Static Provisioning                            | Dynamic Provisioning
Who creates the PV?  | Cluster administrator — manually               | Kubernetes — automatically via StorageClass
The PVC references   | storageClassName: manual or a specific PV name | A StorageClass name (or the default)
Flexibility          | Admin must pre-create PVs in advance           | PVs created on demand as PVCs are submitted
Typical environment  | On-premises, air-gapped, local development     | Cloud providers (AWS, Azure, GCP)

Hands-On: Kubernetes Commands

List PersistentVolumes (cluster-wide)

PVs are cluster-scoped. No -n flag is needed:

kubectl get pv

Sample output:

NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS
uploads-pv   10Gi       RWO            Retain           Bound    default/uploads-pvc   manual

List PersistentVolumeClaims (namespaced)

PVCs live in a namespace. Omitting -n shows the current namespace:

kubectl get pvc

Sample output:

NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS
uploads-pvc   Bound    uploads-pv   10Gi       RWO            manual

Describe a PVC to diagnose binding issues

If a PVC is stuck in Pending, describe shows the reason:

kubectl describe pvc uploads-pvc

Look at the Events section at the bottom. Common messages include "no persistent volumes available for this claim" (no matching PV) or "waiting for a volume to be created" (dynamic provisioner is running).

List StorageClasses

The StorageClass marked (default) is used when a PVC specifies no storageClassName:

kubectl get storageclass

Sample output:

NAME                 PROVISIONER             RECLAIMPOLICY   BINDINGMODE
standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer
fast-ssd             docker.io/hostpath      Delete          WaitForFirstConsumer

Check which Pod is using a PVC

Describe the PVC and look for the Used By field:

kubectl describe pvc uploads-pvc | grep "Used By"

Expand a PVC (if StorageClass allows it)

If the StorageClass has allowVolumeExpansion: true, you can increase the PVC's capacity by editing it:

kubectl patch pvc uploads-pvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

Delete a PVC

Deleting a PVC unbinds the PV. What happens next depends on the reclaim policy:

kubectl delete pvc uploads-pvc

Step-by-Step Example

The Scenario

We will build two storage examples side by side:

  1. Static Provisioning: Manually create a PV and PVC, then deploy an ASP.NET Core 10 Media API that stores uploaded images in the volume.
  2. Dynamic Provisioning: Define a StorageClass and let Kubernetes create the PV automatically when we submit a PVC.

Part 1 — Static Provisioning

Step 1.1 — Create the PersistentVolume

This PV uses hostPath — a directory on the Kubernetes node's local filesystem. This is suitable for learning and single-node environments. In production, you would replace hostPath with a cloud disk or NFS share.

Key settings to notice: storageClassName: manual (must match the PVC), Retain reclaim policy (keeps data after the PVC is deleted), and ReadWriteOnce access mode (one node can mount it at a time).

apiVersion: v1
kind: PersistentVolume
metadata:
  name: uploads-pv
  labels:
    type: local
    app: media-api
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/media-uploads

kubectl apply -f uploads-pv.yaml

Verify the PV was created and is in Available status (not yet bound to any PVC):

kubectl get pv uploads-pv

Step 1.2 — Create the PersistentVolumeClaim

The PVC requests 10Gi of storage with ReadWriteOnce access, using storageClassName: manual. Kubernetes will search for a PV that matches all three criteria and bind them together.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uploads-pvc
  labels:
    app: media-api
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

kubectl apply -f uploads-pvc.yaml

Verify binding — the PVC status should change from Pending to Bound:

kubectl get pvc uploads-pvc

Step 1.3 — Mount the PVC in the Media API Deployment

There are two important parts to mounting a PVC in a Pod:

  1. In spec.volumes, declare the volume and point it at the PVC by name (claimName: uploads-pvc).
  2. In spec.containers[].volumeMounts, tell the container where inside its filesystem to mount that volume (mountPath: /mnt/uploads).

The Media API reads the upload path from the environment variable Storage__UploadPath. This way, the path is configurable without rebuilding the image.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: media-api
  labels:
    app: media-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: media-api
  template:
    metadata:
      labels:
        app: media-api
    spec:
      containers:
        - name: media-api
          image: mcr.microsoft.com/dotnet/aspnet:10.0
          ports:
            - containerPort: 8080
          env:
            - name: ASPNETCORE_URLS
              value: "http://+:8080"
            - name: Storage__UploadPath
              value: "/mnt/uploads"
          volumeMounts:
            - name: uploads-volume
              mountPath: /mnt/uploads
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "256Mi"
          readinessProbe:
            httpGet:
              path: /healthz/ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
            failureThreshold: 3
            timeoutSeconds: 3
      volumes:
        - name: uploads-volume
          persistentVolumeClaim:
            claimName: uploads-pvc

kubectl apply -f media-api-deployment.yaml

Step 1.4 — Verify the Volume is Mounted

Find the running Pod name, then open a shell inside it and check the mount point:

kubectl get pods -l app=media-api
kubectl exec -it <pod-name> -- df -h /mnt/uploads

You should see the 10Gi volume mounted at /mnt/uploads. Any files written there will persist even if this Pod is deleted and a new one takes its place.

Step 1.5 — Prove Data Survives a Pod Restart

Write a test file into the volume, delete the Pod, and verify the file is still there after Kubernetes creates a replacement Pod:

# Write a test file inside the running Pod
kubectl exec -it <pod-name> -- sh -c "echo 'hello persistent world' > /mnt/uploads/test.txt"

# Delete the Pod — the Deployment will recreate it immediately
kubectl delete pod <pod-name>

# Wait for the new Pod to be ready
kubectl get pods -l app=media-api -w

# Open a shell in the NEW Pod and check the file still exists
kubectl exec -it <new-pod-name> -- cat /mnt/uploads/test.txt

The output hello persistent world confirms that the data survived the Pod recreation. This is the core value proposition of PersistentVolumes.

Part 2 — Dynamic Provisioning with StorageClass

Step 2.1 — Create a StorageClass

A StorageClass defines a storage tier. The provisioner field tells Kubernetes which plugin to call when it needs to create a new disk. The reclaimPolicy: Delete means the disk will be automatically destroyed when the PVC is deleted. allowVolumeExpansion: true lets you grow the PVC size after creation.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: docker.io/hostpath
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

kubectl apply -f fast-storage-class.yaml

Step 2.2 — Create a PVC that Triggers Dynamic Provisioning

This PVC references storageClassName: fast-ssd. Because no PV with this storage class exists yet, Kubernetes will call the fast-ssd provisioner to create one automatically. With WaitForFirstConsumer binding mode, the PV is not created until a Pod actually tries to use this PVC — this avoids provisioning volumes in the wrong availability zone.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gallery-pvc
  labels:
    app: media-api
spec:
  storageClassName: fast-ssd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

kubectl apply -f gallery-pvc-dynamic.yaml

Check the PVC status — it will be Pending until a Pod claims it (because of WaitForFirstConsumer):

kubectl get pvc gallery-pvc

Attach this PVC to a Deployment (same pattern as Part 1, using claimName: gallery-pvc), and the provisioner will create the PV automatically as soon as the Pod is scheduled to a node.
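The relevant fragment of that Deployment would look like this (only the volume wiring is shown; the mount path is illustrative, and the rest of the manifest matches the Part 1 Deployment):

```yaml
# Inside the Deployment's Pod template spec:
      containers:
        - name: media-api
          # ...image, env, resources, and probes as in Part 1...
          volumeMounts:
            - name: gallery-volume
              mountPath: /mnt/gallery    # illustrative mount path
      volumes:
        - name: gallery-volume
          persistentVolumeClaim:
            claimName: gallery-pvc       # triggers dynamic provisioning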

kubectl get pv

After the Pod runs, a new PV appears automatically — you never had to create it manually. This is the power of dynamic provisioning.

Understanding the Full Picture

Here is how the three components relate when your Media API Pod is running:

[ Media API Pod ]
        |
        | mounts via volumeMount
        v
[ uploads-volume (volume reference in the Pod spec) ]
        |
        | resolves to
        v
[ uploads-pvc (PersistentVolumeClaim) ]
        |
        | bound to
        v
[ uploads-pv (PersistentVolume) ]
        |
        | backed by
        v
[ /data/media-uploads on the node filesystem (hostPath) ]
        OR
[ Cloud disk (AWS EBS / Azure Disk / GCP PD) ]

Your application code only knows about /mnt/uploads. Everything below that path is Kubernetes infrastructure that the developer does not manage. You can swap a hostPath PV with an Azure Disk by changing the PV manifest — your Pod and your ASP.NET Core code do not change at all.

Summary

Kubernetes persistent storage separates the physical storage resource from the application's request for storage, with StorageClasses bridging the two automatically.

  1. A PersistentVolume (PV) is the actual storage — a cluster-level resource backed by a cloud disk, NFS share, or local path. It has a capacity, access mode, and reclaim policy.
  2. A PersistentVolumeClaim (PVC) is the request for storage — a namespace-level resource that your Pod references. Kubernetes binds it to a suitable PV automatically.
  3. A StorageClass automates PV creation. Developers only write PVCs, and the StorageClass provisioner creates the underlying disk on demand. Most cloud clusters have a default StorageClass.
  4. Access modes define how many Pods/nodes can use the volume: ReadWriteOnce (one node) is most common; ReadWriteMany requires a network filesystem.
  5. Use Retain reclaim policy for production databases. Use Delete for temporary or dev workloads. Never rely on Recycle.
  6. Mounting a PVC in a Pod requires two fields: spec.volumes (declare the PVC by name) and spec.containers[].volumeMounts (declare the mount path inside the container).
  7. Data written to a PVC-backed volume survives Pod deletion, Pod rescheduling, and node restarts — as long as the PVC itself is not deleted.

