Understanding Kubernetes Pods
Overview
A Pod is the smallest deployable unit in Kubernetes. It is not a single container — it is a wrapper around one or more containers that share the same network, storage, and lifecycle. In this article, you will learn what a Pod is, why Kubernetes chose Pods over bare containers, how to write Pod manifests step by step, and how to add health checks, resource limits, and volumes progressively. By the end, you will deploy a .NET API inside a Pod and verify every feature hands-on.
Core Concepts
Step 1: What Is a Pod?
Think of a Pod as a shared apartment. Each room is a container — they have their own space (CPU, memory), but they share the same front door (IP address), mailbox (hostname), and common areas (volumes). Kubernetes never runs a container directly. Instead, it always wraps containers inside a Pod.
This means:
- Every container in a Pod gets the same IP address.
- Containers talk to each other over `localhost`.
- They can share files through volumes.
Step 2: Why Pods Instead of Containers?
Imagine you have a web API and a log collector that reads the API's log files from disk. They must share the same filesystem — putting them on separate machines would break the log collector. Kubernetes solves this by grouping them into one Pod, guaranteeing they always land on the same node.
At the same time, keeping them as separate containers inside the Pod gives each one its own resource boundaries. If the log collector leaks memory, the kernel kills only that container — not your web API.
Step 3: What Do Containers in a Pod Share?
| Shared Resource | What It Means |
|---|---|
| Network namespace | All containers share the same IP address and port space. They reach each other via localhost. |
| UTS namespace | All containers see the same hostname. |
| IPC namespace | Containers can communicate via System V IPC or POSIX message queues. |
| Volumes | Containers can mount the same volume at different paths for shared file access. |
Containers in different Pods are fully isolated — different IP addresses, different hostnames — even if they run on the same physical node.
Step 4: The Golden Rule — What Goes Together?
Ask yourself: "Will these containers work correctly if they land on different machines?"
- If no — put them in the same Pod (e.g., web server + local file syncer).
- If yes — use separate Pods (e.g., an API + a database communicate over the network and scale independently).
A common beginner mistake is putting a web app and its database in the same Pod. They communicate over the network anyway, and you need to scale them independently. Keep them in separate Pods.
Step 5: Your First Pod Manifest (The Simplest Form)
Pods are defined in a manifest — a YAML file describing the desired state. You submit this to the Kubernetes API server, and the scheduler places the Pod on a healthy node. This is declarative configuration: you declare what you want, not how to achieve it.
Here is the simplest possible Pod — a single container running NGINX:
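A minimal manifest might look like this (the Pod name, file name, and image tag are illustrative):

```yaml
# nginx-pod.yaml — the simplest possible Pod: one container, nothing else
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
    - name: nginx
      image: nginx:1.27
      ports:
        - containerPort: 80
```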
Try it now:
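Assuming the manifest is saved as `nginx-pod.yaml` (an illustrative name):

```shell
kubectl apply -f nginx-pod.yaml
kubectl get pods
```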
Delete it when done:
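Assuming the Pod name `nginx-pod` used above:

```shell
kubectl delete pod nginx-pod
```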
Step 6: Running a .NET API in a Pod
Now let's use a real application. We will containerize an ASP.NET Weather API and run it in a Pod.
First, build the container image. Save this as Dockerfile in your project root:
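A typical multi-stage Dockerfile for an ASP.NET API looks like this (a sketch; the project name `WeatherApi`, the .NET 8 version, and port 8080 are assumptions):

```dockerfile
# Build stage: restore, compile, and publish the app
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/publish

# Runtime stage: only the ASP.NET runtime plus the published output
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app/publish .
EXPOSE 8080
ENTRYPOINT ["dotnet", "WeatherApi.dll"]
```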
Build and push:
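Replace `<your-registry>` with your container registry (Docker Hub username, ACR, ECR, etc.):

```shell
docker build -t <your-registry>/weather-api:1.0 .
docker push <your-registry>/weather-api:1.0
```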
Now create a Pod manifest. Save as weather-api-pod.yaml:
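A sketch of the manifest, assuming the image pushed above and port 8080:

```yaml
# weather-api-pod.yaml — a single-container Pod running the .NET API
apiVersion: v1
kind: Pod
metadata:
  name: weather-api
  labels:
    app: weather-api
spec:
  containers:
    - name: weather-api
      image: <your-registry>/weather-api:1.0
      ports:
        - containerPort: 8080
```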
Apply and verify:
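```shell
kubectl apply -f weather-api-pod.yaml
kubectl get pods
```

The Pod should reach the `Running` status once the image has been pulled.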
Step 7: Adding Health Checks (Probes)
Kubernetes automatically restarts a container if its main process crashes. But what if the process is running yet deadlocked or stuck? That is where health checks (probes) come in.
What Are the Three Probe Types?
| Probe Type | Question It Answers | What Happens on Failure |
|---|---|---|
| Liveness Probe | Is the application still functioning? | Container is restarted. |
| Readiness Probe | Is the application ready to accept traffic? | Container is removed from Service load balancer (not restarted). |
| Startup Probe | Has the application finished starting up? | Liveness/readiness probes are paused until startup succeeds. |
How a Probe Checks Health
Each probe can use one of three mechanisms:
- httpGet — Makes an HTTP request to a path and port. Success = status code 200–399.
- tcpSocket — Opens a TCP connection to a port. Success = connection established.
- exec — Runs a command inside the container. Success = exit code 0.
Adding Probes to Our .NET Pod
ASP.NET has built-in health check middleware. Assuming your API exposes /healthz for liveness and /ready for readiness, add probes to the container spec:
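A sketch of the probe configuration, assuming the container listens on port 8080; the timings are chosen to produce the 50-second startup window explained below:

```yaml
    # inside spec.containers[0] of weather-api-pod.yaml
    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 10   # up to 10 failures...
      periodSeconds: 5       # ...checked every 5 seconds = 50 s to start
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10      # restart the container if this fails
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5       # pull from load balancing if this fails
```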
The startup probe gives the app up to 50 seconds (10 failures × 5 seconds) to finish starting. Until it succeeds, Kubernetes does not run the liveness or readiness probes.
Step 8: Setting Resource Requests and Limits
Every container can declare how much CPU and memory it needs. There are two settings:
- Request — The minimum resources guaranteed to the container. The scheduler uses this to find a node with enough capacity.
- Limit — The maximum the container is allowed to consume. The kernel enforces this cap.
What happens when limits are exceeded?
- Memory over limit → Container is killed (`OOMKilled`) and restarted.
- CPU over limit → Container is throttled (slowed down), not killed.
Common resource units:
| Resource | Unit Examples | Meaning |
|---|---|---|
| CPU | "250m", "1" | 250 millicores = 0.25 CPU core; "1" = 1 full core |
| Memory | "256Mi", "1Gi" | 256 mebibytes; 1 gibibyte (power-of-two units) |
Add resources to the container spec:
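A sketch matching the numbers discussed below:

```yaml
    # inside spec.containers[0]
    resources:
      requests:
        cpu: "250m"       # guaranteed minimum: 0.25 core
        memory: "128Mi"
      limits:
        cpu: "750m"       # hard cap: 0.75 core
        memory: "512Mi"
```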
This guarantees the container gets at least 0.25 CPU and 128 MiB RAM, but caps it at 0.75 CPU and 512 MiB.
Step 9: Adding Volumes for Persistent Data
By default, data inside a container is ephemeral — it disappears on restart. Volumes attach storage that survives container restarts.
| Volume Type | When to Use | Survives Pod Deletion? |
|---|---|---|
| `emptyDir` | Temporary shared storage between containers in the same Pod | No |
| `hostPath` | Mount a directory from the host node (testing only) | Yes (on same node) |
| `persistentVolumeClaim` | Cloud disks, network storage — production workloads | Yes |
Here is how to add an emptyDir volume for temporary log storage:
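A sketch, assuming the app writes logs to `/app/logs` (an illustrative path):

```yaml
spec:
  containers:
    - name: weather-api
      image: <your-registry>/weather-api:1.0
      volumeMounts:
        - name: log-storage
          mountPath: /app/logs   # where the container sees the volume
  volumes:
    - name: log-storage
      emptyDir: {}               # created empty when the Pod starts
```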
Hands-On: Kubernetes Commands
Create a Pod Imperatively
The quickest way to run a Pod for testing — useful in development:
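For example, to spin up a throwaway NGINX Pod:

```shell
kubectl run nginx-test --image=nginx:1.27 --port=80
```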
Create a Pod from a YAML Manifest
The recommended declarative approach:
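```shell
kubectl apply -f weather-api-pod.yaml
```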
List Running Pods
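```shell
kubectl get pods
```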
Get Wide Output with Node and IP Info
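```shell
kubectl get pods -o wide
```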
Inspect Pod Details
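Assuming the Pod name `weather-api` from the earlier manifest:

```shell
kubectl describe pod weather-api
```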
This shows labels, node placement, container status, events, probe results, and resource usage — essential for debugging.
View Pod Logs
Stream logs in real-time:
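```shell
kubectl logs -f weather-api
```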
View logs from a previous (crashed) container instance:
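```shell
kubectl logs weather-api --previous
```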
Execute Commands Inside the Container
Run a one-off command:
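For example, listing the app directory (the path `/app` matches the Dockerfile assumption above):

```shell
kubectl exec weather-api -- ls /app
```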
Open an interactive shell:
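Using `/bin/sh` here since not every image ships bash:

```shell
kubectl exec -it weather-api -- /bin/sh
```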
Port-Forward to Access the API Locally
Access the Pod from your machine without a Service or Ingress:
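Assuming the container listens on port 8080:

```shell
kubectl port-forward pod/weather-api 8080:8080
```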
Then test it:
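In a second terminal (the `/weatherforecast` route is the ASP.NET template default; adjust to your API's actual endpoint):

```shell
curl http://localhost:8080/weatherforecast
```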
Copy Files To/From the Container
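The file paths here are illustrative:

```shell
# Pod → local machine
kubectl cp weather-api:/app/appsettings.json ./appsettings.json
# local machine → Pod
kubectl cp ./config.json weather-api:/app/config.json
```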
Check Resource Usage
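This requires the metrics-server addon to be installed in the cluster:

```shell
kubectl top pod weather-api
```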
Delete a Pod
By name:
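```shell
kubectl delete pod weather-api
```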
Using the manifest:
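```shell
kubectl delete -f weather-api-pod.yaml
```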
When deleted, the Pod enters the Terminating state and gets a grace period (30 seconds by default) to finish active requests before being killed.
Step-by-Step Example
Now let's put everything together. We will build a complete Pod manifest for the .NET Weather API — with probes, resource limits, and a volume — then deploy and verify each feature.
1. Create the complete manifest. Save it as `weather-api-full.yaml`.
2. Apply the manifest.
3. Watch the Pod come up. Press `Ctrl+C` to stop watching.
4. Verify the probes are working. Scroll to the `Events` section at the bottom — you should see events for scheduling, image pulling, container creation, and probe starts. No probe failure events means everything is healthy.
5. Check resource usage. The values should be within your defined limits (750m CPU, 512Mi memory).
6. Port-forward and test the API from a new terminal.
7. Verify the volume mount. The `emptyDir` volume is working — log files persist across container restarts (but not Pod deletion).
8. Check environment variables.
9. View the logs.
10. Clean up.
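Pulling the pieces together, the complete manifest and the verification commands might look like this. This is a sketch: the image name, port 8080, the `/healthz`, `/ready`, and `/app/logs` paths, and the `/weatherforecast` route are the same assumptions used throughout this article.

```yaml
# weather-api-full.yaml — probes, resources, and a volume in one Pod
apiVersion: v1
kind: Pod
metadata:
  name: weather-api
  labels:
    app: weather-api
spec:
  containers:
    - name: weather-api
      image: <your-registry>/weather-api:1.0
      ports:
        - containerPort: 8080
      startupProbe:
        httpGet:
          path: /healthz
          port: 8080
        failureThreshold: 10   # 10 failures x 5 s = 50 s to start
        periodSeconds: 5
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /ready
          port: 8080
        periodSeconds: 5
      resources:
        requests:
          cpu: "250m"
          memory: "128Mi"
        limits:
          cpu: "750m"
          memory: "512Mi"
      volumeMounts:
        - name: log-storage
          mountPath: /app/logs
  volumes:
    - name: log-storage
      emptyDir: {}
```

Then walk through the steps:

```shell
kubectl apply -f weather-api-full.yaml
kubectl get pods -w                            # watch; Ctrl+C when Running
kubectl describe pod weather-api               # check the Events section
kubectl top pod weather-api                    # needs metrics-server
kubectl port-forward pod/weather-api 8080:8080
curl http://localhost:8080/weatherforecast     # in a second terminal
kubectl exec weather-api -- ls /app/logs       # verify the volume mount
kubectl exec weather-api -- env                # check environment variables
kubectl logs weather-api
kubectl delete pod weather-api                 # clean up
```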
Summary
- A Pod is the smallest deployable unit in Kubernetes — a group of one or more containers sharing network, storage, and lifecycle.
- Containers in the same Pod share the same IP address, hostname, and communicate via
localhost. - Use the golden rule: if containers must be on the same machine, put them in one Pod. Otherwise, use separate Pods.
- Pod manifests are YAML files declaring desired state. Use `kubectl apply -f` to create or update them.
- Build your manifest progressively: start with a basic container, then add probes, then resource limits, then volumes.
- Startup probes protect slow-starting apps. Liveness probes restart unhealthy containers. Readiness probes remove unready containers from load balancers.
- Resource requests guarantee minimum CPU/memory. Resource limits cap the maximum. Always set both in production.
- Volumes provide storage that survives container restarts. Use `emptyDir` for temporary data and `persistentVolumeClaim` for data that must outlive the Pod.
- When a Pod is deleted, it enters a `Terminating` state with a 30-second grace period.
- Key debugging commands: `kubectl logs`, `kubectl exec`, `kubectl describe`, `kubectl port-forward`, `kubectl top`.