Kubernetes Entry · Created: 05 Mar 2026 · Updated: 05 Mar 2026

Understanding Kubernetes Pods

Overview

A Pod is the smallest deployable unit in Kubernetes. It is not a single container — it is a wrapper around one or more containers that share the same network, storage, and lifecycle. In this article, you will learn what a Pod is, why Kubernetes chose Pods over bare containers, how to write Pod manifests step by step, and how to add health checks, resource limits, and volumes progressively. By the end, you will deploy a .NET API inside a Pod and verify every feature hands-on.

Core Concepts

Step 1: What Is a Pod?

Think of a Pod as a shared apartment. Each room is a container — they have their own space (CPU, memory), but they share the same front door (IP address), mailbox (hostname), and common areas (volumes). Kubernetes never runs a container directly. Instead, it always wraps containers inside a Pod.

This means:

  1. Every container in a Pod gets the same IP address.
  2. Containers talk to each other over localhost.
  3. They can share files through volumes.

Step 2: Why Pods Instead of Containers?

Imagine you have a web API and a log collector that reads the API's log files from disk. They must share the same filesystem — putting them on separate machines would break the log collector. Kubernetes solves this by grouping them into one Pod, guaranteeing they always land on the same node.

At the same time, keeping them as separate containers inside the Pod gives each one its own resource boundaries. If the log collector leaks memory, the kernel kills only that container — not your web API.

Step 3: What Do Containers in a Pod Share?

Shared Resource   | What It Means
Network namespace | All containers share the same IP address and port space. They reach each other via localhost.
UTS namespace     | All containers see the same hostname.
IPC namespace     | Containers can communicate via System V IPC or POSIX message queues.
Volumes           | Containers can mount the same volume at different paths for shared file access.

Containers in different Pods are fully isolated — different IP addresses, different hostnames — even if they run on the same physical node.
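As a sketch of this sharing model, here is a hypothetical two-container Pod: a sidecar writes heartbeat lines into a shared emptyDir volume, and NGINX serves that same directory over HTTP. The names, images, and paths are illustrative, not part of the lesson's running example.

```yaml
# Hypothetical Pod: two containers sharing a volume and a network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: shared-demo
spec:
  volumes:
  - name: shared-files        # one volume, visible to both containers
    emptyDir: {}
  containers:
  - name: web
    image: nginx:1.27
    volumeMounts:
    - name: shared-files
      mountPath: /usr/share/nginx/html   # NGINX serves what the sidecar writes
  - name: sidecar
    image: busybox:1.36
    # Appends a timestamp every 10 seconds; the file appears in both containers.
    command: ["sh", "-c", "while true; do date >> /data/heartbeat.txt; sleep 10; done"]
    volumeMounts:
    - name: shared-files
      mountPath: /data        # same volume, different mount path
```

Because both containers share one network namespace, a process in the sidecar could also reach NGINX at http://localhost:80 without any Service in between.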

Step 4: The Golden Rule — What Goes Together?

Ask yourself: "Will these containers work correctly if they land on different machines?"

  1. If no — put them in the same Pod (e.g., web server + local file syncer).
  2. If yes — use separate Pods (e.g., an API + a database communicate over the network and scale independently).

A common beginner mistake is putting a web app and its database in the same Pod. They communicate over the network anyway, and you need to scale them independently. Keep them in separate Pods.
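To make the separate-Pods case concrete, here is a hedged sketch of the API and database as two independent Pods in one file. The names and images are illustrative, and in practice you would run each behind a Deployment and a Service rather than as bare Pods.

```yaml
# Hypothetical: API and database as separate Pods, scheduled and scaled independently.
apiVersion: v1
kind: Pod
metadata:
  name: api
  labels:
    app: api
spec:
  containers:
  - name: api
    image: myregistry/weather-api:1.0
---
apiVersion: v1
kind: Pod
metadata:
  name: db
  labels:
    app: db
spec:
  containers:
  - name: db
    image: postgres:16
```

The two Pods would communicate over the cluster network (typically via a Service DNS name), not over localhost.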

Step 5: Your First Pod Manifest (The Simplest Form)

Pods are defined in a manifest — a YAML file describing the desired state. You submit this to the Kubernetes API server, and the scheduler places the Pod on a healthy node. This is declarative configuration: you declare what you want, not how to achieve it.

Here is the simplest possible Pod — a single container running NGINX:

apiVersion: v1
kind: Pod
metadata:
  name: simple-web
spec:
  containers:
  - name: nginx
    image: nginx:1.27
    ports:
    - containerPort: 80
    resources:
      requests:
        cpu: "100m"
        memory: "64Mi"
      limits:
        cpu: "250m"
        memory: "128Mi"

Try it now:

kubectl apply -f simple-web.yaml
kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
simple-web   1/1     Running   0          12s

Delete it when done:

kubectl delete pod simple-web

Step 6: Running a .NET API in a Pod

Now let's use a real application. We will containerize an ASP.NET Weather API and run it in a Pod.

First, build the container image. Save this as Dockerfile in your project root:

FROM mcr.microsoft.com/dotnet/sdk:10.0 AS build
WORKDIR /src
COPY *.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:10.0
WORKDIR /app
COPY --from=build /app/publish .
EXPOSE 8080
ENTRYPOINT ["dotnet", "WeatherApi.dll"]

Build and push:

docker build -t myregistry/weather-api:1.0 .
docker push myregistry/weather-api:1.0

Now create a Pod manifest. Save as weather-api-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: weather-api
  labels:
    app: weather-api
spec:
  containers:
  - name: weather-api
    image: myregistry/weather-api:1.0
    ports:
    - containerPort: 8080
      name: http
      protocol: TCP
    env:
    - name: ASPNETCORE_URLS
      value: "http://+:8080"
    - name: ASPNETCORE_ENVIRONMENT
      value: "Development"
    resources:
      requests:
        cpu: "100m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "256Mi"

Apply and verify:

kubectl apply -f weather-api-pod.yaml
kubectl get pods weather-api
NAME          READY   STATUS    RESTARTS   AGE
weather-api   1/1     Running   0          18s

Step 7: Adding Health Checks (Probes)

Kubernetes automatically restarts a container if its main process crashes. But what if the process is running yet deadlocked or stuck? That is where health checks (probes) come in.

What Are the Three Probe Types?

Probe Type      | Question It Answers                        | What Happens on Failure
Liveness Probe  | Is the application still functioning?      | Container is restarted.
Readiness Probe | Is the application ready to accept traffic? | Container is removed from Service load balancer (not restarted).
Startup Probe   | Has the application finished starting up?  | Liveness/readiness probes are paused until startup succeeds.

How a Probe Checks Health

Each probe can use one of three mechanisms:

  1. httpGet — Makes an HTTP request to a path and port. Success = status code 200–399.
  2. tcpSocket — Opens a TCP connection to a port. Success = connection established.
  3. exec — Runs a command inside the container. Success = exit code 0.
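For the two non-HTTP mechanisms, here are hedged sketches; the port and the command are illustrative placeholders, not part of the Weather API example:

```yaml
# tcpSocket: healthy if a TCP connection to the port can be established.
livenessProbe:
  tcpSocket:
    port: 8080
  periodSeconds: 15
```

```yaml
# exec: healthy if the command exits with code 0.
livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]
  periodSeconds: 15
```

tcpSocket suits services that speak a non-HTTP protocol (databases, message brokers); exec suits checks that only make sense from inside the container, such as testing a local file or CLI.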

Adding Probes to Our .NET Pod

ASP.NET has built-in health check middleware. Assuming your API exposes /healthz for liveness and /ready for readiness, add probes to the container spec:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 2
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 5
  failureThreshold: 10

The startup probe gives the app up to 50 seconds (10 failures × 5 seconds) to finish starting. Until it succeeds, Kubernetes does not run the liveness or readiness probes.

Step 8: Setting Resource Requests and Limits

Every container can declare how much CPU and memory it needs. There are two settings:

  1. Request — The minimum resources guaranteed to the container. The scheduler uses this to find a node with enough capacity.
  2. Limit — The maximum the container is allowed to consume. The kernel enforces this cap.

What happens when limits are exceeded?

  1. Memory over limit → Container is killed (OOMKilled) and restarted.
  2. CPU over limit → Container is throttled (slowed down), not killed.

Common resource units:

Resource | Unit Examples  | Meaning
CPU      | "250m", "1"    | 250 millicores = 0.25 CPU core; "1" = 1 full core
Memory   | "256Mi", "1Gi" | 256 mebibytes; 1 gibibyte (power-of-two units)

Add resources to the container spec:

resources:
  requests:
    cpu: "250m"
    memory: "128Mi"
  limits:
    cpu: "750m"
    memory: "512Mi"

This guarantees the container gets at least 0.25 CPU and 128 MiB RAM, but caps it at 0.75 CPU and 512 MiB.

Step 9: Adding Volumes for Persistent Data

By default, data inside a container is ephemeral — it disappears on restart. Volumes attach storage that survives container restarts.

Volume Type           | When to Use                                                  | Survives Pod Deletion?
emptyDir              | Temporary shared storage between containers in the same Pod  | No
hostPath              | Mount a directory from the host node (testing only)          | Yes (on same node)
persistentVolumeClaim | Cloud disks, network storage — production workloads          | Yes

Here is how to add an emptyDir volume for temporary log storage:

volumes:
- name: log-storage
  emptyDir: {}

# Inside the container spec:
volumeMounts:
- name: log-storage
  mountPath: /app/logs
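For production data that must outlive the Pod, the same pattern applies with a persistentVolumeClaim in place of emptyDir. A sketch, assuming a claim named weather-logs-pvc has already been created in the namespace:

```yaml
volumes:
- name: log-storage
  persistentVolumeClaim:
    claimName: weather-logs-pvc   # hypothetical pre-created claim

# The volumeMounts entry in the container spec is unchanged:
volumeMounts:
- name: log-storage
  mountPath: /app/logs
```

Only the volume definition changes; the container mounts it the same way, so switching storage backends does not require touching application code.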

Hands-On: Kubernetes Commands

Create a Pod Imperatively

The quickest way to run a Pod for testing — useful in development:

kubectl run weather-api --image=myregistry/weather-api:1.0 --port=8080

Create a Pod from a YAML Manifest

The recommended declarative approach:

kubectl apply -f weather-api-pod.yaml

List Running Pods

kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
weather-api   1/1     Running   0          45s

Get Wide Output with Node and IP Info

kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE
weather-api   1/1     Running   0          52s   10.244.1.12   worker-01

Inspect Pod Details

kubectl describe pod weather-api

This shows labels, node placement, container status, events, probe results, and resource usage — essential for debugging.

View Pod Logs

kubectl logs weather-api

Stream logs in real-time:

kubectl logs weather-api -f

View logs from a previous (crashed) container instance:

kubectl logs weather-api --previous

Execute Commands Inside the Container

Run a one-off command:

kubectl exec weather-api -- printenv ASPNETCORE_ENVIRONMENT
Development

Open an interactive shell (fall back to /bin/sh if the image does not include bash):

kubectl exec -it weather-api -- /bin/bash

Port-Forward to Access the API Locally

Access the Pod from your machine without a Service or Ingress:

kubectl port-forward weather-api 8080:8080

Then test it:

curl http://localhost:8080/weatherforecast

Copy Files To/From the Container

kubectl cp weather-api:/app/logs/app.log ./local-app.log

Check Resource Usage

kubectl top pod weather-api
NAME          CPU(cores)   MEMORY(bytes)
weather-api   15m          87Mi

Note that kubectl top requires the metrics-server add-on to be installed in the cluster.

Delete a Pod

By name:

kubectl delete pod weather-api

Using the manifest:

kubectl delete -f weather-api-pod.yaml

When deleted, the Pod enters the Terminating state: Kubernetes sends SIGTERM to its containers and waits a default 30-second grace period for active requests to finish before sending SIGKILL.
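The grace period is configurable per Pod via terminationGracePeriodSeconds in the spec. For example:

```yaml
spec:
  terminationGracePeriodSeconds: 60   # wait up to 60s after SIGTERM before SIGKILL
```

You can also override it for a single deletion with kubectl delete pod weather-api --grace-period=10.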

Step-by-Step Example

Now let's put everything together. We will build a complete Pod manifest for the .NET Weather API — with probes, resource limits, and a volume — then deploy and verify each feature.

  1. Create the complete manifest. Save as weather-api-full.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: weather-api
  labels:
    app: weather-api
    version: "1.0"
spec:
  volumes:
  - name: log-storage
    emptyDir: {}
  containers:
  - name: weather-api
    image: myregistry/weather-api:1.0
    ports:
    - containerPort: 8080
      name: http
      protocol: TCP
    env:
    - name: ASPNETCORE_URLS
      value: "http://+:8080"
    - name: ASPNETCORE_ENVIRONMENT
      value: "Production"
    - name: Logging__LogFilePath
      value: "/app/logs/weather-api.log"
    resources:
      requests:
        cpu: "250m"
        memory: "128Mi"
      limits:
        cpu: "750m"
        memory: "512Mi"
    volumeMounts:
    - name: log-storage
      mountPath: /app/logs
    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 5
      failureThreshold: 10
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
      timeoutSeconds: 2
      failureThreshold: 3
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
      timeoutSeconds: 2
      failureThreshold: 2

  2. Apply the manifest:

kubectl apply -f weather-api-full.yaml

  3. Watch the Pod come up, then press Ctrl+C to stop watching:

kubectl get pods --watch
NAME          READY   STATUS              RESTARTS   AGE
weather-api   0/1     Pending             0          0s
weather-api   0/1     ContainerCreating   0          1s
weather-api   0/1     Running             0          3s
weather-api   1/1     Running             0          8s

  4. Verify the probes are working:

kubectl describe pod weather-api

Scroll to the Events section at the bottom. You should see events for scheduling, image pulling, container creation, and probe starts; no probe failure events means everything is healthy.

  5. Check resource usage:

kubectl top pod weather-api
NAME          CPU(cores)   MEMORY(bytes)
weather-api   12m          92Mi

The values should be within your defined limits (750m CPU, 512Mi memory).

  6. Port-forward and test the API:

kubectl port-forward weather-api 8080:8080

In a new terminal:

curl http://localhost:8080/weatherforecast
[{"date":"2026-03-06","temperatureC":25,"summary":"Warm"},
{"date":"2026-03-07","temperatureC":18,"summary":"Cool"}]

  7. Verify the volume mount:

kubectl exec weather-api -- ls /app/logs
weather-api.log

The emptyDir volume is working: log files persist across container restarts (but not Pod deletion).

  8. Check environment variables:

kubectl exec weather-api -- printenv | grep ASPNETCORE
ASPNETCORE_URLS=http://+:8080
ASPNETCORE_ENVIRONMENT=Production

  9. View the logs:

kubectl logs weather-api

  10. Clean up:

kubectl delete -f weather-api-full.yaml

Summary

  1. A Pod is the smallest deployable unit in Kubernetes — a group of one or more containers sharing network, storage, and lifecycle.
  2. Containers in the same Pod share the same IP address, hostname, and communicate via localhost.
  3. Use the golden rule: if containers must be on the same machine, put them in one Pod. Otherwise, use separate Pods.
  4. Pod manifests are YAML files declaring desired state. Use kubectl apply -f to create or update them.
  5. Build your manifest progressively: start with a basic container, then add probes, then resource limits, then volumes.
  6. Startup probes protect slow-starting apps. Liveness probes restart unhealthy containers. Readiness probes remove unready containers from load balancers.
  7. Resource requests guarantee minimum CPU/memory. Resource limits cap the maximum. Always set both in production.
  8. Volumes provide storage that survives container restarts. Use emptyDir for temporary data and persistentVolumeClaim for data that must outlive the Pod.
  9. When a Pod is deleted, it enters a Terminating state with a 30-second grace period.
  10. Key debugging commands: kubectl logs, kubectl exec, kubectl describe, kubectl port-forward, kubectl top.