Kubernetes Entry Created: 05 Apr 2026 Updated: 05 Apr 2026

Running Your Containers in Kubernetes: Understanding Pods

Pods are the most fundamental building block in Kubernetes. Every container you run on a Kubernetes cluster is managed through a Pod — you never create containers directly. Mastering Pods is the first and most important step toward becoming a confident Kubernetes practitioner.

In this article you will learn what Pods are and why Kubernetes uses them instead of bare containers. You will discover how Pods get IP addresses, how multiple containers inside a single Pod communicate, and how to design Pods correctly so they are easy to destroy and recreate. Finally, you will get hands-on experience creating, inspecting, and deleting Pods using kubectl.

By the end of this article you will be able to:

  1. Explain the relationship between containers and Pods
  2. Describe how Pod networking works with the Container Network Interface (CNI)
  3. Apply Pod design principles such as statelessness and external state storage
  4. Create, inspect, and delete Pods using kubectl
  5. Access a running Pod via port-forwarding and kubectl exec

Core Concepts

What is a Pod?

A Pod is the smallest deployable unit in Kubernetes. It is a group of one or more containers that Kubernetes launches together on the same worker node, sharing a set of Linux namespaces (most importantly the network namespace). When you ask Kubernetes to run an application, you do not hand it a container definition — you hand it a Pod definition, and Kubernetes takes care of scheduling and launching the underlying containers.

The golden rule of Pods is simple: all containers in a Pod always run on the same node. A Pod can never span multiple worker nodes. This is an absolute constraint that Kubernetes enforces.

Why Not Create Containers Directly?

Modern applications are rarely made up of a single process. Consider a traditional WordPress site: it needs both an NGINX web server and a PHP-FPM interpreter running simultaneously. On a virtual machine you would install both on the same machine. In the container world, the golden rule is one process per container — so you end up with two containers that must communicate and share a file system.

Doing this at scale with raw Docker commands — managing custom networking, volume mounts across environments and machines — quickly becomes unmanageable. This is precisely the problem Pods solve. A Pod groups multiple containers logically, giving them three shared capabilities out of the box:

  1. All containers in the same Pod can reach each other via localhost because they share the same network namespace.
  2. All containers in the same Pod share the same port space.
  3. Volumes attached to a Pod can be mounted into any of its containers, allowing them to share file system locations.

Single-Container vs Multi-Container Pods

In practice, the majority of Pods you will create contain only one container. This is the standard pattern for microservices: one Pod, one container, one process. Multi-container Pods appear when two processes are so tightly coupled that they must run together — for example, an application container alongside a logging sidecar that streams its log files.
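The logging-sidecar pattern described above can be sketched as a single Pod manifest in which both containers mount the same emptyDir volume. This is only an illustrative sketch — the image name my-app:1.0 and the log path are hypothetical, not taken from this article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app
    image: my-app:1.0          # hypothetical application image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app  # the app writes its log files here
  - name: log-sidecar
    image: busybox:1.37
    command: ["sh", "-c", "tail -F /var/log/app/app.log"]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app  # same volume, so the sidecar sees the app's files
  volumes:
  - name: app-logs
    emptyDir: {}               # shared scratch volume that lives as long as the Pod
```

Because both containers mount the app-logs volume, the sidecar can stream files the application writes without any network hop between them.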

Regardless of how many containers a Pod holds, the Pod is always the lowest level of abstraction you interact with through the Kubernetes API. Kubernetes only manages containers it has launched through Pods — any container started manually on a cluster node is invisible to Kubernetes.

Each Pod Gets a Private IP Address

When Kubernetes schedules a Pod onto a node, it automatically assigns the Pod a private IP address. Every Pod in the cluster can communicate with every other Pod using these IP addresses, regardless of which node they are running on. This is called the flat network model.

The component that implements this networking model is a Container Network Interface (CNI) plugin. CNI is a standard interface between the container runtime and the underlying network infrastructure. Popular CNI plugins include Flannel, Calico, and Cilium. Each plugin is an executable that the container runtime invokes, passing configuration via standard input and reading results from standard output; the plugin handles IP provisioning and cross-node connectivity on behalf of Kubernetes.

Accessing Pods directly by IP address is possible but not recommended. In later articles you will learn about the Service resource, which provides a stable DNS name and virtual IP that maps to a dynamic set of Pods.

How to Design Your Pods

The second golden rule of Pods is that they must be easy to destroy and recreate at any moment. A worker node failure, a deployment update, or a resource-pressure eviction can terminate a Pod without warning. Your application must tolerate this. Follow these two design principles:

  1. A Pod should be self-contained — it must include everything needed to start the application. If the Pod is recreated from scratch, the application should come back up without any manual steps.
  2. A Pod should be stateless — any data that must survive Pod restarts must be stored outside the Pod, in a database, an external cache, or a Kubernetes PersistentVolume.

A common mistake is packing the application and its database into the same Pod. This creates three problems: poor data durability (database data is lost when the Pod restarts), reduced availability (the database must restart every time the application crashes), and weaker stability (a memory leak in the application can take down the database too). The right approach is to run the database in its own dedicated Pod — Pods can communicate via IP addresses or DNS, so the decoupling does not prevent connectivity.
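As a sketch, the application Pod then receives the database location through configuration instead of bundling a database container. The image name, hostname, and environment variable names below are hypothetical, chosen only to illustrate the decoupling:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: my-app:1.0        # hypothetical application image
    env:
    - name: DATABASE_HOST
      value: "postgres"      # DNS name or IP of the separate database Pod
    - name: DATABASE_PORT
      value: "5432"
```

If the application Pod is destroyed and recreated, it reconnects to the same external database and loses nothing — exactly the "easy to destroy and recreate" property the design principles call for.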

Hands-On: Kubernetes Commands

Creating Pods Imperatively

The quickest way to create a Pod is with kubectl run. This creates a single Pod without writing a YAML file first. It is useful for quick debugging or testing.

kubectl run busybox-test --image=busybox:1.37 --restart=Never -- sleep 3600

The --restart=Never flag sets the Pod's restartPolicy to Never, so the container is not restarted when it exits (in older kubectl versions this flag also determined whether kubectl run created a Deployment or a bare Pod; modern versions always create a Pod). The -- sleep 3600 part overrides the container command so the Pod stays alive for one hour.

Listing Pods

Use kubectl get pods to see all Pods in the current namespace and their status.

kubectl get pods

Add -o wide to see the node name and Pod IP address:

kubectl get pods -o wide

Inspecting a Pod

kubectl describe pod shows the full lifecycle of a Pod including events, container statuses, resource limits, and scheduling decisions. It is your first stop when a Pod is not behaving as expected.

kubectl describe pod busybox-test

Reading Pod Logs

Use kubectl logs to read the standard output of any container in a Pod.

kubectl logs busybox-test

Follow live log output with the -f flag:

kubectl logs -f busybox-test

Running Commands Inside a Pod

kubectl exec opens an interactive shell or runs a one-off command inside a running container, similar to docker exec.

kubectl exec -it busybox-test -- sh
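You can also run a single command without opening an interactive shell. For example, assuming the busybox-test Pod from earlier is still running:

```shell
# Run one-off commands inside the Pod without an interactive session
kubectl exec busybox-test -- ps
kubectl exec busybox-test -- cat /etc/resolv.conf
```

The second command is a handy way to see the cluster DNS configuration the Pod was given.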

Port Forwarding

Port forwarding creates a temporary tunnel between a port on your local machine and a port on a running Pod. This is ideal for testing an HTTP endpoint without exposing it through a Service.

kubectl port-forward pod/weather-api 8080:8080

After running this command, open http://localhost:8080 in your browser to reach the Pod directly.

Creating Pods Declaratively

The recommended approach for creating Pods is to write a YAML file and apply it. This makes your configuration reproducible and version-controllable.

kubectl apply -f weather-api-pod.yaml

Deleting a Pod

To delete a Pod by name:

kubectl delete pod weather-api

To delete using the YAML manifest:

kubectl delete -f weather-api-pod.yaml

Step-by-Step Example

In this example we will deploy a simple ASP.NET Core weather API running on .NET 10 as a single-container Pod, access it via port-forwarding, inspect its logs, and connect to it interactively with kubectl exec. We will also run a busybox Pod alongside it to simulate a test Pod calling the API from within the cluster.

Step 1 — Create the BusyBox Test Pod

First, create a lightweight BusyBox Pod that we can use for network testing inside the cluster. Save the following manifest as busybox-test-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-test
  labels:
    app: busybox-test
    purpose: testing
spec:
  containers:
  - name: busybox
    image: busybox:1.37
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: "50m"
        memory: "32Mi"
      limits:
        cpu: "100m"
        memory: "64Mi"
  restartPolicy: Never

Apply the manifest:

kubectl apply -f busybox-test-pod.yaml

Step 2 — Create the Weather API Pod

Now create the main application Pod. This runs an ASP.NET Core application on .NET 10 published on port 8080. Save the following manifest as weather-api-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: weather-api
  labels:
    app: weather-api
    tier: backend
spec:
  containers:
  - name: weather-api
    image: mcr.microsoft.com/dotnet/aspnet:10.0
    ports:
    - containerPort: 8080
    env:
    - name: ASPNETCORE_URLS
      value: "http://+:8080"
    - name: ASPNETCORE_ENVIRONMENT
      value: "Development"
    resources:
      requests:
        cpu: "100m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "256Mi"
  restartPolicy: Never

Apply the manifest:

kubectl apply -f weather-api-pod.yaml

Step 3 — Verify Both Pods Are Running

Wait a few seconds for the images to be pulled, then check that both Pods are in the Running state. Note the IP addresses assigned to each Pod.

kubectl get pods -o wide

Expected output:

NAME           READY   STATUS    RESTARTS   AGE   IP           NODE
busybox-test   1/1     Running   0          30s   10.244.0.5   node01
weather-api    1/1     Running   0          25s   10.244.0.6   node01

Step 4 — Inspect the Weather API Pod

Use kubectl describe to examine the full details of the Pod — the assigned node, IP address, image pulled, resource limits, and events. This is the most useful command when debugging a Pod that will not start.

kubectl describe pod weather-api

Step 5 — Access the API via Port Forwarding

Open a local tunnel to the weather API Pod. This maps port 8080 on your laptop to port 8080 on the Pod.

kubectl port-forward pod/weather-api 8080:8080

Open a second terminal and use curl to call the API, or open http://localhost:8080 in your browser:

curl http://localhost:8080/weatherforecast

Step 6 — Read the Pod Logs

In a new terminal, read the live logs from the weather API container. You should see the ASP.NET Core startup messages and the HTTP request you just made via port-forwarding.

kubectl logs -f weather-api

Step 7 — Connect to the BusyBox Pod and Call the API

Now simulate what happens when another Pod calls the weather API over the cluster's flat network. Open a shell inside the busybox-test Pod and use wget to reach the weather API using its Pod IP address (replace 10.244.0.6 with the actual IP from Step 3).

kubectl exec -it busybox-test -- sh

Inside the container shell:

wget -qO- http://10.244.0.6:8080/weatherforecast

This demonstrates the flat network model: the BusyBox Pod can reach the Weather API Pod directly using its IP address, with no firewall rules or port mappings needed.

exit

Step 8 — Clean Up

Delete both Pods when you are done:

kubectl delete pod weather-api busybox-test

Or delete using the manifest files:

kubectl delete -f weather-api-pod.yaml -f busybox-test-pod.yaml

Summary

Pods are the foundational unit of Kubernetes. Every container you run in a cluster runs inside a Pod, and every Pod is scheduled onto exactly one worker node. Understanding why Pods exist — to enable easy inter-container communication at scale — helps you reason about when to use single-container Pods and when to group multiple containers together.

The two golden rules to remember are: all containers in a Pod share a node, a network namespace, and optionally volumes; and Pods must be stateless and easy to recreate. Violating the second rule — for example by storing database state inside the same Pod as the application — leads to data loss and reduced availability.

You now have the skills to create Pods both imperatively with kubectl run and declaratively with YAML manifests, to inspect and debug them with kubectl describe and kubectl logs, to reach them via port-forwarding, and to interact with them using kubectl exec. In the next articles you will build on this foundation by learning about Deployments and Services, which manage Pods at scale and expose them reliably to other workloads.

