Running Your Containers in Kubernetes: Understanding Pods
Pods are the most fundamental building block in Kubernetes. Every container you run on a Kubernetes cluster is managed through a Pod — you never create containers directly. Mastering Pods is the first and most important step toward becoming a confident Kubernetes practitioner.
In this article you will learn what Pods are and why Kubernetes uses them instead of bare containers. You will discover how Pods get IP addresses, how multiple containers inside a single Pod communicate, and how to design Pods correctly so they are easy to destroy and recreate. Finally, you will get hands-on experience creating, inspecting, and deleting Pods using kubectl.
By the end of this article you will be able to:
- Explain the relationship between containers and Pods
- Describe how Pod networking works with the Container Network Interface (CNI)
- Apply Pod design principles such as statelessness and external state storage
- Create, inspect, and delete Pods using kubectl
- Access a running Pod via port-forwarding and kubectl exec
Core Concepts
What is a Pod?
A Pod is the smallest deployable unit in Kubernetes. It is a group of one or more containers that Kubernetes launches together on the same worker node, sharing a set of Linux namespaces (most importantly the network namespace). When you ask Kubernetes to run an application, you do not hand it a container definition — you hand it a Pod definition, and Kubernetes takes care of scheduling and launching the underlying containers.
The golden rule of Pods is simple: all containers in a Pod always run on the same node. A Pod can never span multiple worker nodes. This is an absolute constraint that Kubernetes enforces.
Why Not Create Containers Directly?
Modern applications are rarely made up of a single process. Consider a traditional WordPress site: it needs both an NGINX web server and a PHP-FPM interpreter running simultaneously. On a virtual machine you would install both on the same machine. In the container world, the golden rule is one process per container — so you end up with two containers that must communicate and share a file system.
Doing this at scale with raw Docker commands — managing custom networking, volume mounts across environments and machines — quickly becomes unmanageable. This is precisely the problem Pods solve. A Pod groups multiple containers logically, giving them three shared capabilities out of the box:
- All containers in the same Pod can reach each other via localhost, because they share the same network namespace.
- All containers in the same Pod share the same port space.
- Volumes attached to a Pod can be mounted into any of its containers, allowing them to share file system locations.
Single-Container vs Multi-Container Pods
In practice, the majority of Pods you will create contain only one container. This is the standard pattern for microservices: one Pod, one container, one process. Multi-container Pods appear when two processes are so tightly coupled that they must run together — for example, an application container alongside a logging sidecar that streams its log files.
Regardless of how many containers a Pod holds, the Pod is always the lowest level of abstraction you interact with through the Kubernetes API. Kubernetes only manages containers it has launched through Pods — any container started manually on a cluster node is invisible to Kubernetes.
Each Pod Gets a Private IP Address
When Kubernetes schedules a Pod onto a node, it automatically assigns the Pod a private IP address. Every Pod in the cluster can communicate with every other Pod using these IP addresses, regardless of which node they are running on. This is called the flat network model.
The component that implements this networking model is the Container Network Interface (CNI). CNI is a standard interface between the container runtime and the underlying network infrastructure. Popular CNI plugins include Flannel, Calico, and Cilium. Each plugin is invoked by the container runtime as an executable, receiving its configuration on standard input and reporting results on standard output, and handles IP provisioning and cross-node connectivity on behalf of Kubernetes.
Accessing Pods directly by IP address is possible but not recommended. In later articles you will learn about the Service resource, which provides a stable DNS name and virtual IP that maps to a dynamic set of Pods.
How to Design Your Pods
The second golden rule of Pods is that they must be easy to destroy and recreate at any moment. A worker node failure, a deployment update, or a resource-pressure eviction can terminate a Pod without warning. Your application must tolerate this. Follow these two design principles:
- A Pod should be self-contained — it must include everything needed to start the application. If the Pod is recreated from scratch, the application should come back up without any manual steps.
- A Pod should be stateless — any data that must survive Pod restarts must be stored outside the Pod, in a database, an external cache, or a Kubernetes PersistentVolume.
A common mistake is packing the application and its database into the same Pod. This creates three problems: poor data durability (database data is lost when the Pod restarts), reduced availability (the database must restart every time the application crashes), and weaker stability (a memory leak in the application can take down the database too). The right approach is to run the database in its own dedicated Pod — Pods can communicate via IP addresses or DNS, so the decoupling does not prevent connectivity.
Hands-On: Kubernetes Commands
Creating Pods Imperatively
The quickest way to create a Pod is with kubectl run. This creates a single Pod without writing a YAML file first. It is useful for quick debugging or testing.
The --restart=Never flag explicitly requests a bare Pod. In older kubectl versions, kubectl run without this flag created a Deployment; modern versions create a Pod by default, but the flag makes the intent unambiguous. The -- sleep 3600 part overrides the container command so the Pod stays alive for one hour.
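For example, a minimal sketch — the Pod name `sleeper` and the `busybox` image are illustrative choices, not part of any required convention:

```shell
# Create a bare Pod named "sleeper" that runs busybox and sleeps for an hour
kubectl run sleeper --image=busybox --restart=Never -- sleep 3600
```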
Listing Pods
Use kubectl get pods to see all Pods in the current namespace and their status.
Add -o wide to see the node name and Pod IP address:
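For example:

```shell
# List Pods in the current namespace with their status
kubectl get pods

# Include the node name and Pod IP address in the output
kubectl get pods -o wide
```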
Inspecting a Pod
kubectl describe pod shows the full lifecycle of a Pod including events, container statuses, resource limits, and scheduling decisions. It is your first stop when a Pod is not behaving as expected.
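For example, assuming a Pod named `sleeper` exists:

```shell
# Show events, container statuses, resource limits, and scheduling details
kubectl describe pod sleeper
```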
Reading Pod Logs
Use kubectl logs to read the standard output of any container in a Pod.
Follow live log output with the -f flag:
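For example, again assuming a Pod named `sleeper`:

```shell
# Print the container's standard output
kubectl logs sleeper

# Stream new log lines continuously, similar to tail -f
kubectl logs -f sleeper
```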
Running Commands Inside a Pod
kubectl exec opens an interactive shell or runs a one-off command inside a running container, similar to docker exec.
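A sketch of both forms, using the illustrative `sleeper` Pod:

```shell
# Run a one-off command inside the Pod's container
kubectl exec sleeper -- ls /

# Open an interactive shell (the container image must include a shell binary)
kubectl exec -it sleeper -- sh
```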
Port Forwarding
Port forwarding creates a temporary tunnel between a port on your local machine and a port on a running Pod. This is ideal for testing an HTTP endpoint without exposing it through a Service.
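For example, assuming a Pod named `web-pod` serving HTTP on port 8080:

```shell
# Forward local port 8080 to port 8080 inside the Pod
kubectl port-forward web-pod 8080:8080
```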
After running this command, open http://localhost:8080 in your browser to reach the Pod directly.
Creating Pods Declaratively
The recommended approach for creating Pods is to write a YAML file and apply it. This makes your configuration reproducible and version-controllable.
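A minimal sketch of a single-container Pod manifest — the name, image, and port below are illustrative:

```yaml
# nginx-pod.yaml — a minimal single-container Pod
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test
spec:
  containers:
    - name: nginx
      image: nginx:1.27
      ports:
        - containerPort: 80
```

Apply it with `kubectl apply -f nginx-pod.yaml`.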
Deleting a Pod
To delete a Pod by name:
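For example, assuming a Pod named `nginx-test`:

```shell
# Delete the Pod by name
kubectl delete pod nginx-test
```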
To delete using the YAML manifest:
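Assuming the manifest file used to create the Pod:

```shell
# Delete whatever resources the manifest created
kubectl delete -f nginx-pod.yaml
```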
Step-by-Step Example
In this example we will deploy a simple ASP.NET Core weather API running on .NET 10 as a single-container Pod, access it via port-forwarding, inspect its logs, and connect to it interactively with kubectl exec. We will also run a busybox Pod alongside it to simulate a test Pod calling the API from within the cluster.
Step 1 — Create the BusyBox Test Pod
First, create a lightweight BusyBox Pod that we can use for network testing inside the cluster. Save the following manifest as busybox-test-pod.yaml:
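One possible version of this manifest — the image tag is illustrative:

```yaml
# busybox-test-pod.yaml — a lightweight Pod for in-cluster network tests
apiVersion: v1
kind: Pod
metadata:
  name: busybox-test
spec:
  containers:
    - name: busybox
      image: busybox:1.36
      # Keep the container running so we can exec into it later
      command: ["sleep", "3600"]
```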
Apply the manifest:
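```shell
kubectl apply -f busybox-test-pod.yaml
```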
Step 2 — Create the Weather API Pod
Now create the main application Pod. This runs an ASP.NET Core application on .NET 10 published on port 8080. Save the following manifest as weather-api-pod.yaml:
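A sketch of the manifest — the image reference below is a placeholder; substitute the registry path of your own weather API build:

```yaml
# weather-api-pod.yaml — single-container Pod for the ASP.NET Core weather API
# (the image name is a placeholder, not a published image)
apiVersion: v1
kind: Pod
metadata:
  name: weather-api
spec:
  containers:
    - name: weather-api
      image: myregistry.example.com/weather-api:1.0
      ports:
        - containerPort: 8080
```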
Apply the manifest:
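```shell
kubectl apply -f weather-api-pod.yaml
```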
Step 3 — Verify Both Pods Are Running
Wait a few seconds for the images to be pulled, then check that both Pods are in the Running state. Note the IP addresses assigned to each Pod.
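```shell
kubectl get pods -o wide
```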
Expected output:
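The output will look roughly like this (abridged; Pod names aside, the IP addresses, node name, and ages shown here are illustrative and will differ on your cluster):

```
NAME           READY   STATUS    RESTARTS   AGE   IP           NODE
busybox-test   1/1     Running   0          30s   10.244.0.5   worker-1
weather-api    1/1     Running   0          20s   10.244.0.6   worker-1
```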
Step 4 — Inspect the Weather API Pod
Use kubectl describe to examine the full details of the Pod — the assigned node, IP address, image pulled, resource limits, and events. This is the most useful command when debugging a Pod that will not start.
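```shell
kubectl describe pod weather-api
```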
Step 5 — Access the API via Port Forwarding
Open a local tunnel to the weather API Pod. This maps port 8080 on your laptop to port 8080 on the Pod.
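```shell
# Map local port 8080 to port 8080 on the Pod; leave this running
kubectl port-forward weather-api 8080:8080
```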
Open a second terminal and use curl to call the API, or open http://localhost:8080 in your browser:
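The request path below assumes the default ASP.NET Core weather sample route; adjust it if your API exposes a different endpoint:

```shell
curl http://localhost:8080/weatherforecast
```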
Step 6 — Read the Pod Logs
In a new terminal, read the live logs from the weather API container. You should see the ASP.NET Core startup messages and the HTTP request you just made via port-forwarding.
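```shell
kubectl logs -f weather-api
```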
Step 7 — Connect to the BusyBox Pod and Call the API
Now simulate what happens when another Pod calls the weather API over the cluster's flat network. Open a shell inside the busybox-test Pod and use wget to reach the weather API using its Pod IP address (replace 10.244.0.6 with the actual IP from Step 3).
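```shell
# Open an interactive shell inside the BusyBox Pod
kubectl exec -it busybox-test -- sh
```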
Inside the container shell:
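The request path again assumes the default weather sample route:

```shell
# Call the weather API by its Pod IP over the cluster network
wget -qO- http://10.244.0.6:8080/weatherforecast
```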
This demonstrates the flat network model: the BusyBox Pod can reach the Weather API Pod directly using its IP address, with no firewall rules or port mappings needed.
Step 8 — Clean Up
Delete both Pods when you are done:
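```shell
kubectl delete pod weather-api busybox-test
```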
Or delete using the manifest files:
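```shell
kubectl delete -f weather-api-pod.yaml -f busybox-test-pod.yaml
```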
Summary
Pods are the foundational unit of Kubernetes. Every container you run in a cluster runs inside a Pod, and every Pod is scheduled onto exactly one worker node. Understanding why Pods exist — to enable easy inter-container communication at scale — helps you reason about when to use single-container Pods and when to group multiple containers together.
The two golden rules to remember are: all containers in a Pod share a node, a network namespace, and optionally volumes; and Pods must be stateless and easy to recreate. Violating the second rule — for example by storing database state inside the same Pod as the application — leads to data loss and reduced availability.
You now have the skills to create Pods both imperatively with kubectl run and declaratively with YAML manifests, to inspect and debug them with kubectl describe and kubectl logs, to reach them via port-forwarding, and to interact with them using kubectl exec. In the next articles you will build on this foundation by learning about Deployments and Services, which manage Pods at scale and expose them reliably to other workloads.