Kubernetes Entry | Created: 05 Mar 2026 | Updated: 05 Mar 2026

Kubernetes DaemonSets

Overview

Deployments and ReplicaSets answer the question: "run N copies of this workload somewhere in the cluster." But there is a different class of workload where the question is: "run exactly one copy of this workload on every node." Examples include log collectors that must read log files from the local node filesystem, security intrusion-detection agents that monitor network traffic on each machine, hardware monitoring exporters that read CPU and memory counters directly from the kernel, and node-level storage provisioners.

The DaemonSet is the Kubernetes object designed for exactly this pattern. A DaemonSet ensures that one Pod is running on every node in the cluster (or on a labelled subset of nodes). When a new node joins the cluster, the DaemonSet controller automatically schedules the Pod on it. When a node is removed, the Pod is garbage-collected. No human intervention is required.

This article explains how DaemonSets work, when to choose them over ReplicaSets, how to restrict them to specific nodes using label selectors, and how to safely roll out updates without disrupting cluster-wide agents.

Core Concepts

Step 1: DaemonSet vs. ReplicaSet — Choosing the Right Controller

DaemonSets and ReplicaSets are both controllers that ensure a specific number of Pods matches a desired state. The key difference is where the Pods land:

Feature          ReplicaSet                                   DaemonSet
Desired count    You specify a number (e.g., replicas: 3)     Implicitly one Pod per eligible node
Placement        Scheduler spreads Pods across nodes          Exactly one Pod pinned to each eligible node
Node added       No automatic change                          Pod automatically created on the new node
Node removed     Pod rescheduled elsewhere                    Pod garbage-collected (it was bound to that node)
Use case         Stateless replicas serving user traffic      Cluster-wide agents and system daemons

The decision rule is simple: if you need one copy on each node, use a DaemonSet. If you need N copies somewhere in the cluster, use a ReplicaSet or Deployment.

Step 2: Use Cases for DaemonSets

Common real-world DaemonSet workloads include:

  1. Log collectors (e.g., Fluent Bit, Fluentd) — must read container log files from the local node disk, which requires running on every node.
  2. Metrics exporters (e.g., Prometheus Node Exporter) — reads CPU, memory, disk, and network counters directly from the node kernel.
  3. Security agents (e.g., intrusion-detection, vulnerability scanners) — must monitor every machine in the cluster to provide complete coverage.
  4. CNI and storage plugins — network and storage drivers that must be installed and running on every node.
  5. Hardware-specific agents — GPU drivers, FPGA initializers, or SSD performance monitors that only run on nodes with matching hardware.
  6. Compliance tooling — enterprise IT departments may require specific audit or configuration-management agents on every machine, even in a cloud-native cluster.

Step 3: How the DaemonSet Scheduler Works

Normally, a Pod is placed on a node by the Kubernetes scheduler, which weighs resource requests, affinity rules, and taints. DaemonSet Pods take a more constrained path. Before Kubernetes 1.12, the DaemonSet controller set the nodeName field directly in the Pod spec, and the scheduler ignored the Pod entirely. Since 1.12, DaemonSet Pods go through the default scheduler, but the controller pins each Pod to its target node with a node-affinity term, so the outcome is the same: exactly one Pod per eligible node.

An important detail remains either way: the controller adds automatic tolerations (including one for node.kubernetes.io/unschedulable) to DaemonSet Pods, so they are placed even on nodes marked unschedulable (for example, after kubectl cordon). To keep a DaemonSet off particular nodes, restrict it explicitly with a nodeSelector or node affinity.
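As a concrete example of working with taints rather than against them, a DaemonSet that must also run on control-plane nodes can add a toleration to its Pod template. This is a sketch; the taint key shown is the standard control-plane taint applied by kubeadm-style clusters:

```yaml
spec:
  template:
    spec:
      tolerations:
      # Allow scheduling onto control-plane nodes despite their NoSchedule taint.
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
```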

The DaemonSet controller runs its own reconciliation loop:

  1. List all nodes that match the DaemonSet's nodeSelector (if any).
  2. For each matching node, check whether a Pod owned by this DaemonSet is running.
  3. If the Pod is missing, create one using the Pod template with nodeName pre-set.
  4. If the Pod exists on a node that no longer matches (e.g., the label was removed), delete it.
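The reconciliation steps above can be sketched as a pure function: given the current nodes, the Pods the DaemonSet already owns, and its nodeSelector, compute which Pods to create and which to delete. This is a hypothetical, in-memory illustration, not the real controller's API:

```python
def reconcile(nodes, pods, node_selector):
    """Compute DaemonSet reconciliation actions.

    nodes: {node_name: labels_dict} for every node in the cluster.
    pods: set of node names that already run a Pod owned by this DaemonSet.
    node_selector: labels an eligible node must carry ({} = every node).
    Returns (nodes_needing_a_pod, nodes_whose_pod_must_go).
    """
    # Step 1: list all nodes matching the nodeSelector.
    eligible = {
        name for name, labels in nodes.items()
        if all(labels.get(k) == v for k, v in node_selector.items())
    }
    # Steps 2-3: eligible nodes missing a Pod get one created.
    to_create = eligible - pods
    # Step 4: Pods on nodes that no longer match are deleted.
    to_delete = pods - eligible
    return to_create, to_delete

# Example: two nodes labelled ssd=true, one stale Pod left on node-3.
nodes = {
    "node-1": {"ssd": "true"},
    "node-2": {"ssd": "true"},
    "node-3": {},
}
create, delete = reconcile(nodes, {"node-1", "node-3"}, {"ssd": "true"})
print(sorted(create))  # ['node-2']
print(sorted(delete))  # ['node-3']
```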

Step 4: The DaemonSet Spec

A DaemonSet manifest is structurally similar to a ReplicaSet manifest. There is no replicas field (the count is determined by the number of eligible nodes). The critical fields are:

Field                                              Purpose
spec.selector                                      Label query identifying the Pods this DaemonSet owns; must match spec.template.metadata.labels
spec.template                                      Pod blueprint used to create one Pod per eligible node
spec.template.spec.nodeSelector                    Optional; limits the DaemonSet to nodes whose labels match this map
spec.updateStrategy.type                           RollingUpdate (default) or OnDelete
spec.updateStrategy.rollingUpdate.maxUnavailable   Maximum number of Pods that may be unavailable simultaneously during a rollout
spec.minReadySeconds                               How long a newly-ready Pod must stay healthy before the next Pod is updated

A complete DaemonSet spec looks like this (shown without the container detail for brevity):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: audit-agent
  labels:
    app: audit-agent
spec:
  selector:
    matchLabels:
      app: audit-agent
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  minReadySeconds: 30
  template:
    metadata:
      labels:
        app: audit-agent
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: audit-agent
        image: busybox:1.36
        # ... resources, volumeMounts, etc.

Step 5: Mounting the Host Filesystem

Most DaemonSet agents need access to the node's filesystem — for example, to read log files written by the container runtime or to collect kernel metrics. This is done using hostPath volumes:

spec:
  template:
    spec:
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: container-logs
        hostPath:
          path: /var/lib/docker/containers
      containers:
      - name: log-collector
        image: fluent/fluent-bit:3.0
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: container-logs
          mountPath: /var/lib/docker/containers
          readOnly: true

Security note: hostPath volumes give the container direct access to the host filesystem. Mount them readOnly: true wherever possible. Only mount the specific paths you need — never mount the root filesystem.
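A complementary hardening step is to lock down the container itself. This is a sketch using the standard Pod securityContext fields; whether a given agent tolerates these settings depends on the image (many log collectors still need root-level read access to host log files, and some need a writable buffer directory):

```yaml
containers:
- name: log-collector
  image: fluent/fluent-bit:3.0
  securityContext:
    readOnlyRootFilesystem: true       # container may write only to mounted volumes
    allowPrivilegeEscalation: false    # block setuid-style escalation
    capabilities:
      drop: ["ALL"]                    # start from zero Linux capabilities
```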

Step 6: Limiting a DaemonSet to a Subset of Nodes

By default, a DaemonSet runs on every node. To limit it to a subset — for example, nodes with SSDs or GPU hardware — use a nodeSelector inside the Pod template spec.

First, label the target nodes:

kubectl label nodes <node-name> ssd=true

Then add a nodeSelector to the DaemonSet's Pod template:

spec:
  template:
    spec:
      nodeSelector:
        ssd: "true"
      containers:
      - name: ssd-monitor
        image: <your-image>

The DaemonSet controller will only place Pods on nodes that have the ssd=true label. If a new node is added with that label, a Pod is created automatically. If the label is removed from a node, the DaemonSet controller deletes the Pod from that node immediately. Be careful when removing labels from production nodes for this reason.

Step 7: Rolling Updates for DaemonSets

Since Kubernetes 1.6, DaemonSets support the same RollingUpdate strategy as Deployments. When you change anything in spec.template (such as the container image), the controller starts replacing Pods one node at a time.

Two parameters control the update pace:

Parameter                                          Meaning                                                      Recommendation
spec.minReadySeconds                               Seconds a new Pod must stay healthy before the next update   30–60 s for production agents
spec.updateStrategy.rollingUpdate.maxUnavailable   Max Pods down simultaneously during the rollout              Start with 1 for safety; raise it if rollout speed matters

The alternative update strategy is OnDelete: the controller only replaces a Pod when you manually delete it. This is useful when you need full control over which node gets the update first (for example, during a careful canary rollout of a security agent).
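Opting into manual control is a one-line change in the manifest:

```yaml
spec:
  updateStrategy:
    type: OnDelete   # Pods are replaced only when you delete them yourself
```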

Monitor a DaemonSet rollout the same way you monitor a Deployment rollout:

kubectl rollout status daemonset audit-agent

Hands-On: Kubernetes Commands

Create a DaemonSet from a manifest file:

kubectl apply -f audit-agent-daemonset.yaml

List all DaemonSets in the current namespace:

kubectl get daemonsets

Describe a DaemonSet (desired/current/ready counts per node, events):

kubectl describe daemonset audit-agent

List DaemonSet Pods and their nodes (-o wide shows the NODE column):

kubectl get pods -l app=audit-agent -o wide

Add a label to a node (to include it in a node-selected DaemonSet):

kubectl label nodes <node-name> ssd=true

List nodes that have a specific label:

kubectl get nodes --selector ssd=true

Remove a label from a node (this will remove the DaemonSet Pod from that node):

kubectl label nodes <node-name> ssd-

Watch rollout progress of a DaemonSet update:

kubectl rollout status daemonset audit-agent

View rollout history of a DaemonSet:

kubectl rollout history daemonset audit-agent

Roll back a DaemonSet to the previous revision:

kubectl rollout undo daemonset audit-agent

Delete a DaemonSet and all its Pods:

kubectl delete daemonset audit-agent

Delete a DaemonSet but keep its Pods running (on kubectl v1.20+ this is spelled --cascade=orphan; --cascade=false still works but is deprecated):

kubectl delete daemonset audit-agent --cascade=orphan

Step-by-Step Example

In this example you will deploy two DaemonSets: an audit agent that runs on every node, and a high-performance cache API that runs only on nodes labelled with ssd=true. You will then update the audit agent and observe the rolling update.

Step 1: Deploy the Audit Agent on Every Node

The audit agent is a lightweight process that tails /var/log on each node. Because it reads raw node filesystem paths, it cannot be modelled as a ReplicaSet. Apply the manifest (see node-audit-daemonset.yaml in this folder):

kubectl apply -f node-audit-daemonset.yaml

Confirm the DaemonSet was created:

kubectl get daemonsets

Expected output (for a three-node cluster):

NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
audit-agent   3         3         3       3            3           <none>          30s

Step 2: Verify One Pod Per Node

List the Pods with -o wide to see which node each Pod was placed on:

kubectl get pods -l app=audit-agent -o wide

Expected output — one Pod per node, each on a different node:

NAME                READY   STATUS    NODE     IP
audit-agent-4xkpq   1/1     Running   node-1   10.0.0.101
audit-agent-9r2mz   1/1     Running   node-2   10.0.0.102
audit-agent-vbl7t   1/1     Running   node-3   10.0.0.103

Step 3: Describe the DaemonSet

kubectl describe daemonset audit-agent

Check the Node-Selector line (it shows <none> for all-nodes DaemonSets), the Desired Number of Nodes Scheduled, and the Events section which records each Pod creation.

Step 4: Add an SSD Label to a Node

Label one of your nodes to simulate an SSD-equipped machine:

kubectl label nodes node-1 ssd=true

Confirm the label was applied:

kubectl get nodes --selector ssd=true

Expected output:

NAME     STATUS   ROLES   AGE   VERSION
node-1   Ready    agent   1d    v1.30.0

Step 5: Deploy the SSD Cache API on Labelled Nodes Only

The SSD cache API (see ssd-cache-api-daemonset.yaml) is an ASP.NET Core 10 service that uses local SSD-backed fast storage to serve cached responses. It uses a nodeSelector to ensure it only runs on ssd=true nodes.

kubectl apply -f ssd-cache-api-daemonset.yaml
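The manifest file itself is not reproduced in this article; a minimal sketch of what ssd-cache-api-daemonset.yaml might contain (the image placeholder and port are assumptions) is:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-cache-api
spec:
  selector:
    matchLabels:
      app: ssd-cache-api
  template:
    metadata:
      labels:
        app: ssd-cache-api
    spec:
      nodeSelector:
        ssd: "true"            # only nodes labelled ssd=true are eligible
      containers:
      - name: ssd-cache-api
        image: <your-image>    # the article's ASP.NET Core cache API image
        ports:
        - containerPort: 8080  # assumed service port
```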

Verify only one Pod was created (only one node has the ssd=true label):

kubectl get pods -l app=ssd-cache-api -o wide

Expected output:

NAME                  READY   STATUS    NODE     IP
ssd-cache-api-xq9tp   1/1     Running   node-1   10.0.0.101

Step 6: See Automatic Placement on a New SSD Node

Label a second node with ssd=true:

kubectl label nodes node-2 ssd=true

Within seconds, the DaemonSet controller automatically creates a Pod on the newly-eligible node:

kubectl get pods -l app=ssd-cache-api -o wide

Expected output — two Pods, one per SSD-labelled node:

NAME                  READY   STATUS    NODE     IP
ssd-cache-api-xq9tp   1/1     Running   node-1   10.0.0.101
ssd-cache-api-d7rks   1/1     Running   node-2   10.0.0.102

Step 7: Roll Out an Update to the Audit Agent

Edit node-audit-daemonset.yaml — change the image tag to a new version (simulate a patch release). Apply the update:

kubectl apply -f node-audit-daemonset.yaml

Watch the rollout. With maxUnavailable: 1, Kubernetes replaces one Pod at a time:

kubectl rollout status daemonset audit-agent

Expected output as each node is updated:

Waiting for daemon set "audit-agent" rollout to finish: 1 out of 3 new pods have been updated...
Waiting for daemon set "audit-agent" rollout to finish: 2 out of 3 new pods have been updated...
Waiting for daemon set "audit-agent" rollout to finish: 1 of 3 updated pods are available...
daemon set "audit-agent" successfully rolled out

Step 8: Roll Back the Audit Agent

If the new version causes problems, roll back immediately:

kubectl rollout undo daemonset audit-agent

Confirm the rollback:

kubectl rollout status daemonset audit-agent

Step 9: Remove a Node Label and Observe Pod Deletion

Remove the ssd=true label from node-2:

kubectl label nodes node-2 ssd-

The DaemonSet controller immediately deletes the Pod from node-2 because it no longer matches the nodeSelector:

kubectl get pods -l app=ssd-cache-api -o wide

Expected output — only one Pod remains (on node-1):

NAME                  READY   STATUS    NODE     IP
ssd-cache-api-xq9tp   1/1     Running   node-1   10.0.0.101

Step 10: Clean Up

kubectl delete daemonset audit-agent
kubectl delete daemonset ssd-cache-api

Verify no Pods remain from either DaemonSet:

kubectl get pods -l app=audit-agent -o wide
kubectl get pods -l app=ssd-cache-api -o wide

Summary

  1. A DaemonSet ensures exactly one Pod runs on every node (or every node matching a label selector). No replicas field is needed — the count is determined by the number of eligible nodes.
  2. DaemonSet Pods are pinned to their target nodes by the DaemonSet controller (via a node-affinity term on modern Kubernetes; via nodeName before 1.12) and carry automatic tolerations, so they are placed even on cordoned nodes unless explicitly restricted with a nodeSelector or node affinity.
  3. Use DaemonSets for node-local agents: log collectors, metrics exporters, security scanners, CNI plugins, and hardware drivers. Use ReplicaSets/Deployments for stateless application replicas that can run anywhere.
  4. Use spec.template.spec.nodeSelector to restrict a DaemonSet to a subset of nodes. Adding a matching label to a node automatically creates a Pod there. Removing the label automatically deletes the Pod.
  5. DaemonSets support RollingUpdate with maxUnavailable and minReadySeconds — the same parameters as Deployments. Start with maxUnavailable: 1 for safety. Use OnDelete strategy when you need manual per-node control during a rollout.
  6. kubectl rollout status daemonset, kubectl rollout history daemonset, and kubectl rollout undo daemonset all work exactly as they do for Deployments.
  7. DaemonSets are invaluable in autoscaled clusters where nodes are constantly added and removed. The controller guarantees the required agent is present on every node without any manual intervention.