DaemonSets in Kubernetes/OpenShift
Before we start talking about DaemonSets, we should first understand Deployments.
What is a Deployment?
A Kubernetes Deployment is a way to tell Kubernetes how to create or modify instances of the Pods that hold a containerized application. Deployments manage stateless services running on your cluster (as opposed to, for example, StatefulSets, which manage stateful services). Their purpose is to keep a set of identical Pods running and to upgrade them in a controlled way. For example, if you define how many replicas (Pods) of your app you want in the Deployment definition, Kubernetes will keep that many replicas of your application running, spread over the nodes. If you ask for 5 replicas across 3 nodes, some nodes will run more than one replica of your application.
Kubernetes automates the work and repetitive manual functions that are involved in deploying, scaling, and updating applications in production.
Since the Kubernetes Deployment controller is always monitoring the health of Pods and nodes, it can replace a failed Pod or reschedule Pods away from failed nodes, ensuring continuity for critical applications.
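As a sketch, a minimal Deployment manifest for the 5-replica example above might look like this (the name `my-app` and the nginx image are illustrative placeholders, not anything from a real cluster):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # illustrative name
spec:
  replicas: 5               # Kubernetes keeps 5 identical Pods running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25   # example stateless workload
        ports:
        - containerPort: 80
```

The scheduler decides where the 5 Pods land; with only 3 nodes available, at least one node ends up with two replicas, exactly as described above.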
Now that we have an understanding of Deployments, it's a good time to talk about DaemonSets.
What is a DaemonSet?
DaemonSets manage groups of replicated Pods. However, DaemonSets adhere to a one-Pod-per-node model, either across the entire cluster or across a subset of nodes. A DaemonSet will not run more than one replica per node. Another advantage of using a DaemonSet is that if you add a node to the cluster, the DaemonSet will automatically spawn a Pod on that node, which a Deployment will not do.
DaemonSets are useful for deploying ongoing background tasks that you need to run on all or certain nodes, and which do not require user intervention. Examples of such tasks include storage daemons like ceph, log collection daemons like fluentd, and node monitoring daemons like collectd.
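As a hedged sketch, a DaemonSet manifest for a log-collection agent like fluentd might look like the following (the image tag, the toleration, and the `logging: "true"` node label are illustrative assumptions, not values from any particular cluster):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      # Optional: restrict the DaemonSet to the subset of nodes
      # carrying this (illustrative) label.
      nodeSelector:
        logging: "true"
      # Allow scheduling onto control-plane nodes as well, if desired.
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.16   # example image; pin a suitable tag in practice
```

Note that there is no `replicas` field: the DaemonSet controller runs exactly one Pod per matching node, and it automatically adds a Pod whenever a new matching node joins the cluster.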
Let's take a real-world example: why is kube-dns a Deployment while kube-proxy is a DaemonSet?

kube-proxy is needed on every node in the cluster to maintain iptables rules, so that every node can reach every Pod no matter which node it resides on. Hence we make it a DaemonSet: when another node is added to the cluster later, kube-proxy is automatically spawned on that node.

kube-dns's responsibility is to resolve a Service name to its IP, and a single replica of kube-dns is enough to do that for the whole cluster. Hence we make it a Deployment, because we don't need kube-dns on every node.
Thanks for Reading…