How a Pod is created in Kubernetes

April 23, 2025
Nkosinathi Sola

When you create a pod in Kubernetes, what happens behind the scenes? How do the Kubernetes components work together to bring that pod to fruition?

This post goes over the basics, from the perspective of a single pod being created. At the base of our system we have our nodes. The nodes are the machines that make up the Kubernetes cluster. In Kubernetes there are two types of nodes:

  1. Control nodes – In our case we'll use one control node for simplicity's sake, but in a production-level cluster you'd want at least three.

  2. Compute nodes – In our case we'll use two compute nodes, but a Kubernetes cluster can have many.

The basic inner workings of K8s – making a pod.

Kube-API Server

The initial step is to make a call with a kubectl command. That call goes into the Kubernetes cluster and hits the Kube-API server.
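A minimal sketch of that first step might look like this (the pod name "my-nginx" and the nginx image are placeholder choices for this example; any container image works):

```sh
# Create a pod by sending a manifest to the Kube-API server via kubectl:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25
EOF
```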

The Kube-API server is the main management component of a Kubernetes cluster. The first thing the Kube-API server does with my request to make a pod is authenticate and validate it: it checks who you are and confirms you have access to the cluster.
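As an aside, you can ask the API server the same authorization question it answers internally for your request:

```sh
# Check whether your current credentials are allowed to create pods:
kubectl auth can-i create pods
# prints "yes" or "no"
```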

Next, the Kube-API server writes that pod to etcd. etcd is a key-value data store that's distributed across the cluster, and it is the source of truth for the Kubernetes cluster.
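For the curious, here's a sketch of where the object lands, assuming you can reach the cluster's etcd with etcdctl (the /registry prefix is the conventional location, but access and certificates vary by setup):

```sh
# Kubernetes stores objects under /registry/<resource>/<namespace>/<name>;
# list just the key for our pod (the value itself is a binary encoding):
etcdctl get /registry/pods/default/my-nginx --keys-only
```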

So the Kube-API server writes that request to etcd, etcd returns once it has made a successful write, and at that point the Kube-API server reports back to me, the developer, that the pod has been created, even though not much has actually happened in the system yet.

That's because, at its core, Kubernetes works declaratively: etcd holds the desired state of the system, and all the Kubernetes components work together to make the actual state match that desired state. Once the desired state is recorded, the pod is as good as created, as far as Kubernetes is concerned.

Scheduler

The scheduler is keeping an eye out for workloads that need to be placed, and its job is to determine which node each one goes on. It does this by watching the Kube-API server for pods that haven't yet been assigned to a node. Eventually it finds our pod, which needs to be created on one of our compute nodes.
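You can see what the scheduler is looking for. Immediately after the API server accepts the pod, it exists in etcd but has no node; a quick sketch using the placeholder pod from earlier:

```sh
# An unscheduled pod sits in Pending with an empty spec.nodeName:
kubectl get pod my-nginx -o jsonpath='{.spec.nodeName}'
# (prints nothing until the scheduler has picked a node)
```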

Let’s jump to the compute node for a bit.

Our compute nodes have three major components. The first is the kubelet: the kubelet is how the compute node communicates with the control plane (specifically, with the Kube-API server).

Each compute node has a kubelet. The kubelet registers the node with the cluster, sends periodic health checks so that the Kube-API server knows our compute nodes are healthy, and creates and destroys workloads as directed by the Kube-API server.

Each of our compute nodes also has a container runtime engine that's compliant with the Container Runtime Interface (CRI). In the past that was typically Docker, but it could be any compliant runtime, such as containerd or CRI-O.

Finally, each node has a kube-proxy (which isn't needed for this post, but I would be remiss if I didn't mention it). The kube-proxy helps the compute nodes communicate with one another when workloads span more than one node; generally, it handles that networking.
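All three pieces are visible from the outside. For example (the containerd version string below is illustrative; any CRI-compliant runtime shows up the same way):

```sh
# VERSION is the kubelet's version; CONTAINER-RUNTIME is the CRI-compliant
# engine the kubelet is driving, e.g. containerd://1.7.x
kubectl get nodes -o wide
```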

Back to our scheduler

Now that our scheduler is aware that it needs to place a pod, it looks at the available compute nodes and rules out any that are unsatisfactory, either because of limitations the cluster administrator set up, or simply because a node doesn't have space for the pod.
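As one concrete example of such a limitation, a pod can carry a nodeSelector that rules out every node missing a matching label (the disktype=ssd label here is made up for illustration):

```sh
# During filtering, only nodes labeled disktype=ssd survive for this pod;
# every other node is ruled out before scoring even starts.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod
spec:
  nodeSelector:
    disktype: ssd
  containers:
    - name: nginx
      image: nginx:1.25
EOF
```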

From the nodes that are left, it chooses the best one to run the workload on, taking all the factors into account. Once it has made that choice, all it does is tell the Kube-API server where the pod should go. The Kube-API server writes that assignment to etcd, and after the successful write we have a desired state that no longer matches the actual state, and the Kube-API server knows what it needs to do to make the actual state meet the desired state.
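That choice is recorded in the pod's events, so you can see exactly which node the scheduler picked (the node name compute-2 is a placeholder):

```sh
# The Scheduled event shows the scheduler's decision:
kubectl describe pod my-nginx
# Events:
#   Normal  Scheduled  default-scheduler  Successfully assigned default/my-nginx to compute-2
```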

That's when the Kube-API server lets the kubelet on the chosen node know that it needs to spin up a pod. The kubelet works together with the container runtime engine to create a pod with the appropriate container running inside.
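At this point the pod is genuinely running, and the wide output confirms which node it landed on:

```sh
# Once the kubelet and the container runtime have done their work, the pod
# reports Running, and -o wide includes the NODE it was placed on:
kubectl get pod my-nginx -o wide
```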

Congratulations, we have made a pod on the Kubernetes cluster. Really! That's it! That's all you have to know. Now you can go and apply for that DevOps job you've always wanted.

I’m kidding!

There’s one more management piece to cover.

Let's consider a case: when I made my pod I set the restart policy to "Always", but then something happens and the whole pod goes down. A restart policy only tells the kubelet to restart containers inside an existing pod, so how would the system know that I want a new pod created in its place? That is where the controller manager (the last important component of Kubernetes) comes in.

Controller Manager

The controller manager is made up of all of the controllers. Simply put, there are many controllers, and they all run inside the controller manager.

In particular, the one that's going to help me get a new pod is the replication controller within the controller manager. (NB: it does this automatically for me, because my job was basically done at the initial step, defining the desired state. Whoop! Whoop! Lazy bum.) For it to take responsibility for my pod, though, the pod has to be managed by a ReplicationController object rather than created as a bare pod, as shown in the sketch below.
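A minimal sketch of that handover, reusing the placeholder names from earlier (in a modern cluster you'd more likely use a Deployment, which manages ReplicaSets, but the idea is identical):

```sh
# A ReplicationController that keeps exactly one copy of our pod running.
# The selector and template labels tie the controller to its pods.
kubectl apply -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx-rc
spec:
  replicas: 1
  selector:
    app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
EOF
```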

Inside the controller manager, all the different controllers are watching different pieces of the Kubernetes system. The replication controller, just like the scheduler, watches the Kube-API server for updates on the actual state of the cluster, to make sure the desired state and the actual state stay the same as one another.

So the replication controller, through the Kube-API server, sees that the pod is gone, and it takes the necessary steps to spin that pod back up – or rather, to create a new pod, because pods are ephemeral.
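You can watch the self-healing happen: delete the controller-managed pod, and a replacement appears almost immediately, with a new generated name, since the old pod really is gone for good:

```sh
# Kill the pod the ReplicationController created...
kubectl delete pod -l app=my-nginx
# ...and the replication controller spins up a brand-new pod from the template:
kubectl get pods -l app=my-nginx
```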

In conclusion:

All these components are working together just to make a pod.

  1. Kube-API server – the main management component of the cluster.

  2. etcd – the data store and source of truth for the cluster.

  3. Scheduler – determines which of the compute nodes a workload should go to.

  4. Controller manager – watches the actual state and makes sure it's the same as the desired state.