Kubernetes is container orchestration software that allows us to deploy, manage, and scale containerized applications. Even though Kubernetes has a reputation for being highly reliable, the need to restart it may arise, just like it sometimes does for any other application or service. Kubernetes is split into different components that can all be restarted individually, so that other parts can continue running uninterrupted. Ideally, you should only restart the component you are troubleshooting.
In this tutorial, we will go over step-by-step instructions to restart Kubernetes on a Linux system. This includes restarting the master node (kubelet service), the worker nodes, and the pods in the cluster. You will see how to restart and check the status of each of these Kubernetes components below.
In this tutorial you will learn:
- How to restart kubelet service
- How to restart containerization layer
- How to restart master and worker nodes
- How to restart the deployed pods

Category | Requirements, Conventions or Software Version Used
---|---
System | Any Linux distro
Software | Kubernetes
Other | Privileged access to your Linux system as root or via the `sudo` command.
Conventions | `#` – requires given Linux commands to be executed with root privileges either directly as a root user or by use of the `sudo` command; `$` – requires given Linux commands to be executed as a regular non-privileged user
How to Restart Kubernetes on Linux
Check out the various examples below to see how to restart the different components of Kubernetes.
- To restart the kubelet service on the master node or worker nodes, use the following `systemctl` command:
$ sudo systemctl restart kubelet
Afterwards, check on the current status of the kubelet service:
$ sudo systemctl status kubelet
- You can also restart your containerization layer, which will sometimes help with troubleshooting errors. In most cases this is Docker or containerd, but your cluster may use a different runtime:
$ sudo systemctl restart docker
$ sudo systemctl restart containerd
- To restart a worker node completely, we can use the typical `reboot` Linux command. First, we should use the `kubectl cordon` command to make sure Kubernetes does not try to schedule any new pods on the node in the meantime. Cordon the node from a machine with `kubectl` access, then SSH into the worker node and reboot it:
$ kubectl cordon [node_name]
$ sudo reboot
After the reboot completes, mark the node as schedulable again:
$ kubectl uncordon [node_name]
- When it comes to restarting the pods, one way is to set the replicas to 0, then increase them again after a few minutes once they have all had a chance to shut down. For example:
$ kubectl scale deployments/nginx-server --replicas=0
$ kubectl scale deployments/nginx-server --replicas=3
An even better way to restart your pods is to let `rollout restart` do the job. This way, each pod is restarted one at a time, and clients should not notice any downtime:
$ kubectl rollout restart deployment [deployment_name] -n [namespace]
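The worker node restart steps above can be sketched as a small shell script. Note this is only a sketch: the node name `worker-1` is a placeholder, the function prints each command rather than executing it (so the sequence can be reviewed and run step by step against a real cluster), and the `kubectl drain` flags assume a reasonably recent kubectl release.

```shell
#!/bin/sh
# Sketch: print the command sequence for safely restarting a worker node.
# "worker-1" is a placeholder node name; remove the echo wrappers to run
# the commands for real.

print_restart_plan() {
    node="$1"
    # Stop Kubernetes from scheduling new pods on the node.
    echo "kubectl cordon $node"
    # Evict the pods already running there; daemonset pods cannot be
    # evicted, so they are skipped explicitly.
    echo "kubectl drain $node --ignore-daemonsets --delete-emptydir-data"
    # Reboot the node itself (run this one on the node, over SSH).
    echo "sudo reboot"
    # Once the node is back up, allow scheduling again.
    echo "kubectl uncordon $node"
}

print_restart_plan "worker-1"
```

Draining before the reboot is optional but gives running pods a chance to be rescheduled gracefully instead of being killed mid-request.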
If you continue to face errors after a reboot of these various services, try checking the Kubernetes log files for more hints about what could be causing the error. Check the previously linked guide for information on how to do that.
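On systemd-based distributions, the kubelet's own logs live in the journal. This sketch only builds the `journalctl` command string (the one-hour time window is an example value), so it can be tested without root access; run the printed command with `sudo` on the affected node.

```shell
#!/bin/sh
# Build the journalctl invocation for inspecting kubelet logs on a
# systemd-based distro. The time window is an example value; printing
# the command instead of running it avoids needing root access here.

kubelet_log_cmd() {
    echo "journalctl -u kubelet --since \"$1\" --no-pager"
}

kubelet_log_cmd "1 hour ago"
```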
Closing Thoughts
In this tutorial, we saw how to restart Kubernetes on a Linux system. Since Kubernetes is split up into multiple components, we went over restarting the kubelet service, the master node, worker nodes, the containerization layer, and all of the pods through two different methods. When facing a new error, a simple restart of one or more of these components is often the quickest fix.