Kubernetes is open source software for managing containerized applications across a cluster of nodes. One of the most important aspects of administering a Kubernetes cluster is keeping constant tabs on the logs, which give us valuable information about the performance and overall health of the cluster. In this tutorial, we will see how to manage and troubleshoot Kubernetes logs on a Linux system.
In this tutorial you will learn:
- How to check logs with kubectl
- How to check logs with Docker
- How to check logs in the /var/log directory
| Category | Requirements, Conventions or Software Version Used |
|---|---|
| System | Any Linux distro |
| Other | Privileged access to your Linux system as root or via the `sudo` command. |
| Conventions | `#` – requires given linux commands to be executed with root privileges either directly as a root user or by use of the `sudo` command; `$` – requires given linux commands to be executed as a regular non-privileged user |
Checking logs with kubectl
The `kubectl` command on your master node should be the first place you check for relevant logs for the pods and containers in your Kubernetes cluster. Here are some command examples:
- Get the logs for a pod named `nginx`:
$ kubectl logs nginx
- If the pod in question has multiple containers running, we can get logs for all of its containers with the `--all-containers=true` option:
$ kubectl logs nginx --all-containers=true
- To see streaming logs (similar to the `tail -f` Linux command), we can append the `-f` option:
$ kubectl logs -f nginx
- To see the logs of a specific container within a pod, use the `-c` option and specify the container name. For example, to see the logs for the container named nginx1 in the pod named worker2:
$ kubectl logs -c nginx1 worker2
- To see the 30 most recent lines of log output from a pod named `nginx`, use the `--tail` option:
$ kubectl logs --tail=30 nginx
- To see more examples and command line options, use the `-h` option:
$ kubectl logs -h
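One more option worth knowing for troubleshooting: when a container has crashed and restarted, the current logs may not show what went wrong. The `--previous` flag retrieves the logs from the prior, terminated instance of the container. A quick sketch, assuming a pod named `nginx`:

```shell
# Show the logs of the previously terminated container instance
# in the pod named nginx (pod name is an example):
kubectl logs --previous nginx
```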
Checking logs with Docker
We can also check the logs with our container runtime. In most cases this is Docker, although it depends on your environment. This is a good additional step to see more logs for individual containers, especially if the `kubectl` command is not working due to a Kubernetes error.
First, check the container IDs in Docker:
$ docker ps
CONTAINER ID   IMAGE                                 COMMAND                  CREATED       STATUS
705a163eaac3   gcr.io/k8s-minikube/kicbase:v0.0.37   "/usr/local/bin/entr…"   3 hours ago   Up 3 hours
truncated...
Then use the `docker logs` command along with the ID of the container that you want to check the logs for:
$ docker logs 705a163eaac3
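The `docker logs` command accepts options similar to `kubectl logs`, which is handy when inspecting an individual container. For example, using the container ID from the `docker ps` output above:

```shell
# Follow the log output in real time (like tail -f):
docker logs -f 705a163eaac3

# Show only the 30 most recent log lines:
docker logs --tail 30 705a163eaac3
```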
Checking the /var/log directory
Some Kubernetes implementations, such as kubeadm, will also create logs under the typical Linux `/var/log` directory. We can see them with:
$ ls /var/log/pods
kube-flannel_kube-flannel-ds-9wdfm_9f1d0211
kube-system_coredns-787d4945fb-2nwxh_1f01271c
kube-system_coredns-787d4945fb-6sh26_2203c902
kube-system_etcd-kubernetes-master_884126e4a26b
kube-system_kube-apiserver-kubernetes-master_c6ea17320e910d
kube-system_kube-controller-manager-kubernetes-master_0dfeb
kube-system_kube-proxy-mgbwz_6a15d06d-0191-4ec1
kube-system_kube-scheduler-kubernetes-master_6b19524e5ea8b
Navigate into any of these directories to see the relevant log files and their entries. This method allows us to see logs specifically about Kubernetes coredns, API server, etcd, scheduler, controller manager, proxy, and pod network (in our example, this is flannel).
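Each of these directories contains a subdirectory per container, which in turn holds numbered log files. As a sketch, you could view the etcd logs like so (the exact directory and file names will differ on your system):

```shell
# View the most recent log file for the etcd container
# (directory and file names are examples from the listing above
# and will differ per system):
sudo tail /var/log/pods/kube-system_etcd-kubernetes-master_884126e4a26b/etcd/0.log
```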
Kubernetes logs can easily become overwhelming, making them difficult to troubleshoot. Check out our tutorial on Advanced Logging and Auditing on Linux to learn about tools that can collect and compile Kubernetes logs in a centralized location.
In this tutorial, we saw how to manage and troubleshoot Kubernetes logs on a Linux system. This included using the `kubectl logs` command, checking the container runtime logs (in this case, Docker), and checking the Linux log files themselves. Since logs are located in multiple places, it is a good idea to use a third party tool that can help us collect and visualize the data, as mentioned in the tutorial link above.