Kubernetes is container orchestration software that allows us to deploy, manage, and scale containerized applications. It has gained a lot of traction in recent years and has become the most viable way to horizontally and vertically scale applications, often outperforming traditional methods such as virtualization. Kubernetes was originally developed at Google, which later donated the project to the Cloud Native Computing Foundation (CNCF).
If you have not been keeping up with containerization technology in recent years, or you are a newcomer to system administration in general, then you may be wondering exactly what Kubernetes is and what it is used for. It is common to find very technical explanations online, which shed very little light on its real purpose and the role it plays with containerization, unless you already have some experience and knowledge under your belt.
In this tutorial, we will explain Kubernetes' purpose and what it is used for. We will also learn why Kubernetes is especially relevant for Linux administrators, and why it typically uses a Linux system as its host. Join us below as we discuss how Kubernetes simplifies container management and deployment.
In this tutorial you will learn:
- What is Kubernetes used for?
- What is containerization?
- How does Kubernetes manage containers?
|Category|Requirements, Conventions or Software Version Used|
|---|---|
|System|Any Linux distro|
|Other|Privileged access to your Linux system as root or via the `sudo` command.|
|Conventions|`#` – requires given linux commands to be executed with root privileges either directly as a root user or by use of `sudo` command<br>`$` – requires given linux commands to be executed as a regular non-privileged user|
What is containerization?
Before understanding Kubernetes and why it even needs to exist, it is absolutely essential to understand containerization first. Containerization allows a developer to package software and its dependencies into a single container that can then be run identically on any system, no matter what operating system, distribution, or configuration is on the host system.
The most popular name in containerization is Docker, but there are a few other popular choices as well. Docker and other software allow us to create the containers that run our selected software. Using containers has become popular because they are very lightweight and can run on any type of system. Contrast this to the traditional method of deploying applications directly on the operating system, which ties many files to the server and makes it completely unportable. A container is uncoupled from the operating system and contains everything it needs in order to run what is contained inside.
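To make this concrete, a container image is usually described in a short build file. Below is a minimal, hypothetical Dockerfile that packages a static website together with the nginx web server; the `./site/` directory is an assumption for illustration:

```dockerfile
# Start from the official nginx base image, which carries its own OS layer
FROM nginx:alpine

# Copy our site content into the directory nginx serves by default
COPY ./site/ /usr/share/nginx/html/

# Document the port the containerized server listens on
EXPOSE 80
```

Building this with `docker build -t my-site .` produces an image that runs the same way on any host with a container runtime, which is exactly the portability described above.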
So, containers are clearly a big step in the right direction for hosting applications. However, a problem arises when it comes to managing these containers at a large scale. This is the problem that Kubernetes addresses.
How does Kubernetes manage containers?
Kubernetes was made to manage containerized applications at scale. For example, if you have your web server packaged into a container, but need to make sure that it has redundancy, load balancing, and can scale onto multiple systems to help distribute traffic load, Kubernetes can do all of this.
Kubernetes works by having one master node (or a few, for redundancy) and any number of worker nodes. When you hand off your containerized applications to Kubernetes, it will distribute the load as instructed across all of its worker nodes. Using the kubectl command, you can interact with your master node and instruct Kubernetes on how to run your cluster.
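The hand-off usually takes the form of a manifest describing the desired state. The sketch below is a minimal, hypothetical Deployment; the name `web` and the `nginx` image are assumptions for illustration. It asks Kubernetes to keep three replicas of a containerized web server running across the worker nodes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # Kubernetes keeps three copies running across the workers
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:alpine  # any containerized application image works here
          ports:
            - containerPort: 80
```

Applying it with `kubectl apply -f deployment.yaml` hands the desired state to the master node, which then schedules the containers onto the worker nodes for you.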
The master node, along with your own applied configurations, will manage the deployment, scheduling, load balancing, healing, and other aspects of the nodes and applications in your cluster. It is also simple enough to push new updates into the cluster, rollback states as needed, and check logs or perform other administrative functions from a centralized location.
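These administrative tasks map onto a handful of kubectl subcommands. The commands below are a sketch that assumes a running cluster and a Deployment named `web` (a hypothetical name used for illustration):

```shell
# Push a new image version into the cluster as a rolling update
kubectl set image deployment/web web=nginx:1.25

# Watch the rollout progress, or roll back to the previous state if needed
kubectl rollout status deployment/web
kubectl rollout undo deployment/web

# Check logs and cluster state from one centralized location
kubectl logs deployment/web
kubectl get nodes
```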
Containerized applications: When hosting containerized applications, it is very easy to move from the cloud to on-site servers and vice versa. Since everything is self contained, all you need to do is hand your configuration to Kubernetes and let it handle the rest.
Highly scalable: Scalability was problematic and clunky with containerized applications and virtualization before Kubernetes came onto the scene. Now a single administrator can manage multiple applications across hundreds of nodes, on site or in the cloud, from one command line terminal.
Less downtime: Self healing, load balancing, and failover capabilities are built into Kubernetes. You have the choice on how to configure these settings, but there is not a lot of tinkering needed to get it up and running. With Kubernetes, when a node goes down, it reacts to the problem and keeps your applications running.
Hybrid environments: Kubernetes does not care where the worker nodes are located. It is simple enough to create a hybrid environment of physical servers and cloud resources which can all join the Kubernetes cluster and share the workload. This is both an advantage of containerization technology in general, as well as the administrative capabilities built into Kubernetes.
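As a sketch of the scalability point above, growing an application is a one-line operation, and Kubernetes can also scale it automatically based on load. The deployment name `web` is a hypothetical example, and these commands assume a running cluster:

```shell
# Manually scale a deployment to ten replicas from one terminal
kubectl scale deployment/web --replicas=10

# Or let Kubernetes autoscale it between 3 and 20 replicas based on CPU usage
kubectl autoscale deployment/web --min=3 --max=20 --cpu-percent=75
```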
In this tutorial, we learned what Kubernetes is used for. We started by understanding containerization technology and the advantages it offers over traditional application deployment and virtualization, then saw the gap that Kubernetes fills by taking these containers to full scale. Kubernetes is an essential technology for Linux administrators, as tools like kubeadm are often used on master nodes running Linux.