When getting started with Kubernetes, the jargon alone can present a steep learning curve. Words like pods, services, deployments, clusters, applications, nodes, and namespaces get tossed around constantly, and it can be hard for a newcomer to keep up with what is being said. Not to mention that after learning the basic terminology, it is a whole other task to learn how all of these components fit together to form a Kubernetes cluster.
In this tutorial, we will go over all the basics of Kubernetes to help you understand the different components and how they work together. If you are looking to get started with launching a Kubernetes cluster on your Linux system, this is an excellent place to start before diving into your project. Once you get the basics down, the rest is not so hard to understand.
In this tutorial you will learn:
- The basics of Kubernetes and its pertinent jargon
| Category | Requirements, Conventions or Software Version Used |
|---|---|
| System | Any Linux distro |
| Other | Privileged access to your Linux system as root or via the `sudo` command. |
| Conventions | `#` – requires given linux commands to be executed with root privileges either directly as a root user or by use of `sudo` command<br>`$` – requires given linux commands to be executed as a regular non-privileged user |
What is Kubernetes?
Kubernetes is container orchestration software that allows us to deploy, manage, and scale containerized applications. It has gained a lot of traction over recent years and has become the most popular way to scale applications horizontally and vertically, in many cases supplanting traditional virtualization. Kubernetes was originally developed by Google, which open sourced the project and donated it to the Cloud Native Computing Foundation (CNCF), which now maintains it.
What is containerization?
Containerization is similar to virtualization, except that a container does not need a separate operating system to run. The application, its configuration, and all of its dependencies are packaged into a lightweight container that can be ported over to any system. The main advantage of containerization over traditional virtualization is that containers are much more lightweight. The two work the same in concept, but being lightweight gives containers many other inherent advantages: for example, they are easy to scale, to replicate for redundancy, and to place behind load balancing.
It is not strictly necessary to use Docker with Kubernetes, although most clusters do indeed use this combination of software. There are other containerization tools available such as Containerd that can also complement Kubernetes as the necessary containerization layer that it needs in order to execute container images.
What are nodes?
Nodes are the physical or virtual machines within the Kubernetes cluster. There are two types of nodes: master nodes and worker nodes. Usually, a Kubernetes cluster will only have one master node (or a few extra, for redundancy), but will have many worker nodes. From the master node, we are able to manage the entire cluster. Conversely, the worker nodes host the pods, which run our containerized applications.
What are pods?
Pods provide an isolated environment for your containerized applications to run within. A pod has its own IP address, so that the containers running within it can be reached over the network, as well as its own storage space. A pod is deployed onto a worker node, which can host a multitude of pods simultaneously. By default, pods can communicate with each other over the cluster network, even across namespaces (more on those later), unless network policies restrict this.
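To make this concrete, here is a minimal sketch of a pod manifest in YAML; the names and image are placeholders for illustration:

```yaml
# pod.yaml - a minimal single-container pod (names are illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx        # label used later to select this pod
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80   # port the container listens on
```

Applied with `kubectl apply -f pod.yaml`, the master node schedules this pod onto one of the worker nodes.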
What are services?
A service works with pods to provide a stable network interface to them. A good example would be a web server: the web server containers run within pods, and a service is the layer that gives those pods connectivity, either within the cluster or, with certain service types, from the outside world. Services also provide other features, such as load balancing traffic across the pods they target.
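As a sketch, a service that exposes web server pods might look like the following; the names are illustrative, and the `selector` is what ties the service to its pods:

```yaml
# service.yaml - exposes pods labelled app: nginx (names are illustrative)
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort        # reachable on a port of every node; ClusterIP is the internal-only default
  selector:
    app: nginx          # route traffic to pods carrying this label
  ports:
    - port: 80          # port the service listens on inside the cluster
      targetPort: 80    # port the container accepts traffic on
```

Traffic sent to this service is automatically load balanced across all pods matching the selector.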
What are deployments?
Deployments are essentially a set of rules for controlling the behavior of your pods. Using deployments, you can configure the settings of your pods, such as how many replicas should be maintained. Deployments are essential for scaling applications up or down. Using YAML syntax, you can configure a whole slew of settings for your pods to follow, and then issue the changes to your cluster via the deployment.
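A sketch of such a deployment in YAML might look like this; the names are placeholders, and `replicas` is the setting that controls scaling:

```yaml
# deployment.yaml - maintains three replicas of a web server pod (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                 # number of pod copies to keep running
  selector:
    matchLabels:
      app: nginx
  template:                   # pod template the deployment stamps out
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
```

After issuing this with `kubectl apply -f deployment.yaml`, you could scale up by editing `replicas` and re-applying, or with a command such as `kubectl scale deployment nginx-deployment --replicas=5`.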
What are namespaces?
Each namespace acts as a separate virtual cluster: resource names only need to be unique within their own namespace, and quotas and access controls can be applied per namespace. Note that namespaces do not isolate network traffic by default; pods in different namespaces can still talk to each other unless network policies say otherwise. Namespaces are especially convenient when you have a big environment that is managed by multiple users or teams, and each one needs their own “space” for the resources that they are assigned to manage and administer. This is a much better solution than creating numerous Kubernetes clusters just to facilitate different groups of services or deployments, and to isolate teams to their own space.
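Creating a namespace takes only a tiny manifest; the name here is an illustrative placeholder for a team's space:

```yaml
# namespace.yaml - a namespace for one team (name is illustrative)
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
```

Once created with `kubectl apply -f namespace.yaml`, the team can deploy resources into its own space by adding `-n team-a` to `kubectl` commands or a `namespace` field to manifests.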
In this tutorial, we went over the basics of Kubernetes and its components to understand how they work cohesively as a cluster on a Linux system. We have only scratched the surface of Kubernetes here, but this will give you the essential building blocks that you need in order to understand more advanced concepts. I really wish I had a Kubernetes dictionary like this one when first getting started, as it saves a lot of confusion and headaches for new users.