How to Create a Kubernetes Cluster

Kubernetes is the leading software in container orchestration. Kubernetes works by managing clusters, which are simply sets of hosts meant for running containerized applications. In order to have a Kubernetes cluster, you need a minimum of two nodes: a master node and a worker node. Of course, you can expand the cluster by adding as many worker nodes as you need.

In this tutorial, we are going to deploy a Kubernetes cluster consisting of two nodes, both of which are running on a Linux system. Having two nodes in our cluster is the most basic configuration possible (outside of a test environment), but you will be able to scale that configuration and add more nodes if you wish.

In this tutorial you will learn:

  • How to configure a master and worker node in Kubernetes
  • How to join a worker node to a Kubernetes cluster
  • How to deploy Nginx (or any containerized app) in a Kubernetes cluster
Software Requirements and Linux Command Line Conventions
Category Requirements, Conventions or Software Version Used
System Any Linux distro
Software Kubernetes
Other Privileged access to your Linux system as root or via the sudo command.
Conventions # – requires given Linux commands to be executed with root privileges either directly as a root user or by use of the sudo command
$ – requires given Linux commands to be executed as a regular non-privileged user

Installing Prerequisites




In this tutorial, we will be using Kubeadm to bootstrap Kubernetes, and Docker as the containerization layer. Since installation instructions for these tools will vary depending on your Linux distribution, first go to our tutorial on How to Install Kubernetes on All Linux Distros, which will show you how to install both Kubeadm and Docker. Then, you can follow along with the steps below to create a cluster.
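Before moving on, it does not hurt to confirm that the tools from that guide are actually installed on both nodes. A quick sanity check (the exact version output will vary with your distribution and install method):

$ kubeadm version
$ kubectl version --client
$ docker --version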

Cluster Scenario

Before we dive in, let’s establish the particulars of our scenario. As mentioned above, our cluster is going to have two nodes, both of which are running Kubeadm and Docker on Linux. One will be the master node and can be easily identified by its hostname of kubernetes-master. The second node will be our worker node and will have a hostname of kubernetes-worker.

The master node will initialize the Kubernetes cluster and the worker node will simply join it. Since Kubernetes clusters are designed to run containerized software, after we get our cluster up and running we are going to deploy an Nginx server container as a proof of concept.

DID YOU KNOW? – Why Kubeadm?
In case you are wondering why we chose Kubeadm to bootstrap Kubernetes, it is because it is production ready and scales well beyond a single worker node. See all of its pros and cons in our tutorial: kubeadm vs minikube, pros and cons.

Disable swap memory

Kubernetes will refuse to function if your system is using swap memory. Before proceeding further, make sure that both the master and worker nodes have swap memory disabled with this command:

$ sudo swapoff -a
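If you want to double check, swapon --show should print nothing once swap is off, and free -h should report 0B of swap (both commands are available on most modern distributions):

$ swapon --show
$ free -h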

That command only disables swap memory until your system reboots, so to make the change persist, use nano or your favorite text editor to open this file:

$ sudo nano /etc/fstab

Inside this file, comment out the /swapfile line by preceding it with a # symbol, as seen below. Then, close this file and save the changes.

Add # to comment out the /swapfile line
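Alternatively, you can comment the line out non-interactively. This one-liner prefixes any line mentioning swap in /etc/fstab with a # (on a default install that is usually just the swap entry; adjust the pattern if your file differs):

$ sudo sed -i '/swap/ s/^#*/#/' /etc/fstab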

Set hostnames

Next, ensure that all of your nodes have a unique hostname. In our scenario, we’re using the hostnames kubernetes-master and kubernetes-worker to easily differentiate our hosts and identify their roles. Use the following command if you need to change your hostnames:

$ sudo hostnamectl set-hostname kubernetes-master

And on the worker node:

$ sudo hostnamectl set-hostname kubernetes-worker

You won’t notice the hostname changes in the terminal until you open a new one. Lastly, make sure that all of your nodes have an accurate time and date, otherwise you will run into trouble with invalid TLS certificates.
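On systemd based distributions you can verify both of these settings at a glance. The hostnamectl output should show the new hostname, and timedatectl should report the system clock as synchronized (this assumes an NTP service such as systemd-timesyncd or chrony is running):

$ hostnamectl
$ timedatectl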

Initialize Kubernetes master server

Now we are ready to initialize the Kubernetes master node. To do so, enter the following command on your master node:

kubernetes-master:~$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Note: you can enter a different pod network if you want, but this is the default one for the flannel network addon which we will be deploying later.




Kubernetes master node is now initialized

The Kubernetes master node has now been initialized. The output gives us a kubeadm join command that we will need to use later to join our worker node(s) to the master node. So, take note of this command for later.

The output from above also advises us to run several commands as a regular user to start using the Kubernetes cluster. Run those three commands on the master node:

kubernetes-master:~$ mkdir -p $HOME/.kube
kubernetes-master:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
kubernetes-master:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
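With the kubeconfig in place, you can verify that kubectl can reach the new cluster. Note that the master node will typically report a NotReady status until the pod network is deployed in the next step:

kubernetes-master:~$ kubectl cluster-info
kubernetes-master:~$ kubectl get nodes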

Deploy a pod network

The next step is to deploy a pod network. The pod network is used for communication between hosts and is necessary for the Kubernetes cluster to function properly. For this we will use the Flannel pod network. Issue the following command on the master node:

kubernetes-master:~$ kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Depending on your environment, it may take just a few seconds or a minute to bring the entire flannel network up. You can use the kubectl command to confirm that everything is up and ready:

kubernetes-master:~$ kubectl get pods --all-namespaces
Pod network is successfully deployed

When the STATUS column shows ‘Running’ for every pod, it’s an indication that everything has finished deploying and is good to go.
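If some pods are still starting up, you can watch them change state in real time with the --watch flag (press Ctrl+C to stop watching):

kubernetes-master:~$ kubectl get pods --all-namespaces --watch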

Join the Kubernetes cluster

Now our cluster is ready to have the worker nodes join. Use the kubeadm join command retrieved earlier from the Kubernetes master node initialization output to join your Kubernetes cluster:

kubernetes-worker:~$ sudo kubeadm join 192.168.1.65:6443 --token 6rf2uy.kg7syeb6h4i2c7ao --discovery-token-ca-cert-hash sha256:8c34e9941b045f73a1aabe979460f44145761cadc137b13fe1a898e760dab630
Joining worker node to Kubernetes cluster
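If you no longer have the original join command, or the bootstrap token has expired (tokens are only valid for 24 hours by default), you can generate a fresh join command on the master node:

kubernetes-master:~$ sudo kubeadm token create --print-join-command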

Back on your Kubernetes master node, confirm that the worker node is now part of our Kubernetes cluster with this command:

kubernetes-master:~$ kubectl get nodes
Displays what nodes are currently in the Kubernetes cluster
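For more detail, such as each node’s internal IP address, OS image, and container runtime, you can add the wide output flag:

kubernetes-master:~$ kubectl get nodes -o wide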

Deploying a service on Kubernetes cluster

Now we are ready to deploy a service into the Kubernetes cluster. In our example, we will deploy an Nginx server into our new cluster as a proof of concept. Run the following three commands on your master node:

kubernetes-master:~$ kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
kubernetes-master:~$ kubectl run --image=nginx nginx-server --port=80 --env="DOMAIN=cluster"
kubernetes-master:~$ kubectl expose deployment nginx-deployment --port=80 --name=nginx-http
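To watch the deployment roll out and confirm which node the Nginx pods were scheduled on (the NODE column in the second command), you can run:

kubernetes-master:~$ kubectl rollout status deployment/nginx-deployment
kubernetes-master:~$ kubectl get pods -o wide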

You should now see a new Nginx Docker container deployed on your worker node. You can see a list of all services running in your cluster with the following command, issued from the Kubernetes master node:

kubernetes-master:~$ kubectl get svc
Displays what containerized services are running on the Kubernetes cluster
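As a quick smoke test, you can forward a local port to the nginx-http service and fetch the default Nginx welcome page. The port-forward command keeps running in the foreground, so issue the curl from a second terminal (this assumes curl is installed on the master node):

kubernetes-master:~$ kubectl port-forward service/nginx-http 8080:80
kubernetes-master:~$ curl http://localhost:8080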

Closing Thoughts




In this tutorial, we learned how to set up Kubernetes to deploy containerized applications with Kubeadm and Docker on a Linux system. We set up a basic cluster consisting of two hosts, a master and a worker, though this can be scaled to many more worker nodes if necessary.

We saw how to configure Docker and other prerequisites, as well as deploy an Nginx server in our new cluster as a proof of concept. Of course, this same configuration can be used to deploy any number of containerized applications.


