Kubernetes vs Docker, what’s the difference?

Kubernetes and Docker are two names that often get lumped together. If you are new to containerization technology, you might be wondering what these two technologies do, how they differ from each other, and how they fit together to tackle a single goal. Both of these tools are important and relevant to system administrators, and are often employed on a Linux system.

In this tutorial, we will shed light on Kubernetes and Docker to resolve any confusion that readers may have over the purpose of these tools, as well as look at how the two technologies work together to make containerized applications scalable. Keep reading to finally figure out why these tools are important and why each plays a key role in containerization deployment.

In this tutorial you will learn:

  • What is containerization?
  • What is Docker used for?
  • What is Kubernetes used for?
  • How do Docker and Kubernetes work together?
Kubernetes vs Docker, what’s the difference?
Software Requirements and Linux Command Line Conventions
Category        Requirements, Conventions or Software Version Used
System          Any Linux distro
Software        Kubernetes, Docker
Other           Privileged access to your Linux system as root or via the sudo command.
Conventions     # – requires given Linux commands to be executed with root privileges, either directly as a root user or by use of the sudo command
                $ – requires given Linux commands to be executed as a regular non-privileged user

Containerization Technology




Let’s start with the bare basics. We will assume that some readers are new to containerization in general, and do not yet understand the concept. Since Docker and Kubernetes deal entirely with containerization, it is important to first have a good understanding of how this technology works in theory, and why the need for it arises in the first place.

The Kubernetes website has a nice image that illustrates the advantages of containerization (source: kubernetes.io):

Traditional vs. virtualization vs. containerization

Traditional deployment was the only way that applications were deployed for decades. It involves installing software directly onto the operating system. Desktop users will already be familiar with this concept, since installing an application (e.g., a game) onto your system is an everyday task. This type of deployment couples the application with the operating system through shared libraries, configuration files, and many other settings and files. The problem with this type of deployment is that it is not easily scalable. It is also not portable: if a system fails, the application needs to be reinstalled on a separate server and set up all over again.

Virtualization deployment tackled many of the shortcomings of traditional deployment. Instead of letting applications get tied directly to the operating system, we could now use virtual machines and package everything neatly inside. The operating system, all shared libraries and dependencies, and all applications could reside inside of the virtual machine and be deployed on any host system. This made the applications much more portable, and scalability was also much easier.

Containerization deployment is similar to virtualization, except it does not need a separate operating system to run. The application, its configuration, and all of its dependencies are packaged into a lightweight container that can be ported over to any system. The main advantage of containerization over traditional virtualization is that containers are much more lightweight. Aside from this point, the two work the same in concept, but being lightweight gives containerization many other inherent advantages. For example, containers are very easy to scale, to build redundancy into, and to load balance, among many other features.

What is Docker used for?

DID YOU KNOW?
It is not strictly necessary to use Docker with Kubernetes, although most clusters do indeed use this combination of software. There are other containerization tools available, such as containerd, that can also provide Kubernetes with the containerization layer it needs in order to execute container images.

Docker is the biggest name in containerization, although a slew of other tools can also do the job. Many Linux administrators get started with Docker to build applications that can be packaged into a container and easily deployed on any system. This is also a great way for developers to share their work, as anyone with Docker can run the application.
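For example, once Docker is installed, running a containerized application takes only a couple of commands. The sketch below uses the official nginx web server image purely as an illustration, and assumes your user is permitted to run Docker (for example, via sudo or membership in the docker group):

$ docker pull nginx
$ docker run -d --name my-web -p 8080:80 nginx
$ docker ps

Here the -d flag runs the container in the background, and -p 8080:80 maps port 8080 on the host to port 80 inside the container. The same image runs identically on any system with Docker installed, which is what makes sharing containerized applications so easy.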

What is Kubernetes used for?




Using containers sounds great, right? They are lightweight and portable, and they are the next logical step after virtualization deployment. The problem arises when an enterprise needs to deploy many containerized applications and scale them to meet high demand. This is the problem that Kubernetes is built to address.

Kubernetes works by having one (or a few, for redundancy) master node, and any number of worker nodes. When you hand off your containerized applications to Kubernetes, it will distribute the load as instructed across all of its worker nodes. Using the kubectl command, you can interact with your master node and instruct Kubernetes on how to run your cluster.
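As a brief sketch (the manifest name deployment.yaml and the resources it describes are just example placeholders), interacting with the cluster through kubectl might look like this:

$ kubectl get nodes
$ kubectl apply -f deployment.yaml
$ kubectl get pods

The first command lists the nodes in the cluster, the second hands a containerized application's configuration to Kubernetes, and the third shows the pods that Kubernetes has scheduled onto the worker nodes.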

The master node, along with your own applied configurations, will manage the deployment, scheduling, load balancing, healing, and other aspects of the nodes and applications in your cluster. It is also simple enough to push new updates into the cluster, roll back states as needed, and check logs or perform other administrative functions from a centralized location.
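For instance, assuming a deployment named my-app already exists in the cluster (the name is purely illustrative), reverting a bad update and checking logs from that central location could look like:

$ kubectl rollout undo deployment/my-app
$ kubectl rollout status deployment/my-app
$ kubectl logs deployment/my-app

The rollout commands revert the deployment to its previous state and report progress, while kubectl logs pulls the application's output without you having to log in to any individual worker node.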

How do Docker and Kubernetes work together?

Keep in mind that Kubernetes allows us to manage all of our containers and nodes, but it still relies on Docker (or another containerization layer) in order to actually run the applications. Therefore, it is still necessary to have Docker installed on all worker nodes, as this layer is what actually executes the containers.

Kubernetes is the technology that allows us to build a cluster of nodes and pods to run our containers. With Kubernetes, we can easily scale containerized applications by executing one or two commands. For example, by interacting with the master node via command line, we can easily instruct Kubernetes to use extra replicas for one of our applications which is seeing increased demand.
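Continuing the hypothetical my-app example from above, scaling out to handle increased demand really is a one-liner:

$ kubectl scale deployment my-app --replicas=5
$ kubectl get deployment my-app

Kubernetes will schedule the additional replicas across the available worker nodes, and the second command confirms how many replicas are ready.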

Check out our tutorial on What is Kubernetes used for? to learn more about Kubernetes and all of its advantages.

Closing Thoughts




In this tutorial, we learned about the differences between Kubernetes and Docker. Although it is typical to have some confusion when first learning about containerization, the distinction is straightforward: Docker packages and runs individual containers, while Kubernetes orchestrates those containers across a cluster of nodes, and the two technologies are meant to work together in order to bring containerization to a large scale.


