Control CPU and RAM usage in Kubernetes

Kubernetes is typically used to scale containerized applications across many worker nodes. As more and more applications are deployed into your Kubernetes cluster, managing CPU and memory utilization becomes a crucial issue. In this tutorial, we will look at how to manage CPU and RAM usage in Kubernetes on a Linux system by configuring resource requests and limits, ensuring that containers do not use more than they are allotted.

In this tutorial you will learn:

  • How to configure minimum RAM and CPU for a container
  • How to configure maximum allowed usage of RAM and CPU for a container
  • How to format a YAML file for resource requests and limits
Software Requirements and Linux Command Line Conventions
Category        Requirements, Conventions or Software Version Used
System          Any Linux distro
Software        Kubernetes
Other           Privileged access to your Linux system as root or via the sudo command.
Conventions     # – requires given linux commands to be executed with root privileges either directly as a root user or by use of sudo command
                $ – requires given linux commands to be executed as a regular non-privileged user

Using Resource Requests and Limits

The main mechanisms used by Kubernetes to regulate CPU and RAM utilization are resource requests and limits. Resource limits define the maximum amount of CPU and memory that a container is allowed to use, whereas resource requests define the minimum amount of CPU and RAM that Kubernetes reserves for the container and that the scheduler uses when deciding which node can run its Pod.

To specify resource requests and limits, include the following lines (starting at ‘resources’) in a container’s YAML configuration file:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image
    resources:
      requests:
        cpu: 200m
        memory: 256Mi
      limits:
        cpu: 400m
        memory: 512Mi

In the example above, the container requests 200 millicpu and 256 MiB of RAM in order to be scheduled, and its usage is capped at 400 millicpu and 512 MiB of RAM.

Note: 200 “millicpu” is the same as 20% of one CPU core.
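
For reference, CPU can also be written as a decimal fraction of a core rather than in millicpu form, and memory accepts both binary (Mi, Gi) and decimal (M, G) suffixes. The snippet below is a minimal sketch of an equivalent requests block, not tied to any particular Pod:

resources:
  requests:
    cpu: "0.2"       # equivalent to 200m, i.e. one fifth of a core
    memory: 256Mi    # mebibytes; 256M would mean decimal megabytes instead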

WARNING
If a node cannot satisfy a container’s CPU and RAM requests, the Pod will not be scheduled onto that node. Make sure that your physical hardware or cloud servers can meet the demands of your requests and limits before deploying the configuration.
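
Before deploying, you can check how much CPU and memory each node actually offers to Pods by looking at the Capacity and Allocatable sections of the node description. The commands below are a sketch; substitute one of your own node names for <node-name>:

$ kubectl get nodes
$ kubectl describe node <node-name>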

When you are ready to apply the configuration to your cluster, execute the kubectl apply command:



$ kubectl apply -f /path/to/pod-config.yaml
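
Once the Pod is running, you can confirm that the requests and limits were recorded in the Pod specification and, if the metrics-server add-on is installed in your cluster, check its actual consumption. Using the Pod name from the example above:

$ kubectl describe pod my-pod
$ kubectl top pod my-pod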

That’s all there is to it. By setting resource requests and limits, Kubernetes can better manage the distribution of resources among containers and make sure that each one gets the resources it needs to function well.


