How to configure network interface bonding on RHEL 8 / CentOS 8 Linux

Network interface bonding consists of the aggregation of two or more physical network interfaces, called slaves, under one logical interface called the master or bond interface. Depending on the bonding mode, such a setup can be useful to achieve fault tolerance and/or load balancing. In this tutorial we will learn what the available bonding modes are and how to create a network bonding on RHEL 8 / CentOS 8.

In this tutorial you will learn:

  • What is network interface bonding
  • How to configure network interface bonding on RHEL 8 / CentOS 8
  • What are the different bonding modes

bond0_status

The Bond status as seen by the Linux kernel

Software Requirements and Conventions Used

Software Requirements and Linux Command Line Conventions
Category      Requirements, Conventions or Software Version Used
System        RHEL 8 / CentOS 8
Software      The nmtui utility to control the NetworkManager daemon. The application is included in a minimal system installation.
Other         Root privileges to modify system settings
Conventions   # – requires given Linux commands to be executed with root privileges, either directly as the root user or by use of the sudo command
              $ – requires given Linux commands to be executed as a regular non-privileged user

Which bonding mode?

There are seven bonding modes we can choose from:

Round Robin

Packets are distributed equally, in sequential order, to all the slave interfaces (from the first to the last). This mode provides both load balancing and fault tolerance, but needs support on the switches.

Active Backup

Only the primary slave interface is used. If it fails, another slave takes its place. This mode only provides fault tolerance and has no special switch requirements.

XOR (Exclusive OR)

Packets are transmitted and assigned to one of the slave interfaces depending on the hash of the source and destination MAC addresses, calculated with the following formula:

[(source MAC address XOR’d with destination MAC address) modulo slave count]

This mode provides both fault tolerance and load balancing.
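
As a rough illustration, considering just the last byte of each address and a bond with two slaves, a frame whose source and destination MAC addresses end in 0x82 and 0x9b would hash to (0x82 XOR 0x9b) mod 2 = 0x19 mod 2 = 1, so it would always be transmitted on the second slave; traffic between a given pair of hosts therefore always flows through the same interface.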

Broadcast

When this mode is used, all packets are transmitted on all the slave interfaces, providing fault tolerance but not load balancing.

802.3ad

This mode makes use of IEEE 802.3ad link aggregation, which must be supported on the switches. It creates aggregation groups that share the same speed and duplex settings, and transmits and receives on all slaves in the active aggregator. It provides both load balancing and fault tolerance.

Adaptive transmit load balancing

Outgoing packets are transmitted across the slave interfaces depending on their load, and incoming traffic is received by the current slave. If the latter fails, another slave takes over its MAC address. This mode provides fault tolerance and load balancing.

Adaptive load balancing

It works like Adaptive Transmit Load Balancing, but also provides inbound load balancing via ARP (Address Resolution Protocol) negotiation.
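
For reference, these modes correspond to the following names (and numeric IDs) used by the kernel bonding driver, which are the values we would pass, for example, in BONDING_OPTS or in the nmcli bond.options property: balance-rr (0), active-backup (1), balance-xor (2), broadcast (3), 802.3ad (4), balance-tlb (5) and balance-alb (6).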

The environment

For the sake of this tutorial we will work on a virtualized Red Hat Enterprise Linux 8 system. To create our network bonding we will work with nmtui, a text user interface utility used to control the NetworkManager daemon. The same operations, however, can be performed with the nmcli command line utility or via GUI with the Network Manager Connection Editor.
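
As a reference, a bond similar to the one we are about to build could also be created non-interactively with nmcli, roughly along these lines (the connection names are arbitrary, and the interface names are those of the two links shown below; details may vary with your setup):

$ sudo nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup,primary=enp1s0,miimon=100"
$ sudo nmcli connection add type ethernet con-name bond0-slave-enp1s0 ifname enp1s0 master bond0
$ sudo nmcli connection add type ethernet con-name bond0-slave-enp7s0 ifname enp7s0 master bond0
$ sudo nmcli connection up bond0-slave-enp1s0
$ sudo nmcli connection up bond0-slave-enp7s0
$ sudo nmcli connection up bond0

In the rest of this tutorial, however, we will perform every step interactively with nmtui.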

The system currently has two Ethernet links, enp1s0 and enp7s0, as reported by the ip link command:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:cb:25:82 brd ff:ff:ff:ff:ff:ff
3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:32:37:9b brd ff:ff:ff:ff:ff:ff
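
Before making any changes, we can optionally check how NetworkManager currently sees these devices and their associated connection profiles:

$ nmcli device status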

Creating the network bonding

First of all, we will delete the existing configurations for the slave interfaces. This is not strictly necessary, since we could edit them in place, but to start from scratch we will proceed this way. Let’s invoke nmtui:

$ sudo nmtui

From the main menu we select “Edit a connection” and confirm.


nmtui-main-menu

Nmtui main menu.

We first select the connection to delete from the list, and then move to <Delete>:


nmtui-connection-list

Nmtui connection list.

Finally, we confirm that we want to delete the connection:


nmtui-delete-connection

Nmtui confirmation prompt to delete an existing connection.


We repeat the operation for the other interface. Once we have removed all the existing configurations, we can create the bond interface. We select <Add> in the menu, and from the list of connection types, we choose Bond:


nmtui-connection-type-selection

Nmtui connection type selection menu.

A new window will open where we can configure our interface. In this case, even though it is completely optional, we will use bond0 both as the profile and the device name. The most important part, however, is the selection of the slave interfaces to be added to the bond. In the BOND Slaves menu, we click on <Add> and select the type of slave connection to add, in this case ethernet.


nmtui-slave-type-selection

Nmtui menu to select the slave connection type.

We enter the device name, select <OK> and confirm. The operation must be repeated for each of the slave interfaces.


nmtui-slave-configuration

Nmtui interface to edit slave connection.

The next step is to select the bonding mode: for the sake of this tutorial we will use Active Backup. We select the related option in the menu, and in the “Primary” field we specify the name of the primary slave interface (enp1s0 in this case). Finally, we select <OK> to confirm the creation of the bond interface.


nmtui-bond-creation-confirm

The network bonding setup.

We can now exit the nmtui application. To verify that the bond was created successfully, we can launch the following command:

$ ip addr show bond0

The result is the following:

4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:cb:25:82 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.164/24 brd 192.168.122.255 scope global dynamic noprefixroute bond0
       valid_lft 3304sec preferred_lft 3304sec
    inet6 fe80::48:d311:96c1:89dc/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
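
Besides ip, we can also ask NetworkManager itself to list the active connections; the bond profile and the two slave profiles should all appear in the output:

$ nmcli connection show --active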

The ifcfg files related to our configuration have been generated inside the /etc/sysconfig/network-scripts directory:

$ ls /etc/sysconfig/network-scripts
ifcfg-bond0  ifcfg-enp1s0  ifcfg-enp7s0
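
The exact content of these files depends on the choices made in nmtui, but ifcfg-bond0 should look roughly like the trimmed-down example below (keys and values may differ on your system):

DEVICE=bond0
NAME=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100 primary=enp1s0"
BOOTPROTO=dhcp
ONBOOT=yes

The slave files, in turn, typically reference the master via the MASTER=bond0 and SLAVE=yes directives.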

To view the current state of the bond0 interface as seen by the kernel, we can run:

$ cat /proc/net/bonding/bond0

The output of the command is reported below:

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: enp1s0 (primary_reselect always)
Currently Active Slave: enp1s0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: enp1s0
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 52:54:00:cb:25:82
Slave queue ID: 0

Slave Interface: enp7s0
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 52:54:00:32:37:9b
Slave queue ID: 0


We can see that both slave interfaces are up, but only enp1s0 is active, since it is the one used as the primary slave.
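
The name of the currently active slave can also be read directly from sysfs, which comes in handy in scripts:

$ cat /sys/class/net/bond0/bonding/active_slave

At this point the command should simply print enp1s0.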

Testing the Active Backup

How can we verify that our configuration works? We can bring the primary slave interface down and see whether the machine still responds to pings. To bring the interface down we run:

$ sudo ip link set enp1s0 down

Does the machine still respond? Let’s verify it:

$ ping -c3 192.168.122.164
PING 192.168.122.164 (192.168.122.164) 56(84) bytes of data.
64 bytes from 192.168.122.164: icmp_seq=1 ttl=64 time=0.385 ms
64 bytes from 192.168.122.164: icmp_seq=2 ttl=64 time=0.353 ms
64 bytes from 192.168.122.164: icmp_seq=3 ttl=64 time=0.406 ms

--- 192.168.122.164 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 88ms
rtt min/avg/max/mdev = 0.353/0.381/0.406/0.027 ms

It does! Let’s check how the status of the bond changed by examining /proc/net/bonding/bond0 again:

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: enp1s0 (primary_reselect always)
Currently Active Slave: enp7s0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: enp1s0
MII Status: down
Speed: Unknown
Duplex: Unknown
Link Failure Count: 1
Permanent HW addr: 52:54:00:cb:25:82
Slave queue ID: 0

Slave Interface: enp7s0
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 52:54:00:32:37:9b
Slave queue ID: 0


As you can see, since we put the primary slave interface (enp1s0) down, the other slave, enp7s0, took over and is now the currently active one. In addition, the Link Failure Count for the primary slave increased and is now 1.
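
Once the test is complete, we can bring the primary interface back up; since primary_reselect is set to always, enp1s0 should become the active slave again shortly after its link is restored:

$ sudo ip link set enp1s0 up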

Conclusions

In this tutorial we learned what network bonding is and what the available bonding modes are. We also created a network bond between two Ethernet interfaces using the Active Backup mode. With Red Hat Enterprise Linux 7, a new concept was introduced: network teaming. In some aspects teaming is similar to bonding, but it is implemented differently and has more features. We will cover it in future articles.