1. Introduction

This article describes a step-by-step setup of Linux software RAID 1. Although this RAID 1 configuration was carried out on Debian (Ubuntu), it can also guide you if you are running another Linux distribution such as Red Hat, Fedora, SUSE, PCLinuxOS, etc. A RAID 1 setup requires two or more disks; RAID 1 mode maintains an exact mirror of all data across them.

1.1. System Info

  • OS: Debian Etch (basic installation on /dev/sda)
  • Kernel: Linux raid 2.6.18-5-686 #1 SMP Fri Jun 1 00:47:00 UTC 2007 i686 GNU/Linux
  • Hard Drives: /dev/sda -> 4 GB, /dev/sdb -> 5 GB

2. The Plan

We start with a running Debian Etch system and the following partition scheme on hard drives sda and sdb:

[Figure: RAID partition scheme]

[Figure: RAID 1 setup guide]

3. Software Installation

Only one package (plus its prerequisites) is needed for software RAID 1 on Debian: mdadm. Simply use the apt-get tool to install it. You may be asked to answer a couple of questions.

apt-get install mdadm 

4. Configure Kernel Modules

4.1. Load modules at boot time

Now we need to make sure that the RAID kernel modules are loaded at boot time. To accomplish this, we need to edit the /etc/modules file and add a couple of lines. Open your favorite text editor, or simply append the lines with the echo command.

echo raid1 >> /etc/modules
echo md >> /etc/modules

[Figure: load RAID kernel modules at boot time]
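If this setup may be re-run later (for example from a provisioning script), blindly appending would duplicate the lines. A small idempotent variant is sketched below; it is demonstrated on a temporary file so it is safe to try, and on the real system you would point modules_file at /etc/modules instead:

```shell
# Append each module name only if it is not already listed, so the
# step can be re-run safely. Demonstrated on a temporary file;
# set modules_file=/etc/modules on the real system.
modules_file=$(mktemp)
for module in raid1 md; do
  grep -qx "$module" "$modules_file" || echo "$module" >> "$modules_file"
done
cat "$modules_file"
```

Running the loop a second time leaves the file unchanged, because grep -qx only matches a whole line exactly.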

4.2. Load modules to the Kernel

At this stage, if we want to use the RAID modules, we have two options. The first is to reboot the system; the other is to use modprobe or insmod to load the modules into the kernel. I find the second option easier:

modprobe raid1 

[Figure: load RAID kernel modules]

NOTE: if you do not have the md module loaded already, as shown in the figure above, use modprobe to load it:

modprobe md 

There are two ways to confirm that our RAID modules are loaded into the kernel:

lsmod | grep raid1

or

cat /proc/mdstat 

You should see output similar to the figure below:

[Figure: RAID kernel modules, output from the command line]

5. Prepare sdb for RAID

At this stage we need to prepare our second hard drive, sdb, to host the RAID 1 arrays. First we need to copy the partition table from sda to sdb; for this task we use the sfdisk command. This step can also be done manually if you prefer to do it that way. Make sure that you do NOT have any data on disk sdb, because the following sfdisk command will erase it permanently! This command copies the partition table from /dev/sda to /dev/sdb:

sfdisk -d /dev/sda | sfdisk /dev/sdb 

[Figure: copy a partition table]

The second hard drive, sdb, is almost ready. All we need to do is change the partition IDs of partitions sdb1, sdb5, sdb6, sdb8 and sdb9 to fd (Linux raid autodetect). The sfdisk command comes in very handy for this task as well. NOTE: we are not mirroring sdb7, which is the swap partition.

for partition in 1 5 6 8 9; do sfdisk --change-id /dev/sdb $partition fd; done 

[Figure: change partition IDs]

6. Setting Up RAID 1 Arrays

6.1. Create RAID 1 Arrays

It is time to create the RAID 1 arrays. We first create each RAID 1 array from /dev/sdb only, with the first member marked as missing. Once the system is running from these arrays, we add the first hard drive, /dev/sda, to them as well.

for partition in 1 5 6 8 9; do mdadm --create /dev/md$partition --level=1 \
--raid-disks=2 missing /dev/sdb$partition; done

[Figure: create RAID 1 arrays]

6.2. Create a filesystem on RAID 1 Arrays

Now we can create a file system on our new RAID 1 arrays:

for partition in 1 5 6 8 9; do mkfs.ext3 /dev/md$partition; done 

6.3. Edit mdadm.conf file

Please make a copy of your /etc/mdadm/mdadm.conf file, as we will use it in a later stage of this tutorial. Then let's edit the mdadm.conf file. Make sure that you use append (">>") to avoid accidentally overwriting mdadm.conf.

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

[Figure: edit mdadm.conf file]

7. Edit /etc/fstab

Because we want our system to mount the new RAID 1 arrays every time it boots, we need to edit the /etc/fstab file.

OLD /etc/fstab

# /etc/fstab: static file system information.
#
#
proc /proc proc defaults 0 0
/dev/sda1 / ext3 defaults,errors=remount-ro 0 1
/dev/sda9 /home ext3 defaults 0 2
/dev/sda8 /tmp ext3 defaults 0 2
/dev/sda5 /usr ext3 defaults 0 2
/dev/sda6 /var ext3 defaults 0 2
/dev/sda7 none swap sw 0 0
/dev/hdc /media/cdrom0 udf,iso9660 user,noauto 0 0
/dev/fd0 /media/floppy0 auto rw,user,noauto 0 0

NEW /etc/fstab

# /etc/fstab: static file system information.
#
#
proc /proc proc defaults 0 0
/dev/md1 / ext3 defaults,errors=remount-ro 0 1
/dev/md9 /home ext3 defaults 0 2
/dev/md8 /tmp ext3 defaults 0 2
/dev/md5 /usr ext3 defaults 0 2
/dev/md6 /var ext3 defaults 0 2
/dev/sda7 none swap sw 0 0
/dev/hdc /media/cdrom0 udf,iso9660 user,noauto 0 0
/dev/fd0 /media/floppy0 auto rw,user,noauto 0 0
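If you prefer not to edit fstab by hand, the substitution can be scripted. The sketch below rewrites only the mirrored partitions (1, 5, 6, 8 and 9) from /dev/sdaN to /dev/mdN and leaves the swap entry /dev/sda7 alone. It is demonstrated on a two-line inline sample; on the real system you would run the same sed against /etc/fstab after making a backup (for example with GNU sed's -i.orig option).

```shell
# Rewrite the mirrored entries from /dev/sdaN to /dev/mdN, where N is
# one of 1, 5, 6, 8 or 9; /dev/sda7 (swap) does not match and is kept.
# Demonstrated on a two-line sample of the fstab above.
old_fstab='/dev/sda1 / ext3 defaults,errors=remount-ro 0 1
/dev/sda7 none swap sw 0 0'
new_fstab=$(echo "$old_fstab" | sed 's#^/dev/sda\([15689]\) #/dev/md\1 #')
echo "$new_fstab"
```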

8. Configure GRUB boot manager

First, we make a copy of /boot/grub/menu.lst.

cp /boot/grub/menu.lst /boot/grub/menu.lst_orig 

Then let's engage a sed command to do this job:

sed 's/sda1/md1/' < /boot/grub/menu.lst_orig > /boot/grub/menu.lst 

[Figure: edit GRUB to boot RAID devices]

Update GRUB:

update-grub 

[Figure: GRUB - save changes]

9. Copy data from sda => sdb

Bring the system down to single-user mode:

init 1 

Now copy the data from each partition to the new RAID 1 arrays. For this operation we use the rsync command, which clones the contents of each partition. If you do not have rsync installed on your system, install it with:

apt-get install rsync 

Copy the root ("/") partition:

mount /dev/md1 /media; rsync -aqxP / /media; umount /media 

Copy the /usr partition:

mount /dev/md5 /media; rsync -aqxP /usr/* /media; umount /media 

Copy the /var partition:

mount /dev/md6 /media; rsync -aqxP /var/* /media; umount /media 

Copy the /tmp partition (NOTE: you can omit /tmp if you like):

mount /dev/md8 /media; rsync -aqxP /tmp/* /media; umount /media 

Copy the /home partition:

mount /dev/md9 /media; rsync -aqxP /home/* /media; umount /media 
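The five copy steps above all follow the same mount/rsync/umount pattern, so they can be collapsed into one loop. The sketch below is a dry run that only prints the commands it would execute; remove the echo prefixes to run it for real. Note that a trailing slash on the rsync source (e.g. /usr/) copies the directory's contents including dotfiles, which the /usr/* glob form would skip.

```shell
# Dry run: print the mount/rsync/umount sequence for each mirrored
# partition, using the device-to-directory mapping from this tutorial.
# Remove the "echo" prefixes to execute the commands for real.
for pair in md1:/ md5:/usr/ md6:/var/ md8:/tmp/ md9:/home/; do
  dev=/dev/${pair%%:*}   # e.g. /dev/md5
  dir=${pair#*:}         # e.g. /usr/
  echo mount "$dev" /media
  echo rsync -aqxP "$dir" /media
  echo umount /media
done
```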

10. Set up the boot manager

To be able to boot from the second disk as well, we install GRUB on sdb. Start the GRUB shell:

grub 

Once in GRUB, type the following commands:

device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit

[Figure: set up GRUB boot manager]

11. Reboot

We are ready to reboot the system. To confirm that we are running the system from the RAID 1 arrays (we are probably on the right path, since the system has started), we can check the mounted devices with the mount command:

mount 

[Figure: rebooted with RAID 1 arrays]

12. Add first hard drive ( sda ) to the RAID 1 array

12.1. Change Partition Id with sfdisk

Just as we did with /dev/sdb, we first need to change the partition IDs:

for partition in 1 5 6 8 9; do sfdisk --change-id /dev/sda $partition fd; done 

[Figure: change partition IDs with sfdisk]

12.2. Add partitions with mdadm to RAID 1 array

for partition in 1 5 6 8 9; do mdadm --add /dev/md$partition /dev/sda$partition; done 

[Figure: add partitions with mdadm to the RAID 1 arrays]

Since we have added new devices to the RAID 1 arrays, the system has started to resync each /dev/md* device. You can watch this process with the watch command:

watch cat /proc/mdstat 

[Figure: RAID 1 array resync]
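While the resync is running, /proc/mdstat contains a progress line for each array. Here is a small hedged sketch that extracts the completion percentage; it is demonstrated on a sample line imitating the md driver's usual format, and on the live system you would read /proc/mdstat instead:

```shell
# Pull the completion percentage out of an mdstat-style recovery line.
# The sample below imitates the md driver's usual progress format;
# on a real system: grep -o '[0-9.]*%' /proc/mdstat
line='[=>...................]  recovery =  9.8% (98304/979840) finish=0.9min speed=16384K/sec'
echo "$line" | grep -o '[0-9.]*%'
```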

12.3. Edit /etc/mdadm/mdadm.conf

Now we need to edit the /etc/mdadm/mdadm.conf file again; for that we can use our backup copy of the file as a template.

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig1
cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

[Figure: edit /etc/mdadm/mdadm.conf]

13. Finish

  • Everything is now ready. You may want to reboot your system and confirm your successful setup of Linux software RAID 1.
  • To confirm that RAID 1 is running, use the cat /proc/mdstat command:
cat /proc/mdstat 
  • Since we kept /dev/sda7 as swap, the /dev/sdb7 partition is now unused.
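In /proc/mdstat, a healthy two-disk mirror shows [UU]; an underscore in that field (e.g. [U_] or [_U]) marks a failed or missing member. A hedged check along these lines is sketched below, demonstrated on a sample string; on a live system you would substitute mdstat=$(cat /proc/mdstat).

```shell
# Flag degraded arrays by looking for an underscore inside the [..]
# member-status field. Sample input; on a real system use:
#   mdstat=$(cat /proc/mdstat)
mdstat='md1 : active raid1 sda1[0] sdb1[1]
      979840 blocks [2/2] [UU]'
if echo "$mdstat" | grep -q '\[[U_]*_[U_]*\]'; then
  echo "DEGRADED array found"
else
  echo "all arrays healthy"
fi
```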

