RAID stands for Redundant Array of Independent Disks. It is used for performance and availability reasons. Different virtual and physical storage devices can be combined into logical RAID arrays in different configurations, called levels (for example, RAID 0 is striping and RAID 1 is mirroring). The array appears as a single device to the operating system. RAID is useful when we want to handle a large amount of data: depending on the level, it improves speed, increases usable capacity, or both. The risk of data loss from a disk failure is mitigated by adding mirrored or parity disks to the configuration.

Linux software RAID is called mdraid (Multiple Device RAID). The tool we use to interact with mdraid on Linux is mdadm. It supports RAID levels 0, 1, 4, 5, 6 and 10.
RAID levels
- Level 0 = striping disks together
- Level 1 = mirroring
- Level 4 = parity stored on a single dedicated disk
- Level 5 = parity distributed across all the disks (see the XOR sketch after this list)
- Level 6 = double parity, so the array survives two simultaneous disk failures
- Level 10 = combines level 1 and level 0 (a stripe of mirrors)
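To give a feel for how parity protects data, here is a minimal illustration in plain shell arithmetic (not real mdraid code): the parity is the XOR of the data blocks, so any single missing block can be recomputed from the remaining ones.
# Hypothetical two-data-block example: parity = d1 XOR d2
d1=0x5A; d2=0x3C
parity=$(( d1 ^ d2 ))
# If d1 is lost, it can be rebuilt from the parity and d2
printf 'parity=0x%02X recovered_d1=0x%02X\n' "$parity" $(( parity ^ d2 ))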
RAID Superblock
A software RAID on Linux stores the RAID array metadata in a superblock on each member device.
The limitations of the legacy 0.90 superblock format are:
- Maximum 28 devices per array
- Maximum 2 TB per member device
The newer 1.x metadata formats (1.2 is the current default) do not have these limits.
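We can check which metadata version a member device carries with mdadm --examine (this assumes the array from the lab below already exists):
mdadm --examine /dev/sdb1 | grep -i version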
Creating software RAID on Linux
It is possible to create RAID arrays during the installation and put the entire system on RAID. Both the Debian and the RHEL installer are capable of creating RAID arrays and installing the operating system on them. The /boot filesystem must be on a separate non-RAID partition, because at boot time the mdraid kernel module is not yet available. (It can be built into the initramfs image, but I will not go into the details now.)
It is also possible to add disks to the system and set up RAID after the installation.
Hands-on LAB exercise
In our example we will add two additional data disks to a working Debian Linux system and set up RAID 1 mirroring.
Create a RAID 1 array of two disks
We start by attaching two additional virtual disks to the Linux machine.
root@debtop:~# lsblk
NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                     8:0    0   30G  0 disk
├─sda1                  8:1    0  487M  0 part /boot
├─sda2                  8:2    0    1K  0 part
└─sda5                  8:5    0 29.5G  0 part
  ├─debtop--vg-root   254:0    0 10.1G  0 lvm  /
  ├─debtop--vg-swap_1 254:1    0  976M  0 lvm  [SWAP]
  └─debtop--vg-home   254:2    0 18.4G  0 lvm  /home
sdb                     8:16   0    5G  0 disk
sdc                     8:32   0    5G  0 disk
sr0                    11:0    1 1024M  0 rom
With fdisk we partition them for Linux RAID.
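fdisk is interactive, so as a quick non-interactive sketch the same result can be achieved with sfdisk: one partition per disk, MBR type fd ("Linux raid autodetect"); the device names are the ones from this lab.
echo 'type=fd' | sfdisk /dev/sdb
echo 'type=fd' | sfdisk /dev/sdc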
Then we can create a RAID array of the two disks. (Install the mdadm tool if it is not already installed.)
root@debtop:~# apt install mdadm
root@debtop:~# mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/sdb1 /dev/sdc1
We can follow the initial data synchronization between the two RAID members. Since we start with empty disks, it should finish quickly.
root@debtop:~# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
5236736 blocks super 1.2 [2/2] [UU]
unused devices: <none>
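For a longer resync we can keep an eye on the progress by wrapping the same command in watch:
watch -n 1 cat /proc/mdstat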
Let’s check the status of the RAID device.
mdadm --detail /dev/md0
Or we can examine the RAID metadata on the individual member devices.
mdadm --examine /dev/sdb1 /dev/sdc1
Now we can create an ext4 filesystem on the RAID array.
root@debtop:~# mkfs.ext4 /dev/md0
mke2fs 1.46.2 (28-Feb-2021)
(...)
Writing superblocks and filesystem accounting information: done
We can mount the new filesystem on our Linux. Create the mount point first if it does not exist yet.
mkdir -p /mnt/redundant_data
mount /dev/md0 /mnt/redundant_data/
Our RAID device is ready to use now.
To mount the array automatically, let's extend the /etc/fstab file.
As the device name can change after a reboot, first check the UUID of the filesystem with the blkid command, then use the UUID in the fstab file instead of the device name.
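Assuming the array is still /dev/md0, the UUID can be queried like this:
blkid /dev/md0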
UUID="<device UUID>" /mnt/redundant_data ext4 defaults 0 0
Now the /mnt/redundant_data mount point persists across reboots.
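One more note: if the array is not recorded in mdadm's configuration, it may come back under a different name (such as md127) after a reboot. On Debian a common way to pin it down is to append the scanned definition to /etc/mdadm/mdadm.conf and refresh the initramfs (the paths may differ on other distributions):
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u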
Change a failed RAID disk
In case of a failed disk we first have to find and isolate the faulty device, then mark it as failed and remove it from the array, and finally add a new, working disk to the system. The data will be synchronized to the new disk automatically.
Let's assume that the sdb device is the faulty one. We replace it with the following steps. First, mark the device as failed in the array.
mdadm --manage /dev/md0 --fail /dev/sdb1
Then we can remove the device from the RAID 1 array.
mdadm --manage /dev/md0 --remove /dev/sdb1
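As a side note, the two steps can also be combined into a single mdadm invocation:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1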
After installing the new disk (and partitioning it for Linux RAID, as before) we can add it to the existing RAID 1 configuration.
mdadm --manage /dev/md0 --add /dev/sdd1
Querying the detailed information will show us if the operation was successful.
mdadm --detail /dev/md0
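While the new member is resynchronizing, /proc/mdstat shows the recovery progress as well:
cat /proc/mdstat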
Final words
This article and hands-on lab were just an introduction to mdraid. We barely scratched the surface of this topic.
If you have anything to share then please visit my Tom’s IT Cafe Discord Server!