RAID can be implemented and managed using "block" devices, which can be Software RAID (e.g. Linux Software RAID's mdadm), Hardware RAID (a dedicated hardware card), or FakeRAID (also called BIOS or Onboard RAID). This article focuses on the how-tos of Linux Software RAID and its management tool, mdadm.
How Software RAID devices work
Software RAID devices should be seen as ordinary disks or disk partitions. They can be "built" from a number of other block devices.
In a RAID implementation, there can be spare disks, or hot spares, that do not take part in the RAID set until one of the active disks fails. Once a device failure is detected, that device is marked as 'faulty' and the set immediately starts reconstruction onto the first available hot spare. Your system can then keep running for some time even with a faulty disk, as the spare takes its place. If no spare disk is available, the array runs in 'degraded' mode.
Note that faulty disks still appear and behave as members of the array; the RAID layer simply avoids reading from and writing to them. Also, if a device needs to be removed from the array, remember to mark it as 'faulty' first before you remove it.
Creating a SATA drive-based RAID device
You can use full drives as members of a RAID array, but this should be avoided if you want to make the drives bootable. Instead, create a partition on each drive (using fdisk or parted) with the partition type set to 'fd', the Linux Software RAID autodetect type.
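As a sketch, this partitioning step can be scripted with sfdisk; the drive names below are examples, and note that rewriting a partition table destroys access to any existing data on the drive:

```shell
# Create one whole-disk partition of type 'fd' (Linux RAID autodetect)
# on each drive. WARNING: this overwrites the existing partition table.
for drive in /dev/sda /dev/sdb; do
    echo ',,fd' | sfdisk "$drive"
done
```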
After creating a partition on each relevant drive (sda1, sdb1), you can now create the array:
mdadm --create /dev/md0 -n 2 --level=1 /dev/sda1 /dev/sdb1
Where:
- /dev/md0 - your new array's device path
- -n 2 - the total number of drives, which can be classified as spare, active, or missing; a 'missing' drive requires the additional option -f to work
- --level=<RAID level> - can be any of the following:
  - linear
  - raid0 or 0 or stripe
  - raid1 or 1 or mirror
  - raid4 or 4
  - raid5 or 5
  - raid6 or 6
  - raid10 or 10
  - multipath or mp
  - faulty
  - container
- /dev/sda1, /dev/sdb1 - each component device's path
IMPORTANT NOTE: If you want to boot from a RAID device, place the bootloader code at the start of the physical drive that holds the underlying component of the RAID device. This means that if you are using /dev/sda1 as a member of the array /dev/md0 and that drive is set as the BIOS boot option, you must install the bootloader to /dev/sda. One way to do this is by running grub-install /dev/sda.
To have the array assembled automatically at boot, append its definition to the mdadm configuration file:

mdadm --detail --scan >> /etc/mdadm.conf

For Ubuntu, the file is located at /etc/mdadm/mdadm.conf instead.
Monitoring a RAID device
You can proactively monitor the health of a RAID device using mdadm's --monitor mode. If mdadm detects an array health issue, it can perform a host of actions you specify on the command line: it can send an email, run a specified program, or write a message to the system log, to name a few. See the Monitor section of man mdadm for the full list of options.
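For example, a minimal monitoring invocation might look like the following sketch; the e-mail address and polling interval are placeholders to adapt to your setup:

```shell
# Watch all arrays as a background daemon, poll every 120 seconds,
# and e-mail alerts on events such as Fail or DegradedArray
mdadm --monitor --scan --daemonise --delay=120 --mail=root@localhost
```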
Most Linux distributions provide wrappers for mdadm --monitor. CentOS/RHEL, for example, provides an mdmonitor service, which you can set to start automatically with the system by running chkconfig mdmonitor on.
Another way to check device health is to look into the /proc/mdstat text file, which shows which arrays your kernel recognizes and their respective states.
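For instance:

```shell
# One-off snapshot of the arrays and their states
cat /proc/mdstat

# Refresh every two seconds, handy while a resync/rebuild is in progress
watch -n 2 cat /proc/mdstat
```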
Replacing the component drive of a RAID device
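A typical replacement sequence looks like the following sketch; the device names are illustrative, and the new drive must first be partitioned with type 'fd' as described above:

```shell
mdadm /dev/md0 --fail /dev/sdb1    # mark the failing member as faulty
mdadm /dev/md0 --remove /dev/sdb1  # then remove it from the array
# ...physically swap the drive and create an 'fd' partition on it...
mdadm /dev/md0 --add /dev/sdb1     # add the replacement; resync starts automatically
```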
Destroying a RAID device
There may be a time when you need to completely destroy a RAID device to make its underlying devices usable by the system again. One plausible situation is when you have a spare HDD with preconfigured RAID metadata and your kernel recognizes and assembles the array. In this case, there is only one member in the array, and it is not possible to simply remove that single drive.
To destroy the RAID device, first stop the array with mdadm -S /dev/md0. Next, remove the metadata from the sda1 partition by issuing mdadm --zero-superblock /dev/sda1. Remember to zero the superblock of /dev/sda1 (not /dev/sda) if the partition, rather than the whole drive, was the array member.
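Put together, the teardown looks like this, using the example device names from above:

```shell
mdadm -S /dev/md0                  # stop (deactivate) the array
mdadm --zero-superblock /dev/sda1  # erase the RAID metadata on each former member
```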