This article describes how to set up and manage software RAID arrays that combine multiple drives on a Linux system, without using a hardware RAID controller.
Display the state of a software RAID
To do this, just use this command:
cat /proc/mdstat
If no RAID is active, the output looks like this:
Personalities : [raid1]
unused devices: <none>
Here’s the output for a configured RAID1:
Personalities : [raid1]
md2 : active raid1 sda3[2] sdb3[1]
234405504 blocks super 1.2 [2/2] [UU]
bitmap: 0/2 pages [0KB], 65536KB chunk
md1 : active raid1 sda2[2] sdb2[1]
523712 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sda1[2] sdb1[1]
33521664 blocks super 1.2 [2/2] [UU]
unused devices: <none>
Here’s the output for a configured RAID0 (you must always configure the /boot partition, md0, as RAID1 so that the server can boot from it):
Personalities : [raid1] [raid0]
md2 : active raid0 sda3[0] sdb3[1]
883956736 blocks super 1.2 512k chunks
md1 : active raid0 sda2[0] sdb2[1]
52393984 blocks super 1.2 512k chunks
md0 : active raid1 sda1[0] sdb1[1]
523264 blocks super 1.2 [2/2] [UU]
unused devices: <none>
If a progress bar is displayed under one of the arrays, a RAID resync is currently running:
md0 : active raid1 sdb1[0] sdc1[1]
2095040 blocks super 1.2 [2/2] [UU]
[====>................] resync = 32.7% (418656/2095040) finish=4.2min speed=131219K/sec
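If you want more detail on a single array than /proc/mdstat shows, or want to follow a running resync without retyping the command, the standard mdadm --detail and watch invocations can be used (md0 and the 5-second interval are just examples):
mdadm --detail /dev/md0
watch -n 5 cat /proc/mdstat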
Add a software RAID array
In our example scenario, the drives /dev/sda and /dev/sdb are already combined in multiple RAID1 arrays, which contain the operating system:
cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[2] sdb3[1]
234405504 blocks super 1.2 [2/2] [UU]
bitmap: 0/2 pages [0KB], 65536KB chunk
md1 : active raid1 sda2[2] sdb2[1]
523712 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sda1[2] sdb1[1]
33521664 blocks super 1.2 [2/2] [UU]
unused devices: <none>
But we have two more drives (/dev/sdc and /dev/sdd) that we would also like to set up as a RAID1 array for data storage. Since the status output below lists the member partitions sdc1 and sdd1, each drive first needs a partition (see the sketch after the command). Then create the array:
mdadm --create --verbose /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
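If /dev/sdc and /dev/sdd are still blank, one way to create a single partition spanning each drive is sketched below; this assumes GPT labels and that both drives are empty and dedicated to the new array:
parted --script /dev/sdc mklabel gpt mkpart primary 0% 100%
parted --script /dev/sdd mklabel gpt mkpart primary 0% 100%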
The RAID configuration should now look like this:
cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sdc1[0] sdd1[1]
2095040 blocks super 1.2 [2/2] [UU]
[====>................] resync = 32.7% (418656/2095040) finish=4.2min speed=131219K/sec
md2 : active raid1 sda3[2] sdb3[1]
234405504 blocks super 1.2 [2/2] [UU]
bitmap: 0/2 pages [0KB], 65536KB chunk
md1 : active raid1 sda2[2] sdb2[1]
523712 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sda1[2] sdb1[1]
33521664 blocks super 1.2 [2/2] [UU]
unused devices: <none>
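So that the new array is assembled automatically at boot, it should also be recorded in the mdadm configuration file. A minimal sketch for Debian/Ubuntu (on CentOS the file is /etc/mdadm.conf, and the initramfs is rebuilt with dracut instead):
mdadm --detail --scan | grep /dev/md3 >> /etc/mdadm/mdadm.conf
update-initramfs -u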
Now we can format the new array (here with EXT4) and mount it:
mkfs.ext4 /dev/md3
mount /dev/md3 /mnt
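To make the mount persistent across reboots, add an entry to /etc/fstab. Referencing the filesystem by UUID is more robust than using the device name; the UUID comes from blkid, and /mnt is simply the mount point used above:
blkid /dev/md3
echo 'UUID=<uuid-from-blkid> /mnt ext4 defaults 0 2' >> /etc/fstab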
Email notification when a drive in software RAID fails
Requirement: You must first install and configure a mail server of your choice (e.g. Sendmail).
Debian/Ubuntu/CentOS
Edit /etc/mdadm/mdadm.conf or /etc/mdadm.conf (CentOS) and change the following line:
MAILADDR holu@example.com
Here you can directly specify a destination address. Alternatively, you can forward all emails sent to root to a specific email address using /etc/aliases.
You can also optionally configure the sending email address:
MAILFROM mdadm@example.com
For Debian and Ubuntu, it is important that you set AUTOCHECK in the file /etc/default/mdadm to true:
# grep AUTOCHECK= /etc/default/mdadm
AUTOCHECK=true
For CentOS, you must enable the RAID check in the file /etc/sysconfig/raid-check:
# grep ENABLED /etc/sysconfig/raid-check
ENABLED=yes
openSUSE
Edit /etc/sysconfig/mdadm and assign the email address that should receive the notifications to the variable MDADM_MAIL:
MDADM_MAIL="holu@example.com"
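On many distributions, mdadm's monitoring runs as a systemd service, commonly named mdmonitor. After changing the configuration, restarting the service ensures the new address takes effect (a hedged example; the service name may differ on your system):
systemctl restart mdmonitor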
Test the configuration
You can verify your configuration by letting mdadm send a test mail to the configured address using this command:
mdadm --monitor --test --oneshot /dev/md0
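Beyond a one-off test, mdadm can also watch all arrays continuously in the background; --scan, --daemonise, and --delay are standard mdadm monitor options (the 1800-second polling interval is just an example):
mdadm --monitor --scan --daemonise --delay=1800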
You should also ensure that the file /etc/cron.daily/mdadm contains the following line, which performs the daily monitoring of your RAID:
exec mdadm --monitor --scan --oneshot
Removing a software RAID
To remove a software RAID, first unmount any mounted arrays, then stop and remove them with the following commands:
mdadm --stop /dev/md0
mdadm --stop /dev/md1
mdadm --stop /dev/md2
mdadm --remove /dev/md0
mdadm --remove /dev/md1
mdadm --remove /dev/md2
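The RAID superblocks remain on the member partitions after the arrays are stopped, so the kernel may re-assemble them on the next boot. Wiping the superblocks with --zero-superblock prevents this; the partition names below are the members from the example above and may differ on your system:
mdadm --zero-superblock /dev/sda1 /dev/sda2 /dev/sda3
mdadm --zero-superblock /dev/sdb1 /dev/sdb2 /dev/sdb3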
After that, you can format the drives normally again (for example, with EXT4):
mkfs.ext4 /dev/sda
mkfs.ext4 /dev/sdb
You can check the result with the commands…
cat /proc/mdstat
…and…
fdisk -l
The software RAID has now been removed.