md0 : active raid1 sdc1[1] sdb1[0]
63872 blocks [2/2] [UU]
unused devices: <none>
If you have three or more devices, you can use RAID 5, and if you have four or more, you can use RAID 6. This example creates a RAID 5 array:
# mdadm --create -n 3 -l raid5 /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdf1
mdadm: largest drive (/dev/sdb1) exceeds size (62464K) by more than 1%
Continue creating array? y
mdadm: array /dev/md0 started.
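mdadm builds a RAID 6 array in the same way; the only differences are the level and the minimum of four member devices. A minimal sketch, assuming four hypothetical partitions of similar size:
# mdadm --create -n 4 -l raid6 /dev/md1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1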
Note that RAID expects all of the devices to be the same size. If they are not, the array will use only as much space on each device as the smallest partition provides; for example, given partitions that are 50 GB, 47.5 GB, and 52 GB in size, the RAID system will use 47.5 GB on each of the three partitions, wasting 7 GB of disk space. If the variation between devices is more than 1 percent, as in this case, mdadm will prompt you to confirm that you're aware of the difference (and therefore of the wasted storage space).
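If you want to know in advance how much space a size mismatch will waste, compare the partition sizes before creating the array; one way is with blockdev (part of util-linux), using the same devices as the example above:
# blockdev --getsize64 /dev/sdb1 /dev/sdc1 /dev/sdf1
This prints each partition's size in bytes, one per line.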
Once the RAID array has been created, make a filesystem on it, as you would with any other block device:
# mkfs -t ext3 /dev/md0
mke2fs 1.38 (30-Jun-2005)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
16000 inodes, 63872 blocks
3193 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=65536000
8 block groups
8192 blocks per group, 8192 fragments per group
2000 inodes per group
Superblock backups stored on blocks: 8193, 24577, 40961, 57345
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 28 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.
Then mount it and use it:
# mkdir /mnt/raid
# mount /dev/md0 /mnt/raid
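To mount the array automatically at boot, add an entry for it to /etc/fstab; this sketch uses the mount point created above:
/dev/md0   /mnt/raid   ext3   defaults   0   2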
Alternatively, you can use it as a PV under LVM. In this example, a new VG named test is created, containing the LV mysql:
# pvcreate /dev/md0
Physical volume "/dev/md0" successfully created
# vgcreate test /dev/md0
Volume group "test" successfully created
# lvcreate test --name mysql --size 60M
Logical volume "mysql" created
# mkfs -t ext3 /dev/test/mysql
mke2fs 1.38 (30-Jun-2005)
...(Lines skipped)...
This filesystem will be automatically checked every 36 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.
# mkdir /mnt/mysql
# mount /dev/test/mysql /mnt/mysql
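To verify each layer of the stack, use the standard LVM reporting commands:
# pvdisplay /dev/md0
# vgdisplay test
# lvdisplay /dev/test/mysql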
6.2.1.3. Handling a drive failure
You can simulate the failure of a RAID array element using mdadm :
# mdadm --fail /dev/md0 /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md0
The "failed" drive is marked with the symbol (F) in /proc/ mdstat :
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[2](F) sdb1[0]
63872 blocks [2/1] [U_]
unused devices: <none>
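On a production system you won't want to poll /proc/mdstat by hand to discover failures; mdadm's monitor mode can send mail when an array degrades. A minimal sketch (root as the mail recipient is just an example):
# mdadm --monitor --scan --mail=root --daemonise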
To place the "failed" element back into the array, remove it and add it again:
# mdadm --remove /dev/md0 /dev/sdc1
mdadm: hot removed /dev/sdc1
# mdadm --add /dev/md0 /dev/sdc1
mdadm: re-added /dev/sdc1
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
63872 blocks [2/1] [U_]
[>....................] recovery = 0.0% (928/63872) finish=3.1min speed=309K/sec
unused devices: <none>
If the drive had really failed (instead of being subject to a simulated failure), you would replace the drive after removing it from the array and before adding the new one.
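In outline, a real replacement looks like this: remove the failed element, swap the physical drive (see the warning below), partition the new drive to match the old one, and add the new partition back in. Using the devices from the example:
# mdadm --remove /dev/md0 /dev/sdc1
# fdisk /dev/sdc
# mdadm --add /dev/md0 /dev/sdc1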
Do not hot-plug disk drives (i.e., physically remove or add them with the power turned on) unless the drive, disk controller, and connectors are all designed for this operation. If in doubt, shut down the system, switch the drives while the system is turned off, and then turn the power back on.
If you check /proc/mdstat a short while after re-adding the drive to the array, you can see that the RAID system automatically rebuilds the array by copying data from the good drive(s) to the new drive:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
63872 blocks [2/1] [U_]
[=============>.......] recovery = 65.0% (42496/63872)
finish=0.8min speed=401K/sec
unused devices: <none>
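To follow the rebuild continuously instead of re-running cat, the standard watch utility refreshes the display every two seconds:
# watch cat /proc/mdstat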
The mdadm command shows similar information in a more verbose form:
# mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Thu Mar 30 01:01:00 2006
Raid Level : raid1
Array Size : 63872 (62.39 MiB 65.40 MB)
Device Size : 63872 (62.39 MiB 65.40 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Mar 30 01:48:39 2006