State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 65% complete
UUID : b7572e60:4389f5dd:ce231ede:458a4f79
Events : 0.34
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 spare rebuilding /dev/sdc1
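While a rebuild is in progress, you can also follow the kernel's own view of the resync in /proc/mdstat, which shows a progress bar, the current speed, and an estimated time to completion. A quick sketch (adjust the refresh interval to taste):

```shell
# Redisplay the kernel's RAID status every 5 seconds;
# press Ctrl-C to exit once the rebuild reaches 100%.
watch -n 5 cat /proc/mdstat
```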
6.2.1.4. Stopping and restarting a RAID array
A RAID array can be stopped any time that it is not in use, which is useful if you have built an array incorporating removable or external drives that you want to disconnect. If you're using the RAID device as an LVM physical volume, you'll need to deactivate the volume group so the device is no longer considered to be in use:
# vgchange test -an
0 logical volume(s) in volume group "test" now active
The -an argument here means activated: no. (Alternatively, you can remove the PV from the VG using vgreduce.)
To stop the array, use the --stop option to mdadm:
# mdadm --stop /dev/md0
The two steps above will automatically be performed when the system is shut down.
To restart the array, use the --assemble option:
# mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
mdadm: /dev/md0 has been started with 2 drives.
To configure the automatic assembly of this array at boot time, obtain the array's UUID (unique ID number) from the output of mdadm -D:
# mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Thu Mar 30 02:09:14 2006
Raid Level : raid1
Array Size : 63872 (62.39 MiB 65.40 MB)
Device Size : 63872 (62.39 MiB 65.40 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Mar 30 02:19:00 2006
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 5fccf106:d00cda80:daea5427:1edb9616
Events : 0.18
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
Then create the file /etc/mdadm.conf if it doesn't exist, or add an ARRAY line to it if it does:
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 uuid=c27420a7:c7b40cc9:3aa51849:99661a2e
In this file, the DEVICE line identifies the devices to be scanned (all partitions of all storage devices in this case), and the ARRAY lines identify each RAID array that is expected to be present. This ensures that the RAID arrays identified by scanning the partitions will always be assigned the same md device numbers, which is useful if more than one RAID array exists in the system. In the mdadm.conf files created during installation by Anaconda, the ARRAY lines contain optional level= and num-devices= entries (see the next section).
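Rather than copying the UUID by hand, you can have mdadm produce the ARRAY lines itself: its --detail --scan mode prints one line per running array, already in mdadm.conf format. A sketch (run as root, and review the output before appending it):

```shell
# Print ARRAY lines for all running arrays in mdadm.conf format
mdadm --detail --scan

# After checking the output, append it to the configuration file
mdadm --detail --scan >> /etc/mdadm.conf
```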
If the device is a PV, you can now reactivate the VG:
# vgchange test -a y
1 logical volume(s) in volume group "test" now active
6.2.1.5. Monitoring RAID arrays
The mdmonitor service uses the monitor mode of mdadm to monitor and report on RAID drive status.
The method used to report drive failures is configured in the file /etc/mdadm.conf. To send email to a specific email address, add or edit the MAILADDR line:
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR raid-alert
ARRAY /dev/md0 level=raid1 num-devices=2 uuid=dd2aabd5:fb2ab384:cba9912c:df0b0f4b
ARRAY /dev/md1 level=raid1 num-devices=2 uuid=2b0846b0:d1a540d7:d722dd48:c5d203e4
ARRAY /dev/md2 level=raid1 num-devices=2 uuid=31c6dbdc:414eee2d:50c4c773:2edc66f6
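To verify that notification is working without failing a real drive, mdadm's monitor mode can generate a test alert for each configured array. A sketch:

```shell
# Send a TestMessage event for each array listed in /etc/mdadm.conf,
# then exit (--oneshot) instead of staying resident
mdadm --monitor --scan --oneshot --test
```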
When mdadm.conf is configured by Anaconda, the email address is set to root. It is a good idea to set this to an email alias, such as raid-alert, and configure the alias in the /etc/aliases file to send mail to whatever destinations are appropriate:
raid-alert: chris, 4165559999@msg.telus.com
In this case, email will be sent to the local mailbox chris , as well as to a cell phone.
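Note that after editing /etc/aliases you must rebuild the alias database before the new alias takes effect:

```shell
# Rebuild the alias database from /etc/aliases
newaliases
```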
When an event occurs, such as a drive failure, mdadm sends an email message like this:
From root@bluesky.fedorabook.com Thu Mar 30 09:43:54 2006
Date: Thu, 30 Mar 2006 09:43:54 -0500
From: mdadm monitoring <root@bluesky.fedorabook.com>
To: chris@bluesky.fedorabook.com
Subject: Fail event on /dev/md0:bluesky.fedorabook.com
This is an automatically generated mail message from mdadm running on bluesky.fedorabook.com
A Fail event had been detected on md device /dev/md0.
It could be related to component device /dev/sdc1.
Faithfully yours, etc.
I like the "Faithfully yours" bit at the end!
If you'd prefer that mdadm run a custom program when an event is detected (perhaps to set off an alarm or other notification), add a PROGRAM line to /etc/mdadm.conf:
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR raid-alert
PROGRAM /usr/local/sbin/mdadm-event-handler
ARRAY /dev/md0 level=raid1 num-devices=2 uuid=dd2aabd5:fb2ab384:cba9912c:df0b0f4b
ARRAY /dev/md1 level=raid1 num-devices=2 uuid=2b0846b0:d1a540d7:d722dd48:c5d203e4
ARRAY /dev/md2 level=raid1 num-devices=2 uuid=31c6dbdc:414eee2d:50c4c773:2edc66f6
Only one program name can be given. When an event is detected, that program will be run with three arguments: the event, the RAID device, and (optionally) the RAID element. If you wanted a verbal announcement to be made, for example, you could use a script like this: