
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
32768 inodes, 130944 blocks
6547 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
16 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks: 8193, 24577, 40961, 57345, 73729
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 27 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.

# mkdir /mnt/database
# mount /dev/md0 /mnt/database

Any data you write to /mnt/database will be written to both the local volume and the remote drive.
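You can verify at any time that both the local volume and the remote iSCSI disk are active elements of the mirror; for example:

# cat /proc/mdstat
# mdadm --detail /dev/md0

The output will list both mirror elements and show the progress of any resync that is still running.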

Do not use iSCSI directly over the Internet: route iSCSI traffic through a private TCP/IP network or a virtual private network (VPN) to maintain the privacy of your stored data. 

To shut down the remote mirror, reverse the steps:

# umount /mnt/database

# mdadm --stop /dev/md0

# iscsiadm -m node --record f68ace --logout

A connection will be made to the remote node whenever the iSCSI daemon starts. To prevent this, edit the file /etc/iscsid.conf:

#
# Open-iSCSI default configuration.
# Could be located at /etc/iscsid.conf or ~/.iscsid.conf
#
node.active_cnx = 1
node.startup = automatic
#node.session.auth.username = dima
#node.session.auth.password = aloha
node.session.timeo.replacement_timeout = 0
node.session.err_timeo.abort_timeout = 10
node.session.err_timeo.reset_timeout = 30
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.session.iscsi.DefaultTime2Wait = 0
node.session.iscsi.DefaultTime2Retain = 0
node.session.iscsi.MaxConnections = 0
node.cnx[0].iscsi.HeaderDigest = None
node.cnx[0].iscsi.DataDigest = None
node.cnx[0].iscsi.MaxRecvDataSegmentLength = 65536
#discovery.sendtargets.auth.authmethod = CHAP
#discovery.sendtargets.auth.username = dima
#discovery.sendtargets.auth.password = aloha

Change the node.startup line to read:

node.startup = manual
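If you prefer to make this edit from the command line, a single sed substitution will do it; this assumes the line appears in /etc/iscsid.conf exactly as shown above, spaces included:

# sed -i 's/^node.startup = automatic/node.startup = manual/' /etc/iscsid.conf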

Once the remote mirror has been configured, you can create a simple script file with the setup commands:

#!/bin/bash
iscsiadm -m node --record f68ace --login
mdadm --assemble /dev/md0 /dev/main/database /dev/sdi1
mount /dev/md0 /mnt/database

And another script file with the shutdown commands:

#!/bin/bash
umount /mnt/database
mdadm --stop /dev/md0
iscsiadm -m node --record f68ace --logout

Save these scripts into /usr/local/sbin and enable read and execute permission for both of them:

# chmod u+rx /usr/local/sbin/remote-mirror-start
# chmod u+rx /usr/local/sbin/remote-mirror-stop

You can also install these as init scripts (see Lab 4.6, "Managing and Configuring Services," and Lab 4.12, "Writing Simple Scripts").
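As a starting point, a minimal SysV-style wrapper for those two scripts might look like the sketch below; it assumes the scripts are installed in /usr/local/sbin as shown above and leaves out the chkconfig headers and status handling that a finished init script would normally include:

#!/bin/bash
# remote-mirror: wrapper around the remote mirror start/stop scripts
case "$1" in
    start)
        /usr/local/sbin/remote-mirror-start
        ;;
    stop)
        /usr/local/sbin/remote-mirror-stop
        ;;
    restart)
        /usr/local/sbin/remote-mirror-stop
        /usr/local/sbin/remote-mirror-start
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
        ;;
esac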

6.2.3.4. ...using more than one RAID array, but configuring one hot spare to be shared between them?

This can be done through /etc/mdadm.conf. In each ARRAY line, add a spare-group option:

# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 spare-group=red uuid=5fccf106:d00cda80:daea5427:1edb9616
ARRAY /dev/md1 spare-group=red uuid=aaf3d1e1:6f7231b4:22ca60f9:00c07dfe

The name of the spare-group does not matter as long as all of the arrays sharing the hot spare have the same value; here I've used red. Ensure that at least one of the arrays has a hot spare and that the size of the hot spare is not smaller than the largest element that it could replace; for example, if each device making up md0 was 10 GB in size, and each element making up md1 was 5 GB in size, the hot spare would have to be at least 10 GB in size, even if it was initially a member of md1.
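Spare migration between arrays in the same spare-group is performed by mdadm running in monitor mode (the mdmonitor service), so make sure it is running. To confirm that the shared spare really does move where it is needed, you can mark one element as failed and watch /proc/mdstat; /dev/sdg1 below is only a placeholder for one of your own array elements, and this test triggers a real rebuild, so try it before the arrays hold production data:

# mdadm /dev/md0 --fail /dev/sdg1
# watch cat /proc/mdstat

Once the rebuild onto the spare finishes, remove and re-add the "failed" device to return it to service as the new shared spare:

# mdadm /dev/md0 --remove /dev/sdg1
# mdadm /dev/md0 --add /dev/sdg1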

6.2.3.5. ...configuring the rebuild rate for arrays?

Array rebuilds will usually be performed at a rate of 1,000 to 200,000 KB per second per drive, scheduled in such a way that the impact on application storage performance is minimized. Adjusting the rebuild rate lets you control the trade-off between application performance and rebuild duration.

The settings are accessible through two pseudofiles in /proc/sys/dev/raid, named speed_limit_max and speed_limit_min. To view the current values, simply display the contents:

$ cat /proc/sys/dev/raid/speed_limit*
200000
1000

To change a setting, place a new number in the appropriate pseudofile:

# echo 40000 >/proc/sys/dev/raid/speed_limit_max
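Values written this way last only until the next reboot. To make the setting persistent, you can add the corresponding sysctl key to /etc/sysctl.conf and reload it; 40000 here is simply the example figure used above, not a recommended value:

# echo "dev.raid.speed_limit_max = 40000" >>/etc/sysctl.conf
# sysctl -p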

6.2.3.6. ...simultaneous drive failure?

Sometimes, a drive manufacturer just makes a bad batch of disks, and this has happened more than once. For example, a few years ago, one drive maker used defective plastic to encapsulate the chips on the drive electronics; drives with the defective plastic failed at around the same point in their life cycles, so that several elements of RAID arrays built using these drives would fail within a period of days or even hours. Since most RAID levels provide protection against a single drive failure but not against multiple drive failures, data was lost.