There is a server with software RAID1. Excerpts from the relevant files are below.
mdadm.conf
---------
# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 UUID=36e3b942:88f211c4:37529f05:68b20d1f name=syslog:0
ARRAY /dev/md/1 metadata=1.2 UUID=d969d029:ac9eb2d7:22181bfc:7a955a6f name=syslog:1
ARRAY /dev/md/2 metadata=1.2 UUID=54212f4b:6a512973:f7ff9b6f:a148eb9a name=syslog:2
---------
/etc/fstab
---------
proc /proc proc defaults 0 0
# / was on /dev/md0 during installation
UUID=ab07e74a-c942-423b-8a63-2fbaccf7e7bb / ext3 errors=remount-ro 0 1
# /var was on /dev/md1 during installation
UUID=3b2d5673-8453-4b16-a1e4-7690fdbe0e2a /var ext3 defaults 0 2
# swap was on /dev/md2 during installation
UUID=a1504064-5d15-4bf9-95a3-5cabff3cb8aa none swap sw 0 0
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto 0 0
/dev/fd0 /media/floppy0 auto rw,user,noauto 0 0
---------
/boot/grub/grub.cfg
---------
### BEGIN /etc/grub.d/10_linux ###
menuentry 'Debian GNU/Linux, with Linux 2.6.32-5-686' --class debian --class gnu-linux --class gnu --class os {
insmod raid
insmod mdraid
insmod part_msdos
insmod part_msdos
insmod ext2
set root='(md/0)'
search --no-floppy --fs-uuid --set ab07e74a-c942-423b-8a63-2fbaccf7e7bb
echo 'Loading Linux 2.6.32-5-686 ...'
linux /boot/vmlinuz-2.6.32-5-686 root=UUID=ab07e74a-c942-423b-8a63-2fbaccf7e7bb ro quiet vga=791
echo 'Loading initial ramdisk ...'
initrd /boot/initrd.img-2.6.32-5-686
}
menuentry 'Debian GNU/Linux, with Linux 2.6.32-5-686 (recovery mode)' --class debian --class gnu-linux --class gnu --class os {
insmod raid
insmod mdraid
insmod part_msdos
insmod part_msdos
insmod ext2
set root='(md/0)'
search --no-floppy --fs-uuid --set ab07e74a-c942-423b-8a63-2fbaccf7e7bb
echo 'Loading Linux 2.6.32-5-686 ...'
linux /boot/vmlinuz-2.6.32-5-686 root=UUID=ab07e74a-c942-423b-8a63-2fbaccf7e7bb ro single
echo 'Loading initial ramdisk ...'
initrd /boot/initrd.img-2.6.32-5-686
}
---------
cat /proc/mdstat
---------
Personalities : [raid1]
md2 : active raid1 sda3[0] sdb3[1]
3905524 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
58592184 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
14646200 blocks super 1.2 [2/2] [UU]
unused devices: <none>
---------
fdisk -l
---------
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00097448
Device Boot Start End Blocks Id System
/dev/sda1 * 1 1824 14647296 fd Linux raid autodetect
/dev/sda2 1824 9119 58593280 fd Linux raid autodetect
/dev/sda3 9119 9605 3906560 fd Linux raid autodetect
Disk /dev/sdb: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xcce7cce7
Device Boot Start End Blocks Id System
/dev/sdb1 * 1 1824 14647296 fd Linux raid autodetect
/dev/sdb2 1824 9119 58593280 fd Linux raid autodetect
/dev/sdb3 9119 9605 3906560 fd Linux raid autodetect
Disk /dev/md0: 15.0 GB, 14997708800 bytes
2 heads, 4 sectors/track, 3661550 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/md0 doesn't contain a valid partition table
Disk /dev/md1: 60.0 GB, 59998396416 bytes
2 heads, 4 sectors/track, 14648046 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/md1 doesn't contain a valid partition table
Disk /dev/md2: 3999 MB, 3999256576 bytes
2 heads, 4 sectors/track, 976381 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/md2 doesn't contain a valid partition table
---------
After some time, the need arose for a full backup of the system. The idea took the form of a script along these lines:
#!/bin/sh
# Fail and remove one half of each mirror so /dev/sda can be read in a consistent state
mdadm /dev/md0 --fail /dev/sda1
mdadm /dev/md0 --remove /dev/sda1
mdadm /dev/md1 --fail /dev/sda2
mdadm /dev/md1 --remove /dev/sda2
mdadm /dev/md2 --fail /dev/sda3
mdadm /dev/md2 --remove /dev/sda3
# Image the detached disk to NFS-mounted storage
dd if=/dev/sda conv=sync,noerror bs=8M | gzip -c > /var/nfsNETstorage/image.gz
# Put the partitions back into their arrays (this triggers a resync)
mdadm /dev/md0 --re-add /dev/sda1
mdadm /dev/md1 --re-add /dev/sda2
mdadm /dev/md2 --re-add /dev/sda3
That is, we fail and remove one of the drives from each array, take an image of it with dd onto NFS, and then put the partitions back into the RAID.
The script ran successfully, but after restoring the image onto a test drive
(the test box differs in hardware from the server, an HP G3, that the backup was taken from)
the system does not boot:
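One detail worth noting about the approach above: after --re-add, md starts resyncing the re-added half, and the mirror is not redundant again until that finishes. A minimal sketch of a wait loop, assuming the /proc/mdstat format shown in the dump above (the helper name md_resyncing is my own):

```shell
#!/bin/sh
# md_resyncing reads mdstat-formatted text on stdin and succeeds
# while any array still reports a resync/recovery in progress.
md_resyncing() {
    grep -qE 'resync|recovery'
}

# Real use would poll the kernel:
#   while md_resyncing < /proc/mdstat; do sleep 30; done
# Demonstration on sample text (a fully synced [UU] array):
printf 'md0 : active raid1 sda1[0] sdb1[1]\n      14646200 blocks super 1.2 [2/2] [UU]\n' \
  | md_resyncing || echo "arrays clean"
```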
GRUB loading.
Welcome to GRUB!
error: no such disk.
Entering rescue mode.
grub rescue>
but:
grub rescue> ls
(hd0) (hd0,msdos3) (hd0,msdos2) (hd0,msdos1)
How can that be? Do any of the gurus have thoughts on this?
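One diagnostic step that may be worth trying from that prompt (a guess, not a verified fix for this exact box): grub.cfg sets root='(md/0)', but the restored standalone disk has no md array, which would explain "no such disk". Assuming / really is on the first partition, as the fstab comment and the ls output above suggest, one can point GRUB at its files by hand and see whether the boot continues:

```
grub rescue> set prefix=(hd0,msdos1)/boot/grub
grub rescue> set root=(hd0,msdos1)
grub rescue> insmod normal
grub rescue> normal
```

If that boots, reinstalling GRUB onto the restored disk from the running system should make it permanent.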