[root@sapphire]# more /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 sdb1[1] sda1[0]
1260992 blocks [2/2] [UU]
md1 : active raid1 sdb5[1]
3759104 blocks [2/1] [_U]
md2 : active raid1 sdb6[1] sda6[0]
3759104 blocks [2/2] [UU]
md3 : active raid1 sdb8[1] sda8[0]
25454848 blocks [2/2] [UU]
unused devices: <none>
As you can see, md1 has only one active member in the mirrored array: instead of both sda5 and sdb5, only sdb5 is running. If the array were healthy, the indicator would read [UU] instead of [_U].
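With several arrays on the box, a quick way to spot any that are degraded is to grep for an underscore inside the status brackets. A minimal one-liner, assuming the status brackets look like the [UU]/[_U] style shown above:
grep -B1 '\[[U_]*_[U_]*\]' /proc/mdstat
The -B1 prints the md line just above each degraded status line, so you can see at a glance which arrays need attention.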
After some googling, I found that I needed to add the failed partition back and let the array rebuild it, using the following command:
raidhotadd /dev/md1 /dev/sda5
This tells the kernel to add the kicked-out /dev/sda5 partition back to the /dev/md1 array, which then rebuilds it from the remaining good mirror. While the resync is running, you can see its progress in /proc/mdstat:
[root@sapphire]# more /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 sdb1[1] sda1[0]
1260992 blocks [2/2] [UU]
md1 : active raid1 sda5[2] sdb5[1]
3759104 blocks [2/1] [_U]
[=>...................] recovery = 7.9% (300080/3759104) finish=5.5min speed=10347K/sec
md2 : active raid1 sdb6[1] sda6[0]
3759104 blocks [2/2] [UU]
md3 : active raid1 sdb8[1] sda8[0]
25454848 blocks [2/2] [UU]
unused devices: <none>
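Rather than re-running more by hand, you can let watch refresh the status every few seconds:
watch -n 5 cat /proc/mdstat
As an aside, raidhotadd comes from the old raidtools package. On a system that ships mdadm instead, the equivalent command should be along the lines of:
mdadm /dev/md1 --add /dev/sda5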
When it completed, it looked like this:
[root@sapphire zones]# more /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 sdb1[1] sda1[0]
1260992 blocks [2/2] [UU]
md1 : active raid1 sda5[0] sdb5[1]
3759104 blocks [2/2] [UU]
md2 : active raid1 sdb6[1] sda6[0]
3759104 blocks [2/2] [UU]
md3 : active raid1 sdb8[1] sda8[0]
25454848 blocks [2/2] [UU]
unused devices: <none>
Now all partitions are functioning properly again.
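If mdadm is available, you can also double-check a single array in more detail with:
mdadm --detail /dev/md1
and confirm that the state is clean and that both partitions show up as active sync.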
Thanks to http://www.kieser.net/linux/raidhotadd.html for details on doing this.