This scenario assumes that you have two hard disks in a RAID1 setup and that one of them has failed (say, sdb).
To check the status of the RAID arrays:
$ cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sda3[1]
      730202368 blocks [2/1] [_U]
md1 : active raid1 sda2[1]
      264960 blocks [2/1] [_U]
md0 : active (auto-read-only) raid1 sda1[1]
      2102464 blocks [2/1] [_U]
You will see [_U] or [U_] instead of [UU] when one member of a mirror has failed; the underscore marks the missing device.
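If you need more detail than /proc/mdstat gives (for example, which member is marked faulty), mdadm's detail view shows the state of each device. A quick look, assuming /dev/md2 is one of the degraded arrays:
# mdadm --detail /dev/md2
The output lists each member partition along with its state (active sync, faulty, removed).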
If required, mark the failed drive's partitions as faulty and remove them from all md devices:
# mdadm --manage /dev/md0 --fail /dev/sdb1
# mdadm --manage /dev/md0 --remove /dev/sdb1
# mdadm --manage /dev/md1 --fail /dev/sdb2
# mdadm --manage /dev/md1 --remove /dev/sdb2
# mdadm --manage /dev/md2 --fail /dev/sdb3
# mdadm --manage /dev/md2 --remove /dev/sdb3
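Before powering off, it is worth noting the serial number of the failing drive so you pull the right disk from the chassis. A minimal sketch, assuming smartmontools is installed:
# smartctl -i /dev/sdb | grep -i serial
hdparm -I /dev/sdb works as well if smartctl is not available.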
Shut down the machine and replace the failed hard drive.
Once the server has booted, you will see the new device (either sda or sdb, depending on which drive was replaced):
# ls -l /dev/sd*
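Where available, lsblk gives a clearer picture, since the new drive will show up without any child partitions:
# lsblk -o NAME,SIZE,TYPE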
Now we need to replicate the partition scheme from the surviving drive onto the new one:
# sfdisk -d /dev/sda | sfdisk /dev/sdb
The -d option dumps the partition table of /dev/sda, which is then piped into sfdisk to write it onto /dev/sdb.
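Note that classic sfdisk only understands MBR (DOS) partition tables; recent util-linux versions handle GPT as well. If your disks use GPT and your sfdisk does not, one alternative is sgdisk from the gdisk package (an assumption about your layout, not part of the original setup):
# sgdisk -R /dev/sdb /dev/sda
# sgdisk -G /dev/sdb
The -R option replicates sda's table onto sdb, and -G then randomizes the disk and partition GUIDs so the two disks do not share identifiers.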
You can verify the new partitions with fdisk -l. Now add them back to the RAID arrays:
# mdadm --manage /dev/md0 --add /dev/sdb1
# mdadm --manage /dev/md1 --add /dev/sdb2
# mdadm --manage /dev/md2 --add /dev/sdb3
The arrays will start resyncing the data and will be fully redundant once the rebuild completes.
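If the rebuild crawls, the kernel's resync speed limits can be raised; a small sketch (the value is in KB/s per device and is just an example, tune it to your hardware):
# echo 100000 > /proc/sys/dev/raid/speed_limit_min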
You can verify the progress in /proc/mdstat:
# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb3[0] sda3[1]
      730202368 blocks [2/2] [UU]
md1 : active raid1 sdb2[0] sda2[1]
      264960 blocks [2/2] [UU]
md0 : active (auto-read-only) raid1 sdb1[0] sda1[1]
      2102464 blocks [2/2] [UU]
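One caveat this walkthrough does not cover: if the system boots from these disks, the replacement drive has no boot loader yet. Assuming a BIOS/GRUB setup, reinstalling GRUB on the new disk keeps the machine bootable should the remaining original drive fail later:
# grub-install /dev/sdb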