Using mdadm to recover from a dead disk in a Linux RAID-1 array
September 7, 2013

Yes, it’s that time of the year again. A disk in my desktop-replacement laptop with 2 disks and a RAID-1 has died. Time for recovery.
This laptop has been running 24/7 for the last 3 years or so, so it’s not too surprising that a disk dies. Surprisingly though, for the first time in a long series of dead disks, smartctl -a does indeed show errors for this disk. Here’s a short snippet of those:
$ smartctl -a /dev/sda
[...]
Error 1341 occurred at disk power-on lifetime: 17614 hours (733 days + 22 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 41 02 1f c0 9c 40  Error: UNC at LBA = 0x009cc01f = 10272799

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 f8 08 20 c0 9c 40 00   41d+01:51:50.974  READ FPDMA QUEUED
  60 08 00 18 c0 9c 40 00   41d+01:51:50.972  READ FPDMA QUEUED
  ef 10 02 00 00 00 a0 00   41d+01:51:50.972  SET FEATURES [Reserved for Serial ATA]
  ec 00 00 00 00 00 a0 00   41d+01:51:50.971  IDENTIFY DEVICE
  ef 03 45 00 00 00 a0 00   41d+01:51:50.971  SET FEATURES [Set transfer mode]

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed: read failure       90%     20511         156170102
[...]
The status of the degraded RAID array looks like this:
$ cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb7[1]
      409845696 blocks [2/1] [_U]

md0 : active raid1 sda6[0] sdb6[1]
      291776 blocks [2/2] [UU]
The [_U] means that one of the two disks has failed; normally it would read [UU]. There are actually two RAID-1s: a small md0 (sda6 + sdb6) for /boot, and the main md1 (sda7 + sdb7), which holds the OS and my data. Apparently (at first at least), only sda7 was faulty and got kicked out of the array:
$ dmesg | grep kick
md: kicking non-fresh sda7 from array!
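For more detail than /proc/mdstat gives, mdadm itself can report on the array and its members (the device names below match my setup, adjust them to yours):

$ mdadm --detail /dev/md1    # per-slot state; the kicked disk shows up as removed/faulty
$ mdadm --examine /dev/sdb7  # inspect the RAID superblock of a member partition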
Anyway, I ordered a replacement disk, removed the dead disk (having checked its serial number and brand first, so I wouldn’t accidentally remove the wrong one), inserted the new disk, and rebooted.
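(If the kernel hasn’t already kicked all affected partitions out of the arrays, it’s safer to mark them as failed and remove them via mdadm before pulling the disk. A sketch, assuming the dying disk is still sda at this point:)

$ smartctl -i /dev/sda | grep -i serial              # double-check which physical disk to pull
$ mdadm /dev/md0 --fail /dev/sda6 --remove /dev/sda6
$ mdadm /dev/md1 --fail /dev/sda7 --remove /dev/sda7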
Note: In order for this to work, you must have (previously) installed the bootloader (usually GRUB) onto both disks; otherwise you won’t be able to boot from the surviving disk (which is exactly what you’ll want to do when one of them dies). In my case, sda was now dead, so I put sdb into its place (physically, by using the other SATA connector/port), and the new replacement disk became the new sdb.
After the reboot, the new disk needs to be partitioned like the other RAID disk. This can be done easily by copying the partition layout of the “good” disk (now sda after the reboot) onto the empty disk (sdb):
$ sfdisk -d /dev/sda | sfdisk /dev/sdb
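To verify that the layout was copied correctly, you can dump both partition tables and compare them (just a sanity check; the dumps should only differ in the device names):

$ sfdisk -d /dev/sda > parts-sda
$ sfdisk -d /dev/sdb > parts-sdb
$ diff parts-sda parts-sdb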
Specifically, the RAID partitions need to have the partition type/ID “fd” (“Linux raid autodetect”); check that this is the case. Then you can add the new disk’s partitions to the RAIDs:
$ mdadm /dev/md0 --add /dev/sdb6
$ mdadm /dev/md1 --add /dev/sdb7
After a few hours the RAID will be re-synced properly and all is good again. You can check the progress via:
$ watch -n 1 cat /proc/mdstat
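While the rebuild is running, the md1 entry grows a progress line, roughly like this (the numbers here are made up for illustration):

md1 : active raid1 sdb7[2] sda7[1]
      409845696 blocks [2/1] [_U]
      [==>..................]  recovery = 12.3% (50411008/409845696) finish=84.7min speed=70651K/sec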
You should probably not reboot during the resync (though I’m not 100% sure if that would be an issue in practice; please leave a comment if you know).
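If you want to script around that (say, to make sure an automated reboot waits for the rebuild), mdadm can block until any resync/recovery on the array has finished:

$ mdadm --wait /dev/md1   # returns once the rebuild is done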
Also, don’t forget to install GRUB on the new disk so you can still boot when the next disk dies:
$ grub-mkdevicemap
$ grub-install /dev/sdb
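A crude way to check that the boot code actually landed in the new disk’s MBR is to look for GRUB’s marker string in the first sector (this assumes the usual x86 GRUB images, which leave a readable “GRUB” string there):

$ dd if=/dev/sdb bs=512 count=1 2>/dev/null | strings | grep GRUB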
And it might be a good idea to use S.M.A.R.T. to check the new disk, just in case. I did a quick self-test via:
$ smartctl -t short /dev/sdb
# Wait a few minutes after this.
$ smartctl -a /dev/sdb
[...]
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%        22         -
[...]
Looks good. So far.
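For a bit more assurance, a long self-test reads the entire disk surface; it takes a few hours on a disk this size, but catches problems the short test misses:

$ smartctl -t long /dev/sdb
$ smartctl -l selftest /dev/sdb   # check the result once the test is done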