Rebooting Results in Degraded RAID Array Using Debian Lenny

Wed 24 December 2008 Category: Linux

As described earlier, I set up a RAID 6 array consisting of physical 1 TB disks and 'virtual' 1 TB disks, each of which is in fact two 0.5 TB disks in RAID 0. 
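
For context, such a nested setup can be assembled with mdadm roughly like this. This is only a sketch: the /dev/sd* and /dev/md* names are placeholders, not my actual layout.

# two 0.5 TB disks combined into one 'virtual' 1 TB device
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdd /dev/sde

# RAID 6 over the physical 1 TB disks and the two RAID 0 devices
mdadm --create /dev/md2 --level=6 --raid-devices=4 /dev/sdf /dev/sdg /dev/md0 /dev/md1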

I wanted to upgrade to Lenny because the new kernel that ships with Lenny supports growing a RAID 6 array. After installing Lenny, the RAID 0 devices were running smoothly, but they were not recognised as part of the RAID 6 array. 

So the array was running in degraded mode. That is bad.
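
A quick way to see that an array is running degraded is /proc/mdstat or mdadm's detail output; the md device name below is only an example:

cat /proc/mdstat
mdadm --detail /dev/md2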

In Lenny, a new version of mdadm is used that requires the presence of the mdadm.conf file. The mdadm.conf file contained these lines: 

#DEVICE partitions
#DEVICE /dev/md*

I uncommented the "DEVICE /dev/md*" line and generated a new initramfs with:

update-initramfs -u

After that, the RAID 0 devices were recognised as part of the RAID 6 array and everything was OK again. So mdadm must be instructed to check whether the /dev/md? devices are members of a RAID array. 
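
Note that if a RAID 0 device was actually dropped from the RAID 6 array while it ran degraded, it may have to be re-added by hand before the array resyncs. A sketch, with example device names:

# re-add a RAID 0 device to the RAID 6 array it belongs to
mdadm /dev/md2 --add /dev/md0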

I guess this is also relevant if you are running a RAID 10 based on a mirrored stripe or a striped mirror.
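
For example, a striped mirror built out of md devices (two RAID 1 mirrors with a RAID 0 stripe on top) has the same problem: the outer array's members are /dev/md* devices, so mdadm has to scan them at boot. Again, the device names are only illustrative:

# two mirrors out of four disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd /dev/sde

# a stripe over the two mirrors; without "DEVICE /dev/md*" in mdadm.conf
# the members will not be found when this array is assembled at boot
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1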
