Building a RAID 6 Array of Mixed Drives

Sun 10 August 2008 Category: Storage

To be honest, 4 TB of storage isn't really necessary for home use. However, I like to collect movies in full DVD or HD quality, so I need plenty of storage.

I decided to build myself a NAS box based on Debian Etch. Samba is used to allow clients to access the data. The machine was initially based on 4 x 0.5 TB disks using the four SATA ports on the mainboard. With Linux' built-in support for software RAID, I created a RAID 5 array, giving me 1.5 TB of storage space. Since a single movie is around 4 GB, that 1.5 TB soon became rather tight.
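For reference, creating that initial array with mdadm looked roughly like this. This is a sketch: the md device name and partition names are assumptions, since that array was later torn down for the new setup.

server:~# mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1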

So I bought 4 x 1 TB disks and a HighPoint RocketRAID 2320 controller (8 SATA ports). I put all 8 disks on this controller.

I wanted to create a single RAID 6 array using both the 1 TB disks and the 0.5 TB disks. I didn't want to create two separate arrays: although that would have provided additional space, it wouldn't have given me the same level of safety that RAID 6 does.

I mainly chose RAID 6 because I cannot afford a backup solution for this amount of data. I'm aware that RAID is no substitute for a proper backup, but it's an accepted risk for me.

So how do you create a RAID 6 array out of drives of different sizes? The solution is fairly simple: just put two 0.5 TB disks together in one RAID 0 volume and you'll have a 'virtual' 1 TB disk. Since I had four 0.5 TB disks, I could create two 'virtual' 1 TB disks.
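With mdadm, that step is a one-liner per pair. A sketch, assuming the 0.5 TB disks enumerate as /dev/sde through /dev/sdh (the actual device names depend on how your system detects the disks):

server:~# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sde1 /dev/sdf1
server:~# mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdg1 /dev/sdh1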

The only downside is that I had to shave a little bit of capacity off the native 1 TB drives, because 2 x 0.5 TB provides slightly less storage space than a single 1 TB disk. We're talking about something like 50 MB here, so it's not a big deal in my opinion.
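The RAID 6 array is then created across the four native 1 TB partitions and the two 'virtual' 1 TB disks. Roughly like this; the 128K chunk size matches the detail output below, the rest is a reconstruction:

server:~# mdadm --create /dev/md5 --level=6 --chunk=128 --raid-devices=6 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/md0 /dev/md1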

The funny thing is that this array actually performs rather well. The disks are connected to the HighPoint RocketRAID 2320 controller, but the controller is used just for its SATA ports; the on-board RAID functionality is not used. For RAID, I use Linux software RAID through mdadm. This is what the RAID 6 array looks like:

server:~# mdadm --detail /dev/md5
/dev/md5:
        Version : 00.90.03
  Creation Time : Thu Jul 24 22:40:26 2008
     Raid Level : raid6
     Array Size : 3906359808 (3725.40 GiB 4000.11 GB)
    Device Size : 976589952 (931.35 GiB 1000.03 GB)
   Raid Devices : 6
  Total Devices : 6
Preferred Minor : 5
    Persistence : Superblock is persistent
    Update Time : Sun Aug 10 15:36:18 2008
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0
     Chunk Size : 128K
           UUID : 0442e8fa:acd9278e:01f9e43d:ac30fbff (local to host server)
         Events : 0.14170

Number   Major   Minor   RaidDevice State
   0       8        1        0      active sync   /dev/sda1
   1       8       17        1      active sync   /dev/sdb1
   2       8       33        2      active sync   /dev/sdc1
   3       8       49        3      active sync   /dev/sdd1
   4       9        0        4      active sync   /dev/md0
   5       9        1        5      active sync   /dev/md1
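One practical note on nesting arrays like this: the two RAID 0 volumes have to be assembled before the RAID 6 array at boot. On Debian, recording all arrays in /etc/mdadm/mdadm.conf takes care of this; something along these lines, after checking the scan output by hand:

server:~# mdadm --detail --scan >> /etc/mdadm/mdadm.conf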

And this is how this array performs:

server:~# dd if=/storage/test.bin of=/dev/null bs=1M
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 45.7107 seconds, 229 MB/s

server:~# dd if=/dev/zero of=/storage/test.bin bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 81.0798 seconds, 129 MB/s

With 229 MB/s read performance and 129 MB/s write performance using RAID 6, I think I should be content.
