ZFS Performance on HP Proliant Microserver Gen8 G1610T

    Fri 14 August 2015

    I think the HP Proliant Microserver Gen8 is a very interesting little box if you want to build your own ZFS-based NAS. The benchmarks I've performed seem to confirm this.

    The Microserver Gen8 has nice features such as:

    • iLO (KVM over IP with dedicated network interface)
    • support for ECC memory
    • 2 x Gigabit network ports
    • Free PCIe slot (half-height)
    • Small footprint
    • Fairly silent
    • Good build quality

    The Microserver Gen8 can be a better solution than the offerings of, for example, Synology or QNAP, because you can build a more reliable system based on ECC memory and ZFS.

    [Image: HP Proliant Microserver Gen8 (shown with DVD drive)]

    Please note that the G1610T version of the Microserver Gen8 does not ship with a DVD/CD drive as depicted in the image above.

    The Gen8 can be found fairly cheaply on the European market at around 240 euros including taxes. If you add an extra 8 GB of memory on top of the 2 GB that comes installed, you have a total of 10 GB, which is more than enough to support ZFS.

    The Gen8 has room for 4 x 3.5" hard drives, so with today's large disk sizes you can pack quite a bit of storage into this compact machine.


    Net storage capacity

    This table gives you a quick overview of the net storage capacity you get with four drives, depending on the chosen drive size and redundancy level.

    -----------------------------------------
    | Drive size | RAIDZ | RAIDZ2 or Mirror |
    -----------------------------------------
    | 3 TB       |  9 TB |  6 TB            |
    | 4 TB       | 12 TB |  8 TB            |
    | 6 TB       | 18 TB | 12 TB            |
    | 8 TB       | 24 TB | 16 TB            |
    -----------------------------------------

    Boot device

    If you want to use all four drive slots for storage, you need to boot this machine from either the fifth internal SATA port, the internal USB 2.0 port or the microSD card slot.

    The fifth SATA port is not bootable if you disable the on-board RAID controller and run in pure AHCI mode; only the four 3.5" drive bays are bootable. AHCI is probably the best mode for ZFS, as there is no RAID controller firmware active between the disks and ZFS.

    The fifth SATA port is bootable if you configure SATA to operate in Legacy mode. This is not recommended, as you lose the benefits of AHCI, such as hot-swapping of disks, and there is probably a performance penalty as well.

    The fifth SATA port is also bootable if you enable the on-board RAID controller but do not configure any RAID arrays on the drives you plan to use with ZFS (thanks, Mikko Rytilahti). You do need to put the boot drive in a RAID volume in order to boot from the fifth SATA port.

    The unconfigured drives will just be passed to the OS as AHCI devices and can thus be used in your ZFS array. The big question is what happens if you encounter read errors or other drive problems that ZFS could handle, but that would be a reason for the RAID controller to kick a drive off the SATA bus. I have no information on that.
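
    A quick way to verify from within Linux that the controller is indeed operating in AHCI mode and that the drives show up as plain SATA devices (a generic check, not specific to this machine; the exact messages vary per kernel version):

    root@debian:~# dmesg | grep -i ahci
    root@debian:~# lsblk -o NAME,MODEL,SIZE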

    I myself used an old 2.5" hard drive with a SATA-to-USB converter, which I stuck inside the case (use double-sided tape or velcro to mount it to the PSU). Booting from a USB stick is also an option, although a regular 2.5" hard drive or SSD is probably more reliable (USB sticks suffer from flash wear) and faster.

    Boot performance

    The Microserver Gen8 takes about 1 minute and 50 seconds just to pass the BIOS boot process and start booting the operating system (you will hear a beep).

    Test method and equipment

    I'm running Debian Jessie with the latest stable ZFS-on-Linux (0.6.4). Reportedly, FreeNAS also runs perfectly fine on this box.
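
    For reference, you can check which ZFS-on-Linux version is actually loaded with modinfo (a quick generic check):

    root@debian:~# modinfo zfs | grep -iw version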

    I had to run my tests with the disks I had available:

    root@debian:~# show disk -sm
    -----------------------------------
    | Dev | Model              | GB   |   
    -----------------------------------
    | sda | SAMSUNG HD103UJ    | 1000 |   
    | sdb | ST2000DM001-1CH164 | 2000 |   
    | sdc | ST2000DM001-1ER164 | 2000 |   
    | sdd | SAMSUNG HM250HI    | 250  |   
    | sde | ST2000DM001-1ER164 | 2000 |   
    -----------------------------------
    

    The 250 GB drive is a portable disk connected to the internal USB port; it is used as the OS boot device. The other disks, 1 x 1 TB and 3 x 2 TB, are put together in a single RAIDZ pool. Because the smallest drive dictates the usable size of every VDEV member, this results in 3 TB of storage (4 x 1 TB minus one drive for parity).
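
    For reference, creating a 4-disk RAIDZ pool like this looks roughly as follows (a sketch; the by-id device names are the ones that show up in the zpool status output below, and ashift=12 is the commonly recommended setting for 4K-sector drives):

    root@debian:~# zpool create -o ashift=12 testpool raidz \
        /dev/disk/by-id/wwn-0x50000f0008064806 \
        /dev/disk/by-id/wwn-0x5000c5006518af8f \
        /dev/disk/by-id/wwn-0x5000c5007cebaf42 \
        /dev/disk/by-id/wwn-0x5000c5007ceba5a5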

    Tests with 4-disk RAIDZ VDEV

    root@debian:~# zfs list
    NAME       USED  AVAIL  REFER  MOUNTPOINT
    testpool  48.8G  2.54T  48.8G  /testpool
    root@debian:~# zpool status
      pool: testpool
     state: ONLINE
      scan: none requested
    config:
    
        NAME                        STATE     READ WRITE CKSUM
        testpool                    ONLINE       0     0     0
          raidz1-0                  ONLINE       0     0     0
            wwn-0x50000f0008064806  ONLINE       0     0     0
            wwn-0x5000c5006518af8f  ONLINE       0     0     0
            wwn-0x5000c5007cebaf42  ONLINE       0     0     0
            wwn-0x5000c5007ceba5a5  ONLINE       0     0     0
    
    errors: No known data errors
    

    Because the data transfers a NAS faces are mostly sequential in nature, I've done some tests with 'dd' to measure this sequential performance.

    Read performance:

    root@debian:~# dd if=/testpool/test.bin of=/dev/null bs=1M
    50000+0 records in
    50000+0 records out
    52428800000 bytes (52 GB) copied, 162.429 s, 323 MB/s

    Write performance:

    root@debian:~# dd if=/dev/zero of=/testpool/test.bin bs=1M count=50000 conv=sync
    50000+0 records in
    50000+0 records out
    52428800000 bytes (52 GB) copied, 169.572 s, 309 MB/s

    Test with 3-disk RAIDZ VDEV

    After the previous test I wondered what would happen if I would exclude the older 1 TB disk and create a pool with just the 3 x 2 TB drives. This is the result:

    Read performance:

    root@debian:~# dd if=/testpool/test.bin of=/dev/null bs=1M conv=sync
    50000+0 records in
    50000+0 records out
    52428800000 bytes (52 GB) copied, 149.509 s, 351 MB/s

    Write performance:

    root@debian:~# dd if=/dev/zero of=/testpool/test.bin bs=1M count=50000 conv=sync
    50000+0 records in
    50000+0 records out
    52428800000 bytes (52 GB) copied, 144.832 s, 362 MB/s

    The performance is clearly better, even though there is one disk fewer in the VDEV. I would have liked to test what performance four 2 TB drives would achieve, but I only have three of them.

    The result does show that the pool is more than capable of sustaining gigabit network transfer speeds.

    This is confirmed when performing actual network file transfers. In the example below, I copy a 50 GB test file from the Gen8 to a test system over NFS. Tests are performed using the 3-disk pool.
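
    For completeness, a minimal NFS setup along these lines would reproduce this test (the export network, server hostname, and options are assumptions):

    # On the Gen8, in /etc/exports (activate with 'exportfs -ra'):
    /testpool 192.168.1.0/24(rw,no_subtree_check)

    # On the client:
    root@nano:~# mount -t nfs gen8:/testpool /mnt/server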

    NFS read performance:

    root@nano:~# dd if=/mnt/server/test2.bin of=/dev/null bs=1M
    50000+0 records in
    50000+0 records out
    52428800000 bytes (52 GB) copied, 443.085 s, 118 MB/s
    

    NFS write performance:

    root@nano:~# dd if=/dev/zero of=/mnt/server/test2.bin bs=1M count=50000 conv=sync 
    50000+0 records in
    50000+0 records out
    52428800000 bytes (52 GB) copied, 453.233 s, 116 MB/s
    

    I think these results are excellent. Tests with the 'cp' command give the same results.

    I've also done some tests with the SMB/CIFS protocol, using a second Linux box as a CIFS client to connect to the Gen8.
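
    The Samba configuration isn't covered here; a typical client-side mount would look like this (the server name, share name, and credentials are assumptions):

    root@nano:~# mount -t cifs //gen8/testpool /mnt/test -o username=smbuser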

    CIFS read performance:

    root@nano:~# dd if=/mnt/test/test.bin of=/dev/null bs=1M
    50000+0 records in
    50000+0 records out
    52428800000 bytes (52 GB) copied, 527.778 s, 99.3 MB/s
    

    CIFS write performance:

    root@nano:~# dd if=/dev/zero of=/mnt/test/test3.bin bs=1M count=50000 conv=sync
    50000+0 records in
    50000+0 records out
    52428800000 bytes (52 GB) copied, 448.677 s, 117 MB/s
    

    Hot-swap support

    Although it's even printed on the hard drive caddies that hot-swap is not supported, it does seem to work perfectly fine if you run the SATA controller in AHCI mode.
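
    After hot-swapping a failed disk, the replacement can be brought into the pool with 'zpool replace' (the device ids below are placeholders):

    root@debian:~# zpool replace testpool <old-device-id> <new-device-id>
    root@debian:~# zpool status testpool    # shows resilver progress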

    Fifth SATA port for SSD SLOG/L2ARC?

    If you buy a converter cable that converts a floppy power connector to a SATA power connector, you can install an SSD on the fifth SATA port. This SSD can then be used as a dedicated SLOG device and/or L2ARC cache if you have a need for this.
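
    Adding such an SSD to the pool would look something like this (assuming the SSD shows up as /dev/sdf and is partitioned into a small SLOG partition and a larger L2ARC partition):

    root@debian:~# zpool add testpool log /dev/sdf1
    root@debian:~# zpool add testpool cache /dev/sdf2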

    RAIDZ, is that OK?

    If you want maximum storage capacity with redundancy, RAIDZ is the only option. RAIDZ2 or two mirrored VDEVs are more reliable, but will reduce the available storage space by a third.

    The main risk of RAIDZ is a double drive failure. With today's large drive sizes, a resilver of a VDEV takes quite some time: it can take more than a day before the pool is resilvered, and during that window you run without redundancy.

    With the low number of drives in the VDEV, the risk of a second drive failure may be low enough to be acceptable. That's up to you.

    Noise levels

    In the past, there have been reports of the Gen8 making tons of noise because the rear chassis fan spins at a high RPM when the RAID controller is set to AHCI mode.

    I myself have not encountered this problem. The machine is almost silent.

    Power consumption

    With drives spinning: 50-55 Watt. With drives in standby: 30-35 Watt.

    Conclusion

    I think my benchmarks show that the Microserver Gen8 could be an interesting platform if you want to create your own ZFS-based NAS.

    Please note that since the Gen9 server platform has been out for some time, HP may release a Gen9 version of the Microserver in the near future. However, as of August 2015, there is no information on this yet, and it is not clear whether a successor will be released.

    Tagged as: ZFS, microserver
