Articles in the Storage category

  1. Using iSCSI With Time Machine and Super Duper

    Sun 21 July 2013

    In the past, as a Mac user, I've used separate external drives for Time Machine backups and Super Duper clones, but I'm not happy with that setup. External hard drives make noise and create clutter.

    I'd like to move all my storage away from my living room (or home office) and put it in another room, or even a closet.

    A NAS may help with that, but a NAS does not solve all problems. The main problem is the reliability of network-based Time Machine backups. These NAS devices pretend to be Time Capsules, but there's always the risk that Apple breaks compatibility with a future update.

    From my experience, Time Machine backups are only 100% reliable with locally attached storage, like external hard drives.

    Now there is a cool technology called iSCSI. It's basically a storage protocol tunneled through your home LAN instead of a USB, FireWire or Thunderbolt cable. Most NAS devices support iSCSI and allow you to carve out some local NAS storage and present it to your computer over the network as if it were just local storage. Since iSCSI uses your Gigabit network as a transport, you can easily achieve transfer speeds of around 110 MB/s, which should suit most needs.
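
    If you want to check that you actually get close to that 110 MB/s, a quick and dirty test is to write a large file to the mounted iSCSI volume with dd. This is just a sketch: the volume name below is an example and your mount point will differ.

    dd if=/dev/zero of=/Volumes/iSCSI-Volume/testfile.bin bs=1m count=5000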

    This is very cool, because you can export entire hard drives through the network to your computer. Your computer cannot tell the difference between an external USB hard drive and a hard drive exported over the network by your NAS; iSCSI is completely transparent from the perspective of the operating system.

    This trick allows you to create bootable Super Duper clones of your boot drive through the network. I would just hook up an external USB drive to my NAS and export it through iSCSI.

    In case of an emergency - when your boot drive dies - you can boot from this external hard drive. Just disconnect it from your NAS and hook it up to your Mac.

    Because hard drives attached through iSCSI are seen as normal storage, you can also encrypt them with Apple's built-in whole-drive (or whole-partition) encryption.
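
    For example, on OS X 10.7 or later you can convert the mounted iSCSI volume into an encrypted CoreStorage volume from the command line. This is just a sketch: the mount point and passphrase are placeholders, and you can also simply right-click the volume in the Finder and choose Encrypt.

    # find the disk / volume identifier of the iSCSI LUN
    diskutil list

    # convert the volume to encrypted CoreStorage (passphrase is a placeholder)
    diskutil coreStorage convert /Volumes/iSCSI-Volume -passphrase YourSecretPassphrase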

    Now there is one caveat. Mac OS X does not natively support iSCSI; it has no native iSCSI initiator (client). In contrast, Windows 7 does have a very good iSCSI initiator. I think it's a shame, but Mac users must buy an iSCSI initiator from either:

    1. GlobalSAN for $89
    2. Atto for $195

    I've only used the GlobalSAN iSCSI initiator and it seems to work fine. I believe that $89 is well worth the money for having all your storage tucked away from your home office or living room.

    Another caveat is that iSCSI requires reliable networking, otherwise there is a risk of data corruption, so I would not advise using iSCSI over a wireless network connection, although it is possible.

    For the most popular NAS vendors, I've added some tutorials on how to set up iSCSI.

    1. Synology
    2. QNAP
    3. Thecus

    P.S. The GlobalSAN iSCSI initiator does support sleep and hibernate, contrary to what some tutorials may tell you.

  2. Improving iSCSI Native Multi Pathing Round Robin Performance

    Mon 27 May 2013

    10 Gb Ethernet is still quite expensive. You not only need to buy appropriate NICs, you must also upgrade your network hardware. You may even need to replace existing fiber optic cabling if it's not rated for 10 Gbit.

    So I decided to stick with plain old 1 Gbit iSCSI over copper for our backup SAN. After some research I went for the HP MSA P2000 G3 with dual 1 Gbit iSCSI controllers.

    Each controller has 4 x 1 Gbit ports, so the box has a total of 8 Gigabit ports. This is ideal for redundancy, performance and cost. This relatively cheap SAN does support active/active mode, so both controllers can share the I/O load.

    The problem with storage is that a single 1 Gbit channel is just not going to cut it when you need to perform bandwidth intensive tasks, such as moving VMs between datastores (within VMware).

    Fortunately, iSCSI Multi Pathing allows you to do what is basically RAID 0 across multiple network cards, combining their performance. So four 1 Gbit NICs can provide you with 4 Gbit of actual storage throughput.
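
    For reference, binding multiple VMkernel ports to the software iSCSI adapter can be done from the CLI on ESXi 5 roughly like this (the adapter and vmk names are just examples; the regular tutorials cover the vSwitch and port group setup):

    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk3
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk4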

    The trick is not only to configure iSCSI Multi Pathing using the regular tutorials, but also to enable the Round Robin setting on each datastore or each raw device mapping.
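
    Setting the path selection policy to Round Robin can also be done per device from the CLI, roughly like this, where $DEV holds the naa identifier of the device (most tutorials do this through the vSphere client instead):

    esxcli storage nmp device set -d $DEV --psp VMW_PSP_RR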

    So I did all this and still got less than 1 Gbit/s of performance. Fortunately, there is just one little trick to get to the performance you might expect.

    I found this at multiple locations but the explanation on Justin's IT Blog is best.

    By default, VMware issues 1000 I/Os down a path (NIC) before switching (Round Robin) to the next one. This really hampers performance. You need to set this value to 1.

    esxcli storage nmp psp roundrobin deviceconfig set -d $DEV --iops 1 --type iops
    

    This configuration tweak is recommended by HP; see page 28 of the linked PDF.

    Once I configured all iSCSI paths to this setting, I got 350 MB/s of sequential write performance from a single VM to the datastore. That's decent enough for me.

    How do you do this? It's a simple one-liner that sets the iops value to 1, but I'm so lazy that I don't want to copy/paste device names and run the command by hand each time.

    I used a simple CLI script (VMware 5) to configure this setting for all devices. SSH to the host and then run this script:

    # Set the Round Robin IOPS value to 1 for every SAN device (naa.*) on this host
    for x in `esxcli storage nmp device list | grep ^naa`
    do
        echo "Configuring Round Robin iops value for device $x"
        esxcli storage nmp psp roundrobin deviceconfig set -d $x --iops 1 --type iops
    done
    

    This is not the exact script I used and I still have to verify this code, but it basically just configures this value for all storage devices. Devices that don't support this setting will raise an error message that can be ignored (this is expected if the VMware host also has some local SAS or SATA storage).

    The next step is to check if this setting is permanent and survives a host reboot.
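
    One way to check the current value for a device, for instance after a reboot, is to query the Round Robin configuration directly (again with $DEV as the naa identifier):

    esxcli storage nmp psp roundrobin deviceconfig get -d $DEV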

    Anyway, I verified the performance using a Linux VM and just writing a simple test file:

    dd if=/dev/zero of=/storage/test.bin bs=1M count=30000
    

    To see the Multi Pathing + Round Robin in action, run esxtop at the CLI and then press N. You will notice that with four network cards, VMware will use all four available channels.

    All this is to say that plain old 1 Gbit iSCSI can still be fast. But I believe that 10 Gbit Ethernet probably provides better latency; whether that is really an issue for your environment is something I can't tell.

    Changing the IOPS parameter to 1 also seems to improve random I/O performance, according to the table in Justin's post.

    Still, although 1 Gbit iSCSI is cheap, it may be more difficult to get the performance levels you need. If you have time but little money, it may be the way to go. However, if time is not on your side and money isn't the biggest problem, I would definitely investigate the price difference with Fibre Channel or 10 Gbit iSCSI.
