Articles in the ZFS category

  1. Why I Do Use ZFS as a File System for My NAS

    January 29, 2015

    In February 2011, I posted an article explaining why I did not use ZFS as the file system for my 18 TB NAS.

    I believe the arguments in that article were relevant at the time, but much has changed since then, and I no longer consider that article relevant.

    My stance on ZFS is in the context of a home NAS build.

    I really recommend giving ZFS serious consideration if you are building your own NAS. It's probably the best file system you can use if you care about data integrity.

    ZFS may only be available for non-Windows operating systems, but there are quite a few easy-to-use NAS distributions, such as FreeNAS and NAS4Free, that turn your hardware into a full-featured home NAS box that can be managed through your web browser.

    I also want to add this: I don't think it's wrong or particularly risky if you, as a home NAS builder, decide not to use ZFS and select a 'legacy' solution if that better suits your needs. I think proponents of ZFS tend to overstate the risks that ZFS mitigates, perhaps to promote ZFS. Those risks are real, but how much they matter depends on your circumstances. So you decide.

    May 2016: I have also written a separate article on how I feel about using ZFS for DIY home NAS builds.

    Ars Technica has published an article comparing FreeNAS and NAS4Free.

    If you are quite familiar with FreeBSD or Linux, I recommend this ZFS how-to article from Ars Technica. It offers a very nice introduction to ZFS and explains terms like 'pool' and 'vdev'.

    If you are planning on using ZFS for your own home NAS, I would recommend reading the following articles:

    My historical reasons for not using ZFS at the time

    When I started with my 18 TB NAS in 2009, there was no such thing as ZFS on Linux. ZFS was only available in a stable version on OpenSolaris, and we all know what happened to OpenSolaris (it's gone).

    So you might ask: "Why not use ZFS on FreeBSD then?". Good question, but it was bad timing:

    The FreeBSD implementation of ZFS only became stable in January 2010, six months after I built my NAS (summer 2009). So FreeBSD was not an option at that time.
    

    Another objection against ZFS is that you cannot expand your storage by adding single drives and growing the array as your data set grows.

    A ZFS pool consists of one or more VDEVs, and a VDEV is essentially a traditional RAID array. You expand storage capacity by growing the pool, not the VDEVs: a VDEV itself cannot be expanded, you can only add VDEVs to a pool.
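    To illustrate (the pool name 'tank' and the device names below are made up): growing a pool means adding a whole new VDEV to it, for example a second RAIDZ2 VDEV next to an existing one.

    # add a second 6-drive RAIDZ2 VDEV to the existing pool 'tank'
    zpool add tank raidz2 sdh sdi sdj sdk sdl sdm
    # the pool now stripes data across both VDEVs
    zpool status tank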

    So ZFS either forces you to invest upfront in storage you don't need yet, or it forces you to invest more later on because you waste quite a few extra drives on parity. For example, if you start with a 6-drive RAID6 (RAIDZ2) configuration, you will probably expand with another 6 drives. The pool then has 4 parity drives out of 12 in total (33% lost to parity). Investing upfront in 10 drives instead of 6 would have been more efficient, because you only lose 2 drives out of 10 to parity (20% loss).

    So at the time, I found it reasonable to stick with what I knew: Linux & MDADM.

    But my new 71 TiB NAS is based on ZFS.

    I wrote an article about my worry that ZFS might die with FreeBSD as its sole backing, but fortunately, I've been proven very, very wrong.

    ZFS is now supported on FreeBSD and Linux. Despite some licensing issues that prevent ZFS from being integrated into the Linux kernel itself, it can still be used as a regular kernel module and it works perfectly.

    There is even an open-source consortium, OpenZFS, that brings together the developers working on ZFS for the different operating systems that support it.

    ZFS is here to stay for a very long time.

    Tagged as : ZFS
  2. The ZFS Event Daemon on Linux

    August 29, 2014

    If something goes wrong with my zpool, I'd like to be notified by email. On Linux using MDADM, the MDADM daemon took care of that.

    With the release of ZoL 0.6.3, a brand new 'ZFS Event Daemon' or ZED was introduced.

    I could not find much information about it, so consider this article my notes on this new service.

    If you want to receive alerts, there is only one requirement: you must set up an MTA on your machine, which is outside the scope of this article.

    When you install ZoL, the ZED daemon is installed automatically and will start on boot.

    The configuration file for ZED can be found here: /etc/zfs/zed.d/zed.rc. Just uncomment the "ZED_EMAIL=" section and fill out your email address. Don't forget to restart the service.
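    For reference, the relevant part of zed.rc would end up looking roughly like this (the email address is a placeholder, and the exact service name or restart command may differ per distribution):

    # /etc/zfs/zed.d/zed.rc (excerpt)
    ZED_EMAIL="root@example.com"

    # restart ZED afterwards, for example:
    service zed restart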

    ZED seems to hook into the zpool event log that is kept in the kernel and monitors these events in real-time.

    You can see those events yourself:

    root@debian:/etc/zfs/zed.d# zpool events
    TIME                           CLASS
    Aug 29 2014 16:53:01.872269662 resource.fs.zfs.statechange
    Aug 29 2014 16:53:01.873291940 resource.fs.zfs.statechange
    Aug 29 2014 16:53:01.962528911 ereport.fs.zfs.config.sync
    Aug 29 2014 16:58:40.662619739 ereport.fs.zfs.scrub.start
    Aug 29 2014 16:58:40.670865689 ereport.fs.zfs.checksum
    Aug 29 2014 16:58:40.671888655 ereport.fs.zfs.checksum
    Aug 29 2014 16:58:40.671905612 ereport.fs.zfs.checksum
    ...
    
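    The listing above only shows a one-line summary per event. zpool events also has a verbose mode and a follow mode, which are handy when figuring out what ZED reacts to:

    # show full details for each event
    zpool events -v
    # keep printing new events as they occur, similar to 'tail -f'
    zpool events -f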

    You can see that a scrub was started and that checksum errors were detected. A few seconds later I received an email:

    The first email:

    A ZFS checksum error has been detected:
    
      eid: 5
     host: debian
     time: 2014-08-29 16:58:40+0200
     pool: storage
     vdev: disk:/dev/sdc1
    

    And soon thereafter:

    A ZFS pool has finished scrubbing:
    
      eid: 908
     host: debian
     time: 2014-08-29 16:58:51+0200
     pool: storage
    state: ONLINE
    status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
    action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
      see: http://zfsonlinux.org/msg/ZFS-8000-9P
     scan: scrub repaired 100M in 0h0m with 0 errors on Fri Aug 29 16:58:51 2014
    config:
    
        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0   903
    
    errors: No known data errors
    

    Awesome!

    The ZED daemon executes scripts based on the event class, so it can do more than just send email: you can customise the actions taken for each event class. The event class is what you see in the CLASS column of the zpool events output.
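    A minimal sketch of what a custom ZED script (a 'zedlet') could look like. The 'all-' filename prefix and the ZEVENT_* variables below are based on the sample scripts shipped in the zed.d directory; the script itself and its name are hypothetical, so verify these conventions against your ZoL version. The script must be executable.

    #!/bin/sh
    # Hypothetical zedlet: /etc/zfs/zed.d/all-syslog-custom.sh
    # Scripts whose name starts with 'all-' appear to run for every event class;
    # ZED exports event properties as ZEVENT_* environment variables.
    logger -t zed-custom "eid=${ZEVENT_EID} class=${ZEVENT_CLASS} pool=${ZEVENT_POOL}"
    exit 0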

    One of the more interesting features is the automatic replacement of a defective drive with a hot spare, so that full fault tolerance is restored as soon as possible.

    I've not been able to get this to work. The ZED scripts would not automatically replace a failed/faulted drive.

    There seem to be some known issues. The fixes seem to be in a pending pull request.

    To make sure I would actually get alerted, I simulated my production ZED configuration in a VM.

    I simulated a drive failure by corrupting one of the drives with dd, but the result was that I received one email for every checksum error. With thousands of checksum errors, I had to clear 1000+ emails from my inbox.
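    For reference, such a dd-based corruption test can look roughly like this; only ever do this in a throw-away test VM. The pool and device names match the zpool status output shown earlier, and the offset and size are arbitrary.

    # deliberately overwrite ~100 MB of one mirror member (test VM only!)
    dd if=/dev/urandom of=/dev/sdc bs=1M seek=100 count=100
    # a scrub then detects (and, thanks to the mirror, repairs) the damage
    zpool scrub storage
    zpool status storage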

    The cause of the email flood turned out to be that the following option, which is commented out by default, was not enabled:

    ZED_EMAIL_INTERVAL_SECS="3600"
    

    This option implements a cool-down period: an event is reported once and subsequent events of the same kind are suppressed until the interval expires.

    It would be best if this option were enabled by default.

    The ZED authors acknowledge that ZED is a bit rough around the edges, but it sends out alerts consistently and that's what I was looking for, so I'm happy.

    Tagged as : ZFS event daemon
  3. Installation of ZFS on Linux Hangs on Debian Wheezy

    August 29, 2014

    After a fresh net-install of Debian Wheezy, I was unable to compile the ZFS on Linux kernel module. I had installed build-essential (apt-get install build-essential), but that wasn't enough.

    The apt-get install debian-zfs command would just hang.

    I noticed a 'configure' process and killed it; after a few seconds, the installer continued, spewing out this error:

    Building initial module for 3.2.0-4-amd64
    Error! Bad return status for module build on kernel: 3.2.0-4-amd64 (x86_64)
    Consult /var/lib/dkms/zfs/0.6.3/build/make.log for more information.
    

    So I ran ./configure manually inside the mentioned directory and then I got this error:

    checking for zlib.h... no
    configure: error: in `/var/lib/dkms/zfs/0.6.3/build':
    configure: error: 
        *** zlib.h missing, zlib-devel package required
    See `config.log' for more details
    

    So I ran apt-get install zlib1g-dev, but no luck:

    checking for uuid/uuid.h... no
    configure: error: in `/var/lib/dkms/zfs/0.6.3/build':
    configure: error: 
        *** uuid/uuid.h missing, libuuid-devel package required
    See `config.log' for more details
    

    I searched a bit online and found this link, which listed some additional packages that may be missing. I installed them all with:

    apt-get install zlib1g-dev uuid-dev libblkid-dev libselinux-dev parted lsscsi wget
    

    This time ./configure ran fine, and I could manually build and install the kernel module (make install) and import my existing pool.
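    For reference, the manual steps boil down to something like this (the build directory is the one from the DKMS error above; 'mypool' is a placeholder for your own pool name):

    cd /var/lib/dkms/zfs/0.6.3/build
    ./configure
    make
    make install
    # load the module and import the existing pool
    modprobe zfs
    zpool import mypool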

    Tagged as : ZFS Wheezy
