Many people have asked me why I do not use ZFS for my NAS storage box. This is a good question, and I have multiple reasons why I do not use ZFS and probably never will.
Update, October 18, 2012: I decided to test ZFS on my download server and wrote a blog post about that.
The demise of Solaris
ZFS was invented by Sun for the Solaris operating system. When I was building my NAS, the only full-featured, production-ready implementation of ZFS was in Sun Solaris, and the only usable version of Solaris was OpenSolaris. I dismissed OpenSolaris because of its lack of hardware support and its small user base. That small user base is very important to me: more users means more testing and more support.
The FreeBSD implementation of ZFS only became stable in January 2010, six months after I built my NAS (summer 2009). So FreeBSD was not an option at that time.
I am glad that I didn't go for OpenSolaris, as Sun's new owner Oracle killed this operating system in August 2010. Although ZFS is open source software, I think it is effectively closed source already. The only open source version was distributed through OpenSolaris, and that project is now dead. Oracle can close the source of ZFS simply by not publishing the code of new features and updates, so only their proprietary, closed source Solaris platform will receive them. I must say that I have no proof of this, but Oracle seems to have no interest in open source software and at times almost seems hostile towards it.
FreeBSD and ZFS
So I built my NAS when ZFS was basically not around yet. But with FreeBSD as of today, you can build a NAS based on ZFS, right? Sure, you can do that. I had no choice back then, but you do. To be honest, though, I still would not use ZFS. As of March 1st, 2011, I would still go with Linux software RAID and XFS.
My reasons may not be that compelling; I just offer them for what they're worth. It's up to you to decide.
I sincerely respect the FreeBSD community and platform, but it is not for me. Maybe I just have much more experience with Debian Linux and don't like changing platforms. I find the Debian installation process much more user friendly, and I see year-over-year improvement in Debian that I don't see in the FreeBSD 8.2 release. Furthermore, I'm just thrilled with the really big APT repository. Last, I cannot foresee my future requirements, but I'm sure those requirements have a higher chance of being supported on Linux than on BSD.
Furthermore, although FreeBSD has a community, it is relatively small. Resources on Debian and Ubuntu are abundant. I consider Linux a safer bet, also regarding hardware support. My NAS must be simple to build and rock stable. I don't want a day job just getting my NAS to work and keeping it running.
If you are experienced with FreeBSD, by all means, build a ZFS setup if you want. If you have to learn either BSD or Linux, I consider knowledge of Linux more valuable in the long run.
ZFS is a hype
This is the part where people may strongly disagree with me. I admire ZFS, but I consider it total overkill for home use. I have seen many people talk about ZFS the way Apple users talk about Apple products. It is a hype. Don't get me wrong: as a long-time Mac user, I'm also mocking myself here. I get the impression that ZFS is regarded as the second coming of Jesus Christ. It solves problems that I didn't know existed in the first place. The only thing it can't do is beat Chuck Norris. But it does vacuum your house if you ask it to.
As a side note, one of the things I do not like about ZFS is the terminology. It is just RAID 0, RAID 1, RAID 5 or RAID 6, but no, the ZFS people had to use different, more cool-sounding terms like RAID-Z. It is basically the same thing.
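To illustrate the naming point, here is a rough side-by-side sketch; the pool name and device names are hypothetical examples, not from my setup:

```shell
# Traditional naming with mdadm: RAID 5 (single parity) over four disks.
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]

# The roughly equivalent ZFS layout is called RAID-Z (also single parity):
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde

# RAID-Z2 corresponds to RAID 6 (double parity), and a ZFS mirror to RAID 1:
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
```

Same parity schemes, different words.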
Okay, now back to the point: nobody at home needs ZFS. You may argue that nobody needs 18 TB of storage space at home either, but that's another story. Running ZFS means using FreeBSD, or an out-of-the-box NAS solution based on FreeBSD; there aren't any other relevant options.
Now, let's take a look at the requirements of most NAS builders. They want as much storage as possible at the lowest possible price. That's about it. Many people want to add disk drives as their demand for storage capacity increases. So they buy a solution with a capacity for, say, 10 drives, start out with 4, and add disks when they need them.
Linux allows you to 'grow' or 'expand' an array, just like most hardware RAID solutions. As far as I know, this feature is still not available in ZFS: you cannot add a single disk to an existing RAID-Z vdev. Maybe this feature is not relevant in the enterprise world, but it is for most people who actually have to think about how they spend their money.
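For those who haven't done it, growing an mdadm array looks roughly like this. A sketch with example device names; note that the reshape runs in the background and can take many hours on large disks:

```shell
# Add the new disk to the array; it shows up as a spare at first.
mdadm --add /dev/md0 /dev/sdf

# Reshape the array to use it as an active member (here: 5 -> 6 drives).
mdadm --grow /dev/md0 --raid-devices=6

# Watch the reshape progress.
cat /proc/mdstat
```

After the reshape finishes, the file system on top still has to be grown separately to use the new space.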
Furthermore, I don't understand why I can run any RAID array with decent performance on maybe 512 MB of RAM, while ZFS would just crash with so little memory installed. You seem to need at least 2 GB to keep your system from crashing, and more is recommended if you want it to survive high load. I really can't wrap my mind around this. Honestly, I think this is insane.
ZFS does great things. Management is easy. Many features, like snapshots, are cool. But most features are just not required for a home setup. ZFS seems to solve a lot of 'scares' that I've only heard about since ZFS came along, like the RAID 5/6 write hole. Where others just hook up a UPS in the first place (if you don't use a UPS on your NAS, you might as well try your luck running RAID 0), ZFS found a solution that prevents data loss when power fails. The most interesting feature to me, though, is that ZFS checksums all data and detects corruption. That sounds useful, but how high are the chances that you actually need it?
If ZFS were available under Linux as a native option instead of through FUSE, I would probably consider using it, provided I knew in advance that I would never want to expand or grow my array. But I am pessimistic about this scenario. It is not in Oracle's interest to change the license on ZFS to allow Linux to incorporate support for it in the kernel.
To build my 20 disk RAID array, I had to puzzle with my drives to keep all my data while migrating to the new system. Some of the 20 disks came from my old NAS, so I had to add disks and grow the array repeatedly, which I couldn't have done with ZFS.
Why I chose to build this setup
The array is just a single 20 disk RAID 6 volume created with a single mdadm command. The second command I issued to make my array operational formatted this new 'virtual' disk with XFS, which takes just seconds. A UPS protects the system against power failure, and I have been happy with it for 1.5 years now. Never had any problems. Never had a disk failure... A single RAID 6 array is simple and fast. XFS is old but reliable. My whole setup is just that: extremely simple. I just love simple.
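Those two commands look roughly like this; a sketch with example device names and an example mount point, not my literal configuration:

```shell
# Create a single RAID 6 array from twenty disks (example device names).
mdadm --create /dev/md0 --level=6 --raid-devices=20 /dev/sd[b-u]

# Format the resulting 'virtual' disk with XFS; this takes just seconds,
# even on a huge volume, because XFS does not zero out the whole disk.
mkfs.xfs /dev/md0

# Mount it and you're done.
mount /dev/md0 /storage
```

That's the entire storage stack: no LVM, no extra layers.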
My array does not use LVM, so I cannot create snapshots or anything like that. But I don't need it. I just want so much storage that I don't have to think about it. And I think most people just want a storage share with lots of space. In that case, you don't need LVM; just an array with a file system on top of it. If you can grow the array and the file system, you're set for the future. Speaking of the future: please note that on Linux, XFS is the only file system capable of addressing more than 16 TB of data. EXT4 is still limited to 16 TB.
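Growing the file system after growing the array is a one-liner with XFS. A sketch; the mount point is an example:

```shell
# After the mdadm reshape has finished, grow XFS into the new space.
# Note: xfs_growfs takes the mount point, not the device, and the
# file system must be mounted while you grow it.
xfs_growfs /storage
```

With no size argument, xfs_growfs simply grows the file system to fill the underlying device.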
For the future, my hope is that BTRFS will become a viable modern alternative to ZFS.