Should I Use ZFS for My Home NAS?

Mon 09 May 2016 Category: Storage

When building your own home NAS, you may be advised to use ZFS for the file system. I would only recommend using ZFS if you understand it well and you accept its limitations. It must make sense for your particular situation and your skill level.

I would like to make the case that, for a lot of people, ZFS offers little benefit given their circumstances, and that it is totally reasonable for them to select a different platform.

I wrote this article because I sense that people not well-versed in Linux or FreeBSD may sometimes feel pressured1 into building a NAS that they can't handle when problems arise. In that case, opting for ZFS could cause more trouble than it would solve.

Why ZFS?

The main reason people advise ZFS is that it offers better protection against data corruption than other file systems. It has extra defences built in that protect your data in a manner that other free file systems cannot2.

The fact that ZFS is better at protecting your data against corruption isn't that important for most home NAS builders, because the risks ZFS protects against are very small. I would argue that at the small scale at which home users operate, it would be reasonable to just accept the risks ZFS protects against.

I must say that software like FreeNAS does make it very simple to set up a ZFS-based NAS. If you are able to drop to a console and hold your own if problems arise, and if you accept the limitations of ZFS, it may be a reasonable option.

In this article, I'm not arguing against ZFS itself; it's an amazing file system with interesting features, but it may not be the best option for your particular situation. I just want to argue that it is reasonable not to use ZFS for your home NAS build. Only use it if it fits your needs.

Silent data corruption

From the perspective of protecting the integrity of your data or preventing data corruption, our computers do an excellent job, most of the time. This is because almost every component in any computer has been built with resiliency in mind.

They deploy checksums and parity when data is stored or transmitted as a means to ensure data integrity3.

For instance, hard drives store extra redundant information alongside your data in order to verify data integrity. They can also use this redundant data to recover from data corruption, although only to some extent. If a hard drive can't read some portion of the disk anymore, it will report an error. So this is not silent data corruption; this is just a 'bad sector' or an Unrecoverable Read Error (URE).

Silent data corruption occurs when data corruption is not detected by the hard drive. The hard drive will thus return corrupt data and will not sound any alarm. This can corrupt your files. The chance of this happening at home is extremely small.
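To make that concrete, here is a minimal, purely illustrative sketch of the end-to-end checksum idea that ZFS applies to every block. This is not ZFS's actual mechanism or on-disk format, and the function names are made up: the point is simply that a checksum stored separately from the data and verified on every read catches corrupt data that the drive returns without any error.

```python
import hashlib

def store_block(data: bytes) -> tuple[bytes, str]:
    # Keep a checksum of the data separate from the data itself;
    # ZFS keeps block checksums apart from the blocks they describe.
    return data, hashlib.sha256(data).hexdigest()

def read_block(data_from_disk: bytes, stored_checksum: str) -> bytes:
    # Recompute the checksum on every read. A mismatch means the drive
    # returned corrupt data without reporting an error: silent corruption.
    if hashlib.sha256(data_from_disk).hexdigest() != stored_checksum:
        raise IOError("silent data corruption detected")
    return data_from_disk

block, checksum = store_block(b"important file contents")
corrupted = bytes([block[0] ^ 0x01]) + block[1:]   # a single flipped bit

try:
    read_block(corrupted, checksum)
except IOError as e:
    print(e)   # the corruption is detected instead of silently returned
```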

It is way more likely that a hard drive just fails completely or develops regular 'bad sectors'. Those incidents can be handled by any kind of RAID solution; you don't need ZFS to handle these events.

I've been looking around for a real study on the prevalence of silent data corruption. I found a study from 2008. Honestly, I'm not sure what to make of it, because I'm not sure how the risks portrayed in this study translate to a real-life risk for home users.

The study talks about 'checksum errors' and 'identity discrepancies'. Checksum errors are quite prevalent, but they are handled by the storage / RAID subsystem and are of no real concern as far as I can tell. It seems that the problem lies with 'identity discrepancies'. Such errors would cause silent data corruption.

The 'identity discrepancies' are events where - for example - a sector ends up at the wrong spot on the drive, so the sector itself is fine, but the file is still corrupt. That would be a true example of silent data corruption. And ZFS would protect against this risk.

Of the 1.53 million drives, only 365 drives witnessed an 'identity discrepancy' error. I find it difficult to determine how many of those 365 drives are SATA drives. Since a total of 358,000 SATA drives were used in this study, even if all 365 disks were SATA drives (which is not true), the risk would be 0.102%, or one in 980 drives, over a period of 17 months. Remember that this is a worst-case scenario.

To me, this risk seems rather small. If I haven't messed up the statistics, you would need about a thousand hard drives running for 17 months to see a single instance of silent data corruption. So unless you're operating at that scale, I would say that silent data corruption is indeed not a risk a DIY home user should worry about.
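For what it's worth, here is that back-of-the-envelope arithmetic as a small Python snippet, using the study's own numbers and the same worst-case assumption that every affected drive was a SATA drive:

```python
# Worst case: assume all drives with an 'identity discrepancy' were SATA drives.
identity_discrepancies = 365
sata_drives = 358_000
study_period_months = 17

risk_per_drive = identity_discrepancies / sata_drives
print(f"{risk_per_drive:.3%} per drive over {study_period_months} months")  # ~0.102%
print(f"roughly 1 in {sata_drives // identity_discrepancies} drives")       # ~1 in 980
```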

ZFS and Unrecoverable Read Errors (UREs)

If you are building your own NAS, you may want to deploy RAID to protect against the impact of one or two drives failing.

For instance, using RAID 5 (RAIDZ) allows your NAS to survive a single drive failure. RAID 6 (RAIDZ2) would survive two drive failures.

Let's take the example of RAID 5: your NAS can survive a single drive failure, and one drive fails. At some point you replace the failed drive and the RAID array starts the rebuild process. During this rebuild process, the array is not protected against drive failure, so no additional drives should fail.

If a second drive encounters a bad sector (or what people today call an Unrecoverable Read Error) during this rebuild process, in most cases the RAID solution will give up on the entire array. A single 'bad sector' may have the same impact as a second drive failure.

This is where ZFS shines, because ZFS is a file system and RAID solution in one.

Instead of failing the whole drive, ZFS is capable of keeping the affected drive online and only marking the affected files as 'bad'. So clearly this is a benefit over other RAID solutions, which are not file(system)-aware and just have to give up.

Although this scenario is often cited by people recommending ZFS, I would argue that the chance that ZFS will save you from this risk is rather small, because the risk itself is very small.

Just to be clear, a URE is the opposite of silent data corruption. It is the hard drive reporting a read error back to the RAID solution or operating system. So this is a separate topic.

I'm aware of the (in)famous article declaring RAID 5 dead. In this article, the URE specification of hard drives is used to prove that it's quite likely to encounter a second drive failure due to a URE during an array rebuild.

It seems though that the one URE in 10^14 bits (an error every 12.5 TB of data read) is a worst-case specification. In real life, drives are way more reliable than this specification. So in practice, the risks aren't as high as this article portrays them to be.
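To show why the spec sheet number looks so scary and how a better real-world URE rate changes the picture, here is a rough sketch. It assumes independent bit errors (a simplification) and a hypothetical 4 TB drive that is read in full during a rebuild; the 10x and 100x 'better than spec' factors are illustrative assumptions on my part, not measured values.

```python
import math

def rebuild_ure_probability(drive_tb: float, ure_rate_per_bit: float) -> float:
    """Chance of at least one URE while reading an entire drive."""
    bits_read = drive_tb * 1e12 * 8                     # TB -> bits
    # P(at least one URE) = 1 - (1 - rate)^bits, computed with log1p/expm1
    return -math.expm1(bits_read * math.log1p(-ure_rate_per_bit))

spec_rate = 1e-14   # one URE per 10^14 bits, i.e. one per ~12.5 TB read

for factor in (1, 10, 100):
    p = rebuild_ure_probability(drive_tb=4, ure_rate_per_bit=spec_rate / factor)
    print(f"{factor:>3}x better than spec: {p:5.1%} chance of hitting a URE")
```

At the spec-sheet rate the rebuild looks genuinely risky (roughly a one-in-four chance for this hypothetical drive), but if drives are one or two orders of magnitude better in practice, the risk shrinks to a few percent or less, which is the point being made above.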

The hidden cost of ZFS

For home users, it would be very convenient to just add extra hard drives over time as the capacity demand increases. With ZFS, this is not possible in a cost-efficient manner. I've written an article about this topic where this drawback is discussed in more detail.

Many other RAID solutions support online capacity expansion, where you can just add drives as you see fit. For instance, Linux MDADM supports 'growing' an array with one or more drives at once.

Especially for home users, it's important to take this limitation of ZFS into account.

Conclusion

For home usage, building your own ZFS-based NAS is a cool project that can be educational and fun, but there is no real necessity to use ZFS over any other solution.

In many cases, I doubt that building your own NAS gives you any significant edge over a pre-built Synology, QNAP or Netgear NAS, unless you have specific needs that these products don't cover.

These products may be a bit more expensive in an absolute sense as compared to a DIY NAS build, but if you factor in your own free time as a cost, they are probably hard to beat.

That said, if you regard building your own NAS as a fun hobby project and you are willing to spend some time on it, it's perfectly fine to run with ZFS, provided you accept the 'hidden cost'.

However, I think it's perfectly reasonable to build your NAS based on Windows, with:

  • Storage Spaces
  • Hardware RAID
  • SnapRAID
  • FlexRAID

Or you may want to use Linux with:

  • MDADM kernel software RAID
  • Hardware RAID
  • unRAID
  • SnapRAID
  • FlexRAID

The SnapRAID vendor provides a comparison of the various products. Please note that it's a vendor comparing its own product against competitors. I have never used SnapRAID, unRAID or FlexRAID.

Would you still use ZFS?

I've built a 71 TiB NAS based on 24 drives using ZFS on Linux.

In my case, I would probably keep on using ZFS. Here are my reasons:

  1. It's a hobby project I'm happy to spend some time on
  2. I'm totally OK with Linux and know my way around
  3. I buy all my storage upfront so the 'hidden cost' is no issue for me
  4. In my case, I would probably now use RAIDZ3 with the large_blocks feature enabled to regain some space. Triple parity is unique to ZFS as far as I know, and with my 24-drive setup, I think that would add a bit of extra safety.

So in my particular situation, ZFS offers little to no drawbacks and I see the extra data integrity protection as a nice bonus.


  1. People may create an impression that not using ZFS is incredibly dangerous and that you're foolish if you don't use ZFS for your home NAS. I strongly disagree with that idea. 

  2. BTRFS's implementation of RAID 5/6 is not considered production-ready at the time this article was written. 

  3. One notable exception is the lack of ECC memory in most laptop/desktop computers. 
