
+++
date = "2020-02-28"
title = "Building a DIY Home Server with FreeNAS"
tags = ["freenas", "nas", "storage", "homelab", "plex", "zfs"]
description = "How feasible is it to build a home storage server yourself?"
+++

This post is almost a year overdue. I think it might still be relevant for people looking to start a NAS project or to upgrade from a consumer-grade NAS like a QNAP or Synology.

This is by no means a definitive guide on how to build a reliable network attached storage server. I made choices relevant to my use case; yours might be different.

FreeNAS Screenshot

## Some history

Up until May 2019 I stored my photos, old documents and other things on a single 3TB Western Digital Green drive in my Hackintosh.

Backups were already covered by Backblaze, which was great because it gave me unlimited backup storage for a fixed price.

But I was running into some limitations of this setup.

### File size and volume of digital media

Every new phone has a better camera, and I like to shoot in the best quality available. With my current OnePlus 6 that means 3456×4608px (roughly 4-8MB) per photo. Not to mention 4K video at 60fps...

Having kids also means taking a lot more photos and videos.

Before, I would automatically back up media on my phone to Dropbox, because that was convenient. But with a 5GB limit I was often reminded to either upgrade or clean up some files, which meant manually moving older photos and videos from Dropbox to my trusty 3TB hard drive.

### Restoring backups sucks

I have not kept track of how many disks have failed on me, but it has happened on a few occasions. Backblaze to the rescue! However, restoring 1TB+ of data is not a fun activity.

The reality is, disks fail. Especially if you buy the cheapest ones available and don't take care of them.

### Streaming video

I've always been a fan of Plex. With a collection of over 200 DVDs, there's plenty to watch, but optical drives are disappearing all around. Plex is a great solution, but it works best if your media are available when you need them. Having to be at home and turn on your computer to watch something is a nuisance.

## The plan

So, after much consideration, I decided I needed a NAS. Many of my friends have one and they seem happy with them. Of course, I couldn't just run to a local electronics store and buy any storage server. First I had to write down what I wanted out of this system.

- Store 1TB+ of data safely
  - Redundancy to cope with disk failure
  - Extendable in the future
  - Write/read performance should be okay, but not a priority (I'm not editing 8K video on this thing)
  - Probably needs a UPS for safe shutdown in the case of power loss
  - Share data/disks over Samba/AFP/NFS; Time Machine support would be nice
- It needs to be always-on
  - Energy efficiency is important
  - It needs to be silent; it will probably sit in the study
- Needs to run Nextcloud and Plex
  - Needs a companion mobile app to auto-backup photos and videos
  - Needs sufficient memory
  - Needs sufficient CPU power for transcoding 1, maybe 2 simultaneous streams
  - Room to grow to run more apps or VMs
- Needs to be future proof
  - More storage
  - More memory
  - Be able to use this as a desktop computer if this NAS thing really sucks

## What NAS solution fits best

What I had to figure out was which NAS solutions were available to me, and which one was most suitable for my use case.

Consumer-grade NAS systems like those from QNAP or Synology are quite popular. They do come with a few major drawbacks for me. They are not (or only minimally) upgradeable. A simple Celeron CPU and 8GB of memory will probably not suffice. I'm also stuck with a custom operating system, something I try to avoid like the plague.

At this point, my best option in this category was the Synology DS918+, retailing locally for €550. With room for four drives, it already felt limited; I want it to be expandable. The Intel Celeron J3455 (quad core @ 1.5GHz) and 4GB RAM seemed especially limiting. And the vendor lock-in is not very appealing.

Next I visited /r/DataHoarder, a nice corner of the internet where people come together to discuss hoarding data - and all the hardware needed to do that. Many here opt for used enterprise hardware from SuperMicro or Dell in 19" rack cases. The hardware can be very powerful, but it comes at a cost too. Space and noise are two considerations. This enterprise hardware is made for use in datacenters where noise is not much of an issue; having to mod a case to be silent would be a major effort. Secondly, a 4U 19" server is quite bulky and not something you can easily fit under your desk.

This left me to find the middle ground: consumer-grade hardware, but with special, preferably open source, software. /r/DataHoarder quickly pointed me towards FreeNAS. "Enterprise-grade features, open source, BSD licensed" grabbed my attention. Based on FreeBSD, FreeNAS leans heavily on ZFS - which sounded really good to me.

At this point I set up a VirtualBox VM on my Mac with four 8GB virtual disks to give FreeNAS a spin. The setup was rather easy (not how I remember installing FreeBSD 5.x years ago). I toyed around with ZFS, setting up a pool and removing, replacing and resilvering disks. It was awesome.
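If you want to try the same, those experiments boil down to a handful of zpool commands. A rough sketch - the device names (ada1 through ada5) are what a FreeBSD VM might use, not necessarily what yours will show:

```sh
# Create a RAID-Z1 pool from four of the virtual disks
zpool create tank raidz1 ada1 ada2 ada3 ada4
zpool status tank

# Simulate a disk failure and resilver onto a spare
zpool offline tank ada2
zpool replace tank ada2 ada5
zpool status tank   # shows the resilver in progress
```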

My mind was made up: I wanted an x86/amd64, FreeNAS-compatible system with 8+ 3.5" drive bays in a small-ish form factor.

## FreeNAS FUD

Here's the deal with FreeNAS. It's maintained by a company named iXsystems, who sell hardware and also offer a more professional version of FreeNAS called TrueNAS. The forums are filled with people who run NASes in professional settings.

If I were to set up a NAS for a company, yes, I would opt for Pro/Enterprise SAS drives, Intel Xeon or AMD Threadripper CPUs and loads of ECC RAM.

However, I'm building a home NAS system with four users (including two pre-schoolers who can't even read yet).

So, although many of the recommendations from helpful people on the internet are true, that does not necessarily mean you're wrong if you do things differently.

## Shopping time!

Time to go shopping! I always try to find the best value for money, so these choices were not always immediately obvious.

### Motherboard / CPU / RAM

When I built my NAS, Intel was still king. If I had to do a rebuild, I'd happily opt for some AMD gear, but alas, here I am. I was happy with the performance of my Intel i5-3570K and I looked for something comparable in a newer generation. The 8th-generation Intel i3 is now quad-core, and the i3-8350K runs at 4GHz and supports a maximum of 64GB of DDR4 RAM.

To go with this, I opted for the Gigabyte Z370M D3H motherboard. It's a decent board in µATX format with 6 SATA ports and 2 NVMe slots. It can accommodate a few expansion cards and has 4 slots for a maximum of 4x 16GB RAM. It's also on the osx86 compatibility list, so if I need to, I could run macOS on this board.

I wanted decent memory (but without all the RGB madness). I started out with 16GB (a single module) of Corsair Vengeance (DDR4 2400MHz).

Many will swear on their mother that you need ECC memory when running ZFS. There is a point to be made for the benefits of ECC memory. But this is a prosumer system. As Matt Ahrens (co-founder of the OpenZFS project) once wrote:

> There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error.
>
> I would simply say: if you love your data, use ECC RAM. Additionally, use a filesystem that checksums your data, such as ZFS.
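For the curious: on ZFS-on-Linux that debug flag is exposed as a kernel module parameter. A minimal sketch, assuming a ZoL system (FreeNAS/FreeBSD exposes its tunables differently, so this is illustrative only):

```sh
# Enable ZFS_DEBUG_MODIFY (0x10 = 16), the flag from Ahrens' quote
echo 16 > /sys/module/zfs/parameters/zfs_flags
cat /sys/module/zfs/parameters/zfs_flags
```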

I did not get a graphics card for this system.

### Enclosure

I didn't want a large 19" rack server. I also had a hard time imagining where I'd put a tall tower system. This led me to investigate some of the more "exotic" enclosures on the market. I was looking for something that could fit 8+ 3.5" drives and a µATX motherboard.

When I found the Fractal Design Node 804 I was sold on it immediately. It's a beautiful case, fits ten (10!) 3.5" drives, has good cooling options, and it fits perfectly on the bottom shelf of my IKEA Ivar shelving unit.

FreeNAS Screenshot

### Disks

I wanted as much storage as possible, but at a reasonable price. I had about 1TB of space in use already, so I would need at least four times that. I found a good deal on four 3TB Western Digital Red drives, which in RAID-Z1 (the ZFS "equivalent" of RAID 5) leaves about 9TB of usable space. That seemed plenty.
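The back-of-the-envelope math, as a sketch (ignoring ZFS overhead and the TB-vs-TiB difference):

```sh
# RAID-Z1 spends one disk's worth of space on parity, so usable
# capacity is roughly (number_of_disks - 1) * disk_size
echo "$(( (4 - 1) * 3 ))TB usable"   # => 9TB usable
```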

## Update 1 - Moar Memories!

I quickly realised that 16GB was fine for now, but since I was running more and more services on this system, and I was looking into running at least a Linux VM to play with Docker, it'd need more memory. An easy upgrade, since I was using only one of the four memory slots.

FreeNAS Screenshot

## Update 2 - Moar Spinners!

More spinners, rust, disks, drives, whatever you want to call them. As it turns out, once you have 9TB of storage at your disposal, it quickly fills up. This is also around the time I discovered shucking.

It turns out that Red drives are kind of special in how they deal with vibrations. Their firmware is also optimized for running 24/7. The thing is, they are rather expensive.

FreeNAS Screenshot

Now that's a whopping 44TB of raw storage in a single pool. The pool consists of two vdevs, each with four drives in RAID-Z1. I can lose one drive in each vdev without issue.
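In zpool terms, the pool grew roughly like this - a sketch, with placeholder device names rather than my actual ones:

```sh
# Original pool: one RAID-Z1 vdev of four 3TB drives
zpool create tank raidz1 da0 da1 da2 da3

# Update 2: add a second RAID-Z1 vdev of four shucked 8TB drives.
# ZFS stripes across both vdevs; each vdev tolerates one failed disk.
zpool add tank raidz1 da4 da5 da6 da7
zpool status tank
```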

FreeNAS Screenshot

(Sorry, I had to remove identifiable drive data so others can't abuse it.)

The external USB drives (My Book and Essential) are much cheaper, and they often contain "white" drives, which are identical in specs to Reds, but meant for "internal use" by Western Digital. Once they were on sale, it was easy to pick up four of those external drives, remove the disks and put them in the NAS.

For those looking to go the shucking route, keep in mind the following:

- You void your warranty by removing the disks from their enclosure. WD might be lenient when you RMA them, but know what you're doing.
- Not all models contain white drives, and some models are known to have many issues (like the 6TB drives from WD). Check out /r/DataHoarder for up-to-date info.
- Be sure to check the entire disk while it's still in its enclosure (see the sketch after this list). You can easily RMA it if you find defects, and you know you're putting something working into your machine.
- White drives use a newer SATA spec, which re-uses a pin to disable power to the drive. If your system or power supply cannot handle that, the disk will not start up. This issue is easily fixed with a piece of tape.
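That pre-shuck check is easy with smartmontools - a sketch, assuming the drive shows up as /dev/sdX (over USB bridges you often need `-d sat`):

```sh
# Run an extended SMART self-test while the drive is still in its enclosure
smartctl -d sat -t long /dev/sdX

# A few hours later: review the result and the health attributes
# (reallocated or pending sectors are a reason to RMA)
smartctl -d sat -a /dev/sdX
```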

## A real HBA card

With only six SATA connectors on board, how did I manage to hook up eight SATA drives? Well, at first I bought the cheapest SATA controller card I could find on AliExpress. It claimed to support four drives, but I could only get it to work with two.

After some research I found a good deal on a used Dell PERC H200 card, which sports the LSI 9211-8i chipset. It has two SAS connectors and, when flashed to IT mode, supports eight SATA drives - and it works flawlessly with FreeNAS.
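The flashing procedure itself is best taken from a dedicated guide, but verifying the result is straightforward. A sketch, assuming LSI's sas2flash utility is available from the FreeNAS shell:

```sh
# Confirm the controller reports IT (initiator-target) firmware
sas2flash -listall

# On FreeBSD/FreeNAS, confirm all eight attached drives are visible
camcontrol devlist
```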

The only problem is that it needs a bit of external cooling, as the card is used to sitting in a well-cooled 19" server.

FreeNAS Screenshot

Yes, I cleaned out the dust, but after running for about 9 months I found the inside rather clean.

## Future upgrades

There are some future upgrades I'm looking at:

- Add four 120mm Noctua fans at the front to optimize air intake for cooling both the SAS controller and the eight spinners. I still have to figure out how to PWM-control four fans.
- Add an additional mirror of two spinners for a separate pool, maybe for local Borg backups of the most important data.
- Add an additional NVMe SSD. I now have a single boot volume, which is fine, but a mirrored boot pool would be more fault tolerant.
- Add a mirror of two 2.5" SSDs to run my VMs from.
- Upgrade the 3TB drives to something bigger (16TB, anyone?)
- Fill up the RAM slots for a total of 64GB

Want to build a NAS yourself or have any questions? Feel free to drop me a line.