Update diy nas blog post

This commit is contained in:
Ariejan de Vroom 2020-08-17 13:19:11 +02:00
parent 3fcc20ea41
commit 5ff11cae9d
Signed by: ariejan
GPG Key ID: AD739154F713697B
6 changed files with 137 additions and 6 deletions

View File

@ -5,6 +5,9 @@ tags = ["freenas", "nas", "storage", "homelab", "plex", "zfs"]
description = "How feasible is it to build a home storage server yourself?"
+++
<span class='important'>Keep reading! I've documented several upgrades to my DIY
NAS at the bottom of this post.</span>
This post is almost a year overdue. I think it might still be relevant for people
looking to start a NAS project or upgrade from a consumer-grade NAS like QNAP or
Synology.
@ -267,18 +270,136 @@ Type|Item
There are some future upgrades I'm looking at.
- ~Add four 120mm Noctua fans at the front to optimize air intake for cooling both the SAS controller
and the eight spinners. Still have to figure out how to PWM control four fans.~
- ~~Add four 120mm Noctua fans at the front to optimize air intake for cooling both the SAS controller
and the eight spinners. Still have to figure out how to PWM control four fans.~~
I've added two front fans on the motherboard side to provide enough airflow for the passively cooled Dell PERC H200. I've
also added another front fan on the hard disk side (at the top) to increase airflow across all eight HDDs. Temperatures
are much more stable now.
- Add an additional mirror of two spinners for a separate pool, maybe for local Borg backups of the most
important data.
- Add an additional NVMe SSD. I now have a single boot volume, which is fine, but a mirror would be more
fault tolerant.
- ~~Add an additional NVMe SSD.~~ My motherboard has two m.2 slots, so I added another (relatively cheap) Gigabyte 128 GB SSD and put that into a ZFS mirror configuration (see the sketch after this list).
- Add a mirror of two 2.5" SSDs to run my VM off.
- Upgrade the 3TBs to something better (16TB, anyone?). Heck, upgrade _all_ drives to 16TB for 128TB of raw storage capacity (96TB usable in the current 2x RAIDZ1 setup).
Additionally, I might opt for a DAS enclosure with 4-8 more HDDs and hook these up through another SAS expander card.
- Fill up the RAM slots for a total of 64GB
- ~~Fill up the RAM slots for a total of 64GB~~
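For reference, turning a single boot device into a mirror is essentially one `zpool attach`. A minimal sketch with hypothetical device names (in practice the FreeNAS web UI also takes care of partitioning the new disk and installing the bootloader):

```bash
# Hypothetical device names; on FreeNAS the boot pool is typically called freenas-boot.
zpool status freenas-boot                 # note the current boot device, e.g. nvd0p2
zpool attach freenas-boot nvd0p2 nvd1p2   # attach the new NVMe as a second mirror leg
zpool status freenas-boot                 # wait for the resilver to complete
```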
Want to build a NAS yourself or have any questions? Feel free to [drop me a line](/contact/).
## Big August 2020 update
Right, so it's time to write an update to my NAS project. Quite a few things have changed, so gear up!
![New DIY NAS enclosure](/images/diy-nas/IMG_20200702_004507.jpg)
The image above already gives it away: I changed enclosures. But why? Let me start by saying that the Fractal Design Node 804 is an
amazing case, but it has three drawbacks.
1. The airflow, even with 3 fans blowing across the hard disks, was not enough to keep the disks cool. Especially the 8TB Reds
would easily reach over 50°C (~122°F) on hot days. That's _probably_ not something to worry about, but better safe than sorry.
2. The HDDs are not easily accessible. Now, I haven't had any disk issues yet, but with the amount of cable clutter
and caddies holding four drives each, it's not trivial to replace a disk. This becomes more important now that failures are more likely
to start happening and I'm still looking to upgrade the 3TB Reds sometime.
3. It can only hold 8 disks. Well, that's not really a fault of this case.
So, what's a reasonable upgrade for a Node 804 that makes disks easily accessible, accommodates more drives and fits
under my desk? I _could_ have gone with a Fractal Design Define 7 XL, which can house up to 18 HDDs, but it's quite expensive at over €220. It also would not really make the drives any more accessible.
I soon settled on a 19" rack mount form factor. I saw a few deals on cheap Dell machines with 8 or 12 bays and decent
Xeon-based hardware. I passed on those, mainly because these things are power hogs and really _LOUD_. Besides, I already have
all the hardware I need; I just want another case.
There are a few manufacturers out there that sell 19" rack mount storage chassis. The 2U versions house up to 8 HDDs, which would
be fine. But if I'm going to invest in a new case, I want that little bit of extra room to expand in the future. This soon leads
to 4U cases, which have 10, 16, 24 or even more HDD bays. The problem is that, at this time, they're _really_ hard to find, especially
if you want to stick consumer ATX hardware into them. Also, they are almost all sold out. And if you manage to find one, they're
not cheap either at over €500. Such is life in Europe, I guess.
Luckily, while browsing the classified ads section of [Tweakers.net](https://tweakers.net) I came across a nice fellow who
was selling his old Norcotek 4U 16 bay chassis. I inquired about the price and we made a deal for €75. The only problem was that he
lived on the other side of the country. And you simply don't ship a 4U server easily or cheaply.
So, I took a day off work and drove over 3 hours to pick up my new NAS enclosure.
![Route Son en Breugel to Groningen, 3 hours and 10 minutes without traffic](/images/diy-nas/son-breugel-groningen.png)
<small>Map by [OpenStreetMap.org](https://www.openstreetmap.org)</small>
When I went to pick up the case, it turned out it came with a 520W power supply (already neatly cable managed), four
Mini-SAS 8087 to 4x SATA cables and all the original screws in their original labeled bags. As a bonus it had some nice Scythe case fans. Wow!
Installation was pretty straightforward: simply move everything to the new case. My single Dell PERC H200 card could accommodate
8 of the 16 hot-swappable caddies, so I would need another one of those to handle the other 8. That would free up my motherboard
SATA connectors for additional, internal SSDs and whatnot. Luckily, I could buy another one pre-flashed from a local eBay seller.
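After the transplant, a quick sanity check that both HBAs and all the disks show up. A sketch using FreeBSD's stock tooling (device names will differ per system):

```bash
# List every disk the system sees, grouped per controller/bus
camcontrol devlist
# The two H200s (LSI SAS2008-based) typically show up via the mps driver
dmesg | grep -i mps
```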
![Transplant complete](/images/diy-nas/IMG_20200708_111038.jpg)
## Enter Proxmox
Another item on my list was to move to Proxmox and virtualize FreeNAS. Let me explain why I want to do this.
First, FreeNAS / TrueNAS is a great system. FreeBSD is rock solid and ZFS offers an amazing "storage experience". The only
thing lacking is virtualization support. FreeBSD jails work fine, but I had some recent trouble with `iocage` commands being
terribly slow, and I often struggle to convert Linux install instructions to their FreeBSD equivalents. Updating jails
has also proven to be a painful process for me. My workaround was to run a virtual machine (with a fixed amount of RAM) and
run Docker on it. It all worked, but it didn't feel solid. Also, I didn't hear many great things about bhyve, although I
haven't run into any real trouble myself.
Well, enter Proxmox. Proxmox is a custom Debian-based Linux OS that offers QEMU virtual machines and LXC (Linux Containers). That
sounds really awesome. With Proxmox it's also possible to pass through PCI devices to a virtual machine. This means that I can
map both my HBA cards to my FreeNAS virtual machine and run FreeNAS just as before.
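Roughly, passthrough comes down to enabling the IOMMU and handing the HBAs' PCI addresses to the VM. A minimal sketch for an Intel board (the PCI addresses and VM id 100 are placeholders, not my actual values):

```bash
# 1. Enable the IOMMU in /etc/default/grub, then run update-grub and reboot:
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
# 2. Make sure the vfio modules are loaded (add them to /etc/modules as well):
modprobe -a vfio vfio_iommu_type1 vfio_pci
# 3. Find the PCI addresses of the two HBAs:
lspci | grep -i lsi
# 4. Hand both HBAs to the FreeNAS VM:
qm set 100 -hostpci0 01:00.0
qm set 100 -hostpci1 02:00.0
```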
The process was quite painless: back up the FreeNAS configuration, install Proxmox, create a VM for FreeNAS, and install FreeNAS in it. Then
restore the configuration (maybe with a few tweaks for changed device names, like the NIC, or scheduled SMART tasks), and
you continue where you left off. It works great.
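The VM itself is nothing exotic. A hedged sketch of what creating it from the shell could look like (the memory, core count, storage names and ISO file are assumptions, not my exact configuration):

```bash
# Create the FreeNAS VM: 24 GB RAM, 4 cores, a small virtual boot disk and a virtio NIC
qm create 100 --name freenas --memory 24576 --cores 4 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:16 \
  --cdrom local:iso/FreeNAS-11.3-U4.iso
qm start 100
```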
I've migrated most of my jailed / dockerized services over to Linux containers based on Debian. Some more complex ones,
like Gitea and Plex, are still where they were (a Docker VM and a FreeNAS jail, respectively).
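Spinning up such a container is close to a one-liner. A sketch (the template version, storage names, container id and hostname are made up for illustration):

```bash
# Fetch a Debian template and create an unprivileged container from it
pveam update
pveam download local debian-10-standard_10.5-1_amd64.tar.gz
pct create 201 local:vztmpl/debian-10-standard_10.5-1_amd64.tar.gz \
  --hostname myservice --memory 1024 --cores 2 \
  --rootfs local-lvm:8 --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1
pct start 201
```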
For fun and giggles I added three old 500GB spinners and created a new ZFS pool with them, to try things out like replacing disks.
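A scratch pool like that is ideal for practising the commands you hope to never need on the real pools. A sketch with made-up device names:

```bash
# Build a small RAIDZ1 pool out of the three old 500GB disks
zpool create scratch raidz1 /dev/sdb /dev/sdc /dev/sdd
# Practise a disk replacement: swap one member for a spare and watch the resilver
zpool replace scratch /dev/sdd /dev/sde
zpool status scratch
```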
## More RAM
All the while, the original [Corsair Vengeance LPX 16 GB (1 x 16 GB) DDR4-2400 Memory](https://nl.pcpartpicker.com/product/dDTrxr/corsair-vengeance-lpx-16gb-1-x-16gb-ddr4-2400-memory-cmk16gx4m1a2400c14) modules were on sale at Amazon.de, so I decided to scoop up
a few more and extend my NAS's memory to 64GB total. 24GB of this RAM is now allocated to FreeNAS, which uses about half of that
for services; the rest is used for the ZFS cache.
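If you want to put a hard ceiling on how much of that RAM the ZFS ARC may claim, there is a tunable for it. A sketch for FreeBSD/FreeNAS (the 12 GiB value is only an example, not my actual setting):

```bash
# On FreeNAS, cap the ARC by adding vfs.zfs.arc_max as a loader tunable
# (System -> Tunables), e.g. 12 GiB expressed in bytes:
#   vfs.zfs.arc_max = 12884901888
# Check what the ARC is currently using:
sysctl kstat.zfs.misc.arcstats.size
```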
## So what about those disk temperatures?
My eight spinners are all located on the left side of the chassis right now, so they're packed tight. But even during the recent
heat wave in the Netherlands, with ambient temperatures rising to 32°C, no disk went above 44°C, which is a big win if you ask me.
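Temperatures are easy to keep an eye on with smartmontools. A small sketch, assuming FreeBSD-style `da` device names inside the FreeNAS VM (adjust the range to your own disks):

```bash
# Print the SMART temperature attribute of every spinner
for disk in /dev/da{0..7}; do
  echo -n "$disk: "
  smartctl -A "$disk" | awk '/Temperature_Celsius/ {print $10 "°C"}'
done
```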
## Treebeard and Gandalf
![Treebeard and Gandalf](/images/diy-nas/IMG_20200708_113303.jpg)
<small>`treebeard` and `gandalf` in my Lack Rack.</small>
My NAS has the official hostname of `treebeard`. I also set up `gandalf`, a 1U 19" rack mount server with a simple ASRock mini-ITX
board and an i3 4130T processor. It's fitted with 2x 128GB SSDs locally. `gandalf` also runs Proxmox. In fact, the two form a cluster, which makes it super easy to migrate
services between the two nodes. `gandalf` runs some essential home network services (all in Linux Containers), like Pi-hole, WireGuard,
and home automation.
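Clustering the two nodes is a two-command affair, and moving containers around becomes a one-liner. A sketch using the hostnames above (the cluster name and container id are placeholders):

```bash
# On treebeard: create the cluster
pvecm create homelab
# On gandalf: join it, pointing at treebeard's address
pvecm add treebeard
# Verify both nodes are in
pvecm status
# Moving a container from one node to the other
pct migrate 201 gandalf --restart
```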
_Update: `gandalf` was using over 150W of power while idle. It was also running quite hot, even though I upgraded the three 40x40mm
fans to Noctua ones. For now, `gandalf` has been retired and removed from the cluster until I can figure out what's causing this
insane power consumption. For comparison, `treebeard`, with its 11 spinners, runs at about 100W idle._
## APC UPS woes
A year ago I purchased an APC Back-UPS 700. It's been working great so far. However, after disconnecting it from the old server
and hooking it up to the new one, I noticed that the USB connection no longer worked. Just nothing. I found out I needed
to reset the UPS by powering it off, disconnecting the battery (with the large yellow plug on the back) and turning it back on again.
While that got USB working again, it did yield some strange values in `apcaccess`: the status was not `ONLINE`
but `BOOST`, and it was not reporting some critical values, like `TIMELEFT`. After a quick call to tech support I had to
perform a battery calibration. Basically: you disconnect your load and instead hook up something that burns power, like an old
light bulb or an electric heater. You then disconnect mains power and run the battery dry (this is safe, as the UPS will shut down
before you can damage your battery). Then, without any load, reconnect mains and let the unit charge fully.
After this the UPS was working great again.
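If you end up doing the same, `apcaccess` is the quickest way to confirm the unit is healthy again. A sketch, assuming apcupsd is already set up:

```bash
# Show the fields that matter after a reset or calibration
apcaccess status | grep -E 'STATUS|BCHARGE|TIMELEFT|LOADPCT'
# A healthy unit reports STATUS : ONLINE and a sensible TIMELEFT
```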
Recently I read about how this budget line of UPS devices does not handle a missing battery well: if you disconnect
the battery, your load will be cut from power, even if you have mains power connected. Read more at [Fitzcarraldo's blog post about his UPS experiences](https://fitzcarraldoblog.wordpress.com/2020/08/09/that-ups-you-bought-for-your-home-server-may-not-be-as-useful-as-you-think/).
## Next steps?
So, what will the future hold? If everything stays as stable as it is right now, the only change will be adding more
storage, either with extra drives or by upgrading the 3TB Reds. I have some other hardware around that might make good use of the 4x 3TB
Reds as a backup machine for essential data.

Four images added as binary files (not shown): 2.9 MiB, 5.2 MiB, 3.7 MiB and 5.6 MiB.

View File

@ -10,6 +10,7 @@ $links: #F6019D;
$footer: #540D6E;
$meta: $text;
$header: #2DE2E6;
$highlight: #F8FFA3;
html, body {
background-color: $backdrop;
@ -19,6 +20,15 @@ html, body {
line-height: $line-height;
}
.important {
background-color: $highlight;
color: $background;
width: 100%;
display: block;
padding: 1rem;
text-align: center;
}
.container {
background-color: $background;
padding: 1rem 1.5rem;