Recently I read that this budget line of UPS devices does not work well without a battery connected. E.g. if you disconnect the battery, your load is cut off from power, even if you have mains power connected. Read more at [Fitzcarraldo's blog post about his UPS experiences](https://fitzcarraldoblog.wordpress.com/2020/08/09/that-ups-you-bought-for-your-home-server-may-not-be-as-useful-as-you-think/).

## September 2020 update

This is starting to turn into a blog-in-a-blog kind of thing. But here's another update for you.

### More APC UPS woes

The reset I performed worked for a few days. Then the UPS reported:

```
STATUS : ONLINE REPLACEBATT
```

Of course, this happened in the middle of the night, accompanied by a loud beeping tone from the UPS. I purchased a replacement battery online (same model and ratings, just not the expensive APC-branded one). This fixed the problem and the UPS has been working as it should. Still very strange that the battery gave out after just one year of use.

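For monitoring, the status line above is the shape of output `apcaccess` prints (an assumption on my part - the post doesn't say which tool was used; `apcaccess` ships with apcupsd):

```
# ask the apcupsd daemon for the current UPS state
apcaccess status | grep -E 'STATUS|BCHARGE|TIMELEFT'
```
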
### No more FreeNAS!

This may come as a shocker ;-) I dropped FreeNAS. As you may remember, I started running FreeNAS in a VM on Proxmox with the HBA card passed through to FreeNAS. In order to share my pool/datasets with other VMs and containers in Proxmox, I needed to share them over NFS to the host system, which in turn required my containers to run in privileged mode to mount the NFS shares. The real issue was performance: many apps had problems with locking on the NFS mounts, and downloading large files would sometimes throw errors. It was not fun.

But then I thought: what am I really still using FreeNAS for? I don't like the jails - they're FreeBSD and I prefer Linux. I have an NFS share - but only because I need to share with the host. Maybe a Samba share for Time Machine, but that's really all. Well, and ZFS of course.

But Proxmox supports ZFS as well! A small container can run Samba for Time Machine backups. So, I took the plunge:

1. Shut down the FreeNAS VM
2. Disable auto-start :-)
3. `zpool import core-storage tank` - importing and renaming the pool in one go

That. Was. It.
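
Spelled out as commands on the Proxmox host, the three steps look roughly like this (a sketch, not a transcript - the VM id is a placeholder, the pool names are from this post):

```
# 1 + 2: stop the FreeNAS VM and keep it from starting at boot
qm stop <vmid>
qm set <vmid> --onboot 0

# 3: import the pool from the HBA's disks, renaming it on the way in
zpool import core-storage tank
zpool status tank
```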

### Moar drives!

My chassis can house 16 3.5" HDDs in hot-swap caddies. I ordered a second Dell PERC H200 HBA card from eBay. I now have the following drives in my pool:

* 4x Western Digital 3TB Red
* 4x Western Digital 8TB White (shucked from WD Elements)
* 4x Western Digital 14TB White (shucked from WD Elements)

Yes, you read that right. Amazon.de had the 14TB WD Elements on sale, so I grabbed four of them. My pool now consists of three RAIDZ1 vdevs:

```
# zpool status -v tank
  pool: tank
 state: ONLINE
  scan: scrub repaired 0B in 0 days 20:08:21 with 0 errors on Thu Sep 17 05:25:13 2020
config:

	NAME        STATE     READ WRITE CKSUM
	tank        ONLINE       0     0     0
	  raidz1-0  ONLINE       0     0     0
	    sdc     ONLINE       0     0     0
	    sdb     ONLINE       0     0     0
	    sdg     ONLINE       0     0     0
	    sde     ONLINE       0     0     0
	  raidz1-1  ONLINE       0     0     0
	    sdd     ONLINE       0     0     0
	    sdf     ONLINE       0     0     0
	    sdh     ONLINE       0     0     0
	    sda     ONLINE       0     0     0
	  raidz1-2  ONLINE       0     0     0
	    sdi     ONLINE       0     0     0
	    sdj     ONLINE       0     0     0
	    sdl     ONLINE       0     0     0
	    sdm     ONLINE       0     0     0
```

And some specifics:

```
# zpool list tank
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank  90.8T  35.2T  55.6T        -         -     9%    38%  1.00x  ONLINE  -
```

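A quick back-of-the-envelope check on those numbers (my own arithmetic, not from the pool output): drive vendors label capacity in decimal terabytes, while `zpool list` reports the raw pool size, parity included, in binary units.

```python
TB, TiB = 1000**4, 1024**4  # decimal (drive label) vs binary (zfs tools) units

drives_tb = [3] * 4 + [8] * 4 + [14] * 4   # the twelve drives listed above
raw_tib = sum(drives_tb) * TB / TiB        # raw size, the way zpool list shows it
print(f"raw: {raw_tib:.1f} TiB")           # -> raw: 90.9 TiB, matching the 90.8T above

# each 4-disk RAIDZ1 vdev spends one disk's worth of capacity on parity
usable_tb = sum(3 * size for size in (3, 8, 14))
print(f"usable before overhead: {usable_tb} TB")  # -> 75 TB
```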
That should be sufficient for a while, but don't quote me on that. :-)

### SMART UDMA CRC Errors

For a while, `/dev/sdi` had some UDMA CRC errors. These are, especially with new drives, often caused by cable issues. I offlined the drive, stuck it in one of the four remaining free slots, and the problem was solved. ZFS of course had picked up on this as well, and Proxmox immediately notified me of the SMART and ZFS issues by email. Yay for Proxmox!

```
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
199 UDMA_CRC_Error_Count    0x000a   100   100   000    Old_age   Always       -       16
```

Resetting the ZFS pool status is easy with `zpool clear tank`. For good measure I ordered a scrub, which finished without any further issues.

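For reference, the check and cleanup described here map to commands roughly like these (a sketch; `smartctl` comes from the smartmontools package, the device and pool names are from this post):

```
# read the raw UDMA CRC counter for the suspect drive
smartctl -A /dev/sdi | grep UDMA_CRC_Error_Count

# clear the logged ZFS errors, then verify with a scrub
zpool clear tank
zpool scrub tank
zpool status tank
```
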
## Next steps?

So, what will the future hold? Nothing right now: I have several TBs of storage left to hoard, CPU-wise I'm good, and memory is maxed out. If everything stays as stable as it is, the only change will be either adding more storage or swapping the 3TB Reds for bigger drives. I have some other hardware around that could put the 4x 3TB Reds to good use as a backup machine for essential data. I love Proxmox for being Linux _and_ supporting ZFS.