Matt Zerella
Oct 7, 2002


D. Ebdrup posted:

It's not as if ZFS can't be expanded, either - you just can't add individual drives to an existing raidz vdev. If you do want to expand your zpool, you have two options: replace drives with bigger drives (and resilver between each replacement), or just add another vdev with a separate striped parity configuration.
If you're a bit forward-thinking and smart about buying used server equipment, you can get a Xeon board with ECC support and 4 daughter board slots capable of handling 16 drives per HBA (without port multipliers - which means you can fit plenty more than 16 drives per HBA, since consumer or prosumer spinning rust doesn't take up all the bandwidth of SATA2, let alone SATA3) - all without breaking the bank.
If you add 4 devices at a time in raidz2 and you're careful about wiring, you can expand the zpool at least 16 times and lose any two HBAs without experiencing data loss.

I just add any drive equal or smaller to my parity drive and it works out of the box for my linux isos.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



I recently migrated a 5x2TB drive pool to a 5x5TB drive pool.

It worked fine, but lord that took over a week with all the resilvering time.
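
For anyone who hasn't done it, the replace-and-resilver dance is roughly this per disk - the pool and device names here are made up:

```
# replace one disk at a time and wait for the resilver to finish before the next
zpool replace tank gptid/old-disk-0 gptid/new-disk-0
zpool status tank                 # watch resilver progress

# once every disk has been swapped, let the pool grow into the new capacity
zpool set autoexpand=on tank
zpool online -e tank gptid/new-disk-0
```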

Mr. Crow
May 22, 2008

Snap City mayor for life


The idea that you have to spend at least $450-600 to increase disk space and the ZFS community is so casual about it blows my mind.

For home use it's untenable to me. There are better options.

8-bit Miniboss
May 24, 2005

CORPO COPS CAME FOR MY


Mr. Crow posted:

The idea that you have to spend at least $450-600 to increase disk space and the ZFS community is so casual about it blows my mind.

For home use it's untenable to me. There are better options.



I got spooked enough that I'm just going to take down my FreeNAS box and do snapraid with mergerfs or something.
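
For anyone curious, a snapraid + mergerfs setup is basically two small config files plus a scheduled sync - this is a rough sketch, and all the paths and drive names are made up:

```
# /etc/snapraid.conf (sketch)
parity  /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/

# /etc/fstab entry that pools the data disks into one mount via mergerfs
/mnt/disk* /mnt/storage fuse.mergerfs defaults,allow_other,use_ino,category.create=epmfs 0 0

# then run the parity sync/scrub on a schedule (e.g. nightly cron)
snapraid sync
snapraid scrub
```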

movax
Aug 30, 2008



Yeah -- I'm kind of going into this and not planning on expanding this machine.

On my previous attempt at ZFS, I started with two six-drive RAID-Z2s and expanded to a third a few years down the line. That's around when I learned the (in retrospect rather obvious) lesson that new data was only ending up on the third vdev, since the first two were pretty much full, and I'd have to copy the data off and re-write it all to balance it out.
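
(If anyone else ends up in the same boat, the usual fix is to rewrite the data so it gets striped across all the vdevs again - roughly like this, with made-up dataset names and assuming you have room for a second copy:)

```
# snapshot the lopsided dataset and copy it within the same pool;
# the receive writes fresh blocks that land across every vdev
zfs snapshot -r tank/media@rebalance
zfs send -R tank/media@rebalance | zfs receive tank/media-rebalanced

# after verifying the copy, drop the old dataset and rename the new one into place
zfs destroy -r tank/media
zfs rename tank/media-rebalanced tank/media
```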

Krailor
Nov 2, 2001
I'm only pretending to care

Taco Defender

For standard home use I'm a big fan of DrivePool.
I can use whatever random sized drives I have laying around and it lets me set folder level redundancy levels.
If a drive dies it sends me an email and starts making new copies of whatever was on that drive on the other drives (provided there's space).
If I want to add another drive I just hook a new one up, mount it to a folder, and add it to the pool.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.


I've got 8x4TB and rather than doing the whole resilver dance when I move to 8x8TB, I think I'm going to buy an extra Mybook Duo, back all my shit up to it (and another external I've got floating around here at home), pull all the old drives, put in the new ones, and restore the data. Sounds a hell of a lot faster and easier (and safer!) than 8 resilvers.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll

Nap Ghost

I'm just going to add another RAIDZ2 vdev to my zpool for expansion and I figure by the time that all is on the fritz I'll be looking at a ton of xpoint drives or it'll be cheaper to put it all in GCP.
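
The nice part is that adding a whole vdev is a one-liner (device names made up; the -n dry run is worth doing first, since you can't take a vdev back out of the pool again):

```
# dry run: show what the pool layout would become without actually committing
zpool add -n tank raidz2 da8 da9 da10 da11
# looks right? do it for real
zpool add tank raidz2 da8 da9 da10 da11
```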

zennik
Jun 9, 2002



8-bit Miniboss posted:



I got spooked enough that I'm just going to take down my FreeNAS box and do snapraid with mergerfs or something.

Take a gander at unRAID. It doesn't have the performance of ZFS, but it's got a drive expansion system similar to DrivePool or the old Microsoft WHS.

All you really need for 'security' is one or two parity drives that are as large as the largest drive in the pool.

Further, they've got some fantastic native plugins as well as a decent VM engine. If you have a supported GPU you can even directly map it to a VM and turn your unRAID box into a workstation as well.

Linus Tech Tips had a cool show awhile back where they put 3 or 4 GPUs in a box with unRAID and had it work as a bunch of gaming systems running out of 1 single tower.

EDIT: My bad, it was two. I've done 3 in a tower before, though. https://www.youtube.com/watch?v=LuJYMCbIbPk

zennik fucked around with this message at 06:51 on Mar 25, 2017

8-bit Miniboss
May 24, 2005

CORPO COPS CAME FOR MY


zennik posted:

Take a gander at unRAID. It doesn't have the performance of ZFS, but it's got a drive expansion system similar to DrivePool or the old Microsoft WHS.

All you really need for 'security' is one or two parity drives that are as large as the largest drive in the pool.

Further, they've got some fantastic native plugins as well as a decent VM engine. If you have a supported GPU you can even directly map it to a VM and turn your unRAID box into a workstation as well.

Linus Tech Tips had a cool show awhile back where they put 3 or 4 GPUs in a box with unRAID and had it work as a bunch of gaming systems running out of 1 single tower.

EDIT: My bad, it was two. I've done 3 in a tower before, though. https://www.youtube.com/watch?v=LuJYMCbIbPk

I'm aware of it. Snapraid+mergerfs made a stronger case for me.

Edit: I should say that it's making a stronger case for me. I'm still evaluating routes I want to explore.

eames
May 9, 2009



yeah he repeated that project later on with 7 gaming VMs in one box.

My main gripe with unRAID is that their security design (telnet and plaintext HTTP in 2017?) and overall visual design look like they're stuck in 2003.
Some of the security concerns can be worked around by putting the box into its own VLAN. I can't fault the functionality though. Don't ever dream of using it for anything business related but as a high functionality/low maintenance homeserver it's the best option out there IMO.

D. Ebdrup
Mar 13, 2009



Matt Zerella posted:

I just add any drive equal or smaller to my parity drive and it works out of the box for my linux isos.
Yes, but RAID4/unRAID also gives up a lot compared to ZFS:
- no protection against bitrot, and no inline compression
- you can't run the OS on the same pool (with the protection or speed that pool offers) that you're using to store your data - this is specific to unRAID as far as I can determine, since it could be done with plain Linux
- no snapshots (or bootable snapshots, which make any failed system upgrade or kernel tweak easy to recover from and debug)
- no reverse-delta block-level backup, where after your initial sync only the blocks that have actually changed get resynchronised
- slower write performance (if we ignore caching for a moment), and its SSD caching is limited to write caching
- no block devices you can create and manipulate for applications using the same storage stack
- no way to arbitrarily match stripe size to the write width of whatever application you're running on top of your storage
Every one of these features is useful in a home-lab situation; they're not enterprise-only - whereas unRAID started out being made to store FreeBSD isos, and is now bolting on all sorts of extra features it wasn't designed for.
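
To put a few of those in concrete terms (pool/dataset names are made up):

```
zfs set compression=lz4 tank/data          # inline compression
zfs snapshot tank/data@pre-upgrade         # cheap snapshot to roll back to
zfs send -i @monday tank/data@tuesday | \
    ssh backupbox zfs receive backup/data  # incremental send: only changed blocks cross the wire
zfs create -V 32G tank/vm-disk             # a block device (zvol) carved out of the same pool
zfs set recordsize=16K tank/db             # match record size to the application's write size
```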

Mr. Crow posted:

The idea that you have to spend at least $450-600 to increase disk space and the ZFS community is so casual about it blows my mind.

For home use it's untenable to me. There are better options.
I can only speak for myself, but the features above more than make up for the downside of needing to put extra money into expanding my storage.

8-bit Miniboss posted:

I got spooked enough that I'm just going to take down my FreeNAS box and do snapraid with mergerfs or something.
Not gonna lie, when I read this post last night while drunk, I thought you were being ironic and I was going to suggest you use BTRFS RAID5/6. Dodged that drunkposting probation bullet.

zennik posted:

Linus Tech Tips had a cool show awhile back where they put 3 or 4 GPUs in a box with unRAID and had it work as a bunch of gaming systems running out of 1 single tower.
LTT also now uses ZFS for storage because none of their other storage options could expand enough for their needs. That's one neat feature I do want bhyve on FreeBSD to have, though - GPU passthrough, either through VT-d/AMD-IOMMU or preferably through SR-IOV.

eames posted:

yeah he repeated that project later on with 7 gaming VMs in one box.

My main gripe with unRAID is that their security design (telnet and plaintext HTTP in 2017?) and overall visual design look like they're stuck in 2003.
Some of the security concerns can be worked around by putting the box into its own VLAN. I can't fault the functionality though. Don't ever dream of using it for anything business related but as a high functionality/low maintenance homeserver it's the best option out there IMO.
That's generally a complaint about a lot of open-source stuff - that it feels like what commercial systems were offering a decade or more ago. Xorg definitely suffers from this, but it's understandable since X is much older than Linux or FreeBSD (only technically, since FreeBSD's source history actually goes back to June 30th, 1970) - it's so old that its name is a pun, from back when naming programs for puns was a good idea.

D. Ebdrup fucked around with this message at 09:35 on Mar 25, 2017

phosdex
Dec 16, 2005



Tortured By Flan

You guys went from "freenas/zfs isn't necessary for a home user" to "you should run this other solution that can run 7 gaming VMs."

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

phosdex posted:

You guys went from "freenas/zfs isn't necessary for a home user" to "you should run this other solution that can run 7 gaming VMs."

Well what else do you think a home user does? I mean, you do have 7-man LAN parties every Friday night like the rest of us normal human beings do, right?

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.


I mean... I'm a fucking weirdo and all, but I definitely run a gaming VM. I just don't do it on my NAS. But I've considered setting up iSCSI and using the NAS to host the storage for it.

redeyes
Sep 14, 2002
I LOVE THE WHITE STRIPES!

Krailor posted:

For standard home use I'm a big fan of DrivePool.
I can use whatever random sized drives I have laying around and it lets me set folder level redundancy levels.
If a drive dies it sends me an email and starts making new copies of whatever was on that drive on the other drives (provided there's space).
If I want to add another drive I just hook a new one up, mount it to a folder, and add it to the pool.

Me too. Never lost a bit using DrivePool. I use it on Server 2012 R2. Not only that, any standard NTFS recovery tool will work if a drive messes up. What's not to love?

D. Ebdrup
Mar 13, 2009



G-Prime posted:

I mean... I'm a fucking weirdo and all, but I definitely run a gaming VM. I just don't do it on my NAS. But I've considered setting up iSCSI and using the NAS to host the storage for it.
That's my #1 reason for building a new server with two overprovisioned SLC SSDs and two 10G SFP+ modules that support iSCSI boot: a completely diskless system with zpool-sized storage, SSD-like access speeds, and all the data redundancy and protection that ZFS offers.
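
(Roughly what the storage side of that looks like - the zvol name and size here are made up, and the iSCSI export itself is whatever target you prefer, ctld in my case:)

```
# carve a sparse zvol out of the pool to act as the diskless client's boot disk;
# volblocksize should match what the client filesystem will be writing
zfs create -V 200G -o volblocksize=4K -s tank/gamingbox-boot
# the zvol then gets exported as an iSCSI LUN (ctld/FreeNAS UI) and the client's
# NIC boots from it via its iSCSI boot firmware
```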

redeyes
Sep 14, 2002
I LOVE THE WHITE STRIPES!

D. Ebdrup posted:

two overprovisioned SLC SSDs


D. Ebdrup
Mar 13, 2009



Intel X25-E 64GB, overprovisioned to 20GB each.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll

Nap Ghost

Read: what people used to do with hard drives years ago ("short-stroking") for performance, except now with SSDs for longevity and reliability. Strange times - what a time to be alive.

Hughlander
May 11, 2005



If you guys use Crashplan listen here...

I run CrashPlan in a Docker container that auto-updates itself, and when it does I usually go a week or two without resetting the JVM memory option back to 4g, so it crashes constantly and takes forever to catch up. This last time it was saying 1.4-2.8 years to catch up, so I got sick of it, poked and prodded, and came across
<dataDeDupAutoMaxFileSizeForWan> in the my.service.xml file. It seems to specify the maximum file size CrashPlan will try to dedup against the CrashPlan server; it's set to 0 by default, meaning all files. I set it to 1. My throughput went from 4-600kps to 30Mbps. I was trying to paste a Datadog graph but imgur kept saying it couldn't take it... I'm uploading 3-4 megabytes a second, the time to catch up dropped to 23 days, and after a day it's now 19 days.
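
If anyone wants to try the same tweak, it's a one-line change in my.service.xml - stop the engine first, and note that the path below is just where my container keeps the file, yours may differ:

```
# flip the dedup threshold from 0 (all files) to 1 byte, then restart the engine
# (path is an assumption - check where your install/container keeps my.service.xml)
sed -i 's|<dataDeDupAutoMaxFileSizeForWan>0</dataDeDupAutoMaxFileSizeForWan>|<dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan>|' \
    /usr/local/crashplan/conf/my.service.xml
```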

IOwnCalculus
Apr 2, 2003





Which Docker container are you using? This one seems to remember my java mx setting across reboots and updates.

Hughlander
May 11, 2005



IOwnCalculus posted:

Which Docker container are you using? This one seems to remember my java mx setting across reboots and updates.

jrcs/crashplan, I considered that one but didn't want the client portion. I may reconsider it when this update is done though...

IOwnCalculus
Apr 2, 2003





That's the one I used to use, and yeah, it forgets the max RAM setting every time it restarts. I don't know what black magic the one with the client uses, but it doesn't have that problem and it's easier to manage anyway.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



That's weird. I've never messed with it and CrashPlan always maxes out my measly 5mbps upload.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

D. Ebdrup posted:

That's my #1 reason for building a new server with two overprovisioned SLC SSDs and two 10G SFP+ modules that support iSCSI boot: a completely diskless system with zpool-sized storage, SSD-like access speeds, and all the data redundancy and protection that ZFS offers.
I've looked into doing that, but I can't get decent performance out of FreeNAS. With SMB I can saturate the 10GbE link when the cache is hot, maybe 200-300MB/s on a cold cache (depending on how ZFS sprayed the data across my quasi-RAID10). I'd be happy to get more than 150MB/s on iSCSI (4K logical blocks, 8K volblocksize).

Question is whether you can maintain near local SSD like latency to begin with.

I haven't tried with the new FreeNAS 10 setup, though.

Hughlander
May 11, 2005



Thermopyle posted:

That's weird. I've never messed with it and CrashPlan always maxes out my measly 5mbps upload.

It only becomes an issue once you pass 4-5TB of data. Below that, the default 1GB heap is sufficient.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



I've got over 10TB up there.

8-bit Miniboss
May 24, 2005

CORPO COPS CAME FOR MY


I spent a chunk of my weekend looking at OSes. FreeNAS Corral is bad. I had an easier time setting up on 9. The UI is a little unresponsive (updating the hostname took longer than it should have because the save button didn't seem to work) and the "wizard" is pretty barebones for anyone new to building a server with it, since there's no help or tooltips. You're pretty much left to your own devices with a currently barebones wiki, unlike previous versions' online documentation. Having to deal with the UI is just a bad time. What a step back. I'm sure it'll be better down the road, but I'm not looking at using it anytime soon.

Thanks Ants
May 21, 2004

Bless You Ants, Blants



Fun Shoe

Thermopyle posted:

I've got over 10TB up there.

How long did you take to get it uploaded?

D. Ebdrup
Mar 13, 2009



Combat Pretzel posted:

I've looked into doing that, but I can't get decent performance out of FreeNAS. With SMB I can saturate the 10GbE link when the cache is hot, maybe 200-300MB/s on a cold cache (depending on how ZFS sprayed the data across my quasi-RAID10). I'd be happy to get more than 150MB/s on iSCSI (4K logical blocks, 8K volblocksize).

Question is whether you can maintain near local SSD like latency to begin with.

I haven't tried with the new FreeNAS 10 setup, though.
I don't know whether it's bytes per sector or bytes per cluster in Windows, but check 'fsutil fsinfo ntfsinfo c:' to see both and make sure it matches your volblocksize. Also, if you have fast enough slog devices that you want to make full use of, you can set the zfs property 'sync' to 'always' - but be aware that your slog devices need to be big enough to hold two full writes' worth of whatever your interface can move in 5 seconds (since that's when it gets flushed to disk, and you want room for overhead). So on a 10Gbps interface you're looking at 10Gb/s ÷ 8 = 1.25GB/s, × 5s × 2 = 12.5GB.
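
Concretely, something like this (the zvol name is made up):

```
# on the Windows initiator (admin cmd.exe) - look at "Bytes Per Cluster":
#   fsutil fsinfo ntfsinfo C:

# on the FreeNAS/FreeBSD side - confirm the zvol matches, then push sync writes through the slog:
zfs get volblocksize tank/iscsi-vol
zfs set sync=always tank/iscsi-vol
```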

D. Ebdrup fucked around with this message at 22:19 on Mar 27, 2017

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

I matched the cluster size to the volblocksize. I'm not that interested in write speeds (I suppose it'd be helpful for compiling stuff); read speed is what I'm after.

Might want to try 16K blocks, but I doubt it'll do much. Picked 8K because it fits within a jumbo frame.

--edit:
Which reminds me, I can't create iSCSI shares on existing ZVOLs anymore/yet on FreeNAS 10, so that experiment is out for now.

Combat Pretzel fucked around with this message at 22:27 on Mar 27, 2017

D. Ebdrup
Mar 13, 2009



Latency is going to be limited by whatever interface you're using, which is why I'm planning on 10G SFP+ (it has an order of magnitude lower latency than RJ45) with a direct connection between the two machines instead of going over a store-and-forward switch, since cut-through switches aren't really available at anything but SAN prices.

Matching volblocksize isn't about read speed, it's about ensuring that when Windows tries to write 4096 bytes (which is what my Windows defaults to), it actually writes only that 4096-byte block and not a 16k or 256k block.
Jumbo frames just mean the NIC can use up to a 9k MTU per packet instead of ~1500 (the default for WAN links; lower on PPP and many other technologies - but since you're not sending traffic onto a WAN link, you can ignore that and set it as high as your NICs will allow). Programs will use as large an MTU as the lowest MTU in the path allows.

I can't really say where you're getting bottlenecked by read speeds - what's your cpu utilization for the top process and load averages looking like?

If you were running FreeBSD I'd suggest using dtrace to create some flamegraphs and see what the kernel and the other processes involved are spending most of their time on, but FreeNAS is pretty damn resistant to that kind of debugging. It's mostly a set-and-forget appliance that isn't designed for what you're trying to do.
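
For reference, on a plain FreeBSD box the flamegraph recipe is roughly this (it assumes Brendan Gregg's FlameGraph scripts are on the box):

```
# sample on-CPU kernel stacks at 997Hz for 60 seconds
dtrace -x stackframes=100 \
    -n 'profile-997 /arg0/ { @[stack()] = count(); } tick-60s { exit(0); }' \
    -o out.kern_stacks
# fold the stacks and render the SVG with the FlameGraph scripts
stackcollapse.pl out.kern_stacks | flamegraph.pl > kernel_flamegraph.svg
```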

D. Ebdrup fucked around with this message at 22:31 on Mar 27, 2017

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

I have Intel X520 cards here, which are SFP+, and my NAS is hooked up directly via DAC.

As edited above, experimentation is out for now due to bullshit on FreeNAS 10's part. Creating an iSCSI share in its current state will net me a volblocksize of 4KB, which is useless. (--edit: Oh hey, it defaults to 8K anyway. Still, I want 16K.)

D. Ebdrup
Mar 13, 2009



Yeah, I can't really help you with FreeNAS. To me, it's a good appliance for doing SMB and NFS sharing on ZFS, but if you want anything above that you're at the mercy of what iXsystems want to focus on, and not what FreeBSD is capable of.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Yea, well, the link certainly ain't the problem.



That said, I don't have an L2ARC (yet). Looks like I'll need an L2ARC and an SLOG. Redoing this box for iSCSI boot would require new hardware, anyway.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



Thanks Ants posted:

How long did you take to get it uploaded?

Hard to say for sure as I added stuff over time.

D. Ebdrup
Mar 13, 2009



Combat Pretzel posted:

Yea, well, the link certainly ain't the problem.



That said, I don't have an L2ARC (yet). Looks like I'll need an L2ARC and an SLOG. Redoing this box for iSCSI boot would require new hardware, anyway.
L2ARC is only something you should use if 1) you've completely filled your system with DRAM, since mapping L2ARC takes away memory otherwise used by the ARC, and 2) your ARC hit ratio still isn't high enough for your liking - and only when both of those conditions are met.

The 32GB Optane drives also look very interesting for slog devices for anyone doing ZFS unless you're looking to fill 40Gbps or 100Gbps.
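
If you want to check condition 2) before spending money, the counters are a sysctl away on FreeBSD:

```
# lifetime ARC hit ratio since boot
sysctl -n kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses | \
    awk 'NR==1{h=$1} NR==2{m=$1} END{printf "ARC hit ratio: %.1f%%\n", 100*h/(h+m)}'
```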

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

At some point, it becomes cheaper to run a small and fast SSD (or a mirror thereof) instead of buying more RAM. 170 bytes of ARC for 16K of L2ARC (ideal volblocksize for iSCSI) looks like a nice trade off. Certainly worthwhile at 32GB of RAM.

If you're going to use the server as remote disk, with the desire of SSD-like performance, you want as big of a read cache as possible, regardless of ARC hits.

--edit: With the 16K example above, if you spent 8GB of those 32GB on L2ARC headers, that would map more than 768GB of L2ARC.
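
(Quick sanity check of that, using the 170 bytes per 16KiB block figure from above - bash arithmetic:)

```
# 8GiB of headers / 170 bytes each * 16KiB per block, expressed in GiB of L2ARC addressed
echo $(( 8 * 1024**3 / 170 * 16384 / 1024**3 ))    # ~771, so "more than 768GB" checks out
```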

Combat Pretzel fucked around with this message at 17:38 on Mar 28, 2017

D. Ebdrup
Mar 13, 2009



I'm not entirely sure where you got the idea that a 16k volblocksize is ideal for iSCSI - could you provide some sources? As far as I know, it's all about matching the block width to the block size your application is writing, as described by skiselkov@nexenta, who commits to OpenZFS.

The ARC and L2ARC don't map out files, though - they map individual blocks based on their use, looking only at what has actually been read rather than acting as a read-ahead cache.

D. Ebdrup fucked around with this message at 18:47 on Mar 28, 2017
