|
D. Ebdrup posted:It's not as if ZFS can't be expanded, either - you just can't add individual drives to an existing raidz vdev. If you do want to expand your zpool, you have two options: replace drives with bigger drives (and resilver between each replacement), or just add another vdev with a separate striped parity configuration.

I just add any drive equal to or smaller than my parity drive and it works out of the box for my linux isos.
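For anyone curious what the replace-and-resilver route looks like in practice, a minimal sketch (pool and device names are placeholders; autoexpand only grows the vdev once every member has been swapped):

```
# enable automatic vdev growth once all members are replaced (assumed pool name "tank")
zpool set autoexpand=on tank

# swap one drive at a time and wait for each resilver to finish
zpool replace tank da1 da7      # da1 = old disk, da7 = new bigger disk
zpool status tank               # watch "resilver in progress" until it completes

# repeat for the remaining members, then check the new capacity
zpool list tank
```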
|
|
I recently migrated a 5x2TB drive pool to a 5x5TB drive pool. It worked fine, but lord, that took over a week with all the resilvering time.
|
|
The idea that you have to spend at least $450-600 to increase disk space and the ZFS community is so casual about it blows my mind. For home use it's untenable to me. There are better options.
|
|
Mr. Crow posted:The idea that you have to spend at least $450-600 to increase disk space and the ZFS community is so casual about it blows my mind.

I got spooked enough that I'm just going to take down my FreeNAS box and do SnapRAID with mergerfs or something.
|
|
Yeah -- I'm going into this not planning on expanding this machine. In my previous attempt at ZFS, I started with two six-drive RAID-Z2s and expanded to a third a few years down the line. That's around where I learned the (in retrospect rather obvious) lesson that new data was only ending up on the third vdev, since the first two were pretty much full, and that I'd have to copy the data off and re-write it all to balance things out.
|
|
For standard home use I'm a big fan of DrivePool. I can use whatever random-sized drives I have lying around, and it lets me set folder-level redundancy levels. If a drive dies it sends me an email and starts making new copies of whatever was on that drive on the other drives (provided there's space). If I want to add another drive I just hook a new one up, mount it to a folder, and add it to the pool.
|
|
I've got 8x4TB, and rather than doing the whole resilver dance when I move to 8x8TB, I think I'm going to buy an extra MyBook Duo, back all my shit up to it (and another external I've got floating around here at home), pull all the old drives, put in the new ones, and restore the data. Sounds a hell of a lot faster and easier (and safer!) than 8 resilvers.
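If the externals end up holding a scratch zpool rather than plain file copies, the backup/restore half of that plan could look roughly like this - a sketch only, where the pool names and the assumption of a pool on the external are mine:

```
# snapshot everything and stream it to a pool on the external (assumed name "backup")
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F backup/tank

# after swapping the drives and recreating the pool, stream it back
zfs send -R backup/tank@migrate | zfs receive -F tank
```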
|
|
I'm just going to add another RAIDZ2 vdev to my zpool for expansion, and I figure by the time that all is on the fritz I'll be looking at a ton of XPoint drives or it'll be cheaper to put it all in GCP.
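Growing a pool that way is a one-liner; a minimal sketch with placeholder pool and disk names:

```
# stripe a second 6-disk RAIDZ2 vdev onto the existing pool
zpool add tank raidz2 da6 da7 da8 da9 da10 da11

# existing data stays where it is, so new writes land mostly on the new vdev
zpool list -v tank
```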
|
|
8-bit Miniboss posted:
Take a gander at unRAID. It doesn't have the performance of ZFS, but it's got a drive expansion system similar to DrivePool or the old Microsoft WHS. All you really need for 'security' is one or two parity drives that are as large as the largest drive in the pool.

Further, they've got some fantastic native plugins as well as a decent VM engine. If you have a supported GPU you can even directly map it to a VM and turn your unRAID box into a workstation as well. Linus Tech Tips had a cool show a while back where they put 3 or 4 GPUs in a box with unRAID and had it work as a bunch of gaming systems running out of a single tower.

EDIT: My bad, it was two. I've done 3 in a tower before, though. https://www.youtube.com/watch?v=LuJYMCbIbPk

zennik fucked around with this message at 06:51 on Mar 25, 2017 |
|
zennik posted:Take a gander at unRAID. It doesn't have the performance of ZFS, but it's got a drive expansion system similar to DrivePool or the old Microsoft WHS.

I'm aware of it. SnapRAID+mergerfs made a stronger case for me.

Edit: I should say that it's making a stronger case for me. I'm still evaluating the routes I want to explore.
|
|
Yeah, he repeated that project later on with 7 gaming VMs in one box.

My main gripe with unRAID is that their security design (telnet and plaintext HTTP in 2017?) and overall visual design look like they're stuck in 2003. Some of the security concerns can be worked around by putting the box into its own VLAN. I can't fault the functionality, though. Don't ever dream of using it for anything business-related, but as a high-functionality/low-maintenance home server it's the best option out there IMO.
|
|
Matt Zerella posted:I just add any drive equal to or smaller than my parity drive and it works out of the box for my linux isos.

Every one of these features is useful in a home-lab situation; they're not enterprise-only - whereas unRAID started out being made to store FreeBSD isos, and is now bolting on all sorts of extra features it wasn't designed for.

Mr. Crow posted:The idea that you have to spend at least $450-600 to increase disk space and the ZFS community is so casual about it blows my mind.

8-bit Miniboss posted:I got spooked enough that I'm just going to take down my FreeNAS box and do SnapRAID with mergerfs or something.

zennik posted:Linus Tech Tips had a cool show a while back where they put 3 or 4 GPUs in a box with unRAID and had it work as a bunch of gaming systems running out of a single tower.

eames posted:Yeah, he repeated that project later on with 7 gaming VMs in one box.

D. Ebdrup fucked around with this message at 09:35 on Mar 25, 2017 |
|
You guys went from "freenas/zfs isn't necessary for a home user" to "you should run this other solution that can run 7 gaming VMs."
|
|
phosdex posted:You guys went from "freenas/zfs isn't necessary for a home user" to "you should run this other solution that can run 7 gaming VMs."

Well, what else do you think a home user does? I mean, you do have 7-man LAN parties every Friday night like the rest of us normal human beings do, right?
|
|
I mean... I'm a fucking weirdo and all, but I definitely run a gaming VM. I just don't do it on my NAS. But I've considered setting up iSCSI and using the NAS to host the storage for it.
|
|
Krailor posted:For standard home use I'm a big fan of DrivePool.

Me too. Never lost a bit using DrivePool. I use it on Server 2012 R2. Not only that, any standard NTFS recovery tool will work if a drive messes up. What's not to love?
|
|
G-Prime posted:I mean... I'm a fucking weirdo and all, but I definitely run a gaming VM. I just don't do it on my NAS. But I've considered setting up iSCSI and using the NAS to host the storage for it.

That's my #1 reason for building a new server with two overprovisioned SLC SSDs and 2 10G SFP+ modules that support iSCSI boot: a completely diskless system with zpool-sized storage and SSD-like access speeds, plus all the data redundancy and protection that ZFS offers.
|
|
D. Ebdrup posted:two overprovisioned SLC SSDs
|
|
Intel X25-E 64GB, overprovisioned to 20GB each.
|
|
Read: what people used to do with hard drives years ago ("short-stroking") for performance, except now with SSDs for longevity and reliability. Strange times; what a time to be alive.
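One way to get that kind of overprovisioning without vendor tools is simply to partition only part of the SSD and leave the rest untouched; a FreeBSD-flavoured sketch, where the device name and sizes are placeholders and the drive is assumed to be freshly secure-erased so the spare area is actually clean:

```
# create a GPT and use only 20GB of the 64GB SLC drive; the unpartitioned
# remainder stays available to the controller for wear levelling
gpart create -s gpt ada1
gpart add -t freebsd-zfs -s 20G -l slog0 ada1
```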
|
|
If you guys use CrashPlan, listen here... I run CrashPlan in a Docker container that auto-updates itself, and when it does I usually go a week or two without resetting the VM memory options to 4g, so it crashes constantly and takes forever to catch up. This last time it was saying 1.4 - 2.8 years to catch up, so I got sick of it, poked and prodded, and came across <dataDeDupAutoMaxFileSizeForWan> in the my.service.xml file. It seems to specify the max file size before it'll try to dedup to the CrashPlan server. It's set to 0 by default, i.e. all files. I set it to 1. My throughput went from 4-600kps to 30Mbps. I was trying to paste a Datadog graph but imgur kept saying that it couldn't take it... I'm uploading 3-4 megabytes a second, and the time to catch up dropped to 23 days; after a day it's now 19 days.
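For anyone wanting to make the same change, it boils down to flipping one element in my.service.xml; a sketch of the before/after (the exact position of the element inside the file may vary, so treat the surrounding context as an assumption):

```
<!-- before: 0 means attempt dedup for every file, no matter how large -->
<dataDeDupAutoMaxFileSizeForWan>0</dataDeDupAutoMaxFileSizeForWan>

<!-- after: only dedup files up to 1 byte, i.e. effectively skip client-side dedup -->
<dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan>
```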
|
|
Which Docker container are you using? This one seems to remember my java mx setting across reboots and updates.
|
|
IOwnCalculus posted:Which Docker container are you using? This one seems to remember my java mx setting across reboots and updates.

jrcs/crashplan. I considered that one, but didn't want the client portion. I may reconsider it when this update is done, though...
|
|
That's the one I used to use, and yeah, it forgot the max RAM setting every time it restarted. I don't know what black magic the one with the client uses, but it doesn't have that problem, and it's easier to manage anyway.
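If you want to check whether the heap bump survived an update, the engine's -Xmx normally lives in run.conf under CrashPlan's install directory; a hedged sketch, where the container name and the stock /usr/local/crashplan path are assumptions about your setup:

```
# peek at the current -Xmx the engine was started with
docker exec crashplan sh -c 'grep -o -e "-Xmx[0-9]*[mg]" /usr/local/crashplan/bin/run.conf'

# bump it to 4GB and restart the container so the engine picks it up
docker exec crashplan sh -c 'sed -i "s/-Xmx[0-9]*[mg]/-Xmx4096m/" /usr/local/crashplan/bin/run.conf'
docker restart crashplan
```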
|
|
That's weird. I've never messed with it and CrashPlan always maxes out my measly 5mbps upload.
|
|
D. Ebdrup posted:That's my #1 reason for building a new server with two overprovisioned SLC SSDs and 2 10G SFP+ modules that support iSCSI boot: a completely diskless system with zpool-sized storage and SSD-like access speeds, plus all the data redundancy and protection that ZFS offers.

The question is whether you can maintain near-local-SSD latency to begin with. I haven't tried with the new FreeNAS 10 setup, though.
|
|
Thermopyle posted:That's weird. I've never messed with it and CrashPlan always maxes out my measly 5mbps upload.

It only becomes an issue when you pass 4-5 TB of data. Below that, the stock 1GB heap is sufficient.
|
|
I've got over 10TB up there.
|
|
I spent a chunk of my weekend looking at OSes. FreeNAS Corral is bad. I had an easier time setting up in 9. The UI is a little unresponsive (updating the hostname took longer than it should have because the save button seemed not to work), and the "wizard" is pretty barebones for anyone new to building a server with it, as there's no help or tooltips. You're pretty much left to your own devices with a currently barebones wiki, unlike previous versions' online documentation. Having to deal with the UI is just a bad time. What a step back. I'm sure it'll be better down the road, but I'm not looking at using it anytime soon.
|
|
Thermopyle posted:I've got over 10TB up there.

How long did it take you to get it uploaded?
|
|
Combat Pretzel posted:I've investigated doing that, but I can't get decent performance out of FreeNAS. With SMB I can saturate the 10GbE link when the cache is hot, maybe 200-300MB/s on a cold cache (depending on how ZFS sprayed the data across my quasi-RAID10). I'd be happy if I got more than 150MB/s on iSCSI (4K logical blocks, 8K volblocksize).

D. Ebdrup fucked around with this message at 22:19 on Mar 27, 2017 |
|
I matched the cluster size to the volblocksize. I'm not that interested in write speeds (I suppose it'd be helpful for compiling stuff); read speeds are what I'm looking for. Might want to try 16K blocks, but I doubt it'll do much. Picked 8K because it fits within a jumbo frame.

--edit: Which reminds me, I can't create iSCSI shares on existing ZVOLs anymore/yet on FreeNAS 10, so that experiment is out for now.

Combat Pretzel fucked around with this message at 22:27 on Mar 27, 2017 |
|
Latency is going to be limited by whatever interface you're using, which is why I'm planning on using 10G SFP+: it has an order of magnitude lower latency than RJ45 with a direct connection between two machines, instead of going over a store-and-forward switch (since cut-through switches aren't really available for anything but SAN prices).

Matching volblocksize isn't about read speed; it's about ensuring that when Windows tries to write 4096 bytes (which is what my Windows defaults to), it actually only writes that 4096-byte block and doesn't write a 16k block or a 256k block.

Jumbo frames just mean that the NIC can use up to a 9k MTU per packet, instead of ~1500 (which is the default for WAN; lower on PPP and many other technologies - but since you're not sending traffic onto a WAN link, you can completely ignore that and set it as high as your NICs will allow). Programs will attempt to use as much MTU as the lowest MTU in the path will allow.

I can't really say where you're getting bottlenecked on read speeds - what do the CPU utilization of the top process and the load averages look like? If you were running FreeBSD I'd suggest using dtrace to create some flamegraphs to look at what the kernel and the other processes involved are doing most of the time, but FreeNAS is pretty damn resistant to that kind of debugging. It's mostly set-and-forget, and not designed for what you're trying to do.

D. Ebdrup fucked around with this message at 22:31 on Mar 27, 2017 |
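To make the block-matching part concrete, here's a minimal sketch of carving out a zvol whose volblocksize matches the client's 4K writes before pointing an iSCSI extent at it (pool and zvol names are placeholders; volblocksize can only be set at creation time):

```
# create a 200G zvol with a 4K volblocksize to match the initiator's 4K writes
zfs create -V 200G -o volblocksize=4K tank/gaming-vm

# confirm the property stuck; it cannot be changed after creation
zfs get volblocksize tank/gaming-vm
```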
|
I have Intel X520 cards here, which are SFP+, and my NAS is hooked up directly via DAC. As edited above, experimentation is out for now due to bullshit on FreeNAS 10's part. Creating an iSCSI share in its current state will net me a volblocksize of 4KB, which is useless. (--edit: Oh hey, it defaults to 8K anyway. Still, I want 16K.)
|
|
Yeah, I can't really help you with FreeNAS. To me, it's a good appliance for doing SMB and NFS sharing on ZFS, but if you want anything above that you're at the mercy of what iXsystems want to focus on, and not what FreeBSD is capable of.
|
|
Yea, well, the link certainly ain't the problem.

That said, I don't have an L2ARC (yet). Looks like I'll need an L2ARC and an SLOG. Redoing this box for iSCSI boot would require new hardware anyway.
|
|
Thanks Ants posted:How long did it take you to get it uploaded?

Hard to say for sure, as I added stuff over time.
|
|
Combat Pretzel posted:Yea, well, the link certainly ain't the problem.

The 32GB Optane drives also look very interesting as SLOG devices for anyone doing ZFS, unless you're looking to fill 40Gbps or 100Gbps links.
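Adding those after the fact is straightforward; a minimal sketch with placeholder device names (the SLOG only helps sync writes such as iSCSI/NFS, mirroring it is a common precaution, and losing an L2ARC device never risks pool data):

```
# add an L2ARC (read cache) device
zpool add tank cache nvd0

# add a mirrored SLOG for sync writes
zpool add tank log mirror nvd1 nvd2

zpool status tank
```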
|
|
At some point it becomes cheaper to run a small, fast SSD (or a mirror thereof) instead of buying more RAM. 170 bytes of ARC per 16K of L2ARC (the ideal volblocksize for iSCSI) looks like a nice trade-off. Certainly worthwhile at 32GB of RAM. If you're going to use the server as a remote disk, with the goal of SSD-like performance, you want as big a read cache as possible, regardless of ARC hits.

--edit: With the 16K example above, if you had 8GB out of these 32GB in L2ARC pointers, that maps to more than 768GB of L2ARC.

Combat Pretzel fucked around with this message at 17:38 on Mar 28, 2017 |
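A quick back-of-the-envelope check of that figure, assuming the ~170 bytes of ARC header per L2ARC record quoted above (plain bash arithmetic, nothing more):

```
# 8 GiB of ARC headers / 170 bytes per record = ~50.5M records
# 50.5M records * 16 KiB per record = ~770 GiB of L2ARC indexed
echo $(( (8 * 1024**3 / 170) * 16 / 1024 / 1024 ))   # prints 771 (GiB)
```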
|
I'm not entirely sure where you got the idea that 16k volblocksize is ideal for iSCSI - could you provide some sources? Because as far as I know, it's all about matching the block width with the block size your application is writing, as described by skiselkov@nexenta, who commits to OpenZFS.

ARC and L2ARC don't map out files, though; they map out individual blocks based on their use, and only by looking at what's actually used rather than acting as a read-ahead cache.

D. Ebdrup fucked around with this message at 18:47 on Mar 28, 2017 |