The Milkman
Jun 22, 2003

No one here is alone,
satellites in every home


Lipstick Apathy

Hughlander posted:

Just use the --ip option to docker run.

Is there a way to invoke zeroconf (or whatever) magicks to give containers a .local address?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Jesus fucking Christ, the iSCSI sharing UI is a huge fucking regression in FreeNAS 10. There's no way in the UI to share pre-existing ZVOLs (--edit: or their fancy schmancy CLI for that matter). I get that the whole setup in 9 was kind of annoying, but this simplification kind of goes too far the other way.

Combat Pretzel fucked around with this message at 16:54 on Mar 18, 2017

Hughlander
May 11, 2005



The Milkman posted:

Is there a way to invoke zeroconf (or whatever) magicks to give containers a .local address?

I don't know of one. I looked briefly at setting up a Docker CUPS container for AirPrint and just said fuck it and did it on the base VM.
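For what it's worth, the --ip option mentioned above only takes effect on a user-defined network, not the default bridge, so the closest you get is a fixed address rather than a .local name. A rough sketch (subnet, network name, and image are just examples):

code:
docker network create --subnet=172.20.0.0/24 nasnet
docker run -d --name plex --net nasnet --ip 172.20.0.50 plexinc/pms-docker

For actual .local names you'd probably need something like an avahi-daemon container on the host network, which is likely more trouble than it's worth.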

caberham
Mar 18, 2009

by Smythe


Grimey Drawer

So ... plex works on corral no problem? I do have a box lying around but haven't upgraded yet

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.


I haven't added any of my movies to Plex yet, but it installed just fine. If it's your only Docker, though, make sure you bump the RAM and CPU count up on the VM, because the default (I'm pretty sure) is 1 core and 2GB of RAM, which won't get you very far with transcoding.

Edit: Added my movie and TV collections to Plex, working like a champ. Getting ready to bring my NAS down fully to see if everything comes up cleanly on the next boot, now that I've got my important containers fully configured and running.

G-Prime fucked around with this message at 03:49 on Mar 19, 2017

mayodreams
Jul 4, 2003


Hello darkness,
my old friend


Another instance of FreeNAS Corral clusterfuckery after upgrading from the most recent 9.10 release.

I had 3 different iSCSI groups to segregate the LUNs, and those imported as authentication groups with no settings. I had to fuck around for longer than I'd like, with little info from the forums and the documentation on how to fix the problem.

tl;dr
1) Delete the imported portal group
2) Set the portal to default on each share
3) Set the authentication to 'no-authentication' on each share
4) Refresh/discover targets
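On the initiator side, step 4 from a Linux box is roughly this (the portal address is a placeholder; on Windows it's the Refresh button in the iSCSI Initiator control panel):

code:
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node --login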

I also had an issue with the LUN attached to a Windows 8.1 box: it was marked as read-only in /etc/ctl.conf but was not denoted as such in the GUI, and I could not set that flag via the new CLI. I had to delete the iSCSI share from the GUI (and NOT the ZVOL as it recommends) and create a new one, which was created properly and did not get set read-only.

I really feel this was not ready for a release when shit like iSCSI is completely gimped on import and in the GUI, let alone the complete lack of documentation for iSCSI in the CLI docs.

caberham
Mar 18, 2009

by Smythe


Grimey Drawer

So it seems like the upgrade is a bigger liability for power users.

So how does the Docker interface work? Can you select different apps to run in them, and does each dock run a single different instance?

One dock is for Plex, one dock is for... I don't know, a RADIUS server or something, etc.

Can't wait to go home and install it into my new box

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.


Not trying to be rude, just wanting to be helpful. Docker is the name of the tool; it runs containers, not docks. I should have been more explicit in my posts and not mixed up the two anyway.

In FreeNAS, there's a whole dedicated panel for Docker. Within that, you can have Docker Hosts. Those are virtual machines running under bhyve, which spin up boot2docker (a customized version of Linux). You need at least one, but can have multiple if you want. Then there are the containers themselves. They're similar to what you're used to with jails, but don't function quite the same. Generally speaking, a single container is intended to run a single app, like Plex, Sonarr, NZBGet, etc. Functionally, though, they can run whatever you want. The prepackaged plugin-style ones in the official repos (which FreeNAS refers to as Collections) are almost all these single-app ones, but they also have Debian, Ubuntu, Arch, CentOS, and Gentoo, in case you want to spin up a complete Linux install. I personally wouldn't recommend that for daily use, but you CAN do it. You can write your own container definitions pretty easily, there are instructions for that up on the FreeNAS wiki, and Docker is a pretty well-documented tool outside of the FreeNAS world if you need more info.
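To give a feel for what's underneath those Collection entries, running one of the single-app containers by hand from the Docker host looks something like this (paths, port, and image here follow the usual linuxserver conventions and are just examples, not the exact FreeNAS template):

code:
docker run -d --name sonarr \
  -p 8989:8989 \
  -v /mnt/tank/apps/sonarr:/config \
  -v /mnt/tank/media/tv:/tv \
  linuxserver/sonarr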

Going back to VMs, you can have multiple Docker VMs if you want, but don't have to. The default VM config is 1 CPU and 2GB of RAM, which won't get you very far with the types of containers they offer, so it's wise to bump that up. Those things aside, you can freely run VMs through bhyve yourself as well. For example, I've got a standalone VM to use as a bastion host so I can SSH into my home network, and use that to jump to other machines. This is similar to iohyve in 9.10, and vaguely like the Virtualbox jail from 9.3, but it's far more slick and just works.
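The bastion part is nothing exotic, just a plain SSH jump through that VM, something like this (host names made up):

code:
ssh -J me@bastion.example.com me@192.168.1.20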

Addressing your other point, power users are absolutely the ones that get bitten in the ass by this upgrade. The more plugins you were running before the upgrade, the more of a pain it is, unless you want to throw them away and reconfigure from scratch. I spent a few hours yesterday migrating settings from several of my jails to be used in Docker. If you don't mind losing all your currently running torrents, for example, spinning up Transmission or Deluge is incredibly easy. I went through that hassle, and it wasn't fun, but it was doable. Did the same with NZBGet and Sonarr. By the time I got to Couchpotato, I decided I wanted to try out Radarr instead, so I started that one from nothing, and it worked really nicely. And then I did Plex from the ground up as well, because somebody in the thread asked about it.

Pre-upgrade, I had about 20 jails. I'm currently running 6 containers and 1 separate VM, which should fulfill my needs for now.

Edit: I'm not certain yet, but it looks like Plex might have a memory leak, at least in the version they've got in the collection right now. It was spewing tons of memory allocation errors, and my Docker VM insisted there was 21GB of RAM in use despite me allocating 16 to it. I just got around to doing the reboot I intended to do last night, and everything looks fine right now, but that might be relevant to folks.

G-Prime fucked around with this message at 14:10 on Mar 19, 2017

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

I'm still miffed that I can't create iSCSI shares on existing ZVOLs. I have to mount the old ones on the CLI, copy the shit over into some ZFS filesystem, copy it to my computer over the network, recreate the iSCSI shares, and copy it back (I don't trust FreeNAS' NTFS driver for writing data). I can click on Import Shares until I turn blue, it won't detect the old iSCSI ones (even though some comment on their bug tracker suggests that it should). Luckily there isn't much important data.
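If anyone else ends up doing the same dance, the manual read-only mount of an old ZVOL is roughly this, assuming ntfs-3g is available on the box (pool/zvol names are placeholders, and you may need the pN suffix on the device if the LUN had a partition table; make sure the FUSE kernel module is loaded first):

code:
ntfs-3g -o ro /dev/zvol/tank/oldlun /mnt/oldlun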

Matt Zerella
Oct 7, 2002


Kinda happy I went with unraid after seeing all this.

eames
May 9, 2009



Matt Zerella posted:

Kinda happy I went with unraid after seeing all this.



I played around with an early beta of FreeNAS 10 and wasn't that happy; it felt like they were trying to morph it into a Synology point-and-click solution. Checked out unRAID and never looked back.
ZFS would have been nice, but the tiered SSD caching, flexible RAID modes and flawless Docker/KVM virtualization options far outweigh ZFS' checksumming and snapshotting for my application.

Thanks Ants
May 21, 2004

Bless You Ants, Blants



Fun Shoe

I think I'm going to stick with 9.10 for a few months and give a chance for things to mature a bit more

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

I just created a ZFS dataset and then shared it via SMB in FreeNAS 10. I can't write shit to it. Yet the shares I imported by hand after a failed upgrade work fine. How the fuck is this even a release candidate?!

--edit: Some more permission bullshit. If they're going to have this fancy wizard UI sort of stuff, it might be nice to check permissions and warn the user first.

Hughlander
May 11, 2005



Thanks Ants posted:

I think I'm going to stick with 9.10 for a few months and give a chance for things to mature a bit more

That's my plan, though I may drop ESXi from my solution. Right now I have a 32 gig Xeon running ESXi: the LSI and 16 gigs get passed through to FreeNAS, 4 gigs are committed to Plex, 12 gigs (8 committed) go to a Docker host running 15 containers, and the rest goes to ad hoc Windows / OS X / Linux servers. I could see just doing one boot2docker with 24 gigs on FreeNAS 10 and simplifying it.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.


Hughlander posted:

I could see just doing one boot2docker with 24 gigs on FreeNAS 10 and simplifying it.

I just bumped mine from 16 to 20, probably going to need to head toward 24 soon. I've determined at this point that with my library size, Plex absolutely dominates my system. Radarr looks like it's hitting things pretty hard too. If I keep Plex running, Deluge just crashes repeatedly. It's brutal. I'm going to have to reassess how I'm going to handle things.

Hughlander
May 11, 2005



G-Prime posted:

I just bumped mine from 16 to 20, probably going to need to head toward 24 soon. I've determined at this point that with my library size, Plex absolutely dominates my system. Radarr looks like it's hitting things pretty hard too. If I keep Plex running, Deluge just crashes repeatedly. It's brutal. I'm going to have to reassess how I'm going to handle things.

That was why I specified different VMs originally, so I could control the resource allocation. Using Jackett, Sonarr, and Radarr on Linux is a huge sink (all Mono). You can also specify memory and CPU limits per container, but I haven't played much there.
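The per-container limits are just flags on docker run (or docker update for a container that's already running); the image name here is just an example:

code:
docker run -d --name radarr --memory=1g --cpu-shares=512 linuxserver/radarr
docker stats --no-stream radarr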

phosdex
Dec 16, 2005



Tortured By Flan

Thanks Ants posted:

I think I'm going to stick with 9.10 for a few months and give a chance for things to mature a bit more

Same. I installed 10 on my ESXi box to mess around a bit, but on my real box I'm gonna hold off on upgrading until things get a bit more stable.

D. Ebdrup
Mar 13, 2009



Hughlander posted:

That was why I specified different VMs originally, so I could control the resource allocation. Using Jackett, Sonarr, and Radarr on Linux is a huge sink (all Mono). You can also specify memory and CPU limits per container, but I haven't played much there.

FreeBSD's jails work well with rctl for resource limits and ZFS datasets for disk quotas.
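A minimal sketch of that combination, with a made-up jail and dataset name (rctl needs kern.racct.enable=1 set at boot):

code:
rctl -a jail:mediajail:memoryuse:deny=2G
zfs set quota=50G tank/jails/mediajail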

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.


I'm super confused at this point. Checked top on the boot2docker VM, it shows that I'm capping out RAM, at least numerically, though the actual bar graph is empty. The Docker stats command disagrees. Threw cAdvisor into the mix, and it insists that I'm using less than 3GB of RAM, which aligns with what docker stats says. Can't say I've seen behavior like this before, in my time working with Docker.

Edit: And I don't know if I made this clear before, but Plex was throwing memory allocation errors left and right, and Deluge was crashing at the same time, so something's clearly wrong, I just can't figure out what.

G-Prime fucked around with this message at 13:16 on Mar 20, 2017

robostac
Sep 23, 2009


I think it's just a display issue with how memory allocation works. Looking on mine, top shows my memory usage as (number of docker containers * total memory), which suggests that it's preallocating all of the memory for each container, but until the pages are accessed they're not actually in use. Pressing 'm' gets it to show that only 1.3GB is actually used, which agrees with docker stats. I've not seen any issues with memory (I've allowed 4GB for plex/transmission/unifi/sonarr).
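In other words, the numbers to trust are these rather than top's headline figure:

code:
docker stats --no-stream      # actual per-container usage
top                           # then press 'm' to change the memory readout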

Super Slash
Feb 20, 2006

You rang ?

I've got my eye on a Synology RS815RP for a specific backup purpose. What I need to know, without having used one, is whether it can perform regular rsync jobs unattended against a remote server, or otherwise use SSH keys?

Droo
Jun 25, 2003



Super Slash posted:

I've got my eye on a Synology RS815RP for a specific backup purpose. What I need to know, without having used one, is whether it can perform regular rsync jobs unattended against a remote server, or otherwise use SSH keys?

Yes, it has everything necessary to do that included in the basic DSM software.
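The unattended bit is just DSM's Task Scheduler (or cron) kicking off plain rsync over SSH with a key; roughly this, with made-up host, key, and paths:

code:
rsync -az -e "ssh -i /root/.ssh/backup_key" /volume1/share/ backupuser@nas.example.com:/backups/share/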

Paul MaudDib
May 3, 2006

"Tell me of your home world, Usul"


Everyone here recommends the TS120 (or is it the TS140?) for low-end NAS builds. Is there any old dual-socket LGA2011 or 2011v3 stuff that's priced as attractively? Rackmount would be OK but if it was a tower case it should also be a standard size that I could tear out and throw in a rack chassis if I wanted to (ATX, EATX, EEB, whatever).

I would also love any recommendations on any older-but-still-viable Xeons if there's something particularly cheap out there. Probably nothing older than Sandybridge-E though.

E-ATX and SSI EEB motherboards are the same size. Can you interchange them? Mostly I'd want to put an E-ATX board in an EEB case, but just for my edification, let me know about compatibility either way if you know. I would think the primary difference (if any) would be the mounting screw pattern?

Paul MaudDib fucked around with this message at 01:51 on Mar 22, 2017

SamDabbers
May 26, 2003



Fallen Rib

Relatively inexpensive dual socket LGA2011, you say? E5-2670s are the budget barnburner of choice these days if you want lots of cores and PCIe lanes.

Also,

Wikipedia posted:

While E-ATX and SSI EEB share the same dimensions, the screw holes of the two standards do not all align; rendering them incompatible.

The Phanteks Enthoo Pro is an inexpensive full tower case that explicitly supports SSI EEB as well as E-ATX and smaller.

Matt Zerella
Oct 7, 2002


4TB WD Reds for $140 at Newegg

https://m.newegg.com/products/22-236-599

Paul MaudDib
May 3, 2006

"Tell me of your home world, Usul"


I'm looking at InfiniBand QDR adapters (QSFP). I'm leaning toward the Mellanox ones because they were used in Sun boxes and might have better ZFS support. The other thing I can find fairly cheaply is various models from HP. Any thoughts/experiences?

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!

Grimey Drawer

According to a guy on the FB Synology Admins & Users group, Synology will be updating the Virtual Machine Manager package to allow it to host Linux and Windows VMs as well as virtual DSM. It was demonstrated at CeBIT this week. This is great news for Synology users!

movax
Aug 30, 2008



Alright, I've decided to move up my NAS build because I have a finite amount of time to hit credit card spending for sweet, sweet rewards points that I've forgotten about.

Here's what I'm thinking:

FreeNAS on an ESXi host, running:

E3-1230v5
Supermicro X11SSL-CF
2x16GB DDR4 ECC (might do 4x16GB to just max it out now, not sure if UDIMM prices will track up or down over time)
Fractal Node 804
2x Samsung 850 for...something, either hosting disks for other VMs, or ZIL/L2ARC
8x WD80EZFX (RAID-Z2 on the SAS3008 passed through to FreeNAS)
Some TBD Corsair PSU probably
Some TBD fans if the Fractal's stock fans are too loud

I don't think I'm really tied to Skylake vs. Haswell, but I couldn't find massive savings picking up used 1150 Haswell gear. I decided to puss out on the QNAP solution because I like the scrubbing / checksumming I get with ZFS, and then ECC is the extra icing on the cake.

The mobo gives me the 8 SATA ports on the LSI 3008 I want (with 8TB drives, I don't see needing to go more for awhile), and I've got PCIe slots left if I wanted to add another HBA, or a 10G card if I ever upgrade to it. Anything stupid / weird / etc I'm doing here?

Qs:
1. What's the current wisdom on L2ARC/ZIL sizing? I won't be able to pass-through the raw controller to FreeNAS (unless I bought another HBA for some reason), but I think I can pass through an entire drive from ESXi, right?
2. SuperDOM is the way to go for the ESXi installation, right?
3. Dummy's guide to assigning cores/RAMs to VM? Predicted VM load: domain controller, Plex, Linux VM (hosting Postgre/engineering software) is about it.

movax fucked around with this message at 18:26 on Mar 23, 2017

Mr. Crow
May 22, 2008

Snap City mayor for life


Make sure your hardware is in the ESXi compatibility guide; I spent several weeks struggling with ESXi and other hypervisors because my hardware was not strictly supported.

It ran ok on the surface but there were a bunch of irritating issues that were not immediately obvious. Ended up doing a hyper-v server which is working very well (though not a very good UX... Still better than xen).

VMWare has been stripping compatibility out of each new version of ESXi released; which is all the more reason to make sure your hardware is compatible.

I use ESXi at work and it's lovely (Dell rack servers, of course they're supported), trying to do a home build was a nightmare.


To answer some of your questions: no, you can only pass through the whole controller in ESXi; Hyper-V supports individual drive passthrough.

Assigning cores and memory is pretty intuitive, just don't assign more cores than you have. I think the general advice is to never give more than half your cores to a single VM; the hypervisors are pretty good about dividing them up to what is available. ESXi supports memory overcommit, so VMs only use what they need (e.g. you can assign 64G across all your VMs even if you only have 32G, and as long as they aren't all at capacity it should generally work fine), though you can reserve chunks of it if needed.

Mr. Crow fucked around with this message at 19:27 on Mar 23, 2017

Matt Zerella
Oct 7, 2002


movax posted:

Alright, I've decided to move up my NAS build because I have a finite amount of time to hit credit card spending for sweet, sweet rewards points that I've forgotten about.

Here's what I'm thinking:

FreeNAS on an ESXi host, running:

E3-1230v5
Supermicro X11SSL-CF
2x16GB DDR4 ECC (might do 4x16GB to just max it out now, not sure if UDIMM prices will track up or down over time)
Fractal Node 804
2x Samsung 850 for...something, either hosting disks for other VMs, or ZIL/L2ARC
8x WD80EZFX (RAID-Z2 on the SAS3008 passed through to FreeNAS)
Some TBD Corsair PSU probably
Some TBD fans if the Fractal's stock fans are too loud

I don't think I'm really tied to Skylake vs. Haswell, but I couldn't find massive savings picking up used 1150 Haswell gear. I decided to puss out on the QNAP solution because I like the scrubbing / checksumming I get with ZFS, and then ECC is the extra icing on the cake.

The mobo gives me the 8 SATA ports on the LSI 3008 I want (with 8TB drives, I don't see needing to go more for awhile), and I've got PCIe slots left if I wanted to add another HBA, or a 10G card if I ever upgrade to it. Anything stupid / weird / etc I'm doing here?

Qs:
1. What's the current wisdom on L2ARC/ZIL sizing? I won't be able to pass-through the raw controller to FreeNAS (unless I bought another HBA for some reason), but I think I can pass through an entire drive from ESXi, right?
2. SuperDOM is the way to go for the ESXi installation, right?
3. Dummy's guide to assigning cores/RAMs to VM? Predicted VM load: domain controller, Plex, Linux VM (hosting Postgre/engineering software) is about it.

Skip FreeNAS/ECC and go with unRAID because it's The Best and who cares about bit rot at home.

Paul MaudDib
May 3, 2006

"Tell me of your home world, Usul"


SamDabbers posted:

Relatively inexpensive dual socket LGA2011, you say? E5-2670s are the budget barnburner of choice these days if you want lots of cores and PCIe lanes.

Hmm. I looked up the performance on these and we're only talking the same performance as a 7700K or 5820K at stock speeds (even in things that hit all the cores).

I realize you get two sockets per mobo, and the point is lanes and cores, not single-thread performance... but these mobos also raise some other configuration problems. The absolute best I'm going to do is three dual-slot cards on a dual-LGA-2011 motherboard, and it's going to be a stretch to get InfiniBand in too even then. And three is weird, because you probably want two on one socket and one on the other plus InfiniBand - so you have non-uniform cards per MPI node and slightly asymmetrical latency to hit InfiniBand. And the PCIe lanes are going to be a little asymmetrical no matter how you slice it.

Not really their fault, I don't see any extra space on a dual-socket board to play around with.

The other configuration I was looking at was single-processor X99 - the EVGA X99 Classified has a slot configuration that would be suitable for running three GPUs plus infiniband plus M.2 on a single socket. The only catch would be one of the cards gets dropped to x4. Or if you had a pair of single-slot cards you could start really stacking them in there - but you could do that with the server boards too.

I wish Galax would come out with that single-slot 1070, that would make this whole thing a lot easier. At least I don't need low profile too - but the only way to get single-slot right now is putting a waterblock on each card and I'm not quite that ready to open the bloodgates yet.

That or a motherboard with QDR infiniband right onboard to save me a slot. And so far I haven't seen anything remotely cheap there.

So I guess right now I'm thinking I either have to put up with non-uniform nodes, I limit it to one GPU per CPU socket, or I start specializing into "GPU nodes" and "CPU nodes". I guess that's why there's specialized GPU compute blades.

movax
Aug 30, 2008



Mr. Crow posted:

Make sure your hardware is in the ESXi compatibility guide; I spent several weeks struggling with ESXi and other hypervisors because my hardware was not strictly supported.

It ran ok on the surface but there were a bunch of irritating issues that were not immediately obvious. Ended up doing a hyper-v server which is working very well (though not a very good UX... Still better than xen).

VMWare has been stripping compatibility out of each new version of ESXi released; which is all the more reason to make sure your hardware is compatible.

I use ESXi at work and it's lovely (Dell rack servers, of course they're supported), trying to do a home build was a nightmare.


To answer some of your questions: no, you can only pass through the whole controller in ESXi; Hyper-V supports individual drive passthrough.

Assigning cores and memory is pretty intuitive, just don't assign more cores than you have. I think the general advice is to never give more than half your cores to a single VM; the hypervisors are pretty good about dividing them up to what is available. ESXi supports memory overcommit, so VMs only use what they need (e.g. you can assign 64G across all your VMs even if you only have 32G, and as long as they aren't all at capacity it should generally work fine), though you can reserve chunks of it if needed.

Ugh (re: ESXi compatibility), don't want to have to deal with quirkiness if I can avoid it. Maybe I can scrape the ESXi HCL and see what white-box SuperMicro servers based on the X11SSL-CF there are. What are the major hardware issues, though? For PCIe devices that I'm going to pass-through via VT-d, the hypervisor won't touch them. The C232/C236 PCHs are basically required to be supported as that's all you can get from Intel, and I want to hope / assume the Intel NICs are also quite well supported (since I'm not using something like the X550 or similar). Maybe if it's something I can live without, I'll be OK. I had a good time with an ESXi install on an older Ivy Bridge board -- my issues there were OSes not having support for the latest vmxnet Ethernet device.

Any folks here built with the Fractal 804?

phosdex
Dec 16, 2005



Tortured By Flan

My FreeNAS box is in an 804; it's a nice case, typical Fractal quality. I did purchase one more of the 120 or 140mm fans (whatever size it is that goes in the front) for the drive half of the case. I ran this for a bit as an ESXi box with virtualized FreeNAS, and had 10 drives in there then.

Actuarial Fables
Jul 29, 2014



Taco Defender

movax posted:

Any folks here built with the Fractal 804?

The largest problem I have with the 804 is that the drive sled above the PSU doesn't have that much room for HDD cables. I wish I had gone with angled power/data connections, but it's still workable.

Hughlander
May 11, 2005



Matt Zerella posted:

Skip FreeNAS/ECC and go with unRAID because it's The Best and who cares about bit rot at home.

My counterpoint: It's like $40 more for ECC, and until unRAID gets snapshots and multiple parity drives it's not worth considering with 64TB of raw storage.

Matt Zerella
Oct 7, 2002


Hughlander posted:

My counterpoint: It's like $40 more for ECC, and until unRAID gets snapshots and multiple parity drives it's not worth considering with 64TB of raw storage.

You can do multiple parity with unraid. 2 counts right?

Or you can do SnapRAID/OpenMediaVault and get native dockers/kvm?

I dunno, the new FreeNAS is really slick but after playing with it and the mental gymnastics you have to do with ZFS, it just seems kind of silly for home use unless you're doing home lab stuff where you need the features. Just my 2 cents.

I've been enamored with how easy unRAID has been in terms of setting and forgetting, with the only PITA being preclearing drives. And maybe with 64TB of storage that might take forever.

Hughlander
May 11, 2005



Matt Zerella posted:

You can do multiple parity with unraid. 2 counts right?

Or you can do SnapRAID/OpenMediaVault and get native dockers/kvm?

I dunno, the new FreeNAS is really slick but after playing with it and the mental gymnastics you have to do with ZFS, it just seems kind of silly for home use unless you're doing home lab stuff where you need the features. Just my 2 cents.

I've been enamored with how easy unRAID has been in terms of setting and forgetting, with the only PITA being preclearing drives. And maybe with 64TB of storage that might take forever.

I did a quick google and it said it only supported one parity drive. If that's wrong I'll retract that. I'd also argue that 64TB is homelab. But the whole setting and forgetting is also why I'd go FreeNAS with ECC, ZFS Z2, overlapping snapshots, and CrashPlan. The only time I even think about it is when I need to mount a snapshot 'cuz calibre shat the bed again.
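For anyone curious, the "mounting a snapshot" bit is barely even a step, since every dataset exposes its snapshots under the hidden .zfs directory (dataset and snapshot names here are made up):

code:
zfs list -t snapshot -r tank/calibre
ls /mnt/tank/calibre/.zfs/snapshot/auto-20170320.0000-2w/
zfs clone tank/calibre@auto-20170320.0000-2w tank/calibre-recovered   # only if you need it writable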

Matt Zerella
Oct 7, 2002


That's fair I guess. With 64TB, expansion is probably not the priority.

Mr. Crow
May 22, 2008

Snap City mayor for life


movax posted:

Ugh (re: ESXi compatibility), don't want to have to deal with quirkiness if I can avoid it. Maybe I can scrape the ESXi HCL and see what white-box SuperMicro servers based on the X11SSL-CF there are. What are the major hardware issues, though? For PCIe devices that I'm going to pass-through via VT-d, the hypervisor won't touch them. The C232/C236 PCHs are basically required to be supported as that's all you can get from Intel, and I want to hope / assume the Intel NICs are also quite well supported (since I'm not using something like the X550 or similar). Maybe if it's something I can live without, I'll be OK. I had a good time with an ESXi install on an older Ivy Bridge board -- my issues there were OSes not having support for the latest vmxnet Ethernet device.

Any folks here built with the Fractal 804?

I had issues getting it to even see my SATA controller with the new versions; I had to do some hacking in some file and manually add the manufacturer and model numbers, and do other nonsense. Then it didn't like my Crucial MX500 (300? One of the newest) and would constantly lose connection to it during write-intensive stuff, tanking performance. Tried older versions, which had different issues entirely.

To be fair, it was my first home server build, so I didn't really pay attention to some things I should have. In hindsight I would have gone with a Supermicro board. I think you'll probably be fine with one, but it's a huge headache if you're not.

I got an Asus motherboard and a lot of people had blogs on success with the mATX version of what I got; I saw the full ATX version had like 6 PCI slots and 14! SATA ports and jumped on it. Turns out ESXi doesn't even support Asus at all anymore and has limited support for their older stuff so... Yea. Bad luck of the draw.

Hopefully by the time I'm buying another server I'll have a house and I can just set up a rack in a closet and forget about it.

D. Ebdrup
Mar 13, 2009



It's not as if ZFS can't be expanded, either - you just can't add individual drives to an existing raidz vdev. If you do want to expand your zpool, you have two options: replace drives with bigger drives (and resilver between each replacement), or just add another vdev with its own parity configuration, striped into the pool.
If you're a bit forward-thinking and smart about buying used server equipment, you can get a Xeon board with ECC support and 4 daughterboard slots, each able to take an HBA handling 16 drives without port multipliers (and plenty more than 16 per HBA with them, since consumer or prosumer spinning rust doesn't take up all the bandwidth of SATA2, let alone SATA3) - all without breaking the bank.
If you add 4 devices at a time in raidz2 and you're careful about wiring, you can expand the zpool at least 16 times and lose any two HBAs without experiencing data loss.
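For reference, the two expansion options look roughly like this at the command line (pool and device names are placeholders):

code:
zpool add tank raidz2 da4 da5 da6 da7        # stripe in another raidz2 vdev
zpool set autoexpand=on tank                 # or grow in place...
zpool replace tank da0 da8                   # ...by swapping in bigger disks one at a time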

D. Ebdrup fucked around with this message at 20:37 on Mar 24, 2017
