|
Paul MaudDib posted:Asrock X99E-ITX/ac That is pure condensed awesome.
|
![]() |
|
I'm still in favor of the mini ITX Xeon-D boards with dual 10GbE NICs, 4 DIMM slots. Sure, you have to go up to uATX to get the SAS controllers, but 5 NICs is tough to turn away if you're going for a central storage server. http://www.asrockrack.com/general/p...#Specifications
|
![]() |
|
Thermopyle posted:There's not a care/don't-care dichotomy. There very much is a care/don't-care dichotomy. There are diminishing returns on how much you want to invest in it: you can store everything on a non-checksummed file system with no backups and it'll still give you some reliability, just not very much. Of the things I listed, ECC is probably the least critical, at least in the presence of all the others, which is why the rest of my post mentions that I think it's fine to use a repurposed desktop. But if you're building a NAS you should get ECC, and it doesn't necessarily have to cost more: buy old server parts for cheap, buy lower-end server parts (the Atom platforms were reasonable ways to get server-grade components), or pay more if you must have tons of performance on your NAS for some reason.
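Since the thread keeps circling this point, here's a minimal Python sketch of the gap being argued about: a checksumming filesystem catches a bit that flips on disk after the checksum was written, but a bit that flips in RAM before the checksum is computed gets a perfectly valid checksum. (The data and offsets are made up, and `zlib.crc32` just stands in for a filesystem checksum like ZFS's.)

```python
import zlib

# Write path: checksum computed over the in-memory buffer, then both go to disk.
block = bytearray(b"30TB of random bullshit")
stored_crc = zlib.crc32(block)

# Case 1: a bit rots on disk after the write. A scrub/read recomputes the
# checksum, it no longer matches, and the corruption is detected.
on_disk = bytearray(block)
on_disk[5] ^= 0x01
assert zlib.crc32(on_disk) != stored_crc

# Case 2: a bit flips in non-ECC RAM *before* the checksum is computed.
# The bad data gets a valid checksum and is persisted as if it were correct.
in_ram = bytearray(b"30TB of random bullshit")
in_ram[5] ^= 0x01
crc_written = zlib.crc32(in_ram)
assert zlib.crc32(in_ram) == crc_written  # nothing for the filesystem to catch
```

ECC narrows the Case 2 window; backups cover what neither catches.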
|
![]() |
|
Desuwa posted:There very much is a care/don't care dichotomy. I mean, you can keep asserting this, and then contradicting yourself, but I don't know why you would do so.
|
![]() |
|
Desuwa posted:There very much is a care/don't care dichotomy. There are diminishing returns on how much you want to invest in that, you can store everything on a non-checksummed file system with no backups and it'll still give you reliability, just not very much. "If you care about your data -> get ECC". "If you don't get ECC -> you don't care about your data". "This is a sliding scale and removing any one of the pieces makes it less reliable" is just unarguable. The question is whether people who don't invest the extra time/effort/space/whatever in ECC don't care about their data. I'd venture that they do. Desuwa posted:Of the things I listed ECC is probably the least critical, at least in the presence of all the others, which is why the rest of my post mentions that I think it's fine to use a repurposed desktop. But if you're building a NAS you should get ECC, and it doesn't necessarily have to cost more. Either by buying old server parts for cheap, lower end server parts (the atom platforms were reasonable ways to get server grade components), or paying more if you must have tons of performance on your NAS for some reason. I'd really like to see a good justification for "if you're building a NAS, you should get ECC" that doesn't apply to literally every aspect of computing, because I don't think there is one. Lots of people don't want old server parts, and want a faster system with more expansion than Atoms, but still want a NAS, largely because you can spend the extra money you'd invest in an ECC-capable platform on, y'know, drives.
|
![]() |
|
evol262 posted:That's not what a dichotomy is A lot of it comes down to hot cache and long hours of operation, and the fact that most people don't have a full backup of their 30TB of random bullshit. I got a shitty Xeon board and ECC for my OmniOS box because I had an earlier hardware failure that was impossible to diagnose, and I wanted IPMI so I could manage it and power it on/off remotely. The extra $200 or so paid for itself over the last 3 years, as it's already lasted longer than the last hand-me-down shitbox, and IPMI is handy for dealing with boot issues and whatnot.
|
![]() |
|
evol262 posted:That's not what a dichotomy is I think you're arguing something else. The idea of a class of data that I care about just enough to be completely fine with corruption from memory while not being fine with corruption from the file system is alien to me. What made you think I don't want ECC on everything though? My argument does apply to all computing systems. If not for Intel using it to force market segmentation I bet we'd be using ECC in all but the most budget systems. It might not be practical to build a new system just for ECC if I have an old desktop, but that doesn't mean that I'm not going to get ECC when I do build a new system. At best maybe I should have been less strong and said anyone who cares "should" instead of "is" a few posts ago, which I think is what you're both jumping on.
|
![]() |
|
Methylethylaldehyde posted:A lot of it comes down to hot cache and long hours of operation, and the fact that most people don't have a full backup of their 30TB of random bullshit. I got a shitty Xeon board and ECC for my OmniOS box because I had an earlier hardware failure that was impossible to diagnose, and I wanted IPMI so I could remote it and turn it on/off remotely. The extra $200 or so paid for itself over the last 3 years, as it's already lasted longer than the last hand-me-down shitbox, and IPMI is handy for dealing with boot issues and whatnot. The question is "do you need a full backup of your 30TB of random bullshit with a checksumming filesystem?" And the answer (for data integrity) is generally "no". I mean, obviously you should have backups, and OOB access is great in any case (which generally comes with ECC, but vPro is/was a thing). Desuwa posted:I think you're arguing something else. Desuwa posted:The idea of a class of data that I care about just enough to be completely fine with corruption from memory while not being fine with corruption from the file system is alien to me. Desuwa posted:What made you think I don't want ECC on everything though? My argument does apply to all computing systems. If not for Intel using it to force market segmentation I bet we'd be using ECC in all but the most budget systems. Desuwa posted:It might not be practical to build a new system just for ECC if I have an old desktop, but that doesn't mean that I'm not going to get ECC when I do build a new system. Desuwa posted:At best maybe I should have been less strong and said anyone who cares "should" instead of "is" a few posts ago, which I think is what you're both jumping on.
|
![]() |
|
evol262 posted:Let's just let what a dichotomy is drop. I said there's a dichotomy between data you care about and data you don't. I'm not sure where the confusion is. I don't expect people to have a continuum of different types of data that they're fine with different levels of corruption for. evol262 posted:It was a joke, but, uh... Registered ECC is marginally slower, so there's that. So is registered non-ECC RAM; don't get registered RAM unless you need it. And yeah, I'd choose ECC over more or bigger drives, provided I can at least meet my needs. If a person needs a high-performance NAS with tons of storage they're already dropping $2,000+, and then they can build a Ryzen system with ECC without shelling out for Xeons. Desuwa fucked around with this message at 00:55 on Aug 12, 2017 |
![]() |
|
Desuwa posted:I don't expect people to have a continuum of different types of data that they're fine with different levels of corruption for. You're wrong.
|
![]() |
|
necrobobsledder posted:I'm still in favor of the mini ITX Xeon-D boards with dual 10GbE NICs, 4 DIMM slots. Sure, you have to go up to uATX to get the SAS controllers, but 5 NICs is tough to turn away if you're going for a central storage server. http://www.asrockrack.com/general/p...#Specifications I agree, I buy a lot of the supermicro ones for general purpose test servers at work and they are fantastic little machines. Curious if Intel is doing a skylake/kabylake SoC version.
|
![]() |
|
necrobobsledder posted:I'm still in favor of the mini ITX Xeon-D boards with dual 10GbE NICs, 4 DIMM slots. Sure, you have to go up to uATX to get the SAS controllers, but 5 NICs is tough to turn away if you're going for a central storage server. http://www.asrockrack.com/general/p...#Specifications If not, Denverton is worth waiting for because it has up to 12 SATA ports and 16 ~2GHz cores without L3 cache, instead of 8 SMT cores with L3 cache and a SAS controller that may or may not be flashable to IT mode, all in the same mini-ITX form factor.
|
![]() |
|
[major tangent] I haven't read the boards for a couple of days and this thread has hugened bigly. I just want to add that if anyone decides to go with the Gigabyte X150M-ECC board that I mentioned a couple of pages ago then it needs a BIOS flash in order to use the Kaby Lake G4560. I've actually got the i3 6100 in mine because I tried a G4560 first and when it wouldn't post I drove 100 miles at 8pm on a Friday night to pick up the 6100 from a guy on eBay just to get the thing working. I've updated the BIOS now, but don't intend to swap them back, so if anyone from the UK wants an 'as new' G4560 with box and unused HSF then it's available. I realise this should go in SA Mart but I'm just not in a rush to sell it. [/major tangent]
|
![]() |
|
priznat posted:Curious if Intel is doing a skylake/kabylake SoC version. D. Ebdrup fucked around with this message at 12:36 on Aug 12, 2017 |
![]() |
|
Desuwa posted:So is registered non-ECC RAM. Don't get registered RAM unless you need it. Desuwa posted:And yeah I'd choose ECC over more or bigger drives, provided I can at least meet my needs. If a person needs a high performance NAS with tons of storage they're already dropping 2000+ dollars then they can build a Ryzen system with ECC without shelling out for xeons. I'd choose spending less money. I don't even know what to say about "$2,000+ on a NAS". InfiniBand equipment is cheap on eBay, and it's not like the CPU load on a NAS is huge unless it's also running your VMs and everything else. You can get 24TB after RAID losses, pushing 450MB/s, for under $1,000 in a small form factor, and this is the home NAS thread, not the "build a production SAN" thread. Even then, I'd get more SSDs for cache or more memory instead of CPU power, since CPUs are almost never pegged, even if they're relatively weak.
|
![]() |
|
Yeah, I've got 4 VMs running on my i3: 1 for important shit & backups, 1 running Plex, 1 running a cryptocurrency node (not mining), and 1 as a sandbox for doing various stuff, and the i3 is barely taking a stroll most of the time, unless I watch a movie through Plex and then it delivers a nice HD stream. It depends what you want to do, I suppose, but something the level of an i3 is more than adequate for me.
|
![]() |
|
Only home use case I can imagine for a powerful CPU in your NAS would be HEVC or VP9 transcoding near 30 fps for Plex. But nobody really has a client that will play that besides some Android phones and I'd bet Plex is a bit buggy so far on HEVC anyway. The disk savings sorta matter but HEVC mostly shines in lower bitrate efficiencies. Also, $2k is super easy to hit for a NAS if you bust into 8 drive 4 TB territory - the drives alone on sale now are about $1k USD with tax. A TS140 can be bought for like $250 and it'll support ECC RAM. No need to go crazy on a Xeon D or anything with a recent socket. I only obsess over size and performance and such because I move frequently and I'm willing to pay extra to avoid having to pack and unpack more equipment without resorting to cloud storage that is way, way too much in cost. My current NAS setup is an i3-4100 running in a UNAS 8-bay box and that's good enough for dedicated storage and light transcoding honestly.
|
![]() |
|
necrobobsledder posted:Only home use case I can imagine for a powerful CPU in your NAS would be HEVC or VP9 transcoding near 30 fps for Plex. But nobody really has a client that will play that besides some Android phones and I'd bet Plex is a bit buggy so far on HEVC anyway. The disk savings sorta matter but HEVC mostly shines in lower bitrate efficiencies. D. Ebdrup fucked around with this message at 16:52 on Aug 12, 2017 |
![]() |
|
necrobobsledder posted:Only home use case I can imagine for a powerful CPU in your NAS would be HEVC or VP9 transcoding near 30 fps for Plex. But nobody really has a client that will play that besides some Android phones and I'd bet Plex is a bit buggy so far on HEVC anyway. The disk savings sorta matter but HEVC mostly shines in lower bitrate efficiencies. Hotswap bays will cost a little more, but you're still at about $1,000 + tax. necrobobsledder posted:A TS140 can be bought for like $250 and it'll support ECC RAM. No need to go crazy on a Xeon D or anything with a recent socket. I only obsess over size and performance and such because I move frequently and I'm willing to pay extra to avoid having to pack and unpack more equipment without resorting to cloud storage that is way, way too much in cost. My current NAS setup is an i3-4100 running in a UNAS 8-bay box and that's good enough for dedicated storage and light transcoding honestly. I also move, sometimes to small places, which is why I don't want a full-depth 4U case or a loud system.
|
![]() |
|
Sure, 3TB drives cost less. But it's hard to find a tower case that easily holds more than 8, and rackmount ones are loud as hell. I'd much rather pay more for higher density and a nearly silent case.
|
![]() |
|
G-Prime posted:Sure, 3TB drives cost less. But it's hard to find a tower case that easily holds more than 8, and rackmount ones are loud as hell. I'd much rather pay more for higher density and a nearly silent case. Fractal makes an mATX case which holds 10 drives for ~$100, which is what I use now (after moving yet again and ditching the Norco 24-bay)
|
![]() |
|
evol262 posted:I'd choose spending less money. I don't even know what to say about "2000+ on a NAS". IB equipment is cheap on eBay, and it's not like the CPU load on a NAS is huge unless it's also running your VMs and everything else. You can get 24TB after RAID losses pushing 450mb/s under 1000 in a small form factor, and this is the home NAS thread, not the "build a production SAN" thread. I brought that up in response to you bringing up spending hundreds on Xeons or Threadripper; once you need that kind of power in your NAS you're already well into four digits and ECC ends up being a tiny part of it. If you don't need tons of CPU power there are options for ECC that are just as cheap as the cheapest low end consumer stuff. But now you're arguing that ECC isn't practical with your other requirements, not that you wouldn't go with ECC if it were, so I'm not sure what the disagreement even is anymore. If you would go with ECC if it didn't violate your other requirements (cost plus form factor) I think this is all just a miscommunication. Desuwa fucked around with this message at 21:48 on Aug 12, 2017 |
![]() |
|
Which one's that? I could pull off 11 in my Define R5, but that'd require putting a 3-in-2 bay into my 5.25s, and I didn't want to deal with the extra heat. The 8 built-in 3.5s are what I've been using, plus one of the SSD mounts on the rear of the motherboard tray.
|
![]() |
|
Thwomp posted:Depends. Are you the kind of person who will, upon seeing the capabilities of a new device, want to maximize said capabilities beyond the device's ability? I gave it a think and decided that I'd best build my own given my tinkering nature. I'll probably grab a Pentium to do it, though deals on vastly overkill Xeon-Ds tempt me.
|
![]() |
|
evol262 posted:Fractal makes a matx which holds 10 for ~100, which is what I use now (after moving yet again and ditching the Norco 24 bay) I have a Fractal Define Mini (mATX) which has 6 3.5" drive sleds, plus an external 5.25" bay where I plan on putting a 4-to-6-bay IcyDock 2.5" cage once I expand.
|
![]() |
|
G-Prime posted:Which one's that? I could pull off 11 in my Define R5, but that'd require putting a 3-in-2 bay into my 5.25s, and I didn't want to deal with the extra heat. The 8 built-in 3.5s are what I've been using, plus one of the SSD mounts on the rear of the motherboard tray. Node 804, though it gets kind of awkward with all ten drives and it has a pretty unusual footprint.
|
![]() |
|
evol262 posted:True, but 3tb drives are ~30% cheaper. You can hit 10 3tb drives for ~650+tax (less on sale), which still gives 24tb after losses for redundancy, plus a controller and a couple of cheap 32/64gb ssds for cache, which just leaves whatever CPU/memory/etc you want to get. i3 with 16gb is ~150, less if you get an older generation. I've been strongly considering the possibility of buying only 2.5" disks and shoving them into a case for better heat/power/space savings over 3.5" drives, but I really can't find an enclosure that would accept the SATA backplane at the right size. There's stuff like this backplane or this better candidate (15mm drive-height support is mandatory if you're doing bulk storage on 2.5" drives), but there are, I think, literally zero mini-ITX cases out there that have 2 external 5.25" bays. This is why I ultimately wound up with the UNAS NSC-800 case. D. Ebdrup posted:Isn't ffmpeg/libva/VAAPI quicksync support pretty good nowadays? Because I'd love to see a non-embedded Kaby Lake refresh of something like the i7-5700EQ which has ECC, vPro, QuickSync, even if it means going as far down the SKUs as an i3 without SMT, because it'd make a pretty perfect NAS/HTPC. Kaby Lake supports VP9 and HEVC encoding at least, although supposedly VP8/VP9 encode was in Skylake if this code actually works.
|
![]() |
|
I recently got a Thermaltake V21 for about £50. Not as much room for drives as the Fractal cases, but it's very nice and I'm currently using three drives for storage. There's always PornHub instead of needing 8 drives. I expect this to be a contentious opinion. ![]() Yes, I'm apropos man fucked around with this message at 22:49 on Aug 12, 2017 |
![]() |
|
necrobobsledder posted:... So would it be worth swapping out my Skylake i3 and putting the Kaby Lake G4560 back in my server? The Kaby is sat on a shelf at the moment. Are there likely to be significant gains in future versions of Plex? Both CPUs are roughly the same in compute power; I think the i3 is slightly more powerful.
|
![]() |
|
necrobobsledder posted:The bigger issue is that even if you have video decode hardware accelerated, the quality on all of them actually sucks pretty bad. I work in this area and all the big vendors are moving back to software-based encoders left and right because the hardware manufacturers are only caring about deep learning and the video field's algorithms are just too hard to make substantial gains each generation, not to mention serious limitations in variable quality / speed trade-offs (FPGAs are not cost-effective either, especially if you're doing it in the cloud). A 1080 Ti will be a monster on big-ass CNNs like the ones I wrote for video analysis that could need 128GB of VRAM (I was trying to do face identification at high res and compare with lower res), but NVENC on there will be marginally faster than the one in the freakin' 1050. The hardware accelerated encode is viable for one specific scenario though - video conferencing / streaming. The parts of the GPU active during an encode are not the same parts active when playing games. Any performance loss is from the CPU, so this is where streamers start to need extra cores to keep FPS up compared to those who don't. Decode performance doesn't change as you move to a faster GPU, but performance is "good enough"; I've actually never heard anyone complain about video decoding performance before. As for video encoding, you don't do that stuff on a 1080 Ti, you use a Quadro that has the media core fully unlocked. That takes you from a (soft-limited) 4 streams at once to 32 streams at once. Obviously if you are already saturating the media core that doesn't get you anything, but that would be very unusual. But yeah, hardware-accelerated encoding is a pretty substantial tradeoff in quality or bitrate. The hardware just isn't as good as x264 and x265; its merit is how fast it runs.
Speaking from the testing I've done on video game captures, at low bitrates you see quality improvement all the way down to veryslow with x264 at least (haven't played with x265 much, it's just too damned slow for day-to-day usage with current processors).
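On the preset point above, here's a hedged sketch of the two encode paths as ffmpeg invocations (Python just builds the argv lists; the filenames and rate-control numbers are illustrative assumptions, and `libx264`/`h264_nvenc` availability depends on how your ffmpeg was built):

```python
def x264_cmd(src, dst, preset="veryslow", crf=20):
    # Software encode: slower presets spend more CPU time for better
    # quality at a given bitrate (quality keeps improving down to veryslow).
    return ["ffmpeg", "-i", src,
            "-c:v", "libx264", "-preset", preset, "-crf", str(crf),
            dst]

def nvenc_cmd(src, dst, bitrate="6M"):
    # Hardware encode: much faster, but a worse quality/bitrate tradeoff
    # than x264 at comparable settings.
    return ["ffmpeg", "-i", src,
            "-c:v", "h264_nvenc", "-b:v", bitrate,
            dst]
```

Either list can be handed straight to `subprocess.run(...)` if you want to compare outputs on your own captures.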
|
![]() |
|
Paul MaudDib posted:Decode performance doesn't change as you move to a faster GPU, but performance is "good enough". I've actually never heard anyone complain about video decoding performance before. Yeah, DXVA and the like decoding is inferior quality-wise compared to doing it in software. If you have the CPU, you should enable software decoding.
|
![]() |
|
apropos man posted:So would it be worth swapping out my Skylake i3 and putting the Kaby Lake G4560 back in my server? The Kaby is sat on a shelf at the moment. Is there likely to be significant gains in future versions of Plex? Both CPU's are roughly the same compute power. I think the i3 is slightly more powerful. From the content producer side, I know the industry is looking for 4K support and HEVC at about the same time (keeping bandwidth costs in check as the pixel count quadruples). Paul MaudDib posted:As for video encoding, you don't do that stuff on a 1080 Ti, you use a Quadro that has the media core fully unlocked. That takes you from a (soft-limited) 4 streams at once to 32 streams at once. Obviously if you are saturating the media core already that doesn't get you anything but that would be very unusual. And even with Quadros, everyone in the digital media space who isn't doing this on their workstations is doing it on powerful CPUs or on appliances that run $100k+ apiece, which aren't using NVENC, VAAPI, VDPAU, or anything else like them either (for terrible value, I might add; holy cow, the encode speed on them isn't much better than my i7-4790k).
|
![]() |
|
Desuwa posted:But now you're arguing that ECC isn't practical with your other requirements, not that you wouldn't go with ECC if it were, so I'm not sure what the disagreement even is anymore. If you would go with ECC if it didn't violate your other requirements (cost plus form factor) I think this is all just a miscommunication. It's not a miscommunication. It's gone:
My position is that you can build a performant, capable NAS with 24TB usable for $1,000, which doesn't include ECC or ECC-capable chipsets, and that any money spent on that would be better spent on memory/cache drives/NICs if you want performance, with data integrity as a very tiny risk on checksumming filesystems even without ECC. Not a miscommunication, just moving the goalposts to justify ECC every time. necrobobsledder posted:Issue is that you're now trading off price efficiency on storage and capping yourself on maximum useful storage by sitting on the same number of drives AND the capacity of said drives. I dunno about you but every time I upgraded the base drives (I went from 1 TB, 2TB, to 4TB, and am acquiring 8 TB drives on sales for $160 / each now) I quickly filled them up out of laziness. The first few weeks of not having to manage my storage capacity is pretty glorious at least. 3TB is a definite sweet spot for price right now, and "I'm gonna buy these drives with 25% less space for 33% less" is definitely not missing efficiency. It's still mATX. As you said, they can always be subbed out as prices drop on larger drives. We can always spend more or use different components to remove those caveats, but 24TB usable for $650 vs 24TB usable for $700 with less redundancy (and fewer spindles) is a no-brainer for me. Yes, it's capping number and capacity in my example of "you can get 24TB RAID6 for $1,000" vs "can't build a NAS for less than $2k", but there are tradeoffs, and this is a reasonable one to me which gets excellent performance and capacity for a very reasonable price tag. Why would a NAS be $2k? Even with IB cards and multiple cache/log/journal offload drives?
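The capacity and cost arithmetic behind that comparison, as a quick sketch (the per-drive prices are back-computed from the thread's rough totals, not current market quotes):

```python
def usable_tb(drives, size_tb, parity=2):
    """RAID6/raidz2 usable capacity: two drives' worth goes to parity."""
    return (drives - parity) * size_tb

def total_cost(drives, price_each):
    return drives * price_each

# 10x3TB vs 8x4TB, both RAID6:
assert usable_tb(10, 3) == 24        # same 24TB usable...
assert usable_tb(8, 4) == 24
assert total_cost(10, 65) == 650     # ...for ~$650 at ~$65/drive
assert total_cost(8, 87.5) == 700.0  # vs ~$700, with fewer spindles
```

Same usable space either way; the 10-drive build is cheaper and keeps an extra two spindles of redundancy headroom, which is the trade being argued.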
|
![]() |
|
When I built mine two years ago, it was right at about 2k, with 8x4TB drives, a Xeon E3-1230v3, 32GB of ECC, a board supporting IPMI, and a tower case supporting large, quiet fans, all brand new parts. I'm running several VMs on it, requiring varying degrees of compute and memory (some for ![]() I still have to figure out what I'm going to do with the 8x4TB though.
|
![]() |
|
G-Prime posted:When I built mine two years ago, it was right at about 2k, with 8x4TB drives, a Xeon E3-1230v3, 32GB of ECC, a board supporting IPMI, and a tower case supporting large, quiet fans, all brand new parts. I'm running several VMs on it, requiring varying degrees of compute and memory (some for This is the real takeaway -- build what suits your needs. I don't run VMs on mine (my compute environment is oversized for most, primarily so I can run through 100 VMs at a time for testing), but it's not a dichotomy. Having a NAS and spending $2k is not unreasonable (especially as an "all in one" box). Neither is spending half that. My needs are an adequate amount of storage and very fast speeds. 4TB drives were very expensive 2 years ago. Much as I won't spend more than $300 on a GPU, I'm much more interested in the right intersection of performance/space/price. I'm just encouraging people to really evaluate that instead of spending twice as much for a CPU which you'll never stress and an amount of storage that many users won't fill before the drives are half what they originally cost. $2k on a NAS is fine in the abstract, like SLI GTX 99999s are ok. But don't throw down a ton of money for something which has rapidly diminishing returns vs gradually upgrading if you find that your original build was undersized, which, if we're honest, isn't an "everything is on fire" moment unless you're adding 1TB+ a week.
|
![]() |
|
I wish I could find a case big enough for 2 x (3x5.25" iStar hotswap bays) plus a full size ATX mobo with full size RX480 GFX card. Help Rackmount is fine, almost anything is fine except OMG GAMERZZZ BLING type cases.
|
![]() |
|
Finding a case with 6x5.25" externals is hard, honestly, but Newegg shows that they have 4 different ones available. They're all "gamer" cases, but the Rosewill one doesn't have a window and has TONS of cooling capabilities. And if you remove the front fans and replace them with unlighted ones, it's just a chunky, black case. Edit: Link: https://www.newegg.com/Product/Prod...N82E16811147053 Edit2: The fan up front has an on/off switch, so you don't even need to replace it. Also, holy shit, it's 230mm. That's MASSIVE.
|
![]() |
|
I'm looking to build a new fileserver system. My current one is an older system running an AMD E-450 chip. I liked that since it's low power and doesn't require a fan. Any modern equivalents without dropping nearly $1,000 on a Xeon-D processor and board?
|
![]() |
|
G-Prime posted:Finding a case with 6x5.25" externals is hard, honestly, but Newegg shows that they have 4 different ones available. They're all "gamer" cases, but the Rosewill one doesn't have a window and has TONS of cooling capabilities. And if you remove the front fans and replace them with unlighted ones, it's just a chunky, black case. You know that is not bad at all. Thanks very much indeed!
|
![]() |
|
Paul MaudDib posted:
Nvidia locks it off at 2 streams on my 730. 4 is like a joke if that's true; it's been like 4 years. Now here's a hot tip: AMD doesn't have a lock, you can encode as much as you want. And encoding is something I need to do, like I mentioned before. I would do it using QuickSync for Intel but my Xeon doesn't have that hardware. It's getting old having to pre-download stuff to watch it on my phone with Plex, but their work on the transcoder is just making a worse version of ffmpeg you can't recompile (the loopholes they go through to not have to show their code on GitHub are hilarious).
|
![]() |