CommieGIR
Aug 22, 2006

If Godzilla can do it, you know I can deliver!

Pillbug

Paul MaudDib posted:

Asrock X99E-ITX/ac

edit: actually it is the X299E version with the daughterboards but the X99 is still a thing to behold.

Everyone else: it can't be done!!11!!one!

Asrock: hold my beer

That is pure condensed awesome.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll

Nap Ghost

I'm still in favor of the mini ITX Xeon-D boards with dual 10GbE NICs, 4 DIMM slots. Sure, you have to go up to uATX to get the SAS controllers, but 5 NICs is tough to turn away if you're going for a central storage server. http://www.asrockrack.com/general/p...#Specifications

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

Thermopyle posted:

There's not a care/don't-care dichotomy.

I care a lot, but not enough to spend a lot of money buying server hardware instead of using old desktop non-ECC-supporting hardware. And you can skip parts of that chain and still gain reliability.

There very much is a care/don't care dichotomy. There are diminishing returns on how much you want to invest in that, you can store everything on a non-checksummed file system with no backups and it'll still give you reliability, just not very much.

Of the things I listed ECC is probably the least critical, at least in the presence of all the others, which is why the rest of my post mentions that I think it's fine to use a repurposed desktop. But if you're building a NAS you should get ECC, and it doesn't necessarily have to cost more. Either by buying old server parts for cheap, lower end server parts (the atom platforms were reasonable ways to get server grade components), or paying more if you must have tons of performance on your NAS for some reason.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



Desuwa posted:

There very much is a care/don't care dichotomy.

I mean, you can keep asserting this, and then contradicting yourself, but I don't know why you would do so.

evol262
Nov 30, 2010
#!/usr/bin/perl

Desuwa posted:

There very much is a care/don't care dichotomy. There are diminishing returns on how much you want to invest in that, you can store everything on a non-checksummed file system with no backups and it'll still give you reliability, just not very much.
That's not what a dichotomy is

"If you care about your data -> get ECC"
"If you don't get ECC -> you don't care about your data"

"This is a sliding scale and removing any one of the pieces makes it less reliable" is just unarguable. The question is whether people who don't invest the extra time/effort/space/whatever in ECC don't care about their data. I'd venture that they do.

Desuwa posted:

Of the things I listed ECC is probably the least critical, at least in the presence of all the others, which is why the rest of my post mentions that I think it's fine to use a repurposed desktop. But if you're building a NAS you should get ECC, and it doesn't necessarily have to cost more. Either by buying old server parts for cheap, lower end server parts (the atom platforms were reasonable ways to get server grade components), or paying more if you must have tons of performance on your NAS for some reason.

I'd really like to see a good justification for "if you're building a NAS, you should get ECC" that doesn't apply to literally every aspect of computing, because I don't think there is one. Lots of people don't want old server parts, and want a faster system with more expansion than Atoms, but still want a NAS. Largely because you can spend the extra money you'd invest in a platform which is ECC-capable on, y'know, drives.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA


evol262 posted:

That's not what a dichotomy is

"If you care about your data -> get ECC"
"If you don't get ECC -> you don't care about your data"

"This is a sliding scale and removing any one of the pieces makes it less reliable" is just unarguable. The question is whether people who don't invest the extra time/effort/space/whatever in ECC don't care about their data. I'd venture that they do.


I'd really like to see a good justification for "if you're building a NAS, you should get ECC" that doesn't apply to literally every aspect of computing, because I don't think there is one. Lots of people don't want old server parts, and want a faster system with more expansion than Atoms, but still want a NAS. Largely because you can spend the extra money you'd invest in a platform which is ECC-capable on, y'know, drives.

A lot of it comes down to hot cache and long hours of operation, and the fact that most people don't have a full backup of their 30TB of random bullshit. I got a shitty Xeon board and ECC for my OmniOS box because I had an earlier hardware failure that was impossible to diagnose, and I wanted IPMI so I could manage it and turn it on/off remotely. The extra $200 or so paid for itself over the last 3 years, as it's already lasted longer than the last hand-me-down shitbox, and IPMI is handy for dealing with boot issues and whatnot.

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

evol262 posted:

That's not what a dichotomy is


I think you're arguing something else.

The idea of a class of data that I care about just enough to be completely fine with corruption from memory while not being fine with corruption from the file system is alien to me.

What made you think I don't want ECC on everything though? My argument does apply to all computing systems. If not for Intel using it to force market segmentation I bet we'd be using ECC in all but the most budget systems.

It might not be practical to build a new system just for ECC if I have an old desktop, but that doesn't mean that I'm not going to get ECC when I do build a new system.

At best maybe I should have been less strong and said anyone who cares "should" instead of "is" a few posts ago, which I think is what you're both jumping on.

evol262
Nov 30, 2010
#!/usr/bin/perl

Methylethylaldehyde posted:

A lot of it comes down to hot cache and long hours of operation, and the fact that most people don't have a full backup of their 30TB of random bullshit. I got a shitty Xeon board and ECC for my OmniOS box because I had an earlier hardware failure that was impossible to diagnose, and I wanted IPMI so I could manage it and turn it on/off remotely. The extra $200 or so paid for itself over the last 3 years, as it's already lasted longer than the last hand-me-down shitbox, and IPMI is handy for dealing with boot issues and whatnot.

The question is "do you need a full backup of your 30TB of random bullshit with a checksumming filesystem?" And the answer (for data integrity) is generally "no". I mean, obviously you should have backups, and OOB access is great in any case (which generally comes with ECC, but vPro is/was a thing).

Desuwa posted:

I think you're arguing something else.
Let's just let what a dichotomy is drop.

Desuwa posted:

The idea of a class of data that I care about just enough to be completely fine with corruption from memory while not being fine with corruption from the file system is alien to me.
Checksumming filesystems have a practically zero chance of data corruption on disk unless you're constantly copying it around. ECC saves you from problems while the data is in flight. Nothing more.
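
The division of labor being argued about here can be sketched in a few lines. A filesystem checksum catches a bit that flips on disk after the write, but a bit that flips in RAM before the checksum is computed gets written out as perfectly "valid" data. A minimal, purely illustrative Python sketch (hashlib stands in for the per-block checksums a filesystem like ZFS keeps; none of this is real filesystem code):

```python
import hashlib

def write_block(data: bytes):
    # The filesystem checksums whatever is in RAM at write time.
    return data, hashlib.sha256(data).digest()

def read_block(data: bytes, checksum: bytes) -> bool:
    # On read, the stored checksum is verified against the on-"disk" data.
    return hashlib.sha256(data).digest() == checksum

def flip_bit(data: bytes, i: int) -> bytes:
    # Simulate a single-bit corruption at byte i.
    b = bytearray(data)
    b[i] ^= 0x01
    return bytes(b)

payload = b"family photos"

# Case 1: bit flips on disk after the write -> the checksum catches it.
data, csum = write_block(payload)
assert read_block(flip_bit(data, 0), csum) is False

# Case 2: bit flips in RAM *before* the write -> the checksum is computed
# over the already-corrupt data, so the filesystem happily verifies it.
corrupted_in_ram = flip_bit(payload, 0)
data, csum = write_block(corrupted_in_ram)
assert read_block(data, csum) is True  # looks "valid", but it's wrong
```

That second case is the only window ECC closes: corruption at rest and on the way to disk is the checksum's job, corruption while the data sits in memory is ECC's.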

Desuwa posted:

What made you think I don't want ECC on everything though? My argument does apply to all computing systems. If not for Intel using it to force market segmentation I bet we'd be using ECC in all but the most budget systems.
It was a joke, but, uh... Registered ECC is marginally slower, so there's that.

Desuwa posted:

It might not be practical to build a new system just for ECC if I have an old desktop, but that doesn't mean that I'm not going to get ECC when I do build a new system.
You'd choose it over large drives or more of them?

Desuwa posted:

At best maybe I should have been less strong and said anyone who cares "should" instead of "is" a few posts ago, which I think is what you're both jumping on.
I also disagree with "should", but meh. If I can grab a Pentium CPU+mobo combo for $60 instead of a Xeon and Xeon board (or Threadripper, or Ryzen, or whatever) and use that extra $200 for larger drives or extra NICs, I'm probably gonna. Maybe just me.

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

evol262 posted:

Let's just let what a dichotomy is drop.

I said there's a dichotomy between data you care about and that which you don't. I'm not sure where the confusion is. I don't expect people to have a continuum of different types of data that they're fine with different levels of corruption for.

evol262 posted:

It was a joke, but, uh... Registered ECC is marginally slower, so there's that.

So is registered non-ECC RAM. Don't get registered RAM unless you need it.

And yeah, I'd choose ECC over more or bigger drives, provided I can at least meet my needs. If a person needs a high-performance NAS with tons of storage they're already dropping $2000+, and at that point they can build a Ryzen system with ECC without shelling out for Xeons.

Desuwa fucked around with this message at 00:55 on Aug 12, 2017

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



Desuwa posted:

I don't expect people to have a continuum of different types of data that they're fine with different levels of corruption for.

You're wrong.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

necrobobsledder posted:

I'm still in favor of the mini ITX Xeon-D boards with dual 10GbE NICs, 4 DIMM slots. Sure, you have to go up to uATX to get the SAS controllers, but 5 NICs is tough to turn away if you're going for a central storage server. http://www.asrockrack.com/general/p...#Specifications

I agree; I buy a lot of the Supermicro ones for general-purpose test servers at work and they are fantastic little machines.

Curious if Intel is doing a skylake/kabylake SoC version.

D. Ebdrup
Mar 13, 2009



necrobobsledder posted:

I'm still in favor of the mini ITX Xeon-D boards with dual 10GbE NICs, 4 DIMM slots. Sure, you have to go up to uATX to get the SAS controllers, but 5 NICs is tough to turn away if you're going for a central storage server. http://www.asrockrack.com/general/p...#Specifications
Four NICs; the fifth port is for IPMI. That being said, it's a very nice platform if you need the horsepower.
If not, Denverton is worth waiting for because it has up to 12 SATA ports and 16 ~2GHz cores without L3 cache (instead of 8 SMT cores with L3 cache and a SAS controller that may or may not be flashable to IT-mode), all in the same mini-ITX form factor.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

[major tangent]

I haven't read the boards for a couple of days and this thread has hugened bigly.

I just want to add that if anyone decides to go with the Gigabyte X150M-ECC board that I mentioned a couple of pages ago then it needs a BIOS flash in order to use the Kaby Lake G4560.

I've actually got the i3 6100 in mine because I tried a G4560 first and when it wouldn't post I drove 100 miles at 8pm on a Friday night to pick up the 6100 from a guy on eBay just to get the thing working.

I've updated the BIOS now, but don't intend to swap them back, so if anyone from the UK wants an 'as new' G4560 with box and unused HSF then it's available.

I realise this should go in SA Mart but I'm just not in a rush to sell it.

[/major tangent]

D. Ebdrup
Mar 13, 2009



priznat posted:

Curious if Intel is doing a skylake/kabylake SoC version.
I wouldn't expect a SoC upgrade before Coffee Lake because of Optane Memory, which in conjunction with Goldmont Plus is also coming out around that time.

D. Ebdrup fucked around with this message at 12:36 on Aug 12, 2017

evol262
Nov 30, 2010
#!/usr/bin/perl

Desuwa posted:

So is registered non-ECC RAM. Don't get registered RAM unless you need it.
We can also go with the fact that an equivalent amount of ECC costs more.

Desuwa posted:

And yeah, I'd choose ECC over more or bigger drives, provided I can at least meet my needs. If a person needs a high-performance NAS with tons of storage they're already dropping $2000+, and at that point they can build a Ryzen system with ECC without shelling out for Xeons.

I'd choose spending less money. I don't even know what to say about "2000+ on a NAS". IB equipment is cheap on eBay, and it's not like the CPU load on a NAS is huge unless it's also running your VMs and everything else. You can get 24TB after RAID losses, pushing 450MB/s, for under $1000 in a small form factor, and this is the home NAS thread, not the "build a production SAN" thread.

Even then, I'd get more SSDs for cache or more memory instead of CPU power, since CPUs are almost never pegged, even if they're relatively weak.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

Yeah, I've got 4 VMs running on my i3: 1 for important shit & backups, 1 running Plex, 1 running a cryptocurrency node (not mining), and 1 as a sandbox for doing various stuff. The i3 is barely taking a stroll most of the time, unless I watch a movie through Plex, and then it delivers a nice HD stream.

It depends what you want to do, I suppose, but something the level of an i3 is more than adequate for me.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll

Nap Ghost

Only home use case I can imagine for a powerful CPU in your NAS would be HEVC or VP9 transcoding near 30 fps for Plex. But nobody really has a client that will play that besides some Android phones and I'd bet Plex is a bit buggy so far on HEVC anyway. The disk savings sorta matter but HEVC mostly shines in lower bitrate efficiencies.

Also, $2k is super easy to hit for a NAS if you bust into 8 drive 4 TB territory - the drives alone on sale now are about $1k USD with tax.

A TS140 can be bought for like $250 and it'll support ECC RAM. No need to go crazy on a Xeon D or anything with a recent socket. I only obsess over size and performance and such because I move frequently and I'm willing to pay extra to avoid having to pack and unpack more equipment without resorting to cloud storage that is way, way too much in cost. My current NAS setup is an i3-4100 running in a UNAS 8-bay box and that's good enough for dedicated storage and light transcoding honestly.

D. Ebdrup
Mar 13, 2009



necrobobsledder posted:

Only home use case I can imagine for a powerful CPU in your NAS would be HEVC or VP9 transcoding near 30 fps for Plex. But nobody really has a client that will play that besides some Android phones and I'd bet Plex is a bit buggy so far on HEVC anyway. The disk savings sorta matter but HEVC mostly shines in lower bitrate efficiencies.
Isn't ffmpeg/libva/VAAPI quicksync support pretty good nowadays? Because I'd love to see a non-embedded Kaby Lake refresh of something like the i7-5700EQ which has ECC, vPro, QuickSync, even if it means going as far down the SKUs as an i3 without SMT, because it'd make a pretty perfect NAS/HTPC.

D. Ebdrup fucked around with this message at 16:52 on Aug 12, 2017

evol262
Nov 30, 2010
#!/usr/bin/perl

necrobobsledder posted:

Only home use case I can imagine for a powerful CPU in your NAS would be HEVC or VP9 transcoding near 30 fps for Plex. But nobody really has a client that will play that besides some Android phones and I'd bet Plex is a bit buggy so far on HEVC anyway. The disk savings sorta matter but HEVC mostly shines in lower bitrate efficiencies.

Also, $2k is super easy to hit for a NAS if you bust into 8 drive 4 TB territory - the drives alone on sale now are about $1k USD with tax.
True, but 3tb drives are ~30% cheaper. You can hit 10 3tb drives for ~650+tax (less on sale), which still gives 24tb after losses for redundancy, plus a controller and a couple of cheap 32/64gb ssds for cache, which just leaves whatever CPU/memory/etc you want to get. i3 with 16gb is ~150, less if you get an older generation.

Hotswap bays will cost a little more, but you're still at about 1000+tax
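
For concreteness, the arithmetic behind those numbers (assuming RAID6/raidz2-style redundancy, which loses two drives' worth of capacity to parity; the prices are the rough figures quoted in this thread, not current ones):

```python
def raid6_usable_tb(drives: int, size_tb: int) -> int:
    # RAID6 (or raidz2) gives up two drives' worth of capacity to parity.
    return (drives - 2) * size_tb

# 10 x 3TB -> 24TB usable, the ~$650 build above
assert raid6_usable_tb(10, 3) == 24
# 8 x 4TB -> the same usable capacity with fewer spindles
assert raid6_usable_tb(8, 4) == 24

# rough cost per usable TB at the quoted drive prices
print(650 / raid6_usable_tb(10, 3))   # roughly $27 per usable TB for the 3TB build
print(1000 / raid6_usable_tb(8, 4))   # roughly $42 per usable TB if the 4TB drives alone run ~$1k
```

Same usable capacity either way; the tradeoff is spindle count, bay count, and dollars, which is the whole argument.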
necrobobsledder posted:

A TS140 can be bought for like $250 and it'll support ECC RAM. No need to go crazy on a Xeon D or anything with a recent socket. I only obsess over size and performance and such because I move frequently and I'm willing to pay extra to avoid having to pack and unpack more equipment without resorting to cloud storage that is way, way too much in cost. My current NAS setup is an i3-4100 running in a UNAS 8-bay box and that's good enough for dedicated storage and light transcoding honestly.

I also move, sometimes to small places, which is why I don't want a full-depth 4u case or loud system.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.


Sure, 3TB drives cost less. But it's hard to find a tower case that easily holds more than 8, and rackmount ones are loud as hell. I'd much rather pay more for higher density and a nearly silent case.

evol262
Nov 30, 2010
#!/usr/bin/perl

G-Prime posted:

Sure, 3TB drives cost less. But it's hard to find a tower case that easily holds more than 8, and rackmount ones are loud as hell. I'd much rather pay more for higher density and a nearly silent case.

Fractal makes a matx which holds 10 for ~100, which is what I use now (after moving yet again and ditching the Norco 24 bay)

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

evol262 posted:

I'd choose spending less money. I don't even know what to say about "2000+ on a NAS". IB equipment is cheap on eBay, and it's not like the CPU load on a NAS is huge unless it's also running your VMs and everything else. You can get 24TB after RAID losses, pushing 450MB/s, for under $1000 in a small form factor, and this is the home NAS thread, not the "build a production SAN" thread.

Even then, I'd get more SSDs for cache or more memory instead of CPU power, since CPUs are almost never pegged, even if they're relatively weak.

I brought that up in response to you bringing up spending hundreds on Xeons or Threadripper; once you need that kind of power in your NAS you're already well into four digits and ECC ends up being a tiny part of it. If you don't need tons of CPU power there are options for ECC that are just as cheap as the cheapest low end consumer stuff.

But now you're arguing that ECC isn't practical with your other requirements, not that you wouldn't go with ECC if it were, so I'm not sure what the disagreement even is anymore. If you would go with ECC if it didn't violate your other requirements (cost plus form factor) I think this is all just a miscommunication.

Desuwa fucked around with this message at 21:48 on Aug 12, 2017

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.


Which one's that? I could pull off 11 in my Define R5, but that'd require putting a 3-in-2 bay into my 5.25s, and I didn't want to deal with the extra heat. The 8 built-in 3.5s are what I've been using, plus one of the SSD mounts on the rear of the motherboard tray.

Wistful of Dollars
Aug 25, 2009



Thwomp posted:

Depends. Are you the kind of person who will, upon seeing the capabilities of a new device, want to maximize said capabilities beyond the device's ability?

If not, grab whatever consumer storage device can handle your need at a price point you can afford.

If so, you have two options:

If you're just looking for file storage and nothing else, there's plenty of consumer products available that do that and nothing much else.

If you want to experiment with a home NAS that can store files and eventually do other things and you really can't control yourself, then building one from scratch would be advisable.

Devices from QNAP and Synology, for someone who just continues to tinker and expand your use-case, are gateway drugs that will leave you hanging. They're often powerful enough to give you glimpses of what's possible but not powerful enough to fulfill your desires.

I gave it a think and decided that I'd best build my own given my tinkering nature.

I'll probably grab a Pentium to do it, though deals on vastly overkill Xeon-Ds tempt me.

Moey
Oct 22, 2010

I LIKE TO MOVE IT


evol262 posted:

Fractal makes a matx which holds 10 for ~100, which is what I use now (after moving yet again and ditching the Norco 24 bay)

I have a Fractal Define Mini (mATX) which has 6 3.5" drive sleds, then the external 5.25" bay I plan on putting a 4-6 icydock 2.5" chassis in once I expand.

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

G-Prime posted:

Which one's that? I could pull off 11 in my Define R5, but that'd require putting a 3-in-2 bay into my 5.25s, and I didn't want to deal with the extra heat. The 8 built-in 3.5s are what I've been using, plus one of the SSD mounts on the rear of the motherboard tray.

Node 804, though it gets kind of awkward with all ten drives and it has a pretty unusual footprint.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll

Nap Ghost

evol262 posted:

True, but 3tb drives are ~30% cheaper. You can hit 10 3tb drives for ~650+tax (less on sale), which still gives 24tb after losses for redundancy, plus a controller and a couple of cheap 32/64gb ssds for cache, which just leaves whatever CPU/memory/etc you want to get. i3 with 16gb is ~150, less if you get an older generation.

Hotswap bays will cost a little more, but you're still at about 1000+tax
Issue is that you're now trading off price efficiency on storage and capping yourself on maximum useful storage by sitting on the same number of drives AND the capacity of said drives. I dunno about you, but every time I upgraded the base drives (I went from 1TB to 2TB to 4TB, and am acquiring 8TB drives on sale for $160 each now) I quickly filled them up out of laziness. The first few weeks of not having to manage my storage capacity are pretty glorious at least.

I've been strongly considering the possibility of buying only 2.5" disks and shoving them into a case for better heat / power / space savings over 3.5" drives, but I really can't find an enclosure that would accept the SATA backplane at the right size. There's stuff like this backplane or this better candidate (15mm drive-height support is mandatory if you're doing bulk storage on 2.5" drives), but I think there are literally zero mini-ITX cases out there that have 2 external 5.25" bays. This is why I ultimately wound up with the UNAS NSC-800 case.


D. Ebdrup posted:

Isn't ffmpeg/libva/VAAPI quicksync support pretty good nowadays? Because I'd love to see a non-embedded Kaby Lake refresh of something like the i7-5700EQ which has ECC, vPro, QuickSync, even if it means going as far down the SKUs as an i3 without SMT, because it'd make a pretty perfect NAS/HTPC.
VAAPI support for HEVC can do real-time encoding but Plex doesn't support it yet to my knowledge. The bigger issue is that even if you have video decode hardware accelerated, the quality on all of them actually sucks pretty bad. I work in this area and all the big vendors are moving back to software-based encoders left and right because the hardware manufacturers only care about deep learning and the video field's algorithms are just too hard to make substantial gains on each generation, not to mention serious limitations in variable quality / speed trade-offs (FPGAs are not cost-effective either, especially if you're doing it in the cloud). A 1080 Ti will be a monster on big-ass CNNs like the ones I wrote for video analysis that could need 128GB of VRAM (I was trying to do face identification at high res and compare with lower res), but Nvenc on there will be marginally faster than the one in the freakin' 1050. The hardware accelerated encode is viable for one specific scenario though - video conferencing / streaming. The part of the GPU activated during an encode is not the same as the parts activated when playing games. Any performance loss is from the CPU, so this is where streamers start to need extra cores to keep FPS up compared to those that don't.

Kaby Lake supports VP8/VP9 and HEVC encoding at least. Although supposedly VP8/VP9 encode was in Skylake, if this code actually works.
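
For anyone wanting to try the VAAPI path being discussed, the usual ffmpeg invocation for a hardware HEVC encode looks roughly like the below. The flag names (`-vaapi_device`, `hwupload`, `hevc_vaapi`) are standard ffmpeg VAAPI usage; the render-node path, filenames, and bitrate are placeholders you'd adjust per system. Sketched as a Python argument list so the quoting is unambiguous:

```python
# Rough shape of an ffmpeg VAAPI-accelerated HEVC encode.
# /dev/dri/renderD128, the filenames, and the 5M bitrate are placeholders.
cmd = [
    "ffmpeg",
    "-vaapi_device", "/dev/dri/renderD128",  # DRM render node for the iGPU
    "-i", "input.mkv",
    "-vf", "format=nv12,hwupload",           # convert and upload frames to GPU memory
    "-c:v", "hevc_vaapi",                    # fixed-function HEVC encoder via VAAPI
    "-b:v", "5M",
    "output.mkv",
]
print(" ".join(cmd))
```

Whether the resulting quality is acceptable is exactly the hardware-vs-software encoder tradeoff argued above.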

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

I recently got a Thermaltake V21 for about £50.

Not as much room for drives as the Fractal cases, but it's very nice and I'm currently using three drives for storage. There's always PornHub instead of needing 8 drives. I expect this to be a contentious opinion.



Yes, I'm stupid enthusiastic enough to add LED fans to a PC that's kept in a cupboard.

apropos man fucked around with this message at 22:49 on Aug 12, 2017

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

necrobobsledder posted:

...
Kaby Lake supports VP8/VP9 and HEVC encoding at least. Although supposedly VP8/VP9 encode was in Skylake, if this code actually works.

So would it be worth swapping out my Skylake i3 and putting the Kaby Lake G4560 back in my server? The Kaby is sat on a shelf at the moment. Are there likely to be significant gains in future versions of Plex? Both CPUs are roughly the same compute power; I think the i3 is slightly more powerful.

Paul MaudDib
May 3, 2006

"Tell me of your home world, Usul"


necrobobsledder posted:

The bigger issue is that even if you have video decode hardware accelerated, the quality on all of them actually sucks pretty bad. I work in this area and all the big vendors are moving back to software-based encoders left and right because the hardware manufacturers only care about deep learning and the video field's algorithms are just too hard to make substantial gains on each generation, not to mention serious limitations in variable quality / speed trade-offs (FPGAs are not cost-effective either, especially if you're doing it in the cloud). A 1080 Ti will be a monster on big-ass CNNs like the ones I wrote for video analysis that could need 128GB of VRAM (I was trying to do face identification at high res and compare with lower res), but Nvenc on there will be marginally faster than the one in the freakin' 1050. The hardware accelerated encode is viable for one specific scenario though - video conferencing / streaming. The part of the GPU activated during an encode is not the same as the parts activated when playing games. Any performance loss is from the CPU, so this is where streamers start to need extra cores to keep FPS up compared to those that don't.

Decode performance doesn't change as you move to a faster GPU, but performance is "good enough". I've actually never heard anyone complain about video decoding performance before.

As for video encoding, you don't do that stuff on a 1080 Ti, you use a Quadro that has the media core fully unlocked. That takes you from a (soft-limited) 4 streams at once to 32 streams at once. Obviously if you are saturating the media core already that doesn't get you anything but that would be very unusual.

But yeah hardware-accelerated encoding is a pretty substantial tradeoff in quality or bitrate. The hardware just isn't as good as x264 and x265, its merit is how fast it runs. Speaking from the testing I've done on video game captures, at low bitrates you see quality improvement all the way down to veryslow with x264 at least (haven't played with x265 much, it's just too damned slow for day-to-day usage with current processors).

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



Paul MaudDib posted:

Decode performance doesn't change as you move to a faster GPU, but performance is "good enough". I've actually never heard anyone complain about video decoding performance before.

As for video encoding, you don't do that stuff on a 1080 Ti, you use a Quadro that has the media core fully unlocked. That takes you from a (soft-limited) 4 streams at once to 32 streams at once. Obviously if you are saturating the media core already that doesn't get you anything but that would be very unusual.

But yeah hardware-accelerated encoding is a pretty substantial tradeoff in quality or bitrate. The hardware just isn't as good as x264 and x265, its merit is how fast it runs. Speaking from the testing I've done on video game captures, at low bitrates you see quality improvement all the way down to veryslow with x264 at least (haven't played with x265 much, it's just too damned slow for day-to-day usage with current processors).

Yeah, decoding with DXVA and the like is inferior quality-wise compared to doing it in software. If you have the CPU, you should enable software decoding.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll

Nap Ghost

apropos man posted:

So would it be worth swapping out my Skylake i3 and putting the Kaby Lake G4560 back in my server? The Kaby is sat on a shelf at the moment. Are there likely to be significant gains in future versions of Plex? Both CPUs are roughly the same compute power; I think the i3 is slightly more powerful.
Turns out Plex's hardware decode is in alpha status now. Given the developers' statements so far, I wouldn't hold my breath on the support being all that great for months and months, so I wouldn't bother unless you want to be a tester for the Plex team. Even if Plex's HEVC support is super awesome right now, I'm watching Apple for HEVC because that's when you know that things will get super serious from the hardware side. In good (bad...?) news for you, iOS 11 and High Sierra on macOS are getting support this year.

From the content producer side, I know the industry is looking for 4k support and HEVC about at the same time (keeping bandwidth costs in check as the resolution quadruples in fillrate).

Paul MaudDib posted:

As for video encoding, you don't do that stuff on a 1080 Ti, you use a Quadro that has the media core fully unlocked. That takes you from a (soft-limited) 4 streams at once to 32 streams at once. Obviously if you are saturating the media core already that doesn't get you anything but that would be very unusual.
I mostly mentioned decoding because that's the first scenario for hardware acceleration that should be measured - why would encoding be any decent quality if decode still sucks, right?
And even with Quadros out there, everyone in the digital media space that's not doing this on their workstations is doing it on powerful CPUs or on appliances that run $100k+ apiece and don't use nvenc, vaapi, vdpau, or anything else like them (for terrible value I might add, holy cow the encode speed on them isn't much better than my i7-4790k).

evol262
Nov 30, 2010
#!/usr/bin/perl

Desuwa posted:

But now you're arguing that ECC isn't practical with your other requirements, not that you wouldn't go with ECC if it were, so I'm not sure what the disagreement even is anymore. If you would go with ECC if it didn't violate your other requirements (cost plus form factor) I think this is all just a miscommunication.

It's not a miscommunication.

It's gone:
  • use ECC if you care about your data
  • ECC doesn't cost more if you use a Xeon/ryzen
  • Why would you use those
  • You're gonna spend 2k on a NAS anyway
  • Why

My position is that you can build a performant, capable NAS with 24tb usable for $1000 that doesn't include ECC or an ECC-capable chipset, and that any money spent on that would be better spent on memory/cache drives/NICs if you want performance, with data integrity being a very tiny risk on checksumming filesystems even without ECC.

Not a miscommunication, just moving the goalposts to justify ECC every time.



necrobobsledder posted:

Issue is that you're now trading off price efficiency on storage and capping yourself on maximum useful storage by sitting on the same number of drives AND the capacity of said drives. I dunno about you but every time I upgraded the base drives (I went from 1 TB, 2TB, to 4TB, and am acquiring 8 TB drives on sales for $160 / each now) I quickly filled them up out of laziness. The first few weeks of not having to manage my storage capacity is pretty glorious at least.
I dunno how many you're using, but I don't lazily fill an extra 8tb.

3 TB is a definite sweet spot for price right now, and "I'm gonna buy these drives with 25% less space for 33% less money" is definitely not a loss in price efficiency. It's still mATX. As you said, they can always be subbed out as prices drop on larger drives.

We can always spend more or use different components to remove those caveats, but 24 TB usable for $650 vs 24 TB usable for $700 with less redundancy (and fewer spindles) is a no-brainer for me. Yes, my example of "you can get 24 TB RAID6 for $1000" vs "can't build a NAS for less than 2k" caps drive count and capacity, but there are tradeoffs, and this is a reasonable one to me that gets excellent performance and capacity for a very reasonable price tag.

Why would a NAS be 2k? Even with IB cards and multiple cache/log/journal offload drives?
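The capacity math behind that: RAID6 burns two drives on parity, so usable space is (n − 2) × drive size. The ten-drive count and the $65-per-3TB-drive price below are my assumptions to hit the quoted $650 / 24 TB figure:

```python
def raid6_usable_tb(n_drives, drive_tb):
    # RAID6 keeps two drives' worth of parity regardless of array size.
    assert n_drives >= 4, "RAID6 needs at least 4 drives"
    return (n_drives - 2) * drive_tb

def cost_per_usable_tb(n_drives, drive_price, drive_tb):
    return (n_drives * drive_price) / raid6_usable_tb(n_drives, drive_tb)

# Ten 3 TB drives at an assumed $65 apiece:
print(raid6_usable_tb(10, 3))                    # 24
print(round(cost_per_usable_tb(10, 65, 3), 2))   # 27.08
```

More spindles of a cheaper size also means the array survives two failures while keeping the per-TB cost down; the tradeoff is power, ports, and bays.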

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.


When I built mine two years ago, it was right at about 2k, with 8x4TB drives, a Xeon E3-1230v3, 32GB of ECC, a board supporting IPMI, and a tower case supporting large, quiet fans, all brand new parts. I'm running several VMs on it, requiring varying degrees of compute and memory (some for , some for fun, some because they're a convenient testbed for stuff I want to prototype for work and I don't have to pay to spin up an EC2 instance). That price met all of my needs, that's why it cost that much, and may be the case for others. Two years later, I've put another ~1500 into it, by replacing the drives with 8x8TB. It's still meeting my needs perfectly, and that's what I wanted.

I still have to figure out what I'm going to do with the 8x4TB though.

evol262
Nov 30, 2010
#!/usr/bin/perl

G-Prime posted:

When I built mine two years ago, it was right at about 2k, with 8x4TB drives, a Xeon E3-1230v3, 32GB of ECC, a board supporting IPMI, and a tower case supporting large, quiet fans, all brand new parts. I'm running several VMs on it, requiring varying degrees of compute and memory (some for , some for fun, some because they're a convenient testbed for stuff I want to prototype for work and I don't have to pay to spin up an EC2 instance). That price met all of my needs, that's why it cost that much, and may be the case for others. Two years later, I've put another ~1500 into it, by replacing the drives with 8x8TB. It's still meeting my needs perfectly, and that's what I wanted.

I still have to figure out what I'm going to do with the 8x4TB though.

This is the real takeaway -- build what suits your needs. I don't run VMs on mine (my compute environment is oversized compared to most, primarily so I can run through 100 VMs at a time for testing), but it's not a dichotomy. Having a NAS and spending 2k is not an anachronism (especially as an "all in one" box). Neither is spending half that.

My needs are an adequate amount of storage and very fast speeds. 4tb drives were very expensive 2 years ago. Much as I won't spend more than 300 on a gpu, I'm much more interested in the right intersection of performance/space/price.

I'm just encouraging people to really evaluate that instead of spending twice as much for a cpu which you'll never stress and an amount of storage that many users won't fill before the drives are half what they originally cost.

2k on a NAS is fine in the abstract, like SLI GTX 99999s are ok. But don't throw down a ton of money on something with rapidly diminishing returns vs gradually upgrading if you find your original build was undersized, which, if we're honest, isn't an "everything is on fire" moment unless you're adding 1TB+ a week.

redeyes
Sep 14, 2002
I LOVE THE WHITE STRIPES!

I wish I could find a case big enough for 2 x (3x5.25" iStar hotswap bays) plus a full size ATX mobo with full size RX480 GFX card.

Help

Rackmount is fine, almost anything is fine except OMG GAMERZZZ BLING type cases.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.


Finding a case with 6x5.25" externals is hard, honestly, but Newegg shows that they have 4 different ones available. They're all "gamer" cases, but the Rosewill one doesn't have a window and has TONS of cooling capabilities. And if you remove the front fans and replace them with unlighted ones, it's just a chunky, black case.

Edit: Link: https://www.newegg.com/Product/Prod...N82E16811147053

Edit2: The fan up front has an on/off switch, so you don't even need to replace it. Also, holy shit, it's 230mm. That's MASSIVE.

Nullsmack
Dec 7, 2001
Digital apocalypse

I'm looking to build a new fileserver system. My current one is an older system running an AMD E-450 chip. I liked that since it's low power and doesn't require a fan. Any modern equivalents without dropping nearly $1000 on a Xeon-D processor and board?

redeyes
Sep 14, 2002
I LOVE THE WHITE STRIPES!

G-Prime posted:

Finding a case with 6x5.25" externals is hard, honestly, but Newegg shows that they have 4 different ones available. They're all "gamer" cases, but the Rosewill one doesn't have a window and has TONS of cooling capabilities. And if you remove the front fans and replace them with unlighted ones, it's just a chunky, black case.

Edit: Link: https://www.newegg.com/Product/Prod...N82E16811147053

Edit2: The fan up front has an on/off switch, so you don't even need to replace it. Also, holy shit, it's 230mm. That's MASSIVE.

You know that is not bad at all. Thanks very much indeed!

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!


Switchblade Switcharoo

Paul MaudDib posted:


As for video encoding, you don't do that stuff on a 1080 Ti, you use a Quadro that has the media core fully unlocked. That takes you from a (soft-limited) 4 streams at once to 32 streams at once. Obviously if you are saturating the media core already that doesn't get you anything but that would be very unusual.

But yeah hardware-accelerated encoding is a pretty substantial tradeoff in quality or bitrate. The hardware just isn't as good as x264 and x265, its merit is how fast it runs. Speaking from the testing I've done on video game captures, at low bitrates you see quality improvement all the way down to veryslow with x264 at least (haven't played with x265 much, it's just too damned slow for day-to-day usage with current processors).

Nvidia locks it at 2 streams on my 730. 4 is a joke if that's true. It's been like 4 years.

Now here's a hot tip: AMD doesn't have a lock. You can encode as much as you want.

And encoding is something I need to do, like I mentioned before. I would do it with QuickSync on Intel, but my Xeon doesn't have that feature. It's getting old having to predownload stuff to watch it on my phone with Plex, but their work on the transcoder is just making a worse version of ffmpeg you can't recompile (the hoops they jump through to avoid showing their code on GitHub are hilarious).
