Paul MaudDib
May 3, 2006

"Tell me of your home world, Usul"


CopperHound posted:

Okay, I decided to do a foolish thing and build my own NAS and I want a quick reality check:
I want to use a NSC-810a case. Is a X11SSM-F pretty much my best motherboard option? I don't see a lot of other mATX boards with 8 sata ports.
My only hang-up is that I don't think I need ecc memory and that motherboard can only handle up to 7th gen processors. It would not be holding anything mission critical and I don't have plans to use ZFS. The main use would likely be for Plex media and other bulk storage.

E: oh and keeping the cost down where reasonable would be nice too.

Right now I would say go with some flavor of the X11SCH (-F, -LN4F, etc). It's basically the same board but updated for 8th-gen CPUs. Supermicro said they'll be releasing 9th-gen support later this year, which means you can use an i3-9300/9300F/9350KF and get ECC support, a real quad-core, and the higher turbo clocks on 9th gen. It also goes to dual x4 NVMe slots vs the single x2 slot on the X11SSM-F. However, unlike the previous gen, Supermicro has said they won't be releasing a 10 GbE variant of the board, if that matters to you.

If you are going to go with a high-tier motherboard build you might as well go ECC. If you are already shopping for a midrange+ server board with IPMI then it's not really a large marginal expense.

You can use a cheaper gaming board and a SAS/SATA HBA instead if you want, but that obviously means sacrificing one of your PCIe slots right out of the gate, and you probably won't get IPMI. You should also consider the cost of the HBA - if you're paying $100 for the board and $50-75 for an HBA then you're really only saving maybe $50-75 vs just getting the board that has everything built in. That $50-75 may be worth paying for the luxury of having both your slots free for expansion (or for cleaner airflow).

There is also an X470 board with IPMI (the ASRock X470D4U) but it has fewer SATA ports, so you will need to give up one slot right off the bat for your HBA. Boards built on Intel's C-series chipsets are among the few with 8 SATA ports onboard - and some mobos even have onboard SAS controllers to get you another 8 ports beyond the chipset.

Also note that 8 ports means you need to think about how you're going to boot, because I don't think you can boot from a RAID array (or raidz, etc). You can use a USB stick or a USB-to-SATA adapter (tight fit, but possible in this case if you flip the SSD to the inside) on the internal USB 3.0 port, or you can boot from NVMe.

Paul MaudDib fucked around with this message at 21:08 on Aug 5, 2019

H2SO4
Sep 11, 2001

put your money in a log cabin




Buglord

It's worth mentioning that Supermicro sells SATADOMs that don't need any additional power on the X10 and up boards, typically the SATA connectors that support that are yellow/orange. You could boot from that, just keep write endurance in mind when configuring logging.

Crunchy Black
Oct 24, 2017

CASTOR: Uh, it was all fine and you don't remember?
VINDMAN: No, it was bad and I do remember.




Paul MaudDib posted:


Also note that 8 ports means you need to think about how you're going to boot, because I don't think you can boot from a RAID array (or raidz, etc).

Why not?

CopperHound
Feb 14, 2012



Paul MaudDib posted:

Right now I would say go with some flavor of the X11SCH (-F, -LN4F, etc). It's basically the same board but updated for 8th-gen CPUs. Supermicro said they'll be releasing 9th-gen support later this year, which means you can use an i3-9300/9300F/9350KF and get ECC support, a real quad-core, and the higher turbo clocks on 9th gen.
Just to be clear, do you mean x11sch will be getting a bios update to support 9th gen CPUs? I think I'd really like a 4 core i3 vs paying for Xeon with quicksync.

Paul MaudDib
May 3, 2006

"Tell me of your home world, Usul"


H2SO4 posted:

It's worth mentioning that Supermicro sells SATADOMs that don't need any additional power on the X10 and up boards, typically the SATA connectors that support that are yellow/orange. You could boot from that, just keep write endurance in mind when configuring logging.

Right, or you could just use a regular SATA SSD, but either way you lose a SATA port and you're down to 7 for your array, meaning you can't fill all 8 slots in the chassis without adding an HBA - which defeats the point of this exercise. Using a USB or NVMe interface for the boot drive sidesteps that and leaves all 8 SATA ports free for your chassis drives.


... can you? I didn't think you could, since RAID mode wouldn't be loaded until the OS got far enough to boot, but maybe there's enough on /boot to get the array mounted? Especially if you go FreeBSD I guess, that probably would work there, but not sure if LVM would on Linux, and ZFS-on-Linux seems like it definitely wouldn't.

There's probably some other downsides, like a full array potentially making your system disks unwriteable and thus unbootable, but in the general case it's probably fine then.

CopperHound posted:

Just to be clear, do you mean x11sch will be getting a bios update to support 9th gen CPUs? I think I'd really like a 4 core i3 vs paying for Xeon with quicksync.

Yes, I emailed Supermicro support a few months ago and was told "later this year".

The other nice thing is that i3s are clocked relatively high compared to Xeons. Like, when I looked at the 7100, you had to step up to pretty much the highest-tier Xeon to get the same clockrate and it was a turbo, while the 7100 will nominally sit at its max clock all day since max clock = base clock. The 9-series i3s have turbo but it's still probably better than most Xeons will have.

While Plex encoding obviously scales across cores, a lot of basic server tasks are still quite single-threaded. A 7100 or 9300/F/9350KF is pretty ideal for a home user who wants low power usage and isn't going to be serving a billion users in parallel.

Paul MaudDib fucked around with this message at 05:06 on Aug 6, 2019

Moey
Oct 22, 2010

I LIKE TO MOVE IT


Paul MaudDib posted:


... can you? I didn't think you could, since RAID mode wouldn't be loaded until the OS got far enough to boot, but maybe there's enough on /boot to get the array mounted? Especially if you go FreeBSD I guess, that probably would work there, but not sure if LVM would on Linux, and ZFS-on-Linux seems like it definitely wouldn't.

There's probably some other downsides, like a full array potentially making your system disks unwriteable and thus unbootable, but in the general case it's probably fine then.

Before virtualization became the norm, tons of businesses would do hardware raid 1 for their OS boot volume then use the remaining disks for raid 5/6/10 for the local storage for the physical server.

ACRE & EQUAT
Aug 28, 2004

FUNERAL BREADS
WAR BREAD


Thank you all for the explanations

The raw data is tens of TB and it gets added to the pile back at the lab, which is ~100 TB. Then we produce analysis, which is <10 TB.

I'm going to take this advice for the raw data:

Actuarial Fables posted:

With a single device holding all the data, it becomes a single point of failure, and should something happen to it (someone spills water over it, it gets dropped hard, gets stolen, gets lost during shipping) then you're dead. Having multiple independent drives means that they can be in different physical locations and be in separate bags for shipping.
and get a small raid 1 for the analysis.

Thanks!

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.


Clapping Larry

Yeah having multiple storage servers and then having backups of those multiple ones would be ideal. That way there is no single point of failure, even per server.

I cannot stress enough how much a local backup copy will come in handy if something goes wrong, like if drives fall out of the array or your SATA controller dies and has been spitting garbage into your parity tables.

I had no idea how fragile this stuff can be until I started fixing storage servers.

Crunchy Black
Oct 24, 2017

CASTOR: Uh, it was all fine and you don't remember?
VINDMAN: No, it was bad and I do remember.




Paul MaudDib posted:

... can you? I didn't think you could, since RAID mode wouldn't be loaded until the OS got far enough to boot, but maybe there's enough on /boot to get the array mounted? Especially if you go FreeBSD I guess, that probably would work there, but not sure if LVM would on Linux, and ZFS-on-Linux seems like it definitely wouldn't.

you're forgetting about the existence of hardware RAID controllers that load their own bootroms into the BIOS before OS exec

CopperHound
Feb 14, 2012



I just ordered the 810a chassis & PSU. I think I still need to weigh the cost of server grade boards vs consumer stuff before ordering the rest of the components. The X11SCH looks like it runs ~$200 more than a consumer board plus HBA adapter. While IPMI would be nice, I don't think I would mind physically plugging in a monitor with the initial setup.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



Paul MaudDib posted:

... can you? I didn't think you could, since RAID mode wouldn't be loaded until the OS got far enough to boot, but maybe there's enough on /boot to get the array mounted? Especially if you go FreeBSD I guess, that probably would work there, but not sure if LVM would on Linux, and ZFS-on-Linux seems like it definitely wouldn't.


you can boot from mdadm and zfs-on-linux. I haven't looked into the details of how it's accomplished, but I know you've been able to boot from mdadm for like a decade or more.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.



Oven Wrangler

CopperHound posted:

I just ordered the 810a chassis & PSU. I think I still need to weigh the cost of server grade boards vs consumer stuff before ordering the rest of the components. The X11SCH looks like it runs ~$200 more than a consumer board plus HBA adapter. While IPMI would be nice, I don't think I would mind physically plugging in a monitor with the initial setup.

My approach was to buy server grade but several years old, since even a ten year old quad core Xeon is still more than adequate for a NAS+Plex server. I went with Lynnfield (X8SIL-F + Xeon L3426) but you could get Ivy Bridge pretty cheap at this point; an X9SCM with a Xeon E3-1220 v2 will go for under $100. Conveniently, ECC DDR3 is also getting very inexpensive by now. Is there a reason you want a new platform other than reliability concerns?

Crunchy Black
Oct 24, 2017

CASTOR: Uh, it was all fine and you don't remember?
VINDMAN: No, it was bad and I do remember.




Eletriarnation posted:

My approach was to buy server grade but several years old, since even a ten year old quad core Xeon is still more than adequate for a NAS+Plex server. I went with Lynnfield (X8SIL-F + Xeon L3426) but you could get Ivy Bridge pretty cheap at this point; an X9SCM with a Xeon E3-1220 v2 will go for under $100. Conveniently, ECC DDR3 is also getting very inexpensive by now. Is there a reason you want a new platform other than reliability concerns?

Power consumption on anything pre-Sandy Bridge is going to be god fucking awful comparatively. I say this as someone who (somewhat as a joke) owns a quad-socket Gainestown. Nehalem is old. Plus newer instruction sets on newer CPUs help a lot.

Moey
Oct 22, 2010

I LIKE TO MOVE IT


Eletriarnation posted:

My approach was to buy server grade but several years old, since even a ten year old quad core Xeon is still more than adequate for a NAS+Plex server. I went with Lynnfield (X8SIL-F + Xeon L3426) but you could get Ivy Bridge pretty cheap at this point; an X9SCM with a Xeon E3-1220 v2 will go for under $100. Conveniently, ECC DDR3 is also getting very inexpensive by now. Is there a reason you want a new platform other than reliability concerns?

Hey now, depends on how many transcodes you are cranking out (without GPU offload). A limited upload speed + a handful of remote users will cripple some CPUs.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.



Oven Wrangler

Crunchy Black posted:

Power consumption on anything pre-Sandy Bridge is going to be god fucking awful comparatively. I say this as someone who (somewhat as a joke) owns a quad-socket Gainestown. Nehalem is old. Plus newer instruction sets on newer CPUs help a lot.

Good thing I recommended Ivy Bridge, then. That's not quite true though - Lynnfield is a lot more similar to Sandy Bridge than to Nehalem. I see 20-30W idle out of the whole system before drives, vs. ~60W with my old underclocked i7-920.

Moey posted:

Hey now, depends on how many transcodes you are cranking out (without GPU offload). A limited upload speed + a handful of remote users will cripple some CPUs.

This is fair, but even so if that's a concern it's worth considering whether GPU offload would be the more cost-effective decision vs. a new server board. In my case I don't have many concurrent users and the older processor does fine.

Hadlock
Nov 9, 2004





Eletriarnation posted:

Good thing I recommended Ivy Bridge, then.

Ivy Bridge is about the oldest modern processor worth using at this point; it's not substantially different from what's being released these days in 2019. Haswell, released in 2013, is basically Ivy Bridge but with half the power usage. My Ivy Bridge products pull just under 10 watts at idle running Windows.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



Just my seemingly weekly reminder to really figure out if the power draw really matters to you. I mean having double or whatever the power draw sounds pretty bad, but if you're spending $2 vs $1 a month you might not care that much.

My whole Lynnfield server costs like 5 bucks a month.

Hadlock
Nov 9, 2004





I am mostly interested in the heat output of the device. A 10W device generally puts out a fraction of the heat of a 100W device. My DS418 lives in a small cabinet with a small ventilation/cable-routing hole and runs mostly silent with cool drive temps. If it were a mini-ITX box running a 2007-era Core 2 Duo that idles at 65W, the heatsink fan would be running in hairdryer mode most of the time.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.



Oven Wrangler

My server is a full tower sitting in a walk-in closet, so all I really care about is the time a new system would take to pay for itself in power savings. I pay <$0.09/kWh, so even if I could somehow spend $200 to reduce my Lynnfield's current 25W to ~0, it would take a decade for me to break even. If your power is more expensive or you have other constraints then it might not be true for you, but there are situations where running 10-year-old gear makes perfect sense.
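
For anyone who wants to run the same break-even math with their own numbers, here's a minimal sketch - the $200 upgrade cost, 25W saved, and $0.09/kWh rate are just the figures from this post, not measurements of anyone else's setup:

code:
# Back-of-envelope payback estimate for swapping to lower-power hardware.
# All inputs are example numbers from the post above - plug in your own.

HOURS_PER_YEAR = 24 * 365

def payback_years(upgrade_cost_usd, watts_saved, price_per_kwh):
    """Years until the electricity savings cover the cost of the new gear."""
    kwh_saved_per_year = watts_saved / 1000 * HOURS_PER_YEAR
    return upgrade_cost_usd / (kwh_saved_per_year * price_per_kwh)

# $200 spent to eliminate a 25W always-on draw at $0.09/kWh:
print(payback_years(200, 25, 0.09))  # ~10 years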

Eletriarnation fucked around with this message at 02:53 on Aug 7, 2019

Paul MaudDib
May 3, 2006

"Tell me of your home world, Usul"


CopperHound posted:

I just ordered the 810a chassis & PSU. I think I still need to weigh the cost of server grade boards vs consumer stuff before ordering the rest of the components. The X11SCH looks like it runs ~$200 more than a consumer board plus HBA adapter. While IPMI would be nice, I don't think I would mind physically plugging in a monitor with the initial setup.

Some tips on this build: you will need an 8-pin EPS power (CPU aux power) extension cable and a 24-pin ATX extension cable to make it work. They're cheap on eBay but you'll have to deal with the slow boat from China, so get them on order. I've emailed U-NAS support to mention this and ask them to put it on the page, but they said "it depends on the PSU," which is total BS - there's no way a 1U flex PSU comes with cables long enough to make that work. Mine wasn't the exact PSU they sell (theirs is a Seasonic 300W and I got the 350W version instead) but it's pretty darn similar.

Grab an assortment pack of heat-shrink tubing off eBay as well. I screwed around with trying to use a pin-puller tool to remove the spare wires from the power supply cables, then just threw my hands up, clipped them with tin snips, and put heat-shrink over the ends. This will help reduce clutter inside the case - there is absolutely no spare room in there with everything closed.

The case doesn't come with a riser cable. The official one is too long and you will have to do something with the excess slack, for me it is bunched up where the second card would go. If I wanted to do a second card I would have to swap out for shorter cables.

Be VERY careful with the front plate - that rubberized coating is extremely thin and will scratch from as little as setting the case down on its face. I would work on a towel.

Paul MaudDib fucked around with this message at 21:28 on Aug 8, 2019

CopperHound
Feb 14, 2012



Paul MaudDib posted:

Some tips on this build: you will need an 8-pin EPS power (CPU aux power) extension cable and a 24-pin ATX extension cable to make it work....
...The case doesn't come with a riser cable. The official one is too long and you will have to do something with the excess slack, for me it is bunched up where the second card would go. If I wanted to do a second card I would have to swap out for shorter cables.
Ugh, of course. I ordered the PSU and riser cables from them with the assumption that it would reduce the amount of monkeying around I'd have to do.

Crunchy Black
Oct 24, 2017

CASTOR: Uh, it was all fine and you don't remember?
VINDMAN: No, it was bad and I do remember.




Hadlock posted:

Ivy Bridge is about the oldest modern processor worth using at this point; it's not substantially different from what's being released these days in 2019. Haswell, released in 2013, is basically Ivy Bridge but with half the power usage. My Ivy Bridge products pull just under 10 watts at idle running Windows.

Even the military knows this (well, Air Force)
/worked for supplier of x86 compute for several projects. They could literally keep the planes in the air longer per refuel every time they re-computed the airframes

Thermopyle posted:

Just my seemingly weekly reminder to really figure out if the power draw really matters to you. I mean having double or whatever the power draw sounds pretty bad, but if you're spending $2 vs $1 a month you might not care that much.

My whole Lynnfield server costs like 5 bucks a month.

I mean, my dual 2603v3 is insane overkill even undervolted, but combined with the drives and the fact that it's in a 2U enclosure, it's far and away my biggest power consumer and heat producer.

Schadenboner
Aug 15, 2011

I MEAN, TURN OFF YOURE MONITOR, MIGTH EXPLAIN YOUR BAD POSTS, HOPE THIS HELPS?!

Crunchy Black posted:

Even the military knows this (well, Air Force)
/worked for supplier of x86 compute for several projects. They could literally keep the planes in the air longer per refuel every time they re-computed the airframes


I mean, my dual 2603v3 is insane overkill even undervolted, but combined with the drives and the fact that it's in a 2U enclosure, it's far and away my biggest power consumer and heat producer.

Why not, like, install a windmill on the plane to power the computers though?

E: Even better, use the computer heat to boil water to turn a turbine to power an air conditioner?

Schadenboner fucked around with this message at 01:08 on Aug 9, 2019

Crunchy Black
Oct 24, 2017

CASTOR: Uh, it was all fine and you don't remember?
VINDMAN: No, it was bad and I do remember.




Eletriarnation posted:

Good thing I recommended Ivy Bridge, then. That's not quite true though - Lynnfield is a lot more similar to Sandy Bridge than to Nehalem.

Sorry, I got my nomenclatures mixed up with the early teens model numbers. That said, go as new as you can afford, period.

Crunchy Black
Oct 24, 2017

CASTOR: Uh, it was all fine and you don't remember?
VINDMAN: No, it was bad and I do remember.




Schadenboner posted:

Why not, like, install a windmill on the plane to power the computers though?

E: Even better, use the computer heat to boil water to turn a turbine to power an air conditioner?

I realize you're mostly joking but you realize that most mil airplanes generate 400Hz power right

Schadenboner
Aug 15, 2011

I MEAN, TURN OFF YOURE MONITOR, MIGTH EXPLAIN YOUR BAD POSTS, HOPE THIS HELPS?!

Crunchy Black posted:

I realize you're mostly joking but you realize that most mil airplanes generate 400Hz power right

What if Convair X-6 NB-36H but with computars?

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.



Oven Wrangler

Crunchy Black posted:

Sorry, I got my nomenclatures mixed up with the early teens model numbers. That said, go as new as you can afford, period.

I don't really know what to call that except dogma. You're playing it safe and that's understandable, but there are a lot of use cases where new hardware is just not necessary and plenty of money can be saved by not buying it. I could afford to buy a brand new server every year for my home NAS but I'd probably be looking back after ten years going "why did I waste all those hours and thousands of dollars?"

Eletriarnation fucked around with this message at 01:35 on Aug 9, 2019

Crunchy Black
Oct 24, 2017

CASTOR: Uh, it was all fine and you don't remember?
VINDMAN: No, it was bad and I do remember.




Schadenboner posted:

What if Convair X-6 NB-36H but with computars?

Well played.

Eletriarnation posted:

I don't really know what to call that except dogma. You're playing it safe and that's understandable, but there are a lot of use cases where new hardware is just not necessary and plenty of money can be saved by not buying it.

I don't really see what you're arguing here. For this project, the airframes are literally stripped down to the fuselage and even the turbofans are replaced basically every 8 years. So why wouldn't they replace $1M worth of compute when they're doing many times that in repowering?

Sorry you edited that after my post

Eletriarnation posted:

I don't really know what to call that except dogma. You're playing it safe and that's understandable, but there are a lot of use cases where new hardware is just not necessary and plenty of money can be saved by not buying it. I could afford to buy a brand new server every year for my home NAS but I'd probably be looking back after ten years going "why did I waste all those hours and thousands of dollars?"

You're talking about one of the most prized airborne projects of the military, not one of our dinky homelabs. I don't understand the comparison.

Crunchy Black fucked around with this message at 01:38 on Aug 9, 2019

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.



Oven Wrangler

Crunchy Black posted:

Well played.


I don't really see what you're arguing here. For this project, the airframes are literally stripped down to the fuselage and even the turbofans are replaced basically every 8 years. So why wouldn't they replace $1M worth of compute when they're doing many times that in repowering?

Sorry you edited that after my post


You're talking about one of the most prized airborne projects of the military, not one of our dinky homelabs. I don't understand the comparison.

What? I was responding to you saying "go as new as you can afford, period" which you said right after quoting me talking about Ivy Bridge. If you're not saying "don't buy Ivy Bridge for your home NAS if you can manage it, buy something newer" or something similar then what are you saying there?

I wasn't making any statement about planes or military systems, and I totally agree with you that buying an old computer to put in a new plane would be an exceptionally strange decision.

Apologies for the ninja edit, I have a tendency to reread what I wrote a minute later and think of something else to better clarify it.

Eletriarnation fucked around with this message at 01:49 on Aug 9, 2019

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



Crunchy Black posted:

I don't really see what you're arguing here. For this project, the airframes are literally stripped down to the fuselage and even the turbofans are replaced basically every 8 years. So why wouldn't they replace $1M worth of compute when they're doing many times that in repowering?

What does this have to do with anyone in this thread?

Anyway, FWIW:

My whole Lynnfield server with 24-ish drives averages ~140 watts and costs me around $10 USD/month. (I use Telegraf to extract power usage from apcupsd every 5 minutes and feed it into InfluxDB.)

My drives probably average 30 watts and other system components probably another 30, which means the CPU is averaging around 80W.

If I halved the average power draw with a newer CPU that would save me around $3/month.
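
If anyone wants the same back-of-the-napkin number without standing up the whole Telegraf/InfluxDB pipeline, here's a rough sketch that reads the load straight from apcupsd. It assumes apcupsd is running locally and that your UPS reports the LOADPCT and NOMPOWER fields (not every model does), and the $0.10/kWh rate is a placeholder:

code:
# Rough "what is this box actually costing me?" check using apcupsd, the same
# data source as the Telegraf setup above. Assumes apcupsd is running locally
# and that your UPS reports LOADPCT and NOMPOWER (not all models report the
# latter). The electricity rate is a placeholder.
import subprocess

PRICE_PER_KWH = 0.10           # $/kWh, assumed - use your own rate
HOURS_PER_MONTH = 24 * 365 / 12

def apcaccess_fields():
    """Parse `apcaccess status` output into a dict of field -> value string."""
    out = subprocess.run(["apcaccess", "status"],
                         capture_output=True, text=True, check=True).stdout
    fields = {}
    for line in out.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            fields[key.strip()] = value.strip()
    return fields

f = apcaccess_fields()
load_pct = float(f["LOADPCT"].split()[0])    # e.g. "19.0 Percent"
nom_power = float(f["NOMPOWER"].split()[0])  # e.g. "865 Watts"
watts = nom_power * load_pct / 100

print(f"current draw: {watts:.0f} W")
print(f"estimated cost: ${watts / 1000 * HOURS_PER_MONTH * PRICE_PER_KWH:.2f}/month")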

Arvid
Oct 9, 2005


I have a question about setting up a new volume on a Synology NAS. Right now I have a DS213J with 2 x 3TB WD Red drives that's 99% full, so I have bought an RS819 and one 10TB WD Red drive to replace the old gear entirely. The DS213J is running JBOD since redundancy was not a consideration with just two drive bays. While setting up my new RS819 I have to decide what kind of volume I want. There will of course be no redundancy right now since I have just one drive, but how should I set it up for the best flexibility when I add drives in the future? I might want redundancy, though it's not a big concern right now. I have absolutely no experience with RAID stuff so I don't really know what option to use, but it seems that SHR might be flexible enough for whatever I might want to do in the future, whether that means just adding more drives to the same volume or adding drives for redundancy? I don't want to end up in a situation where I have to rebuild the entire array to do what I want and then have nowhere to put the data meanwhile.

Schadenboner
Aug 15, 2011

I MEAN, TURN OFF YOURE MONITOR, MIGTH EXPLAIN YOUR BAD POSTS, HOPE THIS HELPS?!

If you have a 5-bay Synology (DS1019+) with 5 disks of the same size, can you have two 2-disk RAID1s with the 5th being hot spare for either (whichever one fails first)?

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles



https://www.synology.com/en-global/...anager/hotspare

Hot spares are global, so they'll jump on whatever pool needs them. You're consuming 3 disks for redundancy in that config; I would suggest looking at a single RAID6 group for better utilization of your hardware.

Schadenboner
Aug 15, 2011

I MEAN, TURN OFF YOURE MONITOR, MIGTH EXPLAIN YOUR BAD POSTS, HOPE THIS HELPS?!

BangersInMyKnickers posted:

https://www.synology.com/en-global/...anager/hotspare

Hot spares are global, so they'll jump on whatever pool needs them. You're consuming 3 disks for redundancy in that config; I would suggest looking at a single RAID6 group for better utilization of your hardware.

If you're dealing with disks of the same size is there any difference between/reason to choose RAID6 versus SHR2? SHR/SHR2's thing is that they make chucking a bunch of differently-sized disks together work better, right?

ChiralCondensate
Nov 13, 2007

what is that man doing to his colour palette?


Grimey Drawer

Maybe this isn't the best thread, but: is there such a thing as a KVM-over-IP card that actually is its own VGA adapter, instead of needing to capture the existing video output? E.g. with this KVM-over-IP card you plug a stubby VGA cable into it from the system's video output, which my system doesn't have. (I know I can just buy a video card, but I also wanted the advantages of KVM-over-IP.)

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles



Schadenboner posted:

If you're dealing with disks of the same size is there any difference between/reason to choose RAID6 versus SHR2? SHR/SHR2's thing is that they make chucking a bunch of differently-sized disks together work better, right?

If it's all the same size disk, RAID5:SHR and RAID6:SHR2 are functionally identical. SHR is nice if you're planning to grow because you only need to add in two disks of the newer larger capacity for some amount of the space to become usable; that same system in SHR2 would require 3 of the larger disks so it can properly distribute the second parity copy. Keep that in mind with future upgrade paths.
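
If it helps to see that with actual numbers, here's a rough sketch of the usual rule of thumb for SHR capacity (usable space is roughly the total minus the largest disk, or minus the two largest for SHR2). It's only an approximation and it ignores the minimum-drive-count requirements described above:

code:
# Rule-of-thumb usable capacity for SHR / SHR-2 with mixed disk sizes:
# roughly the total capacity minus the largest disk (SHR) or the two largest
# (SHR-2). It's an approximation only - it ignores filesystem overhead and the
# minimum number of larger disks needed before the extra-capacity tier kicks in.

def shr_usable(disks_tb, redundancy=1):
    disks = sorted(disks_tb, reverse=True)
    return sum(disks) - sum(disks[:redundancy])

print(shr_usable([4, 4, 4, 4]))                 # 12 TB usable (like RAID5)
print(shr_usable([4, 4, 4, 4, 8]))              # 16 TB - the extra 4 TB sits idle
print(shr_usable([4, 4, 4, 4, 8, 8]))           # 24 TB - a second 8 TB disk unlocks it
print(shr_usable([4, 4, 4, 4], redundancy=2))   # 8 TB usable (like RAID6 / SHR-2)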

The Milkman
Jun 22, 2003

No one here is alone,
satellites in every home


Lipstick Apathy

This is maybe more networking related, but I have a sneaking suspicion I have some bum/old cables somewhere in my rats' nest. My UniFi controller reports everything is working at full-duplex gigabit, but transfer speeds between my FreeNAS box and my desktop (and between other machines too) have been suspiciously slow. I'd like to run some network benchmarks, preferably being able to also test reading from disk. Searching brought up iperf, and it seems to be what I want?

Schadenboner
Aug 15, 2011

I MEAN, TURN OFF YOURE MONITOR, MIGTH EXPLAIN YOUR BAD POSTS, HOPE THIS HELPS?!

BangersInMyKnickers posted:

If it's all the same size disk, RAID5:SHR and RAID6:SHR2 are functionally identical. SHR is nice if you're planning to grow because you only need to add in two disks of the newer larger capacity for some amount of the space to become usable; that same system in SHR2 would require 3 of the larger disks so it can properly distribute the second parity copy. Keep that in mind with future upgrade paths.

Starting with a two-disk SHR and then adding three more disks a few months later (once I'm sure the NAS is working and I'm actually going to use it) and converting it to either a 5-disk SHR2 or a 4-disk SHR2 with a spare would be cromulent, right?

E: The original thinking was for a 918+ but the price difference between that and the 1019+ (548 vs. 640 on Amazon) drops to like once you factor in buying Synology ram for the 918 and "MORE DISK HOLES EQUALS BETTER THAN" (as the kids say these days).

E2: The right way to do the M.2 SSD cache thing on a Synology is Read rather than R/W right?

Schadenboner fucked around with this message at 20:20 on Aug 14, 2019

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



I've got 24 drives smushed into my current case.

I need more space for drives and more sata ports and I don't like to spend money.

I only have some PCI Express 2 x1 slots left on my current motherboard.

My first thought is to buy another case and hack something together with some long-ish SATA cables and some SATA cards to go in those open slots I have.

That's starting to take up a lot of physical space though.

However, it seems like maybe that's my only choice without finding some expensive rack-mount case with room for more drives with new mobo/cpu/ram.

Any other suggestions?

D. Ebdrup
Mar 13, 2009



The Milkman posted:

This is maybe more networking related, but I have a sneaking suspicion I have some bum/old cables somewhere in my rats' nest. My UniFi controller reports everything is working at full-duplex gigabit, but transfer speeds between my FreeNAS box and my desktop (and between other machines too) have been suspiciously slow. I'd like to run some network benchmarks, preferably being able to also test reading from disk. Searching brought up iperf, and it seems to be what I want?
iperf version 2.x (i.e. not iperf3) is what you want, because iperf2 is built to scale with the number of threads available, whereas iperf3 is optimized specifically for Linux's network stack, where it's more important to have a single turbo-boosted thread.
If they're running at half-duplex, though, it's typically very obvious from the speeds you get, so what kind of speeds are you getting?
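
If you just want a quick sanity check that the wire itself, rather than the disks, can move data at gigabit rates, iperf is still the right tool - but to illustrate what it's actually measuring, here's a minimal memory-to-memory TCP throughput sketch. The port number and 10-second duration are arbitrary choices, nothing iperf-specific:

code:
# Minimal memory-to-memory TCP throughput test - the same basic thing iperf
# automates, with none of the features. Run "python net_test.py server" on one
# machine and "python net_test.py client <server-ip>" on the other.
import socket
import sys
import time

PORT = 5201          # arbitrary; happens to be iperf3's default
CHUNK = 1024 * 1024  # 1 MiB send/receive buffer
SECONDS = 10         # how long the client transmits

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        total, start = 0, time.time()
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            total += len(data)
        elapsed = time.time() - start
        print(f"received {total / 1e6:.0f} MB in {elapsed:.1f}s "
              f"= {total * 8 / elapsed / 1e6:.0f} Mbit/s from {addr[0]}")

def client(host):
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        end = time.time() + SECONDS
        while time.time() < end:
            conn.sendall(payload)

if __name__ == "__main__":
    client(sys.argv[2]) if sys.argv[1] == "client" else server()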

Thermopyle posted:

I've got 24 drives smushed into my current case.

I need more space for drives and more sata ports and I don't like to spend money.

I only have some PCI Express 2 x1 slots left on my current motherboard.

My first thought is to buy another case and hack something together with some long-ish SATA cables and some SATA cards to go in those open slots I have.

That's starting to take up a lot of physical space though.

However, it seems like maybe that's my only choice without finding some expensive rack-mount case with room for more drives with new mobo/cpu/ram.

Any other suggestions?
Are they open-ended slots? If so, you could get some JBOD expanders and an HBA flashed to IT mode.
Otherwise yeah, it does seem like you need to get yourself a rack.

I already got the rack server I need for my setup, I'm now just looking for a 4U JBOD expander.
