Ninja Rope
Oct 22, 2005

Wee.


LACP will only balance at "per connection" granularity at best, meaning that if you have four NICs bonded together, a single connection will never run across more than one of them, so you'll never get more than one NIC's worth of performance out of a single connection.
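To see why, here's a toy sketch of the per-flow hashing a bonded link does; the header fields and modulo hash below are purely illustrative, not any vendor's actual algorithm:

```python
# Toy model of LACP's per-flow balancing: the bond hashes header fields
# to pick one member link, so every packet of a given connection uses the
# same NIC. Real switches use vendor-specific hashes; this is illustrative.

def pick_nic(src_mac: str, dst_mac: str, src_port: int, dst_port: int,
             num_nics: int) -> int:
    """Deterministically map one flow to one member link."""
    return hash((src_mac, dst_mac, src_port, dst_port)) % num_nics

# A single connection always hashes to the same link, so it can never
# exceed one NIC's worth of bandwidth no matter how many are bonded.
flow = ("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", 49152, 445)
links_used = {pick_nic(*flow, num_nics=4) for _ in range(1000)}
print(len(links_used))  # 1 -- the flow is pinned to a single link
```

Many simultaneous connections will (statistically) spread across the links, which is why LACP still helps a file server with several clients.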

I don't know anything about non-LACP load-balanced NIC bonding.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll

Nap Ghost

If you're serious about using FCoE and want that kind of throughput, you should be throwing out a target IOPS figure you'd like the array to provide. Also, your choice of hypervisors may be somewhat relevant in that your SAN network topology may be affected a tad (Hyper-V's LUN-VHD mappings are substantially different from how VMware favors things to be run, for example).

But I think you could hit the sort of things you're looking for by scaling to at least 20 SATA 7200RPM spindles, picking up a decent gigabit ethernet switch (there are some HP and Dell switches for <$200 that could do the job, though I can't quite find the links right now), and making sure that your storage traffic is physically isolated, or at least on separate VLANs, from the rest of your network traffic. SAS expanders may be helpful in your setup, but if you're trying to do it as cheaply as possible, you may want to buy one of those Norco 4U cases with 20+ drive bays, shove the disks into an array, and pack a 4-port NIC or a couple of 2-port NICs into the server. You'll likely want a lot more CPU and memory horsepower than what an HP Microserver can provide, although I suspect you could get away with something like a last-gen Core i3 CPU even if you're running ZFS.

Ultimately, what you need to buy beyond just disks and NICs will also depend upon your storage performance requirements (not just in terms of service bandwidth but in terms of latency, expected storage traffic load, etc.). And after that your logical storage layout will be the prevailing factor (how you map your LUNs out to disk groups, dedupe settings, how many vdevs in a zpool, etc.)

Comradephate
Feb 28, 2009



College Slice

necrobobsledder posted:

If you're serious about using FCoE and want that kind of throughput, you should be throwing out a target IOPS figure you'd like the array to provide. Also, your choice of hypervisors may be somewhat relevant in that your SAN network topology may be affected a tad (Hyper-V's LUN-VHD mappings are substantially different from how VMware favors things to be run, for example).

But I think you could hit the sort of things you're looking for by scaling to at least 20 SATA 7200RPM spindles, picking up a decent gigabit ethernet switch (there are some HP and Dell switches for <$200 that could do the job, though I can't quite find the links right now), and making sure that your storage traffic is physically isolated, or at least on separate VLANs, from the rest of your network traffic. SAS expanders may be helpful in your setup, but if you're trying to do it as cheaply as possible, you may want to buy one of those Norco 4U cases with 20+ drive bays, shove the disks into an array, and pack a 4-port NIC or a couple of 2-port NICs into the server. You'll likely want a lot more CPU and memory horsepower than what an HP Microserver can provide, although I suspect you could get away with something like a last-gen Core i3 CPU even if you're running ZFS.

Ultimately, what you need to buy beyond just disks and NICs will also depend upon your storage performance requirements (not just in terms of service bandwidth but in terms of latency, expected storage traffic load, etc.). And after that your logical storage layout will be the prevailing factor (how you map your LUNs out to disk groups, dedupe settings, how many vdevs in a zpool, etc.)

Thanks for the info - I have a better idea of what I need to do now, or at least, I have a better idea of what I need to learn about now. I unfortunately never really interact with SANs at my current job, so they're somewhat mysterious to me.

I'm using Hyper-V, but only because I was interested in learning more about it at the time I bought some new hardware, and I had some licenses from WebsiteSpark. As for my IOPS needs, they're pretty minimal. This is entirely a project for my own amusement/education: the only thing I have on my current host is two domain controllers and a media server. I'll probably add an Arch Linux box to play around with, and something to handle home automation once I have some money to throw at it. I stream to 2-3 targets with the media server, and the domain controllers just manage the various devices I have in my home, nothing strenuous. All of the VMs are currently running on a RAID 10 of four commodity 3TB drives without issue.

Megaman
May 8, 2004
I didn't read the thread BUT...

Can someone tell me about upgrading freenas? 9.1 just came out, and I'm running 8.1 with a 7 disk z3 setup. Should I just be able to format a new key, pop it in, and import the old set? Or should I literally backup the configs from the old setup and import them into the new setup? Not sure what the BEST way to do it is.

Odette
Mar 19, 2011



Megaman posted:

Can someone tell me about upgrading freenas? 9.1 just came out, and I'm running 8.1 with a 7 disk z3 setup. Should I just be able to format a new key, pop it in, and import the old set? Or should I literally backup the configs from the old setup and import them into the new setup? Not sure what the BEST way to do it is.

I am assuming you keep the OS on a USB stick; if so, it would be a lot easier to wipe the stick and install a fresh copy of 9.1 on it. I'm not sure when FreeNAS bumped the zpool version, so you may have to upgrade the zpool after importing it.

SEKCobra
Feb 28, 2011


Odette posted:

I am assuming you keep the OS on a USB stick; if so, it would be a lot easier to wipe the stick and install a fresh copy of 9.1 on it. I'm not sure when FreeNAS bumped the zpool version, so you may have to upgrade the zpool after importing it.

Can I take along my config?

Odette
Mar 19, 2011



SEKCobra posted:

Can I take along my config?

You'd probably have to export the config and import it once you've got 9.1 all set up. OR you could just install 9.1 over 8.1. Can anyone more experienced let me know if I'm wrong?

Megaman
May 8, 2004
I didn't read the thread BUT...

Odette posted:

I am assuming you keep the OS on a USB stick; if so, it would be a lot easier to wipe the stick and install a fresh copy of 9.1 on it. I'm not sure when FreeNAS bumped the zpool version, so you may have to upgrade the zpool after importing it.

You can upgrade the zpool? Interesting, I'll have to read about that.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


Upgrading a zpool just bumps the on-disk ZFS version that the pool uses so newer features become available. Note that it's one-way: once upgraded, the pool can't be imported by an older release.

Farmer Crack-Ass
Jan 2, 2001

~this is me posting irl~


Has anyone heard of Dell FS12 servers? It sounds like they were custom servers built for someone that are now making their way to the refurb market. They seem to all come with PERC6/i RAID controllers.

Fourteen
Aug 15, 2002

No, no, no you imbecile! That's not talc, that's paprika!

Gendo posted:

Has anyone here built an array with 4TB drives? The Seagate 4TB NAS drives just dropped to $209.99, which makes them a better deal in terms of cost per gig than the 3TB WD Reds. I'm just nervous about the prospect of rebuild times for an array with 4TB members.

I'm going to be putting them in a Synology device (either the DS1812+ or DS1813+). Probably in a RAID-10 array.

I have 5 of them in a 1512+ using SHR. Previously had the 3TB drives and upgraded to the 4TB. Took about a day for each drive to rebuild. Solid so far.

thideras
Oct 27, 2010

Fuck you, I'm a tree.


Fun Shoe

Farmer Crack-Ass posted:

Has anyone heard of Dell FS12 servers? It sounds like they were custom servers built for someone that are now making their way to the refurb market. They seem to all come with PERC6/i RAID controllers.
Looks like they are custom built, yes. That isn't too bad for the price, but I would at least put in a different controller; the PERC 5/6 are pretty old and there are much better options. You lose out on a bit of the Dell quality (no hot-swap power supplies, monitoring, etc.), but that's reflected in the price. I haven't looked at hardware of that age recently, but I don't think you will find much that competes with it.

My only concern would be finding replacement parts, specifically the power supply.

I'd be interested in one if they didn't come with a motherboard/CPU/RAM, to use as a drive backend.

EDIT: I also looked up some of the processors these are running and they are older than I thought. Looking through the listing, these are running Shanghai/Istanbul era processors. I figured they were at least Magny Cours/Bulldozer. I'd avoid it just based on that.

thideras fucked around with this message at 03:40 on Aug 16, 2013

Farmer Crack-Ass
Jan 2, 2001

~this is me posting irl~


It looks like some of the FS12s come with Harpertown Xeon processors.

I think I see what you mean about the Perc6 though - it looks like Perc6 doesn't support hard drives bigger than 2TB.

jarito
Aug 26, 2003



Biscuit Hider

Looks like the N54L is on sale again at Newegg for $289. Is that still the recommended box?


http://www.newegg.com/Product/Produ...N82E16859107921

Megaman
May 8, 2004
I didn't read the thread BUT...

jarito posted:

Looks like the N54L is on sale again at Newegg for $289. Is that still the recommended box?


http://www.newegg.com/Product/Produ...N82E16859107921

I bought one and I cannot recommend it enough. I put 5 drives in mine and loved it so much that I bought a second as an offline backup, and backed that up to single offline disks as well. FreeNAS on both, rsync --delete to the second one once a week, then shut it down after it's done. Flawless home storage + backup solution.

tarepanda
Mar 26, 2011

Living the Dream

Does anyone have any idea when HP will kill the N54L line in favor of the G8 line?

AlternateAccount
Apr 25, 2005
FYGM

jarito posted:

Looks like the N54L is on sale again at Newegg for $289. Is that still the recommended box?


http://www.newegg.com/Product/Produ...N82E16859107921

Is the PSU in this going to have enough beef to spin four 3 or 4TB drives?

Megaman
May 8, 2004
I didn't read the thread BUT...

AlternateAccount posted:

Is the PSU in this going to have enough beef to spin four 3 or 4TB drives?

I'd need to look at the specs more closely, but I'm fairly certain three or four 4TB drives are fine, since I'm running five 1TB drives without issue, and it's known to handle six 1TB drives as well. I believe power consumption for a 1TB drive is roughly the same as for a 4TB one; it's mostly the platter count that varies. So I think you're fine.
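If you want to sanity-check it yourself, here's a rough back-of-envelope; every wattage below is an assumption (check your drives' datasheets), and the 150W PSU figure is just what's commonly cited for the MicroServer:

```python
# Rough PSU headroom estimate for a small NAS box. Every number here is
# an assumption -- check your PSU rating and drive datasheets. The
# MicroServer's PSU is commonly cited as 150W.

PSU_WATTS = 150          # assumed PSU rating
BASE_SYSTEM_WATTS = 45   # assumed CPU/board/RAM draw under light load
SPINUP_WATTS = 25        # assumed worst-case per-drive draw at spin-up
ACTIVE_WATTS = 8         # assumed per-drive draw during reads/writes

def headroom(num_drives: int, per_drive_watts: float) -> float:
    """Watts left after the base system and drives are accounted for."""
    return PSU_WATTS - BASE_SYSTEM_WATTS - num_drives * per_drive_watts

for n in (4, 5, 6):
    print(f"{n} drives: spin-up headroom {headroom(n, SPINUP_WATTS):+.0f}W, "
          f"active headroom {headroom(n, ACTIVE_WATTS):+.0f}W")
```

With these made-up numbers, simultaneous spin-up of five or six drives looks marginal, which is presumably why staggered spin-up and the drives' real (lower) draws make it work fine in practice.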

featurecreep
Jul 23, 2002

Yes, Robinson, take the Major, the Robot, your wife and kids... but leave Will for my plea-- his education.

AlternateAccount posted:

Is the PSU in this going to have enough beef to spin four 3 or 4TB drives?

I run 5 3TB WD Reds and an SSD in mine without issues.

thideras
Oct 27, 2010

Fuck you, I'm a tree.


Fun Shoe

I got the replacement in for my file server: a second Dell Poweredge R710. This one is also sporting dual E5645 processors and "only" 128 GB of RAM. I could only get it with disks and a RAID controller, so it has eight Seagate Savvio 10K.3 drives paired with a H700 card. This is only used for the virtual machines. For mass storage, I'm using the old M1015 from the old server, hooked up through an internal to external SAS card, then out to an Omnistor/Rackable Systems SAS expander.

Since I was running Xen Cloud Platform, the migration was somewhat easy. Just had to image the old VM to the new one. It wouldn't let me do a migration because the processors were different.

Just need to get a few small parts for the server and do the wiring/setup for the network, then it will be fully done.


The scanner is there for a Broadcastify/RadioReference stream for my county.


Cabling is terrible.


Please remind me again what the word "overkill" means.

thideras fucked around with this message at 20:54 on Oct 21, 2017

Splinter
Jul 4, 2003
Cowabunga!

Anyone ever have issues with CrashPlan losing progress on backups to Central?

I've been backing up around 400GB to Central and was nearing completion (over 300GB completed). When I checked my progress last night, it had dropped back down to a bit over 100GB completed. No changes were made to the backup set. It looks like my entire backup may have restarted. Here's my latest 2 backup reports:

August 2-9
code:
Source > Target              Selected          Files     Backed Up %  Last Completed  Last Activity
CASEY-PC > HTPC-PC           449.9GB           107k      100.0%       5.6 days        5.6 days
HTPC-PC > CrashPlan Central  410.8GB ^279.2GB  401 ^264  47.6%        7.2 days        18 mins
August 9-16
code:
Source > Target              Selected          Files     Backed Up %  Last Completed  Last Activity
CASEY-PC > HTPC-PC           449.9GB ^467.5KB  107k ^18  100.0%       1.3 days        1.3 days
HTPC-PC > CrashPlan Central  410.8GB ^751.9KB  401       76.2%        14.3 days       10 mins
Crashplan currently reports 33% completed to Central out of 410 GB.

All I can think of is this happened around when I switched from the free trial license to the Crashplan+ unlimited license (purchased on the 13th, trial expired around the 17th). I added the purchased key to the crashplan client on HTPC-PC (probably on the 14th or 15th) since that is the one that backs up to central. I don't think I added the key to the client on CASEY-PC.

Any ideas? Did I mis-configure something?

rock2much
Feb 6, 2004



Grimey Drawer

Hello!

For a few years I've had an Acer easyStore H340 and it recently died on me. I want to move on to another NAS and hopefully reuse the hard drives if I can pull the data off. A coworker recommended this Netgear ReadyNAS 314 though he uses a DS1812+. My budget is about $500-ish since I already have the drives.

What I'd like to do is backup my computer on a schedule, store and duplicate photos in case one of the drives dies, store other files that can be lost (videos, mp3s, etc), and have it serve files to a Roku/computer/PS3 in another room. I just don't have much direction on what I should be looking to buy. The Acer model used WHS 2003, came with some software I could install on my PC to schedule backups. If I get something else, I can buy a newer version of WHS or try something free, but I don't know how I'd set up these 4 drives to get going.

The rough idea I'm getting is:
-buy something with a 3 year+ warranty and two NICs
-maybe it has video in case both NICs die (paranoia)
-can take some of the newer, larger drives since the Acer won't see anything over 2TB
-learn more about RAID

Any input would be appreciated. Thanks.

jmoney
May 15, 2003

what might have been

rock2much posted:

Hello!

For a few years I've had an Acer easyStore H340 and it recently died on me. I want to move on to another NAS and hopefully reuse the hard drives if I can pull the data off. A coworker recommended this Netgear ReadyNAS 314 though he uses a DS1812+. My budget is about $500-ish since I already have the drives.

What I'd like to do is backup my computer on a schedule, store and duplicate photos in case one of the drives dies, store other files that can be lost (videos, mp3s, etc), and have it serve files to a Roku/computer/PS3 in another room. I just don't have much direction on what I should be looking to buy. The Acer model used WHS 2003, came with some software I could install on my PC to schedule backups. If I get something else, I can buy a newer version of WHS or try something free, but I don't know how I'd set up these 4 drives to get going.

The rough idea I'm getting is:
-buy something with a 3 year+ warranty and two NICs
-maybe it has video in case both NICs die (paranoia)
-can take some of the newer, larger drives since the Acer won't see anything over 2TB
-learn more about RAID

Any input would be appreciated. Thanks.

These things must have a kill timer on them. My H340 kicked the bucket yesterday. I wish it hadn't, but I'd secretly wanted an upgrade for a while. I pulled the trigger on an N54L and WHS 2011, and the plan is just to add the drives back and move to DrivePool, which is apparently not that big of a deal to do.

If this had been a planned transition, I would have sprung for a Gen8 MicroServer. It has a two-port NIC, and it looks way cooler. Thankfully, both it and the N54L have VGA ports; that really would have come in handy with the Acer, especially at times like these.

rock2much
Feb 6, 2004



Grimey Drawer

jmoney posted:

the plan is just to add the drives back and move to drivepool, which is apparently not that big of a deal to do.

What does that entail?

MREBoy
Mar 14, 2005

MREs - They're whats for breakfast, lunch AND dinner !

Has anyone here ever used an Acer RevoCenter, specifically this one? Thanks to a class-action settlement involving eMachines, I was given the opportunity to get one of these brand new for $35. I was thinking of putting 3 x 3TB WD Greens in it, as the 2TB drive it comes with is apparently a Green. This thing would be replacing the ancient 1GHz Pentium 3 Dell that I slapped a 320GB drive into about 7 years ago as my form of NAS.

jmoney
May 15, 2003

what might have been

rock2much posted:

What does that entail?

My new server hasn't arrived yet, so I haven't tried it, but I came across a couple of links with instructions. I think I got this one from the StableBit site, or something related, so I'm going to try it. It seems straightforward enough: install the new OS, install DrivePool, add the drives back, and fiddle with some settings.

sleepy gary
Jan 11, 2006



Splinter posted:

Anyone ever have issues with CrashPlan losing progress on backups to Central?

I've been backing up around 400GB to Central and was nearing completion (over 300GB completed). When I checked my progress last night, it had dropped back down to a bit over 100GB completed. No changes were made to the backup set. It looks like my entire backup may have restarted. Here's my latest 2 backup reports:

August 2-9
code:
Source > Target              Selected          Files     Backed Up %  Last Completed  Last Activity
CASEY-PC > HTPC-PC           449.9GB           107k      100.0%       5.6 days        5.6 days
HTPC-PC > CrashPlan Central  410.8GB ^279.2GB  401 ^264  47.6%        7.2 days        18 mins
August 9-16
code:
Source > Target              Selected          Files     Backed Up %  Last Completed  Last Activity
CASEY-PC > HTPC-PC           449.9GB ^467.5KB  107k ^18  100.0%       1.3 days        1.3 days
HTPC-PC > CrashPlan Central  410.8GB ^751.9KB  401       76.2%        14.3 days       10 mins
Crashplan currently reports 33% completed to Central out of 410 GB.

All I can think of is this happened around when I switched from the free trial license to the Crashplan+ unlimited license (purchased on the 13th, trial expired around the 17th). I added the purchased key to the crashplan client on HTPC-PC (probably on the 14th or 15th) since that is the one that backs up to central. I don't think I added the key to the client on CASEY-PC.

Any ideas? Did I mis-configure something?

Did you press "compact" on the destinations screen? Are any of your files very large (say, over 30gb)?

IOwnCalculus
Apr 2, 2003





Alternatively, is the CrashPlan service running out of RAM? It limits itself to (I think) 512MB by default, and it uses that up pretty damn quick.

Splinter
Jul 4, 2003
Cowabunga!

DNova posted:

Did you press "compact" on the destinations screen? Are any of your files very large (say, over 30gb)?
I don't recall ever pressing 'compact'. If I did, it was when I first installed CrashPlan. I can't check the file sizes right now, but it's certainly possible. I'm sending CrashPlan backup files to Central. Do they ever get that large?

My setup is: my PC backs up to my server/HTPC, then those backup files are backed up to Central from the server. Nothing else on the server is sent to Central besides my PC's backup files. It certainly looks like the backup is being compressed a bit, as the 449GB from the source is down to 410GB going to Central; however, I'm not sure if that's from hitting 'compact' or CrashPlan's normal compression.

IOwnCalculus posted:

Alternatively, is the Crashplan service running out of RAM? It limits itself to I think 512MB initially and it uses that up pretty damn quick.
I don't think so (haven't seen any errors or warnings), but how would I know? My server has 4GB of RAM. Most of the day it is only running CrashPlan and uTorrent. For a couple hours a night it sometimes is also used for video watching and light web browsing.

rock2much
Feb 6, 2004



Grimey Drawer

jmoney posted:

My new server hasn't arrived yet, so I haven't tried it, but I came across a couple of links with instructions. I think I got this one from the StableBit site, or something related, so I'm going to try it. It seems straightforward enough: install the new OS, install DrivePool, add the drives back, and fiddle with some settings.

Let me know how it works out. My general impression is if I put that OS drive into another NAS it'll say "I don't know how I got here so I'm not going to let you in. Put me back!" and I'll have to reinstall an OS. I'm waiting for a slave drive kit to arrive so I can try and pull anything I really want to keep.

jmoney
May 15, 2003

what might have been

rock2much posted:

Let me know how it works out. My general impression is if I put that OS drive into another NAS it'll say "I don't know how I got here so I'm not going to let you in. Put me back!" and I'll have to reinstall an OS. I'm waiting for a slave drive kit to arrive so I can try and pull anything I really want to keep.

Well, I haven't ever tried it, but I presumed that since they're separate partitions, even if there were an issue with my OS partition, my share stuff should be fine if I plug my main drive into something else.

The server I ordered comes with a 250gb hd, so I'm going to install WHS 2011 on there, setup everything, and then when I'm done I'll erase the OS partition from my original drive, copy the new one over, and use my existing drives as I had before.

sweat poteto
Feb 16, 2006

Everybody's gotta learn sometime

Just a warning to anyone considering the new(ish) Synology DS213j: nice enough hardware, but there is no bootstrap/ipkg support for its new ARM chip.

Going to return this unit and get a 212j with half the RAM :-(

rock2much
Feb 6, 2004



Grimey Drawer

jmoney posted:

If this had been a planned transition, I would have sprung for a Gen8 MicroServer. It has a two-port NIC, and it looks way cooler.

I spoke to one of the server guys at my job and he recommended this to me too so I might pull the trigger on the 2020T (http://www.newegg.com/Product/Produ...N82E16859108029) in a week or so.

thideras
Oct 27, 2010

Fuck you, I'm a tree.


Fun Shoe

Well, it was bound to happen sometime. After the server swap and after moving the disks to the Omnistor, I started hearing the Click of Death and narrowed it down to one drive. Once I got the virtual machine up and running, ZFS immediately flagged the suspect drive as failed and dropped it from the array. I'm currently running in limp mode (with backups if it comes to it).

code:
        NAME                                            STATE     READ WRITE CKSUM
        StoragePool                                     DEGRADED     0     0     0
	<snip>
          raidz1-1                                      DEGRADED     0     0     0
            ata-ST31500341AS_9VS18FCY                   ONLINE       0     0     0
            ata-ST31500341AS_9VS3DX5F                   ONLINE       0     0     0
            ata-ST31500341AS_9VS1Y86S                   ONLINE       0     0     0
            ata-ST31500341AS_9VS20RAK                   UNAVAIL      4   114     1  corrupted data
	<snip>
I don't have a spare for this disk, and eBay sellers want ~$100 shipped for these drives, which is obscene for a used, unknown-quality drive that may or may not arrive this year. I've ordered six 3TB WD Reds after hearing from people in this thread how well they hold up.

However, I have a question regarding vdevs in ZFS. After setting up my configuration, I was notified that my pool was not ideal. I had four vdevs with four disks each. Doing some reading, I see that I should use an odd number of disks in RAIDz1 and an even number in RAIDz2. Is this correct? My Omnistor gives me 20 bays, which will allow up to three vdevs of six drives in RAIDz2 if that works well.

UndyingShadow
May 15, 2006
You're looking ESPECIALLY shadowy this evening, Sir

thideras posted:

Well, it was bound to happen sometime. After the server swap and after moving the disks to the Omnistor, I started hearing the Click of Death and narrowed it down to one drive. Once I got the virtual machine up and running, ZFS immediately flagged the suspect drive as failed and dropped it from the array. I'm currently running in limp mode (with backups if it comes to it).

code:
        NAME                                            STATE     READ WRITE CKSUM
        StoragePool                                     DEGRADED     0     0     0
	<snip>
          raidz1-1                                      DEGRADED     0     0     0
            ata-ST31500341AS_9VS18FCY                   ONLINE       0     0     0
            ata-ST31500341AS_9VS3DX5F                   ONLINE       0     0     0
            ata-ST31500341AS_9VS1Y86S                   ONLINE       0     0     0
            ata-ST31500341AS_9VS20RAK                   UNAVAIL      4   114     1  corrupted data
	<snip>
I don't have a spare for this disk, and eBay sellers want ~$100 shipped for these drives, which is obscene for a used, unknown-quality drive that may or may not arrive this year. I've ordered six 3TB WD Reds after hearing from people in this thread how well they hold up.

However, I have a question regarding vdevs in ZFS. After setting up my configuration, I was notified that my pool was not ideal. I had four vdevs with four disks each. Doing some reading, I see that I should use an odd number of disks in RAIDz1 and an even number in RAIDz2. Is this correct? My Omnistor gives me 20 bays, which will allow up to three vdevs of six drives in RAIDz2 if that works well.

Yup, go with raidz2. I don't remember exactly why, but doing z1 with an even number of disks causes a speed hit. Plus, it's a very nice feeling when a drive dies and you realize you'd have to lose two more to suffer data loss.

Ninja Rope
Oct 22, 2005

Wee.


It's not a big deal. If the number of sectors written isn't evenly divisible by the number of data drives, some drives will have to perform one extra IOP. For 7200 RPM drives that can do ~100 IOPS, you'll probably never notice unless you're maxing out your bandwidth doing nothing but small writes with too small a write cache (or none at all).
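For anyone who wants to see the arithmetic, here's a quick sketch; the 128KiB record and 4KiB sector sizes are just common defaults, not a statement about anyone's actual pool:

```python
# The "one extra IOP" effect: a ZFS record is split into sectors and
# striped across a vdev's data disks. If the sector count doesn't divide
# evenly, some disks do one more I/O than the rest. 128KiB records and
# 4KiB sectors below are typical defaults, assumed for illustration.

RECORD_BYTES = 128 * 1024
SECTOR_BYTES = 4 * 1024

def ios_per_disk(total_disks: int, parity: int) -> tuple:
    """(min, max) sector I/Os per data disk for one full record."""
    data_disks = total_disks - parity
    sectors = RECORD_BYTES // SECTOR_BYTES   # 32 sectors per record
    base, extra = divmod(sectors, data_disks)
    return (base, base + 1) if extra else (base, base)

print(ios_per_disk(4, 1))  # (10, 11): 3 data disks, uneven -> extra IOP
print(ios_per_disk(5, 1))  # (8, 8): 4 data disks, divides evenly
print(ios_per_disk(6, 2))  # (8, 8): also 4 data disks, also even
```

This is also where the odd-for-z1/even-for-z2 rule of thumb comes from: both leave a power-of-two number of data disks, which divides the record evenly.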

thideras
Oct 27, 2010

Fuck you, I'm a tree.


Fun Shoe

Ninja Rope posted:

It's not a big deal. If the number of sectors written isn't evenly divisible by the number of data drives, some drives will have to perform one extra IOP. For 7200 RPM drives that can do ~100 IOPS, you'll probably never notice unless you're maxing out your bandwidth doing nothing but small writes with too small a write cache (or none at all).
If I were just using it for storage, it wouldn't be an issue, but I will likely be using this as virtual machine disk storage as well. If you scroll up a few posts or check my post history, you can see my server configuration. I have the hardware to do paired 4Gb Fibre Channel to give the hypervisors access to more storage, in addition to their local disks. The more IOPS I can get out of the array, the better.

Ninja Rope
Oct 22, 2005

Wee.


I don't know if even that information is enough to predict the impact on your performance. There's a lot of buffering and caching going on (and even more with virtualization) that will completely mask the issue. It's not going to be better than matching the number of drives to the RAIDZ level, but it's likely to have zero performance impact in a lot of cases. Configure your arrays however you feel comfortable; I wouldn't stay awake at night worrying about a performance loss that only shows up under some very specific scenarios.

thideras
Oct 27, 2010

Fuck you, I'm a tree.


Fun Shoe

Ninja Rope posted:

I don't know if even that information is enough to predict the impact on your performance. There's a lot of buffering and caching going on (and even more with virtualization) that will completely mask the issue. It's not going to be better than matching the number of drives to the RAIDZ level, but it's likely to have zero performance impact in a lot of cases. Configure your arrays however you feel comfortable; I wouldn't stay awake at night worrying about a performance loss that only shows up under some very specific scenarios.
That is a fair argument. I was simply worried that the difference was large when it wasn't set up optimally. Depending on how the existing setup holds up, how quickly the new drives get here, and how much free time I have with school starting this week, I may run some quick tests to see if there is any difference.

AlternateAccount
Apr 25, 2005
FYGM

Well, last week I bought the HP MicroServer N54L that was on sale (but not at its lowest-ever price) on Newegg. I went back and forth on OSes for a while: tried unRAID, tried Server 2012's drive pooling, and ended up just going with DrivePool itself, which seems to be a pretty fantastic product for $20. The build quality and design of the MicroServer are definitely good; it doesn't really feel like a low-end product. I'd like one of the new Gen8s, but...

All that said, I can't seem to find a straight answer online. My transfers seem to run at ~60-70MB/s, which is really fine for me, but is there any way to improve the performance of the onboard NIC and enable jumbo frames? Do I need to just get a card and put it in?
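For what it's worth, a rough framing-overhead estimate suggests jumbo frames aren't the bottleneck here; the constants below are the standard Ethernet/TCP ones, and the whole thing is only a back-of-envelope:

```python
# Rough best-case TCP goodput on gigabit vs. MTU. Per-frame Ethernet
# overhead is preamble(8) + header(14) + FCS(4) + inter-frame gap(12)
# = 38 bytes, and TCP/IPv4 headers take 40 bytes out of each packet.

GIGABIT_BYTES_PER_SEC = 125_000_000   # 1 Gb/s on the wire
ETH_OVERHEAD = 38
TCPIP_HEADERS = 40

def goodput_mb_per_s(mtu: int) -> float:
    """Best-case TCP payload rate in MB/s for a given MTU."""
    payload = mtu - TCPIP_HEADERS
    frames_per_sec = GIGABIT_BYTES_PER_SEC / (mtu + ETH_OVERHEAD)
    return frames_per_sec * payload / 1e6

print(round(goodput_mb_per_s(1500), 1))  # 118.7 -- standard frames
print(round(goodput_mb_per_s(9000), 1))  # 123.9 -- jumbo frames
```

Even with standard 1500-byte frames, gigabit tops out near 118MB/s of TCP goodput, so 60-70MB/s points at the NIC, CPU, or disks rather than framing overhead.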


Thanks for this thread.
