Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



D. Ebdrup posted:

No.
...
Just add an additional set of disks with their own parity to your existing pool.

He said "one at a time".

IOwnCalculus
Apr 2, 2003





Just add single drive vdevs!

Kreeblah
May 17, 2004

INSERT QUACK TO CONTINUE



Taco Defender

If you create three partitions on each disk, you can have parity.

Killer_B
May 23, 2005


Tornhelm posted:

Or even better, grab a Microserver and put XPEnology (pretty much roll-your-own Synology) on it. Pretty much the best of both worlds.

The HP Microservers seem to go on sale semi-often...Anybody remember how often? I'm thinking of getting one (N54L, most likely) at some point in the future when they do.

I'll likely run unRAID on it, and it seems like a good solution. Your suggestion is also a good one, however.

movax
Aug 30, 2008



necrobobsledder posted:

2. Increased urgency for ECC since there is actual evidence of people losing data on ZFS that ECC could have prevented (this should be baseline today IMO though given how unreliable storage itself is becoming while we rely upon it more)
5. ZFS design presently mandates that you can't incrementally add storage by adding disks one at a time in practice - you generally need to upgrade all the drives in an array at once or painfully slowly one at a time. This is pretty standard for most corporate / business environments but very difficult to justify oftentimes for home users. There's all sorts of implied risks with this sort of upgrade path if you don't plan carefully.

I'd be interested in hearing more about #2; I currently live dangerously without ECC.

And for #5, even if you add an entire vdev (i.e. if you have 6-drive RAID-Z2s, and you add another 6 drives), performance can still be weird because your data will be distributed oddly; it will only copy data to the new vdev. I had two 6-drive RAID-Z2s for about 2 years, filled to around 85% capacity, and then finally added a 3rd to the pool; I forget the command offhand, but it shows that the tertiary vdev I just added is getting the brunt of the workload while the other two mostly idle, because they are essentially full.

The solution to that, of course, is to move all your data off and copy it back over... if you have a large spare storage pool sitting around. At least, that's the way I understood it the last time I read about it.
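
(For reference, the per-vdev breakdown probably comes from something along the lines of `zpool iostat -v`, which lists capacity and read/write activity for each vdev separately; "tank" below is just a placeholder pool name.)
code:
# Show per-vdev capacity and I/O for the pool "tank", refreshing every 5 seconds.
# A freshly added vdev taking the brunt of the writes while the nearly-full vdevs
# sit mostly idle shows up clearly here.
zpool iostat -v tank 5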

Thoom
Jan 12, 2004

LUIGI SMASH!

necrobobsledder posted:

3. Some restrictions on OS compatibility such as Solaris and FreeBSD for the best / most stable implementations while Linux is nearly two generations or so behind them in its ZFS implementation. This leads to...

Speaking of, I've been having a minor but extremely annoying issue with my Linux+ZFS NAS box. Every time it reboots, CrashPlan manages to start up before the ZFS filesystems are mounted, and creates a /tank/crashplan directory before ZFS can mount /tank. This prevents ZFS from working its magic, and shit is broken until I manually stop CrashPlan, delete /tank/crashplan, mount the filesystems, and restart CrashPlan.

Following the advice of this guide:

quote:

Debian and Ubuntu, and probably other systems, use a parallelized boot. As such, init script execution order is no longer prioritized. This creates problems for mounting ZFS datasets on boot. For Debian and Ubuntu, touch the /etc/init.d/.legacy-bootordering file, and make sure that the /etc/init.d/zfs init script is the first to start, before all other services in that runlevel.

I've touched /etc/init.d/.legacy-bootordering, and I'm pretty sure ZFS is meant to start before Crashplan, because in all the /etc/rc.X.d directories, ZFS is prefixed with S20 and Crashplan with S99. Yet still the problem persists. Am I misunderstanding something about how Ubuntu orders boot scripts?
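
For reference, that prefix check boils down to something like this (runlevel 2 being the usual Ubuntu default; adjust as needed):
code:
# List the start symlinks for the default runlevel and confirm the zfs script (S20)
# sorts before crashplan (S99).
ls -l /etc/rc2.d/ | grep -Ei 'zfs|crashplan'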

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!



Killer_B posted:

The HP Microservers seem to go on sale semi-often...Anybody remember how often? I'm thinking of getting one (N54L, most likely) at some point in the future when they do.

I'll likely run unRAID on it, and it seems like a good solution. Your suggestion is also a good one, however.

They seem to go on sale every couple of months. There were two or three Newegg sales on them in the last couple of months, probably with Black Friday adding an additional one.

evol262
Nov 30, 2010
#!/usr/bin/perl

Thoom posted:

Speaking of, I've been having a minor but extremely annoying issue with my Linux+ZFS NAS box. Every time it reboots, CrashPlan manages to start up before the ZFS filesystems are mounted, and creates a /tank/crashplan directory before ZFS can mount /tank. This prevents ZFS from working its magic, and shit is broken until I manually stop CrashPlan, delete /tank/crashplan, mount the filesystems, and restart CrashPlan.

Following the advice of this guide:


I've touched /etc/init.d/.legacy-bootordering, and I'm pretty sure ZFS is meant to start before Crashplan, because in all the /etc/rc.X.d directories, ZFS is prefixed with S20 and Crashplan with S99. Yet still the problem persists. Am I misunderstanding something about how Ubuntu orders boot scripts?

Does Crashplan ship with an Upstart script that may be used instead?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll

Nap Ghost

movax posted:

I'd be interested in hearing more about #2; I currently live dangerously without ECC.
Similar to how hard drives have gone, UDIMM reliability hasn't improved terribly much while capacity has gone up, so memory errors are a bit uncomfortably common (although that paper is mostly DDR1 and DDR2 focused; age is a huge factor and should make one pause at the thought of reusing old parts). ZFS checksums data when reading it back, but generating the checksum in the first place is done without a read-back during the write transaction (it has to be done quickly since it's a transaction, after all). Single-bit errors are becoming rather common as well, and ECC's single-bit parity can protect you from at least that much by correcting them and detecting worse errors. I'm not 100% sure, but there may be a way to get ZFS to skip writing to memory and use DMA + hardware crypto instructions for checksums.

As for anecdotal evidence (I'm having trouble finding the threads currently), a few people on the FreeNAS forums had examples of what happens when you get single-bit errors, and they lost a big chunk of their data. I believe one guy had a bad bit written to an uberblock, and it would technically have been possible to recover, but the problem was that ZFS copied that same uberblock to other spots (for resilience) and they were all corrupt. Hence, the metadata was basically lost and you'd need to start combing through the vdev manually with data recovery tools.

The good news is that for all of us that try to do a semi-serious data fortress at home, in another couple years when companies are ditching their current rackmounts based on lower power Sandy/Ivy Bridge Xeons, we can get proper RDIMM based servers on the cheap for ubernerd approved home storage. Those L5520 Xeons I see for refurb everywhere are terrible for 24/7 home use. I highly doubt that the amount of data that many of us store will be cheaper to store in the cloud by even 2017.

movax posted:

And for #5, even if you add an entire vdev (i.e. if you have 6-drive RAID-Z2s, and you add another 6 drives), performance can still be weird because your data will be distributed oddly; it will only copy data to the new vdev.
Yeah, I think of extra vdevs in a zpool not so much as RAID0 but more like JBOD, because RAID0 generally doesn't care about load or capacity across the drives in the array. But the bigger reason for me to avoid scaling out completely via "moar RAIDZ vdevs!" is mostly that you're just adding another point of failure - if that vdev putzes out, you're screwed. With literally millions of vdevs, you've now rolled the dice a million times on any of those vdevs becoming faulted.

quote:

The solution to that, of course, is to move all your data off and copy it back over... if you have a large spare storage pool sitting around. At least, that's the way I understood it the last time I read about it.
This is basically what I'm doing, and even then it kind of sucks because your old array is going to limp along replicating / exporting to your new array. If you really care that much, you should have an on-site backup of your most critical data along with an offsite backup solution anyway.


Thoom posted:

Speaking of, I've been having a minor but extremely annoying issue with my Linux+ZFS NAS box. Every time it reboots, CrashPlan manages to start up before the ZFS filesystems are mounted, and creates a /tank/crashplan directory before ZFS can mount /tank. This prevents ZFS from working its magic, and shit is broken until I manually stop CrashPlan, delete /tank/crashplan, mount the filesystems, and restart CrashPlan.
Perhaps you can modify the Crashplan init script to wait for `ls -l /tank/ | wc -l` to come back greater than 0 before proceeding (and with a time-out) which should let you continue using parallel start scripts? Also, if ZFS has to do something like scrub or resilver upon restart (I've seen it have to do it during some horky maintenance I did) that could just block Crashplan too.

Also, you may want to try using the LSB init script conventions to get your dependencies established for the Crashplan script (though that script should handle it honestly): https://wiki.debian.org/LSBInitScripts
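
A rough sketch of what that wait-plus-timeout might look like near the top of the CrashPlan init script (the /tank path and the 60-second limit are just placeholders, and plain `ls` is used so an empty, unmounted mountpoint really counts as zero entries):
code:
# Sketch only: wait up to ~60 seconds for /tank to be mounted (i.e. non-empty)
# before letting the rest of the CrashPlan init script run.
timeout=60
while [ "$(ls /tank/ 2>/dev/null | wc -l)" -eq 0 ] && [ "$timeout" -gt 0 ]; do
    sleep 1
    timeout=$((timeout - 1))
done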

Mr Shiny Pants
Nov 12, 2012


ECC memory is cheap these days; there's no reason not to buy it.

As for growing the vdev: I agree it is a bit of a pain, but it's not a showstopper when comparing my current build to other options.

I've just gone from 3 x 1TB to 3 x 3TB, and while the swap disk, resilver, swap disk, resilver routine is annoying, it is a pretty easy and foolproof way to grow a ZFS pool.

IMHO, of course.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster


Pillbug

Most all servers, even towers, use ECC; you pay about a 10% premium for it.

Mr Shiny Pants
Nov 12, 2012


Dilbert As FUCK posted:

Most all servers, even towers, use ECC; you pay about a 10% premium for it.

True, but you already pay the RAID overhead for data integrity so 10% extra for ECC was an easy choice.

Fangs404
Dec 20, 2004

I time bomb.

My FreeNAS server (N40L) lives on the same battery backup as my PC. The UPS (CyberPower CP1500PFCLCD) is more than powerful enough to handle both. The problem is that the UPS only has 1 USB output, which I have going to my PC. I need a way to get my N40L to shut down safely whenever there's a power failure. What solution would you guys use? I hardly ever shut down my computer (but I'll restart once every week or two), so creating a task that triggers on shutdown and sends a shutdown command to the N40L seems like the way to go. Have you guys done anything like this?

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.


Fangs404 posted:

My FreeNAS server (N40L) lives on the same battery backup as my PC. The UPS (CyberPower CP1500PFCLCD) is more than powerful enough to handle both. The problem is that the UPS only has 1 USB output, which I have going to my PC. I need a way to get my N40L to shut down safely whenever there's a power failure. What solution would you guys use? I hardly ever shut down my computer (but I'll restart once every week or two), so creating a task that triggers on shutdown and sends a shutdown command to the N40L seems like the way to go. Have you guys done anything like this?

Can you set the response on the PC to run a batch file instead of just issuing a shutdown? If so, you could send the shutdown command from that, wait until it's done, and then have it run "shutdown -s -f" on the PC to bring it down as well.

Phone
Jul 30, 2005

ああ!彼からのメールだ!

College Slice

Eh, it might get a bit tricky but I would think you could have a script open a terminal session and execute a shutdown /f with a little bit of looking around. You might need a Professional copy of Windows to enable some "enterprise" level features.

Fangs404
Dec 20, 2004

I time bomb.

G-Prime posted:

Can you set the response on the PC to run a batch file instead of just issuing a shutdown? If so, you could send the shutdown command from that, wait until it's done, and then have it run "shutdown -s -f" on the PC to bring it down as well.

Yeah, that was my idea. When my PC shuts down, it runs a batch file that sends the shutdown command to the NAS. It looks like I can add shutdown triggers under the group policy editor, and I can run arbitrary batch scripts.

And I do have Windows 7 Pro (yay for still being a student).

Ninja Rope
Oct 22, 2005

Wee.


Fangs404 posted:

My FreeNAS server (N40L) lives on the same battery backup as my PC. The UPS (CyberPower CP1500PFCLCD) is more than powerful enough to handle both. The problem is that the UPS only has 1 USB output, which I have going to my PC. I need a way to get my N40L to shut down safely whenever there's a power failure. What solution would you guys use? I hardly ever shut down my computer (but I'll restart once every week or two), so creating a task that triggers on shutdown and sends a shutdown command to the N40L seems like the way to go. Have you guys done anything like this?

apcupsd, the APC UPS daemon that FreeBSD uses, is able to send a notification to clients over the network when it's time to shut down. I think there's a Windows version; if you could run that and configure it on FreeNAS, you should be able to shut everything down at once. You'd need your PC on 24/7 though.

Fangs404
Dec 20, 2004

I time bomb.

Ninja Rope posted:

apcupsd, the APC UPS daemon that FreeBSD uses, is able to send a notification to clients over the network when it's time to shut down. I think there's a Windows version; if you could run that and configure it on FreeNAS, you should be able to shut everything down at once. You'd need your PC on 24/7 though.

My UPS isn't an APC.

Thanks Ants
May 21, 2004

Bless You Ants, Blants



Fun Shoe

Can you not do it backwards and have the UPS connected to the server and the server handling the network part of things, so your PC is just a client?

Fangs404
Dec 20, 2004

I time bomb.

Caged posted:

Can you not do it backwards and have the UPS connected to the server and the server handling the network part of things, so your PC is just a client?

I could do that, but since my UPS isn't an APC, the NAS would use a different driver, and I'd be back to square one. Unless I'm misunderstanding you....

Phone
Jul 30, 2005

ああ!彼からのメールだ!

College Slice

Caged posted:

Can you not do it backwards and have the UPS connected to the server and the server handling the network part of things, so your PC is just a client?

It's Wake On LAN not Sleep On LAN, gosh.

Thanks Ants
May 21, 2004

Bless You Ants, Blants



Fun Shoe

Fangs404 posted:

I could do that, but since my UPS isn't an APC, the NAS would use a different driver, and I'd be back to square one. Unless I'm misunderstanding you....

http://doc.freenas.org/index.php/UPS

http://www.networkupstools.org/stable-hcl.html

If your UPS is on that list shouldn't it work with minimal fuckery? Then you just need a Windows NUT client to handle shutting your desktop down.
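
Once the UPS is set up on the FreeNAS side, a NUT client can confirm it's reachable over the network with something like the following (the UPS name "ups" and the hostname are placeholders for whatever you configure):
code:
# Query the NUT server on the FreeNAS box from another machine on the LAN
# and dump the UPS status variables it reports.
upsc ups@freenas.local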

Comradephate
Feb 28, 2009



College Slice

Let's say I've chosen to over-complicate my environment for fun and want to run fibre channel. I probably need ~12 ports to account for the addition of another hypervisor and another storage device. The Brocade SilkWorm 200E seems like a reasonably cheap device that would suit my needs - any reason not to go that route?

Fangs404 posted:

I could do that, but since my UPS isn't an APC, the NAS would use a different driver, and I'd be back to square one. Unless I'm misunderstanding you....

You could connect the UPS to a device that has the appropriate drivers, and then when it triggers, have it run a script to gracefully shut down the nas via ssh or whatever.
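
A minimal sketch of that script, assuming key-based ssh to the NAS is already set up and with a placeholder hostname:
code:
#!/bin/sh
# Sketch: run by the UPS software's on-battery/low-battery event on whichever box
# actually talks to the UPS, to take the NAS down cleanly over ssh.
ssh root@freenas.local 'shutdown -p now'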

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.


Fangs404 posted:

Yeah, that was my idea. When my PC shuts down, it runs a batch file that sends the shutdown command to the NAS. It looks like I can add shutdown triggers under the group policy editor, and I can run arbitrary batch scripts.

And I do have Windows 7 Pro (yay for still being a student).

I may have been somewhat unclear. I'm saying instead of running the batch file on a shutdown trigger, make it so the UPS management app runs the batch file directly. That way, when you manually shut down your PC, the NAS doesn't get shut down as well. It'd only bring both offline when the batch file actually runs.

Though, it looks like their software doesn't support running arbitrary commands.

evol262
Nov 30, 2010
#!/usr/bin/perl

Comradephate posted:

Let's say I've chosen to over-complicate my environment for fun and want to run fibre channel. I probably need ~12 ports to account for the addition of another hypervisor and another storage device. The Brocade SilkWorm 200E seems like a reasonably cheap device that would suit my needs - any reason not to go that route?
Because iSCSI over IP over Infiniband is cheaper and more complicated.

Thoom
Jan 12, 2004

LUIGI SMASH!

evol262 posted:

Does Crashplan ship with an Upstart script that may be used instead?

It doesn't appear to. That would be in /etc/init, right?

necrobobsledder posted:

Perhaps you can modify the Crashplan init script to wait for `ls -l /tank/ | wc -l` to come back greater than 0 before proceeding (and with a time-out) which should let you continue using parallel start scripts? Also, if ZFS has to do something like scrub or resilver upon restart (I've seen it have to do it during some horky maintenance I did) that could just block Crashplan too.

Also, you may want to try using the LSB init script conventions to get your dependencies established for the Crashplan script (though that script should handle it honestly): https://wiki.debian.org/LSBInitScripts

I tried editing the dependencies in /etc/init.d/crashplan to include
code:
# Required-Start:    $local_fs $network $remote_fs zfs
since the zfs-mount init script provides "zfs", but that didn't help. Edit: Could it be ignoring this because of the legacy-bootordering I set in a previous troubleshooting attempt?

The ls/wc trick does seem to have worked, though. Thanks!

Thoom fucked around with this message at 19:39 on Jan 3, 2014

Comradephate
Feb 28, 2009



College Slice

evol262 posted:

Because iSCSI over IP over Infiniband is cheaper and more complicated.

I considered IB, but we actually use FC at work, and 4gig FC looks to be cheaper than IB. Granted, IB would be 10Gbps, but I won't even max out bonded 4Gbps links except occasionally from the storage device, so that's not a huge selling point.

thebigcow
Jan 3, 2001

Bully!

Fangs404 posted:

My FreeNAS server (N40L) lives on the same battery backup as my PC. The UPS (CyberPower CP1500PFCLCD) is more than powerful enough to handle both. The problem is that the UPS only has 1 USB output, which I have going to my PC. I need a way to get my N40L to shut down safely whenever there's a power failure. What solution would you guys use? I hardly ever shut down my computer (but I'll restart once every week or two), so creating a task that triggers on shutdown and sends a shutdown command to the N40L seems like the way to go. Have you guys done anything like this?

Have you looked into the pro version of their software? It's free and IIRC does most of this.

Thoom
Jan 12, 2004

LUIGI SMASH!

Thoom posted:

The ls/wc trick does seem to have worked, though. Thanks!

It appears I was a bit hasty here. The trick worked to stall the script when I manually stopped and started the service with the filesystem unmounted and then mounted it in another terminal, but on an actual boot it just seems to stall out the whole process and the filesystem never mounts.

The actual problem seems to be that mountall is failing to mount the ZFS partitions during boot, period.

Thoom fucked around with this message at 20:50 on Jan 3, 2014

Fangs404
Dec 20, 2004

I time bomb.

thebigcow posted:

Have you looked into the pro version of their software? It's free and IIRC does most of this.

I think this is probably the right solution. I'll play around with it and see if I can get it working. Thanks!

Killer_B
May 23, 2005


Rexxed posted:

They seem to go on sale every couple of months. There were two or three Newegg sales on them in the last couple of months, probably with Black Friday adding an additional one.

Funny I should have asked about it... there's another sale on, and I went and snagged one. Granted, a semi-decent chunk of that is a $50 gift card rebate.

edit: I had been looking at the Western Digital Reds, either 2 or 3 TB... where have people generally been buying these from?

Newegg has had inconsistent shipping practices in the past regarding OEM drives: very often the drives have been barely padded, or the bubble wrap has been destroyed (leading to drives getting slammed into the corners). But sometimes they arrive packed like they ought to be.

Apologies if this is going overboard; I realize that a nicely packed drive might not be more reliable than a crappily packed one... the crappily packed ones probably don't help the drive's lifespan, however.

Killer_B fucked around with this message at 03:57 on Jan 5, 2014

Ninja Rope
Oct 22, 2005

Wee.


Amazon ships them in proper packaging. I think someone earlier posted that newegg has upgraded their packaging but why risk it.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.


The drives we've received from Newegg lately have been in these bubble sausage packages and seem very sturdy.

Killer_B
May 23, 2005


Don Lapre posted:

The drives we've received from Newegg lately have been in these bubble sausage packages and seem very sturdy.

It seems to stem from which warehouse they're being shipped from... sadly it's still a case where there isn't a set process in place for shipping items out.

That might be the reason people tend to gripe about it every now and then.

Irritated Goat
Mar 12, 2005

This post is pathetic.


I have a set up I need some opinions on.

Currently, I've got a PC with 2x2TB drives. At some point, I'd like to offload those to a NAS device and leave my PC only for gaming. Normally this would be easy, but my PC is also my Plex server for my household. I don't do any converting of my media, so Plex has to transcode all my stuff so I can watch it. I don't think a standard NAS device (DiskStation/QNAP kind) can handle transcoding 1080p MKVs.

Ideas? I was hoping not to build a 2nd box to do NAS work but I may have to.

JasH
Jun 20, 2001



I am putting together my first NAS for home use; hardware wise, I think everything is under control:
  • 1x HP N54L
  • 1x Kingston 8GB DDR3 RAM (1333 MHz, ECC)
  • 1x 4GB USB key (for the OS)
  • 4x WD Red 3TB (WD30EFRX; bought from different vendors)

Software and configuration wise, my knowledge is lacking...
Option 1: FreeNAS with 3 drives in a ZFS configuration and 1 drive as a spare (apparently hot spares are not supported on FreeNAS?)
Option 2: FreeNAS with 4 drives in RAIDZ2 configuration

Any opinions on this setup? How should I configure the 4 drives to be correctly protected against failures?

Mr Shiny Pants
Nov 12, 2012


Irritated Goat posted:

I have a set up I need some opinions on.

Currently, I've got a PC with 2x2TB drives. At some point, I'd like to offload those to a NAS device and leave my PC only for gaming. Normally this would be easy, but my PC is also my Plex server for my household. I don't do any converting of my media, so Plex has to transcode all my stuff so I can watch it. I don't think a standard NAS device (DiskStation/QNAP kind) can handle transcoding 1080p MKVs.

Ideas? I was hoping not to build a 2nd box to do NAS work but I may have to.

Keep the PC as the Plex server but host the files on the NAS? My main PC runs Plex server and the files reside on my NAS.

Another option is to build a NAS from something like a mini-ITX PC with an i3 or i5 for transcoding duties.

If you have some money to spare: a Xeon 1230v3 with ECC RAM and a nice case would make an awesome NAS and provide enough horsepower for any transcoding duties you might throw at it.

Load it up with 32GB of RAM, install Linux on it, and let it run Plex. Or create a small VM using KVM for it.

A good build with some quality components doesn't use that much power, and something like a Synology or QNAP is almost a complete PC anyway.

Mr Shiny Pants
Nov 12, 2012


JasH posted:


Software and configuration wise, my knowledge is lacking...
Option 1: FreeNAS with 3 drives in a ZFS configuration and 1 drive as a spare (apparently hot spares are not supported on FreeNAS?)
Option 2: FreeNAS with 4 drives in RAIDZ2 configuration

Any opinions on this setup? How should I configure the 4 drives to be correctly protected against failures?

ZFS supports spares just fine; I don't know why FreeNAS wouldn't support them.

Maybe do it from the command line?

https://blogs.oracle.com/eschrock/entry/zfs_hot_spares
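
From the command line it's just the spare keyword; a minimal sketch with placeholder device names:
code:
# Create a pool with a 3-disk raidz vdev plus one hot spare (da0-da3 are placeholders)
zpool create tank raidz da0 da1 da2 spare da3

# Or add a hot spare to an existing pool
zpool add tank spare da3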

Daylen Drazzi
Mar 10, 2007

Why do I root for Notre Dame? Because I like pain, and disappointment, and anguish. Notre Dame Football has destroyed more dreams than the Irish Potato Famine, and that is the kind of suffering I can get behind.

Irritated Goat posted:

I have a set up I need some opinions on.

Currently, I've got a PC with 2x2TB drives. At some point, I'd like to offload those to a NAS device and leave my PC only for gaming. Normally this would be easy, but my PC is also my Plex server for my household. I don't do any converting of my media, so Plex has to transcode all my stuff so I can watch it. I don't think a standard NAS device (DiskStation/QNAP kind) can handle transcoding 1080p MKVs.

Ideas? I was hoping not to build a 2nd box to do NAS work but I may have to.

When I was flirting with FreeNAS, the latest version had Plex built into it as an add-on. However, since I've migrated back to using my Linux box as my file server, I set up a Windows 7 VM and installed Plex Server on it. Seems to work just fine for me, so you don't need to install Plex Server on the same machine that your files are stored on - you just need both machines to be on the same domain or subnet.

Mr Shiny Pants
Nov 12, 2012


What is the consensus on installing your OS on a flash disk?

I am worried that it is gonna crap out on me, or do they generally last a long time?

I have a very small SanDisk Cruzer Fit that would be awesome as my boot drive, but I am worried about its longevity.
