Longinus00
Dec 29, 2005
Ur-Quan

Thermopyle posted:

Hmm, well I came across this:

quote:

The way raidz works, you get the IOps (I/O operations per second) of a single drive for each raidz vdev. Also, when resilvering (rebuilding the array after replacing a drive) ZFS has to touch every drive in the raidz vdev. If there are more than 8 or 9, this process will thrash the drives and take several days to complete (if it ever does).

evil_bunnY posted:

Going by your quotes that nerd has no fucking clue.

So the quote is basically saying two things: 1. raidz has lower IOPS than more 'typical' raid and 2. rebuilding large raid setups takes a long time.

First of all, 2 is trivially true; the more data you have, the more data needs to be read to rebuild the missing data. As a consequence of the rebuild taking longer, you increase your exposure to more disks going bad before you have restored your redundancy. This is exactly the reason people say raid5 is worthless for multi-TB datasets. Whether or not 2 parity disks is sufficient for any specific raid setup can be calculated with some simple math, but 16TB is probably not going to be an issue with 2 parity disks (for fun, go ahead and calculate the chance of an unrecoverable read error on a 2/3/4TB drive if all you did was just read the whole drive).

His first statement actually follows as a consequence of 2. Because ZFS cares about data integrity every read requires data checksum validation/correction. This means that every read needs to read a chunk from every drive in your array. As a consequence, for certain workloads your IOPS can be effectively that of a single drive.


You asked a bunch of stuff so I'll just touch on some important things.

Raid5/6 (which RAIDZ is close enough to) doesn't have a dedicated parity drive. When people talk about raid 5 having 1 parity drive they mean it works as if 1 drive's worth of space is dedicated to parity. Compare these two images:
http://en.wikipedia.org/w/index.php...ID_4.svg&page=1
http://en.wikipedia.org/w/index.php...ID_5.svg&page=1

Yes 4GB is fine for ZFS if all you're doing is storing and serving large static files to a single client.

Upgrading arrays is pretty straightforward; just remember these two rules:
1. When you replace a disk you need to rebuild so that data is copied from the disks in your array to the new disk.
2. When you set up an array you're limited by the smallest disk in that array.
As an example: if you have three 1TB disks and replace one with a 3TB disk, after you rebuild your data is still limited as if you had three 1TB disks (the smallest disk in the array is 1TB). Only once you have replaced all your 1TB drives with 3TB drives in the same manner can your array be used as 3x3TB (because only now is the smallest drive in the array 3TB).
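A quick way to estimate what you end up with (a back-of-the-envelope sketch, assuming a single parity-style array such as raidz1 and ignoring filesystem overhead):
pre:
usable space ~= (number of disks - parity disks) * size of smallest disk

3x 1TB, single parity          : (3-1) * 1TB = 2TB
1TB + 1TB + 3TB, single parity : (3-1) * 1TB = 2TB   (still limited by the 1TB disks)
3x 3TB, single parity          : (3-1) * 3TB = 6TB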

If you care about bit rot then your only realistic options are btrfs and zfs. Of the two, zfs via freenas, or something similar, is going to be easier for you to set up and maintain. Remember, simple parity doesn't protect you from bit rot, checksumming does.


DrDork posted:

8-10 drives is easily doable in a decent sized case: http://www.newegg.com/Product/Produ...N82E16811129020 If you need to use external enclosures, there are a wide variety from SAS-enabled ones: http://www.newegg.com/Product/Produ...N82E16816133030 down to what are basically just metal racks: http://www.newegg.com/Product/Produ...N82E16816111045

In any event, don't worry, there are ways to work around it--you're far more likely to have issues finding SATA ports for them all than worrying about where to physically store them.

The 2nd link you posted is not an external enclosure; I think you meant to post something like this:
http://www.newegg.com/Product/Produ...N82E16816132016

These were on sale for $20 ($10 off + $10 rebate) a little bit ago.
http://www.newegg.com/Product/Produ...N82E16811146075
8 internal 3.5" bays plus a 3-bay-high 5.25" area lets you cram in another 4 or 5. For $20 it was a steal; it's still not a bad prospect at $40.

If you want to rackmount or just lay your computer flat then this is another cheap(ish) option.
http://www.newegg.com/Product/Produ...N82E16811147154

Longinus00 fucked around with this message at 10:50 on Oct 30, 2012

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



Longinus00 posted:

So the quote is basically saying two things: 1. raidz has lower IOPS than more 'typical' raid and 2. rebuilding large raid setups takes a long time.

First of all, 2 is trivially true; the more data you have, the more data needs to be read to rebuild the missing data. As a consequence of the rebuild taking longer, you increase your exposure to more disks going bad before you have restored your redundancy. This is exactly the reason people say raid5 is worthless for multi-TB datasets. Whether or not 2 parity disks is sufficient for any specific raid setup can be calculated with some simple math, but 16TB is probably not going to be an issue with 2 parity disks (for fun, go ahead and calculate the chance of an unrecoverable read error on a 2/3/4TB drive if all you did was just read the whole drive).

His first statement actually follows as a consequence of 2. Because ZFS cares about data integrity every read requires data checksum validation/correction. This means that every read needs to read a chunk from every drive in your array. As a consequence, for certain workloads your IOPS can be effectively that of a single drive.

Thanks. FWIW, I did finally find that Sun themselves recommend no more than 9 drives per vdev. I just linked to this, but at the time I didn't realize that it was an official (or at least endorsed) Sun/Oracle site (linked to straight from their ZFS administration guide).

quote:

The recommended number of disks per group is between 3 and 9. If you have more disks, use multiple groups.

Anyway, now I just have to figure out if I want to follow that recommendation.

The issues here are:

  • If I use a 10-disk raidz2, there's 2 drives' worth of parity that covers the failure of any two drives. Pro
  • If I used two 5-disk raidz2 vdevs, that's 4 drives "lost" to parity instead of 2. Con
  • If I used two 5-disk raidz1 vdevs, that's back to just two drives for parity, but it's only raidz1 and all 10 disks don't get to share in the pool of parity. In other words, it can't survive any two disks failing; it can only lose one in each vdev. Con
  • Sun recommends equal-sized vdevs. If I want to add more storage via another vdev, that requires adding 30TB worth of drives at once if I go with one 10-disk raidz2 vdev. Con

Decisions, decisions.

movax posted:

I think that's temporary dude.

Yep.

movax
Aug 30, 2008



Thermopyle posted:

Anyway, now I just have to figure out if I want to follow that recommendation.

The issues here are:

  • If I use a 10-disk raidz2, there's 2 drives' worth of parity that covers the failure of any two drives. Pro
  • If I used two 5-disk raidz2 vdevs, that's 4 drives "lost" to parity instead of 2. Con
  • If I used two 5-disk raidz1 vdevs, that's back to just two drives for parity, but it's only raidz1 and all 10 disks don't get to share in the pool of parity. In other words, it can't survive any two disks failing; it can only lose one in each vdev. Con
  • Sun recommends equal-sized vdevs. If I want to add more storage via another vdev, that requires adding 30TB worth of drives at once if I go with one 10-disk raidz2 vdev. Con

Decisions, decisions.

For what it's worth, when I ran into this I just elected to buy more drives, and ended up with 6-drive RAID-Z2s as my basic building block. Current zpool is 3 6-drive RAID-Z2s all striped together, so as long as the wrong 3 drives don't die...

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


As much money as I've already thrown away on hot swap enclosures, my next NAS is getting built around this: http://www.newegg.com/Product/Produ...N82E16811219038

movax
Aug 30, 2008



FISHMANPET posted:

As much money as I've already thrown away on hot swap enclosures, my next NAS is getting built around this: http://www.newegg.com/Product/Produ...N82E16811219038

Shit yeah. I've got the OG Norco RPC-4020 and it's been solid, both in an actual rack and sitting on some IKEA tables. That one's pretty pricey though; I don't remember if it was a massive sale or not, but I paid around $250 for it, I think, back in 2009.

I'm down to 3TB free though, so maybe I'll need to upgrade to a 4224 to fit 4 6-drive vdevs in there, or start resilvering my drives up to 3TB.

Fancy_Lad
May 15, 2003
Would you like to buy a monkey?

FISHMANPET posted:

As much money as I've already thrown away on hot swap enclosures, my next NAS is getting built around this: http://www.newegg.com/Product/Produ...N82E16811219038

Yeah, I would have saved money and pain just starting there instead of my Antec 1200 plus 5 in 3 Icy Docks. Oh well, they have served me well...

Old pic as I now have 3 Icy Docks in it and have since got rid of the dumb blue led fan. Not pictured: using my Dremel to cut off the stupid tabs on each of those 5.25 bays so the Icy Docks would fit. Ugh.

evil_bunnY
Apr 2, 2003



Longinus00 posted:

So the quote is basically saying two things: 1. raidz has lower IOPS than more 'typical' raid and 2. rebuilding large raid setups takes a long time.

First of all, 2 is trivially true; the more data you have the more data that needs to be read to rebuild the missing data.
The missing data is always one drive's worth. RaidZ2 will happily saturate a single drive while rebuilding. I've rebuilt vdevs before, this is what actually happens.

The reason we use 2-parity raidsets is because of the size of the drives, not the number of them.


Longinus00 posted:

for certain workloads your IOPS can be effectively that of a single drive.
Pray tell, what are these workloads?

Minty Swagger
Sep 8, 2005

Ribbit Ribbit Real Good


FISHMANPET posted:

As much money as I've already thrown away on hot swap enclosures, my next NAS is getting built around this: http://www.newegg.com/Product/Produ...N82E16811219038

Check out this thread before you buy, might be a good alternative.
http://www.avsforum.com/t/1412640/a...224-alternative

Longinus00
Dec 29, 2005
Ur-Quan

evil_bunnY posted:

The missing data is always one drive's worth. RaidZ2 will happily saturate a single drive while rebuilding. I've rebuilt vdevs before, this is what actually happens.

The reason we use 2-parity raidsets is because of the size of the drives, not the number of them.

Yes, you will have to write out one drive's worth of data. But your ability to saturate a single drive is highly dependent on things such as the size of your vdev and system load, so no, you can't just say that all rebuilds take the same amount of time.

evil_bunnY posted:

Pray tell, what are these workloads?

Random IO as in IOPS. Like I said, this is due to the nature of having to read a full "stripe" to return any data from within that stripe. If you want to read 1 KB at location A and 1 KB from location B then the FS will have to do the operations serially if they aren't close enough together. This is in contrast to the sort of optimizations you can do with classic raid 1 where you have one disk read A and the other disk read B. Obviously things like NCQ (or any layer of request reordering) and adding SSD cache pools to your ZFS array can help alleviate things and increase your IO but the fundamental problem still remains.
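To put rough numbers on it (a back-of-the-envelope sketch, assuming ~100 random IOPS per 7200rpm disk and a purely small-random-read workload):
pre:
1x 6-disk raidz2 vdev       : ~100 IOPS   (every read touches the whole stripe)
3x 2-way mirrors, striped   : ~600 IOPS   (each disk can service a different read)
Sequential throughput is a different story; this is only about seek-bound random IO.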

If you would like to read more about how raidz works, here are a few blog posts about it from ex-Sun engineers.
https://blogs.oracle.com/roch/entry/when_to_and_not_to
https://blogs.oracle.com/ahl/entry/what_is_raid_z

thebigcow
Jan 3, 2001

Bully!

movax posted:

Shit yeah. I've got the OG Norco RPC-4020 and it's been solid, both in an actual rack and sitting on some IKEA tables. That one's pretty pricey though; I don't remember if it was a massive sale or not, but I paid around $250 for it, I think, back in 2009.

I'm down to 3TB free though, so maybe I'll need to upgrade to a 4224 to fit 4 6-drive vdevs in there, or start resilvering my drives up to 3TB.

You could always add on.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



Thermopyle posted:

Thanks. FWIW, I did finally find that Sun themselves recommend no more than 9 drives per vdev. I just linked to this, but at the time I didn't realize that it was an official (or at least endorsed) Sun/Oracle site (linked to straight from their ZFS administration guide).


Anyway, now I just have to figure out if I want to follow that recommendation.

The issues here are:

  • If I use a 10-disk raidz2 theres 2 drives worth of parity that cover the failure of any two drives. Pro
  • If I used two 5-disk raidz2 vdevs that's 4 drives "lost" to parity instead of 2. Con
  • If I used two 5-disk raidz1 vdevs, that's back to just two drives for parity, but it's only raidz1 and all 10 disks don't get to share in the pool of parity. In other words, it can't be any two disks to fail, it has to be one in each vdev. Con
  • Sun recommends equal sized vdevs. If I want to add more storage via another vdev, that requires adding 30TB worth of drives at once if I go with one 10-disk raidz2 vdev. Con

Decisions, decisions.


Yep.

Ok, fuck this noise.

The least bad solution (to me) is to just use 10 drives in a single raidz2 vdev and when it comes time to add more disks, I'll just start another pool.

Ninja Rope
Oct 22, 2005

Wee.


At what size array do drives with a lower UER become vital?

Also, based on the WD Red drive spec of 600k load/unload cycles, that's enough to sleep/wake the drive 100 times a day (every ~15 minutes) for 16 years. It seems like letting drives sleep after a few hours of inactivity shouldn't stress them much. Though, if there is a failure caused by start/stop cycles, it may impact all drives at once.
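(For reference, the arithmetic: 600,000 cycles / 100 cycles per day = 6,000 days, or roughly 16 years.)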

KS
Jun 10, 2003


Outrageous Lumpwad

tehschulman posted:

BTW, Amazon and Newegg are selling the 3TB Seagate Barracuda for $120 down from $150. Ordered two over the weekend to fill the last 2 of my 4 bay NAS.

frumpsnake posted:

The ST3000DM001s have worked well for me, just make sure you apply the CC4H firmware update or they'll overenthusiastically park their heads.

I'm having significant problems with these drives -- they have made NAS4free, FreeBSD Stable/9, and Centos 6.3 lock up/go unresponsive. I've done the firmware update, no help.

Finally seems to be stable with FreeNAS 8.3, but figured I'd chime in. It's sucked up a few weeks of my work-life.

System is a SuperMicro SC847, 4 LSI9211s, and 32 of the ST3000DM001s with a ZeusRAM as ZIL.

Megaman
May 8, 2004
I didn't read the thread BUT...

Fancy_Lad posted:

Yeah, I would have saved money and pain just starting there instead of my Antec 1200 plus 5 in 3 Icy Docks. Oh well, they have served me well...

Old pic as I now have 3 Icy Docks in it and have since got rid of the dumb blue led fan. Not pictured: using my Dremel to cut off the stupid tabs on each of those 5.25 bays so the Icy Docks would fit. Ugh.



These have backplanes, and backplanes can fail your drives if they shit the bed; why would you want anything with backplanes?

Longinus00
Dec 29, 2005
Ur-Quan

Ninja Rope posted:

At what size array do drives with a lower UER become vital?

Also, based on the WD Red drive spec of 600k load/unload cycles, that's enough to sleep/wake the drive 100 times a day (every ~15 minutes) for 16 years. It seems like letting drives sleep after a few hours of inactivity shouldn't stress them much. Though, if there is a failure caused by start/stop cycles, it may impact all drives at once.

When you sleep drives they stop spinning so it's not just load/unload cycles that are important.

I alluded to this before but you can do the math pretty quickly and then decide for yourself what you're "comfortable" with. UREs in newer drives range from 10^14 to 10^16 depending on the specific drive in question. Let's look at what that means in terms of how many bytes read, on average, it will be between read errors (I will be using SI notation).
pre:
10^14 bits = 1.25 * 10^13 bytes =   12.5 TB
10^15 bits = 1.25 * 10^14 bytes =  125   TB
10^16 bits = 1.25 * 10^15 bytes = 1250   TB
Another way of looking at the numbers is to calculate the chance of a read failure for reading a specific amount of data. I'll be doing it for bits read and not sectors but the numbers should be similar.
pre:
probability of no read errors = (1-1/error rate)^(amount read)
probability of a read error   = 1 - probability of no read errors
A graph of the function for a URE of 10^14 can be seen here (x = TB).

From this you can see that if you were to read a 1TB drive in its entirety you might expect to get a read error less than 10% of the time. Reading a 4TB drive in its entirety would get you a read error less than 30% of the time. Reading a 12TB array, you would expect to hit a read error approximately 60% of the time.

Increasing the URE by an order of magnitude lowers all of these numbers by approximately an order of magnitude. Using this knowledge you can decide for yourself whether or not you feel a specific URE is enough for the job and/or if you want to take additional measures to protect against bit rot.
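As a concrete worked example (plugging the 10^14 rate into the formula above; the exponential approximation is close enough here):
pre:
reading 12 TB        = 9.6 * 10^13 bits from drives with a 10^14 URE rate
P(no read error)     = (1 - 10^-14)^(9.6 * 10^13) ~= e^(-0.96) ~= 0.38
P(at least 1 error) ~= 0.62
which is where the ~60% figure for a 12TB array comes from.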

Fancy_Lad
May 15, 2003
Would you like to buy a monkey?

Megaman posted:

These have backplanes, and backplanes can fail your drives if they shit the bed; why would you want anything with backplanes?

2 have been running for 5 years and change through a couple of different cases, 1 for about a year. In that time I have had 0 backplane failures of any kind with the Icy Docks, had somewhere around 5 drives fail, and have swapped out multiple drives while upgrading capacities. Worth it to me for the ease of changing drives out. Biggest hardware failure during that time has been a controller card, and when I put in a new one the array was fine (yay software raid). Anything important is backed up anyway, so it is just for ease of use.

Why I went that route to begin with was that it was a cheap way to add some 3.5" drive bays to rip my most-watched DVDs, but once I started ripping my media I went all out on it and shortly after needed more bays. Once I started down the path, it became harder and harder to justify anything from a cost perspective other than keeping on the path...

I am confused by this statement, though... Is that NORCO not using a backplane as well?

movax
Aug 30, 2008



Fancy_Lad posted:

I am confused by this statement, though... Is that NORCO not using a backplane as well?

The Norcos have backplanes as well, and unless they've changed radically from my 4020 (I think newer ones actually remembered airflow is important and the backplanes are thinner/turned to allow for it), they are passive; SATA traces are just routed on a PCB. I don't think mine has any "real" hotswap support circuitry, per se.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


The 4220 and 4224 (the next generation after the 4020) use SAS backplanes instead of SATA, so now you only need SAS cables rather than a mess of breakout cables. They also sell a 120mm fan adapter so you can use quieter, larger fans.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



Devian666 posted:

If it was permanent they would be gpus and this would be in the bitcoin thread. It does look like that copy will take a while. I'm trying not to accumulate that much data but I suspect I will build an amahi box next year.

Only 90 hours to go!

movax
Aug 30, 2008



Thermopyle posted:

Only 90 hours to go!

Is everything involved attached to a UPS? All pets/people safely kept away?

FISHMANPET posted:

The 4220 and 4224 (the next generation after the 4020) use SAS backplanes instead of SATA, so now you only need SAS cables rather than a mess of breakout cables. They also sell a 120mm fan adapter so you can use quieter, larger fans.

Aw man, "early" adopting. Apparently the 120mm fan bracket isn't fittable into "older" 4020s either.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



movax posted:

Is everything involved attached to a UPS? All pets/people safely kept away?

Yes to the UPS's.

I like living on the edge, though. Remember that picture I posted earlier of the hard drives temporarily hooked up and sitting on the floor? Well they're sitting under my desk, a short, forgetful foot movement away from getting knocked over/disconnected/smashed to smithereens!


In other news, just browsing around I came across SnapRAID.

It looks like a decent solution for storage that is mainly large media files. I'm going to stick with ZFS, but I like a lot of the features of SnapRAID.

Instead of file system changes or whatever, it merely calculates parity and checksum on all files (or blocks), on demand and stores them on 1 (RAID5-ish) or 2 (RAID6-ish) parity disks. You can just stop using it if you want to and all your data is still available since it doesn't do any striping or pooling or anything. You can recover from accidental file deletions. You can use disks of any size and add more at any time.
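For what it's worth, the whole setup appears to be just a plain text config file plus a couple of commands (a rough sketch based on the docs; the paths and disk names here are made up):
code:
# snapraid.conf (hypothetical layout)
parity  /mnt/parity1/snapraid.parity
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content
disk d1 /mnt/disk1
disk d2 /mnt/disk2
Then "snapraid sync" updates the parity and checksums, and "snapraid fix" rebuilds missing files after you swap in a replacement disk.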

Anyone here played around with this? I'm wondering if you couldn't just use a simple LVM pool and use SnapRAID on the pool.

Longinus00
Dec 29, 2005
Ur-Quan

Thermopyle posted:

Yes to the UPS's.

I like living on the edge, though. Remember that picture I posted earlier of the hard drives temporarily hooked up and sitting on the floor? Well they're sitting under my desk, a short, forgetful foot movement away from getting knocked over/disconnected/smashed to smithereens!


In other news, just browsing around I came across SnapRAID.

It looks like a decent solution for storage that is mainly large media files. I'm going to stick with ZFS, but I like a lot of the features of SnapRAID.

Instead of file system changes or whatever, it merely calculates parity and checksum on all files (or blocks), on demand and stores them on 1 (RAID5-ish) or 2 (RAID6-ish) parity disks. You can just stop using it if you want to and all your data is still available since it doesn't do any striping or pooling or anything. You can recover from accidental file deletions. You can use disks of any size and add more at any time.

Anyone here played around with this? I'm wondering if you couldn't just use a simple LVM pool and use SnapRAID on the pool.

Using it with straight LVM would be a very bad idea. SnapRAID works at the file level, so if the disk you lost contained important filesystem metadata blocks there is nothing it can do to recover those. Another problem is that if you only give SnapRAID 1 device, I'm not sure how it's supposed to come up with any parity information.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



Longinus00 posted:

Using it with straight LVM would be a very bad idea. SnapRAID works at the file level, so if the disk you lost contained important filesystem metadata blocks there is nothing it can do to recover those. Another problem is that if you only give SnapRAID 1 device, I'm not sure how it's supposed to come up with any parity information.

Oh yeah. Durr.

UndyingShadow
May 15, 2006
You're looking ESPECIALLY shadowy this evening, Sir

Thermopyle posted:

Yes to the UPS's.

I like living on the edge, though. Remember that picture I posted earlier of the hard drives temporarily hooked up and sitting on the floor? Well they're sitting under my desk, a short, forgetful foot movement away from getting knocked over/disconnected/smashed to smithereens!


In other news, just browsing around I came across SnapRAID.

It looks like a decent solution for storage that is mainly large media files. I'm going to stick with ZFS, but I like a lot of the features of SnapRAID.

Instead of file system changes or whatever, it merely calculates parity and checksum on all files (or blocks), on demand and stores them on 1 (RAID5-ish) or 2 (RAID6-ish) parity disks. You can just stop using it if you want to and all your data is still available since it doesn't do any striping or pooling or anything. You can recover from accidental file deletions. You can use disks of any size and add more at any time.

Anyone here played around with this? I'm wondering if you couldn't just use a simple LVM pool and use SnapRAID on the pool.
SnapRAID is great, and I use it in my HTPC, which has 6 or 7 2TB and 3TB hard drives. It's dead simple, set to sync every 3 hours. Just last week I lost a drive and it rebuilt about 1.5TB worth of data in 3 hours, so I can confirm it works.
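The recovery itself was basically hands-off; roughly what it amounts to (a sketch, assuming the stock command-line tool and that the replacement disk is mounted at the old mount point listed in snapraid.conf):
code:
snapraid fix      # rebuild the missing files from parity plus the surviving disks
snapraid check    # verify everything against the stored checksums
snapraid sync     # bring parity back up to date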

Lediur
Jul 16, 2007
The alternative to anything is nothing.

I've been looking for something that does flexible (mix-and-match, able to add drives later) Windows-based storage, since Windows 8 Storage Spaces has some shortcomings. Are there any good alternatives?

Also, anyone have recommendations for putting 3.5" hard drives in 5.25" bays? I've run out of hard drive bays in my case but I have a bunch of 5.25" bays I could use.

Dotcom Jillionaire
Jul 19, 2006

Social distortion


If this is the wrong place to ask this question then by all means tell me to shove it, but does anyone have advice for recovering data from a crashed NAS? I have a RAID-5 array with about 3TB of data on it which, due to a power outage, fell out of sync. I was unaware how degraded the array became, but continued to use the disks until they became unmountable.

Most of the errors I'm seeing relate to bad superblocks and not being able to start all 4 drives up (well, really 3 drives and 1 spare, but who's counting?). I've done a lot of fsck runs to try and sort the problem out automatically, but that didn't seem to work. I've also been trying to rebuild the array by swapping one disk out of the pool and then back in, but this hasn't seemed to help either. Now when I try to start the array with mdadm I get an error saying that the array can't be started because it doesn't have enough disks to start (2 disks are active, 1 it says it still being rebuilt).

Anyhow, I think I've gotten to the point where I am ready to spend money on a clean room and get the data recovered that way, but I'm wondering what other options might be on the table. Clean room will run a pretty penny but I am really anxious to get my data back. Any further disk hacking one could recommend would also be appreciated.

Longinus00
Dec 29, 2005
Ur-Quan

tehschulman posted:

If this is the wrong place to ask this question then by all means tell me to shove it, but does anyone have advice for recovering data from a crashed NAS? I have a RAID-5 array with about 3TB of data on it which, due to a power outage, fell out of sync. I was unaware how degraded the array became, but continued to use the disks until they became unmountable.

Most of the errors I'm seeing relate to bad superblocks and not being able to start all 4 drives up (well, really 3 drives and 1 spare, but who's counting?). I've done a lot of fsck runs to try and sort the problem out automatically, but that didn't seem to work. I've also been trying to rebuild the array by swapping one disk out of the pool and then back in, but this hasn't seemed to help either. Now when I try to start the array with mdadm I get an error saying that the array can't be started because it doesn't have enough disks to start (2 disks are active, 1 it says it still being rebuilt).

Anyhow, I think I've gotten to the point where I am ready to spend money on a clean room and get the data recovered that way, but I'm wondering what other options might be on the table. Clean room will run a pretty penny but I am really anxious to get my data back. Any further disk hacking one could recommend would also be appreciated.

Have you tried asking in the linux-raid (mdadm) mailing list for help? (please give them actual logs/dmesg outputs and not just descriptions) Assuming you get the array working, you might then need to ask for help on the filesystem mailing list to piece it back together. Your post doesn't give enough technical information (vague descriptions are no good) to give any more help than that but I highly recommend you don't touch that computer or randomly try to rebuild anymore.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



Lediur posted:

I've been looking for something that does flexible (mix-and-match, able to add drives later) Windows-based storage, since Windows 8 Storage Spaces has some shortcomings. Are there any good alternatives?

Also, anyone have recommendations for putting 3.5" hard drives in 5.25" bays? I've run out of hard drive bays in my case but I have a bunch of 5.25" bays I could use.

Windows-based storage to do what? SnapRAID, which I just mentioned above, works on Windows.

It doesn't do storage pooling, only redundancy.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.


Lediur posted:

Also, anyone have recommendations for putting 3.5" hard drives in 5.25" bays? I've run out of hard drive bays in my case but I have a bunch of 5.25" bays I could use.
Something like this? I got some of them in my box.
http://www.amazon.de/gp/product/B00...ils_o04_s00_i01
No idea what those are called in English.

Lediur
Jul 16, 2007
The alternative to anything is nothing.

Thermopyle posted:

Windows-based storage to do what? SnapRAID, which I just mentioned above, works on Windows.

It doesn't do storage pooling, only redundancy.

Yeah, I was looking for a way to do storage pooling.

I was thinking of installing FlexRAID, moving my stuff off of the drives that I put into a Storage Space, and redoing everything using FlexRAID's storage pooling. I'll probably be using Snapshot RAID because the data doesn't change very often (it's for media and data storage).

If there are comparable free solutions then I'd rather go with those instead.

Lediur fucked around with this message at 07:25 on Nov 1, 2012

Dotcom Jillionaire
Jul 19, 2006

Social distortion


Longinus00 posted:

Have you tried asking in the linux-raid (mdadm) mailing list for help? (please give them actual logs/dmesg outputs and not just descriptions) Assuming you get the array working, you might then need to ask for help on the filesystem mailing list to piece it back together. Your post doesn't give enough technical information (vague descriptions are no good) to give any more help than that but I highly recommend you don't touch that computer or randomly try to rebuild anymore.

Yeah, I run Ubuntu so I've been asking on Ubuntuforums, who were very helpful, but the conclusion seemed to be "back everything up and run fsck to fix and pray". I have enough space now to do an rsync to my new array, but it's going to be a long and boring process and might not even work. I'll toss up some more updated logs but here is where I am pretty much at:

ubuntu@ubuntu:~$ sudo mdadm --assemble --scan
mdadm: excess address on MAIL line: spares=1 - ignored
mdadm: /dev/md0 has been started with 2 drives and 1 spare.
mdadm: /dev/md1 has been started with 3 drives (out of 4).



Error when mounting the array:
[71991.727370] EXT4-fs (md1): bad geometry: block count 1069181904 exceeds size of device (1069181568 blocks)



and then:
ubuntu@ubuntu:~$ sudo fsck.ext4 /dev/md1
e2fsck 1.42 (29-Nov-2011)
fsck.ext4: Group descriptors look bad... trying backup blocks...
fsck.ext4: Bad magic number in super-block when using the backup blocks
fsck.ext4: going back to original superblock
fsck.ext4: Group descriptors look bad... trying backup blocks...
fsck.ext4: Bad magic number in super-block when using the backup blocks
fsck.ext4: going back to original superblock
The filesystem size (according to the superblock) is 1069181904 blocks
The physical size of the device is 1069181568 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? cancelled!


/dev/md1: ********** WARNING: Filesystem still has errors **********

evol262
Nov 30, 2010
#!/usr/bin/perl

cat /proc/mdstat

Look at which drive/partition is missing.

mdadm --examine /that/device

Post here.

Dotcom Jillionaire
Jul 19, 2006

Social distortion


evol262 posted:

cat /proc/mdstat

Look at which drive/partition is missing.

mdadm --examine /that/device

Post here.

Ok here are the current outputs

quote:

ubuntu@ubuntu:~$ sudo mdadm --assemble --scan
mdadm: /dev/md0 has been started with 2 drives and 1 spare.
mdadm: /dev/md1 assembled from 2 drives and 1 rebuilding - not enough to start the array.

ubuntu@ubuntu:~$ cat /proc/mdstat
Personalities : [raid1]
md1 : inactive sda3[0](S) sdd3[4](S) sdc3[2](S) sdb3[1](S)
5702299648 blocks super 1.2

md0 : active raid1 sdb2[0] sda2[2](S) sdd2[1]
39062464 blocks [2/2] [UU]

unused devices: <none>

md0 = system partition
md1 = /home partition (storage partition basically)

Not sure why it's reporting that /dev/md1 is RAID1, maybe that has something to do with the rebuild errors. Since cat /proc/mdstat is reporting all the drives are spares, I'll post the mdadm --examine for each (sorry for spam!)

quote:

ubuntu@ubuntu:~$ sudo mdadm --examine /dev/sda3
/dev/sda3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 995072fb:2b3697eb:43058d45:0a9033ba
Name : ubuntu:1 (local to host ubuntu)
Creation Time : Sat Oct 20 08:33:21 2012
Raid Level : raid5
Raid Devices : 4

Avail Dev Size : 2851149824 (1359.53 GiB 1459.79 GB)
Array Size : 8553446400 (4078.60 GiB 4379.36 GB)
Used Dev Size : 2851148800 (1359.53 GiB 1459.79 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : cd45ff74:4592b8ff:3a638f79:93fac561

Update Time : Sat Oct 20 19:30:15 2012
Checksum : 73db0f5d - correct
Events : 14

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 0
Array State : AA.A ('A' == active, '.' == missing)


ubuntu@ubuntu:~$ sudo mdadm --examine /dev/sdb3
/dev/sdb3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 995072fb:2b3697eb:43058d45:0a9033ba
Name : ubuntu:1 (local to host ubuntu)
Creation Time : Sat Oct 20 08:33:21 2012
Raid Level : raid5
Raid Devices : 4

Avail Dev Size : 2851149824 (1359.53 GiB 1459.79 GB)
Array Size : 8553446400 (4078.60 GiB 4379.36 GB)
Used Dev Size : 2851148800 (1359.53 GiB 1459.79 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : eb8c3e44:3121773e:8ad38d85:cd697bb5

Update Time : Sat Oct 20 19:30:15 2012
Checksum : e18cc4f1 - correct
Events : 14

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 1
Array State : AA.A ('A' == active, '.' == missing)


ubuntu@ubuntu:~$ sudo mdadm --examine /dev/sdc3
/dev/sdc3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 995072fb:2b3697eb:43058d45:0a9033ba
Name : ubuntu:1 (local to host ubuntu)
Creation Time : Sat Oct 20 08:33:21 2012
Raid Level : raid5
Raid Devices : 4

Avail Dev Size : 2851149824 (1359.53 GiB 1459.79 GB)
Array Size : 8553446400 (4078.60 GiB 4379.36 GB)
Used Dev Size : 2851148800 (1359.53 GiB 1459.79 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : a8b5b82e:06fae08b:f967a548:89d5dabd

Update Time : Sat Oct 20 11:32:56 2012
Checksum : e4e656d0 - correct
Events : 10

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing)


ubuntu@ubuntu:~$ sudo mdadm --examine /dev/sdd3
/dev/sdd3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x2
Array UUID : 995072fb:2b3697eb:43058d45:0a9033ba
Name : ubuntu:1 (local to host ubuntu)
Creation Time : Sat Oct 20 08:33:21 2012
Raid Level : raid5
Raid Devices : 4

Avail Dev Size : 2851149824 (1359.53 GiB 1459.79 GB)
Array Size : 8553446400 (4078.60 GiB 4379.36 GB)
Used Dev Size : 2851148800 (1359.53 GiB 1459.79 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
Recovery Offset : 1708983392 sectors
State : clean
Device UUID : da311324:3d5775c6:bc3c6367:9c1a0416

Update Time : Sat Oct 20 19:30:15 2012
Checksum : f19abe52 - correct
Events : 14

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 3
Array State : AA.A ('A' == active, '.' == missing)

Longinus00
Dec 29, 2005
Ur-Quan

tehschulman posted:

Ok here are the current outputs


md0 = system partition
md1 = /home partition (storage partition basically)

Not sure why it's reporting that /dev/md1 is RAID1, maybe that has something to do with the rebuild errors. Since cat /proc/mdstat is reporting all the drives are spares, I'll post the mdadm --examine for each (sorry for spam!)




Just curious, what error was mdadm spitting at you before you decided to pull a drive and rebuild? Also post your mdadm.conf.

Fangs404
Dec 20, 2004

I time bomb.

Mantle posted:

Didn't see this posted here but FreeNAS 8.3 was released on Friday. Changelog and download links here:

http://sourceforge.net/projects/fre...-8.3.0/RELEASE/

Doesn't seem that much different than 8.2, but 8.3 has support for zfs v28 just like NAS4Free now.

Just upgraded from 8.2.0 p1 and did a zpool upgrade without a hitch.

Dotcom Jillionaire
Jul 19, 2006

Social distortion


Longinus00 posted:

Just curious, what error was mdadm spitting at you before you decided to pull a drive and rebuild? Also post your mdadm.conf.

It was partially an accident, but I consider it a success because it didn't destroy the data when I rebuilt the array on one of the disks. Essentially it had come down to me running fsck, going through some of the errors, and accepting the changes. It appeared as though every single block needed to be moved or fixed. I didn't go through more than a few dozen sectors before I decided this was probably a bad idea and quit out.

When I looked at the mdadm -e outputs it looked like /dev/sdc3 was the only drive out of the group that didn't seem in sync (later realizing this was probably the spare in the array). I had also found this blog about recovering a RAID5 mdadm array that seemed completely lost (he tried to rebuild the array and it seemed to work).

It's been a long battle to get to this point anyway. At one point I had the RAID stable enough and mounted that I could transfer some of my files off, but during that transfer the copy operation started to fail more frequently and things are now in worse condition (unmountable).

Dotcom Jillionaire fucked around with this message at 08:11 on Nov 2, 2012

Legdiian
Jul 14, 2004


Looking for a little advice -

I'm an XBMC user that is constantly outgrowing my storage capacity. I started out ripping my DVDs to the local computer. When that ran out of space I started storing data on a WD MyBook. Next I went with one of the entry level Synology 1 bay enclosures.

Currently my media is spread out all over the place and I'm ready to take the plunge and hopefully do this right.

Am I better suited going with a Synology-style device or maybe something like an HP Microserver N40L? I'm currently storing about 4TB of data, but I obviously would like room for expansion. I would be interested in a device that would let me run "apps" like SABnzbd+ and Sick Beard. I'm guessing being able to access my media remotely on iOS devices would be a plus too. I'll also add that I have a Middle Atlantic rack in my closet, so anything rackmountable would be welcomed.

Any suggestions?

Mantle
May 15, 2004


I am SO SO SO SO close to getting the permissions set right on my FreeNAS box. This isn't really a NAS question, but setting permissions is a key part of NAS admin so I'll ask here anyways.

I am trying to use setfacl to change the default permissions for new files and directories to 660 and 770. After banging my head against the wall for hours, I discovered that NFSv4 ACL and POSIX ACL are different and have incompatible command line options.

Finally I found the commands that work on FreeBSD to set the permissions I want:
code:
setfacl -b /mnt/pdrive
chmod 770 /mnt/pdrive
setfacl -a2 owner@:rwatTnNcCy:fi:allow /mnt/pdrive
setfacl -a5 group@:rwatTnNcCy:fi:allow /mnt/pdrive
setfacl -a7 everyone@:rwatTnNcCy:fi:deny /mnt/pdrive
setfacl -a3 owner@:rwatTnNcCyD:di:allow /mnt/pdrive
setfacl -a7 group@:rwatTnNcCyD:di:allow /mnt/pdrive
setfacl -a10 everyone@:rwatTnNcCyD:di:deny /mnt/pdrive
So the next step is to apply this recursively throughout the entire tree. But wait! FreeBSD setfacl has incompatible functionality, and there is no recursive option in FreeBSD setfacl!

This page (http://signalboxes.net/howto/freebs...c-clients/#ACLs) says I can do it recursively using find:
code:
find . -type f -exec setfacl -m user:username:read_set::allow
find . -type d -exec setfacl -m user:username:read_set:fd:allow
But I get this error when I modify it to work with my setfacl commands:
code:
find: -exec: no terminating ";" or "+"
TL;DR Can anyone help me apply my setfacl commands to my entire directory tree in FreeBSD?

movax
Aug 30, 2008



Mantle posted:

I am SO SO SO SO close to getting the permissions set right on my FreeNAS box. This isn't really a NAS question, but setting permissions is a key part of NAS admin so I'll ask here anyways.

I am trying to use setfacl to change the default permissions for new files and directories to 660 and 770. After banging my head against the wall for hours, I discovered that NFSv4 ACL and POSIX ACL are different and have incompatible command line options.

Finally I found the commands that work on FreeBSD to set the permissions I want:
code:
setfacl -b /mnt/pdrive
chmod 770 /mnt/pdrive
setfacl -a2 owner@:rwatTnNcCy:fi:allow /mnt/pdrive
setfacl -a5 group@:rwatTnNcCy:fi:allow /mnt/pdrive
setfacl -a7 everyone@:rwatTnNcCy:fi:deny /mnt/pdrive
setfacl -a3 owner@:rwatTnNcCyD:di:allow /mnt/pdrive
setfacl -a7 group@:rwatTnNcCyD:di:allow /mnt/pdrive
setfacl -a10 everyone@:rwatTnNcCyD:di:deny /mnt/pdrive
So the next step is to apply this recursively throughout the entire tree. But wait! FreeBSD setfacl has incompatible functionality, and there is no recursive option in FreeBSD setfacl!

This page (http://signalboxes.net/howto/freebs...c-clients/#ACLs) says I can do it recursively using find:
code:
find . -type f -exec setfacl -m user:username:read_set::allow
find . -type d -exec setfacl -m user:username:read_set:fd:allow
But I get this error when I modify it to work with my setfacl commands:
code:
find: -exec: no terminating ";" or "+"
TL;DR Can anyone help me apply my setfacl commands to my entire directory tree in FreeBSD?

Put a semi-colon on the end. I.E, if I want to remove executable perms from all files (Solaris, but I am using NFSv4 ACLs as well)
code:
find . -type f -exec chmod -x {} \;
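Applied to your setfacl lines it would look something like this (untested sketch; the {} gets replaced by each path find matches, and the escaped \; terminates the -exec). Note the inherit-only fi/di entries only do anything on directories, so a -type f pass would want plain, non-inheriting entries instead:
code:
find /mnt/pdrive -type d -exec setfacl -a2 owner@:rwatTnNcCy:fi:allow {} \;
find /mnt/pdrive -type d -exec setfacl -a3 owner@:rwatTnNcCyD:di:allow {} \;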
FWIW, here's what I do on my NAS (this is only for videos here)
code:
#!/bin/zsh
chmod -R A0=owner@:rwxpdDaARWcCos:d:allow *
chmod -R A1=owner@:rwpdDaARWcCos:f:allow *
chmod -R A+group@:rxaRcs:d:allow *
chmod -R A+group@:raRcs:f:allow *
chmod -R A+everyone@:rxaRcs:d:allow *
chmod -R A+everyone@:raRcs:f:allow *
Lazy but basically my user on the NAS owns all the files, and that's what I use to connect to it from my personal box. Everyone else gets read-only access to the files.

Mantle
May 15, 2004


movax posted:

Put a semi-colon on the end. I.E, if I want to remove executable perms from all files (Solaris, but I am using NFSv4 ACLs as well)
code:
find . -type f -exec chmod -x {} \;
FWIW, here's what I do on my NAS (this is only for videos here)
code:
#!/bin/zsh
chmod -R A0=owner@:rwxpdDaARWcCos:d:allow *
chmod -R A1=owner@:rwpdDaARWcCos:f:allow *
chmod -R A+group@:rxaRcs:d:allow *
chmod -R A+group@:raRcs:f:allow *
chmod -R A+everyone@:rxaRcs:d:allow *
chmod -R A+everyone@:raRcs:f:allow *
Lazy but basically my user on the NAS owns all the files, and that's what I use to connect to it from my personal box. Everyone else gets read-only access to the files.

If I run
code:
find . -type f -exec setfacl -a2 owner@:rwatTnNcCy:fi:allow /mnt/pdrive;
I still get the error about the closing ; or +. What do the {} and \ represent in your shell command?

The reason I can't use a chmod solution is that I have multiple users accessing the NAS with two different permission classes. I can't give everyone the owner login, and other users need to be able to access the files that others create, in certain directories.

Mantle fucked around with this message at 19:35 on Nov 2, 2012
