I'm getting ready to upgrade my RAID5 array from 3 drives to 4 and I have a question about how I should configure this thing. Currently my motherboard has 4 SATA ports, all of which are used (3 for the array, 1 for the system). This is a software RAID setup. Since I have to buy a controller card anyway, I figure I might as well do this right. Would it be better to buy one 4-port card and put all 4 of the array drives on that? Two 2-port cards (with only the system drive plugged into the MB)? One 2-port card with just the new drive on it, leaving the other 3 on the MB? Sorry if this is a stupid question, but I've tried.
Horn posted:I'm getting ready to upgrade my RAID5 array from 3 drives to 4 and I have a question about how I should configure this thing. Currently my motherboard has 4 SATA ports all of which are used (3 for the array, 1 for the system). This is a software RAID setup. You're looking to buy a SATA controller card? Or a SATA+RAID controller card? What is your budget? Just put them all on the same card.
Horn posted:I'm getting ready to upgrade my RAID5 array from 3 drives to 4 and I have a question about how I should configure this thing. Currently my motherboard has 4 SATA ports, all of which are used (3 for the array, 1 for the system). This is a software RAID setup. I can't really speak to other platforms, but Linux software RAID (mdadm) doesn't care where the drives are. You will want to keep them all the same interface speed if you can (e.g. all SATA II or all SATA I, but not some SATA II and some SATA I) because the slowest drive will slow the whole RAID5 down. It's probably best to put them all on the same card, but basically do whatever's cheapest or has the expansion abilities you want. If you are getting a SATA+RAID card and want to use the card's ability to do RAID, they all have to be on the same SATA+RAID card. PS. Begin with a backup.
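For reference, the actual grow with mdadm is short once the fourth disk is hooked up. This is just a rough sketch, with /dev/md0 standing in for the array and /dev/sde1 for a partition on the new drive; substitute your own device names:

# add the new disk, then reshape the array from 3 members to 4
mdadm --add /dev/md0 /dev/sde1
mdadm --grow /dev/md0 --raid-devices=4
cat /proc/mdstat           # watch the reshape progress
resize2fs /dev/md0         # afterwards grow the filesystem (ext2/3 example; XFS would use xfs_growfs)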
HorusTheAvenger posted:PS. Begin with a backup.
While that is sound advice, I'm willing to bet a forum upgrade he doesn't have any suitable media on which to perform said backup. The reality right now is that you need a pair of RAID-6 arrays in parallel to cost-effectively back up that amount of data, unless you have access to a super fast internet pipe and use a service like Mozy, I guess. So find a friend and make sure the array he builds is at least as big as yours, and close enough that syncing one another's Linux ISO collections isn't *TOO* much of a bother when someone wants to expand their array... I want to try to do a point-to-point wifi connection with a fellow goon here in town once I actually get my NAS up and running. Will post thread if we succeed. roadhead fucked around with this message at 23:05 on Sep 20, 2009
HorusTheAvenger posted:I can't really speak to other platforms, but Linux software RAID (mdadm) doesn't care where the drives are. You will want to keep them all the same interface speed if you can (e.g. all SATA II or all SATA I, but not some SATA II and some SATA I) because the slowest drive will slow the whole RAID5 down. It's probably best to put them all on the same card, but basically do whatever's cheapest or has the expansion abilities you want. My current setup is built like this, with 4 SATA II disks on an Intel ICH8 and a 5th on a Silicon Image 3132 PCIe card. One of the ICH8 ports is the system disk, while the 4th RAID disk is on the 3132. If you use mdadm, make sure you compile "Autodetect RAID arrays at boot" out of the kernel options. Mine would absolutely not pick up the RAID member on the PCIe card and would always start the array in a failed state, making me have to log in and manually reassemble it after each reboot.
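An alternative to rebuilding the kernel is to skip autodetect and let userspace mdadm assemble the array from a config file at boot. A sketch, assuming your init scripts/initramfs run mdadm (the config lives at /etc/mdadm/mdadm.conf on Debian-ish distros, /etc/mdadm.conf elsewhere):

# record the array definition so assembly finds every member, regardless of which controller it sits on
mdadm --detail --scan >> /etc/mdadm.conf
# what the boot scripts effectively run; handy for testing from a rescue shell too
mdadm --assemble --scan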
HorusTheAvenger posted:I can't really speak to other platforms, but Linux software RAID (mdadm) doesn't care where the drives are. You will want to keep them all the same interface speed if you can (e.g. all SATA II or all SATA I, but not some SATA II and some SATA I) because the slowest drive will slow the whole RAID5 down. It's probably best to put them all on the same card, but basically do whatever's cheapest or has the expansion abilities you want. I'm sticking with software RAID so I'll just pick up a 2-port card since it doesn't seem to make a difference. HorusTheAvenger posted:PS. Begin with a backup. Of course. I've been doing this stuff for a while but I didn't know if the card configuration would have any real effect on the performance of this thing. roadhead posted:While that is sound advice, I'm willing to bet a forum upgrade he doesn't have any suitable media on which to perform said backup. Thanks for the advice, goons.
What's the general consensus on storing non-critical data like movies and TV shows? I'm torn between no RAID, RAID 5, and RAID 6. I'm only talking about movies that I've ripped (so I have the physical media as my backup), TV shows, and application/game ISOs (which, again, I have the physical media for). This is data I can afford to lose, but of course I'd prefer that I don't. If I don't go with any RAID and a HD dies, then I lose everything on that drive. If I go with RAID 5 and a HD dies, I have the possibility of losing everything during the rebuild (e.g. a second drive failure). If I go with RAID 6 I will lose a LOT of space but am relatively safe. I'm not certain that I need that kind of safety. The array size I'm talking about is about 5x 1TB drives and my non-critical media is currently taking up about 2.5 TB of space.
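(For scale: with 5x 1TB drives, no RAID gives roughly 5TB usable, RAID 5 gives roughly 4TB since one drive's worth goes to parity, and RAID 6 gives roughly 3TB with two drives' worth of parity, so RAID 6 costs one extra drive of capacity rather than half the array.)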
Vinlaen posted:What's the general consensus on storing non-critical data like movies and TV shows? Just go with RAID5 and make sure you scrub the drives each week while you have redundancy.
echo check > /sys/block/md0/md/sync_action
http://en.gentoo-wiki.com/wiki/RAID...#Data_Scrubbing
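If you want that automated, a weekly cron entry along these lines does it. Just a sketch, assuming the array is md0 and a standard /etc/cron.d setup:

# /etc/cron.d/raid-scrub: start a consistency check on md0 every Sunday at 03:00
0 3 * * 0 root echo check > /sys/block/md0/md/sync_action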
Lobbyist posted:Just go with RAID5 and make sure you scrub the drives each week while you have redundancy. Very interesting. I assume there is a similar command to do the same thing in FreeBSD running RAID-Z?
Well, the Dell Perc 5/i (which I purchased) will still continue an array rebuild even after a URE, so I'm not that terrified of bad sectors. What I am scared of is having a second drive fail during a rebuild (which seems fairly likely from everything I read). That's the really bad thing about RAID 5 (i.e. losing EVERYTHING as opposed to just one or two drives).
That's why I'm going with RAID-6 plus hotspare.
roadhead posted:Very interesting. I assume there is a similar command to do the same thing in FreeBSD running RAID-Z?
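For what it's worth, ZFS has this built in as a pool scrub. A minimal sketch, assuming the pool is named tank:

zpool scrub tank     # walk every block in the pool and verify/repair checksums from redundancy
zpool status tank    # shows scrub progress and any per-device checksum errors it found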
Vinlaen posted:Well, the Dell Perc 5/i (which I purchased) will still continue an array rebuild even after a URE, so I'm not that terrified of bad sectors. No, what you should be scared of is drive failure + URE upon rebuild. This is mitigated by scrubbing your array regularly. The scrub is a consistency check that will identify a URE before your first disk failure, mark the space as bad, and create a new copy in a new location, so you find the UREs before a disk fails and has to be rebuilt. Lobbyist fucked around with this message at 17:59 on Sep 23, 2009
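On Linux md you can also see whether the last check actually found anything. A quick check, again assuming the array is md0:

cat /sys/block/md0/md/mismatch_cnt   # non-zero after a check means inconsistent blocks were found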
I have a sinking feeling that one of my WD drives is beginning to fail. Writes are taking much longer than they used to; for example, moving a 1GB file from one partition on the drive to another partition took something like 5 minutes. Moving a file from a separate drive took about the same time, maybe a bit quicker. At any rate, I'd like to somehow check whether the drive is beginning to fail. Unfortunately, this is the drive I use for my main Windows partition, and I believe that it might be continuous swap writes that are causing some degradation in performance. I'm looking to upgrade to 2GB of RAM (currently at 1GB). In short: what's the best way to test a drive for degradation?
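The usual first stop is SMART data via smartmontools. A sketch, assuming the disk shows up as /dev/sda (adjust the device name; smartctl has Windows builds too):

smartctl -a /dev/sda           # dump SMART attributes; rising Reallocated_Sector_Ct or Current_Pending_Sector is a bad sign
smartctl -t long /dev/sda      # start an extended self-test (runs in the background on the drive)
smartctl -l selftest /dev/sda  # read the self-test log once it finishes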
Lobbyist posted:No, what you should be scared of is drive failure + URE upon rebuild. This is mitigated by scrubbing your array regularly. Even if the Dell Perc 5/i gets a URE during a rebuild, it will continue rebuilding the array, so you can still access everything even though some data will be corrupted (which is still better than losing everything, in my opinion). However, I guess RAID 6 is really the best solution. I think RAID 6 plus a hot spare is a bit overkill (with my data at least), especially since it would take more than six drives to become more efficient than RAID 1.
So why aren't OpenSolaris and RAID-Z more popular? I had a friend set it up on an old box that he had, and it worked on all the hardware he had lying around. I'm considering doing this myself, but wondering how OpenSolaris would do on random cobbled-together hardware. I've heard lots of raving about how good ZFS is; I want to see it myself.
Vinlaen posted:However, I guess RAID 6 is really the best solution. I think RAID 6 plus hot spare is a bit overkill (with my data at least) especially since it would take more than six drives to become more efficient than RAID 1. I'm also (going to be) running a 10-drive array, so the odds of me hitting a second failure during rebuild are a bit higher.
roadhead posted:So FreeBSD or Solaris for hosting the RAID-Z? ESXi as the base OS, with the OpenSolaris VM given direct access to all the disks. That way I get the best performance out of the dozen or so VMs I'll be using, while still getting to use ZFS for all the data storage. I've got no idea if I can get the OpenSolaris VM to boot from its ZFS volume, but if it can't, I have a ton of 250GB disks I can use as a boot disk for the VM.
Weinertron posted:So why aren't OpenSolaris and RAID-Z more popular? I had a friend set it up on an old box that he had, and it worked on all the hardware he had lying around. I'm considering doing this myself, but wondering how OpenSolaris would do on random cobbled-together hardware. I've heard lots of raving about how good ZFS is; I want to see it myself. Probably the unixy-ness of it. Make no mistake, it is far more powerful, but less easy to maintain than, say, openfiler. If Sun were to open source fishworks, you can bet thousands of geeks would flock to it at once.
adorai posted:Probably the unixy-ness of it. Make no mistake, it is far more powerful, but less easy to maintain than say, openfiler. If it had a GUI for configuration, I'm sure a lot more people would use it. Trying out esoteric command line statements on your only copy of your precious data with only a --help to guide you would be a bit nerve wracking. Hell, the only difference between OpenFiler and Solaris is the nifty web UI.
I'm interested in ZFS but don't want to run OpenSolaris as my server OS; I'd rather pick a Linux distro of my choice. Linux software RAID-5 and XFS give me tremendous performance as well: 300MB/sec reads and 200MB/sec writes with five 1TB SATA drives and a low-end Phenom X4 9600. How is ZFS performance?
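If you want numbers you can compare apples to apples, a crude sequential test is usually enough. A sketch with GNU dd (not a real benchmark; /tank/testfile is just a placeholder for wherever the pool or array is mounted):

dd if=/dev/zero of=/tank/testfile bs=1M count=8192 conv=fdatasync   # sequential write, flushed to disk before dd reports a rate
dd if=/tank/testfile of=/dev/null bs=1M                             # sequential read (drop or outrun the cache for a fair number)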
Lobbyist posted:I'm interested in ZFS but don't want to run OpenSolaris as my server OS. I'd rather pick a Linux distro of my choice. Linux software RAID-5 and XFS give me tremendous performance as well: 300MB/sec reads and 200MB/sec writes with five 1TB SATA drives and a low-end Phenom X4 9600. How is ZFS performance? Methylethylaldehyde posted:ESXi as the base OS, with the OpenSolaris VM given direct access to all the disks. That way I get the best performance out of the dozen or so VMs I'll be using, while still getting to use ZFS for all the data storage. Or you can run FreeBSD (either as a guest of ESXi or on the metal), but I'm not aware of any Linux distros that have STABILIZED their port of ZFS. Probably only a matter of time though.
roadhead posted:Or you can run FreeBSD (either as a guest of ESXi or on the metal), but I'm not aware of any Linux distros that have STABILIZED their port of ZFS. Probably only a matter of time though. Pretty much, but given I know shit all about FreeBSD, Linux, and Solaris in general, I might as well learn the one system with native support for it. That and you can get Nexenta, which gives you the OpenSolaris kernel with the Ubuntu userland, so you can use all those fancy programs you don't find on a regular distro.
Methylethylaldehyde posted:Pretty much, but given I know shit all about FreeBSD, linux, and solaris in general, I might as well learn the one system with a native support for it. That and you can get Nexenta, which gives you the OpenSolaris Kernel with the Ubuntu userland, so you can use all those fancy programs you don't find on a regular distro. Would you guys suggest Nexenta? I have been running an opensolaris box with raidz1 for the last six months and am getting sick of the process to update apps and such.
PrettyhateM posted:Would you guys suggest Nexenta? I have been running an opensolaris box with raidz1 for the last six months and am getting sick of the process to update apps and such. I haven't really played with it outside occasional fuckery in a VM, but using apt-get for packages was pretty fucking nice once I edited in the repositories I wanted to use. It comes as a LiveCD, so I suppose you could play with it and see if you like it.
Yeah, I'll be playing with the LiveCD this weekend. Wondering if having the LiveCD look at my raidz will cause any issues when booting back into OpenSolaris.
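Generally it's fine as long as the LiveCD imports the pool and then cleanly exports it before you reboot. A rough sketch, with the pool name assumed to be tank:

zpool import          # on the LiveCD: just lists pools it can see, touches nothing
zpool import tank     # actually import it (don't run 'zpool upgrade', or the older OS may no longer read the pool)
zpool export tank     # export before rebooting so the original install doesn't need 'zpool import -f'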
roadhead posted:I'm not aware of any Linux distros that have STABILIZED their port of ZFS. Probably only a matter of time though. Nope, ZFS is not GPL and will never be integrated into the kernel. It can run in userland, but will not perform well.
adorai posted:Nope, ZFS is not GPL and will never be integrated into the kernel. It can run in userland, but will not perform well. Not necessarily true: http://www.sun.com/emrkt/campaign_d...zfs_gen.html#10 Wouldn't expect it soon though. On another note - I'm curious if anyone has experience running MogileFS? http://www.danga.com/mogilefs/
DLCinferno posted:Not necessarily true:
Synology just upgraded the 2009 2-disk version again for faster performance; claimed speeds are 58MB/s to the NAS and 87MB/s from the NAS. http://www.synology.com/enu/products/DS209+II/index.php Looks like they flattened the front plate too.
I have a 4x500GB RAID 5 that's a few years old now and I'd like to upgrade it to a 4x1.5TB array. I could move all the data to one 1.5TB drive temporarily, build a 3x1.5TB array, add the final drive into the mix, and then transfer all of the data back, but if there is an easier way to go about it, that would be helpful. This is all being done on a HighPoint RocketRAID 2300.
Buy a fifth 1.5TB drive, move everything to it, build your 4x1.5 array, move everything over to the array, and stick the fifth drive in an external enclosure. Or sell it to one of your friends. Might not be cheapest but it seems like that would be the least-effort way.
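Either way, the copy steps are just a straight mirror of the filesystem. A sketch, assuming the old array is mounted at /mnt/old and the temporary drive at /mnt/temp:

rsync -aHv --progress /mnt/old/ /mnt/temp/   # -a preserves perms/ownership/times, -H keeps hard links; trailing slashes matter
# then build the new array, mount it, and rsync the same data back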
Hey guys, I need to build a new backup server at work, and in the past we've used RAID5 with mdadm and LVM, which has worked decently. I have heard that hard drive sizes are getting to the point where rebuilding arrays can take a really long time, so I am wondering what the new sexy is as far as pooled storage. I have played with ZFS briefly on a Nexenta system, but Nexenta is a cobbled-together POS and Solaris is pretty annoying for me since I am comfy in Linux. Is ZFS on Linux through FUSE a viable thing yet? What other configurations should I consider? I will most likely be using about eight 1.5 or 2TB drives in the array.
Eyecannon posted:Solaris is pretty annoying for me since I am comfy in Linux. Did you try OpenSolaris, SXCE, or Solaris proper?
adorai posted:Did you try OpenSolaris, SXCE, or Solaris proper? I'd never touched Solaris before and picked up OpenSolaris in about a day. Seriously, if you're comfortable with Linux, OpenSolaris really isn't that bad. Once you wrap your head around SMF it's great. Edit: Documentation is really lacking, though; the OpenSolaris Bible is very good and very helpful.
It's just that I am constantly having to search for the equivalents of the commands I am used to... not a huge deal, but it kills my productivity quite a bit. But is ZFS the answer for me?
Eyecannon posted:But is ZFS the answer for me?
How is OpenSolaris nowadays with custom hardware? I remember having some issues in the past with drivers being unavailable for certain things like onboard network cards. I made a ZFS pool on a retired fileserver that had a few iffy drives, and it actually still worked pretty well most of the time, though I did have some unrecoverable file problems after a while even though it was ZFS. Those were old, abused drives though; I've not tried to install Solaris on any current hardware.
^^^^^ Yes, ZFS is for you. ZFS is fucking amazing. The box I talk about below was built out of random hardware lying around, and is using both the onboard NIC and an add-in card. My friend brought over his OpenSolaris box to let me dump some data I had backed up for him onto it, and I'm seeing transfer speed slow down as time passes. Furthermore, I seem to have overloaded it by copying too many things at once and I lost the network share for a second. I ssh'ed into it, and everything looks fine, but transfer speed keeps dropping from the initial 60MB/s writes I was seeing all the way down to 20MB/s. Is everything OK as long as zpool status returns 0 errors? I don't know much about ZFS; how full should the volume be allowed to run? It's on 4x1TB drives, so it has about 2.67TB of logical space. Of this, about 800GB is available right now.
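Both of those things are quick to check. A sketch, assuming the pool is called tank:

zpool status -v tank   # per-device READ/WRITE/CKSUM error counters, plus any files ZFS knows are damaged
zpool list tank        # the CAP column shows how full the pool is; write performance commonly drops off past roughly 80% full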
I'm looking to roll my own NAS/MediaCenter combo and I'm not really sure about what I'm doing anymore. I have a media center that I've stuffed 8 goddamn drives into, troubleshooting things on it is a pain, and I have a lousy case that I hate... so I want to just fucking start over. Is it just a terrible idea to take one of these: http://www.newegg.com/Product/Produ...N82E16816111057 and throw it on a barebones PC that's capable of playing out to a TV and running SABnzbd/uTorrent, and just use it as semi-NAS storage? Slightly more specifically, this? http://www.newegg.com/Product/Produ...N82E16856101070 Expiration Date fucked around with this message at 03:52 on Oct 5, 2009