kiwid posted:Bigger drives, more drives, and going from raidz1 to raidz2. As far as I know, you still can't expand a vdev with more drives, right?
Nope, not as far as I know.
Combat Pretzel posted:The Solaris one has 600000+ load cycles, the Drobo only 4000+. I don't get it?
"only" -- that's still a horrendous load cycle count.
Combat Pretzel posted:The Solaris one has 600000+ load cycles, the Drobo only 4000+. I don't get it?
That's because I'm dumb and got the screenshots mixed up. It's the other way around. Edited...
Backblaze has a new blog post up about enterprise vs. consumer drives: http://blog.backblaze.com/2013/12/0...ve-reliability/
quote:So, Are Enterprise Drives Worth The Cost?
So I just happened to look at the SMART values on my other NAS with older drives. I haven't been getting any SMART errors or warnings or anything (tests are scheduled weekly), but one of the drives looks like this: code:
I don't think so. IIRC the ones you really want to look at are Reallocated Sector Count and Current Pending Sector Count; those represent actual bad sectors developing and indicate an impending general failure.
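For anyone who wants to check those attributes from a shell rather than a NAS web UI, here's a minimal sketch using smartmontools on a FreeBSD-based box -- the drive name ada1 is just a placeholder, and this is not the poster's actual output: code:
# dump the attribute table and pick out the counters worth watching
smartctl -A /dev/ada1 | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable|UDMA_CRC_Error_Count'
Non-zero raw values on the reallocated/pending counters are the usual cue to start planning a replacement; CRC errors tend to point at cabling instead.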
I wouldn't sweat it yet; you still have no reallocated sectors and no uncorrectable errors.
I'm also glad I got my latest shipment of 3TB Reds when I did. I was playing musical drives to try to get my three lowest-hour 1.5TB drives into the same raidz1 vdev when my single highest-hour one (with over 37.7k hours on it!) decided to shit the bed and use up what was my last remaining 1.5TB spare. I have to check now, but I think the three Samsung 1.5s that died on me were all in a very short string of nearly-sequential serial numbers.
For the record, here's how Newegg shipped mine - the third drive easily had enough room to rock from angled to sitting against the end of the box. [photo of the shipping box]
IOwnCalculus fucked around with this message at 21:00 on Dec 5, 2013
titaniumone posted:"only"
IOwnCalculus posted:I wouldn't sweat it yet, you still have no reallocated sectors and no uncorrectable errors. Regardless of the lack of actually securing the drives, I really do like those bubble case things. They're so much less annoying than the old layers and layers of regular bubble wrap taped together every which way.
Completely agreed, I have at least some hope (and if not, well, there's always WD's RMA process).
IOwnCalculus posted:
I really did luck out then. That sucks.
IOwnCalculus posted:For the record, here's how Newegg shipped mine
Yup. That's how mine looked too.
SO. I think I'm going to give NAS4Free a try. It's a little unnerving to try to learn something completely from scratch, but I am hoping I can make it work. (That, and apparently I need to secure an extra 4GB of DDR3 memory if I'm going to run it on that AMD E-350, or DDR2 if I'm going to run it on my spare Athlon X2 platform.) Not really sure, however, what the OS actually installs to; it looks like it runs off of a USB drive. If so, that means I get to return the 32GB SSD I bought off of Amazon to run Windows on, but whatever.
As far as stress testing the array once I've got it built...how does one go about it?
You can run it off of the SSD, and there are some theoretical benefits to doing so (you can give it some swap space, for example). However, when I looked into it most of the documentation strongly recommended against that type of installation due to increased difficulty of updates. You could still do the USB-type installation (no swap, the system loads from it once to boot and never touches it again except to update the config.xml as needed) on a non-USB storage device, but typically you can save some money and free up a SATA port by just using whatever cheap USB stick you can get your hands on. Or, if you aren't concerned about the cost, you could use a USB stick to boot from, and add the SSD in as a cache device.
Stress testing? Copy as much data as you can to it and read it back. Write as many copies of the Ubuntu install ISO to it as you can, and run zpool scrub over and over again so that ZFS is having to constantly validate the data. It's been my experience with NAS4Free that if you're doing any sort of heavy workload like this and you're going to see a problem, the problem drive is going to start chucking errors left and right into the system log; otherwise it will quietly do what you ask of it.
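If you'd rather script that "fill it up and scrub it" idea than shuffle files by hand, a rough sketch -- the pool name (tank) and source ISO path are placeholders, not anything from this thread: code:
#!/bin/sh
# write a pile of copies of a big file, then make ZFS re-read and checksum everything
POOL=tank
SRC=/mnt/${POOL}/ubuntu.iso

i=0
while [ $i -lt 20 ]; do
    cp "$SRC" "/mnt/${POOL}/stress_copy_${i}.iso"
    i=$((i + 1))
done

zpool scrub "$POOL"          # scrub runs in the background
zpool status -v "$POOL"      # read/write/cksum error counters show up here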
So then the best option (especially considering my status as a total novice) is just to run it off a USB stick?
Edit: moved into its own post.
Psimitry fucked around with this message at 22:18 on Dec 5, 2013
If I were starting from scratch, yeah, that's what I'd do. Either use the SSD for cache or return it and save the money / the SATA port.
As far as the testing, is there an automated way to do this? I've heard something about using HDtune as an automated tool to run it for however long you specify, writing zeroes, reading the entire drive, writing ones, reading, repeat. Obviously that's not going to spot random read errors, but again I am very new to all of this.
I dd if=/dev/zero of=/dev/thedrive bs=32M count=100 or 1000 or so, then do a SMART short offline test, and the next day do a long offline test. If it survives that, I use it.
Edit: Actually, I think I started by filling the drive (leave off the count=blah part of dd) and then doing the rest.
Ninja Rope fucked around with this message at 01:50 on Dec 6, 2013
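Spelled out as a sequence, that burn-in might look something like this -- a sketch only, with ada2 standing in for whatever device you're actually testing (triple-check the device name, since the dd step destroys everything on it): code:
DISK=/dev/ada2
dd if=/dev/zero of=$DISK bs=32M      # zero the whole drive (no count= means run until the end)
smartctl -t short $DISK              # quick self-test, roughly 2 minutes
# ...wait for the short test, then the next day:
smartctl -t long $DISK               # full surface scan, takes hours
# ...once that finishes:
smartctl -l selftest $DISK           # self-test log / results
smartctl -A $DISK                    # reallocated / pending sector counters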
Bob Morales posted:Backblaze has a new blog post up about enterprise vs. consumer drives
Be careful with those conclusions -- they haven't had the enterprise drives for more than two years, so they don't have any data on third-year failures. Even if they do have the same failure rate, that's where the five-year warranty on most enterprise drives would come into play in the bang/buck calculations. There are also other warranty considerations, including manufacturers' tendency not to honor warranties on consumer drives used in RAID arrays.
Ninja Rope posted:I dd if=/dev/zero of=/dev/thedrive bs=32M count=100 or 1000 or so, then do SMART short offline test, and the next day do a long offline test. If it survives that I use it.
I don't really have any idea what that means - is this done through a command line from NAS4Free or some other Linux variant?
Yes, sorry. That command fills up the drive with zeroes, touching (ideally) every sector. dd is kind of complicated and I can't find a tutorial right now, but in order to use dd you'll need to know the raw device for the drive you want to test. I don't know if FreeNAS/NAS4Free shows you in the UI, but it should be something like "/dev/ada0" or just "ada0". You'll also see them listed in the output of the "dmesg" command (along with a ton of other data) or, if you've created the zpool already, "zpool status" may list the drives as well (but without the /dev/ part). If the drive actually is /dev/ada0, the dd command to fill the drive with zeroes would be: code:
dd if=/dev/zero of=/dev/ada0 bs=32M
The command will take forever and not show any progress and probably not report any errors, but checking the SMART data afterwards (again I don't know if there's a FreeNAS/NAS4Free UI for this or if you have to use smartmontools) or forcing a SMART offline test would report any errors dd exposed.
Ninja Rope posted:The command will take forever and not show any progress and probably not report any errors, but checking the SMART data afterwards (again I don't know if there's a FreeNAS/NAS4Free UI for this or if you have to use smartmontools) or forcing a SMART offline test would report any errors dd exposed.
GNU ddrescue will let you monitor progress and stop and resume, which can be useful for large drives. The badblocks program also does something similar, but has a non-destructive option if you don't want to write to the drive. It can also show progress and be resumed.
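For reference, the badblocks invocations being described look roughly like this (the device name is an example; badblocks ships with e2fsprogs): code:
# destructive write-then-verify pass over the whole drive -- wipes everything on it
badblocks -wsv /dev/ada2

# non-destructive read-write test: backs up each block, tests it, writes it back
badblocks -nsv /dev/ada2
The -s and -v flags show progress and per-pattern detail while it runs.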
Prince John posted:GNU ddrescue will let you monitor progress and stop and resume, which can be useful for large drives.
kill -USR1 will also show progress from dd.
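A quick illustration of that, with the FreeBSD caveat since this thread is mostly NAS4Free (the pgrep pattern assumes only one dd is running): code:
# GNU dd (Linux) prints bytes copied / throughput when it receives SIGUSR1:
kill -USR1 "$(pgrep -x dd)"

# FreeBSD's dd (what NAS4Free uses) listens for SIGINFO instead;
# pressing Ctrl+T in the terminal running dd does the same thing.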
I'm glad for the student-run computer shops in the nearby city, which houses a fairly large tech-oriented university. They get their drives delivered in well-padded manufacturer boxes of 20 each and sell them over the counter. None of that shit packing that's killing drives. And they're competitive with online shops in their pricing, too.
McGlockenshire posted:Be careful with those conclusions -- they haven't had the enterprise drives for more than two years, so they don't have any data on third-year failures.
I've never had Seagate or WD give me flak about RMAs on drives used in RAID arrays, even Greens. I guess I've just been lucky so far.
More relevant to the thread: I'm surprisingly happy with this LSI 9211-8i. Rebuilding the 3 x 3TB set into a 4 x 3TB set is chugging along at 40 MB/sec, even while playing a movie off of it; ~20 hours of rebuild time was a lot less than I was expecting. I had an array of 3 x 3TB 7200 RPM drives + 5 x 2TB green drives, just running mdadm RAID-5s. Ditched all the greens, because screw that. I ended up picking up 4 x 3TB Seagate SAS drives because a buddy was selling them new/unopened for cheap ($100 for Constellation ESes was too good to pass up for feeding my storage addiction). So I'm going to end up with an array of 4 x 3TB SATA and 4 x 3TB SAS (both RAID-5), and then the last 4 x 3TB SATA get shoved into my desktop, probably in a RAID-10 array for FRAPS recording duties (really Dxtory and/or ShadowPlay, but FRAPS just sorta became synonymous with "recording games" to most people). I feel like Agreed with his video card addiction, except with storage at home.
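For anyone curious what that kind of mdadm reshape looks like, a hedged sketch -- the array and disk names are made up, not this poster's exact layout: code:
mdadm --add /dev/md0 /dev/sdd              # new disk joins the array as a spare first
mdadm --grow /dev/md0 --raid-devices=4     # reshape the RAID5 across four members
cat /proc/mdstat                           # watch reshape progress
# once the reshape finishes, grow the filesystem on top, e.g. for ext4:
resize2fs /dev/md0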
Yay! My FreeNAS server is up and running, although I was apparently not comprehending correctly when I installed 4x2TB drives in a RAIDZ2 configuration. I was expecting to have 6TB of usable space, but only have 4TB. I still could have arranged it in a less-optimal configuration to get the 6TB, but I decided the massive redundancy was the better way to go (really don't want to lose any data).
I've also got a 4TB external drive coming that I can use as a backup solution, so I'm happy that I don't have to build another server to do backup. Of course, when I do feel the need to expand my storage space I'm going to have to figure out how I want to go about doing it - at present my Frankenstein machine has a paltry 4 SATA ports, so it's maxed out in the number of drives I can shove into it. Friend says I'm getting too far ahead of myself, as 4TB will be more than enough space for a long time to come for me. Even so, I might start pulling together a parts list to make a proper FreeNAS server, so that if/when the money comes available I can do it right.
In case anyone is interested I'm using a Core2Duo E8400 CPU, Gigabyte GA-EP35-DS3L Motherboard, 8GB of DDR2 800 (PC2 6400) RAM, an AMS 4-in-3 Trayless backplane, and a 4GB USB flash drive to run FreeNAS, with 4x2TB Seagate NAS drives. The drives were, without a doubt, the most expensive part of the server and made up about 80% of the cost.
The number behind the Z in RAIDZ defines the amount of parity disks. Two disks are used on parity, so that's why 4TB instead of 6TB.
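In usable-space terms the arithmetic is (number of drives minus the RAIDZ level) times drive size, before filesystem overhead -- a quick worked example: code:
echo "4x2TB raidz2: $(( (4 - 2) * 2 )) TB usable"   # -> 4 TB
echo "4x2TB raidz1: $(( (4 - 1) * 2 )) TB usable"   # -> 6 TB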
I got that after reading the user's guide half a dozen times, searching the web, and trying to figure out what the hell was going on. Frustration is a great teacher.
If you have good backups, you might almost be better off doing a RAID10 at 4 drives (stripe two nested raid1 groups) and have the same usable space but increased performance. You can technically lose two drives, but there's a chance you can only lose one -- if the second failure lands in the same mirror as the first, the array is gone.
edit: but for home use, you're probably better off sticking with raidz1-3
kiwid fucked around with this message at 17:20 on Dec 6, 2013
Ninja Rope posted:The command will take forever and not show any progress and probably not report any errors, but checking the SMART data afterwards (again I don't know if there's a FreeNAS/NAS4Free UI for this or if you have to use smartmontools) or forcing a SMART offline test would report any errors dd exposed.
There are three ways you would see errors pop up on this. The first is NAS4Free's SMART screen at Diagnostics -> Information -> SMART, where each drive will have an entry like this: code:
Second place is (if you use ZFS) in the ZFS screen at Disks -> ZFS -> Information. You'll see something like this: code:
The third place you would see errors is the system log at Diagnostics -> Log. My log doesn't have any errors in it since I rebooted after the last drive failure, but when a drive decides to really shit the bed, you'll see a bunch of lines showing up repeatedly at the end of the log about being unable to read that particular drive. It's at this point that ZFS will typically drop the drive from the array altogether.
Really, you should set it up, start writing some data to it, and configure email monitoring so that it emails you if ZFS ever shows a status other than "ONLINE". There was a guide somewhere on how to do it, but I can't seem to find it. I stuck the two scripts on pastebin: http://pastebin.com/j6VZKDXC and http://pastebin.com/HKM0CevX. I put these in a folder inside my ZFS array since I didn't want to dick around with unionfs, and I just have a cronjob set to run zfs_errors.sh every 30 minutes. I also have another cronjob set to run 'zpool scrub yourpoolnamehere' on a weekly basis.
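In crontab form, those two jobs would look roughly like this -- the pool name, script path, and times are placeholders, and NAS4Free can also schedule jobs from its web UI: code:
# every 30 minutes: mail me if the pool reports anything other than ONLINE
*/30 * * * *  /mnt/tank/scripts/zfs_errors.sh
# weekly scrub, Sunday at 03:00
0 3 * * 0     /sbin/zpool scrub tank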
Is there a way to do an initial drive test on a Synology box?
kiwid posted:If you have good backups, you might almost be better to do a RAID10 at 4 drives (stripe two nested raid1 groups) and have the same usable space but increased performance. You can technically lose two drives but there is a chance you can only lose one.
Well fuck, it looks like a hardware issue with this ReadyNAS, and of course it's out of warranty. Seeing as how it's btrfs based, can I just drop these drives into something else and mdadm it back to life? Or am I most likely going to lose everything?
Pudgygiant posted:Well fuck, it looks like a hardware issue with this ReadyNAS, and of course it's out of warranty. Seeing as how it's btrfs based, can I just drop these drives into something else and mdadm it back to life? Or am I most likely going to lose everything?
As with everything, it depends on how much ReadyNAS does or doesn't do. It might actually be as simple as mounting a single drive via btrfs; it would pick up the rest of the drives and give you access to the cluster.
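If it does turn out to be btrfs sitting on top of md RAID (which is how ReadyNAS volumes are typically laid out), a rough sketch of the recovery attempt on a generic Linux box -- device and mount names are illustrative only: code:
mdadm --assemble --scan                          # find and assemble the existing array(s)
cat /proc/mdstat                                 # confirm it came up (e.g. as md127)
mkdir -p /mnt/recovery
mount -o ro -t btrfs /dev/md127 /mnt/recovery    # mount read-only while you check the data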
They're all assembled into a RAID device (md127), which is then mounted at /data. I guess we'll see.
I'm close to pulling the trigger on all this (and FreeNAS on a thumb drive), anything I should change? code:
Combat Pretzel posted:The number behind the Z in RAIDZ defines the amount of parity disks. Two disks are used on parity, so that's why 4TB instead of 6TB.
This is bugging me a bit, so just to clarify: those numbers aren't indicators of the amount of parity disks - rather, it's an indicator of how many disks you can lose and still be able to rebuild/resilver because of parity stored on each drive in the vdev/zpool.
EDIT: here is a look into how zpools do parity across disks from my bookmarks.
D. Ebdrup fucked around with this message at 17:58 on Dec 7, 2013
D. Ebdrup posted:This is bugging me a bit, so just to clarify: those numbers aren't indicators of the amount of parity disks - rather, it's an indicator of how many disks you can lose and still be able to rebuild/resilver because of parity stored on each drive in the array/pool.
Well, the same is true for parity with RAID5/6; nothing uses a dedicated disk for parity (unless you are somehow doing RAID 4). RAID5 is roughly equivalent to RAIDZ in that both can tolerate a single disk loss, and RAID6 and RAIDZ2 can tolerate two disk failures in a pool. The parity information is striped across all the disks in the pool in all of the cases listed. Many people say RAID5/RAIDZ has one parity disk and RAID6/RAIDZ2 has two parity disks because that's the amount of space you lose to parity, and they were either taught wrong or at least given a simplified version of what's really going on.
Are there any decent cases for ITX/mATX custom NAS boxes? I've got an IBM ServeRAID M1015 controller laying around, so I figure I could go with an ITX board (or mATX failing that). I probably only need 4 drives, so something as small and slick as a Synology box would be nice. I've read the past couple of pages, but I haven't seen any recent posts on hardware choices.
Did I set this up correctly in that it is a RAID10 setup? [screenshot of the pool layout]
I have a feeling it's just two groups of RAID1 and no striping.
kiwid fucked around with this message at 01:02 on Dec 8, 2013
kiwid posted:I have a feeling it's just two groups of RAID1 and no striping.
ZFS load balances data between different vdevs based on IO load, among other things. Data's being spread, just not in an interleaved fashion you're expecting.
Not sure why you need an intent log, tho. Are you writing out that much data?
D. Ebdrup posted:This is bugging me a bit, so just to clarify: those numbers aren't indicators of the amount of parity disks - rather, it's an indicator of how many disks you can lose and still be able to rebuild/resilver because of parity stored on each drive in the vdev/zpool.
Combat Pretzel fucked around with this message at 01:15 on Dec 8, 2013
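For reference, a "RAID10-style" pool in ZFS terms is just a pool made of two (or more) mirror vdevs -- a minimal sketch, with made-up pool and disk names: code:
zpool create tank mirror ada0 ada1 mirror ada2 ada3

# zpool status then shows two mirror vdevs in one pool, with writes spread across both:
#   NAME        STATE
#   tank        ONLINE
#     mirror-0  ONLINE
#       ada0    ONLINE
#       ada1    ONLINE
#     mirror-1  ONLINE
#       ada2    ONLINE
#       ada3    ONLINE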
Combat Pretzel posted:ZFS load balances data between different vdevs based on IO load, among other things. Data's being spread, just not in an interleaved fashion you're expecting.
Thanks, I thought I'd set it up as some weird JBOD of mirrored groups or something; I didn't know it load balances across all vdevs.
As for the intent log: this isn't my main NAS, it's just a thing I'm fucking with for a home lab with VMware.