modeski posted:It's only anecdotal, but I've had my WHSv1 server for three years now, with 2TB WD Caviar Green drives, and had to return/replace three of them over that time. I'll be going with Reds for my next server. So just to make it more difficult, I have two QNAPs with six WD Greens and haven't had a problem with any of them. They don't get used a bunch but have been running for over a year.
dotster posted:So just to make it more difficult, I have two QNAPs with six WD Greens and haven't had a problem with any of them. They don't get used a bunch but have been running for over a year. I have 4 2TB Greens paired with 4 2TB Reds in a Linux-ZFS NAS that have an average powered-on time of anywhere from 18 months to 30 months. They lived as just a bunch of independent drives in a Windows machine up until this past January, when I added the Reds as a vdev on the new machine, moved the contents of the Greens over, then added the Greens as a second vdev in the pool. No complaints so far, though I do need to add another vdev, or swap the older drives out for some new 3TB drives, to increase capacity. I will note that all the 2TB drives came from Amazon; I've only ordered older, lower-capacity drives from Newegg. Amazon does tend to pack their drives better: all my Reds came in an OEM-style box with the plastic drive holders, which was then wrapped in heavy packing paper and put in a larger box. Newegg always shipped my drives in static bags, wrapped in bubble wrap, then packed in foam peanuts.
Moey posted:1. You will have to copy it to a temp location, then rebuild the array. Right now I have a NAS and a HTPC, but no way to play rented optical media (yes, we still do this [cheaply] in Japan). My NAS has a bluray drive in it, so I was thinking about just saving the data, wiping it, and starting over with Windows or Ubuntu and making it an all-in-one so I can do xbmc AND optical media.
tarepanda posted:Right now I have a NAS and a HTPC, but no way to play rented optical media (yes, we still do this [cheaply] in Japan). My NAS has a bluray drive in it, so I was thinking about just saving the data, wiping it, and starting over with Windows or Ubuntu and making it an all-in-one so I can do xbmc AND optical media. It'll be much easier to get something like this: http://www.amazon.com/Blu-Ray-USB-E...W/dp/B001QA2Y9S Sorry: http://www.amazon.co.jp/s/ref=nb_sb...k%3Ablu-ray+usb
Odette posted:Can I have a link to the thread? I'm interested in reading more about that. In retrospect, I'm not sure if I should trust this guy, because of this: quote:Me: I've always had issues with Greens. The best WD drives I have are a bunch of 500GB RE2s from 2007 that have been powered on almost constantly. They're still truckin' without issues. I have a bunch of RE4s now, too, but once in a while they do some thermal recalibration or whatever that grinding out of the blue is. Doesn't sound reassuring. quote:Him: Raid Editions are FREAKING FANTASTIC. I got a few test units for myself while working, and they boot up my system from start to ready in seconds. That's with all the bells and whistles loaded yo. quote:Me: Do the Blue ones actually have variable RPM, or is it just an implication that isn't true? The same was said about the Greens in the beginning, until it was proven they spin at a constant 5900rpm. quote:Him: Blue and Green utilize tech to spin the disks at variable speeds. I'm not sure about data stating that disks for the greens spin at constant 5900, can you direct me to it?
Combat Pretzel posted:RE drives are essentially Black editions with different firmware and ostensibly extended burn-in testing. They don't suddenly attain SSD-like speeds out of the blue. RE hard drives are disks with TLER for hardware RAID. They may use better components to get the longer MTBF that is listed, or they may just take care of it with warranty and charge the price premium as insurance. RE4s and Blacks perform at the same level. It could be that, like a lot of employees at a lot of companies, he believed the marketing.
Has anyone tried XPEnology on an HP MicroServer? I'm running WHS2011 w/Drivepool on my N40L and it's just kinda shitty. Was gonna move to a Synology box eventually when they release a successor to the DS412+, but I figure this would be a fun project to tide me over. Alternatively, I have access to Server 2012 R2 through Dreamspark. Is Storage Spaces a decent solution for pooled, upgradeable storage now? I'd heard it was improved in R2. All I really want is a decent fileserver that does DLNA and iTunes serving. WHS is kinda shitty at media serving, and it looks like either alternative has more solid options.
Combat Pretzel posted:Here's something from some ex-worker at a Western Digital factory that posted on Reddit: Counterpoint: So? A hard drive is a complex mechanical device, but it's not a magical single component that can't be fixed. There's absolutely no reason why, if they put a drive together and it has a failed component on the circuit board or a head that's not working, the manufacturer has to throw every other component in the drive away. These sorts of repairs are well out of the reach of most people, because we don't have massive cleanroom factories with the machinery to properly diagnose the failures, the spare parts to replace them with, or the skill to replace them without trashing something else in the process. HD manufacturers do.
Civil posted:Has anyone tried XPEnology on an HP MicroServer? I'm running WHS2011 w/Drivepool on my N40L and it's just kinda shitty. Was gonna move to a Synology box eventually when they release a successor to the DS412+, but I figure this would be a fun project to tide me over. I've been running XPEnology on my N54L for several months now; I've got it running Sickbeard, SABnzbd, and pyTivo and it's been rock solid. It's a great solution for something that just works that I don't have to think about. Installation was also a breeze.
IOwnCalculus posted:Counterpoint: So? It's still more handling than a drive that worked straight away. I'm not sure if the extra attention is good or bad. I think it's a little silly to worry so much about individual drives, personally.
DNova posted:It's still more handling than a drive that worked straight away. I'm not sure if the extra attention is good or bad. I think it's a little silly to worry so much about individual drives, personally. Typically (and anecdotally) speaking, some refurb/remanufacture processes go through more thorough testing than typical line devices do, at least when it comes to other consumer products like appliances.
Hey guys. This may be more of a generic hardware question, but since it's for a NAS device I'll start here. I'm trying to resurrect my old computer as a FreeNAS box, which is working, but, well, it's loud. The culprit seems to be the CPU fan, so the question is: can anyone point me towards a good low-noise replacement for a Core 2 Duo? The current one is something I grabbed from Best Buy long ago when the original suddenly died on me. In a related vein, how big of a power supply do I really need? It used to be a gaming machine, so the current one is definitely over-powered for just a CPU and a couple of hard disks (and maybe a small $30 video card if I end up going full HTPC).
Krailor posted:I've been running XPEnology on my N54L for several months now; I've got it running Sickbeard, SABnzbd, and pyTivo and it's been rock solid. Sold. This is my weekend project.
dupersaurus posted:Hey guys. This may be more of a generic hardware question, but since it's for a NAS device I'll start here. I'm trying to resurrect my old computer as a FreeNAS box, which is working, but, well, it's loud. The culprit seems to be the CPU fan, so the question is: can anyone point me towards a good low-noise replacement for a Core 2 Duo? The current one is something I grabbed from Best Buy long ago when the original suddenly died on me. You can search for an LGA775-compatible HSF on Newegg; just read the specs and reviews. Most listings will tell you exactly how much air a cooler pushes (CFM, cubic feet per minute) and how loud it is (in dB). Hard disks really don't pull much power; just look for a power supply that has enough SATA connectors for however many disks you're planning on using.
Civil posted:Sold. This is my weekend project. Here's the N40L-specific thread from the main XPEnology forums: http://xpenology.com/forum/viewtopic.php?f=2&t=6 The one thing to remember when going through the initial setup in Synology Assistant is: don't auto-create the pools. Wait until setup is done, then log into the Synology web app and create the pools there. If you auto-create the pools, it includes the USB drive holding the OS, which screws everything up. This seems to be responsible for 99% of the non-hardware issues people have when installing XPEnology.
Geemer posted:Right now, there's a sale on WD Caviar Green drives at that same computer store and he's thinking of getting two 2 TB ones for the NAS. I seem to remember horror stories about Caviar Greens in servers and stuff and would like to know if that's still the case or not (or if it ever really was the case). And if so, if I really need to steer him towards the (much more) expensive Caviar Reds. For more anecdotal evidence: I accidentally purchased sixteen 1TB WD Green drives for a project and we ended up keeping them for something else. Of those 16 drives (in RAID-0 and RAID-10 configurations), five have failed in the last two years. Their five warranty replacements have not failed. So, out of 21 WD Greens in non-parity hardware- and software-RAID configurations, five have failed. Take note: I did have to do some tweaking of the hardware RAID adapters to keep the drives from powering down while a member of a RAID set. The drives would power down on their own, fall out of the set, and break the RAID volume until I realized what was happening and how to fix it. Make of this data what you will.
Agrikk posted:For more anecdotal evidence, I accidentally purchased sixteen 1TB WD green drives for a project and we ended up keeping them for something else. Of those 16 drives (in RAID-0 and RAID-10 configurations), five have failed in the last two years. Their five warranty replacements have not failed. I had 12 WD RE4s in a RAID 60 on one of our servers; no SMART warnings from the RAID controller, everything was fine, and the system ran for almost two years. Then we powered the server off for maintenance, and of the 12, only 4 came back online. Since then I think we have replaced two more that failed a bit more gracefully. Also, I have a friend who ran IT storage for his last company, with a consumption rate of about a petabyte per quarter. Their average yearly failure rate on enterprise-class disks was 12%.
dotster posted:Also, I have a friend who ran IT storage for his last company with a consumption rate of about a petabyte per quarter. Their average yearly failure rate on enterprise class disks was 12%. This sounds absurdly high.
Not necessarily. If they're consuming a petabyte a quarter, even if they're using 4TB SATA disks rather than something like a 2TB SAS disk, that's still 256 new disks every quarter without a shred of redundancy. That's a shitload of new disks, and disks do have a bathtub curve for failure rates. If you're buying >1k disks a year, you're always going to have a lot of disks in that first part of the curve.
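The arithmetic in that post checks out — a quick sanity check in shell (binary units assumed, i.e. 1 PB = 1024 TB):

```shell
# Sanity-check the disk-count estimate above.
# Assumes binary units (1 PB = 1024 TB) and 4 TB per disk.
PB_IN_TB=1024
DISK_TB=4
DISKS_PER_QUARTER=$((PB_IN_TB / DISK_TB))
DISKS_PER_YEAR=$((DISKS_PER_QUARTER * 4))
echo "$DISKS_PER_QUARTER disks/quarter, $DISKS_PER_YEAR disks/year"
```

So the ">1k disks a year" figure follows directly, and that's before counting any redundancy overhead at all.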
I expect the enterprise-class drives to have more of an exponential curve than a bathtub, given the failure rate should be closer to the constant that's theorized. Green drives make sense to be bathtub/bimodal. The failure rates I've heard from Fusion-io are much better than that, but I'm pretty sure their stuff makes EMC blush on price terms. It's basically battery-backed RAM from what I've gathered.
necrobobsledder posted:I expect the enterprise-class drives to have more of an exponential curve than a bathtub, given the failure rate should be closer to the constant that's theorized. Green drives make sense to be bathtub/bimodal. The failure rates I've heard from Fusion-io are much better than that, but I'm pretty sure their stuff makes EMC blush on price terms. It's basically battery-backed RAM from what I've gathered. Fusion-io gear comes with a five-year warranty, which helps offset its eye-watering price. It's a PCI Express SSD with an in-RAM cache and clever drivers. We have ioDrive2s in our VMware machines and the performance is great. I can't imagine ever using it for home stuff though.
jre posted:I can't imagine ever paying for it for home stuff though. FTFY
dotster posted:I had 12 WD RE4s in a RAID 60 on one of our servers; no SMART warnings from the RAID controller, everything was fine, and the system ran for almost two years. Powered the server off for maintenance and of the 12 only 4 came back online. Since then I think we have replaced two more that failed a bit more gracefully. Thanks for making me crap my pants. I've opted for the RE4 in my system after good experiences with the RE2 (almost 40000 power-on hours on each of them), after some WD Greens and some Seagate Barracudas taking a shit in rapid succession.
AlternateAccount posted:This sounds absurdly high. Those were the numbers he gave me. It was a large HPC environment for the most part, with a bunch of engineers accessing large data sets as well. I wasn't too surprised, really; we have pretty regular disk failures on our EMC and NetApp arrays.
Combat Pretzel posted:Thanks for making me crap my pants. I've opted for the RE4 in my system after good experiences with the RE2 (almost 40000 power-on hours on each of them), after some WD Greens and some Seagate Barracudas taking a shit in rapid succession. Well, like everyone says over and over, RAID is not backup. I would just make sure they are not too hot and that you have a spare on hand.
I'm having terrible luck with hard drives this year. In early September, the drive in one of my Dell/"work" laptops crashed. Two weeks ago, the main backup external hard drive for my VMs crashed after 6 months in operation, and a replacement is on the way from Western Digital. Last week my main Time Machine backup drive, a Seagate, started making ominous noises; Wednesday my other Dell laptop ran an unprompted CHKDSK and today, it trashed my main virtual machine, possibly due to a bad sector. I do my backups to a frighteningly complex combination (CrashPlan Central's "online" backup, 2 external drives for nightly/weekly Windows images, a third external drive for Time Machine, and a Synology DiskStation for local CrashPlan backups) which should give me at least some measure of security for cases like this... ...Except the only viable backup of my VM is on CrashPlan Central, because the external drive I use as a local backup crashed 2 weeks ago, and CrashPlan on Windows won't easily back up to a network drive, unlike the versions for Linux and Mac. It will take at least the weekend to download it. At least it happened on a Friday? Is this the thread where I can ask for external HDD recommendations? I know they're not strictly NAS-y but they are storage. I'm looking for the most reliable external HDDs in sizes 1-2TB. If not, where should I ask?
jre posted:Fusion-io gear comes with a five-year warranty, which helps offset its eye-watering price. It's a PCI Express SSD with an in-RAM cache and clever drivers. We have ioDrive2s in our VMware machines and the performance is great. I can't imagine ever using it for home stuff though. Also, I've seen controllers on EMC, Hitachi, and NetApp arrays fail on occasion, beyond just disk failures. Recovering from those is something home RAID setups will probably need a backup, or a replicated external zpool, to do effectively.
ChickenOfTomorrow posted:I'm having terrible luck with hard drives this year. You can get CrashPlan to use the Synology by creating a VHD disk in Windows that's mapped to a folder on the Synology. It seriously takes less than five minutes to set up, and then you won't have to worry about dealing with external drives and can just consolidate all of your backups to the Synology. Here are the instructions I used: http://homeservershow.com/forums/in...k-share-solved/
Krailor posted:You can get CrashPlan to use the Synology by creating a VHD disk in Windows that's mapped to a folder on the Synology. It seriously takes less than five minutes to set up, and then you won't have to worry about dealing with external drives and can just consolidate all of your backups to the Synology. Thanks! I tried following a different set of instructions that involved making a symlink; that didn't work at all. Will try this now.
So, to review: using DrivePool, if you take one of the drives out of the computer it's pooled on and plug it into another standard Windows PC with no DrivePool software, you'd still be able to read the contents of that specific disk (whatever DrivePool had decided to store on that particular disk)? Is that right?
So I restart both home and work servers every week or three just to make sure the drives come back up. Is this normal? I've caught one failed drive that way. I'd rather discover failures proactively than have an unrecoverable number of them hit at random.
AlternateAccount posted:So I restart both home and work servers every week or three just to make sure the drives come back up. Is this normal? I've caught one failed drive that way. I'd rather discover failures proactively than have an unrecoverable number of them hit at random. I don't think it's normal (in the sense that most people do it), but I don't see anything wrong with it as long as you have backups, which should be a given.
AlternateAccount posted:So I restart both home and work servers every week or three just to make sure the drives come back up. Is this normal? I've caught one failed drive that way. I'd rather discover failures proactively than have an unrecoverable number of them hit at random. Are you referring to known failed drives or something that would only fail on reboot? All of the major server vendors have Nagios plugins that can check for failed drives (and other stuff like high temps, failed fans, etc.); that's probably the best way to check for those kinds of things. Also, if we're talking about ZFS volumes, I found a Python script a while back that parses the results of zpool status; you can throw it into cron and it will send an email if anything is amiss. If anyone is interested in it, I'll pull it off my server when I get home. grizzlepants fucked around with this message at 20:47 on Sep 30, 2013
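For plain SATA disks without a vendor management agent, smartmontools can do the same proactive check from cron — a minimal sketch (device names are placeholders, and it assumes smartctl is installed):

```shell
#!/bin/sh
# Ask each disk for its SMART overall-health verdict instead of
# reboot-testing it. Device names below are stand-ins.
check_disks() {
    for d in "$@"; do
        if command -v smartctl >/dev/null 2>&1; then
            # `smartctl -H` prints PASSED (ATA) or OK (SCSI) on a healthy disk
            smartctl -H "$d" 2>/dev/null | grep -Eq 'PASSED|OK' \
                || echo "WARNING: $d failed its SMART health check"
        else
            echo "smartctl not found; install smartmontools"
        fi
    done
}

check_disks /dev/sda /dev/sdb
```

smartd, from the same package, can run checks like this on a schedule and email on failure, which avoids hand-rolling the cron job entirely.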
grizzlepants posted:If anyone is interested in it, I'll pull it off my server when I get home. I would like this very much, please.
grizzlepants posted:Also, if we're talking about ZFS volumes, I found a Python script a while back that parses the results of zpool status; you can throw it into cron and it will send an email if anything is amiss. If anyone is interested in it, I'll pull it off my server when I get home. You can also just pass the -x flag to zpool status: man zpool posted:-x Only display status for pools that are exhibiting errors or are otherwise unavailable. This shell script will send an email if there is a problem: quote:#!/usr/bin/sh
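The body of that script didn't survive the quote, but it's easy to sketch what it likely looked like — a hypothetical reconstruction, assuming a working mail(1) setup and root as the recipient:

```shell
#!/bin/sh
# Hypothetical reconstruction of the elided script: mail root whenever
# `zpool status -x` reports anything other than its healthy-case string.
ALERT_TO="root"   # assumed recipient

zpool_healthy() {
    # `zpool status -x` prints exactly this line when every pool is fine
    [ "$1" = "all pools are healthy" ]
}

STATUS="$(zpool status -x 2>/dev/null || echo 'zpool command unavailable')"
if ! zpool_healthy "$STATUS"; then
    # ignore mail errors so a broken MTA doesn't make cron double-report
    printf '%s\n' "$STATUS" | mail -s "zpool problem on $(hostname)" "$ALERT_TO" 2>/dev/null || true
fi
```

Drop it into cron (hourly, say); it stays silent as long as the pools are healthy.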
Novo posted:This shell script will send an email if there is a problem: That's much cleaner than the one I was using before, thanks.
Anyone have any reason to believe this RAM wouldn't work in an N36L? I just bought the one necrobobsledder mentioned in this thread and it doesn't want to POST with a pair of those. Only thing I can see is maybe the voltage requirement is too high? edit: These are RDIMMs, aren't they?
DJ Commie posted:edit: These are RDIMMs, aren't they? Sure is, and I would bet that the N36L wants UDIMMs (but honestly I'm not sure).
IOwnCalculus posted:Sure is, and I would bet that the N36L wants UDIMMs (but honestly I'm not sure). Damnit! Oh well, might as well order another 3TB Red. It looks like some 16GB kits work, but I can't think of a single reason I'd need that much RAM, since it doesn't really have enough horsepower for real VM work or dedupe.
Yeah, unless you're buying Opterons or Xeons with server/server-ish (read: lol, "enthusiast") socket types like LGA2011 and LGA1156, you're not going to be using RDIMMs. UDIMMs aren't quite electrically compatible with RDIMMs, and the two are mutually exclusive. You can put UDIMMs into regular DIMM slots and be fine, though. Most AMD CPUs + chipsets until recently accept ECC UDIMMs and will do the parity calculation.
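If you're not sure which type is already in a machine, dmidecode will tell you — a quick check (needs root for real output; the fallback messages only appear on unprivileged runs):

```shell
#!/bin/sh
# Report whether installed DIMMs are Registered (RDIMM) or
# Unbuffered (UDIMM), read from the SMBIOS tables.
dimm_type_detail() {
    if command -v dmidecode >/dev/null 2>&1; then
        dmidecode -t memory 2>/dev/null | grep -i 'Type Detail' \
            || echo "no SMBIOS data readable (try running as root)"
    else
        echo "dmidecode not found"
    fi
}

dimm_type_detail
```

On an RDIMM system the matching lines read something like "Type Detail: Synchronous Registered (Buffered)", while UDIMMs show "Unbuffered (Unregistered)".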