IOwnCalculus posted:Sure is, and I would bet that the N36L wants UDIMMs (but honestly I'm not sure). I have the N40L, and I did not put in parity RAM, and it runs FreeNAS stable for months at a time. The 36 may be different, but double-check that you can't get away with normal RAM.
I'm building a combined NAS/HTPC for my living room and would appreciate it if someone could look over the parts I've picked and tell me if I'm making a huge mistake somewhere. It will be connected to a receiver via HDMI.
Fractal Design Array R2 Mini-ITX NAS
MSI B85I, Socket-1150
Intel Core i3-4130
Crucial DDR3 Ballistix Sport 1600MHz 8GB
Samsung SSD 840 EVO 120GB OEM
WD Red 4TB NAS hard drive x 3
Only thing I'd mention here is that mini-ITX boards seem to run into heatsink/capacitor clearance issues around the CPU socket, combined with a potential lack of vertical space to let you install an appropriate heatsink + fan. I got a Gigabyte mini-ITX LGA1150 board, and with the particular case I have there appears to be only one heatsink on the market that will fit, after checking every rackmount heatsink + fan type I could find.
grizzlepants posted:Are you referring to known failed drives or something that would only fail on reboot? All of the major server vendors have Nagios plugins that can check for failed drives (and other stuff like high temps, failed fans, etc), that's probably the best way to check for those kinds of things. Referring more to things that fail on reboot, and that seems to be more the case with the stuff here. It's old, 4+ years, and the old 250GB drives don't have a lot of gas left. I have good backups, but normally if a drive drops out, I grab a spare out of the dozen I bought on the cheap and the RAID usually rebuilds in about an hour. I am deeply ashamed that I had never heard of this Nagios before, so I am definitely going to look into that.
necrobobsledder posted:Only thing I'd mention here is that mini-ITX boards seem to run into heatsink/capacitor clearance issues around the CPU socket, combined with a potential lack of vertical space to let you install an appropriate heatsink + fan. I got a Gigabyte mini-ITX LGA1150 board, and with the particular case I have there appears to be only one heatsink on the market that will fit, after checking every rackmount heatsink + fan type I could find. I've used this one in my ITX builds. Temps could definitely be better, but at least it's quiet. grizzlepants fucked around with this message at 15:12 on Oct 2, 2013
TheRat posted:I'm building a combined NAS/HTPC for my living room and would appreciate it if someone could look over the parts I've picked and tell me if I'm making a huge mistake somewhere. It will be connected to a receiver via HDMI. Damn, I'm building a similar thing (HTPC and NAS in one) with almost the same parts, except I went the AMD route (FX-8350..?). Good luck!
grizzlepants posted:I've used this one in my ITX builds. Temps could definitely be better, but at least it's quiet. My tests make me think the HSF still isn't seated right, because it runs at 49C with the case lid off, and it takes hours to get down to near room temperature at idle while the heatsink stays the same temp the whole time. I seriously can't tell if the caps (they're insulated) are touching the HSF fins, but in the meantime I'm living in 2003, looking for some copper shims.
For what it's worth, I just received 3 4TB Reds from Amazon. They came in one box, pretty well packed with that big linked-together bubble packaging, around 3 individual boxes, each containing a drive held in those cardboard holder things that suspend it within the box. Haven't tested the drives yet, so I can't tell you if they work.
If anyone is thinking about buying one, I'm selling a 4TB WD Red Drive, brand new, over in SA-Mart: http://forums.somethingawful.com/sh...hreadid=3572903
dotster posted:I would just make sure they are not too hot and that you have a spare on hand. Over the years I've become obsessive about actively cooling my drives, fearing premature thermal death. In every case I've ever filled I always make sure there's a 120mm fan blowing directly onto my set of drives and my drives are set with plenty of space between them. Not a strong fan - I typically run it at 7V - but strong enough to create a breeze over the disk. I get uneasy if any of my drives creeps over 40C.
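If you want to know what the drives themselves think rather than just trusting the airflow, smartmontools will read the temperature straight off the drive. A minimal sketch, not from anyone's post above - it assumes smartctl is installed, and /dev/ada0 is only an example device name (adaX on FreeBSD/FreeNAS, sdX on Linux):
code:
# Drive temperature is usually SMART attribute 194 (Temperature_Celsius);
# some drives report 190 (Airflow_Temperature_Cel) instead.
smartctl -A /dev/ada0 | grep -i temperature
Run it against each member of the array; anything creeping toward that 40C mark is your cue to check the fan.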
Agrikk posted:I get uneasy if any of my drives creeps over 40C. As a note, I bought 2 7200rpm Toshiba drives and 2 WD Red drives last week. After grinding for several days straight, the Reds are at 93F (about 34C) while the Toshiba drives are hovering around 100F (about 38C). The Reds definitely run cool, which is nice to see.
Agrikk posted:Over the years I've become obsessive about actively cooling my drives, fearing premature thermal death. In every case I've ever filled I always make sure there's a 120mm fan blowing directly onto my set of drives and my drives are set with plenty of space between them. Not a strong fan - I typically run it at 7V - but strong enough to create a breeze over the disk. Google's huge dataset showed that cooler drives failed at significantly higher rates
DNova posted:Google's huge dataset showed that cooler drives failed at significantly higher rates It does show higher failure rates in lower-temperature drives, especially for young disks (3-6 months old). The report showed "...a mostly flat failure rate at mid-range temperatures and a modest increase at the low end of the temperature distribution." It went on, though, to say, "What stands out are the 3 and 4 year old drives, where the trend for higher failures with higher temperature is much more constant and also more pronounced." They go on to say, "Overall our experiments can confirm previously reported temperature effects only for the high end of our temperature range and especially for older drives." And the important final notes on temperature say, "We can conclude that at moderate temperature ranges it is likely that there are other effects which affect failure rates much more strongly than temperatures do."

I think what is being seen here in the very young disks are failures that are part of the infant-mortality pool and not necessarily related solely to temperature. There is a definite spike in failures at the other end of the temperature range for drives 3+ years old, where temperature probably plays a more significant part (especially over 40C), but it is hard to tell, since general AFR spikes again and they do not show a drive utilization vs. temperature graph that would let us draw better conclusions. It does show failure numbers in line with the anecdotal info I have from friends who run large enterprise storage deployments: for high-utilization disks the failure rate is ~15% in the first year. For fellow insomniacs: Failure Trends in a Large Disk Drive Population.
I think the dead-drive curve with enterprise drives is more in line with the theoretical exponential decay curve of failure rates that most of us learn about as engineers in college statistics (or maybe high school if you went to a non-crappy one). The bar graphs from that paper make the failure rate difference over time look really drastic (granted, 1-10% is an order of magnitude), but plotted on a timeline it's not going to look like a death sentence one way or another. One funny thing I heard at one point about hard drive mechanical failures is that hard drives are designed to spin (they self-lubricate while spinning) to prolong their life, and that spinning them up and down or just letting them sit in storage kills them in no time at all. I don't think I've read a study that measured the spin-up count that has greatly affected the lifetime of a lot of folks using Green drives in their NAS systems, but anecdotally it's one of the fastest ways to destroy a hard drive, short of physically abusing it directly. They looked at the specific kinds of errors reported by SMART more so than going through all the possible variables and mining which ones are most strongly correlated with a failure (they hand-picked the failure counters).
necrobobsledder posted:spin-up count And that's the catch-22 isn't it? All of this green technology trying to save power by powering down the drive, only to destroy it faster and create more e-waste. I am perfectly happy to restart a server. But I get really, really nervous about powering one off for any length of time. Who knows what could happen in the guts of a server when it cools down from running at a constant temp for months on end. Nothing like a good ronkronkronk when you power up a box to let you know that a failed bearing has just extended your work night.
The spin-up count won't go up exponentially if you don't use green drives with hardware RAID, or if you just enable TLER (or whatever equivalent setting you have) on them before putting them into the RAID. Because the drives used in these datacenters aren't such drives, it probably doesn't matter, which is why nobody looked at those figures in those drive reliability studies (in these IT papers they're paid mostly to research corporate interests rather than consumer ones - you have Gartner for industry reports otherwise, and even that is mostly to tell business folks "how much money can I make / save off of mom & pop technology noobs?"). Also, TLER only kicks in when there's an error on one or more of the drives in the RAID. Green drives are just fine for users in single-drive or JBOD configurations, and with more people using cloud storage instead of home NASes, we should probably expect a smaller market for consumer hard drives in general. All I meant is that I'd still be curious to see how strong the correlation is on that metric, given all the other errors they combed through.
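As an aside, on drives that support it you can flip the TLER-equivalent setting (SCT Error Recovery Control) from smartmontools instead of a vendor utility. This is just a sketch - whether your particular drive accepts the command is an assumption, /dev/ada0 is an example device name, and on a lot of consumer drives the setting doesn't survive a power cycle, so it has to be reapplied at boot:
code:
# Cap error recovery at 7.0 seconds for reads and writes (values are tenths of a second)
smartctl -l scterc,70,70 /dev/ada0
# Show the current setting
smartctl -l scterc /dev/ada0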
So I'm finally finishing my NAS build. I've got everything more or less connected except the HDs themselves. Even though my motherboard (http://www.asus.com/Motherboards/M5A99X_EVO/) has quite a few SATA ports, should I use those chained cables? I'm gonna be plugging them into a 3x4 5.25" to 3.5" bays thingy which has the input plugs right on the back (in which case those chained cables would be at least physically convenient). Advice?
What do you mean by chained cables? Are you using SAS expanders or something (I really, really doubt it but asking because I'm just grasping for something sensible)? Do you mean provided cables?
Gozinbulx posted:So I'm finally finishing my NAS build. I've got everything more or less connected except the HDs themselves. Even though my motherboard (http://www.asus.com/Motherboards/M5A99X_EVO/) has quite a few SATA ports, should I use those chained cables? I'm gonna be plugging them into a 3x4 5.25" to 3.5" bays thingy which has the input plugs right on the back (in which case those chained cables would be at least physically convenient). Advice? I think you're thinking of a 4x SATA to mini-SAS cable, which is used when you're plugging into a RAID controller. If you're plugging directly into your motherboard, you won't need something like that. SATA cables are flat and really compact, just zip-tie those suckers. grizzlepants fucked around with this message at 21:21 on Oct 9, 2013
By chained cables do you mean http://www.pccasegear.com/index.php...oducts_id=24664 ? They'd take up much the same space as 4 cables zip tied together. ^^^
Anyone have a recommendation on a plain SATA (preferably SATA III) controller? I need to put a couple more drives into my storage box and I am out of SATA ports. No need for RAID, just ports. Plenty of choices on NewEgg, etc. but would prefer some first hand recommendations if possible.
What's the OS you expect to use with that machine? The problem with recommending the cheapest SATA controllers is that they tend to have poor driver support outside of Windows. Stuff like the M1015 is supported everywhere and is just plain solid, which is why they're oftentimes recommended over some Rosewill or no-name SATA-only chipset-based controller. As the number of drives / resources increases, the likelihood that you're really looking for more business-class hardware goes up as well.
The higher-end controllers can actually be had used on eBay for about the cost of a new piece-of-shit one, too. If you don't need support for drives over 2TB, you can find LSI1064-based controllers for about $20 each, and that gets you a proper HBA with four SATA connectors so you don't even need special cables. If you're willing to wait you can sometimes find anything based on the LSI2008 family for cheap. I stumbled across someone selling a bunch of M1015s without brackets and snagged one for $40 not long ago, and just swapped the bracket from one of my old 1064 cards. On those you will need to buy an SFF-8087 to 4x SATA cable, though.
Also keep in mind that there are two kinds of SFF-8087 cables - reverse and forward, where forward is what people typically want. Reverse is for connecting SATA ports on your motherboard to a SAS backplane typically, not for connecting drives to the SAS controller. Hell, I actually have some spare forward breakout cables and would be willing to sell them for cheap.
This would be with Windows Server 2012. The M1015 is tempting (found the un-bracketed version for $35 on eBay), but I wasn't really looking to get that crazy at this point. I do want to attach 3 or 4 TB drives to it, so I need support for that.
I know we've covered this before, but I forget: How do you use those bracketless cards? Just plug them in and make sure you route your cables in a manner to not put much stress on the card?
That's what I've done when I haven't been able to bolt a card down for one reason or another. The PCIe slots are actually pretty damn good at keeping the cards in place if the box isn't getting moved around a lot.
I had a bracket from some old PCI wireless card that lined up on the bottom hole of my M1015 so I used that. If you're really paranoid you should be able to find a bracket for less than $10 shipped. Though if you're finding a card for $35, make sure it's not an Advanced Features key (like this) and is actually the full card. The price is still around $100 on ebay so I'd be surprised if you found one that worked for $45.
Wiggly posted:This would be with Windows Server 2012. The M1015 is tempting (found the un-bracketed version for $35 on eBay), but I wasn't really looking to get that crazy at this point. I do want to attach 3 or 4 TB drives to it, so I need support for that. Are you sure it's the actual card and not a feature key? IOwnCalculus posted:That's what I've done when I haven't been able to bolt a card down for one reason or another. The PCIe slots are actually pretty damn good at keeping the cards in place if the box isn't getting moved around a lot. Cool.
Thermopyle posted:Are you sure it's the actual card and not a feature key? My bad, it was actually for a BR10i that came up under an M1015 search.
BR10i isn't bad, but it's the older LSI1068 that (like the 1064 I have) doesn't support drives over 2TB.
Just saw that this case (the Silverstone DS380, introduced this past June) is coming out, and it may be enough to delay people from getting the NSC-800 like I did. I don't think I regret the NSC-800 - far from it - but it kind of made me re-think whether the difficult build was worth the loss of USB 3.0 front ports and being forced to use a 1U or FlexATX PSU. Nothing will really beat the NSC-800 on space without severely restricting the aftermarket parts that could work, but this is closer to a holy grail build for those of us that hate compromising on the capability / power / size trade-offs of computers and are willing to pay a bit more for it. https://www.youtube.com/watch?v=-eTj2bUBqTU
necrobobsledder posted:Just saw that this case (the Silverstone DS380, introduced this past June) is coming out, and it may be enough to delay people from getting the NSC-800 like I did. I don't think I regret the NSC-800 - far from it - but it kind of made me re-think whether the difficult build was worth the loss of USB 3.0 front ports and being forced to use a 1U or FlexATX PSU. Nothing will really beat the NSC-800 on space without severely restricting the aftermarket parts that could work, but this is closer to a holy grail build for those of us that hate compromising on the capability / power / size trade-offs of computers and are willing to pay a bit more for it. Do you have any details on what you did with the NSC-800? That looks pretty close to what I want (although the Silverstone is tempting too). I figure if I ever feel the urge to go full-autism on my NAS setup, I'll just do a rackmount thing in my basement (this requires me to get a place with a basement first).
I was wondering if y'all had any experience with the QNAP TS-412 NAS; it's relatively inexpensive and the reviews say good things about it. Also, what 3TB HDDs would y'all recommend for it? I've been looking at the WD Reds for now. Thanks wisemanofhyrule fucked around with this message at 05:09 on Oct 15, 2013
Avenging Dentist posted:Do you have any details on what you did with the NSC-800? That looks pretty close to what I want (although the Silverstone is tempting too). I figure if I ever feel the urge to go full-autism on my NAS setup, I'll just do a rackmount thing in my basement (this requires me to get a place with a basement first).
Running FreeNAS 9.1.1 with the CPS, SABnzbd, SickBeard nerd-approved stack after I re-wrote half their PBIs and spent way too long screwing with mount points. Aside from the software bitching, the NSC-800 is insane to get everything mounted in and the cables aligned. I've had far easier times tearing apart and rebuilding laptops and 1U servers.
I lost a drive on my FreeNAS media server's RAIDZ2 last week and it's rebuilding. I spent a while last night trying to find more than a confirmation in the web UI that resilvering is occurring, and gave up, but then I saw this in the nightly email:code:
Not sure about the web interface for FreeNAS, but in shell you'd see that by running 'zpool status'. In NAS4Free it would be at Disks -> ZFS -> Pools -> Information.
FreeNAS has a shell available from the web admin interface. There's some meters and monitoring you can set up to watch the zpool status and to e-mail you if it changes from ONLINE to anything else.
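If you'd rather roll that yourself from the shell instead of using the built-in alerting, a small cron job is one way to do it. A sketch only - it assumes the box can already send mail with mail(1), and the recipient address is made up:
code:
#!/bin/sh
# 'zpool status -x' prints "all pools are healthy" when everything is ONLINE,
# and the full status of any degraded/faulted pool otherwise.
STATUS=$(zpool status -x)
if [ "$STATUS" != "all pools are healthy" ]; then
    echo "$STATUS" | mail -s "zpool problem on $(hostname)" you@example.com
fi
Drop it into cron every few minutes and you'll get an e-mail as soon as a pool leaves the healthy state.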
Shell works, thanks