Don Lapre posted:There appears to be a bug in that 5.1 Xpenology that breaks drive mounts after 12-24 hours. You can downgrade though. Ugh. How is that even possible?
|
eightysixed posted:Ugh Probably something running on a schedule and it's corrupting something. It doesn't seem to hurt your data. I successfully downgraded to 5.0. The developers are working on it.
|
I have a huge boner for the QNAP TS-451 ever since learning I could potentially roll my NAS and HTPC into a single unit, but I'm struggling to find any real reviews of the hardware decoding/media aspects, just read/write performance metrics. Anyone have first-hand experience with one?
|
ElehemEare posted:I have a huge boner for the Q-NAP TS-451 ever since learning I could potentially roll my NAS and HTPC into a single unit, but I'm struggling to find any real reviews of the hardware decoding/media aspects, just read/write performance metrics. Anyone have first hand experience with one? Personally I don't want to use my NAS for anything but server stuff. The nice thing about a NAS is it's always available because it's generally not fucked with.
|
I built a FreeNAS box from an HP Microserver and 5x 2TB drives about three and a half years ago, and I've just run into my first issue with it. I got an email saying that a scrub was starting and then, a few hours later, an email saying that the volume was degraded. All drives appeared to be online, and I went through smartctl stats and found no issues. I then ran a long S.M.A.R.T. test on each drive, all of which came back clean. The only thing I can find in the logs around the time that I got the warning email is that one of the drives 'detached'. As far as I can tell, everything appears to be OK now. I should probably put in an order for another 2TB drive, shouldn't I?
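If you want to script this kind of health check instead of waiting for the scrub email, a minimal sketch (the pool name `tank` and the sample output below are illustrative; in practice you'd feed it the real output of `zpool status`):

```python
import subprocess

def pool_state(status_text: str) -> str:
    """Pull the 'state:' field out of `zpool status` output."""
    for line in status_text.splitlines():
        line = line.strip()
        if line.startswith("state:"):
            return line.split(":", 1)[1].strip()
    return "UNKNOWN"

# Illustrative sample of typical `zpool status` output for a degraded pool:
sample = """\
  pool: tank
 state: DEGRADED
status: One or more devices has been removed by the administrator.
errors: No known data errors
"""

print(pool_state(sample))  # DEGRADED

# Real usage would look something like:
# out = subprocess.run(["zpool", "status", "tank"],
#                      capture_output=True, text=True).stdout
# if pool_state(out) != "ONLINE": ...send yourself an email...
```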
|
That sounds like a reset on the SATA bus or something similar, which can interrupt transactions but not necessarily cause data loss. SATA cables can be a little unreliable on occasion, and so can HBAs themselves. Lots and lots of things can go wrong, and while good engineering covers most of it, we have few ways to back up and fix things once bad data has been written to disk and committed, or a drive just falls away and can't come back, which is really all we're trying to prevent. Recoverable errors happen all the time, so perhaps you can count yourself in the lucky pile there.
|
I'm thinking of setting up a RAID 5 array of three HGST 4TB drives in my desktop. I already have two 4TB drives, but want to add a third and go to RAID 5 so I have redundancy and can rebuild if a drive fails. I have a Z87 motherboard. Will the onboard RAID be okay, or should I get an add-in controller card? If so, which controller card?
Dick Fagballzson fucked around with this message at 21:17 on Jan 20, 2015 |
|
Dick Fagballzson posted:I'm thinking of setting up a RAID 5 array of three HGST 4TB drives in my desktop. I already have two 4TB drives, but want to add a third and go to RAID 5 so I have redundancy and can rebuild if a drive fails. I have a Z87 motherboard. Will the onboard RAID be okay, or should I get an add-in controller card? If so, which controller card? RAID 5 for drives that large is in my opinion not a great idea if you want a good chance of a successful rebuild in case of a single drive failure.
|
So what would you suggest? RAID6?
|
I'm a big fan of ZFS RAIDZ.
|
4x drives in a RAID6 or RAID10? That'll allow you to lose two drives before being in danger of data loss. The problem with RAID5 is that there's a chance that a second drive could fail while you're replacing and resilvering the first failed drive.
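To put rough numbers on that second-failure risk during a rebuild, here's a sketch assuming the usual consumer-drive spec of one unrecoverable read error (URE) per 10^14 bits read (real drives often do better, so treat this as a worst-case illustration):

```python
# A RAID5 rebuild must read every bit of every surviving drive without
# a single unrecoverable read error, or the rebuild fails at that block.
URE_RATE = 1e-14  # errors per bit read, typical consumer-drive spec

def rebuild_ure_risk(surviving_drives: int, drive_tb: float) -> float:
    """Probability of hitting at least one URE during a full rebuild."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8
    return 1 - (1 - URE_RATE) ** bits_read

# 3-drive RAID5 of 4TB disks: one drive fails, the two survivors
# must be read in full (8TB) to rebuild the replacement.
print(f"{rebuild_ure_risk(2, 4.0):.0%}")  # roughly 47%
```

That near-coin-flip number is why RAID6 (or RAIDZ2) gets recommended at these capacities: a second parity drive means a single URE during the rebuild is still recoverable.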
|
So RAID 6 or 10 with 4 drives then. What's a good bang for the buck controller card?
|
Dell PERC H200 + breakout cable. It only does RAID 10 though, not 6.
|
If you need RAID6 support, the PERC H700 and H800 both support it.
|
DNova posted:I'm a big fan of ZFS RAIDZ. I did that until I was scared by everyone saying how bad it is if a drive fails. I bought 2 more drives and now am doing RAIDZ2.
|
mayodreams posted:I did that until I was scared by everyone saying how bad it is if a drive fails. I bought 2 more drives and now am doing RAIDZ2. When I say "RAIDZ" I mean any of the redundancy "levels," and ZFS in itself is just so fantastic. RAIDZ is much more tolerant to an unrecoverable read error on resilvering than traditional hardware RAID, which is also a huge advantage, especially with larger drives. Basically I am in love with ZFS.
|
Does RAID1+0 offer the same protection as RAID6? I believe there's a situation in RAID1+0 where you can have two specific drives fail and lose data, whereas RAID6 won't give you data loss if any two drives fail.
|
Let's imagine the trivial situation of 4 disks in a RAID6 config. You can lose 2 disks, any 2 disks, and not lose any data. Now in RAID 1+0 (or 0+1) you have 2 mirrored pairs of disks. If you lose both disks in the same pair, your data is gone. So the wrong 2 drives failing can take out all your data.
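The same point, brute-forced (assuming a 4-disk RAID10 laid out as two mirrored pairs, drives 0+1 and 2+3):

```python
from itertools import combinations

# Two mirror pairs; losing both halves of either pair is fatal.
mirrors = [{0, 1}, {2, 3}]

pairs = list(combinations(range(4), 2))          # every 2-drive failure
fatal = [set(c) for c in pairs if set(c) in mirrors]

print(len(fatal), "of", len(pairs))  # 2 of 6
```

So with 4 drives, RAID10 loses data in 2 of the 6 possible two-drive failures (a 1-in-3 chance the second failure lands on the wrong drive), while RAID6 on the same disks survives all 6 combinations.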
|
DNova posted:When I say "RAIDZ" I mean any of the redundancy "levels," and ZFS in itself is just so fantastic. RAIDZ is much more tolerant to an unrecoverable read error on resilvering than traditional hardware RAID, which is also a huge advantage, especially with larger drives. Seriously. My ZFS array is a bit of an abomination in that I have it spread across three raidz1 vdevs, and I have encountered the ultimate "oh shit" of losing a drive while resilvering another drive in the same vdev. I only lost 38 photos, and ZFS told me exactly which ones.
|
IOwnCalculus posted:I only lost 38 photos, and ZFS told me exactly which ones. Being confident in the integrity of your primary data set is paramount to me. Having ZFS tell you "hey, I can't fix exactly this subset of your data!" lets you go and restore that data from backups and continue like nothing happened. And it's free and easy to use!
|
ZFS is so forgiving with consumer hardware. I've had a few times where a disk will freeze up or something and all of a sudden zfs lists 17k unrecoverable errors. If you can manage to reboot before your anus clenches so tightly you pass out, and you're lucky and the drive wakes up again, ZFS will happily go on chugging, rather than just shitting all over your data. Unrelated: my anus is incredibly clenched while I wait for my 3 WD Red 3TB drives to arrive so I can just get rid of all the Samsung/Seagate 1.5TB drives in my array.
|
Nothing like running a scrub, seeing CKSUM errors adding up, and then at the end it goes "fixed a few MB of errors, no biggie" to give me faith in my storage choice. Now I just need to get off my ass and RMA a couple of my 4TB Reds. When they get warm (north of 104°F, so typically during a scrub or heavy IO) they like to freak out and reset their SATA connection. They're also in the upper drive bays, in a case with apparently inadequate airflow, so it's a problem. Right now I have a box fan shoving air into the side of the case, and all the drives stay in the mid-70s °F. I need to replace the case with something that holds 12 3.5" disks and gives good airflow past the disk cages; then hopefully I won't have to worry about it anymore.
|
Backblaze's latest hard drive data is out. Oof, those Seagate 3TB drives. At least their 4TB drives redeem them. HGST just kicking all kinds of ass. Star War Sex Parrot fucked around with this message at 18:16 on Jan 21, 2015 |
|
The hilarious part of the HGST business is that it grew out of IBM's old Deskstar, AKA Deathstar, division.
|
I'm pretty sure that the Deskstar line was, on average, pretty good. If I remember correctly it was one particular model that was prone to extreme failure.
|
DNova posted:I'm pretty sure that the Deskstar line was, on average, pretty good. If I remember correctly it was one particular model that was prone to extreme failure. 60GXP / 75GXP, and I'd still wager half of it was the fact that nobody had really been accustomed to the idea of having to cool hard drives before those came out.
|
Well, IBM seems to have known that the failure rates for those drives were about an order of magnitude higher than the competition, whatever the cause. I had one of those hard drives myself and it never died, though. In fact, I remember it out-living most of my other drives (who remembers Quantum Fireballs? Yeah...). I didn't do any special cooling or anything then, I just stuck it in the internal drive bay to replace the 1.6 GB Seagate. Yeah, back then you could wait like 4 years and get an order of magnitude improvement in hard drive sizes. I don't see my 24 TB hard drives now and it's been 6 years. I wish we could get those kinds of drastic improvements today, but *sigh* capitalism and dumbass physics, what can we do?
|
IOwnCalculus posted:60GXP / 75GXP, and I'd still wager half of it was the fact that nobody had really been accustomed to the idea of having to cool hard drives before those came out. ...they didn't help any of my four 75GXPs that died.
|
Oh god, the Deathstar... I can still hear the clicking in my nightmares.
|
Don't spin up a Deathstar and then hit it with a hammer, unless you want glass in your eyes.
|
I'm getting ready to build a hybrid VMware ESXi (never used it before, but I need to increase my server OS knowledge and this seems like a good platform for it) and NAS box. Here's what I have so far:

Intel Quad Port NIC - 60
ASUS SABERTOOTH 990FX R2.0 - 179.99
AMD FX-8350 Black Edition Vishera 8-Core 4.0GHz (4.2GHz Turbo) Socket AM3+ 125W Desktop Processor - 179.99
Memory 4 x 8GB - 244
M1015 Controller - 75
NZXT Source 220 - 50
HGST Deskstar 4TB x2 - 330
2 x 256GB SSD - 240

The intention is to have a max of 5 VMs running (1 lightweight pfSense, 1 FreeNAS, and 3 various server OSes) off the mirrored SSDs and the mirrored hard drives. Then I would pass the M1015 directly to FreeNAS and plug in 6 drives a few months later as my current NAS runs out of room. The pfSense VM gets 2 NICs, the FreeNAS VM gets 1 NIC, and the other 3 guests share a NIC port. I'm not married to the idea of AMD, but I do want to keep this under 1500. Is this possible? Does it make sense?
|
UndyingShadow posted:I'm getting ready to build a hybrid VMWARE esxi (never used it before, but I need to increase my server OS knowledge and this seems like a good platform for it) and NAS box. Here's what I have so far: I run an almost identical setup to what you are looking to build. If you would like to do ESXi, I would strongly recommend you get a board with a supported chipset (Opteron/Xeon) from VMware's HCL. As for the networking, you can get away with using only 2 NICs for your setup. I have two vSwitches: WAN and LAN. Each vSwitch is connected to one of my onboard Intel NICs and then to the cable modem and a gigabit switch respectively. pfSense gets two vmxnet3 NICs (or E1000, which is easier but uses WAY more CPU cycles under load) that bridge the WAN and LAN vSwitches. I then have other gigabit switches and an 802.11ac router in bridge mode for wireless. I also am running FreeNAS and passing through an LSI 9211-8i for an HBA. Having everything on vmxnet3 and local to VMware is awesome because it's a fast virtual link, and I've moved data from VM to VM from the FreeNAS store at 129MB/sec.
|
mayodreams posted:I run an almost identical setup to what you are looking to build. If you would like to do esxi, I would strongly recommend you get a board with a supported chipset (opteron/xeon) from VMware's HCL. Noted on the networking. Trying to find a server motherboard seems like it'd be super expensive. Is there something I'm missing?
|
UndyingShadow posted:Noted on the networking. Trying to find a server motherboard seems like it'd be super expensive. Is there something I'm missing? I have the Sandy Bridge version of this board and a Xeon E3 1230 processor. http://www.newegg.com/Product/Produ...N82E16813132014 It has a C226 chipset that supports a Xeon or certain i3 processors. The added bonus is there is a driver pack for ESXi 5.5 that enables support for the onboard i210 Intel NICs. mayodreams fucked around with this message at 02:31 on Jan 22, 2015 |
|
I'd say that should work. I have a Sandy Bridge E3-1230 and I've run ESXi on it before to run a Hadoop cluster using VMware Serengeti. The dumbest problem I had with my ESXi install was that after the VMware kernel loads, the USB driver won't load right and every USB port is disabled except for the one located directly on the motherboard, rather than on a header or the backplate. So I can't physically access my ESXi host unless it's with a PS/2 keyboard and mouse.
|
UndyingShadow posted:I'm getting ready to build a hybrid VMWARE esxi (never used it before, but I need to increase my server OS knowledge and this seems like a good platform for it) and NAS box. Here's what I have so far: How about a Lenovo TS140 with an E3-1225v3 for $330? Add the SAS controller, quad NIC, 32GB ECC RAM, and drives and you'll come out with something more powerful (and ESXi HCL-friendly) with ECC RAM and Intel AMT for about the same money.
|
SamDabbers posted:How about a Lenovo TS140 with an E3-1225v3 for $330? Add the SAS controller, quad NIC, 32GB ECC RAM, and drives and you'll come out with something more powerful (and ESXi HCL-friendly) with ECC RAM and Intel AMT for about the same money. That would be fine, except I plan to shove 8 drives into my whitebox, and that Lenovo only holds 4. Still, I'm starting to realize the price difference between unsupported consumer hardware and budget server hardware is a whole lot less than I thought. I'm having second thoughts about the AMD FX-8350. Would a quad-core Haswell Xeon be a better fit (is it worth giving up 8 cores for 4 cores + hyperthreading)?
|
UndyingShadow posted:That would be fine, except I plan to shove 8 drives into my whitebox, and that IBM only holds 4 What do you need 8 cores for?
|
Don Lapre posted:What do you need 8 cores for? I'm running 4 VMs, so I thought it would be nice to give them 2 (or even 3) cores each?
|
Or you could buy a good processor.
|