Star War Sex Parrot posted:The first step is admitting that you have a problem. You're an enabler.
IOwnCalculus posted:I've got it in the NZXT Source 210 Elite. It actually works quite well: it's reasonably quick to swap drives out of, and it cools well. But as I mentioned, the SAS connectors on the board are quite close to some of the drives. I'll eventually order some breakout cables with 90-degree SFF-8087 ends on them if I want to use them. Correction: With careful finagling, it is actually possible to get the SFF-8087 connectors on the X8SI6 connected and looped back to the drives. I think it would've actually been more difficult with right-angle connectors.
IOwnCalculus posted:Correction: With careful finagling, it is actually possible to get the SFF-8087 connectors on the X8SI6 connected and looped back to the drives. I think it would've actually been more difficult with right-angle connectors. Did you consider a TS140 with the Xeon CPU or any other servers when you came across this deal? What is the rest of your build like? I'm considering this deal or a Microserver paired with 4x4TB hard drives for a media server with XBMC or similar.
lampey posted:Did you consider a TS140 with the Xeon CPU or any other servers when you came across this deal?
lampey posted:Did you consider a TS140 with the Xeon CPU or any other servers when you came across this deal? What is the rest of your build like? I've been evolving this setup for a long, long time. If I were starting from scratch I'd do a Microserver or something like it with fewer, larger drives, but I've got nine drives (plus spares - my 1.5TBs are ancient, so I don't trust them enough to stripe them into yet another raidz vdev), so that's not an option. In this case, I wanted to finally get my fileserver at least onto some ECC hardware, since I cheaped out a wee bit on the previous build (as an ESXi all-in-one) and didn't do an ECC build. It's not doing any transcoding or anything, since I don't want to bother getting more ECC RAM or making it work under NAS4Free, but that X3450 is certainly up for the task.
WD announced 5TB and 6TB Reds today. $299 for the big boys. Also up to 4TB Red Pros, which are 7200 RPM drives designed to go into configurations with up to 16 drives.
Correct me if I'm wrong but weren't those announced a while ago?
KOTEX GOD OF BLOOD posted:Correct me if I'm wrong but weren't those announced a while ago? Don't think so? http://anandtech.com/show/8273/west...nd-pro-versions
KOTEX GOD OF BLOOD posted:Correct me if I'm wrong but weren't those announced a while ago? HGST and Seagate had announced 6TB offerings using helium as the gas inside to reduce friction. WD had mentioned future plans for 6TB, but this is the first confirmation of pricing and a new line with the Pro drives.
G-Prime posted:HGST and Seagate had announced 6TB offerings using helium as the gas inside to reduce friction. WD had mentioned future plans for 6TB, but this is the first confirmation of pricing and a new line with the Pro drives. Note the WD drives are regular old drives. No helium or anything.
Don Lapre posted:Note the WD drives are regular old drives. No helium or anything. I'm very curious to know what kind of voodoo they're doing there, and how different the failure rates will be between the helium ones and the regular air ones. I'd think that the extra friction will lead to more head crashes.
G-Prime posted:I'm very curious to know what kind of voodoo they're doing there, and how different the fail rates will be between the helium ones and the regular air ones. I'd think that the extra friction will lead to more head crashes. It's not so much "extra" friction as it is "about the same friction as before."
DNova posted:It's not so much "extra" friction as it is "about the same friction as before." Well, yes, but: more platters in the same space means less air gap between platters and heads, which gives the existing friction more opportunity to cause trouble. You're correct. I should have worded it better.
Anandtech already posted a head-to-head between the WD 6TB Red, Seagate's enterprise 6TB, and HGST's helium-filled 6TB: http://www.anandtech.com/show/8263/...gate-ec-hgst-he I haven't read it yet, but these drives aren't exactly direct competitors. I guess you have to test what's available, though.
G-Prime posted:Well, yes, but: more platters in the same space means less air gap between platters and heads, which gives the existing friction more opportunity to cause trouble. You're correct. I should have worded it better. It's the same 5-platter configuration as the 4TB Red. Platter density just went up: 1.2TB/platter versus the old 800GB/platter drive.
Star War Sex Parrot posted:It's the same 5-platter configuration as the 4TB Red. Platter density just went up: 1.2TB/platter versus the old 800GB/platter drive. Oh hell, I stand corrected. Excellent. That makes me feel a lot more comfortable.
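The quoted numbers check out, since the platter count stayed the same and only the areal density changed:
code:
4TB Red: 5 platters x 0.8 TB/platter = 4 TB
6TB Red: 5 platters x 1.2 TB/platter = 6 TB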
I don't understand the new Red Pros. Why does the hard drive need to support/even know about how many drives are in a bay?
KOTEX GOD OF BLOOD posted:Fractal Design FD-CA-NODE-804-BL
KOTEX GOD OF BLOOD posted:I'm about to pull the trigger on this FreeNAS build, is there any reason I'd need a drop-in PCI SATA or RAID card if the mobo has enough SATA ports for the number of drives I'm using? Not really, since it's all software. If you want to expand past your onboard SATA, look into an M1015 on eBay.
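A note on the M1015 suggestion: these cards are usually cross-flashed from their stock IBM MegaRAID firmware to LSI IT (initiator-target) firmware so ZFS sees the bare disks. A rough sketch of the flashing step with LSI's sas2flash utility; the firmware filename and controller index are assumptions, and real-world guides often add steps (such as wiping the IBM header first), so follow a trusted walkthrough for your exact card:
code:
sas2flash -listall               # confirm the controller is detected and note its index
sas2flash -o -e 6 -c 0           # advanced mode: erase the existing flash on controller 0
sas2flash -o -f 2118it.bin -c 0  # write the SAS2008 IT-mode firmware image (filename assumed)
sas2flash -listall               # verify the new firmware version took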
Ninja Rope posted:I don't understand the new Red Pros. Why does the hard drive need to support/even know about how many drives are in a bay? For OEMs maybe
Ninja Rope posted:I don't understand the new Red Pros. Why does the hard drive need to support/even know about how many drives are in a bay? Because the more drives you have, the more vibration you have. Notice under drive features that NAS drives are designed to better handle the vibration from being next to other drives.
Market segmentation. I assume if you run a normal Red in a case with a bunch of drives and mention it during the warranty claim, they'll just say no.
hifi posted:For OEMs maybe
I've been waiting for the new Red to start a FreeNAS build. I was beginning to think we'd never see them because of market segmentation.
Hmm, now I need to consider if a 5TB or 6TB drive will be better than a 4TB in my upcoming build. I'm mostly backing up media, with essential documents/photos also backed up elsewhere, so losing a drive wouldn't be the end of the world, even in a non-RAID setup (WHS2011 + DrivePool). Two 6TB drives would get me the same space as three 4TB drives and I guess would be cheaper. Hmm. EDIT: Actually, it looks like I'd still be better off financially with 4TB drives: AU$726 for 3x4TB versus AU$798 for 2x6TB.
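The per-terabyte arithmetic backs up that edit, since both options land at the same 12TB:
code:
3 x 4TB = 12TB at AU$726 -> AU$60.50/TB
2 x 6TB = 12TB at AU$798 -> AU$66.50/TB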
Fuck. I did zpool add instead of zpool attach again. This was with the hard drive I was going to copy everything onto to rebuild the pool. Does anyone have experience fixing this massive fuck up? I was going to do a zpool destroy, create a new pool with this hard drive, and try to do a zpool import of the original pool. Edit: That didn't work. ZFS is much smarter than I am and would not bring the pool back up with a missing device. How unfortunate for me.
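For anyone who hasn't been bitten by this yet: zpool add creates a new top-level vdev, which ZFS of this era can never remove again, while zpool attach mirrors a new disk onto an existing one. A minimal sketch of the difference, with hypothetical pool and disk names:
code:
# DANGEROUS in this situation: makes ada4 a new top-level vdev, permanently part of the pool
zpool add tank ada4

# What was intended: attach ada4 as a mirror of the existing disk ada3
zpool attach tank ada3 ada4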
Ethereal posted:Fuck. I did zpool add instead of zpool attach again. This was with the hard drive I was going to copy everything over to to rebuild the pool. Something like this: http://www.paulsohier.nl/blog/2011/...fs-disk-remove/
If it's a parity disk or part of a mirror that's missing, you should be able to force it online. Unless it was a single-disk vdev that faulted; then the pool's screwed.
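That matches how ZFS handles redundancy: a pool that's only missing a disk covered by parity or a mirror will still come up, just DEGRADED. A quick sketch with a hypothetical pool name:
code:
zpool import -f tank   # imports DEGRADED if the missing disk was redundant
zpool status tank      # shows which vdev is degraded and which device is missing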
Combat Pretzel posted:If it's a parity disk or part of a mirror that's missing, you should be able to force it online. Unless it was a single-disk vdev that faulted; then the pool's screwed. deimos posted:Something like this: http://www.paulsohier.nl/blog/2011/...fs-disk-remove/ Unfortunately the whole thing is fucked. I had a 3-drive raidz1 vdev, another vdev in the pool with a single small HD, and then I added another vdev to the pool by accident. Removing a single vdev, even if nothing was written to it, ruins the pool. Fun times. Lost a lot, but nothing critical. Just a ton of raw video and photo footage I had taken. It's too bad I can't grab files off of the other vdevs even in this case. ZFS seems to say that if this happens, the entire thing is done.
Recently, I replaced a drive in an older ZFS pool -- it wasn't very smooth, though. The zpool was created with 512-byte sector drives (ashift=9), but the replacement disk I purchased was a newer 4KB sector drive. ZFS wouldn't let me replace the drive, so I had to back up, destroy, recreate and restore the zpool (which now has ashift=12 set, with four 512-byte drives and one 4KB drive). Annoying, but since it was an older machine with 1TB drives I use for scratch storage in my office lab, it wasn't really a big deal (only 2.5TB of data, mostly test VMs I could blow away). Now at home, I've got almost the same situation (although the disks are OK right now). I have a zpool with 8 disks in raidz2, but in this case there's no easy way to back up the data (about 7.5TB). I already have the important data backed up in multiple locations, but the rest of the data is media that I wouldn't want to spend the time re-downloading. Online cloud backup seems like it would cost money (not a problem) and take 2 months to upload (holy shit). I'm actually considering tape, since I already have an LTO3 drive sitting around at my office (media would be cheaper than hard drives, plus the backup/restore time is much better than 2-4 months). Any thoughts? I actually think tape is going to win this one -- anything I should know before I start?
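On the tape math: LTO-3 holds 400GB native per cartridge, so 7.5TB is roughly 19 tapes before compression (and already-compressed media won't shrink further). A minimal sketch of streaming a directory to tape with tar; the device node and path are assumptions for illustration:
code:
mt -f /dev/nst0 rewind                  # position the non-rewinding tape device at the start
tar -b 512 -cvf /dev/nst0 /tank/media   # write the directory to tape with a large block size
mt -f /dev/nst0 offline                 # rewind and eject so the next cartridge can be loaded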
Ethereal posted:Unfortunately the whole thing is fucked. I had a 3-drive raidz1 vdev, another vdev in the pool with a single small HD, and then I added another vdev to the pool by accident. Removing a single vdev, even if nothing was written to it, ruins the pool. Fun times. Lost a lot, but nothing critical. Just a ton of raw video and photo footage I had taken. It's too bad I can't grab files off of the other vdevs even in this case. ZFS seems to say that if this happens, the entire thing is done. I mean, every sysadmin can fuck this up at some point, and there's still no way to fix a mistake like this that wouldn't involve lots of downtime.
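One guard rail worth knowing: zpool add takes a -n flag that prints the layout the pool would end up with, without changing anything, so an accidental top-level vdev can be caught before it becomes permanent. Hypothetical names again:
code:
zpool add -n tank ada4   # dry run: displays the resulting configuration, modifies nothing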
Finally finished the disk swap that I started Monday evening. It's been too long since I ran a scrub, I think. Lots of checksum errors that were repaired, and because of some system stability issues during the swap I ended up pulling a bunch of data off the pool because ZFS thought it was corrupted. All of it checked out fine (images, videos, nothing irreplaceable or not backed up), but I restored what was backed up and just copied back what wasn't. Autoexpand worked without an issue, bumped the pool up to a total capacity of 18TB from 12TB. Luckily the original pool was already set up for advanced format drives, but even if it hadn't been, I could have created a new pool with the new drives, copied all or almost all of the original data over, destroyed the old pool and re-added the drives I was keeping to the new pool. Next is physically re-arranging the drives in the system and pulling the unused disks. I'll keep some of them for external drive enclosures, or anything that I don't mind losing. Luckily the extra 6TB should be more than enough space for the foreseeable future, until I replace the other 4 drives due to old age.
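The sequence described there (replace disks, let the pool grow, verify) maps onto just a few commands. A sketch with hypothetical pool and device names:
code:
zpool set autoexpand=on tank   # let the pool grow once every disk in a vdev is larger
zpool replace tank ada0 ada6   # swap an old disk for a new one and resilver
zpool status tank              # watch resilver progress and checksum error counts
zpool scrub tank               # verify every block against its checksum afterwards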
Ugh. Got a 3TB Red to replace a failed 1.5TB Seagate drive. The old drive had 512b sectors, the new drive has 4k sectors. I can't do a zpool replace.
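The property behind this error is the vdev's ashift (9 for 512-byte sectors, 12 for 4K), which is fixed when the vdev is created; a 4K-native drive can't join an ashift=9 vdev. One common way to check what a pool was built with:
code:
zdb | grep ashift   # ashift: 9 means 512-byte sectors, ashift: 12 means 4K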
FISHMANPET posted:Ugh. Got a 3TB Red to replace a failed 1.5TB Seagate drive. The old drive had 512b sectors, the new drive has 4k sectors. I can't do a zpool replace. Do they still have the thing where you can set a jumper on the drive to have it present itself as a 512b sector drive?
thebigcow posted:Do they still have the thing where you can set a jumper on the drive to have it present itself as a 512b sector drive? I think the jumper thing is only for correct alignment on older operating systems. I just went through the same thing (look a few posts up) and the solution was to completely rebuild my zpool. I guess it's a great opportunity to dump Solaris 11.
thebigcow posted:Do they still have the thing where you can set a jumper on the drive to have it present itself as a 512b sector drive?
Ugh, I don't have 11TB of free space laying around to dump all my data to AGAIN.
Does it make sense to use a 2TB portable for data backup? I'm thinking of transitioning my 3TB desktop external to internal since the enclosure sometimes randomly loses connection.
Josh Lyman posted:Does it make sense to use a 2TB portable for data backup? I'm thinking of transitioning my 3TB desktop external to internal since the enclosure sometimes randomly loses connection. I wouldn't use a portable hard disk as my only data backup. It's good for keeping a local backup to make it easier to restore from a backup over the internet, but I would definitely still back up my stuff with a cloud service like CrashPlan.
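If a portable disk is the local tier of that setup, a periodic one-way mirror covers it. A minimal sketch, with the source and mount point assumed for illustration:
code:
rsync -a --delete /data/ /mnt/portable/backup/   # -a preserves metadata; --delete propagates removals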
fletcher posted:I wouldn't use a portable hard disk as my only data backup. It's good for keeping a local backup to make it easier to restore from a backup over the internet, but I would definitely still back up my stuff with a cloud service like CrashPlan.