H2SO4 posted:I just bought another Seagate Backup Plus 8TB from Amazon to fully populate my DS1019+. I've run FreeNAS forever, but I have been impressed by the Synology - it's like a grown-up Drobo.
Nam Taf posted:Grabbed 6 of the 10TB drives last night when they were still on sale. They state shipping on 3/6. Pray for me that it doesn't get cancelled. I'm hoping you're in a country where 3/6 means June 3rd and not March 6th?
Red_Fred posted:Is anyone using Synology Surveillance and DS Cam? I got it all set up but I'm not convinced that DS Cam and my phone are working as a geofence to turn home mode on and off. OK, I got it sorted: you need to use your QuickConnect name, not the IP. However, the geofence still seems problematic.
I remember reading an article several years ago that looked at the error rate of drives and argued that the larger the drives you use, the more likely you are to get fucked on a restore. Has anything changed there? It'd be nice to grab a lot of 10TB drives to replace my current 4TB ones, but it seems risky.
As a general rule of thumb, most RAID failures happen during a restore or parity recovery. So that might be what it is referring to.
Can I power several external hard drives with this, or would that be a regrettable mistake? https://www.amazon.com/dp/B076HKCRNG
gary oldmans diary posted:Can I power several external hard drives with this, or would that be a regrettable mistake? You'll have to look at the power output of the transformers they use now. If each drive uses about 1A @ 12V, then that could theoretically handle five of them. I wouldn't trust the current rating on a generic power supply like that, though; I'd assume it's going to handle less than it says it will, and I wouldn't trust my data to that rating.
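To put rough numbers on that, here's a quick current-budget sketch in Python. The 5 A label and the ~1 A-per-drive figure are the assumptions from the post above, and the derating factor is just a guess at how far to distrust a no-name adapter's sticker:

```python
# Current-budget sketch: how many 12 V external drives a generic adapter
# might realistically run. Assumptions: 12 V @ 5 A on the label (from the
# thread), ~1 A per running drive, and a guessed derating factor for how
# much of a no-name adapter's sticker rating to actually trust.
ADAPTER_RATED_AMPS = 5.0
DRIVE_AMPS = 1.0
DERATING = 0.6  # pure guess: trust ~60% of the sticker

usable_amps = ADAPTER_RATED_AMPS * DERATING
print("Drives by the sticker rating:", int(ADAPTER_RATED_AMPS // DRIVE_AMPS))
print("Drives after derating:      ", int(usable_amps // DRIVE_AMPS))
```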
gary oldmans diary posted:Can I power several external hard drives with this, or would that be a regrettable mistake? I personally wouldn't use something like that. Use the original power supplies and this: https://www.amazon.com/dp/B004LZ5XM...i_1I84CbPV3VMSB It's not as "neat" but it's way safer for your drives.
Pardot posted:I remember reading an article several years ago that looked at the error rate of drives and argued that the larger the drives you use, the more likely you are to get fucked on a restore. Has anything changed there? It'd be nice to grab a lot of 10TB drives to replace my current 4TB ones, but it seems risky. Basically, every drive has an "Unrecoverable Bit Error" rate at which it is expected to maintain data integrity. Consumer drives are typically rated at one bit error per 10^14 bits read, while enterprise drives are rated at 10^15 or something like that. Those numbers haven't improved much over time, so by the letter of the math there is a pretty high probability of at least one error cropping up during a rebuild of a >10 TB array. In practice, though, drive reliability tends to vastly exceed the official rating. If that rating were real you would hit errors all the time during a ZFS scrub, and I've never seen even one over ~100 scrubs on my dozens-of-TB pools. In practice, a scrub is a "dress rehearsal" for a resilver, since all data in the pool has to be read to validate the checksums anyway. Also, a sane soft-RAID should just lose/corrupt that one file instead of failing the whole rebuild over a single bit error.
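To see where the "letter of the math" claim comes from, here's a back-of-the-envelope sketch in Python. It assumes the 1-per-10^14-bits consumer rating above and treats every bit read during a rebuild as an independent trial, which, as noted, real drives comfortably beat:

```python
import math

def p_at_least_one_ure(bytes_read, ure_rate_per_bit=1e-14):
    """P(at least one unrecoverable read error) over bytes_read bytes.

    Assumes independent errors at the spec-sheet rate -- an idealization
    that real drives beat by a wide margin.
    """
    bits = bytes_read * 8
    # 1 - (1 - rate)^bits, via log1p/expm1 to keep the tiny numbers precise
    return -math.expm1(bits * math.log1p(-ure_rate_per_bit))

for tb in (4, 10, 40):
    p = p_at_least_one_ure(tb * 1e12)
    print(f"{tb:>3} TB read during a rebuild -> P(>=1 URE) ~ {p:.0%}")
```

At the 10^15 enterprise rating the same 10 TB rebuild comes out to roughly a 1-in-13 chance, which is part of why the spec-sheet math looks so much scarier than real-world scrub results.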
Paul MaudDib posted:Also, a sane soft-RAID should just lose/corrupt that one file instead of failing the whole rebuild over a single bit error. Thanks, this makes sense. I looked a bit, but I can't find for sure whether Synology Hybrid RAID does this. Does anyone know?
Errors in batches of drives have fairly high cohort-based correlation. What that means is that drives from the same batch, bought from the same place at the same time, tend to fail together. That also pushes the likelihood of failures during a rebuild higher, because the drives in an array probably came together as a group when some idiot was drunk and dropped the whole pallet of them, some wind blew onto the line in Thailand, etc.
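For a toy illustration of why that matters, the Monte Carlo sketch below compares fully independent drive failures against a model where the whole array sometimes ships from one bad batch; every probability and the 10x "bad batch" multiplier are invented numbers, not measured failure rates:

```python
import random

def p_two_or_more_failures(correlated, n_trials=200_000, n_drives=6,
                           p_fail=0.02, p_bad_batch=0.10, batch_multiplier=10):
    """Toy model of losing 2+ drives in the same window (e.g. a rebuild).

    Independent case: each drive fails with probability p_fail.
    Correlated case: with probability p_bad_batch the whole array shipped
    from one bad batch and every drive's failure chance is multiplied.
    Every number here is invented purely for illustration.
    """
    hits = 0
    for _ in range(n_trials):
        p = p_fail
        if correlated and random.random() < p_bad_batch:
            p = min(1.0, p_fail * batch_multiplier)
        failures = sum(random.random() < p for _ in range(n_drives))
        if failures >= 2:
            hits += 1
    return hits / n_trials

print(f"independent drives:     {p_two_or_more_failures(False):.2%}")
print(f"same-batch correlation: {p_two_or_more_failures(True):.2%}")
```

With these made-up inputs the correlated case loses a second drive several times more often, which is the usual argument for spreading purchases across batches or vendors.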
gary oldmans diary posted:Can I power several external hard drives with this, or would that be a regrettable mistake? I think the last time I had an external HDD plugged into my Kill A Watt it measured an average draw of ~10 W, from a PSU rated for 20 W (likely to handle the surge on spin-up). Since P = IV (or W = A * V), that thing you linked is rated for 60 W. It might be able to support a few drives in active operation, but it wouldn't be able to handle the surge of all 8 drives spinning up at once. I wouldn't try it.
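Running the same P = IV arithmetic with the ballpark figures from this post (a 12 V / 5 A adapter, ~10 W average per drive, roughly double that at spin-up; these are all assumptions, so check the actual labels):

```python
# Ballpark power budget for running several external drives off one adapter.
# The numbers are the assumptions from the post above, not measurements of
# the linked product: a 12 V * 5 A adapter, ~10 W average per drive, and
# roughly 2x that surge while a drive spins up.
ADAPTER_WATTS = 12 * 5       # 60 W
AVG_DRAW_WATTS = 10          # steady-state draw per drive
SPINUP_DRAW_WATTS = 20       # per-drive surge during spin-up

print("Drives covered at steady state:        ", ADAPTER_WATTS // AVG_DRAW_WATTS)
print("Drives covered if all spin up at once: ", ADAPTER_WATTS // SPINUP_DRAW_WATTS)
```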
gary oldmans diary posted:Can I power several external hard drives with this, or would that be a regrettable mistake? I would get one rated higher, but unlike the naysayers here I don't see much of a problem with it. Inrush current on small 5400 RPM motors isn't that much, and it should be smoothed out by the capacitors on the boards themselves. I would still make sure everything is correctly backed up. 120 V to 12 V transformers have been a thing for a long time. Try not to buy the cheapest one.
necrobobsledder posted:Errors in batches of drives have fairly high cohort-based correlation. What that means is that drives from the same batch, bought from the same place at the same time, tend to fail together. That also pushes the likelihood of failures during a rebuild higher, because the drives in an array probably came together as a group when some idiot was drunk and dropped the whole pallet of them, some wind blew onto the line in Thailand, etc. I can say I got bit by this in two different cases (kinda). I ordered 4 drives at the same time from the same place and they all had similar production dates (8TB WD Reds): half of them were DOA, and the other two died 1-2 years later. That could have been someone drop-kicking Newegg's box, though, since one drive was "spindle broken, making grinding noises, won't even show up in the BIOS" dead. Earlier this year I ordered a single drive (8TB Seagate IronWolf) and noted the production date. It failed within Amazon's return period, so I got a replacement. The replacement's production date was within a couple of weeks of the first failed drive's, and it ALSO failed in the same way (reallocated sector count going up). The replacement for that replacement had a production date several months later and it's been fine so far. Hope it lasts at least a few years.
BeastOfExmoor posted:I'm hoping you're in a country where 3/6 means June 3rd and not March 6th? In any case, the orders were cancelled. I got 2x $30 vouchers out of it, but burned my Prime trial memberships, so that sucks. It looks like they all got cancelled, judging by others' replies.
Nam Taf posted:In any case, the orders were cancelled. I got 2x $30 vouchers out of it, but burned my Prime trial memberships, so that sucks. It looks like they all got cancelled, judging by others' replies. This calm and pensive response has no place in a tech-deals discussion! Unless the hard drives are delivered to you pre-shucked on a gold platter, you and all others should go off the deep end and say that Amazon built its entire business over many years just to screw you and only you at this and only this moment.
I was having issues with FreeNAS throughput and it wasn't liking my disk controller on my SuperMicro board, so I got bored and did something: I virtualized FreeNAS on my Xen cluster. I installed mdadm on the XenServer host, RAID-5'ed the eight 1TB drives together, then carved those into eight 500GB virtual disks in Xen and passed them to the FreeNAS VM, and it's actually getting BETTER throughput than it did on metal. The only major downside is that my external USB 3.0 backup drive can only pass through as USB 2.0, so rsync is slow between the cluster and the USB drive, but overall I'm really happy with how it came out, and Xen gives me better use of the hardware and management over the box's resources. Plus I can use some of the RAID-5 capacity for handling snapshots and backups of my other VMs. I went overboard, I know, and I was skeptical about the performance of the array being RAID'ed and then broken back up into a ZFS pool, but I can't find anything to complain about. CommieGIR fucked around with this message at 22:47 on May 22, 2019
Nam Taf posted:Yes, I live in what's commonly referred to as literally anywhere but the US.
Anyone who uses anything other than YYYY-MM-DD has opinions which are literally poop from a butt (the butt is they mouf).
CommieGIR posted:I was having issues with FreeNAS throughput and it wasn't liking my disk controller on my SuperMicro board, so I got bored and did something. FreeBSD got rid of this back in 1999 for what seems like excellent reasons. Schadenboner posted:Anyone who uses anything other than YYYY-MM-DD has opinions which are literally poop from a butt (the butt is they mouf).
gary oldmans diary posted:You completely forgot all the places Y/M/D is used. Which still interpret a two-item date as dd/mm in my experience. I've only ever had it be ambiguous to Americans.
Nam Taf posted:Which still interpret a two-item date as dd/mm in my experience. I've only ever had it be ambiguous to Americans.
CommieGIR posted:I virtualized FreeNAS on my Xen cluster. I installed mdadm on the XenServer host, RAID-5'ed the eight 1TB drives together, then carved those into eight 500GB virtual disks in Xen and passed them to the FreeNAS VM, and it's actually getting BETTER throughput than it did on metal. What kind of demon are you trying to summon? The first commandment of ZFS is "present your disks directly to ZFS" and you're sticking like three layers of abstraction in the middle.
That's always been the mystery of ZFS on Linux to me: how does it cope with the kernel's device caching?
H2SO4 posted:What kind of demon are you trying to summon? The first commandment of ZFS is "present your disks directly to ZFS" and you're sticking like three layers of abstraction in the middle. It was a test to see how it'd work. Frankly, so far so good. That unknown is why I've got good full backups. I just got sick of fighting FreeNAS over controllers. Like I said in the first post, it was a stupid, crazy idea that I wanted to try. CommieGIR fucked around with this message at 21:54 on May 23, 2019
I have a spare Intel X79 system lying around; converting it into a FreeNAS box doesn't seem like a bad idea? Can the thread recommend an HBA that's gonna support 8 direct-attached WD Red or shucked WD drives, and where I can buy it?
Alzabo posted:I have a spare Intel X79 system lying around; converting it into a FreeNAS box doesn't seem like a bad idea? Basically any LSI/Avago/Broadcom card that is in "IT mode". e: if you held a gun to my head, here you go: https://www.ebay.com/itm/New-LSI-Me...XAAAOSwdGFYwCX-
That looks like the next generation of the SAS2008-based model I've been running in a few boxes for years. Not sure 12Gb/s bandwidth would actually buy you anything with consumer drives, though. I run the LSI 9211-8i, which you can find for like $75 on Amazon.
Alzabo posted:I have a spare Intel X79 system lying around; converting it into a FreeNAS box doesn't seem like a bad idea? X79 is perfectly fine as long as you will tolerate the power. It's sub-Ryzen performance but HEDT power consumption. Being able to run ECC RDIMMs and hit cheap 128GB+ capacity is cool, and that's something that's unique to X79 (unless you go with a 2011-3 server board), but personally I would go with something else these days. The merits of ECC RDIMMs have faded as the memory crisis has eased.
Alzabo posted:Can the thread recommend an HBA that's gonna support 8 direct-attached WD Red or shucked WD drives, and where I can buy it? An IBM M1015 flashed into IT mode; it's got an LSI controller underneath and it's widely used in the white-box NAS community. Or if you want 16 ports per card, LSI makes a card for that too (a 16i/16e suffix means 16 ports internal or external; I want to say 9300-16i?). And yeah, LSI is your best bet as far as compatibility. You do need to watch driver support on FreeBSD, and FreeNAS is not always up to date with the underlying FreeBSD either. Paul MaudDib fucked around with this message at 19:01 on May 24, 2019
If your motherboard has a decent number of SATA ports, consider bypassing the FreeNAS + HBA route and just running some flavor of Windows with StableBit DrivePool. For example, if you're a student you might get Windows Server 2016 or 2019 for free through Azure for Students.
Paul MaudDib posted:
I think the X11SSH-F board is going to suit my needs for a new FreeNAS build, thanks for the suggestions! The price difference between the i3-7100 and an E3-1225 V6 is small enough that I might just spring for the upgrade.
Unsurprisingly there's another 8TB Easystore sale on at Best Buy for $130: https://www.bestbuy.com/site/wd-eas...p?skuId=5792401
Rexxed posted:Unsurprisingly there's another 8TB Easystore sale on at Best Buy for $130: Been waiting for this, and I grabbed two. Thanks! Edit: I ended up with two EMAZ drives. manero fucked around with this message at 00:28 on May 27, 2019
I grabbed two also; time to fill out all the bays in my DS1817+
Snagged the last 3 at my store, replacing two 4TBs and a 2TB.
RIP power bill.
Have any Unraid users swapped out to a larger drive in the array before? I'm not sure which is less disruptive: removing an old drive and letting the array rebuild its data onto the new drive, or copying data off the old drive first before putting in the new drive.
Why not put in the new drive, move folders over from the old drive, then remove the old drive and rebuild? Or maybe copy the folders over instead, to prevent any possible issues.
I think there is an app called unbalance that you can use to tell Unraid to clear out a disk; then you can remove it. It just shuffles files onto the other drives.
I've seen a few stories now of people swapping old, smaller drives for larger ones. Call me cheap (lots of people do), but if you have these drives in a parity/RAID-protected array (and you have backups of the truly important stuff that are not RAID/parity, which you should, because RAID/parity is not backup), then why not simply use the drives until they die? The data is merely a rebuild away at worst.