Heners_UK posted:Can I use the gift card to buy a new drive with a fresh warranty? Maybe. This is anecdotal, but a few years ago I had to RMA one of my brother's 5TB Toshiba drives that he got on sale, as it developed bad sectors or whatever and was failing. Toshiba's RMA department made me pay shipping since they didn't offer advance replacement, and then once they received the defective drive, they notified me that I would be receiving a Visa Reward Card in 3-4 weeks, which was confusing because I thought they'd ship out a replacement unit. When I received the Visa card, it wasn't enough to cover the cost of a new 5TB drive. This experience has led me to never buy a Toshiba HDD again. Maybe their warranty/RMA policy is different in Canada?
teagone posted:Maybe. This is anecdotal, but a few years ago I had to RMA one of my brother's 5TB Toshiba drives that he got on sale as it developed bad sectors or whatever and was failing. Toshiba's RMA department made me pay shipping since they didn't offer advance replacement, and then once they received the defective drive, they notified me that I would be receiving a Visa Reward Card in 3-4 weeks which was confusing because I thought they'd ship out a replacement unit. When I received the Visa card, it wasn't enough to cover the cost of a new 5TB drive. This experience has led me to never buy a Toshiba HDD again. Maybe their warranty/RMA policy is different in Canada? I've been happy enough with my Toshibas to date (zero deaths out of four) but holy fuck I'll never buy one again either.
Crossposting from the hardware thread - I bought some iStarUSA five-drive hotswap bays (BPU-350SATA) which work almost perfectly, except the fans are intolerably loud. Not only that, they have no real temperature control at all and simply either ramp up to 100% on any activity or just stay at 100%. They're 80mm garbage fans and I ordered some Noctuas to replace them, however the plug interface on the fans that came with the units is slightly smaller, so the new ones won't fit! See the attached picture. Is there an adapter for this or should I just cut and splice the goddamned cables? I was thinking of just cabling the new 80mm fans straight to the PSU of the system, however the enclosure has an alarm when no fan is detected spinning. Link for reference - http://www.istarusa.com/en/istarusa/products.php?model=BPU-350SATA
Keshik posted:Thoughts on something like this? It took me a while of Googling to finally find out what to call them, then found a plethora on NewEgg and Amazon. I had my eye on something similar by Mediasonic. I asked about it in this thread but nobody had experience with them, although they should probably work fine. It's a single USB3 connection, so note that it'll bottleneck under heavy use (e.g. sequential transfers to more than 2 drives), but otherwise it should be fine. I do like the Orico brand though; I've used several of their transparent USB3 enclosures and they work well, plus they're cheap and well made. The brand is almost certainly just a rebrand of some Chinese product that can be found elsewhere under different names, but I haven't had any complaints about their stuff so far.
Less Fat Luke posted:Crossposting from the hardware thread - I bought some iStarUSA five drive hotswap bays (BPU-350SATA) which almost work perfectly except the fans are intolerably loud. Not only that they have no real temperature setting at all and simply other ramp up to 100% on any activity or just stay at 100%. They're 80mm garbage fans and I ordered some Noctuas to replace them however the plug interface is slightly smaller on the fans that came with the units and won't fit! See the attached picture. I believe that's a JST-PH type of connector on the old fan. Wasn't able to find any adapters, but if you're handy with small parts and don't mind ordering hundreds then you could make your own. If you're going to cut the cables to splice, keep in mind that cutting the cable on the noctua will void the warranty. You could alternatively cut a fan extension cable and splice that with the JST-PH connector, then attach the noctua to the extension (will take up more room though).
Actuarial Fables posted:I believe that's a JST-PH type of connector on the old fan. Wasn't able to find any adapters, but if you're handy with small parts and don't mind ordering hundreds then you could make your own. Oh shit interesting, thanks! The 80mm fans are cheap enough that I'm not super worried about the warranty. Are the regular case fan sizes JST-XH then? I see some adapters though honestly splicing is probably what I'm going to do as opposed to ordering more stuff.
Regular PC fan connectors use the Molex KK series. You might be able to jam them together though; the pitch difference vs. JST-XH is just .002in/0.04mm, but you'd have to snip away some plastic to get them to mate.
Actuarial Fables fucked around with this message at 18:20 on Dec 3, 2019
Gotcha thanks again! I'll do it proper and grab some new wire nuts tomorrow to splice and dice.
Less Fat Luke posted:Gotcha thanks again! I'll do it proper and grab some new wire nuts tomorrow to splice and dice. Those are a little small for wire nuts, I'd either crimp them with a butt connector or more likely put some heatshrink on, solder them together and then heatshrink over the join. Sometimes when I need to do an adapter like that I just use a couple of jumper wires with one end crimped female and one male and plug it in. Zip tying the extra and optionally putting a dab of hotglue on the connector gives it that professional hackjob touch.
Rexxed posted:Those are a little small for wire nuts, I'd either crimp them with a butt connector or more likely put some heatshrink on, solder them together and then heatshrink over the join. Sometimes when I need to do an adapter like that I just use a couple of jumper wires with one end crimped female and one male and plug it in. Zip tying the extra and optionally putting a dab of hotglue on the connector gives it that professional hackjob touch.
I asked about building a NAS a few weeks ago, but since then I have thought about just buying an "off the shelf" server to use. Honestly, if it does what I want it to, it will be cheaper, and I am worried about flashing the card to IT mode; that seems to be a bit over my head. This is the one I have been looking at: https://www.ebay.com/itm/Supermicro...5.c100005.m1851 I also messaged the eBay seller and he said that the RAID controller will be shipped in IT mode for Unraid. I know servers are normally loud, but I have seen some guides on swapping the old server fans out for normal PC fans to make them much quieter. What do you all think of it? All I plan on using it for is Unraid storage and Plex with at most 3 transcodes. Thanks!
It's not a bad price, but it's overkill. You don't really need a dual-CPU setup in all probability; unless you're planning to stream to remote devices or have some other specific known need, you probably won't actually be transcoding all that often. It'll also be reasonably loud, thanks to what appear to be 80mm fans behind the drive racks and 40mm fans in the redundant PSUs. At least it's not a 1U? You might want to consider something more like a tower server, which will likely be single-CPU and quieter.
IndianaZoidberg posted:What do you all think of it? All I plan on using it for is Unraid storage and Plex with at most 3 transcodes. I'll be honest, it's really a LOT for just that. You didn't mention how many drives you foresee using, but if it's not many, this may be excessive. I'd get it if you have cheap power and an isolated place (e.g. garage) where you don't care about noise. If you're planning on a lot of VMs and Dockers, then we're talking.
Heners_UK posted:I'll be honest, it's really a LOT for just that. You didn't mention how many drives you foresee using but if it's not many, this may be excessive. I'd get it if you have cheap power and an isolated place (e.g. garage) where you don't care about noise. Still, a modern CPU will run circles around dual E5 v1 Xeon chips while drawing less power and generating less heat. The only benefits of that server are the relatively low purchase price and the 24 bays.
Moey posted:Still, a modern CPU will run circles around dual E5 v1 Xeon chips, suck less power and generate less heat. I had this vague idea also to buy that server, then get rid of the motherboard and CPU and just reuse the case and RAID controller. But everyone has made some good points, so I will think about it and do some more looking around. I don't know how many drives I'm going to use, but I have over 20TB right now on external drives on my main computer, and I would want 2 parity drives. Also planning for future expansion.
20TB right now is literally doable with two drives for under $300 (plus whatever parity you want). You certainly don't need dozens of bays unless you're also getting a ton of free drives to put in em.
You can change the thermal profiles on Dell R620s/R720s and make them very quiet; my 16 x 900GB 10k SAS R720 is idling at 288 watts on average, sometimes dipping down to 165 watts.
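For anyone wanting to try this, fan control on those boxes is usually done with raw IPMI commands to the iDRAC. The byte sequences below are community-documented rather than anything officially supported, so treat this as a sketch and verify against your own iDRAC firmware; `<idrac-ip>` and `<password>` are placeholders:

```shell
# Community-documented (not officially supported) iDRAC fan commands for
# Dell R620/R720. <idrac-ip> and <password> are placeholders.

# Disable the automatic thermal profile (take manual control of the fans)
ipmitool -I lanplus -H <idrac-ip> -U root -P <password> raw 0x30 0x30 0x01 0x00

# Pin all fans to a fixed duty cycle; 0x14 is hex for 20%
ipmitool -I lanplus -H <idrac-ip> -U root -P <password> raw 0x30 0x30 0x02 0xff 0x14

# Give control back to the automatic profile when done
ipmitool -I lanplus -H <idrac-ip> -U root -P <password> raw 0x30 0x30 0x01 0x01
```

Keep an eye on drive and CPU temps after pinning the fans low, since the BMC won't ramp them back up on its own once the auto profile is off.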
Random question, but does anyone know why BackBlaze doesn't use Western Digital drives anymore?
Former Human posted:Random question, but does anyone know why BackBlaze doesn't use Western Digital drives anymore? It was mentioned in the Q2 report, although the why is likely just that they aren't the cheapest: https://www.backblaze.com/blog/hard...-stats-q2-2019/ quote:Goodbye Western Digital
I know the cool kids just let their 2.5inchers hang wherever they wish, but I would like a slot for them. With that in mind, do you think this is a good solution for 6 at once? https://www.amazon.ca/Dovewill-Conv...r/dp/B074SJVMKV
Get with the times. If you are going with SSDs (which are slimmer), you can shove 8 into a bay. They also have 4x and 6x options. https://www.icydock.com/goods.php?id=293
Gonna throw a couple of 480GB SATA SSDs I have laying around into my server as a RAID0 just for torrent scratch / usenet unpacking purposes. mdraid or ZFS?
IOwnCalculus posted:Gonna throw a couple of 480GB SATA SSDs I have laying around into my server as a RAID0 just for torrent scratch / usenet unpacking purposes. mdraid or ZFS? MD. It'll be faster and not require as much RAM.
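For reference, a minimal mdraid stripe looks something like this; the device names, filesystem, and mount point are placeholders for illustration, not a recommendation:

```shell
# Two-SSD RAID0 scratch array with mdadm (needs root; check lsblk first --
# /dev/sdX and /dev/sdY are placeholders for the actual SSDs).
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdX /dev/sdY

# Any filesystem works for scratch space; ext4 as an example
mkfs.ext4 /dev/md0
mkdir -p /mnt/scratch
mount /dev/md0 /mnt/scratch

# Record the array definition so it reassembles on boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

Since it's pure scratch space, losing the array on an SSD death costs nothing but in-flight downloads, which is the whole appeal of RAID0 here.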
Yeah, plus no COW worries if the array ever gets full because nzbget doesn't clean up after itself reliably. I prefer the ZFS toolset but I figured this probably wasn't the best use for it.
You could also stripe them as an LVM volume group; if you're already using LVM, that might be convenient. But I think ZFS provides LVM-like functionality (not completely sure on that), so you're probably not using it.
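The LVM equivalent is a striped logical volume; devices and the VG/LV names below are made up for illustration:

```shell
# Striped LVM setup across two SSDs (needs root; devices and the
# scratch_vg/scratch_lv names are placeholders).
pvcreate /dev/sdX /dev/sdY
vgcreate scratch_vg /dev/sdX /dev/sdY

# -i 2 stripes extents across both PVs, giving RAID0-like behavior
lvcreate -i 2 -l 100%FREE -n scratch_lv scratch_vg
mkfs.ext4 /dev/scratch_vg/scratch_lv
```

The upside over mdadm is that you can later grow or reshuffle the volume with the usual LVM tools if the box already uses them.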
IOwnCalculus posted:Gonna throw a couple of 480GB SATA SSDs I have laying around into my server as a RAID0 just for torrent scratch / usenet unpacking purposes. mdraid or ZFS?
"Only" 64GB. Enough that it never runs into issues (especially now that I've banished Crashplan) but not remotely enough for something like that. Only reason I have these laying around is I upgraded two systems at home to NVMe SSDs.
What's the failure rate of the 8TB+ WD Easystore drives been like for everyone else? I'm seeing a 38% failure rate within about 2.5 years across a 50/50 mix of the reds and the white labels. I've had 4 drives in an 8-drive RAIDZ2 array go bad, and even one that was just sitting idle as a spare, merely powered on (not actually writing) as a backup drive off a desktop, has now gone bad. I had to pull out one I had stashed away brand new to start resilvering. So that makes 5 drives out of 13, and 3 of them seem to be within RMA time limits. This seems really high compared to the WD Green drives I've had previously, to the extent that I'm about to stop using Easystores and transition over to Toshiba drives, because drives failing this close together in time isn't cool with me. The same setup and chassis had about 3 failures in 6 years with a mix of WD Green and Samsung Spinpoint drives.
necrobobsledder posted:What's the failure rate of the 8 TB+ WD Easystore drives been like for everyone else? I haven't had any fail. I've got 6 (or is it 8... I can't remember) 8TB shucked drives. Half of them are from back when people started shucking and half of them are spread out over the past 18 months.
Just to add to the pool of results:

4x WD EMAZ (8TB) @ 8925 hours (372 days) - 0 bad sectors
4x WD EFAX (8TB) @ 14984 hours (624 days) - 0 bad sectors

Sniep fucked around with this message at 19:14 on Dec 7, 2019
necrobobsledder posted:What's the failure rate of the 8 TB+ WD Easystore drives been like for everyone else? 0% on 6x EMAZ and 2x EFAX. Also 0% on a smattering of others (Reds, Greens) totaling like eight drives. In thirty years the only failures I've ever had have been Seagate. Sheep fucked around with this message at 22:45 on Dec 7, 2019
The only drive make I haven't had any issues with yet, that I can recall, is Toshiba. I'm going from personal drives here, not experiences from work. I recently disassembled and tossed 12 drives that all either failed in some way or started generating SMART errors (I don't waste any time with drives that start generating errors, unless of course they're simply transmission errors caused by cables). It's probably not very interesting, but the sample of the 12 was as follows:

3x Western Digital (2x WD20EARS, 1x WD800)
5x Seagate (3x ST3000DM001, 1x ST3250823A, 1x ST3500630AS)
1x HGST (HUA722010CLA330)
3x Samsung (HD204UI)

Again, most of these worked in the sense that you could use them, but they had various degrees of issues, some minor, some major. Life's too short to play around with storage that's starting to fail.

HalloKitty fucked around with this message at 23:06 on Dec 7, 2019
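For anyone wanting to do the same kind of triage, the standard smartmontools commands cover it; /dev/sdX below is a placeholder for whatever disk you're checking:

```shell
# SMART triage with smartmontools (needs root; /dev/sdX is a placeholder).

# Quick overall health verdict
smartctl -H /dev/sdX

# Full attribute dump: watch Reallocated_Sector_Ct, Current_Pending_Sector
# and Offline_Uncorrectable for anything creeping above zero
smartctl -A /dev/sdX

# Run an extended self-test; review later with: smartctl -l selftest /dev/sdX
smartctl -t long /dev/sdX
```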
All my greens are still kickin like 5 years later.
Heners_UK posted:I know the cool kids just let their 2.5inchers hang wherever they wish, but I would like a slot for them. I have the configuration in the bottom left, I think it's IcyDock branded, and it's fine in a Shuttle XPC (one of the last PCs I have around here with a 5.25" bay). If you're just mounting SSDs then go with whatever one can cram 6+ into a single bay (I actually wanted to mount another 3.5" drive though; the XPC only has 4 SATA ports anyway). Henrik Zetterberg posted:All my greens are still kickin like 5 years later. I think I have a 3TB Green with like 70k power-on hours, and it still worked like normal when I shucked it, because it was in a NAS and was "on" but spun down most of the time.
Is it normal that a Sonarr docker container in Unraid pegs an Intel G4400 to 100%?
No, but are you out of memory too? Unraid doesn't have a swap file and, in my experience at least, sends the CPU to 100% when it's out of memory. I know it's supposed to deliver a nice clean kill to a process when that happens... it didn't seem to. (Guessing that Docker immediately restarted said process, or another one saturated memory straight after.)
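A few quick ways to confirm whether it's actually memory pressure rather than Sonarr itself (standard Linux tools, nothing Unraid-specific):

```shell
# Overall memory and swap usage
free -m

# Did the kernel OOM killer fire recently?
dmesg | grep -i "out of memory"

# Per-container CPU and memory usage, one-shot snapshot
docker stats --no-stream
```

If the OOM killer shows up in dmesg right around when the CPU pegs, that points at memory rather than the container's workload.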
Heners_UK posted:No, but are you out of memory too? Unraid doesn't have a swap file and, in my experience at least, sends the cpu to 100% when it's out of memory. I know that it supposed to deliver a nice clean kill to a process when it happens... Didn't seem to. (Guessing that docker immediately restarted said process or another saturated memory straight after) Nope, only at 46% utilization, 8GB of 2400 DDR4.
I hate to use anecdata, but given how I spread out my purchases across time and stores, I'm having to conclude that something new is affecting my array, like this faulty chassis fan that's keeping air from circulating in half the case (the drives that failed were not in the +6C hotspots). It's certainly a lot cheaper to fix that than to spend double on disks only to find out there's a bad vibration/resonance happening that's causing drive failures. I have no idea how I keep getting so unlucky with drives, because I've had at least 3 more drive failures prior, one of which was a 90s Deathstar.
Anyone have a current(ish) guide on setting up SMB (or other) file shares from an Unraid server to a wireless home network for a Win10 system? Initial attempts at digging around found that a lot of the discussion about setting up SMB file shares from an Unraid system was from almost 10 years ago. If that's still what works, cool, but I wanted to ask before attempting those instructions.
That Works posted:Anyone have a current(ish) guide on setting up SMB (or other) file shares from an unraid server to a wireless home network for a win10 system? You have to install SMB1 on Windows 10, and the shares will show up when you browse to \\tower.local. Allegedly 6.8 is adding modern SMB.
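For the curious, under the hood it's just Samba; on a generic Linux box a share definition looks like the fragment below. The share name, path, and username are made-up examples, and note that Unraid generates its own Samba config from the web UI, so there you'd normally use the share settings page rather than editing a file by hand:

```ini
; Illustrative smb.conf share fragment; all names and paths are examples.
[media]
   path = /mnt/user/media
   browseable = yes
   read only = no
   guest ok = no
   valid users = youruser
```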