D. Ebdrup
Mar 13, 2009



eames posted:

I don't know enough about microarchitectures to comment but IIRC ZFS was designed for 64 bit systems, not sure how well that would work on a 32 bit system in its current state. That'd be my main concern with this little DIY system.
You're absolutely right, Sun workstations back then were SPARC64.

This should apply to ARMv7 too, but I can remember having had success with ZFS on i386 FreeBSD by changing the kernel virtual address size. It has the unfortunate side-effect that if any of your drivers don't use busdma(9), you can end up not being able to address the devices at the memory mapping they expect - but this is only true for ISA and PCI devices which haven't been converted, and those are much more likely to exist on an i386 platform that's been running in production for way too long. So if you're talking about ARMv7, you may be in luck.
If I had any such systems left I'd be interested to see if the 4/4G split or PAE tables implemented recently changes anything about ZFS on i386.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

D. Ebdrup posted:


That's where the price would be justified - if it included the ARMv8.2 NEON SIMD, which can accelerate AES and SHA2 operations, by far the most computationally expensive operations of ZFS.

Interesting. Which current CPUs are good at ZFS, then? I'm using a very low TDP Skylake Xeon for my main ZFS box and I'm also testing out an LSI HBA in another box with an i5-7500 (non-K) in it.

I take it that for ZFS performance in the home you're looking at using most mid to high-end i5/i7/Ryzen chips? If you aren't needing absolute balls-to-the-wall throughput but just want a system that can deal with ZFS real nice.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll

Nap Ghost

I saw people running ZFS on Atom 330 boxes years ago that were doing well over 100 MBps throughput. Your CPU will only be a bottleneck on high-IOPS and high-throughput systems, which usually isn't the case in home servers.

D. Ebdrup
Mar 13, 2009



apropos man posted:

Interesting. Which current CPUs are good at ZFS, then? I'm using a very low TDP Skylake Xeon for my main ZFS box and I'm also testing out an LSI HBA in another box with an i5-7500 (non-K) in it.

I take it that for ZFS performance in the home you're looking at using most mid to high-end i5/i7/Ryzen chips? If you aren't needing absolute balls-to-the-wall throughput but just want a system that can deal with ZFS real nice.
The system I have now, which is properly optimized for fileserving and is doing little else, is a dual-core 1.3GHz AMD chip, and it saturates 1Gbps LAN and WAN with NFSv4.
The system I'm looking at has 8 cores (no hyperthreading) at 2GHz, but the reason I want it is that with it I can switch to SHA512 checksumming, which can be offloaded to the QAT chip that the SoC on the motherboard comes with.

ZFS scales insanely well, more so than existing (outdated?) "documentation" would indicate (thinking mostly of Wikipedia and various Linux people coming up with justifications for why they didn't need ZFS before they could get it).
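
For a rough idea of why SHA512 checksumming is attractive on 64-bit hardware even before any QAT offload enters the picture, here's a quick Python hashlib sketch - numbers are obviously machine-dependent, and this isn't going through ZFS at all, it's just the raw digest cost:

code:

# Unscientific comparison of SHA-256 vs SHA-512 software throughput.
# SHA-512 works on 64-bit words, so on 64-bit CPUs it often hashes
# more bytes per second despite producing a longer digest.
import hashlib
import os
import time

def throughput_mb_s(algo: str, payload: bytes, rounds: int = 20) -> float:
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.new(algo, payload).digest()
    elapsed = time.perf_counter() - start
    return len(payload) * rounds / elapsed / 1e6

if __name__ == "__main__":
    block = os.urandom(16 * 1024 * 1024)  # 16 MiB of incompressible data
    for algo in ("sha256", "sha512"):
        print(f"{algo}: {throughput_mb_s(algo, block):.0f} MB/s")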

IOwnCalculus
Apr 2, 2003





necrobobsledder posted:

I saw people running ZFS on Atom 330 boxes years ago that were doing well over 100 MBps throughput. Your CPU will only be a bottleneck on high-IOPS and high-throughput systems, which usually isn't the case in home servers.

I wouldn't expect the CPU to matter much during normal / healthy operations, the big question is rebuild speeds.

Which ZFS can be pretty fucking glacial at, since it's re-walking the whole tree. With that said I don't think I saw any notable increase in CPU load on my system during the past weeks of rebuilds, but then again it's not exactly CPU bound for anything short of 4K transcoding.

Zorak of Michigan
Jun 10, 2006

Waiting for his chance

IOwnCalculus posted:

I wouldn't expect the CPU to matter much during normal / healthy operations, the big question is rebuild speeds.

Which ZFS can be pretty fucking glacial at, since it's re-walking the whole tree. With that said I don't think I saw any notable increase in CPU load on my system during the past weeks of rebuilds, but then again it's not exactly CPU bound for anything short of 4K transcoding.

Is ZFS any slower than the alternatives? I generally expect it to be faster, since it knows what's on the disk, and only has to rebuild actual data. Other systems that separate out the disk management from the file system end up having to rebuild every last sector.

H110Hawk
Dec 28, 2006


Zorak of Michigan posted:

Is ZFS any slower than the alternatives? I generally expect it to be faster, since it knows what's on the disk, and only has to rebuild actual data. Other systems that separate out the disk management from the file system end up having to rebuild every last sector.

For a reasonably full disk it will go at disk speed for resilvering, but it will use more CPU cycles to do it. If you're CPU-bound, this means it will resilver slower than other common block-level RAID systems rebuild, since it simply has more work to do verifying the information that makes ZFS more rot-resistant.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

It seems that I overestimated when I was thinking about the CPU overhead of day-to-day running. I guess that there's little point in having a powerful CPU (i5 and above) because it's only gonna get slammed if you're either doing a rebuild, which is hopefully very rare, or during transcoding.

What about when we finally get native encryption on Linux? It can't be that far off, can it? I'm using LUKS underneath my ZFS mirror (well, one of them) but it'll be much easier to set up a new array when encryption is standard. I expect that the difference between running a ZFS array with LUKS underneath it and running the same ZFS array on the same system but using ZFS native encryption will be slightly less load on the CPU? Since you're dealing with one less layer of abstraction?

H110Hawk
Dec 28, 2006


apropos man posted:

It seems that I overestimated when I was thinking about the CPU overhead of day-to-day running. I guess that there's little point in having a powerful CPU (i5 and above) because it's only gonna get slammed if you're either doing a rebuild, which is hopefully very rare, or during transcoding.

What about when we finally get native encryption on Linux? It can't be that far off, can it? I'm using LUKS underneath my ZFS mirror (well, one of them) but it'll be much easier to set up a new array when encryption is standard. I expect that the difference between running a ZFS array with LUKS underneath it and running the same ZFS array on the same system but using ZFS native encryption will be slightly less load on the CPU? Since you're dealing with one less layer of abstraction?

Unless there is some kind of issue where ZFS's writes down to the LUKS layer aren't block-aligned, thus causing extra encryption work, it should be a nearly transparent change from a CPU utilization perspective. Most of the work is the AES math, not the abstraction layer.
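
If you want to convince yourself that the AES math is where the time goes, a quick software-only measurement with Python and the third-party cryptography package looks something like this - CTR mode is just a stand-in here, not the XTS mode LUKS actually uses, and none of this touches ZFS or LUKS themselves:

code:

# Measure raw AES-256-CTR throughput in software. With AES-NI this usually
# lands far above what a single spinning disk can deliver, which is why the
# encryption layer is rarely the bottleneck for a home array.
import os
import time
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_ctr_mib_s(total_mib: int = 256, chunk_mib: int = 16) -> float:
    key, nonce = os.urandom(32), os.urandom(16)
    encryptor = Cipher(
        algorithms.AES(key), modes.CTR(nonce), backend=default_backend()
    ).encryptor()
    chunk = os.urandom(chunk_mib * 1024 * 1024)
    start = time.perf_counter()
    for _ in range(total_mib // chunk_mib):
        encryptor.update(chunk)
    encryptor.finalize()
    return total_mib / (time.perf_counter() - start)

if __name__ == "__main__":
    print(f"AES-256-CTR: ~{aes_ctr_mib_s():.0f} MiB/s in software")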

D. Ebdrup
Mar 13, 2009



IOwnCalculus posted:

I wouldn't expect the CPU to matter much during normal / healthy operations, the big question is rebuild speeds.

Which ZFS can be pretty fucking glacial at, since it's re-walking the whole tree. With that said I don't think I saw any notable increase in CPU load on my system during the past weeks of rebuilds, but then again it's not exactly CPU bound for anything short of 4K transcoding.
At least ZFS is intelligent when you have more than one vdev and only operates on the vdev that actually has any replacement going on.

Zorak of Michigan posted:

Is ZFS any slower than the alternatives? I generally expect it to be faster, since it knows what's on the disk, and only has to rebuild actual data. Other systems that separate out the disk management from the file system end up having to rebuild every last sector.
You're right that it knows its structure and knows where to rebuild, but ZFS has a much longer write path (code-wise) than any other FS, because it needs to do so much to ensure your data doesn't get fucking eaten. That said, it's not so long as to matter outside of enterprise workloads like databases (and there are ways to mitigate it, and even theories about how to beat traditional filesystems by, for example, using ZFS objects for databases directly instead of going through the ZFS POSIX layer).

apropos man posted:

What about when we finally get native encryption on Linux? It can't be that far off, can it? I'm using LUKS underneath my ZFS mirror (well, one of them) but it'll be much easier to set up a new array when encryption is standard. I expect that the difference between running a ZFS array with LUKS underneath it and running the same ZFS array on the same system but using ZFS native encryption will be slightly less load on the CPU? Since you're dealing with one less layer of abstraction?
I have no fucking idea how it's going to work on ZFS on Linux, with one of Linus' lieutenants shitting all over the work that Brian Behlendorf especially, but many others as well, have done. In all other OSes where ZFS is implemented, the crypto primitives get handed off to either OpenCrypto, OpenSSL/LibreSSL/other crypto frameworks, or in some instances plain hand-written assembly code - but that's not possible after Greg had his hissy-fit, so who the fuck knows?


EDIT: As to the ideas for how to do databases better on ZFS, Thomas Munro (long-time PostgreSQL developer and recent FreeBSD developer) has a lot of ideas that he didn't get through in his FOSDEM presentation on Walking Through Walls.

D. Ebdrup fucked around with this message at 22:34 on Feb 19, 2019

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

H110Hawk posted:

Unless there is some kind of issue where ZFS's writes down to the LUKS layer aren't block-aligned, thus causing extra encryption work, it should be a nearly transparent change from a CPU utilization perspective. Most of the work is the AES math, not the abstraction layer.

I'm guessing that if it's well coded then you may actually see a benefit from ZFS actually doing some of the decision-making in encrypting the blocks, too. So a well-implemented encryption could be marginally faster than just writing the blocks without knowing that they're being encrypted afterwards? Maybe ZFS itself could avoid running certain blocks through a cypher twice, if it knows the same block is going to two different drives, etc. Actually, that's a bad example. I was just thinking of ways that ZFS could optimise the encryption when it's in charge of doing it.

apropos man fucked around with this message at 22:35 on Feb 19, 2019

H110Hawk
Dec 28, 2006


apropos man posted:

I'm guessing that if it's well coded then you may actually see a benefit from ZFS actually doing some of the decision-making in encrypting the blocks, too. So a well-implemented encryption could be marginally faster than just writing the blocks without knowing that they're being encrypted afterwards? Maybe ZFS itself could avoid running certain blocks through a cypher twice, if it knows the same block is going to two different drives, etc.

I don't know enough about the implementations to comment much further than I have, other than to say block-aligned writes into LUKS will result in one pass through the cipher, which is the "hard" part of encryption. Everything else is a rounding error and likely to be paper/theoretical improvements.

IOwnCalculus
Apr 2, 2003





Zorak of Michigan posted:

Is ZFS any slower than the alternatives? I generally expect it to be faster, since it knows what's on the disk, and only has to rebuild actual data. Other systems that separate out the disk management from the file system end up having to rebuild every last sector.

For a nearly full vdev in my experience it's a bit slower than disk speed. I did all of my most recent disk swaps without physically removing the old disk first, and the rebuild on my first vdev with the most data on it was considerably slower than, say, doing a sequential copy of the source disk to the destination. But:

D. Ebdrup posted:

At least ZFS is intelligent when you have more than one vdev and only operates on the vdev that actually has any replacement going on.

Yes, this is very true. Swapping disks on my most lightly populated vdev with a short write history was very quick.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

H110Hawk posted:

I don't know enough about the implementations to comment much further than I have, other than to say block-aligned writes into LUKS will result in one pass through the cipher, which is the "hard" part of encryption. Everything else is a rounding error and likely to be paper/theoretical improvements.

You mean like when you create a partition to be aligned with the blocks on a disk? I can't recall if I ever considered whether my LUKS is aligned to my ZFS array, which was created over a year ago.
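
If I understand it right, the check itself is just arithmetic - something like this, with made-up numbers rather than anything read off my actual setup:

code:

# Toy alignment check: partition start and LUKS payload offset are both given
# in 512-byte sectors (the units fdisk and LUKS1's 'cryptsetup luksDump' use),
# and the question is whether the decrypted data starts on a 4 KiB boundary.
SECTOR_BYTES = 512
ALIGN_BYTES = 4096

def data_is_aligned(partition_start_sector: int, payload_offset_sectors: int) -> bool:
    data_start = (partition_start_sector + payload_offset_sectors) * SECTOR_BYTES
    return data_start % ALIGN_BYTES == 0

print(data_is_aligned(2048, 4096))  # True: both values are multiples of 8 sectors
print(data_is_aligned(63, 4096))    # False: old-style partition start at sector 63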

eames
May 9, 2009



I'm intrigued by the little ARM NAS posted earlier and found this SBC based on the Rockchip RK3399 (64-bit Dual Core Cortex-A72 + Quad Core Cortex-A53). SoC datasheet here

https://www.friendlyarm.com/index.p...&product_id=234
https://www.friendlyarm.com/index.p...&product_id=254

It's cheaper too, $90 for the 2GB version with the four SATA ports but no ECC RAM. I'm just not sure if these things run "vanilla" arm64 linux distributions.

D. Ebdrup posted:

The system I have now, which is properly optimized for fileserving and is doing little else, is a dual-core 1.3GHz AMD chip, and it saturates 1Gbps LAN and WAN with NFSv4.

I'm curious, what system is that?

D. Ebdrup
Mar 13, 2009



eames posted:

I'm intrigued by the little ARM NAS posted earlier and found this SBC based on the Rockchip RK3399 (64-bit Dual Core Cortex-A72 + Quad Core Cortex-A53). SoC datasheet here

https://www.friendlyarm.com/index.p...&product_id=234
https://www.friendlyarm.com/index.p...&product_id=254

It's cheaper too, $90 for the 2GB version with the four SATA ports but no ECC RAM. I'm just not sure if these things run "vanilla" arm64 linux distributions.


I'm curious, what system is that?
The RK3399 chips are generally very well-regarded (they're used by PINE64 in the ROCKPro64, for which FreeBSD support is coming).
Unfortunately there is a BIG gap between SBCs and high-end server systems (like the ThunderX2), wherein an ARM competitor for the MicroServer would fit rather excellently - especially if it used a standard mini-ITX or micro-ATX board size, standard memory sockets, and 4-12 SATA ports.
As it is, SBCs like the RPi3 do work for ZFS - mine runs an ELK stack with NGINX hosting the web parts of that - but it's not exactly fast, and I wouldn't consider doing anything that requires more writing than an ELK stack for a small network does, because it's very limited in the number of IOPS it can handle, and there's NO room for ARC.

My current system is the Microserver G7 with an AMD Neo2 N36L dual-core @ 1.3GHz, 16GB ECC memory, an em(4)-based NIC with an 82575L chip (instead of the Broadcom BCM5723 chip which had problems back in FreeBSD 9.x), and the HP Microserver G7 Remote Access Card, which contains an ASPEED AST2400 chip that connects to the motherboard's SuperIO chip and lets you do IPMI with vKVM and vISO.

EDIT: I've been looking at that NanoPI + HAT solution you linked (because I apparently didn't look close enough the first time), and that does look like a pretty neat solution.

EDIT 2: Wasn't there talk of AMD offering some competition to ARM in the embedded storage/network segment with EPYC? It looks like that's closer to happening.

D. Ebdrup fucked around with this message at 16:36 on Feb 20, 2019

Kibayasu
Mar 28, 2010



So the business I work for likely needs to move to more central storage soon and it seems like NAS is the way to go for what we need. Since I'm not familiar with this area I figured I'd type my thoughts out.

We're not really dealing with massive amounts of data - I think our entire business archive is less than 20 GB - nor are we sending data around the world. It's 3-4 people who only need local access when they're at the office, so I don't think we've moved into "get professional help" territory yet. The major problem to solve is just making sure someone who needs to edit something has the correct version of the file they're working on, without needing to worry that someone else updated it but didn't save properly.

After looking at the off-the-shelf options at various retailers it seems like the basic options, mostly Western Digital stuff, should do what we need - perhaps with backing up to a flash drive instead of a RAID, since a lot of the basic ones have single drives and multiple drives are just way overkill - but one of the remaining questions is if they can just be used like a hard drive: connect them, set up a shortcut, and just use them like any other folder on your computer instead of whatever packaged software nearly all of them advertise. Sorry if that's pretty basic and of course they'd be able to do that; I'm just not familiar at all with the networking side of things beyond what Windows walks you through. Most of it seems to talk about photos and videos, so presumably it's just a way to organize those things, but we literally just need the folders we already have, just not tied to a single person's computer. Any suggestions would be very helpful.

H110Hawk
Dec 28, 2006


Kibayasu posted:

So the business I work for likely needs to move to more central storage soon and it seems like NAS is the way to go for what we need. Since I'm not familiar with this area I figured I'd type my thoughts out.

We're not really dealing with massive amounts of data - I think our entire business archive is less than 20 GB - nor are we sending data around the world. It's 3-4 people who only need local access when they're at the office, so I don't think we've moved into "get professional help" territory yet. The major problem to solve is just making sure someone who needs to edit something has the correct version of the file they're working on, without needing to worry that someone else updated it but didn't save properly.

After looking at the off-the-shelf options at various retailers it seems like the basic options, mostly Western Digital stuff, should do what we need - perhaps with backing up to a flash drive instead of a RAID, since a lot of the basic ones have single drives and multiple drives are just way overkill - but one of the remaining questions is if they can just be used like a hard drive: connect them, set up a shortcut, and just use them like any other folder on your computer instead of whatever packaged software nearly all of them advertise. Sorry if that's pretty basic and of course they'd be able to do that; I'm just not familiar at all with the networking side of things beyond what Windows walks you through. Most of it seems to talk about photos and videos, so presumably it's just a way to organize those things, but we literally just need the folders we already have, just not tied to a single person's computer. Any suggestions would be very helpful.

Windows? Mac?

RAID is about availability, and to a lesser extent durability, not versioning or "overkill." I would grab a 2-disk Synology and slap a pair of 3ish TB disks in there in RAID 1. Set up an offsite backup (Backblaze?) on a schedule, versioned. You could also start doing desktop backups to these disks, which is why I suggested "large" ones. There also isn't much price break going down in size. Don't send those to Backblaze unless you allow people to save files to their desktop, in which case redirect the folders to the fileserver.

The issue is going to be knowing if someone saved or not. Windows is pretty good about complaining if someone else has a given file open. This might be all you need. You can test this out today by sharing a file on computer A, then opening it on computer B, making a change, not saving it, then opening it on computer C. If it's Word/Excel it will probably complain. Unfortunately this works on an application-by-application basis. If you need something more technical than that, you're suddenly into something terrible like SharePoint.

Alternatively, I don't know how complicated these documents are, but if Google Docs/Sheets level functionality is all you need then I would strongly suggest that.

H110Hawk fucked around with this message at 23:49 on Feb 20, 2019

astral
Apr 26, 2004



Kibayasu posted:

but one of the remaining questions is if they can just be used like a hard drive: connect them, set up a shortcut, and just use them like any other folder on your computer instead of whatever packaged software nearly all of them advertise. Sorry if that's pretty basic and of course they'd be able to do that; I'm just not familiar at all with the networking side of things beyond what Windows walks you through. Most of it seems to talk about photos and videos, so presumably it's just a way to organize those things, but we literally just need the folders we already have, just not tied to a single person's computer. Any suggestions would be very helpful.

Yes. What you are looking for is a NAS that will make folders/volumes available as a Samba (SMB) share, which realistically all the off-the-shelf products you're looking at will do.

edit:

quote:

The major problem to solve is just making sure someone who needs to edit something has the correct version of the file they're working on, without needing to worry that someone else updated it but didn't save properly.

A NAS isn't really going to solve this for you if two people are trying to edit the same file at the same time, as mentioned by H110Hawk. You can at least configure file versioning so that when someone does do that, it's not lost forever.

astral fucked around with this message at 23:48 on Feb 20, 2019

Kibayasu
Mar 28, 2010



I don't think editing a file at the same time will be a likely issue. I was thinking more along the lines of two people having different local files and making different edits to them, and now there are 2 versions. A lot of the work is done on templates that get saved individually, the work that isn't is generally the purview of 1 person. If they're not there they won't be editing it.

What I'm hoping for is something I can just connect to the router, doing whatever set-up it requires to connect to the network, and having the hard drive appear in the Network list. Done. If it can be easy that sounds great. If it should be that easy and isn't for whatever reason the day of that's another problem for another day.

H110Hawk posted:

Windows? Mac?

It'll be 2 windows and 2 macs.

H110Hawk
Dec 28, 2006


Kibayasu posted:

I don't think editing a file at the same time will be a likely issue. I was thinking more along the lines of two people having different local files and making different edits to them, and now there are 2 versions. A lot of the work is done on templates that get saved individually, the work that isn't is generally the purview of 1 person. If they're not there they won't be editing it.

What I'm hoping for is something I can just connect to the router, doing whatever set-up it requires to connect to the network, and having the hard drive appear in the Network list. Done. If it can be easy that sounds great. If it should be that easy and isn't for whatever reason the day of that's another problem for another day.

It'll be 2 windows and 2 macs.

Yeah, a Synology will get you there. You are then going to have a "layer 8" problem of getting people to use the Synology, and it being your fault when it breaks, etc. You will need to map the disk the first time, but hitting "reconnect on login" on the Windows machines is all it takes to make it persist. Macs I don't know how to make it persist on, but I bet Google does.

If you can hack it, I would try to redirect the home directories to the Synology. That way when it inevitably breaks the whole company grinds to a halt.

Something like this: https://www.synology.com/en-us/products/DS218%2B

Alternatively, Dropbox or Google Drive.

H110Hawk fucked around with this message at 01:17 on Feb 21, 2019

Moey
Oct 22, 2010

I LIKE TO MOVE IT


Is everyone still avoiding all 3TB drives due to failure rate?

Sniep
Mar 28, 2004

All I needed was that fatty blunt...



King of Breakfast


Fun Shoe

Moey posted:

Is everyone still avoiding all 3TB drives due to failure rate?

Am I wrong to assume that any odd-TB drives are that way due to one side of a platter already having failed in initial QA?

mystes
May 31, 2006



So I have backups set up with my QNAP NAS now and it seems to be working pretty well, but there are two other annoying limitations with the QNAP software:

1) If you run an actual rsync daemon you can only set up one user for it.

2) The built-in SSH server only lets administrators log in, and this can't be changed.

This is annoying because I want to use rsync to back up my desktop, and it seems logical to have it log in as a user that only has access to a specific folder. The somewhat lame solution is to install third-party software (Entware), which provides an opkg package manager, and use that to install OpenSSH, after which normal users can log in using SSH public-key authentication.

Basically it's dumb because the QNAP software seems really nice, but they've put these random artificial limitations into it and it seems like I'm wasting time fighting against it. I think if I was doing this again I would probably just go for a computer running Linux at this point.
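
For what it's worth, once a non-admin user can SSH in, the desktop side of the backup is nothing fancy - roughly something like this, with the host, user, and paths as placeholders rather than my real setup:

code:

# Shell out to rsync over SSH from the desktop being backed up. This assumes
# the NAS-side user can log in via public key and only has access to the
# backup folder; everything here is a placeholder, not a real config.
import subprocess

SRC = "/home/me/"                                   # local directory to back up
DEST = "backupuser@qnap.local:backups/desktop/"     # hypothetical NAS target

subprocess.run(
    [
        "rsync",
        "-a",           # archive mode: recurse, keep permissions and times
        "--delete",     # mirror deletions so the copy matches the source
        "-e", "ssh",    # go over SSH instead of the rsync daemon
        SRC,
        DEST,
    ],
    check=True,
)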

Edit: I guess it's still decent for $170 though.

mystes fucked around with this message at 03:23 on Feb 21, 2019

IOwnCalculus
Apr 2, 2003





Sniep posted:

Am I wrong to assume that any odd-TB drives are that way due to one side of a platter already having failed in initial QA?

You're assuming that the platters are in whole TB increments, which might not actually be the case.

As far as 3TB drives, most people here avoid them today because we're looking for the lowest cost per TB, and that's either 8TB or 10TB right now.

At any rate my 3TB reds have a stupid number of hours on them and still don't have any errors across them all.

H110Hawk
Dec 28, 2006


The 1.5TB debacle was largely an isolated incident, kinda like the IBM Desk/Death Stars with their click of death.

On my 3TB disks: one bad sector in 5 years across 5 disks. Not really anything that keeps me up at night.

taqueso
Mar 8, 2004









Fun Shoe

Maybe they have too many good ones and you can 'unlock' more space, like some old video cards or CPUs that were originally lower-quality bins. I'm not holding my breath.

Heners_UK
Jun 1, 2002


Kibayasu posted:

So the business I work for .....

We're not really dealing with massive amounts of data - I think our entire business archive is less than 20 GB - nor are we sending data around the world. It's 3-4 people who only need local access when they're at the office, so I don't think we've moved into "get professional help" territory yet. The major problem to solve is just making sure someone who needs to edit something has the correct version of the file they're working on, without needing to worry that someone else updated it but didn't save properly.

I'm going to be run out of town for saying this, so let me be clear this is a WORK answer: have you considered OneDrive, Google Drive, or Dropbox Pro, etc.? Thinking of OneDrive, it might be included in an O365 sub.

Atomizer
Jun 24, 2007

Bote McBoteface. so what


Moey posted:

Is everyone still avoiding all 3TB drives due to failure rate?

No, the one drive to avoid was the ST3000DM001.

Sniep posted:

Am I wrong to assume that any odd-TB drives are that way due to one side of a platter already having failed in initial QA?

No, you could easily have a drive with, for example, 3x 1 TB platters. 3 TB HDDs were pretty common, 5 TB less so, and anything odd above that is nonexistent AFAIK, probably due to available platter densities and drive capacities (IIRC the drives with like 6 or more platters are the helium-filled ones.)*

*That's for 3.5" drives; 2.5" HDDs have more limited space for platters depending on their z-height. IIRC you can find 2 platters max in a 7 mm height drive, which corresponds to 2 TB max, in either the WD Blue or the Seagate Barracuda/FireCuda, which are SMR; you might find 3 platters in a 9.5 mm drive, but the maximum capacity I'm aware of is still 2 TB (e.g. the Toshiba L200.) It's not until you get to the 15 mm drives that you can find 3-5 TB capacities, and those don't fit in laptops (they're for external enclosures and servers and such.)

Kibayasu
Mar 28, 2010



Heners_UK posted:

I'm going to be run out of town for saying this, so let me be clear this is a WORK answer: have you considered OneDrive, Google Drive, or Dropbox Pro, etc.? Thinking of OneDrive, it might be included in an O365 sub.

Those are still options, yes. Personally I've always been a fan of keeping things more physical and close at hand. The other advantage is that just switching to what amounts to another hard drive keeps things simpler - everything will look the same, and files are just in a slightly different directory.

IOwnCalculus
Apr 2, 2003





IOwnCalculus posted:

You're assuming that the platters are in whole TB increments, which might not actually be the case.

As far as 3TB drives, most people here avoid them today because we're looking for the lowest cost per TB, and that's either 8TB or 10TB right now.

At any rate my 3TB reds have a stupid number of hours on them and still don't have any errors across them all.

Lesson learned: Don't post about them. One of them just shit a brick. Good thing I've got one spare already in there.

Gay Retard
Jun 7, 2003



Atomizer posted:

No, the one drive to avoid was the ST3000DM001.

Literally the only hard drive I’ve had fail on me over the past 20 years.

MagusDraco
Nov 11, 2011

even speedwagon was trolled


Gay Retard posted:

Literally the only hard drive I’ve had fail on me over the past 20 years.

Meanwhile I've had five 8TB WD Reds and two 8TB Seagate IronWolfs come dead (only one of the WD Reds really came dead) or have to reallocate some number of sectors over the past few years.

My luck is shit but could be worse. The replacement 8TB IronWolf just went "hey, I had to reallocate 8 sectors" like one week before Amazon's return period for the replacement ended. Let's hope the replacement for the replacement lasts more than 4 weeks.

edit: Like, of these 7 drives over 2 to 3 years: it was an order of 4 WD Reds (1 DOA that wouldn't spin, 1 had bad sectors after badblocks, the other 2 were fine for a couple of years), plus 2 separately RMAed replacement WD Reds for the two dead ones. And then of that new set of 4, 3 of them had to reallocate sectors over the course of 2ish years. Their warranties end around September/October 2019 and I've RMAed one of them. It got replaced with an EFAX WD Red that runs 5-10 degrees Celsius hotter than the old helium WD Reds (and required putting a fan on it to stop it from sitting at 50 Celsius while idle). I replaced one of the three WD Reds with an HGST Deskstar NAS and that's been fine. The other has been replaced with those IronWolfs that have gotten several reallocated sectors before the Amazon 30-day return period was up. Twice.

MagusDraco fucked around with this message at 21:40 on Feb 21, 2019

redeyes
Sep 14, 2002
I LOVE THE WHITE STRIPES!

This is why I love those HGST NAS drives. I've sold maybe 100 over the last 5-6 years and none of them have come back failed that I know of.

eames
May 9, 2009



Has anybody played with minio yet? The feature set looks intriguing. Checksumming, bitrot protection, compression, encryption and redundancy using JBOD with any filesystem.
Their CLI client supports basic upload/mirroring functions, but it's fully S3-compatible.
I'm trying to figure out if/how this could be useful for a personal NAS.
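
To get a feel for it I sketched what talking to it from Python with the official minio client could look like - the endpoint, credentials, and bucket here are placeholders, I haven't actually deployed any of this:

code:

# Minimal sketch against a hypothetical LAN-only minio instance using the
# 'minio' Python SDK. Any S3-compatible tool (rclone, restic, etc.) could
# talk to the same endpoint.
from minio import Minio

client = Minio(
    "nas.local:9000",       # assumed minio endpoint
    access_key="CHANGEME",
    secret_key="CHANGEME",
    secure=False,           # no TLS for this toy example
)

if not client.bucket_exists("backups"):
    client.make_bucket("backups")

# Upload a local file as an object; the server side handles the erasure
# coding and bitrot checks.
client.fput_object("backups", "photos/2019-02.tar", "/tmp/2019-02.tar")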

eames fucked around with this message at 15:21 on Feb 22, 2019

D. Ebdrup
Mar 13, 2009



eames posted:

Has anybody played with minio yet? The feature set looks intriguing. Checksumming, bitrot protection, compression, encryption and redundancy using JBOD with any filesystem.
Their CLI client supports basic upload/mirroring functions, but it's fully S3-compatible.
I'm trying to figure out if/how this could be useful for a personal NAS.
One thing immediately sets off alarm bells for me:

the developers of minio posted:

HighwayHash can be used to prevent hash-flooding attacks or authenticate short-lived messages. Additionally it can be used as a fingerprinting function. Note that HighwayHash is not a general purpose cryptographic hash function (such as Blake2b, SHA-3 or SHA-2) and should not be used if strong collision resistance is required.
So it's not designed for what they're using it for, is what this is implicitly saying.
Other than that, I have to wonder what sort of hardware requirements it has, because it's using Reed-Solomon codes, which are very well-established but very computationally expensive: they involve matrix inversions in finite fields, the sort of thing you'd typically need serious compute for if you weren't doing it in hardware with shift registers (which is what ECC memory, CD players and DVD players do, as they all use Reed-Solomon codes).
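
For anyone who hasn't bumped into Reed-Solomon outside of that context, here's a toy of the general idea using the third-party reedsolo Python package - it says nothing about minio's own implementation or its performance, it's just the concept of parity symbols letting you repair damaged data:

code:

# Add parity bytes to a message, corrupt it, and recover the original.
from reedsolo import RSCodec

rsc = RSCodec(10)                       # 10 parity bytes: corrects up to 5 byte errors
original = b"zfs already does all of this, but ok"
encoded = bytearray(rsc.encode(original))

encoded[3] ^= 0xFF                      # simulate bitrot in two places
encoded[17] ^= 0xFF

decoded = rsc.decode(encoded)
# Newer reedsolo versions return a (message, message+ecc, errata) tuple,
# older ones return just the message; handle both.
recovered = bytes(decoded[0] if isinstance(decoded, tuple) else decoded)
assert recovered == original
print("recovered:", recovered)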

D. Ebdrup
Mar 13, 2009



I've been trying to read through their source code (which it felt like I had to jump through hoops to get to from their documentation, but that's just a minor annoyance) to understand a bit of what they're doing, and one thing that's clear to me is that it's full of third-party dependencies that are pulled in at build time via HTTPS, which seems like a complete mess but I guess is par for the course with Go?
Other than that, the tree structure and source code seem almost intentionally obfuscated, since the former is not laid out in a hierarchy that makes any kind of sense or that I've seen before, and the code seems to follow its own brand of styling that's not similar to anything anyone else is doing?

I mean, I know I'm not a programmer so it's bound to be a bit uphill, but I'm not sure I wanna trust my data to code that's this messy. We'll see what actual programmers, whose previous work speaks for itself, have to say about it - and what happens once it's actually been production-tested.
I ain't touching it until then, that's for sure.

EDIT:

Minio developers posted:

  • Minio does not support encryption with compression because compression and encryption together enables room for side channel attacks like CRIME and BREACH
  • Minio does not support compression for Gateway (Azure/GCS/NAS) implementations.
Firstly, they're wrong about compression and encryption being incompatible; their source for this claim is the attack against HTTPS using HTTP compression, but that's just a flaw in a given implementation - not a problem with the idea of implementing compression and encryption together.
Secondly, what the fuck is the point of it then?! Am I going mad, or does this feel like amateur hour at a comedy club?
Why is all of their documentation outside of their own source code (which is unreadable and nigh-obfuscated) about deploying minio, instead of describing how it works in anything but the broadest possible sense with a lot of handwaving?

D. Ebdrup fucked around with this message at 20:18 on Feb 22, 2019

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast


A pointless anecdote about UnRAID: I just replaced a 3TB drive with a 10TB one, and it took 22 hours to rebuild parity (total usable size went from 38TB to 45TB, 8 drives including parity) and worked fine, with no fuss.

spincube
Jan 31, 2006

I spent so I could say that I finally figured out what this god damned cube is doing. Get well Lowtax.


Grimey Drawer

I feel like this is a solved problem, but here goes: what's the best way of syncing a directory on my Synology with a directory on my Windows 10 PC? I can't believe this isn't a Windows feature already.

So far I've been playing with Syncthing - via a Docker instance on the NAS, and the SyncTrayzor application on my PC - but it's already choked on a folder that's now causing me grief with sync issues. My aim is to have NAS\Pictures map to (Windows username)\Pictures, NAS\Documents to (Windows username)\Documents, and so on, so Duplicati can catch everything in NAS\ and back it all up to a cloud service overnight.

[e] phrasing

spincube fucked around with this message at 12:30 on Feb 23, 2019

thiazi
Sep 27, 2002


spincube posted:

I feel like this is a solved problem, but here goes: what's the best way of syncing a directory on my Synology with a directory on my Windows 10 PC? I can't believe this isn't a Windows feature already.

So far I've been playing with Syncthing - via a Docker instance on the NAS, and the SyncTrayzor application on my PC - but it's already choked on a folder that's now causing me grief with sync issues. My aim is to have NAS\Pictures map to (Windows username)\Pictures, NAS\Documents to (Windows username)\Documents, and so on, so Duplicati can catch everything in NAS\ and back it all up to a cloud service overnight.

[e] phrasing

Sounds like you are looking for junctions or some Group Policy voodoo to point directories elsewhere, not syncing. Nothing about mapping different folders requires syncing.
