Atomizer
Jun 24, 2007

Bote McBoteface. so what


eames posted:

The drive I bought was the relatively new Lacie USB-C mobile drive.
Mine is the thicker 4TB Version, I strongly suspect it has a ST4000LM024 inside but all the firmware is rebranded to Lacie. There is some controversy around this because the official datasheet lists this as a PMR drive but it behaves nothing like one. Seagate support got cagey when I asked for the type/model of the drive inside and only told me not to worry because SMR is great!
I was in a pinch, needed a large USB powered drive and this was all the local Apple Store had in stock, so it is what it is.

I’m extra careful with keeping versioned backups of that drive but so far it works fine and gives me SSD-like performance when it hits the cache, which happens quite frequently.

Ah ok, I thought for some reason that you were referring to one of the 2 TB 7 mm drives I had mentioned. Otherwise, I'm familiar with those 4 & 5 TB 15 mm drives. Seagate is indeed pretty cryptic about the use of SMR due to [undue] consumer backlash, although at this point, considering how widespread it seems to be within their product line, I'd just go ahead and assume all their drives use it. It's even more confusing because technically the drives are still PMR; SMR just means "shingled" PMR (the shingling is separate from the old longitudinal-vs-perpendicular distinction).

The Gunslinger
Jul 24, 2004

Do not forget the face of your father.

Fun Shoe

priznat posted:

Did unraid ever fix the issue with scheduled parity checks not happening? I don’t get why they aren’t, the crontab looks correct. I just click the check now button every week or so.

It works perfectly for me and I get the emails each time. I use a third party plugin to update my dockers every week automatically but that one is sometimes dodgy, I need to sort that out at some point.

Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.




Grimey Drawer

Starting to put together parts for my next desktop build, thinking of going SSD for the OS/programs and three 8 or 10 TB drives in RAID for storage. What's the go-to Windows program to do that (or is the built in Storage Spaces good enough)? Is there any way to tell what kind of speed hit that would be vs single drives, still suitable for games etc?

Ursine Catastrophe
Nov 9, 2009



Rocket jumping?
That sounds dangerous...





Dinosaur Gum

I don't know if this counts as a "NAS" question per se, but is there a decent cross-platform dropbox replacement that I can tie into my local NAS floating around anywhere? Normally I'd jump straight to "scheduled rsync tasks" but A. that doesn't really cover mobile devices, and B. something that I could set up fairly cleanly and have a minimum of fuss for new/replacement devices would be primo. Optimally it'd be a "syncs when on the network, retains the local copy offline" kind of situation.

Atomizer
Jun 24, 2007

Bote McBoteface. so what


Takes No Damage posted:

Starting to put together parts for my next desktop build, thinking of going SSD for the OS/programs and three 8 or 10 TB drives in RAID for storage. What's the go-to Windows program to do that (or is the built in Storage Spaces good enough)? Is there any way to tell what kind of speed hit that would be vs single drives, still suitable for games etc?

Definitely put the OS on an SSD. A single modern HDD is just fine for games; do you explicitly need RAID? Otherwise it's just another point of failure. For non-mission-critical use, (e.g. for games, and/or for data that are properly backed up elsewhere,) Windows Storage Spaces seems functional enough the little bit I played around with it. The performance impact can be positive or negative depending on the RAID implementation, but I tested a striped 3-drive setup (with old, throwaway drives) and it pleasantly aggregated their R/W performance. But again, if 3 separate HDDs would work for you then I'd just use them like that; it makes them easier to back up, and replace, or move around, etc.

Heners_UK
Jun 1, 2002


Ursine Catastrophe posted:

but is there a decent cross-platform dropbox replacement that I can tie into my local NAS floating around anywhere?

NextCloud has loud, strident fans who'll tell you to use it given half a chance.

SyncThing is what I'm using on Linux, Windows and Android.
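If it helps, here's roughly what the NAS side of Syncthing looks like on a Debian-ish Linux box (package and unit names are the stock ones; the username is just a placeholder):
code:
# install from the distro repo and run the per-user service
sudo apt install syncthing
sudo systemctl enable --now syncthing@youruser
# web UI defaults to http://localhost:8384 - add your NAS folders there and pair devices
The Android app then handles the phone side; devices sync whenever they can see each other and keep their local copies available offline.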

BurgerQuest
Mar 17, 2009



I just set up Storage Spaces with 4 disks (3 + parity) and the write performance with parity is about 230 MB/s testing with winsat. In my case I don't really care about write performance so this is OK for me, but it should be noted.
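For anyone who wants to reproduce the same quick-and-dirty numbers, the winsat runs are basically just this, from an admin prompt, with the drive letter swapped for wherever your space is mounted:
code:
winsat disk -seq -write -drive d
winsat disk -seq -read -drive d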

Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.




Grimey Drawer

Atomizer posted:

Definitely put the OS on an SSD. A single modern HDD is just fine for games; do you explicitly need RAID? Otherwise it's just another point of failure. For non-mission-critical use, (e.g. for games, and/or for data that are properly backed up elsewhere,) Windows Storage Spaces seems functional enough the little bit I played around with it. The performance impact can be positive or negative depending on the RAID implementation, but I tested a striped 3-drive setup (with old, throwaway drives) and it pleasantly aggregated their R/W performance. But again, if 3 separate HDDs would work for you then I'd just use them like that; it makes them easier to back up, and replace, or move around, etc.

My initial thought was it would be more convenient to have one big blob of storage rather than multiple drives, while also building in a little bit of fault tolerance because I'm super lazy about actually uploading stuff to my NAS. Is using parity-drive RAID for daily storage introducing all the drawbacks while eliminating all the benefits?

Atomizer
Jun 24, 2007

Bote McBoteface. so what


Takes No Damage posted:

My initial thought was it would be more convenient to have one big blob of storage rather than multiple drives, while also building in a little bit of fault tolerance because I'm super lazy about actually uploading stuff to my NAS. Is using parity-drive RAID for daily storage introducing all the drawbacks while eliminating all the benefits?

Having a single volume is only more convenient if you need that 20 or 30 TB for something where individual drives wouldn't work, and either you don't need backups (e.g. you're just storing games or something unimportant and replaceable) or you already have a sufficient backup solution (remote, or are you really going to back up to another 20-30 TB array?). I could imagine a single volume being somewhat more convenient for games, although 20+ TB of games sounds pretty rare, and it's not like individual drives would be a hindrance anyway (Steam, for example, will let you install to multiple drives and just presents them all in a single list). Similarly, if you were hosting a media server and had to manually search for files across multiple HDDs, I could see that being annoying, but if you're using something like Plex it will automatically sort your stuff for you regardless of the number of drives.

If you did something like RAID5, then sure, your array would be intact if you lost a drive, but then you'd have to replace that drive quickly and hope nothing bad happens during the rebuild. Basically, it depends exactly what you're trying to accomplish here; if you don't specifically need RAID (or spanning, or whatever) then you're probably better off with the individual drives. It may very well be that the extra complexity isn't worth any advantages you'd otherwise gain.

I just booted up that backup gaming desktop I mentioned with the 3-drive array to update Steam. The only reason I set them up that way is because it's an older, backup system, the data on the drives isn't important (just games that are available elsewhere), and it put to use some old drives that wouldn't have a purpose otherwise (although I could use them in my Steam content caching server, because that's a perfect use for throwaway drives). The drives are 10+ year old 250 GB SATA HDDs, so you can imagine they aren't particularly useful individually (especially considering you can of course get single drives with many times the capacity and better performance on top of that); however, 750 GB is enough for quite a few of the older games that would run just fine on that system. Otherwise, I wouldn't bother; every other PC I use has individual drives because I have no need for any kind of array.

Crunchy Black
Oct 24, 2017

CASTOR: Uh, it was all fine and you don't remember?
VINDMAN: No, it was bad and I do remember.




Atomizer posted:

good advice

Also, just in case someone hasn't seen it, Lee Hutchinson of ARS did an incredible deep dive, as he is wont to do, of building a Steam Cache Server to alleviate a lot of headache if you already have a NAS and do have some sort of failure.
https://arstechnica.com/gaming/2017...andwidth-blues/

Enos Cabell
Nov 3, 2004



Crunchy Black posted:

Also, just in case someone hasn't seen it, Lee Hutchinson of ARS did an incredible deep dive, as he is wont to do, of building a Steam Cache Server to alleviate a lot of headache if you already have a NAS and do have some sort of failure.
https://arstechnica.com/gaming/2017...andwidth-blues/

I might have to do this. I've been using the hilariously old and outdated backup/restore feature in Steam and storing those on my server. It's so old it still defaults to splitting things into 650mb CD sized chunks.

Atomizer
Jun 24, 2007

Bote McBoteface. so what


Crunchy Black posted:

Also, just in case someone hasn't seen it, Lee Hutchinson of ARS did an incredible deep dive, as he is wont to do, of building a Steam Cache Server to alleviate a lot of headache if you already have a NAS and do have some sort of failure.
https://arstechnica.com/gaming/2017...andwidth-blues/

I had explored this but went with the (somewhat simpler) official Steam content server (intended for cyber cafes or LAN events). You create two new Steam accounts (one for the "company" and one for the "site") and after a quick setup you can just use the caching feature. Without necessarily going into too much detail at this point (because this feels off-topic, unless someone wants to know more here), it works as expected, when it works....

Enos Cabell posted:

I might have to do this. I've been using the hilariously old and outdated backup/restore feature in Steam and storing those on my server. It's so old it still defaults to splitting things into 650mb CD sized chunks.

The weird thing about the Steam backup tool is that it seems to take way too long to both back up and restore, especially compared to just moving the files manually, and sometimes it restores a fresh, up-to-date game installation and then still needs to download a significant amount of data. I find that it's generally faster to just move existing game files over manually, although if you move them to another PC's Steam library it won't see the game immediately; you have to try to install it to that same location and then it'll find the existing files.

You seem to just want to back up your existing games, which is fine, but isn't really the point of the caching server implementations. Just do what I do and use a backup utility (in my case, FreeFileSync) to keep a second copy of your games up-to-date as convenient.
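If you'd rather script that than use a GUI tool, the same mirror job is a one-liner on Windows (paths here are just placeholders for your library and your backup share):
code:
robocopy "D:\SteamLibrary" "\\nas\backups\SteamLibrary" /MIR /R:1 /W:1 /MT:8
/MIR mirrors the folder including deletions, /R:1 /W:1 keep it from stalling on locked files, and /MT:8 runs multithreaded copies.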

eames
May 9, 2009



Enos Cabell posted:

I've been using the hilariously old and outdated backup/restore feature in Steam and storing those on my server. It's so old it still defaults to splitting things into 650mb CD sized chunks.

Yeah that feature is just about useless these days. Their compression algorithm seems to be stuck in HL2 days, as indicated by the default chunk size. On a modern PC setup (assuming 8 cores, SSD source/destination and >300 Mbit internet) you can download most games faster than the time it takes to back them up and that doesn't even include restoring them.
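(Back of the envelope: 300 Mbit/s is roughly 37 MB/s of incoming game data, so any backup/restore pipeline that can't sustain that, compression and disk writes included, loses the race against a plain re-download.)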


On a completely different note, the Synology DS619slim is still not out a year after the announcement; it was renamed to the DS620slim and downgraded to a dual-core J3355 instead of the previously advertised quad core.

https://www.youtube.com/watch?v=uUItcC2UiCA#t=7s

eames fucked around with this message at 09:33 on Jun 2, 2019

Ika
Dec 30, 2004
Pure insanity



Does anyone know if Amazon (EU) or anybody else ever offers good Synology sales? I now have the drives for a NAS ready to go, but I can't make up my mind between the DS418 and the larger, more expensive 5- and 6-bay enclosures, which are significantly more per bay. If there's a chance the 5- or 6-bay ones will go on sale in the next couple of months, I'm willing to wait.

Ika fucked around with this message at 11:23 on Jun 2, 2019

Tornhelm
Jul 26, 2008



I know here in AU at least, the NAS units almost never go on sale. I couldn't see any real discounts, even internationally, over an 8-9 month period (including end of financial year and Christmas sales) after my house burnt down, while I waited until I'd gotten all my other shit in order before buying one. I'm guessing it's because they're such a niche/specialised item that they don't see the point in discounting them until they're EoL and they're trying to get rid of excess stock.

SuitcasePimp
Feb 27, 2005



eames posted:


On a completely different note, the Synology DS619slim is still not out a year after the announcement; it was renamed to the DS620slim and downgraded to a dual-core J3355 instead of the previously advertised quad core.


Damn! WTF Synology, and a 6 GB RAM limit? I was waiting specifically for this to build an all-SSD NAS. I need to do it like next week, anyone got a recommendation? I need 5-6 TB of storage, the ability to run CrashPlan, heat and vibration tolerance, and survivability of at least one drive loss.

CommieGIR
Aug 22, 2006

If Godzilla can do it, you know I can deliver!

Pillbug

Picking up an HP MSA70 25-drive 2.5" storage array. It's gonna be nice, since I have an abundance of 2.5" drives, so it'll be a cheap upgrade, and the controller seems to handle fairly large drives despite HP claiming it only handles up to 900 GB SAS; it also supports SATA natively.

Tempted to get a bunch of SSDs and set up a ZFS array with SSD caching.

Progressive JPEG
Feb 19, 2003



Takes No Damage posted:

Starting to put together parts for my next desktop build, thinking of going [...] three 8 or 10 TB drives in RAID for storage.

RAID or similar disk array setups work best with a large number of moderately sized drives. More drives means more spindles to serve requests in parallel. Moderate size means the array doesn't take several days to rebuild if there's a problem. A prolonged rebuild itself induces added load onto the remaining healthy disks, which can then cause additional disks to fail, potentially taking the array with them.

An array of a small number of large disks adds complexity and risk without offering much benefit over just using the drives standalone.
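For a rough sense of scale (assuming an optimistic ~150 MB/s sustained rebuild rate): a single 10 TB member is about 10,000,000 MB, so a rebuild takes on the order of 66,000 seconds, i.e. 18-19 hours at best, and parity rebuilds on arrays still serving normal I/O routinely stretch that into days.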

Progressive JPEG fucked around with this message at 23:09 on Jun 2, 2019

CommieGIR
Aug 22, 2006

If Godzilla can do it, you know I can deliver!

Pillbug



Time for a lot of SATA SSDs

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

PMC and LSI chips workin together!

OG storage bros

Crunchy Black
Oct 24, 2017

CASTOR: Uh, it was all fine and you don't remember?
VINDMAN: No, it was bad and I do remember.




JESUS. I can't remember the last time I saw a legitimate full length card. What the fuck host are you going to put that in?

Cool pickup, though, mind if I ask price?

CommieGIR
Aug 22, 2006

If Godzilla can do it, you know I can deliver!

Pillbug

Crunchy Black posted:

JESUS. I can't remember the last time I saw a legitimate full length card. What the fuck host are you going to put that in?

Cool pickup, though, mind if I ask price?

$250, it's full of 25 x 76GB 10ks, but we'll be replacing those of course.

Yeah, it's a long card; thankfully these are full size SuperMicro boxes. The goal is to load it up with cheap SATA SSDs of the 240-480 GB variety.

Crunchy Black
Oct 24, 2017

CASTOR: Uh, it was all fine and you don't remember?
VINDMAN: No, it was bad and I do remember.




Nice! Yeah those drives would be power hogs, but that seems like a solid plan. Probably not the most dense option but a shit pile of IOPS!

CommieGIR
Aug 22, 2006

If Godzilla can do it, you know I can deliver!

Pillbug

Crunchy Black posted:

Nice! Yeah those drives would be power hogs, but that seems like a solid plan. Probably not the most dense option but a shit pile of IOPS!

I don't know if the P800 Controller supports SSD caching, but I may split the array into half SSDs and half 1-2TB SATA for density. HP claims the P800 can handle up to 900GB SAS per drive, and others have claimed up to 2TB per disk. We'll see.

Use the SSDs for VM hosting, and the SATA for storage/archiving.

Theoretically, this would give me 12TB/24TB Raw (1TB or 2TB Disks) of Archive/Backup and 3TB/6.2TB Raw for the SSDs (240 or 480GB disks) for VMs.
Might have to dig up a tape drive to back up that storage, since my current backup disk is 4TB, unless I get a 10TB USB 3.0 drive.

I have a good and functional disk-to-disk backup that rsyncs nightly.
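For anyone who hasn't set that up before, the nightly job can be as simple as a single cron entry; paths and timing below are made up, point them at your own source and backup mounts:
code:
# 03:00 every night: -a preserves ownership/permissions, --delete keeps the mirror exact
0 3 * * * rsync -aH --delete /mnt/storage/ /mnt/backup/ >> /var/log/nightly-rsync.log 2>&1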

CommieGIR fucked around with this message at 23:54 on Jun 3, 2019

Crunchy Black
Oct 24, 2017

CASTOR: Uh, it was all fine and you don't remember?
VINDMAN: No, it was bad and I do remember.




CommieGIR posted:

I don't know if the P800 Controller supports SSD caching, but I may split the array into half SSDs and half 1-2TB SATA for density. HP claims the P800 can handle up to 900GB SAS per drive, and others have claimed up to 2TB per disk. We'll see.

Use the SSDs for VM hosting, and the SATA for storage/archiving.

Theoretically, this would give me 12TB/24TB Raw (1TB or 2TB Disks) of Archive/Backup and 3TB/6.2TB Raw for the SSDs (240 or 480GB disks) for VMs

Holy hell, you changed your AV, didn't realize it was you, Commie! Hope all is well. Been following your snapchat of various Audi shenanigans.

Crunchy Black fucked around with this message at 23:55 on Jun 3, 2019

IOwnCalculus
Apr 2, 2003





Looks like it's an LSI SAS1078, so unless HP fucked something up, it should support up to (but not above) 2TB per drive.

With that said, that also means it probably is just speaking regular SAS to the array, so any newer-generation LSI controller should get you support for >2TB drives, and possibly better performance, assuming the backplane in that enclosure isn't limited to 3Gbps SAS / 1.5Gbps SATA.

Crunchy Black
Oct 24, 2017

CASTOR: Uh, it was all fine and you don't remember?
VINDMAN: No, it was bad and I do remember.




IOwnCalculus posted:

Looks like it's an LSI SAS1078, so unless HP fucked something up, it should support up to (but not above) 2TB per drive.

With that said, that also means it probably is just speaking regular SAS to the array, so any newer-generation LSI controller should get you support for >2TB drives, and possibly better performance, assuming the backplane in that enclosure isn't limited to 3Gbps SAS / 1.5Gbps SATA.

Could you not technically go with any other external SAS HBA to get around this limitation?

IOwnCalculus
Apr 2, 2003





Yeah, that's what I was thinking. Even if the MSA70 is locked to 3Gbps SAS internally on whatever SAS switch it has in there, going to a SAS2008 or newer would lift the drive size limitation. In theory, with a boatload of SSDs behind it, even limited at 3Gbps on each of the SAS links, the PCIe v1 interface between the 1078 and the system could even be a bottleneck versus a PCIe 2.0 or 3.0 interface on a later card.

I also highly doubt the 1078 supports any sort of controller-based SSD caching, but they did have the "Cachecade" feature on later cards.

CommieGIR
Aug 22, 2006

If Godzilla can do it, you know I can deliver!

Pillbug

Either way it's a vast improvement over my RAID-5'ed 1TB 7200 RPM drives.

And yeah, the P800 is PCIe, so that's a huge advantage.

Next purchase: An actual damned rack. This plastic shelving sucks, but it was all that I had at the moment:



And yes, the IBM BladeCenter works, and has power available down there, but it's so antiquated it might as well be trash.

CommieGIR fucked around with this message at 01:04 on Jun 4, 2019

KKKLIP ART
Sep 3, 2004



So I figure this is better here than in the home networking thread. I have a FreeNAS box that I want to run the UniFi controller software on in a Docker container. The FreeNAS software is 11.2-U4.1, which supports setting up a Docker container straight from the VMs tab. Only problem is, I can't connect to it because it says it's not set to UEFI (shows GRUB). Every YouTube video I see for an 11.2 Docker setup shows the setting to make it UEFI during VM setup, but in 11.2-U4 I suppose that isn't an option. Because of that, I can't connect to it, and I don't know of any default SSH password because the one I set isn't working. Any ideas or tips?

D. Ebdrup
Mar 13, 2009



KKKLIP ART posted:

So I figure this is better here than in the home networking thread. I have a FreeNAS box that I want to run the UniFi controller software on in a Docker container. The FreeNAS software is 11.2-U4.1, which supports setting up a Docker container straight from the VMs tab. Only problem is, I can't connect to it because it says it's not set to UEFI (shows GRUB). Every YouTube video I see for an 11.2 Docker setup shows the setting to make it UEFI during VM setup, but in 11.2-U4 I suppose that isn't an option. Because of that, I can't connect to it, and I don't know of any default SSH password because the one I set isn't working. Any ideas or tips?
Setup a jail, access the jail, then run the following commands:
code:
pkg install net-mgmt/unifi5
sysrc mongod_enable="YES"
sysrc unifi_enable="YES"
service mongod start
service unifi start
Then browse to https://<IP>:8443 (make sure it's https, otherwise you'll get a very cryptic set of unicode boxes, or a warning about TLS not being used - it's not entirely guaranteed which it'll be).

EDIT: With FreeBSD 12 it's even simpler, because service can now be used to enable things directly - that won't help you on FreeNAS though, unless they decide to backport that change to 11.x.
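Roughly, on a plain FreeBSD 12 box the sysrc lines above collapse into this (same service names as the package installs):
code:
# service(8) on 12.x can flip the rc.conf knobs itself
service mongod enable
service unifi enable
service mongod start
service unifi start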

Chris Knight
Jun 5, 2002

And I'm only saying this because I care.

There are a lot of decaffeinated brands on the market today that are just as tasty as the real thing.



Fun Shoe

Has anyone said FreeNAS on the LAN yet.

D. Ebdrup
Mar 13, 2009



Chris Knight posted:

Has anyone said FreeNAS on the LAN yet.
Is "FreeNAS on the LAN, beastie in the sheets" what you're going for?

Paul MaudDib
May 3, 2006

"Tell me of your home world, Usul"


I think it was a "freeman on the land" pun

If your CPU has gold pins then pirate law applies and the FBI can't enforce those pesky copyright laws!

Paul MaudDib fucked around with this message at 23:04 on Jun 5, 2019

Enos Cabell
Nov 3, 2004



If your rig is water cooled it falls under admiralty law.

Chris Knight
Jun 5, 2002

And I'm only saying this because I care.

There are a lot of decaffeinated brands on the market today that are just as tasty as the real thing.



Fun Shoe

Enos Cabell posted:

If your rig is water cooled it falls under admiralty law.

What if it's freshwater cooled and not saltwater cooled?

redeyes
Sep 14, 2002
I LOVE THE WHITE STRIPES!

You can't have tropical fish in that then.

KKKLIP ART
Sep 3, 2004



D. Ebdrup posted:

Setup a jail, access the jail, then run the following commands:
code:
pkg install net-mgmt/unifi5
sysrc mongod_enable="YES"
sysrc unifi_enable="YES"
service mongod start
service unifi start
Then browse to https://<IP>:8443 (make sure it's https, otherwise you'll get a very cryptic set of unicode boxes, or a warning about TLS not being used - it's not entirely guaranteed which it'll be).

EDIT: With FreeBSD 12 it's even simpler, because service can now be used to enable things directly - that won't help you on FreeNAS though, unless they decide to backport that change to 11.x.

This is all excellent info. Is there a way to specify if it uses eth0 or eth1? Will the unifi controller give a dang if it shares an IP with FreeNAS?

e: looks like in jail config I can specify the ip address. I'll have to fiddle with this.

E2: It was pretty painless. Seems like the Unifi software is out of date when done that way so I might fiddle with updating it. I know there was some funky stuff with looking for specific java versions. Interestingly enough, I can access my FreeNAS box on network, but that device doesn't show up on my devices list in the cloud key software, but the "cloud key" set up in FreeNAS does using the IP address from the second ethernet jack on my motherboard. It all works, just curious.

KKKLIP ART fucked around with this message at 03:48 on Jun 6, 2019

CommieGIR
Aug 22, 2006

If Godzilla can do it, you know I can deliver!

Pillbug

So I ended up removing the P800 controller and array from my Xen box and putting it into my Dell T310 and throwing ESOS on it so I could use it as a SAN.

D. Ebdrup
Mar 13, 2009



KKKLIP ART posted:

This is all excellent info. Is there a way to specify if it uses eth0 or eth1? Will the unifi controller give a dang if it shares an IP with FreeNAS?

e: looks like in jail config I can specify the ip address. I'll have to fiddle with this.

E2: It was pretty painless. Seems like the Unifi software is out of date when done that way so I might fiddle with updating it. I know there was some funky stuff with looking for specific java versions. Interestingly enough, I can access my FreeNAS box on network, but that device doesn't show up on my devices list in the cloud key software, but the "cloud key" set up in FreeNAS does using the IP address from the second ethernet jack on my motherboard. It all works, just curious.
If you want to earn extra points, you'll set up a new jail with vnet instead - all of it should be documented, but FreeNAS unfortunately makes some assumptions about your network, so don't change anything on your existing jail. Once you've got it set up, you can always migrate the config from UniFi itself.
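Purely as a sketch of what the vnet route looks like via iocage (the jail manager FreeNAS 11.2 uses under the hood) - the jail name is made up, the properties are iocage's own:
code:
# new 11.2 jail with its own vnet interface, grabbing an address over DHCP
iocage create -n unifi-vnet -r 11.2-RELEASE vnet=on bpf=yes dhcp=on boot=on
iocage console unifi-vnet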

As to updating it, packages are built from FreeBSD Ports, and someone from the FreeBSD ports team has to update the port before a new package can be built (and there are 34,000+ of the things, so it takes a little while).
What you can do is find the FreeBSD Handbook and read up on how to use ports, then set up a new jail and hand-edit the Makefile according to the FreeBSD Porter's Handbook, and fiddle with it to your heart's desire - alternatively, you can use the net-mgmt/unifi-devel port, but the issue with that is that it's restricted because it tracks the beta branch, so you'll need to manually grab the right distribution file from Ubiquiti.
Or just wait until a porter gets around to updating it; that happens pretty regularly on the non-LTS branch that you're using (I'm on the net-mgmt/unifi-lts branch on purpose).

The good thing about doing this in a new jail is that if you fuck up so royally that you can't back out any changes, you can just delete the jail and start over.
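A minimal sketch of the plain ports route (before any Makefile hand-editing), assuming a fresh jail:
code:
# grab the ports tree, then build and install the UniFi port from source
portsnap fetch extract          # first run; use "portsnap fetch update" after that
cd /usr/ports/net-mgmt/unifi5
make install clean BATCH=yes    # BATCH=yes skips the interactive option dialogs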

CommieGIR posted:

So I ended up removing the P800 controller and array from my Xen box and putting it into my Dell T310 and throwing ESOS on it so I could use it as a SAN.
Huh, never heard of ESOS. It would be interesting to make something like it on FreeBSD; it shouldn't be hard, as the CAM Target Layer can already serve storage to iSCSI initiators, there's the whole OFED stack to deal with InfiniBand, and isp(4) for Fibre Channel.
