Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



lol

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll

Nap Ghost

Actually, Kubernetes also supports rkt for its containers. As a closeted security guy, I prefer rkt's design and architecture over Docker's, but I know that going on about that is about as useful as noting all the neat features and performance Solaris has over Linux when the market winner is absolutely clear.

Also, I have zero idea what an "AWS-style" deployment even means and I've been deploying IaaS, PaaS, and SaaS stacks across like 10 companies on AWS for like 4 years now.

Mr. Crow
May 22, 2008

Snap City mayor for life


necrobobsledder posted:

Also, I have zero idea what an "AWS-style" deployment even means and I've been deploying IaaS, PaaS, and SaaS stacks across like 10 companies on AWS for like 4 years now.

"The clouds"

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

necrobobsledder posted:

Also, I have zero idea what an "AWS-style" deployment even means and I've been deploying IaaS, PaaS, and SaaS stacks across like 10 companies on AWS for like 4 years now.

Nobody else does, either, but it sounds good in a PowerPoint presentation pitch.

Ziploc
Sep 19, 2006
MX-5

So I've got freenas running on my whitebox masquerading as a server machine.

As mentioned before. IPMI and iKVM is pretty friggen neato.

I'm still dicking around with things here and there. Not sure when it will be ready for prime time yet. I have 5 random 1TB disks in it now, with 9 SATA ports left to fill.

Annoyingly, one of the Seagate disks got a bit warm in its previous service scenario, so now it's always labelled with a warning in FreeNAS.





Not the end of the world, but interesting. All disks passed a full sector test in WD Data Lifeguard before being put in this machine.

Fire Storm
Aug 8, 2004

what's the point of life
if there are no sexborgs?


I recently got a Dell PowerVault MD1200 and a matching server, cards, drives etc. The problem is connecting the two. It currently has cards for SFF-8088 SAS cables, but I don't have a cable. I found a similar looking cable, a pair of HITACHI DF-F850-SC1 3285194-A cables in a box, but they don't fit in the slot due to a little plastic tab on the cable near the release mechanism.

Does anyone know if the cables are actually compatible and it's safe to cut the tab off, or if I should just order new cables?

Pics of the Hitachi cable are in this auction on ebay

zennik
Jun 9, 2002



Fire Storm posted:

I recently got a Dell PowerVault MD1200 and a matching server, cards, drives etc. The problem is connecting the two. It currently has cards for SFF-8088 SAS cables, but I don't have a cable. I found a similar looking cable, a pair of HITACHI DF-F850-SC1 3285194-A cables in a box, but they don't fit in the slot due to a little plastic tab on the cable near the release mechanism.

Does anyone know if the cables are actually compatible and it's safe to cut the tab off, or if I should just order new cables?

Pics of the Hitachi cable are in this auction on ebay

If you just need a SAS 8088 to SAS 8088 cable, hop on Amazon:

https://www.amazon.com/Monoprice-Ex...ywords=8088+sas

Farmer Crack-Ass
Jan 2, 2001

~this is me posting irl~


Hughlander posted:

If you guys use Crashplan listen here...

I run Crashplan in a Docker container that autoupdates itself, and when it does I usually go a week or two without resetting the VM memory options to 4g, so it crashes constantly and takes forever to catch up. This last time it was saying 1.4 - 2.8 years to catch up, so I got sick of it, poked and prodded, and came across
<dataDeDupAutoMaxFileSizeForWan> in the my.service.xml file. It seems to specify the max file size before it'll try to dedup to the crashplan server. It's set to 0, i.e. all files. I set it to 1. My throughput went from 4-600kps to 30Mbps. I was trying to paste a datadog graph but imgur kept saying that it couldn't take it... I'm uploading 3-4 megabytes a second, and the time to catch up dropped to 23 days; after a day it's now 19 days.

I ran across this in a different source and came here to confirm. I run it on Windows, but my experience was much like yours: I was stalled out at around 900kbps with one of the CPU cores maxed out, and after making the change I went to 30mbps with a core half-utilized.
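For reference, the tweak being described is a single element in CrashPlan's my.service.xml (generally you'll want to stop the CrashPlan service before editing and start it again afterwards; only the element itself is shown here, not the surrounding file, and the semantics in the comment are taken from the posts above rather than any official documentation):

code:
<!-- per the posts above: 0 = no size cap (try to dedup every file over the WAN); 1 = effectively disables WAN dedup -->
<dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan>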

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

What is up with my Crashplan? I was at 10% backed up a few weeks ago but now the "backup report" email says I'm 0.1% backed up.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Did you add a lot more stuff to be backed up recently? Crashplan doesn't delete stuff unless you tell it to, so either you deleted/reset your store on their end, or you added a bunch of stuff to get backed up on your end and it hasn't caught up yet.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

DrDork posted:

Did you add a lot more stuff to be backed up recently? Crashplan doesn't delete stuff unless you tell it to, so either you deleted/reset your store on their end, or you added a bunch of stuff to get backed up on your end and it hasn't caught up yet.

Added maybe ~15GB recently. I had 8.5TB backed up there at one point, and it reset back to 0%. Slowly made it back to 10% backed up and now it has reset back to 0% again.

IOwnCalculus
Apr 2, 2003





Is it running out of memory and puking?

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

IOwnCalculus posted:

Is it running out of memory and puking?

I think that's the culprit. Loads of OutOfMemoryError entries in /usr/local/crashplan/log/service.log.0

I thought I had given it 8192m but looking at /usr/local/crashplan/bin/run.conf it must have gotten reverted to 1200m at some point.

Sucks that OOM error wiped out my existing backup that took like 2 years to upload

Hughlander
May 11, 2005



fletcher posted:

I think that's the culprit. Loads of OutOfMemoryError entries in /usr/local/crashplan/log/service.log.0

I thought I had given it 8192m but looking at /usr/local/crashplan/bin/run.conf it must have gotten reverted to 1200m at some point.

Sucks that OOM error wiped out my existing backup that took like 2 years to upload

It didn't; it's just the way that line is reporting. It got through 10% of syncing blocks with the server, is what it's saying. Fix the memory through the java mx console command and in 10 hours or so it'll catch up.

Note: AFAIK you need to use the console, not any config file.
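For anyone else chasing the same OOM crashes, the two knobs mentioned above look roughly like this; the exact run.conf contents and the console command syntax vary by CrashPlan version, so treat this as a sketch to verify rather than a recipe:

code:
# /usr/local/crashplan/bin/run.conf -- the -Xmx value inside SRV_JAVA_OPTS is the heap cap, e.g.
SRV_JAVA_OPTS="... -Xmx4096m ..."

# or, per the post above, via the client's hidden console, which is what reportedly sticks across updates:
java mx 4096, restart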

Fire Storm
Aug 8, 2004

what's the point of life
if there are no sexborgs?


Curiosity won: The cables are compatible, or at least compatible enough for my use. The HITACHI DF-F850-SC1 3285194-A cable is an SFF-8088 SAS cable with a slightly different end.

I'm copying my NAS to the array now, and I probably should have figured out whether I wanted Linux or Windows before I blindly installed Windows Server using the license that came with the machine. Ah well.

movax
Aug 30, 2008



My IT guy at work gave me what I think is a decent suggestion -- to allow for more SSDs to get passed through to FreeNAS via VT-d, either grab a PCIe SSD and partition it for ZIL and L2ARC for the install, or, grab a SAS expander and run my spinny drives off that (connected to one of the two LSI3008 connectors), and have up to four SSDs to toss at FreeNAS.

I figure if I do sabnzbd through a FreeNAS plugin and not through a separate VM, a separate SSD for unpack may be nice.

Thoughts?

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.


Is the extra speed of unpack honestly worth it? I just do the whole thing right on my array. I mean, yeah, you end up with a little extra fragmentation because of copy-on-write, but even that's not that huge of a deal. When you're blowing through 70 rars and unpacking them in about a minute on spinning rust, it just seems wasteful to me to move that one process onto an SSD.

redeyes
Sep 14, 2002
I LOVE THE WHITE STRIPES!

Wouldn't putting in more RAM do kind of the same thing?

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.


More RAM wouldn't, no, because FreeNAS will intentionally fill as much RAM as possible with caching. sabnzbd is going to write files to disk, then they'll need to be read to extract, the extracted content will be written, original files will be deleted, and then whichever app is consuming from sabnzbd will read those files and move/copy them to the final destination, deleting the extracted files. There's nowhere for the caching process to fit into that. You could maybe take a large chunk of RAM (you'd likely need 32GB or more to make this worthwhile, given download concurrency, and that you need enough space for the original archives and the extracted content to exist simultaneously, at least for a second or two) and make a tmpfs that lives in RAM and use that as the download and extract locations, but that seems pretty wasteful as well.

Edit: I know 32GB sounds extreme, but consider that the capacity of a single-layer blu-ray disc is 25GB and that number starts seeming very low.

G-Prime fucked around with this message at 23:51 on Apr 8, 2017
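If someone does want to try the RAM-disk route G-Prime describes, a minimal sketch (Linux fstab syntax with a made-up mount point and size; FreeBSD's tmpfs takes slightly different options, and the size has to hold the original archives plus the extracted copy at the same time):

code:
# RAM-backed scratch space for sabnzbd's temporary download + unpack directories
tmpfs  /mnt/usenet-scratch  tmpfs  size=32g,mode=0775  0  0

You'd then point sabnzbd's temporary/incomplete download folder at that mount and let completed jobs land on the array as usual.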

Paul MaudDib
May 3, 2006

"Tell me of your home world, Usul"


movax posted:

My IT guy at work gave me what I think is a decent suggestion -- to allow for more SSDs to get passed through to FreeNAS via VT-d, either grab a PCIe SSD and partition it for ZIL and L2ARC for the install, or, grab a SAS expander and run my spinny drives off that (connected to one of the two LSI3008 connectors), and have up to four SSDs to toss at FreeNAS.

I figure if I do sabnzbd through a FreeNAS plugin and not through a separate VM, a separate SSD for unpack may be nice.

Thoughts?

You're going to want to be reeeealll careful with the passthrough thing: are you 100% sure that your VM layer isn't adding any latency? I've never worked with VT-d, so maybe it's OK (given it's such a low-level passthrough), but be careful and check on this; working with VMs is where ZFS drives can get fucked up. The more cache layers, the more chances that something is buffered and doesn't get flushed to disk.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.


VT-d literally hands the device to the VM as if it's native hardware, so that shouldn't be an issue at all. Passing a whole controller to a VM is the recommended way to bypass disk latency issues when you're working with gaming VMs, for example. I'm not saying using it for an array is precisely the same as just basic I/O for gaming, but research indicates it should be fine.

YouTuber
Jul 31, 2004

by FactsAreUseless


I'm repurposing my old gaming computer from 2010 into a NAS. What's a decent OS to run this shit? I'm running FreeNAS at the moment but this computer doesn't have ECC RAM so ZFS isn't a good option for it. Unraid or Snapraid? It's just handling movies and TV shows. If I can run docker containers I'll probably have Usenet and Nextcloud running on it as well.

Matt Zerella
Oct 7, 2002


Unraid owns and is worth the price.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



YouTuber posted:

I'm repurposing my old gaming computer from 2010 into a NAS. What's a decent OS to run this shit? I'm running FreeNAS at the moment but this computer doesn't have ECC RAM so ZFS isn't a good option for it. Unraid or Snapraid? It's just handling movies and TV shows. If I can run docker containers I'll probably have Usenet and Nextcloud running on it as well.

On the one hand you probably don't need ECC; on the other hand, if I could go back and start over again I probably wouldn't go with ZFS for my home server bulk storage, and would probably use snapraid instead.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!


Yeah, ZFS is a little cumbersome when it comes to adding storage down the line. I'm to the point where it's either replace 4 2TB disks with 4TB+, or just add an additional vdev of 4-5 disks. I kept the original vdevs small with a mind towards expansion later, but unless I can find a reasonable tower case that fits under my desk and holds 12 3.5" disks plus a couple SSDs, I'm stuck with 8 disks in a pair of 4-disk vdevs. I've got roughly 300GB left before I'm forced to grab another 4 disks and start the expansion process.
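For the record, the two expansion paths being weighed there look like this in zpool terms; the pool name, vdev layout, and device names are all hypothetical:

code:
# option 1: swap each disk in one vdev for a bigger one
# (capacity only grows once every disk in that vdev has been replaced and resilvered)
zpool set autoexpand=on tank
zpool replace tank da0 da8        # repeat for each disk in the vdev, one at a time
# option 2: add another 4-disk vdev to the pool
zpool add tank raidz1 da8 da9 da10 da11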

Hughlander
May 11, 2005



I've got 6TB usable left. When I fill it, I'm adding a second group of six 8TB drives.

movax
Aug 30, 2008



G-Prime posted:

VT-d literally hands the device to the VM as if it's native hardware, so that shouldn't be an issue at all. Passing a whole controller to a VM is the recommended way to bypass disk latency issues when you're working with gaming VMs, for example. I'm not saying using it for an array is precisely the same as just basic I/O for gaming, but research indicates it should be fine.

Yeah, I don't plan for my FreeNAS install to touch any disks through a controller that isn't directly passed through (besides maybe its installation disk).

What's the state-of-the-art advice on ZIL/L2ARC? Most Google results are showing up with dates of 2014 or older, and seem to have conflicting information. It's a home server build with no massive number of clients or reads/writes besides media streaming and basic file I/O, so I don't think I strictly /need/ a ZIL or L2ARC, but if I can add, say, a single SSD for both and enjoy a performance boost, I'll totally do it.

D. Ebdrup
Mar 13, 2009



movax posted:

What's the state-of-the-art advice on ZIL/L2ARC? Most Google results are showing up with dates of 2014 or older, and seem to have conflicting information. It's a home server build with no massive number of clients or reads/writes besides media streaming and basic file I/O, so I don't think I strictly /need/ a ZIL or L2ARC, but if I can add, say, a single SSD for both and enjoy a performance boost, I'll totally do it.
There's no new information on ZIL/L2ARC because what you're looking for is separate (zfs) log (slog, not zil, as that already exists on your spinning rust) and cache devices, neither of which has changed as far as the implementation goes. The only reason there's conflicting information is that you need to vet the source of the claims being made (ie. documentation from dtrace.org and sun.com/oracle.com is reliable, as are a few blogs by people who have verifiably committed code to ZFS and OpenZFS).
A separate (zfs) intent log only keeps track of synchronous writes (which are either files opened with a sync flag, which you can do with dd, or writes to a zfs dataset that has explicitly had its sync property set to always).
L2ARC is a secondary layer of caching that's for all intents and purposes the same as the ARC (in that it caches based on MFU and MRU algorithms), except that it's set up so that stuff that doesn't fit into ARC gets put into L2ARC, which is why it makes sense to call it a level 2 (or secondary) ARC. The downside of L2ARC is that it has to be mapped in memory, which takes up space, and even PCIe 3.0 3D XPoint SSDs with NVMe are still much slower than DRAM, so you should make an effort to completely fill your system with memory first.

You won't see a performance boost by adding log or cache devices, unless you're doing a lot of synchronous writes, or your ARC hit ratio is low enough (which can be checked).
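For concreteness, adding the devices described above and checking the ARC hit ratio looks roughly like this on FreeNAS/FreeBSD; the pool name and GPT labels here are made up:

code:
zpool add tank log gpt/slog0       # separate (zfs) log device -- only helps synchronous writes
zpool add tank cache gpt/l2arc0    # L2ARC device
zpool iostat -v tank               # confirm both show up under the pool
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses   # rough ARC hit ratio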

movax
Aug 30, 2008



D. Ebdrup posted:

There's no new information on ZIL/L2ARC because what you're looking for is separate (zfs) log (slog, not zil, as that already exists on your spinning rust) and cache devices, neither of which has changed as far as the implementation goes. The only reason there's conflicting information is that you need to vet the source of the claims being made (ie. documentation from dtrace.org and sun.com/oracle.com is reliable, as are a few blogs by people who have verifiably committed code to ZFS and OpenZFS).
A separate (zfs) intent log only keeps track of synchronous writes (which are either files opened with a sync flag, which you can do with dd, or writes to a zfs dataset that has explicitly had its sync property set to always).
L2ARC is a secondary layer of caching that's for all intents and purposes the same as the ARC (in that it caches based on MFU and MRU algorithms), except that it's set up so that stuff that doesn't fit into ARC gets put into L2ARC, which is why it makes sense to call it a level 2 (or secondary) ARC. The downside of L2ARC is that it has to be mapped in memory, which takes up space, and even PCIe 3.0 3D XPoint SSDs with NVMe are still much slower than DRAM, so you should make an effort to completely fill your system with memory first.

You won't see a performance boost by adding log or cache devices, unless you're doing a lot of synchronous writes, or your ARC hit ratio is low enough (which can be checked).

Fair enough -- I'm putting 64GB of memory into the machine, and was going to toss 32-48GB of it directly at FreeNAS through ESXi as an allocated amount, and keep the other 16GB for a DC, Linux Plex/Usenet/etc VM, and whatever other random stuff is on the machine. I'm also being lazy and using ZIL/SLOG interchangeably -- to be clear, yes, I am referring to adding a separate SLOG device.

Most of my I/O will be via CIFS/SMB from Windows clients, or likely internal iSCSI (or NFS, I suppose) shares to the local VMs on the machine.

I guess the question is -- if I get an M.2 <-> PCIe x4 physical adapter card, toss a 128GB-256GB Samsung SSD in there, and offer it up to FreeNAS as a tiny SLOG + massive L2ARC, is my worst case 'no real benefit', or is it 'you're making it worse than if you didn't have the drive there'? I forget the formula for calculating the RAM impact of a huge L2ARC drive, but I do know tossing in a too-large L2ARC can just make things worse in the end.
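On the RAM-impact formula: the usual back-of-the-envelope is one in-memory header per cached L2ARC record, and the per-record header size has changed across ZFS versions, so the ~70 bytes used below is an assumption to check against whatever your FreeNAS version ships:

code:
L2ARC RAM overhead ~= (L2ARC size / average record size) x header bytes per record
  256 GB at 128 KB records ~=  2.1 million records x ~70 B ~= ~140 MB of RAM
  256 GB at  16 KB records ~= 16.8 million records x ~70 B ~= ~1.1 GB of RAM

Which is why an oversized L2ARC full of small records can end up eating RAM that would otherwise have been ARC.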

redeyes
Sep 14, 2002
I LOVE THE WHITE STRIPES!

Meanwhile my lowly windows server has raid 1 and 30 bux for Stablebit drive pool. 4GB RAM does the trick.

EconOutlines
Jul 3, 2004



I've had nothing but corruption issues, even when switching to the beta builds. Worth switching to dedicated NAS IMO.

D. Ebdrup
Mar 13, 2009



movax posted:

Fair enough -- I'm putting 64GB of memory into the machine, and was going to toss 32-48GB of it directly at FreeNAS through ESXi as an allocated amount, and keep the other 16GB for a DC, Linux Plex/Usenet/etc VM, and whatever other random stuff is on the machine. I'm also being lazy and using ZIL/SLOG interchangeably -- to be clear, yes, I am referring to adding a separate SLOG device.

Most of my I/O will be via CIFS/SMB from Windows clients, or likely internal iSCSI (or NFS, I suppose) shares to the local VMs on the machine.

I guess the question is -- if I get an M.2 <-> PCIe x4 physical adapter card, toss a 128GB-256GB Samsung SSD in there, and offer it up to FreeNAS as a tiny SLOG + massive L2ARC, is my worst case 'no real benefit', or is it 'you're making it worse than if you didn't have the drive there'? I forget the formula for calculating the RAM impact of a huge L2ARC drive, but I do know tossing in a too-large L2ARC can just make things worse in the end.
Both NFS and iSCSI can make use of various sync commands, but I think you have to force it if you want to be sure, by setting the zfs property to sync=always. Moreover, I'd suggest getting one MLC SSD for L2ARC and 2 smaller SLC SSDs for the slog, and 1) mirroring them so that if one fails you don't lose data, and 2) over-provisioning them to match the data size you're working with (ie. the maximum amount of data that can be written during any given 5-second period, which is the period between slog flushes, doubled to ensure that you never run out of space on your slog devices), so that you get as little write amplification and as much write endurance as possible (see the sketch at the end of this post).

redeyes posted:

Meanwhile my lowly windows server has raid 1 and 30 bux for Stablebit drive pool. 4GB RAM does the trick.
It also doesn't scale up to 2^128 bytes, have bitrot protection, do transparent FDE, support snapshots or atomicity, take an arbitrarily large number of disks from any number of disk shelves without running into scaling problems, or rebuild at anything like the speed that ZFS can manage - all of which ZFS was designed to do, because ZFS was designed for enterprise storage. Just because some of us are using it for home servers because it also scales down without sacrificing features doesn't mean you can say that one is better than the other.
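A sketch of the log/cache layout described in the first half of that post, with hypothetical pool, dataset, and device names (only the log devices get mirrored; the cache device holds nothing that can't be re-read from the pool):

code:
zfs set sync=always tank/vmstore           # force sync semantics on the iSCSI/NFS-backed dataset
zpool add tank log mirror ada4p1 ada5p1    # two small, over-provisioned SLC SSDs as a mirrored SLOG
zpool add tank cache nvd0p1                # one larger MLC SSD as L2ARC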

redeyes
Sep 14, 2002
I LOVE THE WHITE STRIPES!

quote:

It also doesn't scale up to 2^128 bytes, have bitrot protection, do transparent FDE, support snapshots or atomicity, take an arbitrarily large number of disks from any number of disk shelves without running into scaling problems, or rebuild at anything like the speed that ZFS can manage - all of which ZFS was designed to do, because ZFS was designed for enterprise storage. Just because some of us are using it for home servers because it also scales down without sacrificing features doesn't mean you can say that one is better than the other.

It's a home server. HOME SERVER. It's about 1000x bux less than all these insane zfs builds. Who has the money for that shit?!

redeyes fucked around with this message at 14:41 on Apr 11, 2017

Matt Zerella
Oct 7, 2002


redeyes posted:

It's a home server. HOME SERVER. It's about 1000x bux less than all these insane zfs builds. Who has the money for that shit?!

If it's in a home lab for VMs and stuff all of that stuff owns.

But yeah if it's hosting a Plex docker and a torrent client and some newsgroup stuff then

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



redeyes posted:

It's a home server. HOME SERVER. It's about 1000x bux less than all these insane zfs builds. Who has the money for that shit?!

I have to agree with this. My gut feeling is that ZFS talk and the like is way overrepresented in this thread compared to what most potential readers actually need to use.

ZFS has a huge downside for most consumers. The completely ridiculous situation when it comes to upgrading your storage size.

ChiralCondensate
Nov 13, 2007

what is that man doing to his colour palette?


Grimey Drawer

Thermopyle posted:

I have to agree with this. My gut feeling is that ZFS talk and the like is way overrepresented in this thread compared to what most potential readers actually need to use.

ZFS has a huge downside for most consumers. The completely ridiculous situation when it comes to upgrading your storage size.

ZFS looked fun to play with but the harder upgradeability is why I decided against it. Fun is fun in the homelab, with no accounting for taste. (As an example of what I find "fun", to discredit me: I enjoyed rolling my own bitrot-monitoring scripts + database + alarm foghorn, and I'm thinking about implementing a simpler mergerfs.)

ChiralCondensate fucked around with this message at 15:51 on Apr 11, 2017

redeyes
Sep 14, 2002
I LOVE THE WHITE STRIPES!

Thermopyle posted:

I have to agree with this. My gut feeling is that ZFS talk and the like is way overrepresented in this thread compared to what most potential readers actually need to use.

ZFS has a huge downside for most consumers. The completely ridiculous situation when it comes to upgrading your storage size.

I actually thought that was easier, since so many people here use it or claim to... but yeah, with Stablebit Drive pool it is trivial to remove or increase storage with any size HD. I have used that 'feature' many, many times over the last few years. You can also get snapshots going if you do some hackery. Bit rot isn't something I have ever seen happen. I imagine it does on huge-scale stuff, but consumer HDs let the OS know when they are not working right, so bit rot doesn't actually happen. You can also use SSDs for a temp landing drive, and you can use any standard NTFS recovery utility to get files back from a failing HD if you need to. What's not to love?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

redeyes posted:

It's a home server. HOME SERVER. It's about 1000x bux less than all these insane zfs builds. Who has the money for that shit?!

The people spending 1000x bux are doing it because they either want some specific feature, or they're building a Srs Business server for whatever reason. No one has suggested that grandma needs a $1500 Xeon setup for her vacation photos, after all. There's the continual argument about ECC vs non-ECC, but even that really only amounts to ~$100 or less in most cases--for everything else, there's no price difference between a ZFS build and anything else: most of the money gets dumped into drives or into grabbing a particular hardware feature (like IPMI or VT-d) that's needed regardless of the file system. No one's walking in here saying they need to store 500GB of family photos and walking out with a recommendation for an E5-2650 and 20TB of storage.

There's been plenty of suggestions for people who want/need much more pedestrian storage solutions that don't involve VMs, iSCSI targets, fiber channels, 10GbE, 60TB pools, etc. They're just...pedestrian storage solutions, frankly, and thus don't generate pages of posts because they're not particularly new or interesting.

D. Ebdrup
Mar 13, 2009



redeyes posted:

It's a home server. HOME SERVER. It's about 1000x bux less than all these insane zfs builds. Who has the money for that shit?!
And if you have a spare system with Windows 8+ on it, you already have Storage Spaces, which is at least as good as Stablebit (which seems good for what you're using it for, don't get me wrong), won't cost you any additional money, and is arguably easier to set up and forget about.
I'm not arguing that you can't re-purpose old hardware, but for a home lab, ZFS has some features that I just can't imagine living without now that I've gotten used to them.

ZFS can be done on re-purposed hardware all the way down to an old i3, or you can grab a cheap low-end Ryzen CPU with a motherboard that supports ECC, or you can use ZFS without ECC if you're confident in your backup strategy and want to save the ~50 bux that ECC costs nowadays.

DrDork posted:

There's been plenty of suggestions for people who want/need much more pedestrian storage solutions that don't involve VMs, iSCSI targets, fiber channels, 10GbE, 60TB pools, etc. They're just...pedestrian storage solutions, frankly, and thus don't generate pages of posts because they're not particularly new or interesting.
Digital packratting lends itself to a certain mindset, and the Venn-diagram overlap between people interested in that and people who want to build a home lab to mess around with is quite big.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

D. Ebdrup posted:

Digital packratting lends itself to a certain mindset, and the Venn-diagram overlap between people interested in that and people who want to build a home lab to mess around with is quite big.

Yeah, absolutely. I'm just saying it's like wandering into a car thread and being grumpy that there isn't much talk about Civics--a perfectly suitable and sane choice for most people--but instead seeing people go on about 600shp gigglemobiles and their E46 that needs a new gizmo.
