Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



Anyone have any thoughts on Duplicacy + Backblaze B2 as a Crashplan replacement?

Paul MaudDib
May 3, 2006

"Tell me of your home world, Usul"


I got IPoIB working, at least marginally. I'm only getting about 6 Gbit/s even in an iperf test, though. Still, I can tell my IOPS are way up versus gigabit.

Anything in particular I need to do to tune performance? Obviously 32 Gbit/s is the theoretical limit, and I understand that without RDMA I won't get anywhere near that, but my throughput isn't even close.

I set the MTU to 65k on my fileserver, but when I set it on my Windows desktop my transfers started hanging.
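For reference, on the Linux side it's connected mode that allows the big MTU; a minimal sketch, assuming the interface is ib0 and you have root:

code:
echo connected > /sys/class/net/ib0/mode   # datagram mode caps the MTU much lower
ip link set dev ib0 mtu 65520              # connected mode allows up to 65520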

Paul MaudDib fucked around with this message at 17:56 on Aug 27, 2017

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Mellanox? If so, their Windows drivers don't do these extended MTUs. IPoIB is limited to 4092.

Paul MaudDib
May 3, 2006

"Tell me of your home world, Usul"


Combat Pretzel posted:

Mellanox? If so, their Windows drivers don't do these extended MTUs. IPoIB is limited to 4092.

Yup, a pair of MHQH29B-XTR cards, which are dual-port 4x QDR ConnectX-2 cards.

I started with whatever Win10 auto-installed; eventually I installed the OFED driver package while trying to debug my setup (it ended up being a problem in my ifconfig on the fileserver). Maybe it's time to dig into those drivers you mentioned. What specifically am I looking for?

Paul MaudDib fucked around with this message at 18:07 on Aug 27, 2017

Furism
Feb 21, 2006

Live long and headbang


IOwnCalculus posted:

You need to use feature flags. ZFS moved away from depending on versions a while ago, because version numbers couldn't be trusted between all of the contributors to the ZFS projects.

With that said, I don't think NAS4Free is ahead of FreeNAS on feature flags. There is at least one BSD feature flag that is not supported on ZOL, but it relates to kernel dumps and shouldn't actually be in use on a storage pool. I directly imported a pool from FreeNAS to Ubuntu 16.04 without any trouble beyond making sure to import disks by-id instead of /dev/sdX.

I'm sorry, but I have little idea how to do that. How do I use feature flags on my NAS4Free server? And are you saying I should be able to import into CentOS with no problem? Because I'd like to be pretty fucking sure of that.

Thermopyle posted:

Anyone have any thoughts on Duplicacy + Backblaze B2 as a Crashplan replacement?

I just switched to something similar myself. I use restic instead of Duplicacy, but I upload to B2 (as well as to local or networked drives). For my 400 GB backup it should come to around $2/month, though since restic does differential backups I'm not sure yet how much overhead there'll be. When the initial upload is done I'll schedule it with the Windows Task Scheduler; it already works fine for my backup to a local USB drive using the simplest of PowerShell scripts. Using cron would work identically, I think.
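A minimal sketch of the restic side of that, with a hypothetical bucket name and paths (restic's B2 backend reads the account credentials and the repository password from environment variables):

code:
export B2_ACCOUNT_ID="000123abc"
export B2_ACCOUNT_KEY="applicationKey"
export RESTIC_PASSWORD="repo-password"

restic -r b2:my-backup-bucket:/restic init       # one-time repository setup
restic -r b2:my-backup-bucket:/restic backup /home/me/documents
restic -r b2:my-backup-bucket:/restic snapshots  # list stored snapshots
The same commands drop into a PowerShell script for Task Scheduler or into a cron job once the initial upload finishes.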

That's my 2cts.

Steakandchips
Apr 30, 2009



Proteus Jones posted:

Sorry I misread. (removed all kinds of bad advice). I need to check something real quick and I'll update.

According to the Synology KB, CloudSync will do bidirectional sync. It's a setting in the wizard when you set it up.

https://www.synology.com/en-us/know...dSync/cloudsync

This sounds promising, will try it once my 2nd NAS arrives shortly! Thanks!

IOwnCalculus
Apr 2, 2003





Furism posted:

I'm sorry, but I have little idea how to do that. How do I use feature flags on my NAS4Free server? And are you saying I should be able to import into CentOS with no problem? Because I'd like to be pretty fucking sure of that.

Just ran this on my Ubuntu box but should be similar on N4F:

code:
zpool get all | grep feature
tank  feature@async_destroy       enabled                     local
tank  feature@empty_bpobj         enabled                     local
tank  feature@lz4_compress        active                      local
tank  feature@spacemap_histogram  active                      local
tank  feature@enabled_txg         active                      local
tank  feature@hole_birth          active                      local
tank  feature@extensible_dataset  enabled                     local
tank  feature@embedded_data       active                      local
tank  feature@bookmarks           enabled                     local
tank  feature@filesystem_limits   enabled                     local
tank  feature@large_blocks        enabled                     local
Active means the feature is being used by the pool; enabled means the system supports it but isn't actually using it. You'd need all active features to be supported by ZFS on Linux.

Also keep in mind that a failed import is a non-destructive thing. If you really want you could probably boot CentOS on a live USB and attempt to import.
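If you want to sanity-check this from the live environment first, something like the following should do it, assuming the pool is named pool1 (the read-only import is optional paranoia):

code:
zpool upgrade -v                   # list every feature flag this ZFS build supports
zpool import                       # scan attached disks for importable pools
zpool import -o readonly=on pool1  # try the import read-only first, to be safe
zpool export pool1                 # hand the pool back cleanly when done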

Furism
Feb 21, 2006

Live long and headbang


IOwnCalculus posted:

Just ran this on my Ubuntu box but should be similar on N4F:


This is what I got:

code:
remontoire: /mnt# zpool get all | grep feature
pool1  feature@async_destroy          enabled                        local
pool1  feature@empty_bpobj            active                         local
pool1  feature@lz4_compress           active                         local
pool1  feature@multi_vdev_crash_dump  enabled                        local
pool1  feature@spacemap_histogram     active                         local
pool1  feature@enabled_txg            active                         local
pool1  feature@hole_birth             active                         local
pool1  feature@extensible_dataset     enabled                        local
pool1  feature@embedded_data          active                         local
pool1  feature@bookmarks              enabled                        local
pool1  feature@filesystem_limits      enabled                        local
pool1  feature@large_blocks           enabled                        local
Is there something I should do to switch the features I need to active?

I will try a CentOS live CD and attempt to import the array. I'm told one should import by disk ID instead of /dev/sdX, is that right?

sharkytm
Oct 9, 2003

Gimme Gimme Swedish Fish...



Fallen Rib

Welp, woke up this morning to a hung FreeNAS screen, and the monitor on the box blank. A boot later, and Error5:unretryable error on the USB boot drive... 2 hours later, I had everything back except my Crashplan jail, which had been modified recently. Not too shabby.

IOwnCalculus
Apr 2, 2003





Furism posted:

This is what I got:

code:
remontoire: /mnt# zpool get all | grep feature
pool1  feature@async_destroy          enabled                        local
pool1  feature@empty_bpobj            active                         local
pool1  feature@lz4_compress           active                         local
pool1  feature@multi_vdev_crash_dump  enabled                        local
pool1  feature@spacemap_histogram     active                         local
pool1  feature@enabled_txg            active                         local
pool1  feature@hole_birth             active                         local
pool1  feature@extensible_dataset     enabled                        local
pool1  feature@embedded_data          active                         local
pool1  feature@bookmarks              enabled                        local
pool1  feature@filesystem_limits      enabled                        local
pool1  feature@large_blocks           enabled                        local
Is there something I should do to switch the features I need to active?

I will try a CentOS live CD and attempt to import the array. I'm told one should import by disk ID instead of /dev/sdX, is that right?

There's a page somewhere that breaks out all of the feature flags by which ZFS implementations support them. Multi vdev crash dump is one that isn't supported outside of BSD, but it doesn't matter because it's only enabled, not active, on your pool.

And yes, you need to import using by-id or by-uuid because the /dev/sdX assignments can change on every reboot.
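In practice that just means pointing the import at the stable device names; a short sketch, again assuming the pool is named pool1:

code:
ls -l /dev/disk/by-id/                 # stable names, symlinked to the current sdX
zpool import -d /dev/disk/by-id pool1  # import using those names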

caberham
Mar 18, 2009

by Smythe


Grimey Drawer

Hi Goons, I recently read about SFP+.

So I can just scour eBay for a pair of 10Gb SFP+ cards and have one in the PC and one in the NAS?

From what I read, the UBNT US-16-XG is supposedly the golden ticket, but like most UBNT gear it was rushed to market too soon and has a bunch of compatibility issues. Any other switch recommendations for home users? I just want to load all my photos and videos and run 10GbE.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Thermopyle posted:

Anyone have any thoughts on Duplicacy + Backblaze B2 as a Crashplan replacement?

I was looking at that combo as well. Also considering duplicity. Gonna be lame going from $60/yr to $38/mo though!

lurksion
Mar 21, 2013


Thermopyle posted:

Anyone have any thoughts on Duplicacy + Backblaze B2 as a Crashplan replacement?
Also looking at Duplicacy, among all the other dupli* programs (there are way too many with that prefix), targeting Google Drive.

Apparently the GUI version is a bit limited, but that's being worked on. One issue my setup would run into is that a lot of my stuff is mounted read-only to my backup VM (well, effectively, due to how permissions are set up), and Duplicacy wants to write into the top-level directory according to its docs.

Paul MaudDib
May 3, 2006

"Tell me of your home world, Usul"


lurksion posted:

Also looking at Duplicacy, among all the other dupli* programs (there are way too many with that prefix), targeting Google Drive.

Apparently the GUI version is a bit limited, but that's being worked on. One issue my setup would run into is that a lot of my stuff is mounted read-only to my backup VM (well, effectively, due to how permissions are set up), and Duplicacy wants to write into the top-level directory according to its docs.

Honest question for you and others here: is this something that could be done with git-annex? I've always wanted to play with it but never gotten around to it.

One of the features it supports is "offline repositories", which is useful in edge cases where you can't get immediate random access to a repo. I was looking at it for media library usage (e.g. when a DVD is not in the drive it can tell you "please insert disc 23"), but it would also seem applicable when IO might result in data charges.
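A rough sketch of the git-annex flow, with made-up repo and file names (I haven't actually run this, so treat it as a starting point rather than gospel):

code:
git init media && cd media
git annex init "fileserver"           # describe this clone of the repo
git annex add movies/                 # checksums the content and moves it into the annex
git commit -m "track movies with git-annex"

git annex whereis movies/disc23.iso   # lists which remotes/discs hold the content
git annex get movies/disc23.iso       # fetches it, or tells you what to plug in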

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

caberham posted:

Hi Goons, I recently read about SFP+.

So I can just scour eBay for a pair of 10Gb SFP+ cards and have one in the PC and one in the NAS?

From what I read, the UBNT US-16-XG is supposedly the golden ticket, but like most UBNT gear it was rushed to market too soon and has a bunch of compatibility issues. Any other switch recommendations for home users? I just want to load all my photos and videos and run 10GbE.

I run two of the Mellanox ConnectX-2s that you can get on eBay for $14.50 a pop, but switches are still expensive. It sounds like you only have two machines right now; it probably makes sense to just connect the two directly with a generic copper SFP+ cable and wait on buying a switch until you want to add a third machine to the 10G network.

Furism
Feb 21, 2006

Live long and headbang


lurksion posted:

Apparently the GUI version is a bit limited, but that's being worked on. One issue my setup would run into is that a lot of my stuff is mounted read-only to my backup VM (well, effectively, due to how permissions are set up), and Duplicacy wants to write into the top-level directory according to its docs.

restic doesn't write into the source directories, if that helps. But no GUI.

Furism
Feb 21, 2006

Live long and headbang


IOwnCalculus posted:

There's a page somewhere that breaks out all of the feature flags by what ZFS implementations support them. Multi vdev crash dump is one that isn't supported outside of BSD, but doesn't matter because it's not actually enabled on your pool.

And yes, you need to import using by-id or by-uuid because the /dev/sdX assignments can change on every reboot.

Thanks, I'll check it out. Apparently FreeBSD and Linux use the same version so I might just get lucky. And like you said I can just live boot. Thanks a lot!

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Desuwa posted:

I run two of the Mellanox ConnectX-2s that you can get on eBay for $14.50 a pop, but switches are still expensive. It sounds like you only have two machines right now; it probably makes sense to just connect the two directly with a generic copper SFP+ cable and wait on buying a switch until you want to add a third machine to the 10G network.
SFP+ copper (DAC) only goes up to about 7 m (23 ft), so that's something to consider. Any further and you'll have to go with fiber.

Krailor
Nov 2, 2001
I'm only pretending to care

Taco Defender

caberham posted:

Hi Goons, I recently read about SFP+.

So I can just scour eBay for a pair of 10Gb SFP+ cards and have one in the PC and one in the NAS?

From what I read, the UBNT US-16-XG is supposedly the golden ticket, but like most UBNT gear it was rushed to market too soon and has a bunch of compatibility issues. Any other switch recommendations for home users? I just want to load all my photos and videos and run 10GbE.

Mikrotik also has a couple of switches with SFP+ ports. If you only need to connect 2 devices there's the CSS326-24G-2S+RM and if you want to go all-out crazy on 10g they also just released the CRS317-1G-16S+RM (although I haven't actually seen it for sale anywhere yet). Both of these options are cheaper than the Ubiquiti switch.

I'm running the slightly older CRS210-8G-2S+IN with 2x SFP+ ports at home and it's working great.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Furism posted:

restic doesn't write into the source directories, if that helps. But no GUI.

Anybody else using restic? I'm looking for a Linux CLI client to back up my NAS to B2. It looks pretty nice, and it has a lot of activity on GitHub.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



fletcher posted:

Anybody else using restic? I'm looking for a Linux CLI client to back up my NAS to B2. It looks pretty nice, and it has a lot of activity on GitHub.

I haven't, but several things I've read like the following are the reason I haven't:

https://github.com/gilbertchen/benchmarking

Nulldevice
Jun 17, 2006


Toilet Rascal

fletcher posted:

Anybody else using restic? I'm looking for a Linux CLI client to back up my NAS to B2. It looks pretty nice, and it has a lot of activity on GitHub.

I'm using rclone with B2 and it is really good. Not what you're asking about, but I figured I'd offer an alternative. I uploaded 8TB over my gigabit connection in about three days, with the speed manually throttled to 700 Mbps. B2 likes lots of simultaneous connections, so I run 32 concurrent transfers and 64 checkers on my uploads, and I set B2 to retain only a single version since I have versioned backups at home. Figured it'd save space on the cloud side and keep costs low. I've been using rclone for a while now and it's been good to me.
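Roughly what that looks like on the command line, with a hypothetical bucket and path (rclone's --bwlimit takes bytes, so 700 Mbit/s is about 87.5M):

code:
rclone sync /mnt/tank/backups b2:my-bucket/backups \
    --transfers 32 \
    --checkers 64 \
    --bwlimit 87.5M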

Hughlander
May 11, 2005



Thermopyle posted:

Anyone have any thoughts on Duplicacy + Backblaze B2 as a Crashplan replacement?

Have you done the math on it? Would it be cheaper for you than just CrashPlan Small Business? The more research I do, the more it seems that just running CrashPlan Small Business on my NAS is still the most cost-effective way of dealing with it. Which still bothers me, because I feel they don't deserve the money for making me give up the rest of my family plan and pay 5x as much money.

Furism
Feb 21, 2006

Live long and headbang


Thermopyle posted:

I haven't, but several things I've read like the following are the reason I haven't:

https://github.com/gilbertchen/benchmarking

These tests are outdated. For instance, restic uses blocks of 8 MB, not 1 MB. If they corrected the mistakes and reran with updated versions, that'd be interesting.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



Furism posted:

These tests are outdated. For instance, restic uses blocks of 8 MB, not 1 MB. If they corrected the mistakes and reran with updated versions, that'd be interesting.

Well, they provide their code right there. I nominate you to rerun the tests for the benefit of the thread!

Erwin
Feb 17, 2006



Thermopyle posted:

I haven't, but several things I've read like the following are the reason I haven't:

https://github.com/gilbertchen/benchmarking

What is the issue you have with it regarding those tests? Lack of compression?

Ziploc
Sep 19, 2006
MX-5

Hrm. Problems again. It would appear that one of the drives is acting up. These are all sector-checked 1TB drives out of the e-waste, so I'm not THAT surprised.

[screenshot]
However, the thing that confuses me is that (after a restart) the volume doesn't come back. When I try to import it, it stalls out, and I notice this on the IPMI:

[screenshot]
Why does the state go to Unknown? And why won't it import? Shouldn't the Volume be able to survive this and maintain access to my datas?

I have a fresh 1tb drive on the way. But I'm not sure what the procedure is in this case.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



Erwin posted:

What is the issue you have with it regarding those tests? Lack of compression?

Mainly CPU usage.

skull mask mcgee
Nov 14, 2016


What’s a good way of backing up data to another computer on the same network now that Crashplan is on the way out? All running Windows.

D. Ebdrup
Mar 13, 2009



I thought one of the things Windows 10 Home changed compared to previous editions of Windows was that the backup utility can now use UNC paths (i.e. network paths)?

Furism
Feb 21, 2006

Live long and headbang


Thermopyle posted:

Well, they provide their code right there. I nominate you to rerun the tests for the benefit of the thread!

My day job is running tests against network services and devices; I'm not doing that in my free time. (Okay, I might actually just do it.)

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Thermopyle posted:

I haven't, but several things I've read like the following are the reason I haven't:

https://github.com/gilbertchen/benchmarking

I was looking at Duplicacy as well, but restic just seemed more professional as far as open-source projects go. restic's documentation seemed much better, and it's truly free open-source software. The licensing for Duplicacy seems a little wonky.

fletcher fucked around with this message at 19:32 on Aug 29, 2017

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Nulldevice posted:

I'm using rclone with B2 and it is really good. Not what you're asking about, but I figured I'd offer an alternative. I uploaded 8TB over my gigabit connection in about three days, with the speed manually throttled to 700 Mbps. B2 likes lots of simultaneous connections, so I run 32 concurrent transfers and 64 checkers on my uploads, and I set B2 to retain only a single version since I have versioned backups at home. Figured it'd save space on the cloud side and keep costs low. I've been using rclone for a while now and it's been good to me.

rclone came up in my searching as well and it looked quite good. More stars on GitHub (and more open issues) than restic.

IOwnCalculus
Apr 2, 2003





Hughlander posted:

Have you done the math on it? Would it be cheaper for you than just CrashPlan Small Business? The more research I do, the more it seems that just running CrashPlan Small Business on my NAS is still the most cost-effective way of dealing with it. Which still bothers me, because I feel they don't deserve the money for making me give up the rest of my family plan and pay 5x as much money.

I'm at least going to take them up on the first year at 75% off, since it actually ends up cheaper for one computer than I was paying for CrashPlan Home. It means I won't have any computer-to-computer backup by then, but I'm sure I can figure something out in the next year, and by the time that last bit of paid CrashPlan runs out, who knows what the market will be like.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull


EVIL Gibson posted:

I thought, from how it was worded, that it was opening files and pulling out metadata, like "this doc file was last opened by so-and-so", which doesn't even make sense if it is really doing block-by-block hashing and not a god damn entire-file hash.

If it is transferring the entire file each time it changes, it is not using rsync, period.

I think you might be a bit confused about how rsync works? I'm not sure we're talking about the same thing, but in any case I'll try to burble on a bit about rsync.

One of the ways rsync saves network bandwidth is that it hashes blocks, not just whole files. If you're syncing one version of a file against another, this allows rsync to transmit only the changed blocks.

The first step for rsync is to determine whether a file needs any syncing at all; then it has to figure out which blocks need syncing. Due to the way hashing algorithms work, you can compute per-block and whole-file hashes at the same time while passing over the data only once. I don't know that this is what rsync does, but it's what would make sense from first principles. Both sides compute the hashes, then they compare the whole-file hashes; if the file needs syncing, they compare per-block hashes to determine the set of blocks which need to be copied over.

In any case, file contents must be read into memory to compute hashes, and the resulting indirect memory usage (due to Linux caching all file reads) is what makes a system chew through a ton of RAM when rsyncing a large amount of data. This can happen even if the source and destination are sufficiently in sync that the amount of data actually moved is small. I didn't see anything in the OP to indicate that wossname, the tool being talked about, wasn't actually using rsync as the back-end.
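For what it's worth, you can watch the delta algorithm at work with --stats (paths here are made up). Note that rsync only uses delta transfer by default when syncing over the network:

code:
# remote sync: delta transfer is on by default; --stats reports literal vs. matched data
rsync -av --stats /tank/media/ backuphost:/tank/media/

# local copies default to --whole-file; this forces the block-hashing delta algorithm
rsync -av --no-whole-file /tank/media/ /mnt/usb/media/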

BobHoward fucked around with this message at 21:28 on Aug 29, 2017

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



BobHoward posted:

I think you might be a bit confused about how rsync works?

He thought the same thing I did when I read the way the original post was written.

On a quick read it sounded like it was uncompressing the archive files.

Gozinbulx
Feb 19, 2004


I feel like I can never pull the trigger on a NAS build...

For a fairly small office with maybe 5-6 simultaneous users on average, 10-13 users at peak load, mostly accessing documents, occasionally reading media files (no transcoding), and at least 2 machines constantly writing to it (loggers), does a Xeon E5-1620v3 build with 32GB of RAM seem like overkill?

Also, why are Reds impossible to get on Amazon now without third-party sellers? Are the Seagates (now called IronWolf??) decent enough?

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.


Gozinbulx posted:

I feel like I can never pull the trigger on a NAS build...

For a fairly small office with maybe 5-6 simultaneous users on average, 10-13 users at peak load, mostly accessing documents, occasionally reading media files (no transcoding), and at least 2 machines constantly writing to it (loggers), does a Xeon E5-1620v3 build with 32GB of RAM seem like overkill?

Also, why are Reds impossible to get on Amazon now without third-party sellers? Are the Seagates (now called IronWolf??) decent enough?

For something that small, unless you're talking about seriously high log volume, that's probably overkill. But on the flip side, it's not all that expensive to do a build like that. The CPU on mine is just a little below that and is ample.

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!


Switchblade Switcharoo

BobHoward posted:

I think you might be a bit confused about how rsync works? (Everything I've already described)


I brought up rsync doing block-by-block hashing several times, and someone already said the original post could be understood as the program doing weird things to the files.

Paul MaudDib
May 3, 2006

"Tell me of your home world, Usul"


G-Prime posted:

For something that small, unless you're talking about seriously high log volume, that's probably overkill. But on the flip side, it's not all that expensive to do a build like that. The CPU on mine is just a little below that and is ample.

Yeah this.

The upside to Sandy Bridge-E is that it's cheap. The downside to something that old is that new parts may not be available (new ECC RAM is not the worst idea), and it eats a lot more power compared to Haswell and later.

I was looking at doing a used 2620v3 for my overkill homelab server. But you'd have to step up to an X99 board.
