|
Anyone have any thoughts on Duplicacy + Backblaze B2 as a Crashplan replacement?
|
![]() |
|
I got IPoIB working, at least marginally. I'm only getting about 6 Gbit/s even in an iperf test, though. Even still, I can tell my IOPS are way up versus gigabit. Anything in particular I need to do to tune performance? Obviously 32 Gbit/s is the theoretical limit, and I understand that without RDMA I won't even hit that much, but my throughput isn't even in the same neighborhood. I set the MTU to 65k on my fileserver, but when I set it on my Windows desktop my transfers started hanging. Paul MaudDib fucked around with this message at 17:56 on Aug 27, 2017 |
![]() |
|
Mellanox? If so, their Windows drivers don't do these extended MTUs. IPoIB is limited to 4092.
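On the Linux side it's connected mode that allows the big MTUs. A rough sketch, assuming the IPoIB interface is ib0 and the other end is at 10.0.0.1 (both are placeholders for your setup):

code:
# switch the IPoIB interface to connected mode, then raise the MTU
echo connected > /sys/class/net/ib0/mode
ip link set ib0 mtu 65520
# re-test with a few parallel streams
iperf -c 10.0.0.1 -P 4

Keep in mind the effective MTU ends up being the lower of the two ends, so a Windows peer stuck at 4092 caps things regardless.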
|
![]() |
|
Combat Pretzel posted:Mellanox? If so, their Windows drivers don't do these extended MTUs. IPoIB is limited to 4092. Yup, a pair of MHQH29B-XTR cards, which are dual-port 4x QDR ConnectX-2 cards. I started with whatever Win10 auto-installed, and eventually I installed the OFED driver package while trying to debug my setup (it ended up being a problem in my ifconfig on the fileserver). Maybe it's time to dig into those older drivers you mentioned; what specifically am I looking for? Paul MaudDib fucked around with this message at 18:07 on Aug 27, 2017 |
![]() |
|
IOwnCalculus posted:You need to use feature flags. ZFS moved away from depending on versions a while ago, because version numbers couldn't be trusted between all of the contributors to the ZFS projects. I'm sorry but I have little idea how to do that. How do I use feature flags on my NAS4Free server? Are you at the same time saying I should be able to import into CentOS with no problem? Because I'd like to be pretty fucking sure of that. ![]() Thermopyle posted:Anyone have any thoughts on Duplicacy + Backblaze B2 as a Crashplan replacement? I just switched to something similar myself. I use restic instead of Duplicacy, but I upload to B2 (as well as to local or networked drives). For my 400 GB backup it should come out to around $2/month. Since restic does differential backups, I'm not sure yet how much overhead there'll be, though. When the initial upload is done I'll schedule this using the Windows Task Scheduler; it already works fine for my backup to a local USB drive using the simplest PowerShell script. Using cron would work identically, I think. That's my 2cts.
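In case it's useful, here's roughly what the restic side of that looks like. This is just a sketch: the bucket name, repository path, data path, and retention numbers are made up, and restic reads the B2 credentials and repository password from the B2_ACCOUNT_ID, B2_ACCOUNT_KEY, and RESTIC_PASSWORD environment variables.

code:
# one-time: create the repository inside the B2 bucket
restic -r b2:my-bucket:nas-backup init
# the recurring job: back up the data directory (this is what the scheduled script runs)
restic -r b2:my-bucket:nas-backup backup /mnt/data
# occasionally thin out old snapshots according to a retention policy
restic -r b2:my-bucket:nas-backup forget --keep-daily 7 --keep-weekly 4 --prune

The same commands go into the PowerShell script for Task Scheduler or a crontab entry, whichever OS is doing the uploading.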
|
![]() |
|
Proteus Jones posted:Sorry I misread. (removed all kinds of bad advice). I need to check something real quick and I'll update. This sounds promising, will try it once my 2nd NAS arrives shortly! Thanks!
|
![]() |
|
Furism posted:I'm sorry but I have little idea how to do that. How I use feature flags on my NAS4Free server? Are you at the same time saying I should be able to import into CentOS with no problem? Because I'd like to be pretty fucking sure of that. Just ran this on my Ubuntu box but should be similar on N4F: code:
Also keep in mind that a failed import is a non-destructive thing. If you really want you could probably boot CentOS on a live USB and attempt to import.
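For reference, listing a pool's feature flags looks something like this ("tank" is a placeholder for your pool name):

code:
# show every feature flag and whether it's disabled/enabled/active on the pool
zpool get all tank | grep feature@
# list the feature flags this particular ZFS build understands
zpool upgrade -v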
|
![]() |
|
IOwnCalculus posted:Just ran this on my Ubuntu box but should be similar on N4F: This is what I got: code:
I will boot a CentOS live CD and try to import the array. I'm told one should import by disk ID instead of /dev/sdX, is that right?
|
![]() |
|
Welp, woke up this morning to a hung FreeNAS screen, and the monitor on the box blank. A reboot later, and "Error 5: unretryable error" on the USB boot drive... 2 hours later, I had everything back except my Crashplan jail, which had been modified recently. Not too shabby.
|
![]() |
|
Furism posted:This is what I got: There's a page somewhere that breaks out all of the feature flags by what ZFS implementations support them. Multi vdev crash dump is one that isn't supported outside of BSD, but doesn't matter because it's not actually enabled on your pool. And yes, you need to import using by-id or by-uuid because the /dev/sdX assignments can change on every reboot.
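A rough sketch of the import, with "tank" standing in for the pool name (importing read-only first is a cheap way to confirm compatibility without writing anything):

code:
# scan for importable pools using stable identifiers instead of sdX names
zpool import -d /dev/disk/by-id
# import read-only first to make sure everything is readable, then re-import normally
zpool import -d /dev/disk/by-id -o readonly=on tank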
|
![]() |
|
Hi Goons, I recently read about SFP+. So I can just scour eBay for a pair of 10Gb SFP+ cards and have one in the PC and one in the NAS? From what I read the UBNT US-16-XG is supposedly the golden ticket, but like most UBNT gear it was rushed to market too soon and has a bunch of compatibility issues. Any other switch recommendations for home users? I just want to load all my photos and videos and run 10GbE.
|
![]() |
|
Thermopyle posted:Anyone have any thoughts on Duplicacy + Backblaze B2 as a Crashplan replacement? I was looking at that combo as well. Also considering duplicity. Gonna be lame going from $60/yr to $38/mo though!
|
![]() |
|
Thermopyle posted:Anyone have any thoughts on Duplicacy + Backblaze B2 as a Crashplan replacement? Apparently the GUI version is a bit limited, but that's being worked on. One issue that my setup would run into is that a lot of my stuff is mounted read-only (well, effectively, due to the permissions setup) to my backup VM, and Duplicacy wants to write into the top-level directory according to its docs.
|
![]() |
|
lurksion posted:Also looking at duplicacy, among all the other dupli* programs (there are way too many with that prefix), targeting Google Drive. Honest question for you and others here: is this something that could be done with git-annex? I've always wanted to play with it but never gotten around to it. One of the features it supports is "offline repositories", which is useful in some edge cases where you can't get immediate random access to a repo. I was looking at it for media library usage (e.g. when a DVD is not in the drive it can tell you "please insert disc 23") but it would also seem applicable if IO might result in data charges.
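For anyone unfamiliar, the offline-repository workflow in git-annex looks roughly like this (the file name is just an example):

code:
# record a file in the annex and sync location tracking between repositories
git annex add video.mkv
git annex sync
# later, on a machine that doesn't hold the content locally:
git annex whereis video.mkv   # lists which repositories (e.g. an offline drive) have a copy
git annex get video.mkv       # tells you to make one of those repositories available if none are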
|
![]() |
|
caberham posted:Hi Goons, I recently read about SFP+ I run two of the Mellanox ConnectX-2s that you can get on ebay for $14.50 a pop, but switches are still expensive. It sounds like you only have two machines right now; it probably makes sense to just connect the two directly with a generic copper SFP+ cable and wait on buying a switch until you want to add a third machine to the 10G network.
|
![]() |
|
lurksion posted:Apparently the GUI version is a bit limited, but that's being worked on. One issue that my setup would run into is that alot of my stuff is mounted as r/o (well, actually due to permissions setup) to my backup VM, and duplicacy wants to write into the toplevel dir according to it's docs. restic doesn't write into the source directories, if that helps. But no GUI.
|
![]() |
|
IOwnCalculus posted:There's a page somewhere that breaks out all of the feature flags by what ZFS implementations support them. Multi vdev crash dump is one that isn't supported outside of BSD, but doesn't matter because it's not actually enabled on your pool. Thanks, I'll check it out. Apparently FreeBSD and Linux use the same version so I might just get lucky. And like you said I can just live boot. Thanks a lot!
|
![]() |
|
Desuwa posted:I run two of the Mellanox ConnectX-2s that you can get on ebay for $14.50 a pop, but switches are still expensive. It sounds like you only have two machines right now; it probably makes sense to just connect the two directly with a generic copper SFP+ cable and wait on buying a switch until you want to add a third machine to the 10G network.
|
![]() |
|
caberham posted:Hi Goons, I recently read about SFP+ Mikrotik also has a couple of switches with SFP+ ports. If you only need to connect 2 devices there's the CSS326-24G-2S+RM and if you want to go all-out crazy on 10g they also just released the CRS317-1G-16S+RM (although I haven't actually seen it for sale anywhere yet). Both of these options are cheaper than the Ubiquiti switch. I'm running the slightly older CRS210-8G-2S+IN with 2x SFP+ ports at home and it's working great.
|
![]() |
|
Furism posted:restic doesn't write into the source directories, if that helps. But no GUI. Anybody else using restic? I'm looking for a linux CLI client to backup my NAS to B2. It looks pretty nice, and it has a lot of activity on github.
|
![]() |
|
fletcher posted:Anybody else using restic? I'm looking for a linux CLI client to backup my NAS to B2. It looks pretty nice, and it has a lot of activity on github. I haven't, but several things I've read like the following are the reason I haven't: https://github.com/gilbertchen/benchmarking
|
![]() |
|
fletcher posted:Anybody else using restic? I'm looking for a linux CLI client to backup my NAS to B2. It looks pretty nice, and it has a lot of activity on github. I'm using rclone with B2 and it is really good. Not what you're asking about, but figured I'd offer an alternative. I uploaded 8TB on my gigabit connection in about three days, with the speed manually throttled to 700Mbps. B2 likes lots of simultaneous connections, so I run 32 concurrent transfers and 64 checkers on my uploads, and I set B2 to retain only a single version since I have versioned backups at home. Figured it'd save space on the cloud side and keep costs low. I've been using rclone for a while now and it's been good to me.
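For the curious, the upload command looks roughly like this; the paths and bucket name are made up, and the keep-one-version part is a lifecycle rule on the B2 bucket rather than an rclone flag:

code:
# 32 parallel uploads, 64 checkers, throttled to roughly 700 Mbps
rclone sync /mnt/tank/backups b2:my-bucket/backups --transfers 32 --checkers 64 --bwlimit 87M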
|
![]() |
|
Thermopyle posted:Anyone have any thoughts on Duplicacy + Backblaze B2 as a Crashplan replacement? Have you done the math on it? Would it be cheaper for you than just the small business CrashPlan? The more research I do, the more it seems that just running CrashPlan Small Business on my NAS is still the most cost-effective way of dealing with it. Which still bothers me, because I feel like they don't deserve the money after making me give up the rest of my family plan and pay 5x as much.
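Rough math, using 2017 pricing, so double-check it: B2 storage is about $0.005/GB/month, so 1 TB runs roughly $5/month and 2 TB roughly $10/month, which is about where CrashPlan Small Business's $10/month per device sits. Below ~2 TB per machine, B2 plus a client like Duplicacy or restic comes out cheaper; above that (and ignoring B2's per-GB download fee if you ever need a big restore), CrashPlan starts to win on price.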
|
![]() |
|
Thermopyle posted:I haven't, but several things I've read like the following are the reason I haven't: These tests are outdated. For instance, restic uses blocks of 8 MB, not 1 MB. If they corrected the mistakes and re-ran with updated versions, that'd be interesting.
|
![]() |
|
Furism posted:These tests are outdated. For instance restic uses blocks of 8 Mb not 1. If they would correct the mistakes and run updated versions that'd be interesting. Well, they provide their code right there. I nominate you to rerun the tests for the benefit of the thread!
|
![]() |
|
Thermopyle posted:I haven't, but several things I've read like the following are the reason I haven't: What is the issue you have with it regarding those tests? Lack of compression?
|
![]() |
|
Hrm. Problems again. It would appear that one of the drives is acting up. These are all sector-checked 1TB drives out of the ewaste, so I'm not THAT surprised. ![]() However, the thing that confuses me is that (after a restart) the volume doesn't come back. When I try to import it, it stalls out, and I notice this on the IPMI. ![]() ![]() Why does the state go to Unknown? And why won't it import? Shouldn't the volume be able to survive this and maintain access to my datas? I have a fresh 1TB drive on the way, but I'm not sure what the procedure is in this case.
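The usual ZFS-level procedure looks roughly like this; FreeNAS normally drives all of it from the GUI, and the pool/device names here are placeholders:

code:
# see which vdev/disk is faulted and why
zpool status -v
# force the import if the pool still thinks it belongs to the previous boot
zpool import -f tank
# once the replacement 1TB drive is installed, resilver onto it
zpool replace tank gptid/old-disk-id gptid/new-disk-id
# and check the suspect drive's SMART data while you're in there
smartctl -a /dev/ada1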
|
![]() |
|
Erwin posted:What is the issue you have with it regarding those tests? Lack of compression? Mainly CPU usage.
|
![]() |
|
What's a good way of backing up data to another computer on the same network now that Crashplan is on the way out? All running Windows.
|
![]() |
|
I thought one of the things that Windows 10 Home changed compared to previous editions of Windows was that the backup utility can now use UNC paths (i.e. network paths)?
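One way to test that, assuming the image-backup tooling is actually present on your edition (the server and share names here are made up):

code:
wbadmin start backup -backupTarget:\\nas\backups -include:C: -allCritical -quiet

File History can also be pointed at a network location from the Settings app, which may be the simpler route for per-user files.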
|
![]() |
|
Thermopyle posted:Well, they provide their code right there. I nominate you to rerun the tests for the benefit of the thread! My day job is running tests against network services or network devices. I'm not doing that in my free time ![]()
|
![]() |
|
Thermopyle posted:I haven't, but several things I've read like the following are the reason I haven't: I was looking at Duplicacy as well but restic just seemed more professional, as far as open source projects go. restic documentation seemed much better and it's truly free open source software. The licensing for Duplicacy seems a little wonky. fletcher fucked around with this message at 19:32 on Aug 29, 2017 |
![]() |
|
Nulldevice posted:I'm using rclone with B2 and it is really good. Not what you're asking about, but figured I'd offer an alternative. I uploaded 8TB on my gigabit connection in about three days I guess with the speed manually throttled to 700Mbps. B2 likes lots of simultaneous connections, so I run 32 concurrent connections and 64 checkers on my uploads, and set the versions on the B2 side to a single version of retention since I have versioned backups at home. Figure it'd save space on the cloud side and keep costs low. I've been using rclone for a while now and it's been good to me. rclone came up in my searching as well and it looked quite good. More stars on github (and more open issues as well) than restic.
|
![]() |
|
Hughlander posted:Have you done the math on it? Would it be cheaper for you than just the small business CrashPlan? The more research I do the more it seems that just doing Small Business crashplan on my NAS is still the most cost effective way of dealing with it. Which still bothers me because I feel that they dont deserve the money for making me give up the rest of my family plan and paying 5x as much money. I'm at least going to take them up on the first year at 75% off since it actually ends up cheaper for one computer than I was paying on Crashplan home. It means I won't have any computer-to-computer backup by then but I'm sure I can figure something out in the next year, and by the time that last bit of paid Crashplan runs out, who knows what the market will be like.
|
![]() |
|
EVIL Gibson posted:I thought how it was worded it was opening and getting metadata out like this doc file was last opened by so-and-so which doesn't even make more sense if it is really doing block by block hashing and not a god damn entire file hash. I think you might be a bit confused about how rsync works? I'm not sure we're talking about the same things, though, but in any case I'll try to burble on a bit about rsync. One of the ways rsync saves network bandwidth is that it hashes blocks, not just whole-file. If you're syncing one version of a file to another one, this allows rsync to transmit only the changed blocks. The first step for rsync is to determine whether a file needs any syncing at all, then it has to figure out which blocks of it need syncing. Due to the way hashing algorithms work, you can compute per-block and whole-file hashes at the same time while passing over the data only once. I don't know that this is what rsync does, but it's what would make sense based on first principles. Both sides compute the hashes, then they compare the whole-file hashes, then if the file needs syncing they compare per-block hashes to determine the set of blocks which need to be copied over. In any case, file contents must be read into memory to compute hashes, and the resulting indirect memory usage (due to Linux caching all file reads) is what makes a system chew through a ton of RAM when rsyncing a large amount of data. This can happen even if the source and destination are sufficiently in sync that the amount of data moved is small. I didn't see anything in the OP to indicate that wossname the tool being talked about wasn't actually using rsync as the back-end. BobHoward fucked around with this message at 21:28 on Aug 29, 2017 |
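As a concrete illustration of the delta behavior described above (the paths are placeholders; over a network the delta-transfer algorithm is on by default, whereas local copies default to whole-file transfers):

code:
# force the rolling-checksum delta algorithm even for a local destination,
# update the destination file in place, and print transfer statistics
rsync -av --no-whole-file --inplace --stats /src/bigfile.img /dst/bigfile.img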
![]() |
|
BobHoward posted:I think you might be a bit confused about how rsync works? He thought the same thing I did when I read the way the original post was written. On a quick read it sounded like it was uncompressing the archive files.
|
![]() |
|
I feel like I can never pull the trigger on a NAS build... For a fairly small office with maybe 5-6 simultaneous users on average, 10-13 users at peak load, mostly accessing documents, occasionally reading media files (no transcoding), and at least 2 comps constantly writing to it (loggers), does a Xeon E5-1620v3 build with 32GB of RAM seem like overkill? Also, why are Reds impossible to get on Amazon now without third-party sellers? Are the Seagates (now called IronWolf??) decent enough?
|
![]() |
|
Gozinbulx posted:I feel like I can never pull the trigger on a NAS build... For something that small, unless you're talking about seriously high log volume, that's probably overkill. But on the flip side, it's not all that expensive to do a build like that. The CPU on mine is just a little below that and is ample.
|
![]() |
|
BobHoward posted:I think you might be a bit confused about how rsync works? (Everything I've already described.) I brought up rsync doing block-by-block hashing several times, and someone already said the original post could be read as the program doing weird things to the files.
|
![]() |
|
G-Prime posted:For something that small, unless you're talking about seriously high log volume, that's probably overkill. But on the flip side, it's not all that expensive to do a build like that. The CPU on mine is just a little below that and is ample. Yeah this. The upside to a Sandy Bridge-E is it's cheap. The downside to something that old is that newer parts may not be available (new ECC RAM is not the worst idea), and it does eat a lot more power compared to Haswell/later. I was looking at doing a used 2620v3 for my overkill homelab server. But you'd have to step up to an X99 board.
|
![]() |