DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Martytoof posted:

I don't plan on having the freenas machine doing anything but serving storage so I can't see running many services.

Your "this is silly" point is probably something like >16GB. You could probably go with 8GB and not have an issue. Unless you're trying to do dedup (which you almost certainly don't need in a home setting), or plan on running a bunch of VMs, you don't really need much RAM to run FreeNAS decently.

e; I'm assuming you're planning on hooking this up to your network with normal GigE stuff. If you wanted to get exotic (IB, 40Gig, etc.) you might get some benefit with 32GB. Maybe.

DrDork fucked around with this message at 01:14 on Oct 10, 2017

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money

Martytoof posted:

I'm in a position to throw probably 128GB or more memory in a Dell R710 I plan to use as a FreeNAS machine. It'll serve eight 2tb drives in some kind of arrangement I haven't thought about yet. Is there really any advantage to maxing out the memory or is there a point at which it'll just be pointless without a lot more storage to serve?

I don't plan on having the freenas machine doing anything but serving storage so I can't see running many services.

Whether this is a good idea or not really, really, really depends on how good a deal you're getting on that RAM.

Martytoof
Feb 25, 2003

 
 


The RAM is already sitting in my drawer, tangible spoils of war from a recent decommissioning.

I'll probably throw 32 into this server then.

I have a mostly-empty MD1200 that I can attach to the 710 for future expansion, but I don't see myself running it right now due to power concerns. I guess if I run out of TB at some point down the road...

Martytoof fucked around with this message at 01:26 on Oct 10, 2017

D. Ebdrup
Mar 13, 2009



TTerrible posted:

No fear of not having the RAM to turn dedup on. I can't think of anything else beyond that.
Even with Matt Ahrens' proposal for 1000x better dedup performance, presented as a talk at the OpenZFS 2017 DevSummit, I wouldn't turn dedup on just because I could. We're talking about something that's testable with 'zdb -S', so it should be tested instead of just turned on for shits and giggles.
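For anyone curious what "testable" means here: `zdb -S <pool>` simulates building a dedup table and prints a histogram that ends in a summary ratio. A minimal sketch of reading that ratio off a captured report (the summary line below is a made-up sample, and the ~1.2x cutoff is just a rule of thumb, not gospel):

```shell
# Hypothetical sketch: run `zdb -S <pool>` on a real system and look at its
# final summary line, which looks like the sample captured below.
summary='dedup = 1.05, compress = 1.32, copies = 1.00, dedup * compress / copies = 1.39'
# Pull out the dedup ratio (the first number on the line).
ratio=$(printf '%s\n' "$summary" | sed -n 's/^dedup = \([0-9.]*\),.*/\1/p')
# A live dedup table costs real RAM per unique block; below roughly 1.2x
# savings it's usually not worth enabling.
verdict=$(awk -v r="$ratio" 'BEGIN { print (r+0 >= 1.2) ? "worth testing further" : "skip dedup" }')
echo "$verdict"
```

On most home media pools the ratio comes back close to 1.0, which is exactly why it's worth simulating before committing RAM to it.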

Martytoof posted:

I have a mostly-empty MD1200 that I can attach to the 710 for future expansion but I don't see myself running it right now for power concerns. I guess if I run out of TB at some point down the road..
Just be aware that ZFS can't rebalance itself if you add an additional vdev to an existing pool - if you want it rebalanced (so that load is shared), you need to use zfs send | receive.
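To make the rebalance-by-rewrite concrete, it looks roughly like this. Untested sketch: it needs a real pool with enough free space to hold a second copy while it runs, and the pool, dataset, device, and snapshot names here are all hypothetical.

```shell
# Illustrative only -- requires a real ZFS pool; all names are made up.
zpool add tank raidz2 da8 da9 da10 da11      # new vdev; existing data stays where it was
zfs snapshot -r tank/media@rebalance         # freeze a point-in-time copy
# Rewriting via send/receive spreads the blocks across all vdevs.
zfs send -R tank/media@rebalance | zfs receive tank/media-new
# After verifying the copy, swap the new dataset into place:
zfs destroy -r tank/media
zfs rename tank/media-new tank/media
```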

His Divine Shadow
Aug 7, 2000

I'm not a fascist. I'm a priest. Fascists dress up in black and tell people what to do.


Looks like my Synology DS414j shat the bed. I only get a blinking blue power light and the light for HDD1 (though all slots are used). It doesn't matter if I remove the drives; no combination of anything seems to change the status, and it can't be found on the network...

Flipperwaldt
Nov 11, 2011

Won't somebody think of the starving hamsters in China?



I have an issue with my DS112j.

It has stopped sending out email notifications. A while ago, but I only noticed now.

I found this thread on the Synology forum, and seeing as I'm not in the camp that gets it fixed with an "authorize email account" button (because I don't have one), and removing the account, rebooting, and adding it back doesn't work, I might be one of the people where DSM's own PHP install is buggered.

So I tried to execute the instructions given here that apparently helped out the other guy. I don't have a clue what I'm doing, though, when it comes to the terminal. It probably didn't help that I don't currently have the multiple PHP installs going on (maybe I did in the past) that this how-to seems to take as its starting point. The php -m command doesn't return anything like what the how-to expects anyway. I tried installing the PHP 5.6 package through the GUI to see if it would somehow help, and it turns out I can't install any packages at all anymore. I tried reversing what I assume is a move operation back from /usr/bin/phpORIG to /usr/bin/php, but it says I don't have permission to do that.

Anyway, I'm in way over my head and who knows what other mess I've made trying to fix this without thinking to document it. I have a backup of my files and have made a backup file of the DSM settings.

I'm probably good to do a DSM reset, right? Do the thing described here as "reset to reinstall the operating system". Normally I wouldn't even need to use my backup, as my data folders stay in place? Then I can import the DSM settings backup to be pretty much back where I started, but hopefully with DSM's built-in PHP back in place?

Any massive misconceptions or misunderstandings on my part in the above?

Droo
Jun 25, 2003



Flipperwaldt posted:

Any massive misconceptions or misunderstandings on my part in the above?

I believe you are correct about what to do. You might want to consider updating the OS right after you reset and reinstall. Also, if I were in your place I would probably set everything up from scratch instead of importing the configuration file in case there is something screwed up in the config that is causing all the problems.

Flipperwaldt
Nov 11, 2011

Won't somebody think of the starving hamsters in China?



Droo posted:

I believe you are correct about what to do. You might want to consider updating the OS right after you reset and reinstall. Also, if I were in your place I would probably set everything up from scratch instead of importing the configuration file in case there is something screwed up in the config that is causing all the problems.
Thanks for the reassurance.

I have a backup job in HyperBackup that is a simple sync copy job, grandfathered in from before HyperBackup got tied into the file history thing, and that I can't recreate using the USB Copy package they added when that changed. I'm hoping this is something I can keep by using my old settings. The problem here is that new HyperBackup jobs want to back up to huge container files that can only be browsed through the file history browser, as far as I can make out from the documentation. This might be wrong, but I don't know. The USB Copy app won't let me select the volume as the root folder (in order to copy everything to the external disk) and also won't allow me to create multiple copy jobs with all the different shared folders going to the same target (ubshare1). Someone's gonna say rsync that shit instead if that's what you want, but as demonstrated, I'm pretty hopeless at the command line.

I'm also figuring that if it turns out it's necessary to start from scratch, I can just do the reset again.

Droo
Jun 25, 2003



Flipperwaldt posted:

Someone's gonna say rsync that shit instead if that's what you want, but as demonstrated, I'm pretty hopeless at the command line.

rsync can be pretty intimidating but it's also worth learning. If you are just trying to copy the entire volume1 to a USB drive, you can run:

code:
rsync --exclude='@eaDir' --include="*/" -avmP "/volume1/public/" "/volumeUSB1/usbshare" --delete --delete-excluded --dry-run
Delete the --dry-run to actually do it; otherwise you just see a preview of what it will copy. There are lots of syntax formats for this... for example, I rename some directories with (backupsX) to separate them for USB drives, so when I put the backupsX drive in the dock I run:

code:
rsync --exclude='@eaDir' -f'+ *backupsX*/***' -f'+ */' -f'- *' -avmPW "/volume1/public/" "/volumeUSB1/usbshare" --delete --delete-excluded

Flipperwaldt
Nov 11, 2011

Won't somebody think of the starving hamsters in China?



It's awesome you compiled that answer for me, but meanwhile I've gone from the frying pan into the fire, so experimenting with that is going to have to wait. But thanks.

The reset worked, but I still can't get email notifications to work. Why, I'll have to figure out at some point. On the plus side, installing packages works again so wahey.

Now, because I'm shit at reading and conceptualizing, I glossed over the fact that the DSM backup file doesn't contain packages (OK) or package settings (crap!). If I had realized that, I would have written down more. After restoring it anyway, only manually set scheduled tasks are imported to execute at the right time. The package-related ones are at seemingly random times. The S.M.A.R.T. check schedules are duplicated. Seems pretty sloppy.

CloudSync has kindly picked up where it left off. Nice.
Cloud Station Server had to be set up from scratch. Which means Cloud Station Backup on my other devices lost connection and had to be unlinked. Which means that despite all the files already being in the destination folder, everything is going to be copied over all over again.
HyperBackup starts from scratch as well. I now see a regular sync job is still one of the options in the new version; it's just out of sight at the bottom of the wizard. When I point it to the folder it used to point to, it prevents me from creating the job because "the folder is already in use by another job". Which of course is very perceptive, and would be helpful if it offered to let me import or otherwise edit that job, but it's invisible. So now I'm deleting the metadata from that folder and hoping it will let me point the job at that folder anyway, but that it'll still be smart enough to realize the bulk of the files are already there. I do not have high hopes for that. I suspect Media Server will want to re-index as well when I install it. It's all a bit much for this anemic CPU, and my brain hurts trying to think of how things were set up.

I've still got a couple of packages to go, but I really have got to let things finish up before I add more to the workload.

Basically, the earlier php fuck up has probably been fixed, which is good. Otherwise this wouldn't have been worth it at all.

Flipperwaldt
Nov 11, 2011

Won't somebody think of the starving hamsters in China?



Well, the email notifications not working was Outlook/Hotmail specific, so maybe something changed on Microsoft's end. It works perfectly with Gmail or even ISP email settings. This I should have verified first.

I've still got a couple of days of the backup job running, copying 2TB over USB 2.0, but otherwise everything is back where it was. Peeved that HyperBackup has no way of importing old job configurations, or even just reusing a folder you've used before. That's dumb. Some of it is clearly stored somewhere and the package interacts with it, but you can't even clear it.

EL BROMANCE
Jun 10, 2006

COWABUNGA DUDES!



Is it generally OK to run an external drive on a USB 3 hub? It's a powered Anker one; it only has a sound card on it, along with some flash drives and my mouse. No other drives.

I've run out of USB 3 ports on the back of my Mac mini due to other externals. At some point I'll get a NAS or similar, as this will just be a Plex storage drive.

SlowBloke
Aug 14, 2017


Hi, I've just finished setting up my new NAS (a QNAP 1253bu) and wanted to ask, in case there are any QNAP users here:
Are there any advantages/disadvantages to keeping a single data volume rather than setting up multiple ones by media type?

Nulldevice
Jun 17, 2006


Toilet Rascal

SlowBloke posted:

Hi, I've just finished setting up my new NAS (a QNAP 1253bu) and wanted to ask, in case there are any QNAP users here:
Are there any advantages/disadvantages to keeping a single data volume rather than setting up multiple ones by media type?

What if you run out of room in one of your media partitions for a particular type of media? Kinda leaves you fucked. Just stick with one large volume and use folders and permissions to manage everything. Less chance of shit going boom. (oh, and backups, always have backups. raid is not backup.)

Photex
Apr 6, 2009





Details:

4 Disks in unRAID
1st Disk is Parity
2nd Disk SSD Cache
2nd Disk 95% Full
3rd Disk Brand new Empty

Any suggestions on how I should rebalance the disk usage on my unRAID server? I had a drive go two weeks ago, and I took the opportunity to upgrade my NAS, but before the new disk and server arrived I just moved everything and (thankfully) had enough room on a good drive.

Should I just... leave everything on Disk 2 and do, like, symlinks when I recreate the user share, or should I use rebalance, do 50/50, and let unRAID handle file location?

SlowBloke
Aug 14, 2017


Nulldevice posted:

What if you run out of room in one of your media partitions for a particular type of media? Kinda leaves you fucked. Just stick with one large volume and use folders and permissions to manage everything. Less chance of shit going boom. (oh, and backups, always have backups. raid is not backup.)

Yeah, that's how I set it up. Is there a way to set granular quotas? I only see a general size, not per share.

Jaded Burnout
Jul 10, 2004




I have a Synology NAS running SHR. It's got a 40TB capacity and is a little over half full, and I'm getting a little wary of full data loss. I'm going to add a hot spare for a little added redundancy, but I'd also like to arrange a proper cold storage backup.

I've not dealt with tape backups since 2007, are they still the done thing? Any recommendations for something where I can point it at the NAS share and feed it tapes? Something which can manage deltas etc? I won't be needing to restore unless there's a catastrophic failure / loss / house burns down.

SlowBloke
Aug 14, 2017


Jaded Burnout posted:

I have a Synology NAS running SHR. It's got a 40TB capacity and a little over half full, and I'm getting a little wary of full data loss. I'm going to add a hot spare for a little added redundancy but I'd also like to arrange a proper cold storage backup.

I've not dealt with tape backups since 2007, are they still the done thing? Any recommendations for something where I can point it at the NAS share and feed it tapes? Something which can manage deltas etc? I won't be needing to restore unless there's a catastrophic failure / loss / house burns down.

Even the cheapest LTO drive (and tapes) able to provide a decent RTO (LTO5+) with all your data will be more expensive than a new Synology chassis. You're also going to need a broker for data cartridge management, which means an extra server (plus software) to handle the tapes, ballooning the price.

You are better off buying a new chassis, populating it with disks, migrating the data, and shipping the old Synology to a colo/safe site.

Jaded Burnout
Jul 10, 2004




SlowBloke posted:

Even the cheapest LTO drive (and tapes) able to provide a decent RTO (LTO5+) with all your data will be more expensive than a new Synology chassis. You're also going to need a broker for data cartridge management, which means an extra server (plus software) to handle the tapes, ballooning the price.

You are better off buying a new chassis, populating it with disks, migrating the data, and shipping the old Synology to a colo/safe site.

The chassis is never the expensive part. The ~10 6TB drives are.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA


Jaded Burnout posted:

I have a Synology NAS running SHR. It's got a 40TB capacity and a little over half full, and I'm getting a little wary of full data loss. I'm going to add a hot spare for a little added redundancy but I'd also like to arrange a proper cold storage backup.

I've not dealt with tape backups since 2007, are they still the done thing? Any recommendations for something where I can point it at the NAS share and feed it tapes? Something which can manage deltas etc? I won't be needing to restore unless there's a catastrophic failure / loss / house burns down.

I use a 16-tape LTO5 autoloader and it works pretty decently. You can use Veeam's free backup software to do the backups, and you can run the tape deck off a desktop machine easily enough. It takes fucking forever to run; you'll probably average 60-80MB/sec off your array, and depending on its size, it could take you several weeks to finish. I do my backups more or less quarterly simply because it takes like 15 days to complete the full tape set.

You could put together a pretty good setup getting a used autoloader and tape drive off ebay, then spending the money on the LTO-tapes in bulk. In aggregate the solution costs are pretty similar, though a tape is a crapload more robust than a spinning disk is.

Jaded Burnout
Jul 10, 2004




OK, I'll keep investigating. I'm going to put a racked server in place anyway so that's not an extra cost. Maybe I should just stick 4 big HDDs in it as a JBOD/LVM and run a weekly rsync.

Also considered grabbing a disk duplicator and, every month or so, shutting down the NAS and doing a full copy, but none of the ones I can find claim to support more than 4TB disks, and while fast to restore, it would require buying more disks.

IOwnCalculus
Apr 2, 2003





Jaded Burnout posted:

OK, I'll keep investigating. I'm going to put a racked server in place anyway so that's not an extra cost. Maybe I should just stick 4 big HDDs in it as a JBOD/LVM and run a weekly rsync.

Do this. You don't need the same level of HDD redundancy out of your backup set anyway. Maybe do a raidz / raid5 just so a single drive doesn't completely hose the array.

Jaded Burnout
Jul 10, 2004




IOwnCalculus posted:

Do this. You don't need the same level of HDD redundancy out of your backup set anyway. Maybe do a raidz / raid5 just so a single drive doesn't completely hose the array.

It does remove the ability to easily store the backups offsite but let's be real there's only a 15% chance I'm going to do that anyway.

Thanks Ants
May 21, 2004

Bless You Ants, Blants



Fun Shoe

DSM 6.2 looks like it will have the option to virtualise it (rather than virtual instances still on Synology hardware), presumably for a license fee. So if you have a server and a disk shelf kicking around then that might be an option, as you can still use all the Synology tools to sync the data.

Jaded Burnout
Jul 10, 2004




Thanks Ants posted:

DSM 6.2 looks like it will have the option to virtualise it (rather than virtual instances still on Synology hardware), presumably for a license fee. So if you have a server and a disk shelf kicking around then that might be an option, as you can still use all the Synology tools to sync the data.

Interesting. Thanks Thanks Ants. Tthhants.

SlowBloke
Aug 14, 2017


Jaded Burnout posted:

The chassis is never the expensive part. The ~10 6TB drives are.

https://www.amazon.com/Red-10TB-Har...rds=wd+red+10tb

Six of those will provide you 40TB of storage, like your current NAS (in a RAID6 configuration).

We recently completed a remote DR data storage project, and rather than setting up the logistics of an additional set of tapes to handle and a dedicated LTO6 drive to be within a reasonable RTO for our data, we bought a QNAP 1271U and twelve 4TB Red Pros. We still spent less than what tape would have cost us.

Unless you need a very specific RTO objective or have to adhere to specific regulations, I would avoid tape (or, god forbid, RDX). Just set up a replication target with a JBOD array and a cheap server chassis; it's easier and less cumbersome.
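As a sanity check on the 40TB figure above: RAID6 spends two drives' worth of capacity on parity, so usable space is (drives − 2) × drive size.

```shell
# RAID6 usable capacity: two drives' worth goes to parity.
n=6; size_tb=10
usable=$(( (n - 2) * size_tb ))
echo "${usable}TB usable"
```

The same arithmetic shows why the twelve-bay 4TB build also lands around 40TB usable in RAID6.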

SlowBloke fucked around with this message at 19:21 on Oct 13, 2017

Martytoof
Feb 25, 2003

 
 


Is anyone here using a Perc 6/iR with FreeNAS without having first crossflashed it to IT firmware?

As soon as I flash the Perc with IT firmware, my Dell R710 complains that an unsupported card is sitting in the storage PCIe slot, as I guess it has whitelisted the Perc's PCI ID or something. The stock 6/iR is fine. FreeNAS seems to see the disks without issue, but I haven't done any extensive testing yet.

I could probably just get away with putting the flashed 6/iR into a different PCIe slot, but then I'd need to get longer backplane cables and I don't really want to go to the trouble right now. I'm not sure what advantages the IT firmware will get me over the stock 6/iR, so if it's going to be a disaster leaving it on stock FW, I'd rather know that right now before I build a system around it.

Photex
Apr 6, 2009





Martytoof posted:

Is anyone here using a Perc 6/iR with FreeNAS without having first crossflashed it to IT firmware?

As soon as I flash the Perc with IT firmware, my Dell R710 complains that an unsupported card is sitting in the storage PCIe slot, as I guess it has whitelisted the Perc's PCI ID or something. The stock 6/iR is fine. FreeNAS seems to see the disks without issue, but I haven't done any extensive testing yet.

I could probably just get away with putting the flashed 6/iR into a different PCI slot but then I need to get longer backplane cables and I don't really want to go to the trouble right now. I'm not sure what advantages the IT firmware will get me over the 6/iR so if it's going to be a disaster leaving it stock FW then I'd rather know that right now before I build a system around it.

Known issue; you can't use the original slot, you have to use one of the other slots.

SamDabbers
May 26, 2003



Fallen Rib

Martytoof posted:

I'm not sure what advantages the IT firmware will get me over the stock 6/iR, so if it's going to be a disaster leaving it on stock FW, I'd rather know that right now before I build a system around it.

It may be different with the PERC 6/iR, but the H310's IR firmware has a much smaller queue depth than the IT firmware. It shouldn't create a disaster if you present single-drive "arrays" on IR firmware to ZFS, but best practice is to use IT firmware. Since there would be no RAID code in the I/O processing path, it supposedly improves performance.

Martytoof
Feb 25, 2003

 
 


Hmm, ok thanks guys. I'll hold off until I can get some SAS backplane cables in.

Photex posted:

Known issue; you can't use the original slot, you have to use one of the other slots.

Bummer. I was hoping to use the storage slot. I have a two-slot SATA SSD caddy in one side and wanted to put in 10GbE and SAS expander cards as well, but now I'll probably have to be a bit pickier.

Hadlock
Nov 9, 2004





OK, so a couple of years ago I was at a Windows shop and, sort of out of boredom, sort of out of necessity to sharpen my skills there, I went with a Hyper-V server and Storage Spaces (using their "parity" RAID-1/5 mirror-type solution). During my move recently the Hyper-V thing went tits up (I can probably recover the data, not worried about that), but I'm realizing that doing data backup + home VM lab on the same box is dumb for my purposes.

Also, due to living arrangements, I'm looking to downsize to a laptop as my primary, only spin up the VM lab box periodically, and split out the mirrored storage to a Thunderbolt 3 drive, since my new work laptop and (soon to purchase) new personal laptop will both have TB3/USB-C, and why not just buy into USB-C/TB3 now.

Current storage needs are about 3.5-4TB. I am thinking about getting a hardware RAID dual- or quad-drive device and running it in some sort of mirrored mode. Looking at the Akitio Thunder3 Duo Pro and slapping two 8 or 10TB drives in there. That should allow me to plug it into a MacBook Pro or a Windows/Linux ThinkPad laptop, right?

https://www.akitio.com/desktop-storage/thunder3-duo-pro

At one point a NAS was a pretty cool idea with multiple computers, an XBMC/Kodi box, etc., but in reality these days I just have my laptop and phone, and if I need some sort of low-performance NAS solution I have a couple of Raspberry Pis or laptops floating around.

Is a mirrored RAID TB3 going to meet my needs?

Star War Sex Parrot
Oct 2, 2003



Muldoon

That's basically what I'm down to at this point: dual-bay Thunderbolt enclosure with a pair of mirrored 12TB drives in it. It does the trick for bulk storage when I want it connected to my MBP.

EconOutlines
Jul 3, 2004



That's why I went the Synology route vs rolling my own. A DS916+ doesn't take up too much space, plus I can max out the drives at 10TB each (when they become affordable... using 8TB Reds now): low power, small footprint, and a solid OS.

If I only had a laptop, well, a modern macOS laptop, the main issue would be how there are no reliable docks or how different dongles accept or recognize different devices. I can see your desire for TB3 in this instance.

Also, the Duo Pro maxes out at 8TB/drive, so just don't make the mistake of buying the 10TBs.

Mr. Crow
May 22, 2008

Snap City mayor for life


Late to the party: what's the recommended backup plan now that CrashPlan is gone?

Hadlock
Nov 9, 2004





Star War Sex Parrot posted:

That's basically what I'm down to at this point: dual-bay Thunderbolt enclosure with a pair of mirrored 12TB drives in it. It does the trick for bulk storage when I want it connected to my MBP.

What unit did you go with? Are you using hardware or software mirroring?

I am leaning towards hardware RAID, as it should be a lot less of a headache to swap between Win/Mac/Linux; the last thing I want to deal with is being locked into a specific OS with my data tied to a single machine.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Mr. Crow posted:

Late to the party: what's the recommended backup plan now that CrashPlan is gone?

Backblaze seems to be the only one still doing unlimited backup for ~$50/yr. There are some annoyances with using the client and setting it up, but it doesn't throttle and it supports large files, so it's the best option I'm aware of right now.

DJ Burette
Jan 6, 2010


I've gone with iDrive; 2TB is pretty cheap, and they're running 90% off the first year if you're migrating from somewhere else. They also let you have as many clients as you want, which is far more important to me than unlimited backup. I checked my CrashPlan and I was only using 300GB, but I did have 5 computers backing up, which would be pretty expensive on most of the other options.

SlowBloke
Aug 14, 2017


Hadlock posted:

OK, so a couple of years ago I was at a Windows shop and, sort of out of boredom, sort of out of necessity to sharpen my skills there, I went with a Hyper-V server and Storage Spaces (using their "parity" RAID-1/5 mirror-type solution). During my move recently the Hyper-V thing went tits up (I can probably recover the data, not worried about that), but I'm realizing that doing data backup + home VM lab on the same box is dumb for my purposes.

Also, due to living arrangements, I'm looking to downsize to a laptop as my primary, only spin up the VM lab box periodically, and split out the mirrored storage to a Thunderbolt 3 drive, since my new work laptop and (soon to purchase) new personal laptop will both have TB3/USB-C, and why not just buy into USB-C/TB3 now.

Current storage needs are about 3.5-4TB, I am thinking about getting a hardware RAID dual or quad drive device, run it in some sort of mirrored mode. Looking at the Akitio Thunder3 Duo Pro and slapping two 8 or 10TB drives in there. That should allow me to plug it in to a Macbook Pro or Windows/Linux Thinkpad laptop, right?

https://www.akitio.com/desktop-storage/thunder3-duo-pro

At one point NAS was a pretty cool idea with multiple computers, a xbmc/kodi box etc, but in reality these days, I just have my laptop and phone, if I need some sort of low performance NAS solution I have a couple of Raspberry Pis or Laptops floating around.

Is a mirrored RAID TB3 going to meet my needs?

FYI, QNAP has just released their low-end NAS with TB3 (https://www.qnap.com/en/product/ts-453bt3). Prices are higher than the Akitio, but it's a lot more flexible. For instance, you can license Nakivo (https://www.nakivo.com/how-to-buy/v...yper-v-pricing/) to back up your Hyper-V host to the NAS.

EL BROMANCE
Jun 10, 2006

COWABUNGA DUDES!



DrDork posted:

Backblaze seems to be the only one still doing unlimited backup for ~$50/yr. There are some annoyances with using the client and setting it up, but it doesn't throttle and it supports large files, so it's the best option I'm aware of right now.

I thought Backblaze would suit me fine, and 'luckily' I found out within a few months that nope, it's absolute garbage.

I spent one of my Comcast 'go over 1TB transfer without consequence' months doing my main backup, and then not long after, the hurricane hit, so I powered down my stuff and moved it. I brought it back up a little later and the SSD had died, which held about 5% of my backup. I use Time Machine but lost a day or so; nothing major. Backblaze informed me that the checksums between my machine and their backup are different, so they're in a 'safety freeze' and can no longer be dealt with. So I either leave that backup exactly as is and keep paying them, or I delete it and start again.

So I guess it's fine if you have bandwidth to burn, like business lines, but not for a lot of people on US consumer lines, or if you're not really doing much transfer. I always hated the CrashPlan software, but it wasn't such an ass as this. It knew I had multiple drives backed up, but a few files literally broke their entire system.

Star War Sex Parrot
Oct 2, 2003



Muldoon

Hadlock posted:

What unit did you go with? Are you using hardware or software mirroring?

I am leaning towards hardware raid as it should be a lot less of a headache to swap between win/mac/linux, the last thing I want to deal with is being locked in to a specific OS with my data tied to a single machine.
I’m just using software RAID in an old WD dual bay Thunderbolt enclosure. I don’t have to worry about swapping between OSes so it’s an APFS mirror.

Star War Sex Parrot fucked around with this message at 15:18 on Oct 18, 2017
