Have you tried the Synology Cloud Station app?
I have tried it multiple times. Stay away. Nothing good to say about it. It doesn't sync cleanly, it's very buggy, and it eats up disk space due to some insane file-versioning code.
Finally got my server all set up and FreeNAS installed. What's the recommended way to use datasets? I hear they're like separate filesystems, so copying between them introduces some extra overhead, but I'd like some fine-grained control over which folders can be accessed. I wouldn't be crippling any functionality if I made separate datasets for downloads, videos, music, pictures, etc., would I?
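For illustration, a per-share dataset layout only takes a handful of commands; the pool name "tank" and the quota below are placeholders, and on FreeNAS you would normally create the same datasets from the Storage screen in the GUI rather than the shell:

```
# One dataset per top-level share, each with its own properties and permissions
zfs create tank/downloads
zfs create tank/media
zfs create tank/media/videos
zfs create tank/media/music
zfs create tank/media/pictures

# Per-dataset knobs don't affect siblings, e.g. cap the downloads dataset
zfs set quota=500G tank/downloads
```

Moving files between datasets is a real copy rather than a rename, which is the overhead mentioned above, but day-to-day sharing and snapshotting work the same as with a single dataset.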
Ended up picking up the QNAP TS-451 at Fry's on Saturday. They had it for $359, down from the $479 or so at other retailers. Set it up as RAID 5 with 4 x 3 TB Hitachi/HGST drives. Working well so far: I did a large 2 TB transfer and saw a sustained transfer speed of 100 MB/s. I'm also thinking about using it with iSCSI for a little VMware lab. The system I was using before was a Debian machine running an mdadm RAID 1 array. I'm pretty sure QNAP is using the same stuff on the back end, but it's much simpler to manage through their interface.
Seagate 5TB Externals for $129 on Amazon. Tempted to buy one and take off the external case. http://www.amazon.com/Seagate-Expan...177402741770316
89 posted: Seagate 5TB Externals for $129 on Amazon. Tempted to buy one and take off the external case.

Showing as $149 for me.
I've been playing with FreeNAS on an old server at work that has a bunch of RAM and wasn't being used for anything productive. Is it just me, or are a lot of things maddening? Half of what I've looked into has me connecting over SSH and leaves me wondering why I have a web front end at all. And what good reason is there for a 2 GB swap partition on every disk? I don't even know what question I'm asking.
89 posted: Seagate 5TB Externals for $129 on Amazon. Tempted to buy one and take off the external case.

I thought the drives they stick in those external things are crap and should be avoided?
thebigcow posted: I've been playing with FreeNAS on an old server at work that has a bunch of ram and wasn't being used for anything productive. Is it just me or are a lot of things maddening? Half of what I've looked into all have me connecting with ssh and leaving me wondering why I have a web front end at all. And what good reason is there for a 2gb swap partition on every disc?

This was not my experience at all. I don't even have SSH enabled on my FreeNAS server. I always thought FreeNAS had some reason for the swap partition thing, and I guess I just shrugged because it's only 2 GB.
89 posted: Seagate 5TB Externals for $129 on Amazon. Tempted to buy one and take off the external case.

This probably uses shingled magnetic recording, doesn't it?

thebigcow posted: I've been playing with FreeNAS on an old server at work that has a bunch of ram and wasn't being used for anything productive. Is it just me or are a lot of things maddening? Half of what I've looked into all have me connecting with ssh and leaving me wondering why I have a web front end at all. And what good reason is there for a 2gb swap partition on every disc?

The swap partition just seems to be there as insurance, since the OS and cache eat so much memory. I've had mine set up for a few days and don't think I've swapped yet; RAM usage seems pretty stable, consistently hovering around the 12 GB mark (I have 16). I haven't had to use the CLI for anything yet, though I can see myself learning a few diagnostic commands.

The web interface seems very comprehensive but hardly intuitive; the dialog for scheduling tasks, for example, presents a large number of options that are laid out completely ambiguously. If I set a task to run on the 15th of every month and only tick 'Sunday', will it run only if the 15th falls on a Sunday, or will it run on the Sunday closest to the 15th? Turns out it's the former, which really isn't telegraphed at all in the UI.

Also, I'm probably going to have to go into the CLI to futz around with permissions to get a downloader plugin write access to one of my datasets. It seems odd that something like this wouldn't be in the GUI and instead requires you to mess around with user and group IDs on a command line. The sanctioned way to do this through the GUI is just to grant everyone access, which seems very strange.

Generic Monk fucked around with this message at 17:35 on Feb 11, 2015
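For what it's worth, the CLI side of that permissions fix usually boils down to handing the dataset to whatever numeric UID/GID the plugin runs as inside its jail; the UID 816 and the dataset path below are made-up examples, so check the plugin's actual user first:

```
# Give the plugin's jail user ownership of the dataset it needs to write to
chown -R 816:816 /mnt/tank/downloads
chmod -R 775 /mnt/tank/downloads
```

The dataset also has to be mounted into the plugin's jail (via the jail's Add Storage option) before the ownership change does anything useful.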
MMD3 posted: A question regarding sync'ing software, hopefully this is a good place to ask.

I do something similar. I really wish there were better controls within Lightroom for roaming devices like this that want to share a catalog across machines.

I use btsync (http://www.getsync.com/) to sync the Lightroom folder across machines (for me that includes the catalog and previews). I share this between my desktop, a server (as a backup location), and my MacBook Air. You're right that you can't have two copies of LR accessing the catalog at the same time, and LR will look for a lock file (which does get synced) to try to avoid this. So start the catalog on the desktop, make changes, wait for those to sync (under a minute), then open the same catalog on the laptop and continue. If you have the RAWs on a share, you'll want to make sure they're at the same path (//server/volume) on both devices. This syncs everything, so even your LR settings will come across.

To get this going, just select the LR folder on both machines and choose to sync those. Remember that one of those should be EMPTY - you don't want to sync both catalogs over each other. I just created a new LR folder on my MacBook and renamed the old one.

This isn't a perfect solution, but it does work. You could even open a BTSync port on your firewall and allow your MacBook to sync those files from anywhere.
My problem with the swap partition isn't the 2 GB per disk; it's that, as set up, if anything is using the swap on a particular disk and that disk dies, that swap is gone and a program goes poof.

I also had a problem where I thought my pool had been made with ashift=9, but it turns out something FreeNAS did made zdb not report information on anything but the boot device; my pool was ashift=12 the whole time.
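That zdb quirk is apparently because FreeNAS keeps its pool cache file in a non-standard location, so zdb has to be pointed at it explicitly; a rough sketch, assuming the /data/zfs/zpool.cache path that FreeNAS 9.x is generally reported to use:

```
# Dump the cached pool configs (not just the boot pool) and pull out the ashift values
zdb -U /data/zfs/zpool.cache | grep ashift
```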
Yeah, the FreeNAS UI is not very good at a lot of things like that. I mostly use it because every time I upgrade I get a somewhat properly configured set of services and up-to-date versions of things like SMB, AFP, and iSCSI, which is kind of handy compared to maintaining them by my usual means. Once I get the time, though, I'll probably configure everything with Chef and manage my systems that way.
So my work keeps throwing out perfectly good hardware. I've got pretty much everything I need to build an i7 (socket 1366) server with dual gigabit NICs (which hopefully I can use to prioritize certain appliances) and 8 SATA ports on the motherboard. There are also 7 x 1 TB drives just sitting there waiting to go to e-waste. I'd be stupid to let all this go to waste, right?

I'm tempted to try my hand at FreeNAS, but I would like to slowly replace drives with larger sizes, which SHR handles more gracefully (or effectively) than RAID 5. Anyone rolling their own Synology solution? Or is FreeNAS still my best bet?

If RAID 5 on FreeNAS is my only option, how does it handle a slow space upgrade? Going by Synology's RAID calculator, would I just replace the drives one by one until I have them all up to 2 TB, and then...? Do I have to reconfigure the NAS, or will the new space automatically be utilized?

Ziploc fucked around with this message at 18:36 on Feb 12, 2015
ZFS does automagically adjust the vdev size according to the smallest drive in it. Once the last 1 TB drive has been replaced with a 2 TB one, the vdev they're in bumps its size. I'd suggest RAIDZ2, though (pretty much RAID 6, two drive failures), if you go with FreeNAS and seven drives.
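For a rough idea of what a seven-wide RAIDZ2 looks like at creation time; the pool and device names are placeholders, and on FreeNAS the volume manager in the GUI would do this for you:

```
# Any two of the seven disks can fail without losing the pool;
# usable space is roughly five disks' worth
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6
```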
It only does this if autoexpand is enabled for the pool (before you start replacing disks), but if the pool was created anywhere after ZFS v15 (I think?), it has this feature enabled by default.

To check, type "zpool get autoexpand", which gives you a list of each pool and whether it's enabled or not. To enable it for the pool named tank, type "zpool set autoexpand=on tank".

EDIT: ↓ That's fucking stupid.

D. Ebdrup fucked around with this message at 19:13 on Feb 15, 2015
Last public common format between OpenZFS and Oracle was v28. However... I've checked my shit here and it turns out that FreeNAS does not auto-enable it.
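Putting those two posts together, the disk-by-disk upgrade asked about above would look roughly like this from the shell; the pool and device names are placeholders, and FreeNAS would normally drive the offline/replace steps from the Volume Status page in the GUI:

```
zpool set autoexpand=on tank   # FreeNAS doesn't enable this for you
zpool offline tank da2         # take one 1TB member out of the vdev
# physically swap it for a 2TB disk in the same bay, then:
zpool replace tank da2
zpool status tank              # wait for the resilver to finish before touching the next disk
# repeat for each remaining 1TB drive; the vdev grows once the last one is replaced
```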
This might be a better thread for this question. I basically just want to cram all my drives into a small, quiet box so it can live near the router in our living room and I don't need to leave a desktop full of drives running. I'm proposing throwing a Q1900-ITX into a Fractal Node 304 with FreeNAS and ZFS. Would I be fine with 8 GB of non-ECC RAM? People seem pretty insistent that you'd better go 16 GB ECC if you're using ZFS, and if there's a good reason for that I would switch to an Avoton board like the C2550D4I for ECC support.
There's little reason not to use ECC when it's such a small premium overall. The Lenovo ThinkServers can all use ECC RAM and are like $200+ (yes, an i3 can use ECC RAM; look up the Intel ARK entry if you don't believe me).

Note that ECC is handy in general for detecting and correcting bit flips, which covers most cases of that kind of corruption. ZFS is not any more susceptible to problems than anything else, but unlike a lot of RAID setups it can potentially write errors back to disk, and it performs a whole lot more transactions than hardware RAID or other software RAID. But if you're trying to save maybe $50 - $100 or so on a build (which is actually a false dichotomy), you might not want to bother with ZFS at all and just use mdadm or whatever BS software RAID is on your motherboard.

I'm selling my NAS based on the NSC-800 case soon, so maybe you're up for inheriting it, I dunno.
The reason for ECC is the off chance of a bit flip messing with your data while writes are buffered. By default ZFS holds writes for up to 30 seconds, depending on memory pressure, before they're written to disk; FreeNAS has that tuned down to 5 seconds. The same applies the other way around: once data is cached in the ARC, it isn't verified against its checksum anymore, IIRC.
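For the curious, that flush interval is exposed as a FreeBSD sysctl; the value in the second line is only an illustration, not a recommendation:

```
# How long ZFS buffers a transaction group before forcing it out to disk (seconds)
sysctl vfs.zfs.txg.timeout
# Example of changing it, e.g. back to the old 30-second upstream default
sysctl vfs.zfs.txg.timeout=30
```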
Potentially stupid question. I'm not very knowledgeable about networks and use a basic NAS. I have a 2-bay Synology NAS plugged straight into my iMac via Ethernet. I got my hands on a 4-bay WD NAS on the cheap, but the iMac only has one Ethernet port. Is it possible to get a splitter-type cable so I can use both in the same room? The router is in another room and there are no spare power points.

Main reason: I'd like a fast RAID for working with video on one, and the other as the "store shit I'm done with and may need quick access to during the day" box. I'm probably not doing this the smart way. Any advice?
the_lion posted: Any advice?

You could get a cheap switch; gigabit switches start at $20 on Amazon, which is a bit steep, but they're commodity items, so if you can find one cheaper somewhere else it should be fine.

The only thing that's weird about your situation, to me, is that it'd normally make a lot more sense to plug everything into the router, including the iMac via Ethernet. I assume you're just in a situation where running an Ethernet cable from your computer to your router is impractical. Otherwise, this is the "smart" way to attach more than one NAS to one computer.
Total newb question. Do these devices allow you to stream music stored on them through the web?
/\ Yes.

Does Google Cloud Print not work with XPenology? Everything immediately goes to "Printed" in the Google queue, but it clearly hasn't printed (even when the printer isn't connected). I can't find any info; I'm just wondering if maybe there's an incompatibility and trying to troubleshoot will be a waste of time. If not, any troubleshooting suggestions?
I've been doing some research, but I haven't been able to find an answer: with a run-of-the-mill Synology NAS (for example a DS215j), I'm guessing it installs some custom backup software on your computer. Is it possible to configure it in such a way that the NAS powers on automatically when my computer powers on?
busfahrer posted: I've been doing some research, but I haven't been able to find an answer:

Synology's answer for managed backups is Cloud Station, though it seems to eat a lot of CPU on the unit. If you have a Mac, the NAS can also act as a Time Machine target for your backups. Another option is to create a shared folder or set up an iSCSI drive on the Synology that can be mapped to your computer and used as a target for whatever backup program you want (CrashPlan, etc.).

You should be able to set up the unit to power on with your computer using Wake-on-LAN (WOL) functionality. Synology has a list of supported devices on their site: https://www.synology.com/en-uk/knowledgebase/faq/437

However, this means that you'll also need to remember to shut the unit off when you turn off your computer.
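If you go the WOL route, waking the unit from the PC side is just a matter of sending a magic packet to its MAC address from a startup or login script; the `wakeonlan` tool used here is one common client, and the MAC address is made up, so substitute your unit's own (WOL also has to be enabled in DSM first):

```
# Broadcast a Wake-on-LAN magic packet to the NAS's network card
wakeonlan 00:11:32:12:34:56
```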
busfahrer posted:I've been doing some research, but I haven't been able to find an answer:
Desuwa posted: You could get a cheap switch, gigabit switches start at $20 on Amazon which is a bit steep, but they're commodity items so if you can find one cheaper somewhere else it should be fine.

Gotcha. This was helpful, thanks a heap!
I don't see any reason to power down a Synology. It uses little power and will spin the drives down as needed.
Is there any advantage to Synology's specialized RAID software with only two disks? It looks like it's just RAID 1 on their site, but it's a very general overview.

I would like to replace a FreeNAS box with something simpler, requiring less maintenance and less electricity, and taking up a smaller footprint in my office area. I'm looking for a 2-disk device with RAID 1; hot swap would be nice, but it's not mandatory. I don't need to transcode video or anything, I just need a simple, stable thing for regular backups of two PCs (mostly source code and some larger raster data, along with music and movies). My wife does need to be able to have a separate partition that is encrypted, but it doesn't need to be any sort of high-performance storage for either of us.

If there's no advantage to Synology's offerings, other than specific user interface items, would something from QNAP (the TS-231, for example) make any difference? I'd like to avoid shelling out $400 for something that a $200 device can handle just fine.
SopWATh posted: I just need a simple, stable thing for regular backups for two PCs (mostly source code and some larger raster data, along with music and movies) My wife does need to be able to have a separate partition that is encrypted, but it doesn't need to be any sort of high performance storage for either of us.

This doesn't sound like a good backup solution. A good backup solution would mean that your NAS and two PCs could be stolen, melted in a fire, infected with viruses, accidentally wiped, etc and you would suffer no data loss (or very little data loss).
I've got an mdadm RAID 5 array. I want to move these drives to a different Linux machine. Anyone have experience doing this? I'm wondering if it's a straightforward process or if I'll need to plan for a bunch of things going wrong.
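For reference, a minimal sketch of how the move usually goes, assuming mdadm is installed on the new machine and Debian-style config paths; the array is detected from the superblocks on the drives themselves:

```
# On the new machine, with the drives connected
mdadm --assemble --scan                           # auto-assemble arrays found on the disks
cat /proc/mdstat                                  # confirm the RAID 5 came up
mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # persist the array definition
update-initramfs -u                               # so it assembles on boot as well
```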
fletcher posted: This doesn't sound like a good backup solution. A good backup solution would mean that your NAS and two PCs could be stolen, melted in a fire, infected with viruses, accidentally wiped, etc and you would suffer no data loss (or very little data loss).

Why not? He's using the NAS as a backup to other machines, not as primary storage, in which case a two-drive RAID 1 is perfectly fine if it fits his size requirements; not everyone needs triple off-site backups for their music collection.

That said, I'd also recommend something like CrashPlan as an added layer of security for things you really care about (like source code). It's laughably cheap, it's encrypted, and it can happily serve as a backup of last resort should both your PCs and NAS get stolen, infected with viruses, wiped, and then melted in a fire.
DrDork posted: Why not? He's using the NAS as a backup to other machines, not as primary storage, in which case a two-drive RAID 1 is perfectly fine if it fits his size requirements--not everyone needs triple off-site backups for their music collections.

The machines are in the same location; that right there makes it unsuitable as a backup solution to me.
On-site backup is a perfectly fine part of an overall backup strategy.
fletcher posted: The machines are in the same location, that right there makes in unsuitable as a backup solution to me.

It's entirely reasonable for someone to conclude that they need a backup to protect against hardware failure without also needing a backup to protect against a house fire. It's a significant decrease in the risk of data loss even though it does not zero the risk of data loss.
evilweasel posted: It's entirely reasonable for someone to conclude that they need a backup to protect against hardware failure without also needing a backup to protect against a house fire. It's a significant decrease in the risk of data loss even though it does not zero the risk of data loss.

what no it's not enough HE'S GONNA LOSE DATA IT'S NOT SAFE RAID IS NOT BACKUP!!! *fills closet with enterprise grade horseshit*

Kinda sorta roughly along those lines: I've been using a shitbox running WinXP as a glorified NAS for way too long and need to upgrade. But I think I've been reading too many posts about bitrot by paranoid nerds on the FreeNAS forums, and I'm uncertain which direction to go for a new NAS. It won't be anything huge, probably 4 TB, and it will be for local backups and media storage.

So I'm considering a Synology, or FreeNAS on a cheap ThinkServer. I like the sound of ZFS in terms of its ability to maintain data integrity, but the hardware FreeNAS needs is awfully heavy compared to a little Synology. So I guess the question is: do I really have anything to worry about with a 2x4TB RAID 1 in a Synology? How worried should one really be about bitrot? Would a 2x4TB ZFS setup be recommended, or maybe 3x4TB?
evilweasel posted: It's entirely reasonable for someone to conclude that they need a backup to protect against hardware failure without also needing a backup to protect against a house fire. It's a significant decrease in the risk of data loss even though it does not zero the risk of data loss.

This. My question was more about QNAP vs Synology in terms of hardware/software performance. I guess the same thing could hold for Asustor, Thecus, WD, D-Link, etc., but a) QNAP and Synology appear to be above the normal shit-tier crap I can find at Best Buy, and b) they have specific models that fall within my budget constraints. I'm looking for any specific caveats about these relatively low-end devices. I've seen what the Synology and QNAP web interfaces look like, and they appear to be polished, easy to use, etc. In the past I've read poor reviews for QNAP devices saying they're slow, but not necessarily fault-prone.

I understand that if my house burns down, or a tornado comes through, or some asshole decides to steal my NAS, I'll lose data. Rest assured, anything that is irreplaceable is actually saved off-site. What I'd like to avoid is having to store 2-3 TB of rarely-used data on my desktop PC when I could have it on hand and not have to download it from the internet. I'd also like to have "backups" (unsafe, redundant copies) of a couple of laptops so if they get dropped/lost/broken for some other reason, I can get all the data back without having to download it all via my shitty internet connection. A side benefit would be using the leftover space to hold MP3s and such, have a central location for budget junk, etc.
SO DEMANDING posted: So I'm considering a Synology or FreeNAS on a cheap ThinkServer. I like the sound of ZFS in terms of its ability to maintain data integrity, but the hardware FreeNAS needs is awfully heavy compared to a little Synology. So I guess the question is, do I really have anything to worry about with a 2x4TB RAID1 in a Synology? How worried should one really be about bitrot? Would a 2x4TB ZFS setup be recommended, or maybe 3x4TB?

Bitrot is a thing, but how much you care about it should be directly proportional to how much you care about the data you're backing up. ECC RAM isn't much more expensive than its non-ECC counterpart, but if all you're doing is storing movies and the like, where the occasional bit flip is unlikely to do much harm, it's not exactly a hard requirement. It's strongly recommended for backing up stuff you really care about, though. Super-important stuff should be backed up to something like CrashPlan regardless of what you decide to do with your NAS.

Size your array based on what you think you'll need. The downside of ZFS is that there's no easy way to expand an existing pool (say, go from 2 drives to 3), so leaving yourself some room for growth is a good idea.
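To make that expansion limitation concrete: a vdev can't be widened after the fact, so the supported way to grow a ZFS pool is to add another whole vdev (or replace every disk in a vdev with bigger ones, as discussed earlier). A sketch with placeholder pool and device names:

```
# You can't turn a 2-disk mirror into a 3-disk RAIDZ,
# but you can grow the pool by adding a second mirror vdev alongside it
zpool add tank mirror da2 da3
```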
SopWATh posted: Rest assured, anything that is irreplaceable is actually saved off-site. What I'd like to avoid is having to store 2-3TB of rarely-used data on my desktop PC when I could have that on hand and not have to download it from the internet.

Ah, that makes a big difference. When I originally read your post I was thinking that irreplaceable data would only be stored on the PCs + NAS. If you are backing up irreplaceable stuff off-site, then by all means, a NAS is a good fit for the other stuff that isn't so critical but that you'd still like some protection against losing.

DNova posted: On-site backup is a perfectly fine part of an overall backup strategy.

Totally agreed. For example, my irreplaceable photos are on my PC, which gets backed up to a NAS, and certain folders on the NAS are backed up to CrashPlan. The NAS is used to restore from a 'normal' data loss scenario, CrashPlan for the catastrophic ones.

SO DEMANDING posted: what no it's not enough HE'S GONNA LOSE DATA IT'S NOT SAFE RAID IS NOT BACKUP!!! *fills closet with enterprise grade horseshit*

Uhhh....what?