|
Whoah, those replies are pretty handy! Thanks, guys.

Decairn posted: In that case it's just another client to the router. No special setup of the Synology required. Whatever it's possible to connect to on the internet from the PC should be 100% the same for the Synology.

Oh, I can set it up to use the net even though it's plugged straight into my machine? How would I go about that? I figured a wired connection would be faster, and there are limited power points near the modem, but this would be quite handy.

Ninja Rope posted: If it's literally plugged directly into your machine it probably has a 169.254.x address. Try disabling your wifi before using the autodetector app.

Yup, it ended up having a 169.254 address.

Longinus00 posted: If that's the case then you should be able to put "syn_nas.local" directly into your browser and it will browse to your NAS without having to care about the IP address, thanks to zeroconf.

I tried this but I goofed up: I tried syn_nas.local:5000, assuming you needed the port number.
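For reference, a 169.254.x.x address means the box never got a DHCP lease and self-assigned a link-local address. A quick way to sanity-check an address you find, sketched with Python's standard library (the addresses below are just examples):

```python
import ipaddress

def is_self_assigned(addr: str) -> bool:
    """True if addr is an IPv4 link-local (169.254.0.0/16) address,
    i.e. the device never got a lease from a DHCP server."""
    return ipaddress.ip_address(addr).is_link_local

print(is_self_assigned("169.254.10.20"))  # True: self-assigned, no DHCP
print(is_self_assigned("192.168.1.50"))   # False: ordinary private LAN address
```

If this returns True for your NAS, it and your PC aren't talking to the same DHCP server, which is exactly the plugged-straight-into-the-PC symptom described above.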
|
|
I use a Cooler Master Elite 431 Plus. It's on the smaller side for a mATX case, but nothing special size-wise. It has a bay in the front to hot-swap a 3.5" drive, which can be convenient. It also has a USB 3.0 front header.
|
|
IOwnCalculus posted: Dunno about quiet but 2-4 drives, come on, go for this bad boy.

Longinus00 posted: If that's the case then you should be able to put "syn_nas.local" directly into your browser and it will browse to your nas without having to care about the IP address thanks to zeroconf.

As an example, on my EdgeRouter 3 Lite I have static MAC address reservations set up for my DHCP clients, along with unbound functioning as both a remote caching and a local nameserver with DNSSEC. This means I never have to worry about client IP configuration (v4 or v6; I have both); I just use hostnames (with or without .local) whenever I need to reach any device on my LAN, and it's cross-platform.

D. Ebdrup fucked around with this message at 18:27 on Mar 16, 2014
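For anyone curious what that unbound setup looks like, here is a minimal unbound.conf sketch of the same idea (the domain, hostname, and addresses are placeholders, not anything from this thread; the EdgeRouter wraps these options in its own config syntax):

```conf
server:
    # Remote caching resolver with DNSSEC validation.
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    cache-max-ttl: 86400

    # Serve local names authoritatively for the LAN.
    local-zone: "lan." static
    local-data: "syn_nas.lan. IN A 192.168.1.10"
    local-data-ptr: "192.168.1.10 syn_nas.lan"
```

Paired with static DHCP reservations, every client keeps the same address and therefore the same resolvable name.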
|
D. Ebdrup posted: This is offtopic, but it is bugging me - so here goes:

Please don't confuse Windows Wireless Zero Configuration (which enables you to use WNICs without any manufacturer software installed except the driver) with RFC 3927 or RFC 4862 (link-local auto-configuration for IPv4 and IPv6, respectively), or with NetBIOS broadcasts/WINS (which at this point is considered legacy and should only be used for pre-Win2k environments).
|
|
I missed mentioning mDNS, probably because it isn't reliably present on all the platforms I work with. Auto-configuration can go take a hike, if you ask me: when (if) it works, it's never optimal, and it's usually just another entry in a long series of exploitable services. If you have a network with a router running dnsmasq, you can set up local and remote DNS lookup with caching and forwarding.
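As a sketch of that dnsmasq setup, a minimal dnsmasq.conf for combined local naming, caching, and forwarding (the domain, MAC, and addresses are placeholders):

```conf
# Answer names in the "lan" domain locally, never forwarding them upstream.
domain=lan
local=/lan/
expand-hosts          # make bare hostnames also resolve as host.lan

# Cache answers so repeat lookups never leave the router.
cache-size=1000

# Forward everything else to an upstream resolver.
server=8.8.8.8

# Hand out leases, pinning one client to a fixed address by MAC
# so its hostname is always resolvable at the same IP.
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-host=aa:bb:cc:dd:ee:ff,syn_nas,192.168.1.10
```

Because dnsmasq reads its own DHCP leases, every client that identifies itself by hostname becomes resolvable by name with no per-client configuration.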
|
|
My experience is that mDNS screws up if you've named your local network *.local.
|
|
I'm planning to finally upgrade my two slow-ass DNS-323s, and was hoping this thread could offer some advice. I'm thinking it's time to bump up my NAS storage by a good amount (~5 TB of current capacity, nearly full), so I've been thinking of just throwing money at something easy and going with a Synology 1813+ and 6-8 4TB Western Digital Reds. However, the last few pages have got me wondering if I might not be better off building my own XPEnology box.

Questions:

1. If I got the Synology and wanted to expand it with a DX513 in the future, does it treat those additional drives as if they're part of the same volume? Or is it a separate volume which needs two extra drives for SHR-2 redundancy?

2. The Synology 1813+ has limited memory and can't run Plex server, right? Is the low memory an issue otherwise? Being able to get a beefier CPU and more memory seems like the only reason I'd want to go with XPEnology, so it would be good to know how limited I would be with the Synology.

3. Are there any stats on typical power consumption of the Synology machines? Ideally, I'd love to keep it as low as possible (one good thing about the DNS-323s is their low power usage), so if an XPEnology machine uses significantly more power, that would be a consideration.

Thanks for any advice or suggestions!
|
|
Giraffe posted: I'm planning to finally upgrade my two slow-ass DNS-323s, and was hoping this thread could offer some advice. ...

I bumped from a DNS-323 to a DS412+ and have been very happy with it. Good move.

1. Your choice. If you want to expand the original SHR, you can.
2. The 1813+ has twice the RAM of my 412+, which runs Plex server perfectly. They have the same CPU.
3. Power consumption specs here: http://www.synology.com/en-us/products/spec/DS1813+

I'd considered XPEnology before making this purchase, and went with the real thing because of power consumption and too many 'gotchas' or bugs in XPEnology. It's a lot of cash to lay out, though, and your decision may be different from mine.
|
|
I went with the 1813+ too. I maxed out the RAM and it's clicking along nicely. I got a good deal on CL, and the one at work is doing well too. The only issue with running Plex server is the transcoding. I will likely do a fair amount of it, so I've moved that job to a local low-power PC that still has more CPU grunt than the NAS.
|
|
sellouts posted:I went with the 1813+ too. I maxed out the ram and it's clicking along nicely. I got a good deal on CL but my one at work is doing well.
|
|
Giraffe posted: Yeah, I'd gotten the impression that you can't transcode on the 1813+, so I'll have to decide if I care about that or not. Thanks to you and Civil for the advice.

That's not true. You should be able to transcode HD video on the 1813+. It works on my DS412+. What client/device will you be using to watch the content?

edit: forgot the link: http://www.synology.com/en-us/support/faq/577

Civil fucked around with this message at 04:23 on Mar 19, 2014
|
From the Plex forums: https://docs.google.com/a/plexapp.c...gUxU0jdj3tmMPc/

I don't transcode on mine.
|
|
Issue that popped up today after I upgraded the Ubuntu distro on my desktop that hosts my zpool: code:

For what it's worth, the 5 disks that are found are attached to a HighPoint HBA card, and the 3 not found are attached to the motherboard SATA controller. Those labels (scsi-SATA-xxx) do NOT show up in my /dev/disk/by-id folder now, only as ata-WDC_xxxx. Any quick fixes before I start digging into it?

edit: Found the fix. Apparently 13.10 removed the scsi-xxx /dev/disk/by-id names, and doing an export/import -f allowed the pool to self-correct with the new by-id names. Whew!

PitViper fucked around with this message at 05:23 on Mar 20, 2014
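For anyone hitting the same thing, the export/import fix described above amounts to roughly this (a sketch; "tank" is a placeholder pool name, and -d tells zpool which directory of device names to scan):

```sh
# Export the pool, then re-import it so ZFS re-resolves the vdev paths
# against the current /dev/disk/by-id names. Requires root.
zpool export tank
zpool import -d /dev/disk/by-id -f tank
```

The -f is only needed because, as above, the pool thinks it wasn't cleanly exported after the distro upgrade renamed its devices.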
|
That's the second issue I've heard of with Linux and ZFS where it can suddenly stop working because device IDs aren't actually persistent. The other issue I mentioned is that in rare cases Linux won't identify the same disks by the same internal label (which it uses to assign entries in /dev/), resulting in device IDs changing across a reboot. Is it just me, or is it completely irresponsible of whoever's in charge to change the device IDs? I thought you weren't supposed to change existing kernel behaviour, because you can't know, without a complete code audit, what impact it'll have.

D. Ebdrup fucked around with this message at 20:38 on Mar 20, 2014
|
Okay, I'm finally, finally ready to pull the trigger on a NAS to replace my Drobo S DAS. I was about to purchase the Synology 1513+, but then I started reading a bit about the QNAP line and the TS-569L. My priorities for a NAS are:

1) A simple interface.
2) Primary use will be housing raw photography files for editing in Lightroom.
3) Secondary use will be housing and streaming HD content to a Roku via Cat6.
4) I'm hoping to set up a few IP cameras, so having a utility I can use to automate that would be great too.

6-8TB capacity is probably plenty. My Drobo only has 3.5TB on it currently, and I could probably purge a good amount of that. The problem I ran into with my Drobo was that apparently some photo files became corrupted. It seems Synology is a popular choice around here, but I'm not sure if I missed some kind of brand comparison where people voiced whether Synology or QNAP is preferred. Can anyone tell me anything about the 1513+ or a comparable QNAP device that would help me feel a little better about making the right choice? Also, is photo editing off a NAS going to be noticeably less responsive than photo editing off an eSATA-attached device?
|
|
QNAP vs. Synology pretty much comes down to preference. I've had no experience with QNAP, but every time I've needed support from Synology they have been very helpful, even going as far as connecting to our VPN to SSH into one of their boxes to set up our SNMP UPS before it was officially supported in the firmware. I can't recommend them enough.
|
|
MMD3 posted: okay, I'm finally finally ready to pull the trigger on a NAS to replace my Drobo S DAS

Just gotta ask: what is preventing you from storing, say, the last year of photos on your primary workstation and offloading them to the NAS when they are older? The real thing that speeds up (or can slow down) Lightroom is the CPU. Pulling the files over the network shouldn't really slow things down too badly, because all the waiting around during imports is Lightroom building 1:1 previews at the resolution you want, and actually working with the previews (moving sliders around) taxes the CPU. When you export the photos it will likely have to pull them in from the NAS, process them, and then spit them back out to the NAS. This might be slightly slower depending on your transfer speeds, but I doubt it will really be that bad.

I run an XPEnology setup and it's dope. I like the interface and the services it can run, and I run a small backup program on my workstation that incrementally backs up a few document directories and my photography directory every night at 4am or so. It can run a Plex server, which the Roku can pair with on the client end. However, I know the Roku has more issues than, say, Boxee when playing whatever you throw at it natively, so you might want something beefier on the CPU end that can transcode. Synology also has IP camera support.

I run my XPEnology box on like 5-year-old Shuttle crap, but it runs everything fast and well. I have 3x3TB WD Reds for a total of 6TB of available storage with 1-drive-failure parity, and it could support one more drive right now. I don't necessarily suggest that approach, just giving you an example. There are more powerful Synology boxes that can handle transcoding HD material, if streaming to the Roku is something you want to get 100% working correctly.
|
|
MMD3 posted: 4) I'm hoping to setup a few IP cameras so having a utility I can use to automate that would be great too.

Synology's surveillance software is certainly nice, and it works well, but the unit you purchase only allows you to run one camera. You need to purchase additional licenses per camera to use more, and they run $50+ each. It's a big drawback. You should also make sure your cameras are supported by the software.
|
|
Civil posted: Synology surveillance software is certainly nice, and it works well, but the unit you purchase only allows you to run one camera. You need to purchase additional licenses per camera to use more, and they run $50+ each. It's a big drawback.

I'd only be planning on two cameras, front door and back door of the house, and I haven't picked them out yet, so I will have to do some research on the best way to achieve it before I do. I did just have wire run to above the doors, though, anticipating that I would want to do that down the road.
|
|
D. Ebdrup posted: That's the second issue I've heard of with linux and zfs where it can suddenly stop working because device ids aren't actually persistent.

What, really, is the difference between the scsi-xxx and ata-xxx /dev/disk/by-id naming schemes? 12.10 had each disk listed under both labels, and I never bothered to figure out exactly why it's done that way. I've seen the scsi-xxx naming scheme referred to as a virtual SCSI interface, so perhaps I should have been using the ata-xxx labels all along. I believe when I set the pool up, I initially referred to each disk by its /dev/sdX name, then did an export/import using the by-id labels, and ZFS just picked the scsi-xxx references by default. The order in which drives appear as /dev/sdX changing is a known issue, and can be caused by spin-up delays, reordering drives on the controller, etc.; by-id was supposed to be persistent, hence why I re-imported the pool using that method of referencing drives.
|
|
PitViper posted: What, really, is the difference in the scsi-xxx and ata-xxx /dev/disk/by-id naming schema? ...

ZFS picks up the references because /dev/disk/by-id/... is symlinked back to /dev/${device}. The change happened because the kernel team decided that addressing serial devices through a naming scheme intended for parallel devices was a waste, and that even differentiating between SAS and SATA in device names is pretty pointless these days with bus convergence, so it was deprecated a few years ago and removed in kernel 3.10.
|
|
I have no real idea how ZFS on Linux is architected, but can't you use UUIDs instead?
|
|
I had the same issue on FreeNAS, for bizarre reasons. Repeatedly seeing the GUIDs change on a drive that kept falling out of the array on boot drove me up the wall. It seems something kept regenerating the GUID for drives that weren't written out properly somewhere when I pulled the plug, which caused the mess (I was having hanging issues at the same time). The device-node naming and FIFO creation system is what needs to stay consistent too, or at least offer some way to get forward compatibility. I'm not sure if there's a safe way to rename the vdevs in a zpool while it's still mounted, but that may be as low priority as stripe resizing to support changing the number of drives making up RAIDZ vdevs.
|
|
Longinus00 posted: I have no real idea how ZFS on linux is architectured but can't you use UUIDs instead?

Yes, which you should be doing.
|
|
Thermopyle posted: Yes, which you should be doing.

I don't think there's anything wrong with using by-id; it certainly makes physically identifying which disk is which easier. This is certainly something that might happen if you upgrade a machine without first exporting your pool, but since the solution is a simple export/import operation anyway, it's probably not that big of a deal. I agree UUID is a more permanent and consistent way to assign the disks to a vdev/pool, but by-id should in theory be just as consistent, plus it allows you to easily identify a failed disk by putting the disk type and serial number right there in the pool information.
|
|
Thermopyle posted: Yes, which you should be doing.

I'm relatively new to ZFS. I chose to do mine by WWN; is that reasonable?
|
|
WD Red 3TB just dropped to the lowest price it's ever been, $120 on Amazon.
|
|
Civil posted: Synology surveillance software is certainly nice, and it works well, but the unit you purchase only allows you to run one camera. You need to purchase additional licenses per camera to use more, and they run $50+ each. It's a big drawback.

Does this also apply to XPEnology? I'd assume so, so they can give a hat tip to the actual developers, yeah?
|
|
This is not properly adapted from how I do it on FreeBSD, so it's not perfect and will need more work, but on Linux you can presumably use lsblk -f to find labels rather than device IDs, and then simply look for the serial number that you of course documented with labels on the outward-facing side of each disk. code:

D. Ebdrup fucked around with this message at 11:15 on Mar 21, 2014
|
Potentially stupid question: on the back of my Synology DS214se there are two USB ports. I have a lot of externals; if I connect them, it'll just add them as network-available drives, right? It won't change them in any way or try to add them to my current RAID 1 or anything? Or is it pretty much for adding more NAS drives/Synology expansion units? I saw you can share printers this way; just curious as to how they work.
|
|
the_lion posted: Potentially stupid question:

I believe the drives you plug in are for dumping data from internal to external and vice versa. Printers are great: just plug one in and it's shared. Point your computer at the NAS and it shows up. Use the appropriate driver for the printer and you're printing away.
|
|
Independence posted: I believe the drives you plug in are for dumping data from internal to external and vice versa.

This is correct. On XPEnology, anyway.
|
|
Minty Swagger posted: Does this also apply to XPEnology? I'd assume so they can give a hat tip to the actual developers yeah?

Yes. You install Surveillance Station from the exact same Synology repo on XPEnology, and that package only comes with one camera license.
|
|
spoon daddy posted: I'm relatively new to ZFS. I chose to do mine by WWN number, is that reasonable?

I said UUID, but what I meant was "anything not tied to which port or controller the drive is plugged into", and as far as I'm aware, WWN meets that criterion.
|
|
D. Ebdrup posted: This is not properly adapted from how I do it on FreeBSD so it's not perfect and will need more work ...

lsblk -f is an unreliable means of doing this. You should use UUIDs. Labels are fungible; disk UUIDs are not. And, as before: code:
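For illustration, here is roughly where those persistent UUIDs live on a Linux box (a sketch; the device names and output will differ per system, and blkid usually needs root):

```sh
# The kernel maintains stable symlinks from filesystem UUIDs to device nodes.
ls -l /dev/disk/by-uuid/

# Query a single device's UUID (and label, if any) directly.
blkid /dev/sda1
```

Unlike /dev/sdX names, these survive controller reordering and spin-up delays, which is exactly the failure mode discussed above.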
|
|
the_lion posted: They won't change them in any way or try to add to my current RAID 1 or anything? Or it pretty much for adding more NAS drives/ synology expansion?

I don't remember if it auto-shares it on the network or something, but it's reachable from inside the GUI as another disk, separate from your other volumes. The USB ports can also take a camera or an audio interface, although I'm not sure what the application of the latter would be.
|
|
I built my own Linux server/NAS for fun and because I wanted more power than comparably priced Synology/QNAP units. It's been quite the learning experience, and I'm pretty familiar with mdadm now at least, but I wanted to check on something before I start storing data on it. When I run:

pre:
$ sudo fdisk -l /dev/sdc

WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1  4294967295  2147483647+  ee  GPT

Partition 1 does not start on physical sector boundary.

Edit to (I believe) answer my own question: I shouldn't have ignored the warning to use Parted instead of fdisk. parted /dev/sdc unit s print returns the following, which looks good to me. code:

SeventySeven fucked around with this message at 04:23 on Mar 22, 2014
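For what it's worth, the "does not start on physical sector boundary" warning is just arithmetic: on a 512-byte-logical/4096-byte-physical drive, a partition is 4K-aligned only when its starting sector number is a multiple of 8. A quick sketch of that check (the sector values are illustrative):

```python
LOGICAL = 512       # logical sector size the drive reports
PHYSICAL = 4096     # physical sector size of an Advanced Format drive

def is_aligned(start_sector: int) -> bool:
    """True if the partition's first byte lands on a physical-sector boundary."""
    return (start_sector * LOGICAL) % PHYSICAL == 0

print(is_aligned(1))     # False: the protective-MBR-style start fdisk showed
print(is_aligned(2048))  # True: the 1 MiB alignment modern partitioners use
```

The "1" fdisk printed is the GPT protective MBR entry, not the real partition start, which is why parted (which understands GPT) is the right tool to verify alignment here.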
|
I soon have to upgrade from a (4-1)x3TB WD Red system to a bigger one to accommodate all my digital necromantic storage needs. Since the release of the WD Reds and then of 4TB disks, there are a few alternatives to them, the HGST Deskstar and Seagate NAS in particular. My first system (G1610 + 16GB + FreeNAS + the above) has just been a proof of concept and a step up from a loose collection of differently sized USB drives. Now I'd like to upgrade to something more future-proof, with ECC and double parity. How do the WD Red 4TB drives fare against the Seagate and HGST ones? I assume the HGST is the ex-Hitachi-now-WD one?
|
|
yomisei posted: My first system (G1610+16GB+FreeNAS+the above) just has been a proof-of-concept... Now I'd like to upgrade into something more future proof with ECC

If you want a stable ZFS setup then you must use ECC memory. Non-ECC on ZFS is worse than non-ECC on ext4/NTFS/HFS.

Comatoast fucked around with this message at 17:06 on Mar 22, 2014
|
Comatoast posted:If you want a stable zfs setup then you must use ECC memory. Non-ECC on zfs is worse than non-ECC on ext4/NTFS/HFS. I read something similar to this many months after I had first set up my server with non-ECC RAM and ever since I've been living in fear of the day my data is trashed.
|