the_lion
Jun 8, 2010

On the hunt for prey...

Whoah, those replies are pretty handy! Thanks guys.

Decairn posted:

In that case it's just another client to the router. No special setup of the Synology required. Whatever it's possible to connect to on the internet from the PC should be 100% the same for the Synology.

Oh, I can set it up to use the net even though it's plugged straight into my machine? How would I go about that? I figured a wired connection would be faster, and there are limited power points near the modem, but this would be quite handy.

Ninja Rope posted:

If it's literally plugged directly into your machine it probably has a 169.254.x address. Try disabling your wifi before using whatever autodetector app.

Yup, it ended up having a 169.254 address.

Longinus00 posted:

If that's the case then you should be able to put "syn_nas.local" directly into your browser and it will browse to your nas without having to care about the IP address thanks to zeroconf.

I tried this but I goofed up: I think I tried syn_nas.local:5000 - I assumed you needed the port number.

lampey
Mar 27, 2012



I use a Cooler Master Elite 431 Plus. It's on the smaller side for a mATX case, but not anything special size-wise. It has a bay in the front to hot-swap a 3.5" drive, which can be convenient. It also has a USB 3.0 front header.

D. Ebdrup
Mar 13, 2009



IOwnCalculus posted:

Dunno about quiet but 2-4 drives, come on, go for this bad boy.



I can totally see using one of these for my next server iteration.
They've made such an almost-perfect case, but instead of using a cage on rails like that, what they should've done was make it slightly wider so anyone could fit anything from a few drives plus huge watercooling (as seen in the Fractal Node 804 exhibit at CeBIT 2014) to an IcyDock FatCage 5-in-3 MB155SP-B cage.

Longinus00 posted:

If that's the case then you should be able to put "syn_nas.local" directly into your browser and it will browse to your nas without having to care about the IP address thanks to zeroconf.
This is offtopic, but it is bugging me - so here goes: please don't confuse Windows Wireless Zero Configuration (which enables you to use WNICs without manufacturer software installed except the driver) with RFC 3927 or RFC 4862 (respectively, link-local auto-configuration for IPv4 and IPv6), or NetBIOS broadcasts/WINS (which at this point is considered legacy and should only be used for pre-Win2k environments).
As an example, on my EdgeRouter 3 Lite I have static MAC addresses set up for my DHCP clients, along with Unbound functioning both as a remote caching and a local nameserver with DNSSEC, meaning I never have to worry about client IP (v4 or v6, I have both) configuration; I just use hostnames (with or without .local) when I need to get in contact with any device on my LAN, and it's cross-platform.
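For anyone wanting to replicate it, a minimal unbound.conf sketch of the local-resolution half might look like this (zone name and addresses are made up for illustration, not my actual config):
code:
server:
    # answer for the LAN and serve local names authoritatively
    interface: 192.168.1.1
    access-control: 192.168.1.0/24 allow
    local-zone: "lan." static
    local-data: "syn_nas.lan. IN A 192.168.1.10"
    local-data-ptr: "192.168.1.10 syn_nas.lan"

forward-zone:
    # everything else goes upstream and gets cached
    name: "."
    forward-addr: 8.8.8.8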

D. Ebdrup fucked around with this message at 18:27 on Mar 16, 2014

evol262
Nov 30, 2010
#!/usr/bin/perl

D. Ebdrup posted:

This is offtopic, but it is bugging me - so here goes: please don't confuse Windows Wireless Zero Configuration (which enables you to use WNICs without manufacturer software installed except the driver) with RFC 3927 or RFC 4862 (respectively, link-local auto-configuration for IPv4 and IPv6), or NetBIOS broadcasts/WINS (which at this point is considered legacy and should only be used for pre-Win2k environments).
As an example, on my EdgeRouter 3 Lite I have static MAC addresses set up for my DHCP clients, along with Unbound functioning both as a remote caching and a local nameserver with DNSSEC, meaning I never have to worry about client IP (v4 or v6, I have both) configuration; I just use hostnames (with or without .local) when I need to get in contact with any device on my LAN, and it's cross-platform.
If you're going to do this, you should know that the RFCs you cited have nothing to do with it. mDNS is RFC 6762 (which lets .local work) and DNS-SD (effectively automatic SRV records for mDNS, otherwise known as zeroconf, and what resolved this situation by advertising the right port) is RFC 6763. Your "I just use hostnames" bit is probably an example of dynamic DNS registration from Unbound. mDNS (with .local) would do the same without a DNS server at all.
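If you want to poke at what mDNS/DNS-SD are actually advertising, the avahi-utils tools make it easy (a sketch, assuming a Linux box with Avahi running; the hostname is the one from earlier in the thread):
code:
# resolve a .local hostname over mDNS - no DNS server involved
avahi-resolve -n syn_nas.local
# browse and resolve everything advertised via DNS-SD, then exit
avahi-browse -art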

D. Ebdrup
Mar 13, 2009



I missed mentioning mDNS probably because it isn't reliably present on all platforms I work with. Auto-configuration can go take a hike, if you ask me - when (if) it works, it's never optimal, and it's usually just another in a long series of exploitable services. If you have a network with a router running dnsmasq, you can set up local+remote DNS lookup with caching and forwarding.
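A minimal dnsmasq sketch of that kind of setup might look like this (domain, range, and upstream server are placeholders):
code:
# resolve the local domain from DHCP leases and /etc/hosts
domain=lan
local=/lan/
expand-hosts
# hand out leases; client hostnames get registered automatically
dhcp-range=192.168.1.50,192.168.1.150,12h
# forward everything else upstream, with caching
server=8.8.8.8
cache-size=1000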

phosdex
Dec 16, 2005



Tortured By Flan

My experience is that mDNS screws up if you've named your local network *.local.

Giraffe
Dec 12, 2005



Soiled Meat

I'm planning to finally upgrade my two slow-ass DNS-323s, and was hoping this thread could offer some advice. I'm thinking it's time to bump up my NAS storage by a good amount (~5 TB of current capacity nearly full) so I've been thinking of just throwing money at something easy and going with a Synology 1813+ and 6-8 4TB Western Digital Reds. However, the last few pages have got me wondering if I might not be better off building my own Xpenology box.

Questions:

1. If I got the Synology and wanted to expand it with a DX513 in the future, does it treat those additional drives as if they're part of the same volume? Or is it a separate volume which needs two extra drives for SHR-2 redundancy?

2. The Synology 1813+ has limited memory and can't run Plex server, right? Is the low memory an issue otherwise? Being able to get a beefier CPU and more memory seems like the only reason I'd want to go with XPEnology, so it would be good to know how limited I would be with the Synology.

3. Are there any stats on typical power consumption of the Synology machines? Ideally, I'd love to keep it as low as possible (one good thing about the DNS-323s is their low power usage) so if going with an XPEnology machine uses significantly more power, that would be a consideration.

Thanks for any advice or suggestions!

Civil
Apr 21, 2003

Do you see this? This means "Have a nice day".

Giraffe posted:

I'm planning to finally upgrade my two slow-ass DNS-323s, and was hoping this thread could offer some advice. I'm thinking it's time to bump up my NAS storage by a good amount (~5 TB of current capacity nearly full) so I've been thinking of just throwing money at something easy and going with a Synology 1813+ and 6-8 4TB Western Digital Reds. However, the last few pages have got me wondering if I might not be better off building my own Xpenology box.

Questions:

1. If I got the Synology and wanted to expand it with a DX513 in the future, does it treat those additional drives as if they're part of the same volume? Or is it a separate volume which needs two extra drives for SHR-2 redundancy?

2. The Synology 1813+ has limited memory and can't run Plex server, right? Is the low memory an issue otherwise? Being able to get a beefier CPU and more memory seems like the only reason I'd want to go with XPEnology, so it would be good to know how limited I would be with the Synology.

3. Are there any stats on typical power consumption of the Synology machines? Ideally, I'd love to keep it as low as possible (one good thing about the DNS-323s is their low power usage) so if going with an XPEnology machine uses significantly more power, that would be a consideration.

Thanks for any advice or suggestions!

I bumped from a DNS-323 to a DS412+ and have been very happy with it. Good move.

1. Your choice. If you want to expand the original SHR, you can.
2. The 1813+ has twice the RAM of my 412+, which runs Plex server perfectly. They have the same CPU.
3. Power consumption specs here: http://www.synology.com/en-us/products/spec/DS1813+

I'd considered xpenology before making this purchase, and went with the real thing because of power consumption and too many 'gotchas' or bugs in xpenology. It's a lot of cash to lay out, though, and your decision may be different than mine.

sellouts
Apr 23, 2003



I went with the 1813+ too. I maxed out the ram and it's clicking along nicely. I got a good deal on CL but my one at work is doing well.

The only issue with running Plex server is the transcoding. I will likely do a fair amount of that, so I've moved it to a local low-power PC that has more CPU grunt than the NAS.

Giraffe
Dec 12, 2005



Soiled Meat

sellouts posted:

I went with the 1813+ too. I maxed out the ram and it's clicking along nicely. I got a good deal on CL but my one at work is doing well.

The only issue with running Plex server is the transcoding. I will likely do a fair amount of that, so I've moved it to a local low-power PC that has more CPU grunt than the NAS.
Yeah, I'd gotten the impression that you can't transcode on the 1813+, so I'll have to decide if I care about that or not. Thanks to you and Civil for the advice.

Civil
Apr 21, 2003

Do you see this? This means "Have a nice day".

Giraffe posted:

Yeah, I'd gotten the impression that you can't transcode on the 1813+, so I'll have to decide if I care about that or not. Thanks to you and Civil for the advice.

That's not true. You should be able to transcode HD video on the 1813+. It works on my DS412+.

What client/device will you be using to watch the content?

edit: forgot link http://www.synology.com/en-us/support/faq/577

Civil fucked around with this message at 04:23 on Mar 19, 2014

sellouts
Apr 23, 2003



From the Plex forums

https://docs.google.com/a/plexapp.c...gUxU0jdj3tmMPc/

I don't transcode on mine.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!


Issue that popped up today after I upgraded the Ubuntu distro on my desktop that hosts my zpool:

code:
  pool: media-pool
 state: UNAVAIL
status: One or more devices could not be used because the label is missing 
	or invalid.  There are insufficient replicas for the pool to continue
	functioning.
action: Destroy and re-create the pool from
	a backup source.
   see: http://zfsonlinux.org/msg/ZFS-8000-5E
  scan: none requested
config:

	NAME                                           STATE     READ WRITE CKSUM
	media-pool                                     UNAVAIL      0     0     0  insufficient replicas
	  raidz1-0                                     ONLINE       0     0     0
	    ata-WDC_WD20EFRX-68AX9N0_WD-WMC300564617   ONLINE       0     0     0
	    ata-WDC_WD20EFRX-68AX9N0_WD-WMC300253507   ONLINE       0     0     0
	    ata-WDC_WD20EFRX-68AX9N0_WD-WMC300618570   ONLINE       0     0     0
	    ata-WDC_WD20EFRX-68AX9N0_WD-WMC300576341   ONLINE       0     0     0
	  raidz1-1                                     UNAVAIL      0     0     0  insufficient replicas
	    scsi-SATA_WDC_WD20EZRX-00_WD-WMC300641345  UNAVAIL      0     0     0
	    scsi-SATA_WDC_WD20EARX-00_WD-WCAZA8308495  UNAVAIL      0     0     0
	    ata-WDC_WD20EFRX-68EUZN0_WD-WMC4M1489130   ONLINE       0     0     0
	    scsi-SATA_WDC_WD20EARS-00_WD-WMAZA4986032  UNAVAIL      0     0     0
The disks are online, recognized by the Disks utility in Ubuntu, but for whatever reason my zpool isn't seeing them. How on earth do I correct this without destroying the pool? I don't much care for the idea of restoring from the many, many scattered DVDs of data, which only cover about half of what's there anyway.

For what it's worth, the 5 disks that are found are attached to a Highpoint HBA card, and the 3 not found are attached to the motherboard SATA controller. Those labels (scsi-SATA-xxx) no longer show up in my /dev/disk/by-id folder, only ata-WDC_xxxx now. Any quick fixes before I start digging into it?

edit: Found the fix. Apparently 13.10 removed the scsi-xxx /dev/disk/by-id names, and doing export/import -f allowed the pool to self-correct with the new by-id names. Whew!
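In case anyone else hits this, the dance amounts to something like the following (pool name from the output above; -d points ZFS at a directory to scan for devices):
code:
zpool export media-pool
zpool import -d /dev/disk/by-id -f media-pool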

PitViper fucked around with this message at 05:23 on Mar 20, 2014

D. Ebdrup
Mar 13, 2009



That's the second issue I've heard of with Linux and ZFS where things can suddenly stop working because device IDs aren't actually persistent.
The other issue I mentioned is that in rare cases Linux won't identify the same disks by the same internal label, which it uses to assign entries in /dev/, resulting in device IDs changing across a reboot.

Is it just me, or is it completely irresponsible of whoever's in charge to change the device IDs? I thought you weren't supposed to change existing kernel behaviour, because you can't - without a complete code audit - know what impact it'll have.

D. Ebdrup fucked around with this message at 20:38 on Mar 20, 2014

MMD3
May 16, 2006

Montmartre -> Portland

okay, I'm finally finally ready to pull the trigger on a NAS to replace my Drobo S DAS

I was about to purchase the Synology 1513+ but then I started reading a bit about the QNAP line and the TS-569L.

If my priorities for a NAS are...

1) Simple interface
2) Primary use will be housing raw photography files for editing in Lightroom
3) Secondary use will be housing & streaming HD content to Roku via Cat6.
4) I'm hoping to setup a few IP cameras so having a utility I can use to automate that would be great too.

6-8TB capacity is probably plenty. My drobo only has 3.5TB on it currently and I could probably purge a good amount of that.

The problem I ran into with my Drobo was that apparently some photo files became corrupted.

It seems Synology is a popular choice around here, but I'm not sure if I missed some kind of brand comparison where people voiced whether Synology or QNAP is preferred.

Can anyone tell me anything about the 1513+ or a comparable QNAP device that would help me feel a little better about making the right choice?

Also, is photo editing off of a NAS going to be noticeably less responsive than photo editing off of an eSATA attached device?

Thanks Ants
May 21, 2004

Bless You Ants, Blants



Fun Shoe

QNAP vs Synology pretty much comes down to preference. I've had no experience with QNAP, but every time I've needed support from Synology they have been very helpful, even going as far as connecting to our VPN to SSH into one of their boxes to set up our SNMP UPS before it was officially supported in the firmware. I can't recommend them enough.

ShaneB
Oct 22, 2002



MMD3 posted:

okay, I'm finally finally ready to pull the trigger on a NAS to replace my Drobo S DAS

I was about to purchase the Synology 1513+ but then I started reading a bit about the QNAP line and the TS-569L.

If my priorities for a NAS are...

1) Simple interface
2) Primary use will be housing raw photography files for editing in Lightroom
3) Secondary use will be housing & streaming HD content to Roku via Cat6.
4) I'm hoping to setup a few IP cameras so having a utility I can use to automate that would be great too.

6-8TB capacity is probably plenty. My drobo only has 3.5TB on it currently and I could probably purge a good amount of that.

The problem I ran into with my Drobo was that apparently some photo files became corrupted.

It seems Synology is a popular choice around here, but I'm not sure if I missed some kind of brand comparison where people voiced whether Synology or QNAP is preferred.

Can anyone tell me anything about the 1513+ or a comparable QNAP device that would help me feel a little better about making the right choice?

Also, is photo editing off of a NAS going to be noticeably less responsive than photo editing off of an eSATA attached device?

Just gotta ask: what is preventing you from storing, say, the last year of photos on your primary workstation and offloading them to the NAS when they are older?

The real thing that speeds up (or can slow down) Lightroom performance is the CPU. Pulling the files over the network shouldn't really slow things down too badly, because most of the waiting around during imports is Lightroom building 1:1 previews at the resolution you want, and actually working with the previews (moving sliders around) taxes the CPU. When you export the photos it will likely have to pull them in from the NAS, process them, and then spit them back out to the NAS. This might be slightly slower depending on your transfer speeds, but I doubt it will really be that bad.

I run an xpenology setup and it's dope. I like the interface and the services it can run, and I run a small backup program on my workstation that incrementally backs up a few document directories and my photography directory every night at 4am or something. It can run a Plex server, and Roku can run the Plex client. However, I know Roku has more issues than, say, Boxee with playing whatever you throw at it natively, so you might want something beefier on the CPU end that can transcode. Synology also has IP camera support. I run my xpenology box on like 5-year-old Shuttle crap, but it runs everything fast and well. I have 3x3TB WD Reds for a total of 6TB available storage with 1-drive-failure parity, and it could support 1 more right now. I don't necessarily suggest that approach, just giving you an example. There are more powerful Synology boxes that can handle transcoding HD material, if streaming to the Roku is something you want to get 100% working correctly.

Civil
Apr 21, 2003

Do you see this? This means "Have a nice day".

MMD3 posted:

4) I'm hoping to setup a few IP cameras so having a utility I can use to automate that would be great too.

Synology surveillance software is certainly nice, and it works well, but the unit you purchase only allows you to run one camera. You need to purchase additional licenses per camera to use more, and they run $50+ each. It's a big drawback.

You should also make sure your cameras are supported by the software.

MMD3
May 16, 2006

Montmartre -> Portland

Civil posted:

Synology surveillance software is certainly nice, and it works well, but the unit you purchase only allows you to run one camera. You need to purchase additional licenses per camera to use more, and they run $50+ each. It's a big drawback.

You should also make sure your cameras are supported by the software.

I'd only be planning on two cameras, front door and back door of the house, and I haven't picked them out yet so I will have to do some research on the best way to achieve it before I do. I did just have wire run to above the doors though anticipating that I would want to do that down the road.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!


D. Ebdrup posted:

That's the second issue I've heard of with Linux and ZFS where things can suddenly stop working because device IDs aren't actually persistent.
The other issue I mentioned is that in rare cases Linux won't identify the same disks by the same internal label, which it uses to assign entries in /dev/, resulting in device IDs changing across a reboot.

Is it just me, or is it completely irresponsible of whoever's in charge to change the device IDs? I thought you weren't supposed to change existing kernel behaviour, because you can't - without a complete code audit - know what impact it'll have.

What, really, is the difference between the scsi-xxx and ata-xxx /dev/disk/by-id naming schemes? 12.10 had each disk listed under both labels, and I've never been bothered to figure out exactly why it's done that way. I've seen the scsi-xxx naming scheme referred to as a virtual SCSI interface, so perhaps I should have been using the ata-xxx labels all along. I believe when I set the pool up, I initially referred to each disk by its /dev/sdX name, and then did an export/import using the by-id labels, and ZFS just picked the scsi-xxx references by default. Changing the order the drives are referred to in /dev/sdX is a known issue, and can be caused by spin-up delays, reordering drives on the controller, etc. by-id was supposed to be persistent, hence why I re-imported the pool using that method of referencing drives.

evol262
Nov 30, 2010
#!/usr/bin/perl

PitViper posted:

What, really, is the difference between the scsi-xxx and ata-xxx /dev/disk/by-id naming schemes? 12.10 had each disk listed under both labels, and I've never been bothered to figure out exactly why it's done that way. I've seen the scsi-xxx naming scheme referred to as a virtual SCSI interface, so perhaps I should have been using the ata-xxx labels all along. I believe when I set the pool up, I initially referred to each disk by its /dev/sdX name, and then did an export/import using the by-id labels, and ZFS just picked the scsi-xxx references by default. Changing the order the drives are referred to in /dev/sdX is a known issue, and can be caused by spin-up delays, reordering drives on the controller, etc. by-id was supposed to be persistent, hence why I re-imported the pool using that method of referencing drives.

ZFS picks up the references because /dev/disk/by-id... is symlinked back to /dev/${device}

The change is because the kernel team decided that addressing serial devices through a naming scheme intended for parallel devices was a waste, and that even differentiating between SAS and SATA with device names is pretty pointless these days with bus convergence, so it was deprecated a few years ago and removed in kernel 3.10.
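You can see it for yourself - every by-id entry is just a symlink back to the kernel's sdX node:
code:
# each name points back at the underlying device node, e.g. ../../sdb
ls -l /dev/disk/by-id/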

Longinus00
Dec 29, 2005
Ur-Quan

I have no real idea how ZFS on Linux is architected, but can't you use UUIDs instead?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll

Nap Ghost

I had the same issue on FreeNAS for bizarre reasons. Repeatedly seeing my GUIDs change on a drive that kept falling out of the array on boot drove me up the wall. Something kept regenerating the GUID for drives that weren't written out properly when I pulled the plug, which caused the mess, it seems (I was having hanging issues at the time as well). The device-node naming and FIFO creation system is what needs to stay consistent too, or at least offer a way to get forwards compatibility somehow. Not sure if there's a safe way to rename the vdevs in a zpool while it's still mounted, but that may be as low priority as stripe resizing to support changing the number of drives making up RAIDZ vdevs.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



Longinus00 posted:

I have no real idea how ZFS on Linux is architected, but can't you use UUIDs instead?

Yes, which you should be doing.
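If you want to see what stable identifiers a member disk actually exposes, blkid is one way (illustrative; the exact fields vary between blkid versions):
code:
# ZFS members show the pool name as LABEL and pool/vdev GUIDs
# as UUID / UUID_SUB
blkid /dev/sdc1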

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!


Thermopyle posted:

Yes, which you should be doing.

I don't think there's anything wrong with using by-id; it certainly makes physically identifying which disk is which easier. This is certainly something that might happen if you upgrade a machine without first exporting your pool, but since the solution is a simple export/import operation anyway, it's probably not that big of a deal. I agree UUID is a more permanent and consistent way to assign the disks to a vdev/pool, but by-id should in theory be just as consistent, plus it lets you easily identify which disk has failed by putting the disk type and serial number right there in the pool information.

spoon daddy
Aug 11, 2004
Who's your daddy?

College Slice

Thermopyle posted:

Yes, which you should be doing.

I'm relatively new to ZFS. I chose to do mine by WWN; is that reasonable?

30 TO 50 FERAL HOG
Mar 2, 2005





Ramrod XTreme

WD Red 3TB just dropped to the lowest price it's ever been, $120 on Amazon.

Minty Swagger
Sep 8, 2005

Ribbit Ribbit Real Good


Civil posted:

Synology surveillance software is certainly nice, and it works well, but the unit you purchase only allows you to run one camera. You need to purchase additional licenses per camera to use more, and they run $50+ each. It's a big drawback.

You should also make sure your cameras are supported by the software.

Does this also apply to XPEnology? I'd assume so, so they can give a hat tip to the actual developers, yeah?

D. Ebdrup
Mar 13, 2009



This is not properly adapted from how I do it on FreeBSD, so it's not perfect and will need more work, but on Linux you can presumably use lsblk -f to find labels rather than device IDs, and then simply look for the serial number that you of course documented with labels on the out-facing side of each disk.
code:
for i in a b c d e f g; do echo -n "/dev/sd$i: "; hdparm -I /dev/sd$i | awk '/Serial Number/ {print $3}'; done

D. Ebdrup fucked around with this message at 11:15 on Mar 21, 2014

the_lion
Jun 8, 2010

On the hunt for prey...

Potentially stupid question:

On the back of my Synology DS214se there are two USB ports. I have a lot of externals; if I connect these it'll just add them as network-available drives, right? They won't change them in any way or try to add them to my current RAID 1 or anything? Or is it pretty much for adding more NAS drives/Synology expansion?

I saw you can share printers this way, just curious as to how they work.

Independence
Jul 12, 2006



Grimey Drawer

the_lion posted:

Potentially stupid question:

On the back of my Synology DS214se there are two USB ports. I have a lot of externals; if I connect these it'll just add them as network-available drives, right? They won't change them in any way or try to add them to my current RAID 1 or anything? Or is it pretty much for adding more NAS drives/Synology expansion?

I saw you can share printers this way, just curious as to how they work.

I believe the drives you plug in are for dumping data from internal to external and vice versa. Printers are great: just plug one in and it's shared. Point your computer to the NAS and it shows up. Use the appropriate driver for the printer and you're printing away.

eightysixed
Sep 23, 2004

I always tell the truth. Even when I lie.


Independence posted:

I believe the drives you plug in are for dumping data from internal to external and vice versa.

This is correct. On Xpenology, anyway.

eddiewalker
Apr 28, 2004


Minty Swagger posted:

Does this also apply to XPEnology? I'd assume so, so they can give a hat tip to the actual developers, yeah?

Yes. You install Surveillance Station from the exact same Synology repo on xpenology, and that package only comes with one camera license.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



spoon daddy posted:

I'm relatively new to ZFS. I chose to do mine by WWN; is that reasonable?

I said UUID, but what I meant was "anything not tied to which port or controller the drive is plugged in to", and as far as I'm aware, WWN meets that criteria.
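The WWNs live in the same /dev/disk/by-id directory as everything else, so it's easy to check what you've got:
code:
# list only the World Wide Name entries
ls -l /dev/disk/by-id/ | grep wwn-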

evol262
Nov 30, 2010
#!/usr/bin/perl

D. Ebdrup posted:

This is not properly adapted from how I do it on FreeBSD, so it's not perfect and will need more work, but on Linux you can presumably use lsblk -f to find labels rather than device IDs, and then simply look for the serial number that you of course documented with labels on the out-facing side of each disk.
code:
for i in a b c d e f g; do echo -n "/dev/sd$i: "; hdparm -I /dev/sd$i | awk '/Serial Number/ {print $3}'; done

lsblk -f is an unreliable means of doing this. You should use UUIDs. Labels are fungible. Disk UUIDs are not.

And, as before:
code:
ls /dev/sd* | sed -e 's/.*sd\(.\).*/\1/' | uniq | while read i; do echo -n "/dev/sd$i: "; smartctl -i /dev/sd$i | awk '/Serial Number/ {print $3}'; done

Flipperwaldt
Nov 11, 2011

Won't somebody think of the starving hamsters in China?



the_lion posted:

They won't change them in any way or try to add them to my current RAID 1 or anything? Or is it pretty much for adding more NAS drives/Synology expansion?
Doubling up on the confirmation you already got: it won't change a thing on the disk or on your internal setup. It automounts as "usb disk x", pretty much the same way you would expect from a normal computer.

Don't remember if it auto-shares it on the network or something, but it's reachable from inside the GUI as another disk, separate from your other volumes.

The USB ports can also take a camera or an audio interface, although I'm not sure what the application of the latter would be.

SeventySeven
Jan 18, 2005
I AM A FAGGOT WHO BEGGED EXTREMITY TO CREATE AM ACCOUNT FOR ME. PLEASE PELT ME WITH ASSORTED GOODS.

I built my own Linux server/NAS for fun, and because I wanted more power than comparably priced Synology/QNAP boxes. It's been quite the learning experience, and I'm pretty familiar with mdadm now at least, but I wanted to check on something before I start storing data on it. When I run:
pre:
$ sudo fdisk -l /dev/sdc

WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.
Do I need to be worried about "Partition 1 does not start on physical sector boundary"? When I used gparted to partition the disk I just did mkpart primary 0.00TB 3.00TB. I remember reading, in the SSD thread a while back, about misaligned partitions causing a lot of problems.

Edit to (I believe) answer my own question: I shouldn't have ignored the warning to use Parted instead of fdisk. parted /dev/sdc unit s print returns the following, which looks good to me.
code:
Model: ATA ST3000VN000-1H41 (scsi)
Disk /dev/sdc: 5860533168s
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start  End          Size         File system  Name     Flags
 1      2048s  5860532223s  5860530176s               primary
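For reference, letting parted handle alignment from the start looks roughly like this (destructive, and /dev/sdc is just the device from above):
code:
# WARNING: wipes the disk. -a optimal aligns to the optimal I/O size
parted -a optimal /dev/sdc mklabel gpt
parted -a optimal /dev/sdc mkpart primary 0% 100%
# then verify partition 1:
parted /dev/sdc align-check optimal 1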

SeventySeven fucked around with this message at 04:23 on Mar 22, 2014

yomisei
Mar 18, 2011


I soon have to upgrade from a (4-1)x3TB WD Red system to a bigger one to accommodate all my digital necromantic storaging needs. Since the release of the WD Reds and then 4TB disks, there are a few alternatives to those, the HGST Deskstar and Seagate NAS in particular. My first system (G1610+16GB+FreeNAS+the above) has just been a proof-of-concept and a step up from a loose collection of differently sized USB drives. Now I'd like to upgrade into something more future-proof with ECC and double parity.

How do the WD Red 4TB fare against the Seagate and HGST ones? I assume the HGST is the ex-Hitachi-now-WD one?

Comatoast
Aug 1, 2003


yomisei posted:

My first system (G1610+16GB+FreeNAS+the above) has just been a proof-of-concept... Now I'd like to upgrade into something more future-proof with ECC

If you want a stable ZFS setup then you must use ECC memory. Non-ECC on ZFS is worse than non-ECC on ext4/NTFS/HFS.

Comatoast fucked around with this message at 17:06 on Mar 22, 2014

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



Comatoast posted:

If you want a stable ZFS setup then you must use ECC memory. Non-ECC on ZFS is worse than non-ECC on ext4/NTFS/HFS.

I read something similar to this many months after I had first set up my server with non-ECC RAM and ever since I've been living in fear of the day my data is trashed.
