GokieKS posted:Got in my Norco RPC-4020 and the 120mm fan plane this week, and my file server migration is well underway. A few thoughts: It looks like one or more of the clips on your heatsink/fan are rotated incorrectly.
i ordered 3 hard drives from amazon and they all came in their own boxes, inside boxes. just wanted to let you know great transaction, will buy again, +5. now i need some1 to come build my computer for me.
Don Lapre posted:It looks like one or more of the clips on your heatsink/fan are rotated incorrectly. The stock Intel HSF doesn't need the clips to be rotated to be installed - just pushed through. The rotation is to make it easier to prepare to push through or remove. My HSF is installed securely and working fine (CPU at 35C, fan spinning at ~1000RPM). Also, it's coming off soon anyway.
GokieKS posted:The stock Intel HSF doesn't need the clips to be rotated to be installed - just pushed through. The rotation is to make it easier to prepare to push through or remove. My HSF is installed securely and working fine (CPU at 35C, fan spinning at ~1000RPM). You rotate the clip to remove the heatsink. If they are rotated incorrectly then they can pop out. There is a lock. There is a specific spot they are supposed to be rotated to. https://www.youtube.com/watch?v=6abFUpPPCfI#t=123s Don Lapre fucked around with this message at 00:38 on Mar 3, 2014 |
Arob1000 posted:All Synology machines had a vulnerability that someone used to install a rootkit + Bitcoin miner on a ton of them. Updating to the newest DSM version is supposed to fix it: http://forum.synology.com/enu/viewtopic.php?f=1&t=81316 (also appears as a 'lolz' directory in /etc) I got hit with this on my 1513+ and I too would like to know how it was done. My first clue that something was amiss was SABnzbd started spitting out errors saying "Unpacking failed, See log". When I saw "lolz" in the error log I knew it was a bad thing. I followed these directions from the Synology forums and I was up and running in about 20 minutes without losing any data or packages. quote:OK, following the posts from Mark and Mads I finally managed to get my NAS back up and running without losing anything. All the apps and configurations are still there.
Synology question: due to some shuffling around, I now I only have a Volume 2. No Volume 1. That wouldn't be a problem, except a bunch of bootstrap/ipkg tools are hardlinked to install on Volume 1 and it's kind of a pain. How can I rename my Volume 2 to "1" without messing DSM settings up? I hoped that rebooting would fix it automatically, but it didn't, nor did deleting the Volume 1 mount point via ssh. Edit: found a big "Not even possible." on the syno forums. Looks like the only solution is shuffling disks around to make a new Volume 1 and move a few terabytes. Again. eddiewalker fucked around with this message at 03:43 on Mar 3, 2014 |
I'm looking for advice on a RAID configuration for a home NAS/media server I'm building. I will be installing FreeNAS for the OS and using ZFS for the file system. The motherboard and case can accommodate up to 10 internal HDDs. In order to keep the initial investment down, I am ordering two 4TB WD Red drives. My plan was to create a vdev that mirrors the two drives to start. Then, when I want to expand capacity, I will add additional two-drive mirrored vdevs to the zpool. The alternative I was considering is RaidZ2. The downside is that I'd have to purchase additional HDDs right now. The upside is having more redundancy per span. I am not concerned with write speeds, but boosting read speeds would be nice since the server could be streaming to up to 4 devices at once. Does anyone have any advice on which way I should go?
madhatter160 posted:Does anyone have any advice on which way I should go?
adorai posted:Looking at newegg prices, it looks like you will pay around $370 for a pair of 4TB reds, which will give you 4TB usable in a mirrored vdev. Alternatively you could spend $300 and get 5x 1TB drives, which in raidz2 would give you 3TB usable. You can add 5 more later to double the size, or replace the existing disks one at a time to add capacity. You could also spend $360 to buy 4x 2tb drives, run a 2+1 raidz1 with a hotspare, and then add more 2+1 vdevs later. The hotspare does not provide the same level of protection from data loss as an additional parity disk, but can be shared among vdevs. When you are all done you could have 12TB of storage for $900, vs $1110 for 3 mirrored pairs of 4TB disks. Thanks for the reply. There are a lot of articles against raidz1 (RAID5) due to the vulnerability to a second disk failure during a rebuild. So, I'm a bit leery of going that route. It seems like doing the 5x raidz2 is going to give me the most storage and greater redundancy inside of a vdev. How concerned should I be about losing a third HDD during a rebuild?
madhatter160 posted:Thanks for the reply. There are a lot of articles against raidz1 (RAID5) due to the vulnerability to a second disk failure during a rebuild. So, I'm a bit leery of going that route. It seems like doing the 5x raidz2 is going to give me the most storage and greater redundancy inside of a vdev. How concerned should I be about losing a third HDD during a rebuild? tl;dr for a home-server scenario you're way over-thinking HDD loss. DrDork fucked around with this message at 02:34 on Mar 4, 2014 |
madhatter160 posted:Thanks for the reply. There are a lot of articles against raidz1 (RAID5) due to the vulnerability to a second disk failure during a rebuild. So, I'm a bit leery of going that route. It seems like doing the 5x raidz2 is going to give me the most storage and greater redundancy inside of a vdev. How concerned should I be about losing a third HDD during a rebuild?
My server has drives in a mdadm array and ZFS arrays. I'm about to move them all to a new motherboard/CPU/RAM. I think I know this already, but I want to confirm: ZFS and mdadm can handle the fact that they'll be plugged in to different ports, correct? Is there any feature that I need to confirm is enabled for either tech to ensure that they figure out what's what after the move? Also, the current system only has 4 GB of memory. Is there anything I need to configure to help ZFS take advantage of the 12 GB in the new system?
Thermopyle posted:My server has drives in a mdadm array and ZFS arrays. I'm about to move them all to a new motherboard/CPU/RAM. ZFS handles this gracefully. mdadm not so much, sometimes. It'll probably come up as md127 or something, and you'll need to go through the "mdadm --detail --scan >> /etc/mdadm.conf" bit, then it should be fine. Nothing you have to do to tell ZFS to eat more memory.
Thermopyle posted:My server has drives in a mdadm array and ZFS arrays. I'm about to move them all to a new motherboard/CPU/RAM. Both mdadm and ZFS use UUIDs to determine which drives are part of a particular array, so no, neither care about which physical port a drive is plugged into. You should do a zfs export of your zpools before moving them to the new system. If you forget, you'll have to use zfs import -f to bypass the error that they weren't exported before moving, but it won't actually hurt anything. I'm not sure about ZFS on Linux, but on FreeBSD/IllumOS it'll automatically use all the RAM you can throw at it. IIRC the ARC auto-sizing on Linux needs manual configuration since it's not integrated with the rest of the kernel caches. Google should help on this one. SamDabbers fucked around with this message at 16:54 on Mar 4, 2014 |
I wish I had caught the fact that FreeNAS was setting aside 2gb per disk for swap space. It's a trivial loss of storage but it bothers me because the system will never need anywhere close to 8gb of swap.
What's the process for reinstalling FreeNAS? Can I just export the config, set up a new flash drive and load the config from there, or do I have to export the ZFS-pool first? I'm currently in a weird situation where an upgrade failed, and now it's still working, but unable to be upgraded any further. I'm hoping a reinstall can fix that.
Both. Export the zpool, backup your config file, wipe and reinstall/upgrade, load the config file, import the zpools.
evol262 posted:ZFS handles this gracefully. What's the appropriate way to handle this if I boot off of this mdadm array? To be clear, this isn't a new OS install that I'm wanting to use an existing array with. I'm going to move the array that the OS is installed on, into a new system.
Thermopyle posted:What's the appropriate way to handle this if I boot off of this mdadm array? Booting should be ok. You may have problems when it comes time to pivot root if the devices are different. mdadm is ok with drive ordering changing, but is not ok with /dev/sda,/dev/sdb,/dev/sdc changing to /dev/sdd,/dev/sde/,/def/sdf. In the latter case, it'll fail to assemble, mdX won't start, and you'll have to use a recovery cd (mdadm --assemble --scan && mount /dev/mdX /mnt/recover && mdadm --detail --scan >> /mnt/recover/etc/mdadm.conf && reboot).
Are there any strong opinions about purchasing refurbished hard drives on amazon? The 3tb WD Reds are $10 less and no state sales tax for me if I go with the amazon fullfilled refurb.
Comatoast posted:Are there any strong opinions about purchasing refurbished hard drives on amazon? The 3tb WD Reds are $10 less and no state sales tax for me if I go with the amazon fullfilled refurb. Only ![]()
Thanks for the help SamDabbers and evol262. I didn't have a thing to worry about. Moved drives to new motherboard. Booted. Done.
SamDabbers posted:Only Its more about keeping the great state of Texas's grubby hands as far away as possible. A matter of principal, if you will.
Thermopyle posted:Thanks for the help SamDabbers and evol262. This is why software RAID is awesome.
Comatoast posted:Its more about keeping the great state of Texas's grubby hands as far away as possible. A matter of principal, if you will. Well if it's a matter of principle, then you should buy them even if they cost ![]() ![]() If it's a matter of principal then you may end up losing more value in the end if your refurb drive craps out and you're just out of the (e.g.) 90-day warranty.
Please be aware that if you have an uneven number of non-parity drives vs drives in the vdev, with an even number record size (record size defaults to 128 but is tunable at pool creation - and has to follow the binary value progression), you may get unfortuante results - the formula is something like: recordsize / (nr_of_drives - parity_drives) = maximum variable stripe size. So if you have 5 drives in raidz2, you will end up with repeating numbers - and storing that in a 512 byte or 4094 byte sector will lead to preformance problems both at write and at read. The more time I spend looking up things for the tips and notes I post from time to time - even if I wasn't able to find it this time, the more convinced I become that zfs is arcane magic that involves blood sacrifices and virgins, possibly of the goat variety. D. Ebdrup fucked around with this message at 23:50 on Mar 5, 2014 |
I've posted about this before in the thread, but the performance problems aren't that bad and would only show up under really specific workloads (lots of fsync'd writes of a size that doesn't match the number of data drives * stripe size), and even then the penalty is that they'd perform as if the write were one stripe larger in size. This is definitely something to be avoided if you're going to hit that use case (eg, write-heavy ACID compliant database), but for most home NASs it will never be noticeable.
What the flying fuck is a 2.5TB HD? Got one from WD as a refurb in exchange for a busted 2.0TB drive. Is it a 3.0TB drive with half a busted platter?
Bob Morales posted:What the flying fuck is a 2.5TB HD? its a drive with 4 640gb platters.
Bob Morales posted:What the flying fuck is a 2.5TB HD? HDD platters come in 500gb, 640gb, 1tb and 1.25tb sizes. They're then "destroked" to fit whatever size your drive is supposed to be.
Bob Morales posted:What the flying fuck is a 2.5TB HD? Hahah, I got the exact same thing a while back from a RMA! Unfortunately it threw my array for a spin and I was actually kind of pissed they didn't give me an identical sized disk.
HDDs from Newegg arrived. While my 8 drives were in fact individually in smaller boxes with 1 drive per instead of the styrofoam holders (which probably require buying 12 drives?), they actually seem pretty well packaged, with a pretty rigid air bubble holder that prevents the drive from being able to move at all within the small boxes:![]() Time to test and hope they're all good! E: Well, one of them was DOA. Makes a high-pitched buzzing noise and isn't being detected by the system. The rest were detected properly, and SMART data looks good. Now running badblocks on them, hopefully will only have to RMA the one drive. Might just request a refund and pick up a drive from Micro Center since they have them for $125 this month - paying the sales tax is probably worth not having to wait for the new drive to arrive. GokieKS fucked around with this message at 20:40 on Mar 6, 2014 |
TIMG that shit, por favor.
DrDork posted:Both. Export the zpool, backup your config file, wipe and reinstall/upgrade, load the config file, import the zpools. Two questions, if I want to upgrade freenas and don't care about the previous config, I assume I don't need to back it up. And also, what is the purpose of exporting a zpool? Doesn't freenas have auto importing of pools/volumes?
About to pick up a new NAS and dump my eSATA/USB3 Drobo on craigslist. It took days to get my files off of the drobo onto an external drive. Is the Synology 1513+ still the most recommended 5-bay NAS? I want to do Smart RAID or whatever they call it so I can add drives later. I'm mostly going to be housing a bunch of raw photography that I access w/ Lightroom and then video media files. Just want to make sure I'm still up to date before I pull the trigger.
Megaman posted:Two questions, if I want to upgrade freenas and don't care about the previous config, I assume I don't need to back it up. And also, what is the purpose of exporting a zpool? Doesn't freenas have auto importing of pools/volumes? (2) It's just being nice to the zpool. Yes, FreeNAS/FreeBSD can force an import of a not-correctly-exported zpool. In your case not exporting is unlikely to harm anything, since all it really does is force-flush anything that needed to be written to the pool and gracefully remove it from the OS. Since you're unlikely to be writing stuff to it when you pull it, and you don't care about the OS you're leaving behind, it really doesn't matter. Good habit, though.
MMD3 posted:About to pick up a new NAS and dump my eSATA/USB3 Drobo on craigslist. It took days to get my files off of the drobo onto an external drive. Yea, as long as you dont need transcoding the 1513+ is great and can be expanded with the DX513 to a 10 or 15 bay.
ZFS on Linux appears to be shaping up quite nicely. I got sick of running SmartOS at home and decided to install Debian instead. I had forgotten to export the pool but a force import worked just fine.
Conversely BTRFS is almost ready too and by the general articles should be better than ZFS.
MMD3 posted:About to pick up a new NAS and dump my eSATA/USB3 Drobo on craigslist. It took days to get my files off of the drobo onto an external drive. I got the 1813+ locally via eBay (make offer, offer cash + pickup, cancel ebay auction) for the cost of a 1513+. So you may want to keep your eyes open. Brand new in sealed box -- guy got it in exchange for some IT work he did and ended up flipping it.