The primary downside of using the N54L for Plex is that you will likely hit some performance limits with transcoding (why someone would use Plex just to sit at a 10ft interface instead of XBMC is not clear to me, aside from the "pay someone else to configure more of XBMC" argument) - it's just barely competent at transcoding 720p with the x264 library (and it's not like Plex supports GPU-based transcoding either). For Synology units, I wrote them off a good while ago for any situation where I might even have a chance of transcoding (see: iPads, iPhones as clients) because of benchmark results like this: http://us.hardware.info/reviews/270...-video-encoding But they're wonderful units if you just want to run a mostly pure, dedicated NAS and don't have much need for a fast CPU.

Krailor posted:Edit: And if you want to get crazy you could install a Xeon E3-1230 v2 into the G8 to get VT-D support.

bacon! posted:The FreeNAS guides recommend 1GB ram/TB storage --

NSC-800 build update. Good News - the Rosewill heatsink I have is now definitely absorbing heat from the CPU, as it can get rather hot. Bad News - I found out because the fan died after I did something stupid and shameful / goony. My i3, running at full speed in an open enclosure, hit 77C "idling" at the EFI screen. Guess these bad boys still don't issue the CPUIDLE instructions when sitting around outside a "real" OS. Thought this shit was solved over a decade ago; silly me for having faith in the low-level computer software industry.

necrobobsledder fucked around with this message at 22:18 on Oct 18, 2013
Krailor & necrobobsledder posted:
Also worth noting is that the Gen8 Microserver comes with HP iLO 4, which uses the Matrox G200 that nearly all OOB solutions use, but you need to buy a separate HP iLO Advanced license to get vKVM features. Alternatively, it appears you can get terminal support for free, which is fine for ESXi. Additionally, the HP B120i controller sits on a single PCIe 2.0 x1 link and uses a 1x SAS -> 4x SATA SFF-8087 cable, meaning you can't make full use of the 2x SATA 6Gbps + 2x SATA 3Gbps ports, so you'd be wise to invest in an M1015 while you're at it. Not that the Gen8 Microserver is cheap, of course - and adding another CPU and a SATA controller isn't going to make it cheaper. EDIT: ↓ Yeah, my Google-fu isn't up to scratch this evening. D. Ebdrup fucked around with this message at 23:18 on Oct 18, 2013
D. Ebdrup posted:Care to provide a source for this? The Gen8 ProLiant ML310e has a model that comes with the Xeon E3-1230 v2, but isn't the CPU in the Gen8 Microserver soldered on? I think the AMD-based Microservers had soldered CPUs though.
I'm setting up a Nexenta CE iSCSI-based SAN for use in a SQL Server lab and I'm wondering how much L2ARC cache I'll need and how to determine the L2ARC usage and if L2ARC is actually helping or not. I currently have eight SATA-2 (3.0gbps) disks in a single volume of mirrored pairs totalling 2.3TB of space, with a pair of Samsung 256GB 840 PRO SSDs on a M1015 controller for L2ARC. I had the drives available so I tossed them into the mix, but I'm wondering how much good they are doing/will do and if I need both in there. This filer will be the shared iSCSI storage for a SQL Server 2012 cluster running about a half dozen small databases, but the largest of which is pushing 150GB and billions of records. The database is updated hourly, which adds about 2 million rows at a time (consisting of 1.5 million user numbers and their scores at that time). Data access is mostly sequential in nature (tracking users scores over time). Since the filer will mostly be getting hit with sequential database usage patterns, will 512GB of L2ARC be overkill? How do I know what to look for, performance-wise?
Agrikk posted:I'm setting up a Nexenta CE iSCSI-based SAN for use in a SQL Server lab and I'm wondering how much L2ARC cache I'll need and how to determine the L2ARC usage and if L2ARC is actually helping or not. I hate to say this, but this might be a little above our pay grade. Perhaps you might have better luck in the Enterprise Storage thread? http://forums.somethingawful.com/sh...hreadid=2943669
You can add / remove L2ARC completely nondestructively. Figure out a benchmark method that approximates your workload, then run it a bunch of times with L2ARC enabled / disabled.
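For reference, attaching and detaching cache devices really is just a couple of zpool commands - a minimal sketch, assuming a hypothetical pool named tank and an SSD at c1t5d0 (substitute your own device names):
code:
# attach an SSD as L2ARC (a cache vdev); existing data is untouched
zpool add tank cache c1t5d0

# pull it back out; cache devices can be removed at any time with no data loss
zpool remove tank c1t5d0

# confirm the cache vdev is present (or gone)
zpool status tank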
The thing is, most people in the enterprise storage thread have relatively little experience with ZFS, because almost every commercial outfit with the budget to build out a capable ZFS storage server also has the budget to just buy a NetApp or something. The ZIL is what you should be using SLC-based SSDs for, and it can hose your system if it dies. Most newer commercial SANs are expensive but can destroy the performance of most ZFS-based single-node SANs (granted, that comparison doesn't even make sense).

Anyway, everyone will tell you to try removing the L2ARC drives to see their effect on performance - you can even yank them while the system is running with no effect on data integrity. Because you're mostly doing sequential I/O, the effect should be minimal even if you're down to something tiny like a 16GB SSD. However, the second you start adding other duties to that iSCSI target set, you'll be wishing you had more. If you know the access pattern will be mostly sequential, ZFS will do fine and you should be looking at sequential IOPS.

This might be a bit of a stretch, but somewhat older hardware from Fusion-io may be worth a shot. Their stuff gets you literally millions of IOPS... for far less than it would cost to get there any other way. This is because they basically don't do redundancy and are aimed at people building distributed file systems such as Hadoop's HDFS or Google's GFS. Then again, I think their cheapest stuff is something like 3TB of SSDs for around $10k, which is probably more than the budget for your project.
necrobobsledder posted:If you know the access pattern will be mostly sequential, ZFS will do fine and you should be looking at sequential IOPS.

Are you saying that I should have my SATA drives configured as RAID-Zx instead of a volume of mirrors? I thought that the parity calculations in any RAID (other than 1, 0, 10, 0+1) would inherently make the volume slower than a mirrored set. Is a ZFS volume that much faster?

IOwnCalculus posted:You can add / remove L2ARC completely nondestructively. Figure out a benchmark method that approximates your workload, then run it a bunch of times with L2ARC enabled / disabled.

This was pretty much what I planned. A pity that there isn't some counter, like "L2ARC in use" or something, that I could use to see how much of it I'm using. Benchmarking it is, then... Bleh.

Agrikk fucked around with this message at 17:28 on Oct 19, 2013
FreeBSD has zfs-stats to provide information about ARC and L2ARC usage; I don't know what the Nexenta equivalent would be. Do you have enough RAM to use all that L2ARC? Every entry in L2ARC takes up a small portion of ARC. I can't find it now, but I read a Sun developer blog that talked about the issue and about systems that weren't able to use all of their L2ARC, or even their ARC, because they were starved for RAM. edit: maybe some of these scripts will help http://dtrace.org/blogs/brendan/201...of-the-zfs-arc/ thebigcow fucked around with this message at 17:31 on Oct 19, 2013
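On an illumos-based box like Nexenta CE, the raw ARC/L2ARC counters are also exposed through kstat, so you can eyeball the same numbers without zfs-stats; l2_hdr_size is the part that answers the RAM question, since every block cached on the SSDs costs a small slice of ARC to index. A rough sketch, assuming the stock arcstats names:
code:
# total ARC size and its maximum target
kstat -p zfs:0:arcstats:size
kstat -p zfs:0:arcstats:c_max

# L2ARC: bytes cached, header overhead held in RAM, and hit/miss counts
kstat -p zfs:0:arcstats | egrep 'l2_(size|hdr_size|hits|misses)'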
I'm one of those special flowers that clings to unRAID, but it's showing its age pretty badly now. I want to scale my server up to 6+ drives, which involves buying a more expensive license, but it doesn't support any level of double-parity drive coverage. I'm really partial to file servers where you can expand storage at your convenience, which is why I like unRAID, but I want to modernize a bit more. I REALLY had my eyes on Storage Spaces, but when people were testing the RC it fell flat on its face. It sounds like it's become really compelling with a recent R2 release, though? Or am I mistaken?
What's a plug-and-play solution I can get that will back up multiple home PCs and Macs over a network?
fookolt posted:What's a plug-and-play solution I can get that will back up multiple home PCs and Macs over a network?

The best full backup solution I've used was when I was running WHS 2011. The desktop client would provide a full backup of any hard drive you chose, and in the event that your HDD failed, you could replace the HDD, boot from the provided recovery disk, and it would restore everything and provide you with a bootable volume, leaving everything intact. For Mac, I know there's a client, but you could also do regular Time Machine backups to a share. I've never restored my Mac from a backup, so that's probably better answered by someone else. WHS is easy to set up, but not so much plug and play - you need some form of PC hardware to get started. The HP Microservers are well liked around here; that's what I used.

If you're more concerned with plug-and-play type solutions, I'll once again pimp Synology for having one of the easiest setups there is - start plugging hard drives in, turn it on, run a util, and you're up. For the backup portion, they recommend Time Machine on Macs, and they have a backup util for PCs. It doesn't look like it restores a bootable volume after a failure the way WHS does, but it will keep your data safe, provided you're backing up the locations you store it in. http://www.synology.com/dsm/home_ba...ktop_backup.php
Lost another hard drive in my house, so I'm finally ready to set up a NAS so I stop risking losing something important. A http://www.amazon.com/gp/product/B00CRB9CK4/ and 2x 2TB http://www.amazon.com/gp/product/B008JJLZ7G and I'm all set, right?
Agrikk posted:Are you saying that I should have my SATA drives configured as RAID-Zx instead of a volume of mirrors? I thought that the parity calculations in any RAID (other than 1, 0, 10, 0+1) would inherently make the volume slower than a mirrored set. Is a ZFS volume that much faster?

I'm generally a fan of not optimizing hardware to make up for poorly written software, but that's a philosophical debate in engineering unto itself. With the 7200 RPM SATA disks I suspect you're using, your hilariously bad random IOPS (literally ~150 IOPS per disk) will be the bottleneck long before RAIDZ parity overhead becomes a problem. But as with most SAN and server I/O problems, you can just throw more spindles at it in the end - that's fundamentally how SANs scale horizontally. For your lab, I'm not sure what to expect beyond L2ARC almost certainly giving you a performance boost when the pool is used as a SQL server's datastore. For a general ballpark idea of what to expect from RAIDZ vdevs of different sizes, or from mirrors versus RAIDZ vdevs, this guy has a fair chart. Note that he didn't use 15k+ RPM Fibre Channel drives or anything.

Also, there's a way to determine how much of your L2ARC is actually in use and whether you'd get any benefit from increasing its size - it shows up right in the zpool iostat -v output. http://serverfault.com/questions/31...t-hitting-l2arc
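For the curious, the per-device view looks something like this - a quick sketch assuming a hypothetical pool named tank. The cache devices get their own section with allocated/free space and their own I/O counters, so if their alloc never fills up, or their reads stay near zero while the spinning disks stay busy, the L2ARC isn't earning its keep:
code:
# per-vdev breakdown refreshed every 5 seconds; watch the 'cache' section
zpool iostat -v tank 5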
D. Ebdrup posted:Care to provide a source for adding a Xeon CPU to a Microserver? The Gen8 ProLiant ML310e has a model that comes with the Xeon E3-1230 v2, but isn't the CPU in the Gen8 Microserver soldered on? I don't mean to beat a dead horse, but what are the advantages of a G8 Microserver with a swapped-in Xeon 1220 versus a rig I build myself with the Xeon 1220 and a SuperMicro C204 board? At that point, the G8 ProLiant ends up being about $160 more. It looks like I could pawn off the G1610T for about $50.
Death Himself posted:Lost another hard drive in my house, so I'm finally ready to set up a NAS so I stop risking losing something important. That's the stuff. 3TB drives are cost-effective if you can afford the bump up. Reds are the way to go.
bacon! posted:I don't mean to beat a dead horse, but what are the advantages of a G8 Microserver with a swapped-in Xeon 1220 versus a rig I build myself with the Xeon 1220 and a SuperMicro C204 board? At that point, the G8 ProLiant ends up being about $160 more. It looks like I could pawn off the G1610T for about $50.
Assuming you can get a good case for a DIY build, the G8 Microserver simply isn't as competitively priced for the same market as the N??L-series Microservers were. My next home server build is most likely going to be an Intel Server Board S1200V3RPM + Xeon E3-1225 v3 and 32GB of memory, along with six 4TB WD Reds, so I can throw pfSense, FreeNAS and OpenELEC onto one machine. D. Ebdrup fucked around with this message at 23:02 on Oct 19, 2013
Civil posted:The best full backup solution I've used was when I was running WHS 2011. The desktop client would provide a full backup of any hard drive you chose, and in the event that your HDD failed, you could replace the HDD, boot from the provided recovery disk, and it would restore everything and provide you with a bootable volume, leaving everything intact. For Mac, I know there's a client, but you could also do regular Time Machine backups to a share. I've never restored my Mac from a backup, so that's probably better answered by someone else. WHS is easy to set up, but not so much plug and play - you need some form of PC hardware to get started. The HP Microservers are well liked around here; that's what I used.

Thanks. So let's say something breaks on one of my machines; how do I restore it from a Synology? Also, which one should I get? Is there really $100 worth of difference between the DS213 and the DS213j?

fookolt fucked around with this message at 22:48 on Oct 19, 2013
The difference between the j and the non-j model should be a single-core CPU versus a dual-core, and possibly a different architecture. If you are going to run a bunch of torrents and other stuff, get the dual core. The backup software is a little clunky, but it can back up folders you select, so you could reinstall Windows and restore the data after installing your software. If you have Windows 8 you can do a better backup setup, or use something to make an image file (Backup and Restore in Windows).
What is the best goon-recommended CPU (socket 1155) for a server running Plex for transcoding? Would an i3-3225 handle 2 or 3 1080p streams? By best, I mean the cheapest that will do the job. kill your idols fucked around with this message at 02:07 on Oct 20, 2013
Is there a hard drive equivalent of Memtest that will do a write/read comparison, preferably looping and with different patterns? There's a machine with what we term "voodoo" problems - hard to track down and isolate, but they can show up at inconvenient moments. I want at least some sort of assurance that it's not the data storage side that's causing them. I've used BST5, but it's not well documented and most of its tests just check transfer rates. This is a desktop machine.
So I've got my "NAS" (really just a Win7 box with DrivePool) up and running. The consensus is that it shouldn't go to sleep / turn off the drives at any point, right? They should constantly be spinning?
Gozinbulx posted:The consensus is that it shouldn't go to sleep / turn off the drives at any point, right? They should constantly be spinning?
Mine stay on for about three hours after they get accessed, so they don't go on-off-on-off constantly, but there are stretches of a day or two where the array isn't accessed at all; it seems like a waste to keep the drives spinning just because.
Well, I was just asking. I thought I read here that drives turning on and off is bad for longevity.
So I finally hit the capacity limit of my NAS and I'm planning an upgrade - probably just disks and HBA, but potentially other hardware as well. I don't follow the consumer NAS scene very closely, so I just wanted to get some advice before diving in. My current setup is as follows:

Motherboard: Supermicro X9SCL+-F
CPU: i3-2100
HBA: BR10i
Disks: 2x WD15EARS, 2x WD20EARX

I'm running ZFS on Ubuntu (ZoL kernel module), but I'm still on pool version 28. Basically, I was using OpenIndiana but switched to Linux because I wasn't comfortable in a Solaris environment and I use this server for several other things (SAB, SickBeard, CP, PVR backend, etc.). While I've been lucky not to have any drive failures yet, I'm looking to get rid of the green drives and replace all of them with better-quality drives of the same capacity, rather than the mixed capacities I have now. With that said, here's the hardware I'm considering:

Disks: 4x WD30EFRX
HBA: M1015

It sucks to have to buy a new HBA considering the M1015 is pretty expensive, but so be it. From here, though, what is my best upgrade path? Should I install the new controller with the new disks, create a new zpool at version 5000, and simply copy my data over? Should I upgrade my existing pool first? I haven't looked into how ZFS on Linux compares against FreeBSD for quite some time; is there any reason to consider switching to FreeBSD? As well, I should be sure to flash the M1015 to the LSI 9211-8i firmware, right? I know it was recommended to flash the BR10i to the LSI firmware, and from what I can tell, the same process is recommended for the M1015. Thanks for the help!
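As a side note, checking where the existing pool sits before deciding is quick - a small sketch, assuming a hypothetical pool named tank:
code:
# current on-disk version of the pool (28 in this case; 5000 means feature flags)
zpool get version tank

# list the pool versions / feature flags this ZFS build supports
zpool upgrade -v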
Gozinbulx posted:Well, I was just asking. I thought I read here that drives turning on and off is bad for longevity.

You're probably mixing up "power cycles" with "load / unload cycles". WD Greens, as well as most other green-type drives, park their read/write heads at almost every chance they get. The problem is that the heads will only survive that type of load long-term in a desktop environment, where the drive is either powered off most of the time or, if it's on, idle for hours and hours. In most NAS / server applications, the drive is powered 24/7 and is accessed often enough that it is constantly moving the heads to and from the parked position, which wears them out and can cause premature drive failure. This used to be manually configurable on Greens way back in the day, but they stopped that when they realized people were filling servers with Greens instead of REs.

entr0py posted:Disks: 4x WD30EFRX

I used this particular guide to flash my M1015. The same tools (just with different firmware packages) should flash the BR10i as well. I have my M1015 coexisting with an HP LSI1064-based card (the BR10i is 1068, I believe), and both are running generic LSI IT firmware, not the HP / IBM firmware they came with. In your situation I would create a new pool and migrate the data; no reason not to. Are you going to set it up as raidz2, or copy your current setup of what I assume must be two mirrored vdevs?
IOwnCalculus posted:I used this particular guide to flash my M1015. The same tools (just with different firmware packages) should flash the BR10i as well. I have my M1015 coexisting with an HP LSI1064-based card (the BR10i is 1068, I believe), and both are running generic LSI IT firmware, not the HP / IBM firmware they came with.

Thanks for the flash guide, that's perfect. As for my data migration, my current setup is indeed two mirrored vdevs. I am planning to ditch the existing drives, so I'm really open to anything for my new pool. Most of my data is non-critical media, so I was thinking of just doing raidz1 (which would mean 9TB of total storage). I automate off-site backups of critical data weekly, so I'm not worried about the off chance of two drives failing at once in that configuration.
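A rough sketch of that kind of migration, with hypothetical pool and disk names (the WD30EFRX are 4K-sector drives, hence ashift=12) - the general shape rather than a drop-in script:
code:
# new 4-disk raidz1 pool on the M1015, using stable by-id device names
zpool create -o ashift=12 newtank raidz1 \
    /dev/disk/by-id/ata-WDC_WD30EFRX_disk1 \
    /dev/disk/by-id/ata-WDC_WD30EFRX_disk2 \
    /dev/disk/by-id/ata-WDC_WD30EFRX_disk3 \
    /dev/disk/by-id/ata-WDC_WD30EFRX_disk4

# snapshot everything on the old pool and replicate it, properties and all
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -Fdu newtank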
Wait, you can mirror vdevs? I thought zpools always striped / round-robined their vdevs and that the only mirrors are the vdevs themselves?

IOwnCalculus posted:This used to be manually configurable on Greens way back in the day, but they stopped that when they realized people were filling servers with Greens instead of REs.

Samsung drives were able to have TLER set on them - they call it CCTL. Last I remember, the drawback was that it wasn't permanently set so you'd have to set them upon booting your OS. I had mine set up in an rc.local script calling hdparm and/or smartctl. Still have the drives and they work on the Spinpoint F4 models if you can snag some.
entr0py posted:Thanks for the flash guide, that's perfect. As for my data migration, my current setup is indeed two mirrored vdevs. I am planning to ditch the existing drives, so I'm really open to anything for my new pool. Most of my data is non-critical media, so I was thinking of just doing raidz1 (which would mean 9TB of total storage). I automate off-site backups of critical data weekly, so I'm not worried about the off chance of two drives failing at once in that configuration.

Then yeah, sounds like a good time for a raidz1 to me.

necrobobsledder posted:Wait, you can mirror vdevs? I thought zpools always striped / round-robined their vdevs and that the only mirrors are the vdevs themselves?

I mean his pool was made up of two vdevs, each of which is a mirrored pair. The pool would indeed be striped across them.

necrobobsledder posted:Samsung drives were able to have TLER set on them - they call it CCTL. Last I remember, the drawback was that it wasn't permanently set so you'd have to set them upon booting your OS. I had mine set up in an rc.local script calling hdparm and/or smartctl. Still have the drives and they work on the Spinpoint F4 models if you can snag some.

TLER was part of it, but the main problem with the Greens was the head parking. I've got one sitting in a server where I don't care if it dies, and if I can ever make smartctl play nice with the ancient-ass controller in that even-more-ancient server, I know it has a stupid number of head load/unload cycles. Ah, there we go.
code:

Two days away from two years of power-on time, averaging 700+ head parks a day. How is this thing still running?
IOwnCalculus fucked around with this message at 00:19 on Oct 22, 2013
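For anyone who wants to pull the same counters on their own drives, it's a one-liner with smartmontools (the device path is hypothetical - point it at whatever disk you're curious about):
code:
# power-on hours versus head load/unload count - Greens rack the latter up fast
smartctl -A /dev/sdX | egrep 'Power_On_Hours|Start_Stop_Count|Load_Cycle_Count'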
Bensa posted:Is there a hard drive equivalent of Memtest that will do a write/read comparison, preferably looping and with different patterns? There's a machine with what we term "voodoo" problems - hard to track down and isolate, but they can show up at inconvenient moments. I want at least some sort of assurance that it's not the data storage side that's causing them. I've used BST5, but it's not well documented and most of its tests just check transfer rates. This is a desktop machine.

Use the offline tests from smartctl / smartmontools.

IOwnCalculus posted:Two days away from two years of power-on time, averaging 700+ head parks a day. How is this thing still running?

I'm not sure if you're being sarcastic or not, but according to the SMART data you still have 19% (or 7.4%) of the expected load cycle lifetime left.

Ninja Rope fucked around with this message at 00:47 on Oct 22, 2013
Bensa posted:Is there a hard drive equivalent of Memtest that will do a write/read comparison, preferably looping and with different patterns? There's a machine with what we term "voodoo" problems - hard to track down and isolate, but they can show up at inconvenient moments.

In addition to the previously mentioned SMART tests, consider Linux's badblocks for a software-oriented read/write test. Be sure to read the manual very well, as it can be destructive. Also note that it can be amazingly ineffective if the issue is related to load on the system. If you're finding corrupted data on disk, you may actually be dealing with a memory issue.
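A minimal sketch of the destructive variant, assuming a scratch disk at a hypothetical /dev/sdX - the -w mode overwrites the entire drive, so only point it at a disk with nothing you care about on it:
code:
# four write/read passes with different patterns (0xaa, 0x55, 0xff, 0x00),
# verbose with a progress display, 4 KiB blocks
badblocks -wsv -b 4096 /dev/sdX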
I have a bunch of 2.5" 1TB drives lying around and I was wondering what the least expensive way to present them to a Windows machine for storage would be. Nothing fancy - inexpensive and low power would be fantastic. JBOD is fine too, although RAID 5 or 6 would be cool. I found those enclosures that put four 2.5" drives in one 5.25" bay, but 70 bucks a pop is more than I'm willing to spend on something to mess around with. eSATA, USB, iSCSI - all of that will work. I have a ton of desktop hardware to mess with as well. Hell, if I could just find something that I could put 8 or 12 of the things in without costing a fortune, I'd be happy.
Ninja Rope posted:Use the offline tests from smartctl / smartmontools.

McGlockenshire posted:In addition to the previously mentioned SMART tests, consider Linux's badblocks for a software-oriented read/write test. Be sure to read the manual very well, as it can be destructive.

My first instinct with these things is to load up UBCD and run Memtest86+ for at least a night. Usually something shows up with that or the standard disk tests. The current issue is very rare, though, and it can ruin long-term measurements - and no, we can't switch the hardware. I've run the graphical version of smartctl via Parted Magic for a single loop, as I can't see a way to do multiple, but nothing showed up. I'm guessing that with the CLI I can script a loop? I've run into issues where a memory error would show up only every fourth scan or so with Memtest, so doing multiple loops for the disks seems appropriate.
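Looping it from the CLI is easy enough - a rough sketch with a hypothetical device and a fixed wait, since the long self-test runs in the background on the drive itself (smartctl -c reports the real estimated duration):
code:
#!/bin/sh
DEV=/dev/sdX     # hypothetical device, substitute the real one
PASSES=5

for i in $(seq 1 $PASSES); do
    smartctl -t long "$DEV"     # kick off the drive's long offline self-test
    sleep $((5 * 3600))         # crude wait; size this from 'smartctl -c'
    smartctl -l selftest "$DEV" # self-test log: look for read failures and LBA_of_first_error
    smartctl -A "$DEV" | egrep 'Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable'
done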
IOwnCalculus posted:You're probably mixing up "power cycles" with "load / unload cycles". WD Greens, as well as most other green-type drives, park their read/write heads at almost every chance they get. The problem is that the heads will only survive that type of load long-term in a desktop environment, where the drive is either powered off most of the time or, if it's on, idle for hours and hours.

Do the Reds do this? My NAS/HTPC is running four Reds. Should I be putting it to sleep when not in use, or should I leave it running constantly? I use it maybe 4 or 5 hours a day.
Ninja Rope posted:I'm not sure if you're being sarcastic or not, but according to the SMART data you still have 19% (or 7.4%) of the expected load cycle lifetime left.

Being dead serious, I could've sworn people were seeing massive failures of Greens back in the day in always-on server environments, and this was being cited as a cause. Honestly, I had no idea that's what those other columns meant; I always just go straight for the value.

Gozinbulx posted:Do the Reds do this?

Reds definitely do not do this. My Reds are of course much newer, but the numbers from them tell a much different story:
code:
Death Himself posted:Lost another hard drive in my house, so I'm finally ready to set up a NAS so I stop risking losing something important. Just got this set up; it was quick and painless. The only thing is, I swear I set it up as RAID 1, but in their (really cool) pseudo-desktop software it says it's "Synology Hybrid RAID (SHR) (With data protection of 1 disk fault-tolerance)", so uhh... I assume that's what they are calling RAID 1? As long as my shit is safe I don't care, but it's odd.
It looks like a RAID5 Hybrid, though since you just have two disks it's effectively RAID1. If you add more disks it will be... different: http://forum.synology.com/wiki/inde..._Hybrid_RAID%3F
FISHMANPET posted:It looks like a RAID5 Hybrid, though since you just have two disks it's effectively RAID1. If you add more disks it will be... different: The unit can literally only hold two disks so... ok I guess. Sure why not.