Megaman posted: I'm using a USB drive to boot my FreeNAS setup: 7 disks, 1TB each, in RAIDZ3. It's so slow to boot compared to any system I have. Would it be worth putting the FreeNAS install on an SSD instead of on the USB? Would that speed the boot process up significantly?

Using an SSD would speed up the boot time, but the real question is: why does boot time really matter? Generally speaking, a FreeNAS setup is something that's put on a server somewhere and runs 24/7, so it's not something that should need to boot all that often. What's your use case where you're rebooting the server so often that slightly longer boot times are an issue?
deimos posted: One of my <1-year-old 4TB Reds is starting to show bad sectors while my 2TB HGSTs, which are damn near 4 years old, are trucking along... stupid bathtub curve.
Krailor posted: Using an SSD would speed up the boot time but the real question is why does boot time really matter? Generally speaking a FreeNAS setup is something that's put on a server somewhere and runs 24/7 so it's not something that should need to boot all that often. What's your use case where you are needing to boot the server so often that slightly longer boot times are an issue?

Ah, I don't leave mine on 24/7; maybe I should. I assume booting and stopping every so often puts severe wear and tear on the drives, and not so much on the software RAID setup itself?
Megaman posted: Ah, I don't leave mine on 24/7, maybe I should. I assume booting and stopping every so often puts severe wear and tear on the drives and not so much on the software RAID setup itself?

When it comes to starting and stopping, drives are rated with a S.M.A.R.T. attribute called Load/Unload Count, which keeps track of how many times the head has been parked (an event that can be triggered by many things, such as a system call from powerd, the firmware telling the disk to park its head to save power, or the system doing another power event such as a shutdown or reboot). Some drives are only rated for 20k load/unload cycles whereas others are rated as high as 600k. ZFS doesn't care one bit whether you shut it off or not.
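If you want to see where a given drive actually stands, you can read the load-cycle attribute straight out of the SMART data. Here's a minimal sketch that shells out to smartctl (from smartmontools) and reports how far into its rating a drive is; the device path and the 600k rating are assumptions you'd adjust for your own disks.

```python
import re
import subprocess

DEVICE = "/dev/ada0"       # assumption: first SATA disk on a FreeBSD/FreeNAS box
RATED_CYCLES = 600_000     # assumption: check your drive's datasheet (some are only rated 20k)

def load_cycle_count(device: str) -> int:
    """Return the Load_Cycle_Count raw value reported by `smartctl -A`."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        # Attribute rows look like:
        # 193 Load_Cycle_Count  0x0032  199  199  000  Old_age  Always  -  4213
        if "Load_Cycle_Count" in line or "Load/Unload" in line:
            return int(re.findall(r"\d+", line)[-1])
    raise RuntimeError(f"no load-cycle attribute found for {device}")

if __name__ == "__main__":
    cycles = load_cycle_count(DEVICE)
    print(f"{DEVICE}: {cycles} load cycles ({cycles / RATED_CYCLES:.1%} of rating)")
```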
Drives these days deal pretty well with frequent power-downs. If you choose a long timeout, it shouldn't spin up and down often at all. I have mine set to an hour; if the NAS idles that long, it's safe to say I'll be gone for a while. As far as power goes, I was surprised to see that with 2 WD Reds spinning, my Haswell Xeon-based one draws only 36W. It should be 25W if they spin down.
Combat Pretzel posted: As far as power goes, I was surprised to see that with 2 WD Red spinning, my Haswell Xeon based one draws only 36W. Should be 25W if they spin down.

As far as wear and tear goes, having drives set to unnecessarily short spin-down times (like 15 minutes) will cause far more wear than shutting the box off at night. Server-grade drives like the WD Red and Seagate NAS are also designed with continuous workloads in mind, so it's even less of an issue for them. Basically, don't worry about it. For your initial question, though: yes, running FreeNAS off an SSD is faster than off a USB stick, but it's still a slow boot; BSD just takes a while.
A cheap SSD is a good idea anyway if you want to save power by spinning your disks down. FreeNAS slaps its log files onto the first zpool you create, and the constant updates keep it from spinning down. I got myself an Intel 320 for 25€, created a separate zpool on it, and moved the system dataset to it.
I'm trying to decide how best to expand my existing NAS and maybe consolidate as well. I currently have a Synology DS212j running 2 WD Red 4TBs in JBOD. I realize that my data is getting to the point that I can't back it all up, so I'm interested in moving to SHR or ZFS (FreeNAS or similar). Additionally, I run a Plex media server through an HTPC in my living room that serves media to about 8 different friends and family members. Some transcode, some don't. I only occasionally have performance issues, as it's an i5-4750S and can transcode at least 2 1080p streams at a time.

My dilemma starts there. I want to expand, so I immediately thought about getting a DS415+, but I found it a bit short-sighted to think that if 8TB isn't enough space, 12TB (in SHR) probably wouldn't be enough in the long run either. I then decided to get the DS1513+, which has 5 bays, and I could add up to 2 of the RX513 expansion units for another 5 bays each down the line if needed. The problem with this is that I'll still be locked into running both the NAS and my HTPC at all times.

Then I started looking at FreeNAS and I got completely lost. I'm not necessarily scared of it, but I'm pretty fucking lost with it. I know I can make it much more powerful, powerful enough to run Plex and handle the transcoding, but I can't really get a handle on how much I'm looking at spending. I've looked through the FreeNAS forums, and from what I understand I need a Supermicro motherboard that supports ECC, at least 1GB of RAM for each terabyte of disk space, and I can't use an i5 or i7. Outside of that... lost. I can't find any recommended builds or anything. The Synology DS1513+ is $750 or so on Amazon right now. Can I really build a NAS that can handle transcoding for that price?
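For the space-planning side of that question, the usable-capacity arithmetic is easy to sketch: with equal-size drives, SHR-1 and RAIDZ1 lose one disk to parity, SHR-2 and RAIDZ2 lose two, and so on. Here's a rough Python comparison; the drive sizes and layouts are illustrative assumptions, and real-world numbers come in lower once ZFS metadata, TiB-vs-TB, and free-space slop are counted.

```python
def usable_tb(num_disks: int, disk_tb: float, parity_disks: int) -> float:
    """Rough usable capacity: raw space minus the parity drives (equal-size disks assumed)."""
    return (num_disks - parity_disks) * disk_tb

# Example layouts loosely based on the options discussed in the thread (all assumptions):
layouts = {
    "DS415+ (4x4TB, SHR-1)":   usable_tb(4, 4, 1),
    "DS1513+ (5x4TB, SHR-1)":  usable_tb(5, 4, 1),
    "FreeNAS (6x4TB, RAIDZ2)": usable_tb(6, 4, 2),
    "FreeNAS (7x1TB, RAIDZ3)": usable_tb(7, 1, 3),
}

for name, tb in layouts.items():
    print(f"{name}: ~{tb:.0f} TB usable")
```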
suddenlyissoon posted: I'm trying to decide how best to increase my existing NAS & maybe consolidate as well. I've currently got a Synology DS212j that I'm running 2 WD Red 4g's on in JBOD. I realize that my data is getting to the point that I can't back it all up so I'm interested in moving to SHR or ZFS (FreeNas or similar). Additionally, I run a Plex media server through a HTPC in my living room that serves media to about 8 different friends and family members. Some transcode, some don't. I only occasionally have performance issues as it's a i5 4750S and can transcode at least 2 1080p streams at a time.

Last question first: yes. You don't need 1GB/TB unless you are doing deduplication. You'll be fine with 8GB, but 16 might be better. ECC is a good idea, and I would recommend it, but it is not strictly necessary. If you go with ECC you are stuck with AMD CPUs or Xeons, as far as I know. So it's just a matter of choosing a processor capable of whatever media loads you are putting on it, really. FreeNAS/ZFS isn't as hard as it might seem at first, and we can help with specifics when the time comes.
suddenlyissoon posted: I'm trying to decide how best to increase my existing NAS & maybe consolidate as well. I've currently got a Synology DS212j that I'm running 2 WD Red 4g's on in JBOD. I realize that my data is getting to the point that I can't back it all up so I'm interested in moving to SHR or ZFS (FreeNas or similar). Additionally, I run a Plex media server through a HTPC in my living room that serves media to about 8 different friends and family members. Some transcode, some don't. I only occasionally have performance issues as it's a i5 4750S and can transcode at least 2 1080p streams at a time.

If you get a NAS that can transcode (even the DS415play can do this), you can get by with something like a Chromecast rather than an HTPC. And it may not be more cost-effective today, but you could also put 6TB or 8TB drives into a NAS, rather than 4TB drives, if you need additional space but want a smaller unit.
DNova posted: If you go with ECC you are stuck with AMD CPUs or Xeons, as far as I know.

You can use ECC with a lot of the Ivy Bridge/Haswell Pentiums and i3s, actually, and it should work on most non-server MBs: http://ark.intel.com/search/advance...arketSegment=DT
Civil posted: If you get a NAS that can transcode (even DS415play can do this), you can get by with something like a chromecast rather than a HTPC. And it may not be more cost effective today, but you could also put 6TB or 8TB drives into a NAS, rather than 4TB drives if you need additional space but want a smaller unit.

Around the house I'm pretty set on streaming. However, outside of the house I've got a lot of people using the web client and iOS apps. I'm on a gigabit network with symmetrical gigabit internet as well. I've had a few people say that they've had issues with HD streams, but I'm never at the house when it happens to see whether it's an internal bandwidth issue or the CPU chugging. Because of that, I wanted to make sure I got something capable of a decent amount of transcoding. If I didn't need that, I'd probably just get the ASRock C2750D4I and be done with it.
GokieKS posted: You can use ECC with a lot of the Ivy Bridge/Haswell Pentium and i3s, actually, and it should work on most non-server MBs: http://ark.intel.com/search/advance...arketSegment=DT

You're right on the first, but I think Intel still uses the chipset to segment it off; you'll probably need to pop that i3 into something with a C22x chipset. Has anyone actually gotten ECC working on an H97 or Z97 motherboard? Technically speaking there's no reason it shouldn't work, since the CPU interfaces with RAM directly...
Hmm, I thought they were supported on non-server chipsets, but maybe not. Thankfully you can get a C222 motherboard for not *too* much more than a decent consumer 1150 board.
GokieKS fucked around with this message at 21:12 on Oct 3, 2014
Non-server mainboards *could* support it, but that's a bold assumption to rely on. If they saved on the traces, your ECC DIMMs will still work, just like regular non-ECC ones.
IOwnCalculus posted: You're right on the first but I think Intel still uses the chipset to segment it off; you'll probably need to pop that i3 into something with a C22x chipset. Has anyone actually gotten ECC working on an H97 or Z97 motherboard?

I did see somewhere that someone got a Z97 Gigabyte board to work with ECC. It's why I originally got one for my NAS setup... and proceeded to burn out 2 of them in a row.

GokieKS posted: Thankfully you can get a C222 motherboard for not *too* much more than a decent consumer 1150 board.
It's certainly possible that some boards don't have them, but there are no real cost savings to be had by omitting the traces for the extra ECC byte lane.
BobHoward posted: It's certainly possible that some boards don't have them, but there are no real cost savings to be had by omitting the traces for the extra ECC byte lane.

There's nothing to be gained, either, when 99.9% of the people purchasing it don't even know what ECC is.

necrobobsledder posted: Technically speaking, the motherboard BIOS/EFI does need to coordinate ECC capabilities back to the CPU. ECC used to work on a number of consumer motherboards (particularly Asus), but after reading a couple dozen threads with different user experiences and people running a user-written C program to determine the ECC flags, I decided that I had had enough and decided to get a C22x motherboard for my i3-4130.

Right - my point was more that this is classic Intel market segmentation. There's no technical reason that an H/Z chipset shouldn't be capable of ECC when combined with an ECC-supporting i3 or Celeron, but Intel says you need a C2xx chipset. It's the same way they've blocked off VT-d on all K-series chips until the current generation.

The thing that really sucks is that while AMD doesn't pull those shenanigans at the manufacturer level (pretty much any chip and chipset combo should support IOMMU and ECC), you still end up with spotty support. Most motherboard manufacturers are lazy about enabling it in the BIOS and testing it properly. If you're willing to deal with AMD's higher power draw, you might as well just buy a used Nehalem Xeon and motherboard combo for better performance and better support.
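If you do end up on a consumer board and want to know whether the ECC lane is at least physically there, one low-effort check (from a Linux live environment, say) is to compare the DIMM widths that dmidecode reports: 72 bits total against 64 bits of data means the extra ECC byte lane is wired and populated. This only confirms the modules expose the lane; whether the firmware actually enables correction and reporting is a separate question (Linux's EDAC counters are the stronger test). A rough sketch, assuming dmidecode is installed and run as root:

```python
import subprocess

def dimm_widths():
    """Return (total width, data width) string pairs for each memory device dmidecode lists.

    72 bits total vs. 64 bits data indicates an ECC byte lane; equal widths mean non-ECC.
    Unpopulated slots show up as ("Unknown", "Unknown") and can be ignored.
    """
    out = subprocess.run(["dmidecode", "--type", "memory"],
                         capture_output=True, text=True).stdout
    pairs, total, data = [], None, None
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("Total Width:"):
            total = line.split(":", 1)[1].strip()
        elif line.startswith("Data Width:"):
            data = line.split(":", 1)[1].strip()
        if total and data:
            pairs.append((total, data))
            total = data = None
    return pairs

if __name__ == "__main__":
    for i, (total, data) in enumerate(dimm_widths()):
        verdict = "ECC lane present" if total != data else "no ECC"
        print(f"DIMM {i}: total {total}, data {data} -> {verdict}")
```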
IOwnCalculus posted: Right - my point was more that this is classic Intel market segmentation. There's no technical reason that a H/Z chipset shouldn't be capable of ECC when combined with an ECC-supporting i3 or Celeron, but Intel says you need a C2xx chipset. Same way they've blocked off VT-d on all K-series chips until the current generation.

As far as recommendations go, you'll find that you can pick up a used older Xeon and Supermicro board for only a few bucks more than an i3 or i5 and a generic 'board. That said, a newer i3 is going to be more power efficient than a Xeon, and quite frankly, for simple transcoding and NAS work an i3 is plenty. You only NEED a Xeon if you want to do fancy virtualization like VMware or something. I'd still recommend a server motherboard, though, as they'll generally work much better with ECC and usually come with an Intel-based NIC, which'll go a long way toward ensuring stable, fast connections.

As far as FreeNAS/NAS4Free goes, they're pretty simple as long as you keep things simple. I personally think NAS4Free is easier to get up and running, and if all you want to do is set up a generic file-server NAS without fancy write privileges and whatnot, configuration takes about 5 minutes. Setting up Plex takes another 10 if you follow the guide. It's really not bad.
DNova posted: You don't need 1GB/TB unless you are doing deduplication. You'll be fine with 8GB, but 16 might be better. ECC is a good idea, and I would recommend it, but it is not strictly necessary.

No, for raidz* (1), you do need 1GB per 1TB on top of whatever the system needs when doing regular ZFS. The recommendation when doing ZFS with deduplication is 5GB/TB. Also, ECC is mandatory.

EDIT (1): This does not apply to mirrors, which is why you don't always see it mentioned when people build giant zpools with 50 sets of mirrored disks.

EDIT 2: ↓ You're right. Still doesn't mean that the 1GB/TB rule doesn't matter, though.

D. Ebdrup fucked around with this message at 14:34 on Oct 4, 2014
Ugh, I hate my WD RE4 drives. One has an elevated multi-zone error rate; the other seems fine in SMART but seems to always do something like park its heads or some bullshit shortly after activity stops. Annoying as fuck of a noise.

D. Ebdrup posted: No, for raidz* (1), you do need 1GB/1TB above whatever the system needs when doing regular zfs.
I have 8GB RAM in a system running a 24TB raw zpool (2 RAIDZ1 sets of 4 disks each) plus a virtualized Windows desktop for work (2GB RAM dedicated). It's probably not ideal for performance, but it certainly meets the needs of a home file server. 1GB/TB is the recommendation for enterprise workloads. Adding an SSD as a cache disk will help offset the lower RAM available, but is obviously not as cost-effective.
D. Ebdrup posted: No, for raidz*(1), you do need 1GB/1TB above whatever the system needs when doing regular zfs. The recommendation when doing zfs and deduplication is 5GB/TB. Also, ECC is mandatory.

Perhaps you are right for super-best-ever-ultra-try-hard performance. It's demonstrably not true for your average home-NAS user who just wants FreeNAS or similar to sit there and serve the occasional 1080p stream. 8GB RAM on a 16TB system, for example, will still let you get ~100MB/s transfers all day long. Maybe the extra 8GB to bring it back to 1:1 would make resilvering go faster or something, but that's the type of thing enterprise users worry about, not home users. ECC is still recommended, though, and it really isn't much/any more expensive than non-ECC these days if you bother to look around for it.
The 1GB/1TB recommendation stems from the idea that x% of the data needs to be hot. It may make more sense with RAIDZ than mirrors, because RAIDZ arrays are slower than mirrors. Due to full stripe writes, retrieving a block requires ZFS to touch all disks in a RAIDZ (minus parity). But it isn't necessary, because in a home NAS setting, a couple of gigabytes will do. ZFS' prefetching will be able to compensate for the RAIDZ latency easily, since there won't be several clients hammering the pool, anyway.
D. Ebdrup posted: No, for raidz*(1), you do need 1GB/1TB above whatever the system needs when doing regular zfs. The recommendation when doing zfs and deduplication is 5GB/TB. Also, ECC is mandatory.

Wrong on all counts.
Combat Pretzel posted: the idea that x% of the data needs to be hot.

This works great for load-balanced commercial DB situations. For consumers, you probably have 300MB of cat pictures, half a gig of game data, and a gig or so of transient data like video that you access on a week-to-week basis. Some rule like 4GB per residential user is probably safe. Besides Netflix and YouTube, I don't think the weekly bandwidth budget to my home "workstation" goes above 8GB in an average week.
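To put rough numbers on that argument, here's a toy comparison of the two sizing rules being debated: the 1GB-of-RAM-per-TB rule versus sizing by working set. The 4GB-per-user figure is the rule of thumb from the post above; the base OS allowance and client count are illustrative assumptions, not ZFS tunables.

```python
def enterprise_rule_gb(pool_tb: float) -> float:
    """The oft-quoted 1GB of RAM per 1TB of pool."""
    return pool_tb

def working_set_rule_gb(clients: int, per_client_gb: float = 4.0,
                        base_os_gb: float = 2.0) -> float:
    """Size by hot data instead: a few GB of genuinely active data per user.

    per_client_gb follows the 4GB/residential-user rule of thumb above;
    base_os_gb is an assumed allowance for the OS and ZFS bookkeeping.
    """
    return base_os_gb + clients * per_client_gb

pool_tb, clients = 16, 2
print(f"1GB/TB rule:        {enterprise_rule_gb(pool_tb):.0f} GB")
print(f"working-set rule:   {working_set_rule_gb(clients):.0f} GB")
# Prints 16 GB vs 10 GB. The working-set estimate stays roughly flat as the
# pool grows, which is why 8-16GB boxes serve much larger home pools just fine.
```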
Also, not all enterprise workloads are created equal. There are massive differences between a typical NAS for shoving around PowerPoints of cat pictures and booting dozens of diskless servers off of iSCSI LUNs (and for the latter I'd generally try to get FCoE first anyway), so tuning (and thereby building out) ZFS will be different. For home use, focus on the drives first in your ZFS setup; RAM only matters once you're repeatedly evicting things from the ARC that you'll need back again. Most home storage needs are sequential. You may need to think about the ARC, ZIL, and L2ARC if you're trying to set up iSCSI LUNs for several diskless machines on your network, but for the love of God you should be focusing on your network first, long before your ZFS setup.
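One way to tell whether you're actually in that "evicting hot data" situation is to look at the ARC hit ratio the kernel already tracks. A minimal sketch for a FreeBSD/FreeNAS box, assuming the stock ZFS sysctl names (kstat.zfs.misc.arcstats.*); the 80% threshold is an arbitrary illustration, not a ZFS rule. A consistently high hit ratio under your real workload means more RAM wouldn't buy you much.

```python
import subprocess

def arc_counter(name: str) -> int:
    """Read one kstat.zfs.misc.arcstats counter via sysctl (FreeBSD/FreeNAS)."""
    out = subprocess.run(["sysctl", "-n", f"kstat.zfs.misc.arcstats.{name}"],
                         capture_output=True, text=True).stdout
    return int(out.strip())

if __name__ == "__main__":
    hits, misses = arc_counter("hits"), arc_counter("misses")
    total = hits + misses
    ratio = hits / total if total else 0.0
    print(f"ARC hits: {hits}, misses: {misses}, hit ratio: {ratio:.1%}")
    if ratio < 0.80:  # assumed threshold for illustration only
        print("Low hit ratio under real load: more RAM (or an L2ARC) may actually help.")
```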
DNova posted: Wrong on all counts.

You're right, 5GB/TB for dedup is probably low.

I'll nth that: for a home scenario, 8GB RAM will be just fine for a pool much larger than 8TB. It's exactly what I have in my NAS4Free box, and I'm at 19.5TB raw / 13TB usable. The reality is that in a home situation the vast majority of that data is cold.
DrDork posted: I have a similar setup to you, and am able to easily get 80-100MB/s. Sounds like a settings issue to me. Things that I found that helped out speed for me were using SMB2 for CIFS with a fuck-off sized buffer (8MB worked well for me, YMMV), and then disabling "Enable tuning of some kernel variables" under System. Not sure how much RAM you have, but you can try seeing if enabling/disabling prefetch makes much of a difference.

Turns out this was a layer 1 problem: my W7 laptop was connecting over wifi because I didn't plug the ethernet cable into the switch after moving to the docking station. But I made the 8MB buffer change as well, and things seem to be running swell. Thanks!
DrDork posted: As far as recommendations, you'll find that you can pick up a used older Xeon and Supermicro board for only a few bucks more than an i3 or i5 and a generic 'board. That said, a newer i3 is going to be more power efficient than a Xeon, and quite frankly for simple transcoding and NAS work, an i3 is plenty. You only NEED a Xeon if you want to do fancy virtualization like VMWare or something. I'd still recommend a server motherboard, though, as they'll generally work much better with ECC and usually come with some Intel-based NIC, which'll go a long way to ensuring stable, fast connections.

A new i3 will also be much, much better at virtualization than a Nehalem Xeon. And an Intel NIC is $20 if you absolutely can't live with Broadcom or Realtek (Broadcom in particular isn't that bad), which will pay for itself with the power usage difference between a Nehalem Xeon and a Haswell i3.
For the love of god, ditch the Realtek if you have one. I tried to use the 8119SC on my mainboard; it managed to max out at only ~480Mbit (iperf, four connections) and frequently decks my system with interrupt storms when maxed out, causing the desktop refresh rate to drop and the mouse cursor to become glitchy.
Combat Pretzel fucked around with this message at 14:37 on Oct 5, 2014 |
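If you want to reproduce that kind of measurement, iperf (or its successor iperf3) with a few parallel streams is the usual tool; the 480Mbit figure above came from iperf with four connections. Here's a small sketch driving iperf3 from Python. The server address is a placeholder (you'd need `iperf3 -s` running on another box), and the JSON field path shown is what recent iperf3 versions emit for a TCP test, so treat it as an assumption to verify against your version.

```python
import json
import subprocess

SERVER = "192.168.1.10"  # placeholder: another machine on the LAN running `iperf3 -s`

def lan_throughput_mbit(server: str, streams: int = 4, seconds: int = 10) -> float:
    """Run iperf3 with parallel streams and return received throughput in Mbit/s."""
    out = subprocess.run(
        ["iperf3", "-c", server, "-P", str(streams), "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True).stdout
    result = json.loads(out)
    # assumed JSON layout for a TCP client run
    return result["end"]["sum_received"]["bits_per_second"] / 1e6

if __name__ == "__main__":
    print(f"{lan_throughput_mbit(SERVER):.0f} Mbit/s")
    # a healthy gigabit link with a decent NIC should land in the 900s
```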
Lower end Realteks like those don't even connect via PCIe-2 if I remember correctly. The 82** ones shouldn't have that kind of issue.
evol262 posted: A new i3 will also be much, much better at virtualization than a Nehalem Xeon. And an intel NIC is $20 if you absolutely can't live with broadcom or realtek (broadcom in particular isn't that bad), which will pay for itself with the power usage difference between a Nehalem Xeon and a Haswell i3.

None of the i3's have VT-d that I'm aware of, which makes life a lot harder if you wanted to really play with serious virtualization. You don't need it to run a jail in FreeNAS, but if you want to do VMWare to run multiple servers on the same hardware, it's Xeon or nothing (well, or some very odd i-series chips that cost more than the equivalent Xeon would). An i3 is wonderful for its efficiency, and more than up to the task of normal home NAS use, but the Xeon still has quite the edge in server-related performance, like virtualization. This is true even for the older Xeon E3 v1 and v2s compared to Haswell i3's.
DrDork posted: None of the i3's have VT-d that I'm aware of, which makes life a lot harder if you wanted to really play with serious virtualization. You don't need it to run a jail in FreeNAS, but if you want to do VMWare to run multiple servers on the same hardware, it's Xeon or nothing (well, or some very odd i-series chips that cost more than the equivalent Xeon would). An i3 is wonderful for its efficiency, and more than up to the task of normal home NAS use, but the Xeon still has quite the edge in server-related performance, like virtualization. This is true even for the older Xeon E3 v1 and v2s compared to Haswell i3's.

Nehalem vs Haswell is no contest. Haswell is ~33% faster clock-for-clock. Xeons are segmented and binned on a number of features, but VT-x isn't one of them. And VT-x keeps improving (less so from Sandy Bridge on, but still some).

It's "Xeon or nothing" for ESXi because of the crap HCL and the likelihood of Xeon mobos meeting it. This is not true for other products. I work on RHEV and OpenStack. A significant number of deployments and test labs are on Core i* hardware. My lab runs on i5s with 23 guests split between two nodes. Memory density is the killer, not CPU features.

VT-d is nice if you can get it, but not worth fretting over unless you're passing through an HBA. Xeons are nice if you can get them cheaper than i3/i5s. You can't. The difference in price is not worth the difference in capabilities or performance unless you require multiple sockets, passthrough, or you're the kind of person who buys i7s just because.
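As an aside, if you're ever unsure whether VT-d/the IOMMU actually got enabled on a Linux virtualization host (BIOS options and spec sheets both lie), a quick check is whether the kernel created any IOMMU groups. A minimal sketch; the sysfs path is the standard Linux location, this says nothing about ESXi or a FreeBSD box, and the intel_iommu=on hint applies to Intel boards specifically.

```python
from pathlib import Path

def iommu_enabled() -> bool:
    """True if the Linux kernel populated IOMMU groups (VT-d / AMD-Vi is active)."""
    groups = Path("/sys/kernel/iommu_groups")
    return groups.is_dir() and any(groups.iterdir())

if __name__ == "__main__":
    if iommu_enabled():
        print("IOMMU groups present: VT-d/AMD-Vi is enabled, device passthrough is possible.")
    else:
        print("No IOMMU groups: check the BIOS setting and the intel_iommu=on boot flag.")
```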
evol262 posted: Unless you're doing passthrough or pegging your CPU with I/O IRQs, vt-d is wasted silicon.

You're right that Haswells are faster clock-for-clock, but with i3's having (depending on model) half the cores and half or less the L3 cache, heavy workloads are obviously still going to be handled better by the Xeon. Which is irrelevant to a home user, because basically nothing you'll be doing will be considered a "heavy workload." I'm just not sure how you figure that a Haswell is "much better at virtualization" when it can't do passthrough and will get bogged down faster running multiple guests due to the fewer cores and smaller caches, even if single-thread performance is a bit faster.
Haswell is worthwhile for a file server simply because it uses (almost) half the power of Ivy Bridge. For a device that's powered on 24/7, that adds up after a year or two: at least $15/year. The 35W (peak) Haswell usually idles at around 17W, compared to the 95W (peak) Core 2 Duo it replaced, which idled around 65W. It's been a while since I've done the math, but I save at least $30/yr on electricity with the new machine (ignoring new hardware costs). Unless you get a funny budget Haswell model, they all come with VT-d these days, which is great for VMs (my file server is a VM on the Haswell host) as they all have fast access to the disks.
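The electricity math behind those figures is easy to reproduce. A quick sketch, with the idle wattages taken from the post above and the $/kWh rate as an assumption you'd swap for your local tariff:

```python
HOURS_PER_YEAR = 24 * 365
RATE_PER_KWH = 0.12  # assumption: roughly a US-average rate; adjust for your utility

def yearly_cost(idle_watts: float, rate: float = RATE_PER_KWH) -> float:
    """Approximate yearly electricity cost of a box idling 24/7 at the given wattage."""
    return idle_watts / 1000 * HOURS_PER_YEAR * rate

old, new = yearly_cost(65), yearly_cost(17)  # Core 2 Duo vs Haswell idle, per the post
print(f"Core 2 Duo: ${old:.0f}/yr, Haswell: ${new:.0f}/yr, savings: ${old - new:.0f}/yr")
# Roughly $68 vs $18, i.e. about $50/yr at $0.12/kWh; a cheaper rate or less
# time at idle brings it down toward the $30/yr figure quoted above.
```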
While I will absolutely agree with you that Haswell i3's make magnificent home file servers due to their tiny power profile and ECC support (more on that in a sec), I'm not sure where you're getting that they all come with VT-d. Even Intel says that only 5 of the i3-4xxx chips have VT-d, and none of those support ECC. The VT-d ones aren't available on NewEgg, either (e.g., they're the odd-ball ones). i5's fare a bit better, with a bunch more supporting VT-d (46 total), but only 6 of those also support ECC, and none of those are on NewEgg either. The few places that do have them price them north of $300, making them actually a good bit more expensive than the comparable Xeons you could otherwise easily source.

Basically what I'm saying is that Intel isn't stupid, and to get both the VT-d and ECC support you'd want for a serious server, you're gonna have a hell of a time finding it in something that isn't a Xeon. Which is obnoxious as hell for someone who doesn't need 8 cores and 16MB of L3 cache, but would really like a 35W ESXi box without spending $200+ on a CPU that you'll only use a fraction of the power of.

That said, for anyone who just wants a normal file server, get an i3 with some ECC RAM and be done with it.

DrDork fucked around with this message at 07:15 on Oct 6, 2014
The price spread between the i3 and i5 is about $60; it definitely depends on what you're going to use it for and how much ECC matters to you. If you know you'll never want more than a file server, then the i3 absolutely makes sense. However, it's not much more effort to put a hypervisor on the thing (at no cost) and run prod FreeNAS + test FreeNAS + whatever hobby system, and now you can take advantage of all that raw computing power down the road, rather than being locked into a single-purpose PC with a 1990s-era mindset. The roughly 50% performance bump, VT-d, and future-proofness make sense (to me) if you're a hobbyist, but if you just need a bare-bones file server and don't mind being locked into a single-purpose machine, saving $60 and going with the i3 is probably a better option.

I recently put a Bitnami Minecraft Bitbucket "virtual appliance" VM server on my file server, for grins. And a copy of the new Windows 10 technical preview to play around with, and there's a copy of boot2docker running too, for when I finally have time to try that out. It burns up an extra 1.5GB of RAM, but it's an easy example of the flexibility that going i5 gives you without harming your core file-serving functionality.

Hadlock fucked around with this message at 07:42 on Oct 6, 2014
DrDork posted: Just not sure how you figure that a Haswell is "much better at virtualization" when it can't do passthrough and will get bogged down faster running multiple guests due to the lower cores and cache sizes, even if single-thread performance is a bit faster.

Multiple guests usually get bogged down on I/O, which faster disks fix. Or memory pressure. Or scheduling interrupts, if you like to give all your guests too many vCPUs (but that's your mistake). An i3 is perfectly capable of running 4-6 guests, even relatively busy ones. If you're going to be running guests which hammer the CPU, obviously a Xeon is better. Most people aren't.

And not all VT-x support is the same. Iterative generations (Intel's "tick") are far more efficient at nested virt, EPT mapping, etc. From Ivy Bridge to Haswell it's hardly noticeable; coming from Nehalem, it is.
Hadlock posted: If you know you'll never want more than a file server, then the i3 absolutely makes sense, however it's not much more effort to put a hypervisor on the thing (at no cost) and run prod freenas + test freenas + whatever hobby system, and now you can take advantage of all that raw computing power down the road,

Wouldn't you need separate SATA controller cards for this? One for each FreeNAS VM?