Bonobos posted: First question - for the VM server, I don't know if I should roll with Solaris, FreeBSD, FreeNAS or OpenIndiana. I have used Windows and Mac PCs, but my Linux experience has never been positive. Can anyone recommend a particular distro?

Anyhow, installing FreeNAS itself is easy. I followed this guide to install it to a USB stick. Takes about 10 minutes and a few clicks, that's it. Stick the USB stick into the HP's internal USB port, connect it to a router, and power it on. Either attach a spare monitor, or go into your router's config page to see what IP address the HP ended up with, and slap that into your browser of choice. Now you're into the FreeNAS config pages, and from there it's quite easy. Set up the drives however you like, set up some shares (don't forget to enable Guest access if that's what you want, and enable CIFS), and you're basically done. If you know what you're doing, the whole process probably takes an hour. If not (like me) then a few hours. Required Linux experience is literally limited to "Can you type in an IP address?", as the whole thing is run from a WebGUI.
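(If you'd rather write the stick from a unix box than use an imaging tool, it's a one-liner. The image filename and target device below are placeholders only - check dmesg for the real device node before you dd, or you'll overwrite the wrong disk:)

```
# Write the FreeNAS image straight onto the USB stick.
# Filename and target device are examples -- verify both first!
dd if=FreeNAS-8.x-RELEASE-x64.img of=/dev/da0 bs=64k
```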
Another option for HDDs is the Samsung F4EG 2TB drive. I have 4 of those running in a 4k-sector raidz1, and I'm getting line speed over SMB on both read and write (~110MB/s). Not that I expect this to affect you, but there's an old firmware version that was on drives sold before January 2011 which can experience data loss if you enable SMART tests on the drives. The silly thing is that the fixed firmware doesn't reflect a version change, so you can't easily tell which drives you have. So it's not a bad idea to flash them as soon as you get them - info is here - and luckily on the MicroServer you don't need to move the drives around; you can set the ports in the BIOS with the same result.

For raidz1, there's a golden rule about memory: 1GB for every 1TB of disk space you have in the array (including parity space).

Also, while guest sharing is fine, if you're not the only one on the network or plan on bringing the server anywhere, it might be worth setting up two shares: one with local user authentication and full read-write-execute permissions, and one with anonymous read-only access.

What DrDork said about Linux experience is true (although as a long-time BSD user, I have to point out that Linux is not UNIX), but you'll be better off if you go read the FreeBSD Handbook. At least familiarize yourself with man through the manpages that are available on freebsd.org.

Additionally, while I remember it, buy another NIC. The bge driver that handles the HP ProLiant NC7760/NC7781 embedded NIC in FreeBSD has problems which will cause drops and bad performance (around the speeds DrDork mentioned, so he might want to pay attention too). Anyhow, go to the manpage for the driver and check the HARDWARE section for any NIC you can easily find and buy, and use that. Personally I went with the HP NC112T (503746-B21), but anything based on any of the chipsets mentioned in that manpage will work fully (just ensure it has 9k jumbo frame support, as you'll want that if the rest of your network supports it).

Just noticed that you asked for memory; here are some that work.

EDIT: Fixed links, added more info. Damn it, I seem to go over some of these things every time someone buys a microserver.

D. Ebdrup fucked around with this message at 10:04 on Feb 5, 2012
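For reference, here's roughly the gnop(8) dance FreeBSD folks use to force a 4k-aligned (ashift=12) pool on drives like these. A sketch only: device names are examples, so substitute whatever your disks show up as.

```
# Build the pool on temporary 4k gnop providers so ZFS picks ashift=12.
gnop create -S 4096 ada1 ada2 ada3 ada4
zpool create tank raidz1 ada1.nop ada2.nop ada3.nop ada4.nop
# Re-import on the raw devices; the ashift sticks for the pool's life.
zpool export tank
gnop destroy ada1.nop ada2.nop ada3.nop ada4.nop
zpool import tank
zdb | grep ashift   # should report 12
```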
It's not a free option past 3 disks, but unRAID does what you're looking for. It's similar to FlexRAID and BeyondRAID above.
D. Ebdrup posted: Additionally, while I remember it, buy another NIC. The bge driver that handles the HP ProLiant NC7760/NC7781 embedded NIC in FreeBSD has problems which will cause drops and bad performance (around the speeds DrDork mentioned, so he might want to pay attention too). Anyhow, go to the manpage for the driver and check the HARDWARE section for any NIC you can easily find and buy, and use that. Personally I went with the HP NC112T (503746-B21), but anything based on any of the chipsets mentioned in that manpage will work fully (just ensure it has 9k jumbo frame support, as you'll want that if the rest of your network supports it).

This is true. Personally, 50MB/s was fast enough that I didn't feel like spending yet another $50 on a NIC. If you do decide to stay with the HP embedded NIC, do some trial runs moving files around in a manner that simulates how you'll actually use it, and see what happens. Stock, mine would give me ~100MB/s for the first few seconds, and then drop hard to ~30 with a lot of stuttering--which would be fine for transferring small files, but I move big ones around a lot. After some tuning I got the drop to settle at ~50, with no stuttering. There are a lot of guides for how to tune ZFS, and other than it being kinda a trial-and-error process that'll eat up an afternoon, it's not hard, or even strictly necessary. Do remember that (contrary to a lot of the guides) there is no reason to ever use vi unless you actually want to--use nano instead (built-in with FreeNAS) and save yourself the headache.
DrDork posted: This is true. Personally, 50MB/s was fast enough that I didn't feel like spending yet another $50 on a NIC. If you do decide to stay with the HP embedded NIC, do some trial runs moving files around in a manner that simulates how you'll actually use it, and see what happens. Stock, mine would give me ~100MB/s for the first few seconds, and then drop hard to ~30 with a lot of stuttering--which would be fine for transferring small files, but I move big ones around a lot. After some tuning I got the drop to settle at ~50, with no stuttering. There are a lot of guides for how to tune ZFS, and other than it being kinda a trial-and-error process that'll eat up an afternoon, it's not hard, or even strictly necessary. Do remember that (contrary to a lot of the guides) there is no reason to ever use vi unless you actually want to--use nano instead (built-in with FreeNAS) and save yourself the headache.

ZFS Evil Tuning Guide [solarisinternals.com/wiki] posted: Tuning is often evil and should rarely be done.

Yes, you've been able to tune the zpool so that it runs stable at less than half the performance you can easily expect with the hardware you have - but is 50bux really that much? Also, do note that while the NIC I recommended works, it's by FAR not the only one that does. You could easily check the manpages/HCL and locate a gigabit PCIe x1 NIC for perhaps as low as 10bux, certainly not over 20bux.
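If you do go the tuning route, the knobs live in /boot/loader.conf. A minimal sketch; the values are illustrative guesses for a small (~8GB) box, not recommendations - change one thing, reboot, re-run your transfer test:

```
# nano /boot/loader.conf   (nano ships with FreeNAS, no vi needed)
vm.kmem_size="6G"              # example: give the kernel room for ARC
vfs.zfs.arc_max="4G"           # example: cap ARC below total RAM
vfs.zfs.prefetch_disable="1"   # sometimes cures stuttery big transfers
```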
While it's true that you shouldn't blindly accept someone else's tuning numbers and think that they will universally apply, the defaults are just that: defaults. They are there to provide solid performance and reliability across the widest spectrum of hardware setups possible. This frequently means that they will not give the best performance for any specific hardware/software setup. In my case a little bit of tuning took me from stuttering at 30MB/s to not stuttering at 50MB/s. Your mileage may vary, but it's at least something worth looking into if you've got a free afternoon, and it's easy enough to roll back to the defaults if you mess something up. And hey, if initial testing on your particular setup yields performance levels you're happy with, by all means, leave everything as default and skip the hassle. And no, $50 isn't that much but it's $50 I don't need to spend as the current performance is perfectly sufficient for my needs. The cheapest Intel gigabit PCIe NIC I could find was $30, and that's assuming the 82574L chipset is supported (which I imagine it is, as the 82574 is supported). I'd rather spend the cash on something else.
Thank you both DrDork & D. Ebdrup, your advice is precisely what I was looking for. It sounds like FreeNAS is the way to go; I will try it and see how it goes. Sounds like I need a new NIC as well... I see various Intel gigabit NICs on Newegg - will they all work the same? Basically I want the fastest throughput I can get on my gigabit network while obviously spending the least amount of money. Any specific suggestions there?
For what it's worth, I've always had the best luck finding good prices on Intel gigabit NICs on eBay, even for new or indistinguishable-from-new items. I bought two 82574L PCIe x1 NICs for my ESXi all-in-one box and the one from Amazon was about $30 (needed it overnight) and the one from eBay (could wait a bit) was $15. Both came in the exact same condition and packaging. Go figure. Also, annoyingly, dual-port Intel PCIe NICs still command a bit of a premium, unlike say the dual-port PCI-X NICs.
Bonobos posted: Sounds like I need a new NIC as well... I see various Intel gigabit NICs on Newegg - will they all work the same?

tl;dr: You want one with one of the following Intel chipsets: 82540, 82541ER, 82541PI, 82542, 82543, 82544, 82545, 82546, 82546EB, 82546GB, 82547, 82571, 82572, 82573, or 82574 (assumed 82574L as well, but I'm not 100% on that).
There's also the Ethernet section of the hardware notes for 8.2-RELEASE, which is what FreeNAS 8.0.x is based on. 8.1/8.2 will be based on FreeBSD 9.0-RELEASE, but anything that's supported in 8.2-RELEASE is also supported in 9.0-RELEASE. If you find a specific chipset that you think might work but it's not on the list, simply do a Google search on "<chipset> freebsd" and you'll very likely find that someone else has already wondered the same thing. Unfortunately Google's BSD search no longer exists, but just about anything you can wonder about has probably already been asked in one form or another.
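Once a card is in the box, two commands confirm what FreeBSD actually sees. A quick sketch, assuming the card attaches via the Intel em(4) driver:

```
# Identify the chip on every network device the kernel found:
pciconf -lv | grep -B4 -i network
# If it attached as em0, it's using the Intel driver from the list above:
ifconfig em0
```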
DrDork posted: (assumed 82574L as well, but I'm not 100% on that)

My google-fu found this re: the 82574L chipset. Is this a valid ... driver thingy?

-e- I'm in the process of assembling my new NAS. I am going to have 6 2TB WD Green drives running on it, but I am not sure if I want raidz1 or raidz2.

Odette fucked around with this message at 09:39 on Feb 6, 2012
Our fileserver is on the fritz and I'm pretty sure its data drive is getting ready to die and take everything with it. Seeing as it's just a Linux box with a dying 1.5TB drive that does nothing but store files, I'm wanting to replace the entire box with a NAS. I'm looking at the Synology DS1511+. My question is, what is the best way to set it up so I can get the most storage while still being able to recover from drive failures, without having to worry too much about the "write hole" or whatever it's called when you get into larger disks? Also, what drives are recommended right now? I'd like to do 2TB disks if possible.
Maniaman posted: Our fileserver is on the fritz and I'm pretty sure its data drive is getting ready to die and take everything with it.

You can avoid the write hole by not using RAID5 (use RAID 10 or 6), or use a UPS (you won't avoid it completely, but it significantly reduces the chances of it happening). I recommend Hitachi 5k3000 2TB HDs. They've been running solid for me for 6 months now. They don't use 4k sectors (if you care about that kind of thing). But I also hear good things about the Samsung F4 drives. To be honest, I haven't really paid attention since I built my box and then forgot about it. Hope you get a backup of that server soon.
Since the Norco 4220/M1015/Solaris combo seems to be pretty popular here, I'm wondering if anybody's got a clue how to clear the red error light on the bays? MegaCli's not getting me very far (it claims to clear it successfully, but nothing happens - the locate option doesn't work either), and MSM doesn't show anything wrong at all. Is the red light some I2C thing handled by the backplane instead of the controller?
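For anyone else chasing this, these are the usual locate-LED incantations, with example enclosure:slot and adapter numbers (pull the real ones from -PDList). If these silently no-op, that's consistent with the backplane LEDs not being wired to the controller's sideband signalling:

```
# List enclosure/slot IDs for every physical drive:
MegaCli -PDList -aALL | grep -E "Enclosure Device ID|Slot Number"
# Blink / unblink the locate LED on enclosure 252, slot 4 (example):
MegaCli -PdLocate -start -PhysDrv [252:4] -aALL
MegaCli -PdLocate -stop -PhysDrv [252:4] -aALL
```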
LmaoTheKid posted: You can avoid the write hole by not using RAID5 (use RAID 10 or 6).

How does RAID 6 close the write hole? Unless I'm misinformed, it's usually just RAID 5 with a second set of parity data. An incomplete write to RAID 6 could still leave you with inaccurate parity data; you'd just have twice as much of it.
Zorak of Michigan posted: How does RAID 6 close the write hole? Unless I'm misinformed, it's usually just RAID 5 with a second set of parity data. An incomplete write to RAID 6 could still leave you with inaccurate parity data; you'd just have twice as much of it.

You're probably right. I was just under the impression that RAID5 was the only one with a write-hole possibility.
It doesn't; the chances just go down a bit. A write hole in RAID5 is when one of the member disks doesn't match the others and you therefore can't tell which disk is wrong. With RAID6, the equivalent is two disks disagreeing with the rest simultaneously. Your server is full of expensive disks; you should be able to budget in a UPS, be it a used APC with refurb batteries or a new CyberPower from Amazon via Prime.
ZFS Supremacy
FISHMANPET posted: ZFS Supremacy

Agreed. Haven't touched my ZFS box in months, except when I hooked up the UPS my boss gave me.
Don't worry, it will be on a UPS. Perhaps "write hole" isn't what I was looking for. I was thinking of the problem where, if a large disk fails out of the array, there's an extremely high chance of another disk failing during the rebuild the larger the disks get.
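For what it's worth, the scary rebuild math is just a back-of-envelope on the drives' quoted unrecoverable-read-error rate, commonly 1 bit per 10^14 bits read for consumer disks. Rebuilding one failed 2TB drive in a 5-disk RAID5 means reading the four survivors end to end: 4 x 2TB x 8 = 6.4x10^13 bits, so you'd expect about 0.64 errors, which works out to roughly a 1 - e^-0.64 ≈ 47% chance of hitting at least one mid-rebuild. How realistic that is depends on how pessimistic the spec-sheet number really is, but it's why dual parity keeps getting recommended for big arrays.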
LmaoTheKid posted: Agreed. Haven't touched my ZFS box in months, except when I hooked up the UPS my boss gave me.

code:
Maniaman posted: Don't worry, it will be on a UPS.
I'd love to go with 2TB disks if possible, but there's no way I can afford $250/drive x 5 right now. Samsung EcoGreen F4 drives are on Newegg right now for $160 + shipping. Is there anything wrong with using green drives? Is that drive any good for a NAS? I seem to remember reading something about RAID controllers freaking out over idle times in green drives, or was that just a WD thing?
I'm going to be buying 5x WD20EARX next week or so; how can I differentiate between batches? I'd like to buy from different batches.
Odette posted: I'm going to be buying 5x WD20EARX next week or so; how can I differentiate between batches? I'd like to buy from different batches.
FISHMANPET posted: ZFS Supremacy

Maniaman posted: Perhaps "write hole" isn't what I was looking for. I was thinking of the problem where, if a large disk fails out of the array, there's an extremely high chance of another disk failing during the rebuild the larger the disks get.

I don't bother putting my ZFS server on a UPS because 99.99% of the time I just read from it, and I've never had a problem with disk corruption. The only problems I've ever had were a disk dying and my boot partition getting corrupted.
Question on ZFS: assuming I have 2x WD Green drives (EARS, 2TB) and 2x Samsung F4s, with an extra Hitachi 5k3000 drive, all 2TB, would I be better off going with 2 mirrored vdevs (RAID1), or can I risk running all 5 different drives in RAIDZ (i.e., RAID5)? I've never run a RAID array before, and I understand it's most efficient to run the same size drives, but I have no clue how this fares for different types/speeds of drives. I just want some type of redundancy with automatic checksumming. Ideally I should just get 5 of the same type of drive, but with prices what they are, I cannot afford to blow almost $1000 on HDDs.
Is there any RAID implementation that works similarly to Drobo's BeyondRAID? I already have 7TB of mixed disks in a Drobo and was considering selling it for $200 and upgrading to a Drobo FS for $600 so I could access it over the network, but maybe there's something else that would work better? Blazing-fast speed isn't important to me; I mostly do reads, as it's for media storage and backup.

ashgromnies fucked around with this message at 05:10 on Feb 7, 2012
Maniaman posted: I'd love to go with 2TB disks if possible, but there's no way I can afford $250/drive x 5 right now. Samsung EcoGreen F4 drives are on Newegg right now for $160 + shipping. Is there anything wrong with using green drives? Is that drive any good for a NAS?

I haven't heard anything about the EcoGreen F4 specifically, but the two WD Greens I have in my HP FreeNAS box right now are behaving reasonably well (outside the 4k thing, but that's another issue). If you're more concerned about price, you may want to consider cracking open external drives. They'll have whatever the manufacturer's 5400RPM or "Green" drive is in there, and (for whatever reason) you can usually find them $20+ cheaper than their normal internal-drive equivalents. Obviously you take a chance with warranty and whatnot, but $20 saved per drive x 5 drives pretty much buys you a spare anyhow.

Bonobos posted: Question on ZFS: assuming I have 2x WD Green drives (EARS, 2TB) and 2x Samsung F4s, with an extra Hitachi 5k3000 drive, all 2TB, would I be better off going with 2 mirrored vdevs (RAID1), or can I risk running all 5 different drives in RAIDZ (i.e., RAID5)?

ashgromnies posted: Is there any RAID implementation that works similarly to Drobo's BeyondRAID?
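For reference, the two pool layouts Bonobos is weighing look like this in zpool terms. Device names are examples; ZFS is fine with mixed drive models, a vdev just gets sized by its smallest member and runs at the speed of its slowest disk:

```
# Two mirrored pairs, with the odd fifth disk as a hot spare:
zpool create tank mirror ada1 ada2 mirror ada3 ada4 spare ada5
# ...or all five disks in one raidz1 - more space, one-disk redundancy:
zpool create tank raidz1 ada1 ada2 ada3 ada4 ada5
```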
DrDork posted: If you're more concerned about price, you may want to consider cracking open external drives. They'll have whatever the manufacturer's 5400RPM or "Green" drive is in there, and (for whatever reason) you can usually find them $20+ cheaper than their normal internal-drive equivalents. Obviously you take a chance with warranty and whatnot, but $20 saved per drive x 5 drives pretty much buys you a spare anyhow.

Regarding FlexRAID, I've been using it for about 2 months now. It's decent, but also pretty fucking buggy; I often have to stop and restart the service if I've tinkered with the settings too much. That said, when it works, it works.
My general rule of thumb for ZFS is that if you have over 4 drives, RAIDZ2 is the way to go, especially if you're using large drives. I worry about long rebuilds on a big array. I'm probably just being paranoid (I've never had a drive fail during a rebuild). Then again, RAID isn't backup, so you should have a large external or offsite backup solution on top of your NAS.
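Spelled out, that rule of thumb is a one-liner (device names are examples), plus regular scrubs so latent bad sectors surface before a rebuild ever needs to read them:

```
# Six disks, two-disk redundancy:
zpool create tank raidz2 ada1 ada2 ada3 ada4 ada5 ada6
# Scrub now, and e.g. weekly from cron thereafter:
zpool scrub tank
# 0 3 * * 0 /sbin/zpool scrub tank
```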
PopeOnARope posted: Regarding FlexRAID, I've been using it for about 2 months now. It's decent, but also pretty fucking buggy; I often have to stop and restart the service if I've tinkered with the settings too much. That said, when it works, it works.
ashgromnies posted: Is there any RAID implementation that works similarly to Drobo's BeyondRAID? I already have 7TB of mixed disks in a Drobo and was considering selling it for $200 and upgrading to a Drobo FS for $600 so I could access it over the network, but maybe there's something else that would work better?

When I moved to a Linux file server from Windows Home Server, I had quite an array of different-sized disks. What I did was come up with a scheme to partition them all so that I had at least 3 of every size of partition. I then used mdadm to RAID each set of 3+ partitions together, and then used LVM to pool them all together. Then I fucked myself in the ass by using ext4 on LVM, and when my storage expanded to 16TB I was stuck, because 16TB is the biggest you can go with ext4. Now I need an extra 16TB of space to back all that data up to, and to come up with a better filesystem...
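Sketched out for a single size class, that scheme looks something like the following. All devices and names are examples of the approach, not a record of the actual commands used; each additional partition size gets its own md array, and they all land in the same volume group:

```
# Three equal partitions -> one RAID5 array:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
# Pool the array(s) with LVM:
pvcreate /dev/md0
vgcreate storage /dev/md0        # vgextend storage /dev/md1 ... for more
lvcreate -l 100%FREE -n media storage
mkfs.xfs /dev/storage/media      # see the filesystem note below
```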
Thermopyle posted: When I moved to a Linux file server from Windows Home Server, I had quite an array of different-sized disks. What I did was come up with a scheme to partition them all so that I had at least 3 of every size of partition.

If you want a BIG FS with BIG files, then XFS is probably the way to go right now. This has actually been true for a while, but recently XFS has been fixed so it's better with lots of small files (metadata-heavy workloads) too. http://lwn.net/Articles/476263/ http://www.youtube.com/watch?v=FegjLbCnoBw I'm also pretty sure >16TB ext4 requires a mkfs option, so you can't just convert over.
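Concretely, and assuming a new-enough e2fsprogs (1.42 or later) for the ext4 case:

```
# XFS has no practical ceiling at these sizes:
mkfs.xfs /dev/storage/media
# ext4 beyond 16TB needs the 64bit feature set at mkfs time;
# an existing 32-bit ext4 filesystem can't simply be grown past it:
mkfs.ext4 -O 64bit /dev/storage/media
```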
Oddhair posted: What would the symptoms be if you had the wrong cable? I have the forward kind of setup (and I just double-checked, the cables I ordered are forward), but I've been having problems. Does the connection simply not recognize the drives?

I just got this today, and holy shit were you right. The cable looks like it's made of foil or something. Oh well, it's plugged in and working.

code:
|
![]() |
|
I used that cable first with an Intel drive cage just placed in my WHS, and the RAID controller (which has an audible alarm, yay!) was throwing PD errors. Since one of the SATA ports on the cage was loose from the board, I then got some 3-in-2 trays with fans and placed the 6 drives in them, and currently it's seeing 5 of the 6. There are so many unverified variables that I don't have any way to blame any one component. I've already installed a bigger PSU in case that was the issue; I just need this storage to last long enough, reliably, to RMA a pair of HDs totaling 2.5TB to Seagate. Also, since it's WHS v1 it will have to be <2TB/volume, so I was going to do a RAID6 with the 6x500GB drives.
FISHMANPET posted: I just got this today, and holy shit were you right. The cable looks like it's made of foil or something. Oh well, it's plugged in and working.

Yeah, I've got the same one. Works fine, but scares the shit out of me.
Finished building the system I posted about a few pages back: eight 2TB SATA drives, Intel controller, Supermicro board, etc. Went ahead and put Windows Storage Server on it (we're an exclusively Windows shop), and for some reason none of the Hyper-V hosts can see any of the 5 targets on the box. The port is open - they can telnet to the port, so I know connectivity is good - but nothing shows up on refresh in the iSCSI initiator. Microsoft's iSCSI target software seems almost idiot-proof, so I'm not sure what I missed. If anyone has any ideas, I'd appreciate hearing them.
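One thing worth trying from a Hyper-V host is walking the discovery by hand with the built-in iscsicli (the portal IP below is a placeholder). If targets enumerate here but still won't show in the GUI, the usual culprit is the target's initiator IQN/access list rather than connectivity:

```
REM Register the target portal, then list what it advertises:
iscsicli QAddTargetPortal 192.168.1.50
iscsicli ListTargets
```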
ashgromnies posted: So for something reliable that will take any drive size and allows hot-swapping, my only good option right now is Drobo?

Once the array is up and running, it's usually fine. It's just when you need to change things that you have to fuck with it.
So I got my second 5-in-3 enclosure installed and all wired up.

FISHMANPET fucked around with this message at 00:42 on Feb 10, 2012