adorai posted:
> You should use ECC ram if your data is actually important.

Also, if you have transient errors getting written to disk, you can always use ZFS's awesome snapshot system to get back the original, uncorrupted data. It's just far more resilient than what I can do with mdadm + LVM + hardware RAID on my consumer hardware. Of course I'd still use ECC if it was actually important; ECC is mandatory in a business environment except for end-user systems. But look up at the topic: we're not in the enterprise storage thread. There's a certain level of tolerance we have at home for our horse porn and home vacation movies that we don't have at our workplaces.
---

adorai posted:
> You are absolutely incorrect. ZFS will write whatever it is told to, so if what is in RAM is bad, ZFS will still write it, checksum it, and write the checksum that tells it the data is good.

Using what parts?
---

necrobobsledder posted:
> Also, if you have transient errors getting written to disk, you can always use ZFS's awesome snapshot system to get back the original, uncorrupted data. It's just far more resilient than what I can do with mdadm + LVM + hardware RAID on my consumer hardware.

Snapshots are awesome stuff, especially on static datasets. Once I finished organizing my shit, I snapshotted it, and now, barring something catching fire, even if a stick of memory goes tits up and starts writing junk, I'll still have a known good copy!

Goon Matchmaker posted:
> Using what parts?

- Any decent case + decent power supply: ~$150
- 4x Samsung 2TB drives: ~$120 each, ~$480
- Intel motherboard w/ integrated video: ~$120
- Core 2 Duo: ~$150
- 2x2GB DDR2 RAM: ~$175

Total: ~$1075. These are ballpark numbers I pulled off Newegg and out of my ass. It's entirely possible to get this set up for under $1000. It only starts getting silly when you want hot-swappable stuff, SAS controllers, and rackmountable cases.

Methylethylaldehyde fucked around with this message at 01:26 on Jun 16, 2010
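For reference, the snapshot-and-rollback workflow described above comes down to a couple of commands. A minimal sketch; the pool and dataset names ("tank", "media") are made up:

```
# Take a read-only, point-in-time snapshot of the dataset.
zfs snapshot tank/media@known-good

# Confirm the snapshot exists.
zfs list -t snapshot

# If bad data later gets written, roll the dataset back to the
# snapshot, discarding everything written since it was taken.
zfs rollback tank/media@known-good
```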
---

I'm a bit skeptical of OpenSolaris. I've used it at work and found it incredibly frustrating, mainly because it was unfamiliar and different. But comparing mdadm and ZFS, I'm really not sure which I should go with. I'll probably be getting six 2TB EARS drives or 2TB Samsung drives. I'd prefer to use Linux since I'm familiar with it, but am I going to end up hating myself and my setup for choosing software RAID-5/6 over RAID-Z?
---

Methylethylaldehyde posted:
> Any decent case + Decent power supply ~150

- 6x Samsung 2TB drives: $110 each = $660
- Corsair 400CX PSU (you don't need that much): $30 (yep, look around for the rebate + coupon code)
- Intel / LSI SAS 8-port card: $130
- E5200 C2D + mobo: $130
- 2x2GB DDR2 RAM: $100
- Antec 300 case: $30
- 5-in-3 bay: $90
- Random old hdd for OS: $10

Total: $1180, about $220 of it arguably not needed (the 8-port card and the 5-in-3 bay). It'll fit up to 11 drives total with the expander (6 internally). Gets you 12TB raw (more like 10.5TB usable because of the dumb binary vs. decimal terabyte thing) and a number of possible configs. If I wanted ECC RAM and a Xeon + mobo to go with it, the cost would bump up another $500 for more peace of mind and greater power draw. Is it worth it? Your call.
---

It seems to me that there's a consensus in this thread that the best solution for your typical home user storing terabytes of movies and TV is a dedicated box shoved in a closet somewhere, running Linux or Solaris with ZFS. This is a bit academic for me, since I've already built my new system, but I'm wondering what the "recommended" solution for network connectivity would be. That is, say you have your sweet ZFS system in your garage or closet: what kind of network do you use to connect it to your Windows box in your bedroom, or your HTPC near the TV? This covers both the physical/link-layer side and the protocols that run on top. I see people saying that SMB (Windows file sharing) over typical Fast Ethernet or 802.11g gives terrible latency and bandwidth when trying to work with your files from a desktop. So what is the better way, especially for connecting a Windows machine to the server?
---

Sgs-Cruz posted:
> It seems to me that there's a consensus in this thread that the best solution for your typical home user storing terabytes of movies and TV is a dedicated box shoved in a closet somewhere, running some Linux or Solaris, with ZFS.

Samba is all that Windows speaks, so there isn't anything else. It's also really not that bad. I wouldn't run the server on wireless, though. Run a cable out to the garage/closet/whatever and get it wired up right. If you can, wire everything; wireless just isn't that great for streaming content.
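As an aside, on OpenSolaris the sharing doesn't even have to go through Samba: the in-kernel CIFS service can export a dataset directly. A minimal sketch, assuming the SMB server packages are installed and using made-up pool/dataset names:

```
# Enable the in-kernel CIFS/SMB server and its dependencies.
svcadm enable -r smb/server

# Export the dataset over SMB; Windows clients see a share
# named "media".
zfs set sharesmb=name=media tank/media
```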
---

FISHMANPET posted:
> Samba is all that Windows speaks, so there isn't anything else. It's also really not that bad.

Windows 7 has an NFS client in it, which is what I use to mount up shares from the media server.
---

Vista (as well as 7) also got a rewritten SMB (CIFS) client supporting SMB 2.0, which you can think of as the USB 2.0 version of the protocol. It's completely backwards compatible, but if both ends speak the newer version it can be noticeably more efficient.
---

Let's also not forget Windows Home Server and the Linux EVMS / LVM / mdraid options for those people not hardcore enough to run OpenSolaris / FreeBSD with ZFS. SOHO NASes work well for a lot of people, too. Then there's always the Sun Thumper / Thor options that'll cost a lot more. Given how much time I've put into all this and my salary, it's probably reasonable to have bought one of those suckers by now.
---

NeuralSpark posted:
> Windows 7 has an NFS client in it, which is what I use to mount up shares from the media server.

Only in Ultimate though, I believe.
---

roadhead posted:
> Only in Ultimate though I believe.

All my installs are Enterprise.
---

NeuralSpark posted:
> All my installs are Enterprise.

This discussion was in regard to a home user. You can only get Windows 7 Enterprise through an enterprise agreement, and I sort of doubt you went through that trouble for home.
---

adorai posted:
> This discussion was in regard to a home user. You can only get windows 7 enterprise through an enterprise agreement, and I sort of doubt you went through that trouble for home.

Actually, I'm a student with Windows 7 Enterprise via the Microsoft campus license agreement.

Like I said, though, I've already built my system, and it's a 4-drive hardware RAID 5 inside my regular desktop. I live in a fairly small apartment (no garage) with my wife and didn't want two computers taking up space. So the problem of network latency is really just for my own interest (and to help out others reading the thread who are considering building a file server).

So, all you people with file servers running ZFS on OpenSolaris: do you access them through SMB from Windows, or do you use Linux machines to watch your videos? I guess I'm probably out of the ordinary here, too, since I use my computer to watch video content (it has a Dell U2410 screen, compared to the TV, which is a 24" non-HD CRT), whereas lots of people here probably have Linux-based HTPCs.
---

Sgs-Cruz posted:
> So all you people with file servers using ZFS on OpenSolaris; do you access them through SMB from Windows, or do you use Linux machines to watch your videos?
---

OK, so I'm planning on building an OpenSolaris-based file server. Originally I thought it'd be prudent to build it around a new Xeon or faster Intel chip, but I realized I have a dual quad-core (8x 1.8GHz) Opteron box that I'm not doing anything with. I figured the cores wouldn't be fast enough to do a nice raidz2 setup; am I wrong? Is the ZFS/raidz code sufficiently multithreaded, or sufficiently CPU-independent, that I could pull this off? I have 8GB RAM, too, and can bump it to 16GB for not too much money.
---

That's pretty much the definition of overkill. A Sun 7110, which has 16 10K RPM disks, runs on a single 1.9GHz quad-core Opteron.
---

illamint posted:
> OK, so, I'm planning on building an OpenSolaris-based file server. [...] Is the ZFS/raidz stuff sufficiently multithreaded or not CPU-dependent that I could pull this off? I have 8GB RAM, too, and can bump it up to 16GB for not too much money.

It's multithreaded out the ass, and unless you're trying to run a 50-disk RAIDZ2, that box is overkill the likes of which you have never seen. I would also toss VirtualBox on it and use it as a VM host.
---

Fuckin' sweet, that saves me having to buy $1200 worth of Xeon guts for no reason (although I'll maybe do that in the future to have an ESXi host, since that's what the Opteron box is doing now). Thanks, guys; I'll post a trip report when I get to it.
---

So I just noticed FreeNAS has ZFS support. I've got an old Opteron 165 box running FreeNAS; should I be able to drop two or three 2TB drives in there and get what amounts to a RAID implementation? I'm not sure I'm completely grasping all this.
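Roughly, yes: you hand the drives to ZFS as a raidz vdev and get striping plus parity in one step. A minimal command-line sketch (device names are made up; FreeNAS wraps the same operation in its web GUI):

```
# Build a pool named "tank" from three drives with single-parity
# raidz; the pool survives one drive failure.
zpool create tank raidz ada0 ada1 ada2

# Verify the layout and health of the pool.
zpool status tank
```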
---

With all this ZFS talk, I really want to try my hand at building something other than a Windows box for storage and try out software RAID. Ideally I want something I can share with my Windows 7 box for XBMC, which it sounds like OpenSolaris or FreeBSD could do without issue. What are the cons to running a RAIDZ off something like this: http://www.newegg.com/Product/Produ...N82E16813500027

Being limited to three SATA connections is probably the first big issue, but would it be easy enough to just get a 4-or-more-port mini PCI Express SATA card? Downfalls? Right now I want to build a box with 4-6TB of usable storage, with no need to get much bigger than 6TB all said and done. Should I just suck it up and build something without an Atom processor? Is there another good resource for learning/figuring out OpenSolaris for someone who has never used it and doesn't want to flounder like an idiot trying to set it up with RAIDZ?
---

Gotta make sure you're running the Ion in 64-bit mode, and the onboard NIC will probably have problems with OpenSolaris, either from lack of support or really shit performance. It's fine if you don't mind something like 10MB/s throughput, but it can be frustrating if you're used to a speed-demon network.

The problem I have with such a board isn't any of that, though; it's the lack of a PCIe 16x/8x slot for an add-on SATA controller that doesn't rely on a port multiplier (I don't expect port multipliers to work on OpenSolaris, but maybe on Linux). The physical real estate of the board will also keep you from mounting a bigass card, even if you're fine with PCIe 1x bandwidth for your system.

There's almost no case out there that's optimal for a mini-ITX board and several drives; you're almost certainly going to wind up with a micro-ATX board and at least a mid-tower case. That one Chenbro case only supports 4 drives and is like $180+, which isn't enough for lots of people. I'm stuffing drives into an Antec 300 case I got for like $30 (space for 7+ drives internally, and with a 5-in-3 backplane you get a total of 12 drives possible), and it also happens to have plenty of room for fans to cool the drives. Most mini-ITX cases won't cool 4+ hard drives (even the green ones) very well, which could lower the life expectancy of your drives if it gets bad.

If you're sure you won't use more than 12TB, there's always the NexentaStor distribution, which is a weird amalgamation of a Linux userland and a Solaris kernel. The community edition lets you use up to 12TB; that's 12TB *used*, so you could build a 1-exabyte system and, as long as used space stayed under 12TB, it'd still be free. There's also FreeNAS, which has ZFS support in recent builds, but I've heard all sorts of horror stories of incompatible hardware messing things up or badness happening in the OpenSolaris -> FreeBSD port. That was a year or so ago, though; now OpenSolaris is kinda buggy (no thanks to Oracle) and FreeBSD has had time to make ZFS mature on its own terms.

If you really are that impatient / non-technical, I'd recommend one of those SOHO NASes: grab one that allows 4 or 5 drives and stuff it full of 2TB drives, which I'd estimate will cost you somewhere around $700-$1000 total. Ignorance / lack of time / laziness / lack of willingness to learn costs you money in this world; that's kinda how human economics works. ZFS-capable OSes aren't anywhere near as easy to administer or maintain as Windows systems of any flavor, partly because they typically don't have any neato GUIs.
---

necrobobsledder posted:
> Good advice...

I never said I was lazy or had a lack of willingness to learn. I have just never used anything other than Windows and Mac, and I don't want to jump in feet first without some decent research; I'd prefer to do the whole thing the "right way" rather than just throwing together a $1000 box and learning by trial and error. I would prefer to build something from scratch, since the build process isn't a problem for me.

Taking what you told me, it looks like the Atom is an awful choice for what I want to do. I probably need to look into a board with 5-6 SATA ports and some room for expansion, something in the Intel flavor I assume? From your post, it sounds like FreeBSD is a good way to go and has solid ZFS support. I guess I need to research OpenSolaris and FreeBSD some more and decide which will fit my needs better. Because learning either isn't going to be a problem for me, I just need to pick one, and I was hoping to get some feedback between the two.
---

I've got an odd question here. I'm toying with the idea of implementing a NAS at my school. It's about 400 students with 25 teachers/admins, so I'm not looking for something super complex. What I would like to do is allow teachers to upload photos and short clips to a central server. Our own server can't handle that load on top of the logging-in and file-saving the student accounts generate, so I'm thinking a NAS is the way to go.

Basically, I want to build a repository of pictures the teachers can go to if they need them (especially the yearbook folk), and maybe also include the ability to download audiobooks or video files of productions and small events. Any suggestions where to start? I was thinking an off-the-shelf solution with 2TB of storage, but it'd have to work with our Mac-heavy environment.
---

Shizzy, if you want to go completely insane, check this out: http://www.newegg.com/Product/Produ...N82E16813500055

Same price as the one you linked, but it has 6 SATA ports. More importantly, it has a 1x PCIe slot and a 16x PCIe slot. The 1x is used by the Ion, but I think the board will still work if you take that off. You can put an Intel NIC in the 1x slot and a stupid-huge SATA/SAS card in the 16x slot.

PS: no port multipliers work with OpenSolaris.
---

I run WHS, and I'm getting ready to move from a total of 10TB (6 drives) to 20TB (adding 5 drives) of storage. Unfortunately, I need to add more SATA ports. Any recommendations for a PCI Express card that can run drives individually (JBOD)? I don't need RAID, but from what I can tell, the quality performance cards are all RAID anyway. I guess I could go with PCI as well; I've currently got 4 drives on a cheap SATA PCI card, and I find performance somewhat meh, as the drives on it seem to max out around 30-35MB/s...
---

TDD_Shizzy posted:
> Never said I was lazy or had a lack of willingness to learn, I have just never used anything other than Windows and Mac and don't want to jump in feet first without some decent research, and would prefer to do the whole thing the "right way" rather than just throwing together a $1000 box and learning by trial and error.

Look at FreeNAS. It's basically plug and play if you have compatible hardware: ZFS support and everything else (iSCSI, etc.). It's FreeBSD underneath, but you can run it from a USB stick or live CD. Very low hardware requirements; pick your hardware based on how fast you want things.
---

what is this posted:
> Look at FreeNAS. It's basically plug and play if you have compatible hardware. ZFS support and everything else (iSCSI, etc). It's FreeBSD underneath, but you can run it from a usb stick or liveCD.

Yeah, that actually sounds really cool and right up my alley, and it sounds like, if I ever wanted to move to OpenSolaris, I could export the zpool and import it there. One question I've been getting mixed signals on: if I set up a zpool containing 3x2TB drives in RAIDZ, can I just add a 4th, 5th, and maybe 6th drive later (as long as my hardware supports it)? From reading, it sounds like I'd have to create another RAIDZ vdev and add that to the pool? Ideas?

Edit: I more or less figured out the answer, and I think I have a decent solution until I can afford 4-6 2TB drives: set up my RAIDZ with all my existing drives, 3x1TB and 3x1.5TB. When I pick up a new 2TB drive (or can buy them), swap it in for the smallest drive and let it repair?

TDD_Shizzy fucked around with this message at 04:10 on Jun 24, 2010
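The swap-and-repair plan in the edit is the standard way to grow a raidz vdev: replace one disk at a time and let each resilver finish before touching the next. A rough sketch with made-up pool and device names; note that a mixed 1TB/1.5TB raidz only uses the smallest member's capacity per disk until every drive has been upgraded:

```
# Swap the old 1TB disk (ada1) for the new 2TB disk (ada6) and
# let ZFS resilver the data onto it.
zpool replace tank ada1 ada6

# Watch resilver progress before replacing the next drive.
zpool status tank

# On ZFS versions that support it, let the pool grow on its own
# once every member has been replaced with a larger disk.
zpool set autoexpand=on tank
```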
---

TDD_Shizzy posted:
> Edit: I more or less figured out the answer [...] When I pick up a new 2TB drive (or can buy them), swap it in for the smallest drive and let it repair?

Yep, that process is called resilvering. The only thing that scares me about it is the hard drive thrashing that comes along with it.
---

I've been tasked with getting a file server for a branch of ours that only has about 5-6 people accessing it. The cheapest method is to send down an XP box, but I want to avoid that in case they end up having more than 10 users on it at any given time in the future (XP caps concurrent connections at 10). I think a Windows Server 2008 box would be overkill for that many users, which got me looking into a NAS. Dell starts at $3500, so let's stop there. I was looking at http://www.drobo.com/, specifically the Drobo FS, and it looks like a good solution. I was wondering if anyone has any experience with these? Can you format the drives with NTFS, or hook it into the domain at all for SSO?
---

No, the Drobo is terrible; don't buy a Drobo. Buy a Thecus, QNAP, Synology, etc. NAS, or maybe throw together a FreeNAS box. I'd recommend an off-the-shelf solution for ease of support.
---

So I recently put together a software RAID-6 array with 6x2TB EARS drives and decided to benchmark it with palimpsest (the GNOME disk utility).

[benchmark graph omitted]

Does this look right? The way the throughput drops off dramatically is a little unnerving. I *think* I got the partitions aligned; GParted says the first sector on all the partitions is 2048. Is there anything else I could have done wrong?

More benchmarks: one two three four

Horse Clocks fucked around with this message at 02:02 on Jun 25, 2010
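For what it's worth, some falloff across a benchmark run is normal for rotating disks, since outer tracks are faster than inner ones. Double-checking alignment and array geometry from the command line looks something like this (a sketch; device names are made up):

```
# Print partition start sectors; with 512B sectors, any start
# that is a multiple of 8 (like 2048) is aligned for 4K drives.
fdisk -lu /dev/sdb

# Show the array's chunk size, layout, and member state.
mdadm --detail /dev/md0
```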
---

What the fuck is it with recent SATA drives? What the hell are they doing while idle?! A few months ago I got two WD Greens, and as soon as I attached them they ground their heads for a long while at idle. At some point it stopped and now it only happens occasionally, so I shrugged it off. Now one of them is causing tons of problems, so I swapped it out today for a Seagate Barracuda 7200.11 (I read about the issues with THAT one after the order went through, but I needed a 1.5TB drive matching the sector count). Instead of booting back into Linux and resyncing the mirror, I said "fuck it, I'm going to play some games first!" And guess what: the drive is seeking like shit at random points in time, untouched, uninitialized.

And you don't even get an answer from the drive manufacturers. Back with the WDs, I asked their support about it and got the standard WD diagnostic tool spiel.
---

What would people recommend for a basic/decent 4-port (maybe 8-port) SATA II card? I was given a board I like, but it only has 4 SATA ports, so I need to come up with 4 more, and eventually 8. This is the board: http://www.newegg.com/Product/Produ...N82E16813128424

It has 1 PCIe x16, 1 PCIe x1, and 2 PCI slots. Looking at Newegg there are some good 4-port options, but I'm wondering what people's preferences are. I don't need RAID or anything, just 4 or more ports with the ability to slap another card in down the road if needed. This will be used in a WHS build.
---

I have an old ReadyNAS NV+ that has apt installed. I've read a lot of the forum talk about how running an update on the current packages will break SSH/HTTP access, but all I want to do is install Ruby. Does anyone have any experience with the ReadyNAS? I'm thinking installing Ruby shouldn't pose any problems, since it seems like the main packages that cause breakage are the C library packages and Apache (from what I've read).

Edit: nm, looks like you can install Ruby, just not Ruby 1.9.1.

Strong Sauce fucked around with this message at 18:43 on Jun 30, 2010
---

TDD_Shizzy posted:
> What would people recommend for a basic/decent 4-port (maybe 8-port) SATA II card? I was given a board I like, but it only has 4 SATA ports, so I need to come up with 4 more, and eventually 8. This is the board: http://www.newegg.com/Product/Produ...N82E16813128424

Take a look at this card: http://www.amazon.com/Supermicro-Ad...77948232&sr=8-5

I believe it's supported by WHS. All you need to do is pick up a bracket and breakout cables.
---

Phatty2x4 posted:
> Take a look at this card:

Intel makes a card with the same chipset that costs a few bucks more, but it comes with the bracket (but still no cables, I believe).
---

Phatty2x4 posted:
> Take a look at this card:

Do you really need a bracket? It seems like it'd work without one. If you do need one, where can I find such a bracket? Searches for "bracket" and "PCI bracket" yield much that doesn't seem too useful...
---

Thermopyle posted:
> Do you really need a bracket? It seems like it'd work without one. If you do need one, where can I find such a bracket?

If your card sits vertically, so that its weight just pushes down on the PCIe slot (like in a rackmount case), then probably not. If it's horizontal (like in a tower case standing up), then you'll want the bracket; PCIe slots aren't meant to support much weight, and the card is supposed to be supported by the case via the bracket.

The reason that card needs a separate bracket is that it's made for Supermicro cases, whose expansion slots are "backwards," so the bracket it ships with won't work in a standard case. I tried to find a bracket for mine, and it was a pain in the butt: I only found one company that made them, and they wanted to sell wholesale. I ordered one I thought would work, but the threading on the holes was wrong, so it didn't. Luckily I had a bracket from a wireless card that fit.
---

If I have 15x 750GB drives, what would be the optimal raidz2 configuration? Optionally I could drop a drive, leaving an even 7+7 split; not sure if that would make things easier or not. Keep in mind I'm new to RAID, and raidz in particular. If I'm understanding correctly, I believe two vdevs of 7 drives each (with double parity making each effectively 5x750GB of usable space) is the way to go. Is this correct?

Now, what does a "pool" mean in terms of addressing the drives from, say, Windows? Do I see two drives?

Also, what wattage PSU would be required to run this many drives? I'm thinking 650W would probably cut it, but I'm not sure if that would be cutting it too close. Maybe 750W instead? Thanks.
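On the addressing question: both raidz2 vdevs would live in a single pool, and clients see one filesystem, not two drives. Each 7-disk raidz2 vdev contributes (7 - 2) x 750GB, about 3.75TB, so the pool offers roughly 7.5TB of usable space. A sketch with made-up Solaris-style device names:

```
# One pool, two raidz2 vdevs of seven 750GB disks each; ZFS
# stripes data across the two vdevs automatically.
zpool create tank \
  raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0

# Shared out over SMB, Windows sees a single share backed by the
# whole pool.
zfs set sharesmb=on tank
```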