BlackMK4
Aug 23, 2006

wat.

Megamarm

Update: It still stutters, but not as badly. I will look into getting another HDD, I guess.

Megaman
May 8, 2004
I didn't read the thread BUT...

Stupid question: I have a Synology DS1511+ and it seems like it's ALWAYS working on something, yet I'm never (or hardly ever) using it, no ports are open through my router, and I live alone. Has anyone experienced this? Is there some indexing job or something simple that I need to turn off on it? I don't want the SATA drive life to be fucked over by whatever is going on. I have five 1TB drives, all Western Digital: 3 Green drives, 2 Blacks.

Gorfob
Feb 10, 2007


How many SATA 3Gbps drives can I run off a PCIe x16 slot before they start bottlenecking? Unless I'm super retarded and suck hard at maths, running an IBM m1015 with 8 drives on it should be more than fine?

movax
Aug 30, 2008



Gorfob posted:

How many SATA 3Gbps drives can I run off a PCIe x16 slot before they start bottlenecking? Unless I'm super retarded and suck hard at maths, running an IBM m1015 with 8 drives on it should be more than fine?

The question can get a bit more specific than that, but at Gen. 1 speeds, a PCIe x8 link is generally enough when mechanical drives are attached to it. For instance, the LSI 1068E is a Gen. 1 x8 device, which is "only" 2GB/s of bandwidth.
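
(For reference, that figure is just 8 lanes x 250 MB/s per PCIe Gen 1 lane, so eight mechanical drives at realistic sustained speeds come nowhere near saturating it.)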

Telex
Feb 11, 2003



necrobobsledder posted:

Do be careful about using ZFS with a VM keeping it running. Raw Device Mappings (RDMs) have different compatibility modes, and apparently you're supposed to use virtual compatibility mode with ZFS, not physical compatibility mode, which is what you're supposed to use when directly exposing disks to any ZFS-supporting OS.

Okay, so I finally powered my FreeNAS VM down to change that config, and the option to change it was greyed out.

Is this possibly related to the fact that I got what looks like a bunch of data corruption in random spots suddenly?

In either case, how the hell do I make them work as virtual and not physical? I tried changing the RAID controllers, but then the thing wouldn't power on in VMware anymore, so I changed them back. I'm almost done backing up what I can salvage from this RAID, so I guess I can just erase the whole goddamn thing and do it all the right way, if possible.

Longinus00
Dec 29, 2005
Ur-Quan

BlackMK4 posted:

Update: It still stutters, but not as badly. I will look into getting another HDD, I guess.

I looked at the command that was recommended to you and I think you should try ionice -c3 before you get a new HD. If that's too slow for you, try increasing the value of n in ionice -c2 -n4.
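
(For reference, a quick sketch of how those invocations look in practice; rsync and the PID here are just stand-ins for whatever job is actually hammering the disk.)

code:
ionice -c3 rsync -a /src/ /backup/       # idle class: only gets disk time when nothing else wants it
ionice -c2 -n4 rsync -a /src/ /backup/   # best-effort class, priority 4 (0 = highest, 7 = lowest)
ionice -c3 -p 12345                      # or re-class a process that's already running, by PID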

Longinus00 fucked around with this message at 06:11 on Oct 21, 2011

teamdest
Jul 1, 2007


An update on the ongoing saga of modernizing my file server:

Swapped some hardware around and picked a few new things up. Currently running:

AMD Athlon II X2 240
8GB DDR3
5x 1TB Drives
4x 500GB Drives
1x 30GB SSD
1x 16GB USB key
1x 120GB Drive

I've been planning to do some consolidation for a long time now, and it's finally happened: my file server box became a VMware ESXi 5 box. Now I can throw around VMs like nobody's business, and having to reboot the fileserver doesn't mean a 20-minute pause in all my work. To facilitate this changeover, I've wound up using the following:

Supermicro USAS-L8i controller
HP SAS Expander (Not actually installed yet)
Some generic 3-in-2 5.25" to 3.5" bay adapter thing

And for software:

VMware vSphere 5
Debian
ZFS-on-Linux

The installation of vSphere was completely painless, the client software is excellent (it's a lot like Workstation if you have used that), and the management seems sound. I've spun up a File Server, LDAP Server, Applications Server, and a couple other things that I'm mostly playing around with. Right now one 500GB drive is used for the VM and hypervisor storage, but it will at some point be hosted on a RAID 10 of 500GB drives instead; I'm just waiting on the last cable I need. The 120GB drive will be a shared storage facility for the VMs so that they don't have to hit the file server's array for internal work. The 30GB SSD is going to be host to any VMs that have high disk I/O requirements.

The other VMs I'll omit, but the file server is a Debian 6 box running Samba and ZFS on Linux, possibly with some iSCSI shit to come later. It hosts a 3TB RAID-Z with hot spare, which is being destroyed as we speak (copying the data away) in favor of a 3TB Z2 array. I will probably add a pair of TB disks at some point to expand this to a 4TB RAID-Z2 w/ hot spare, but that's off in the future.

The transition from physical to virtual was relatively smooth, minus two big caveats:

1) The card I chose is one of Supermicro's new "UIO" things: the components are on the top of the board instead of the bottom, and the bracket is reversed. At the moment it's just sitting in the case (yay zip ties), but in the long term I'm going to need to put an extension on the bracket (looks to be about a centimeter or so) to align it to the PCI hole above it.

2) ZFS effectively requires physical access to the drives it uses; it works much better on whole devices than on partitions (in terms of speed and some unusual errors I'd seen). However, in order to use RDM (Raw Device Mapping), vSphere's version of direct drive access, the SATA/SAS controller needs to, and I quote, "Export the device's serial number", which apparently my motherboard controller did not. The Supermicro card does, so all is well, but bear in mind that if you're using the integrated stuff, there's a high chance of no RDM ability at all. (A sketch of creating the RDM mapping files by hand follows after this list.)

3) Yes I said two, shut up. Right now this isn't an issue, but eventually running 5-10 VMs on a single physical network port might begin to bog down, especially since the file server tends to get hit pretty hard in the evenings and I've seen that just about max a Gigabit connection all on its own. A cheap 1 or 2 port PCI-E 1x network card will fix that issue once it presents itself, but it's still something to think about.
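
Creating the RDM mapping files by hand from the ESXi shell, as mentioned in point 2, is roughly the following; the disk identifier, datastore, and VM folder names are placeholders, and this is just a sketch for cases where the vSphere client greys the option out:

code:
# Virtual compatibility mode (-r), which the earlier ZFS advice calls for
vmkfstools -r /vmfs/devices/disks/<disk-id> /vmfs/volumes/datastore1/freenas/disk1-rdm.vmdk
# Physical / pass-through compatibility mode (-z), for comparison
vmkfstools -z /vmfs/devices/disks/<disk-id> /vmfs/volumes/datastore1/freenas/disk1-rdmp.vmdk
Attach the resulting .vmdk to the VM as an existing disk and it behaves like a mapped raw device.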

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.


Gorfob posted:

How many SATA 3Gbps drives can I run off a PCIe x16 slot before they start bottlenecking? Unless I'm super retarded and suck hard at maths, running an IBM m1015 with 8 drives on it should be more than fine?

If we assume full SATA 3 Gbps speeds, that comes out to ~300 MB/s after transmit overhead (10b transmitted per 8b of data). A PCIe 2.x x16 slot has a bandwidth of 500 MB/s per lane, or 8000 MB/s total. Simple division tells us you can fit 26 2/3 drives that run at full speed (like SandForce SSDs) and, in a perfect world, none will be bottlenecked.

If, instead, you pick more reasonable drives, like the Samsung Spinpoint F3R with a sustained read speed up to 150 MB/s (average ~125 MB/s), you can fit 53 and a third on one PCIe 2.x x16 link without running out of PCIe bandwidth under a full sustained read across all drives.

In other words, don't worry about eight drives unless you're running them on less than an x4 PCIe 2.0 link.
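
As a quick sanity check on that division, reusing the same assumed numbers (500 MB/s per PCIe 2.0 lane, ~300 MB/s for a maxed SATA 3Gbps link, ~150 MB/s sustained for a fast spinner):

code:
$ echo "scale=1; (16*500)/300" | bc   # x16 link, drives pegged at SATA 3Gbps
26.6
$ echo "scale=1; (16*500)/150" | bc   # x16 link, realistic sustained reads
53.3
$ echo "scale=1; (4*500)/150" | bc    # even an x4 link comfortably covers 8 spinners
13.3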

Gorfob
Feb 10, 2007


Factory Factory posted:



movax posted:




Thank you to you both. Ordered one off eBay to give my old Core 2 Quad something to do after I replace it as my HTPC.

GobiasIndustries
Dec 14, 2007



Lipstick Apathy

Does anyone have recommendations on 4-in-3 or 5-in-3 swappable drive bays? I've got my server in the Antec 300 and my 4 HD slots are used up. I'd prefer front-loading bays because it means way less hassle if I need to replace a drive, and I can keep the inside wiring nice and neat. I was looking at something like this; the only thing I'd have to do, it seems, is take care of the guide rails inside the case to get it to fit.

devmd01
Mar 7, 2006

Elektronik
Supersonik


GobiasIndustries posted:

I've got my server in the Antec 300 and my 4 HD slots are used up.

Um the Antec 300 has 6 hard drive slots. I only had need for nine drives when I built my server, so I just used some 3.5->5.25" brackets I had lying around.

GobiasIndustries
Dec 14, 2007



Lipstick Apathy

devmd01 posted:

Um the Antec 300 has 6 hard drive slots. I only had need for nine drives when I built my server, so I just used some 3.5->5.25" brackets I had lying around.

Whoops... I actually have the Cooler Master 590; it must be my brother that has the 300. They look exactly the same when googling. But yeah, it has nine 5.25" drive bays, and it came with a 4-in-3 HD adapter.

BlackMK4
Aug 23, 2006

wat.

Megamarm

http://www.newegg.com/Product/Produ...icro-_-17121405

I have one of those but I guess it ends up costing more than yours.

dj_pain
Mar 28, 2005



I spent the whole weekend moving my file server to an Intel server board. Everything was fine till I had to attach the heatsink; everything was screwed onto the case.

movax
Aug 30, 2008



teamdest posted:

1) The card I chose is one of Supermicro's new "UIO" things: the components are on the top of the board instead of the bottom, and the bracket is reversed. At the moment it's just sitting in the case (yay zip ties), but in the long term I'm going to need to put an extension on the bracket (looks to be about a centimeter or so) to align it to the PCI hole above it.

I used longer screws + a nylon spacer to get the job done in my case; works like a charm.

For NICs, right now I have the two x1 PCIe slots populated with Intel GigE controllers, which seems to be working pretty well. When I finally get around to porting my server over to an ESXi-based box, I may try to team them or similar and feed that device to the VMNet manager.

Nystral
Feb 6, 2002

Every man likes a pretty girl with him at a skeleton dance.

Lipstick Apathy

Star War Sex Parrot posted:

Are the arrays that you're examining only comprised of 4 drives?

Yes, 3- or 4-drive arrays.

Cool Matty
Jan 8, 2006
Usuyami no Sekai

Hey all, I'm not sure if this is the best megathread for this question or not, but I figure you guys can either help point me in the right direction or answer it outright!

There's a small company I do small jobs for (building machines, fixing their shoddy network, etc.). After getting their terrible ~8-year-old Linksys HUB and trashy cables out, there's one last improvement they need: some sort of solid backup system.

This company does a lot of design and videography work. They have terabytes of 1080p footage to keep track of and to keep backed up. They don't necessarily need all of it on hand all the time (the video guy just copies over whatever he needs), but right now their only backup "solution" is external hard drives.

I'm thinking some sort of NAS with easy expandability is exactly what they need. Am I wrong? I'm really looking for some good recommendations here, along with what sort of software I could use to automate this process for them.

I would post this in the enterprise NAS thread, but this is decidedly far from that. They need something cheap, appropriate for a small business, that can hold a lot of data and can be expanded easily. Speed's not important, and they are finally running a gigabit network.

Raukowath
Jul 5, 2003
Wreaker of Havoc

I have read far too much of this thread and it has all started to blend together, and I have been putting off building a new small server with storage to function as a NAS along with a few piddly tasks here and there.

So far, the HP Microserver looks like a very nice deal and a big upgrade over what I am currently (and have been for going on 8 years) using. Perspective: it is a P3-800 with 4 120GB drives in a RAID 5 w/ hot spare, so I certainly need an upgrade at this point and decided to throw aside parts of a couple paychecks to get a more robust solution.

Also, I have not messed with RAID in any sense for about 6 years, and even before that it was always higher-end server hardware RAID solutions, which I will not have available to me.

So my question basically boils down to this: if I were to get the Microserver, upgrade the RAM, throw in four 1TB or 1.5TB drives, and install Debian, would that give adequate performance for the occasional backing up, storage, and reading if I went with software-based RAID? And is ZFS/RAIDZ the 'goto' for software RAID at this point? Also, do the 'Green' drives still have issues being in RAID, hardware or software? I remember hearing about that being a problem.

Any suggestions or tips would be greatly appreciated; I've been out of this game for far too long.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.


Linux does a lot better with mdadm, its default software RAID package. It's not as feature-filled as ZFS, but it's rock solid and no-nonsense. ZFS on Linux is not yet mature.
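
For reference, a minimal mdadm sketch; the device names, RAID level, and filesystem here are just examples:

code:
# Build a 4-disk RAID 5 from whole disks and put a filesystem on it
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0
# Record the array so it assembles at boot (Debian keeps the config here)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf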

luigionlsd
Jan 9, 2006

i dont know what this is i think its some kind of nazi giraffe or nazi mountains or something i dont know

Looking to go the Windows route on this one, though I suppose I'd consider *nix. I have a 2008 R2 license from Dreamspark, though I was considering buying WHS 2011. I know WHS "2003" is a pain in the ass with loading AHCI, especially considering the AM3+ board running this doesn't have native floppy. I have 3x2TB WD Greens ready to go, and I'm going to use a BIOS-configured RAID 5. Do I need to worry about the lack of Drive Extender if I've got RAID 5 going?

Also, bonus question: I've got a current 2008 setup with a micro ATX board and the onboard nVidia RAID 5 (chipset 630a) - if I just swap boards, will my RAID config be detected on the new AMD 870 chipset, or will I have to rebuild it from scratch? I've only got ~1TB stored so far, so backing up to a separate drive isn't a huge deal.

BnT
Mar 10, 2006



Cool Matty posted:

I'm thinking some sort of NAS with easy expandability is exactly what they need. Am I wrong? I'm really looking for some good recommendations here, along with what sort of software I could use to automate this process for them.

You mention cheap and terabytes, but not a ballpark on either. Going on that bit, you might want to look at the Synology 1511+, or match a budget to their other products. They have good reviews, lots of development, and would probably fit your requirements.

Cool Matty
Jan 8, 2006
Usuyami no Sekai

BnT posted:

You mention cheap and terabytes, but not a ballpark on either. Going on that bit, you might want to look at the Synology 1511+, or match a budget to their other products. They have good reviews, lots of development, and would probably fit your requirements.

That looks like a solid idea, but it might be a bit much for them currently. This might push them to save up, though, and I like the ability to expand.

Longinus00
Dec 29, 2005
Ur-Quan

Raukowath posted:

I have read far too much of this thread and it has all started to blend together, and I have been putting off building a new small server with storage to function as a NAS along with a few piddly tasks here and there.

So far, the HP Microserver looks like a very nice deal and a big upgrade over what I am currently (and have been for going on 8 years) using. Perspective: it is a P3-800 with 4 120GB drives in a RAID 5 w/ hot spare, so I certainly need an upgrade at this point and decided to throw aside parts of a couple paychecks to get a more robust solution.

Also, I have not messed with RAID in any sense for about 6 years, and even before that it was always higher-end server hardware RAID solutions, which I will not have available to me.

So my question basically boils down to this: if I were to get the Microserver, upgrade the RAM, throw in four 1TB or 1.5TB drives, and install Debian, would that give adequate performance for the occasional backing up, storage, and reading if I went with software-based RAID? And is ZFS/RAIDZ the 'goto' for software RAID at this point? Also, do the 'Green' drives still have issues being in RAID, hardware or software? I remember hearing about that being a problem.

Any suggestions or tips would be greatly appreciated; I've been out of this game for far too long.

How comfortable are you with Linux? If you're not, then you probably shouldn't be doing weird stuff like ZFS.

luigionlsd posted:

Looking to go the Windows route on this one, though I suppose I'd consider *nix. I have a 2008 R2 license from Dreamspark, though I was considering buying WHS 2011. I know WHS "2003" is a pain in the ass with loading AHCI, especially considering the AM3+ board running this doesn't have native floppy. I have 3x2TB WD Greens ready to go, and I'm going to use a BIOS-configured RAID 5. Do I need to worry about the lack of Drive Extender if I've got RAID 5 going?

WD Green drives? Check.
BIOS RAID (softraid)? Check.

Hope you enjoy headaches.

luigionlsd posted:

Also, bonus question: I've got a current 2008 setup with a micro ATX board and the onboard nVidia RAID 5 (chipset 630a) - if I just swap boards, will my RAID config be detected on the new AMD 870 chipset, or will I have to rebuild it from scratch? I've only got ~1TB stored so far, so backing up to a separate drive isn't a huge deal.

I highly doubt it but it probably won't hurt to try.

Matt Zerella
Oct 7, 2002


Raukowath posted:

I have read far too much of this thread and it has all started to blend together, and I have been putting off building a new small server with storage to function as a NAS along with a few piddly tasks here and there.

So far, the HP Microserver looks like a very nice deal and a big upgrade over what I am currently (and have been for going on 8 years) using. Perspective: it is a P3-800 with 4 120GB drives in a RAID 5 w/ hot spare, so I certainly need an upgrade at this point and decided to throw aside parts of a couple paychecks to get a more robust solution.

Also, I have not messed with RAID in any sense for about 6 years, and even before that it was always higher-end server hardware RAID solutions, which I will not have available to me.

So my question basically boils down to this: if I were to get the Microserver, upgrade the RAM, throw in four 1TB or 1.5TB drives, and install Debian, would that give adequate performance for the occasional backing up, storage, and reading if I went with software-based RAID? And is ZFS/RAIDZ the 'goto' for software RAID at this point? Also, do the 'Green' drives still have issues being in RAID, hardware or software? I remember hearing about that being a problem.

Any suggestions or tips would be greatly appreciated; I've been out of this game for far too long.

As mentioned above, you shouldn't really use ZFS on Linux.

However, you can use kFreeBSD, which is the Debian userland/package manager built on top of the FreeBSD kernel and provides native ZFS support. Or you could go with regular FreeBSD, which is a bit different from Linux, but it's not too bad.

ZFS itself is pretty damn simple to get up and running; the hard part, especially if you're not familiar with BSD/Linux, is sharing out the drives. Installing BSD is a little confusing too, but the installer on 9.0RC1 is MUCH improved, to the point where it's easier than Debian to set up.

I'm currently running FreeBSD 9.0RC1 (debugging disabled by recompiling the kernel) with an m1015 controller (flashed to IT mode) and 4x2TB 5k3000 drives, and it is rock solid.

Another thing to consider is that, for some reason, Samba on FreeBSD is complete ass in terms of performance. Personally I use Macs, so I just use AFP/netatalk and don't run into that problem.

Here's a good post install guide for FreeBSD to set up a ZFS array:
http://zfsguru.com/doc/bsd/setup

Also, stay away from the Green drives; the 5k3000 drives are just as good and don't have the 4K-sector headaches. ZFS takes care of pretty much everything (partitioning, reads, writes, etc.), so if you go that way, you want to make sure you present all of your drives as JBOD.
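
To give a flavor of how little there is to the ZFS side once the disks show up, a minimal sketch; the pool name and the da0-da3 device names are assumptions for an m1015 in IT mode, so adjust for your own controller:

code:
# Four whole disks straight into a single-parity RAID-Z pool
zpool create tank raidz da0 da1 da2 da3
zfs create tank/storage
zpool status tank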

movax
Aug 30, 2008



Cool Matty posted:

There's a small company I do small jobs for (building machines, fixing their shoddy network, etc.). After getting their terrible ~8-year-old Linksys HUB and trashy cables out, there's one last improvement they need: some sort of solid backup system.

Ok, here's my first issue. Do they want a solid backup system, or do they want a solid physically redundant block of storage, or do they want both? A NAS in their office isn't much of a backup if the place burns down. Obviously these guys don't likely have the budget to colo backup somewhere, but this is something to keep in mind before you sell them a "backup" system, as you have to cover your ass.

Moving on...

quote:

This company does a lot of design and videography work. They have terabytes of 1080p footage to keep track of and to keep backed up. They don't necessarily need all of it on hand all the time (the video guy just copies over whatever he needs), but right now their only backup "solution" is external hard drives.

Lots of data, possibly ripe for deduplication but that might not be possible at the price point. Is there a lot of simultaneous access? Looks like the video guy just copies over files to his local machine before working with them, which is good for you as now you don't need to design a box that can handle streaming uncompressed 1080p to clients.

quote:

I'm thinking some sort of NAS with easy expandability is exactly what they need. Am I wrong? I'm really looking for some good recommendations here, along with what sort of software I could use to automate this process for them.

NAS is probably the right solution here. Are you the on-call tech support guy? You want something as painless as possible, which means probably purchasing something like a Drobo Pro and shoving it full of 2TB drives like the Hitachi 5K3000s in RAID-6. That would give you 6x2TB usable space and tolerate 2 drives failing, but it won't do shit against environmental dangers or software errors like virii and human deletions.

quote:

Also, bonus question: I've got a current 2008 setup with a micro ATX board and the onboard nVidia RAID 5 (chipset 630a) - if I just swap boards, will my RAID config be detected on the new AMD 870 chipset, or will I have to rebuild it from scratch? I've only got ~1TB stored so far, so backing up to a separate drive isn't a huge deal.

I'm not an AMD chipset expert, but I'm relatively certain that you can't just move it over and you'll need to rebuild it. That's what, two chipset generations apart?

Cool Matty
Jan 8, 2006
Usuyami no Sekai

movax posted:

Ok, here's my first issue. Do they want a solid backup system, or do they want a solid physically redundant block of storage, or do they want both? A NAS in their office isn't much of a backup if the place burns down. Obviously these guys don't likely have the budget to colo backup somewhere, but this is something to keep in mind before you sell them a "backup" system, as you have to cover your ass.

Fully aware of that. They don't seem concerned enough to keep anything offsite at the moment, and I've given them the talk about it already.

They just need something that will protect them from drive failure, as currently they have zero redundancy. If an external dies with their old projects on it, those projects are gone.

quote:

Lots of data, possibly ripe for deduplication but that might not be possible at the price point. Is there a lot of simultaneous access? Looks like the video guy just copies over files to his local machine before working with them, which is good for you as now you don't need to design a box that can handle streaming uncompressed 1080p to clients.

There is little-to-no simultaneous access. There's only two guys who do any sort of design work there, and only one that works with video. He always copies the files over locally for maximum performance (he has SSDs in the machine) when working on a project.

I don't think there's actually much duplicated data, so it's probably not worth the extra money.

quote:

NAS is probably the right solution here. Are you the on-call tech support guy? You want something as painless as possible, which means probably purchasing something like a Drobo Pro and shoving it full of 2TB drives like the Hitachi 5K3000s in RAID-6. That would give you 6x2TB usable space and tolerate 2 drives failing, but it won't do shit against environmental dangers or software errors like virii and human deletions.

I wouldn't go so far as to say on-call, just anything they need that's beyond your average day-to-day IT needs. This company is literally made up of only 5 people, so they just wing it most of the time.

I'm aware it won't do anything for them as far as a true backup, but they need the redundancy first (not to mention they can't really afford to keep off-site backups of that much data right now).

I have to admit that the Drobo was my first instinct, but I had been reading a lot of really horrible things about them online, as far as reliability and bugs. Is a Drobo going to be solid? If it's not, I'd rather get them to make a larger budget for something that is.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll

Nap Ghost

movax posted:

Lots of data, possibly ripe for deduplication but that might not be possible at the price point. Is there a lot of simultaneous access? Looks like the video guy just copies over files to his local machine before working with them, which is good for you as now you don't need to design a box that can handle streaming uncompressed 1080p to clients.

Video data is oftentimes not easily deduplicable with most encodings. There are a few different formats I've heard of that work better for dedupe, but they're all internal formats used by huge companies like Pixar, Eastman Kodak, etc. rather than straight raw video, and they will bloat your data use more than you'd save with dedupe. Dedupe is more useful in service provider environments than anywhere else I can think of, anyway.

When it comes to chipset-based RAID, I would hardly bother with any of them because you're going to be tied to the motherboard in some manner which will lock your data into a platform that you might not even want in 2-3 years. And with only 1 TB of data to worry about, I'd seriously just buy a 2TB hard drive for like $80, transfer data over, let the new drive burn in and mellow out for a month or so, and then ditch the old drives that are probably about to go anyway.

Then again, I completely forgot about the recent spike in hard drive prices due to the Thailand flooding and it's not as cheap. Oh well, I've got a spare 2TB hard drive right now and should have some 1TB drives in the coming weeks. Best time I could think of to have been migrating a ZFS RAID to a new set of drives.

0x17h
Feb 24, 2011


Something I haven't seen mentioned (or perhaps I overlooked) in this thread is power consumption.

I need a small, low-power server for my house. Ideally this thing would do a number of tasks besides serving files: torrents, Time Machine, OpenVPN, and maybe a personal Minecraft server.
I'd like ~2TB capacity, and obviously some degree of fault tolerance but nothing hardcore. The data stored on this server will be transient and not that important.

Since it would be on all the time, power consumption is critical. I don't care about doing anything processor-intensive on this thing. It's been a few years since I built a computer, but I'm guessing that it would be good to use an Atom-based system for this along with FreeNAS (which I have used before).

Tips? I looked through this thread over a period of months, but I don't even know where to start at this point

0x17h fucked around with this message at 05:23 on Oct 28, 2011

Wheelchair Stunts
Dec 17, 2005

by Y Kant Ozma Post


For the FreeBSD ZFS options, I was wondering if these offer NFS/SMB/CIFS in-kernel like Solaris does? I'm sure that NFS support is there, but not so sure on CIFS, and/or if the kernel version in FreeBSD is compatible with like

code:
zfs set sharesmb=on;bullshit fartbullet

movax
Aug 30, 2008



0x17h posted:

Something I haven't seen mentioned (or perhaps I overlooked) in this thread is power consumption.

I need a small, low-power server for my house. Ideally this thing would do a number of tasks besides serving files: torrents, Time Machine, OpenVPN, and maybe a personal Minecraft server.
I'd like ~2TB capacity, and obviously some degree of fault tolerance but nothing hardcore. The data stored on this server will be transient and not that important.

Since it would be on all the time, power consumption is critical. I don't care about doing anything processor-intensive on this thing. It's been a few years since I built a computer, but I'm guessing that it would be good to use an Atom-based system for this along with FreeNAS (which I have used before).

Tips? I looked through this thread over a period of months, but I don't even know where to start at this point

Assuming you only really need 2TB of capacity, a simple 2-bay NAS enclosure should be the cheapest and lowest power way to get this done.

Cool Matty posted:

I have to admit that the Drobo was my first instinct, but I had been reading a lot of really horrible things about them online, as far as reliability and bugs. Is a Drobo going to be solid? If it's not, I'd rather get them to make a larger budget for something that is.

I believe the Drobo Pros are pretty solid. I don't recall if Synology makes an 8-bay model (they probably do), but you could also look at the Netgear ReadyNAS, which is available in rackmount form as well.

Matt Zerella
Oct 7, 2002


Wheelchair Stunts posted:

For the FreeBSD ZFS options, I was wondering if these offer NFS/SMB/CIFS in-kernel like Solaris does? I'm sure that NFS support is there, but not so sure on CIFS, and/or if the kernel version in FreeBSD is compatible with like

code:
zfs set sharesmb=on;bullshit fartbullet

They aren't there; you have to share them manually. I don't use NFS, so maybe someone else can chime in on that.
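
A rough sketch of the manual sharing route on FreeBSD, assuming Samba from ports and the stock NFS daemons; the share name, dataset, and network below are made up:

code:
# NFS: export the dataset's mountpoint and poke mountd
echo '/tank/storage -maproot=root -network 192.168.1.0 -mask 255.255.255.0' >> /etc/exports
kill -HUP `cat /var/run/mountd.pid`

# SMB/CIFS: no in-kernel server on FreeBSD, so a share stanza in /usr/local/etc/smb.conf:
#   [storage]
#       path = /tank/storage
#       read only = no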

AbsoluteLlama
Aug 15, 2009

By the power vested in me by random musings in tmt... I proclaim you guilty of crustophilia!


So I went to price out a NAS today. Has anyone noticed HD prices have skyrocketed on every site? I checked out CamelCamelCamel and most of the 2TB drives on Amazon that were $70-90 are now $130-150. Is this just a pre-holiday price bump or something?

Walked
Apr 14, 2003



AbsoluteLlama posted:

So I went to price out a NAS today. Has anyone noticed HD prices have skyrocketed on every site? I checked out CamelCamelCamel and most of the 2TB drives on Amazon that were $70-90 are now $130-150. Is this just a pre-holiday price bump or something?

http://www.msnbc.msn.com/id/4499573...s/#.TqrL4d4r29w

AbsoluteLlama
Aug 15, 2009

By the power vested in me by random musings in tmt... I proclaim you guilty of crustophilia!



Oh thanks. That makes more sense. I guess I won't be building that NAS for awhile

Longinus00
Dec 29, 2005
Ur-Quan

AbsoluteLlama posted:

Oh thanks. That makes more sense. I guess I won't be building that NAS for awhile

Any brick and mortar stores in your area? They might have ads with prices that last until the end of the month.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll

Nap Ghost

0x17h posted:

Since it would be on all the time, power consumption is critical. I don't care about doing anything processor-intensive on this thing. It's been a few years since I built a computer, but I'm guessing that it would be good to use an Atom-based system for this along with FreeNAS (which I have used before).

Tips? I looked through this thread over a period of months, but I don't even know where to start at this point

The HP Microserver was basically made for this sort of common need. I generally shy away from Atom servers mostly because the CPU is so slow that it starts to affect performance adversely here and there. Furthermore, despite Atom being power efficient itself, most Atom supporting chipsets (that you could pick out and put in a DIY server) consume like 3x as much power as the CPU. However, you could always look at the Acer home server (it runs Windows Home Server and uses an Atom CPU) or one of the 1-2 drive D-Link DNS-323 units for cheaper.

The biggest tip I can give for people building low-power servers is to use an appropriately specced power supply. Even with 80 Plus Bronze and Silver power supplies around, if a system is expected to use maybe 20W, a 400W Silver PSU will probably be less efficient (meaning more power draw in this case) than a 65W PSU running around 40% load. Part of why so many people who take their old machines and use them as a NAS see high power draw is that the PSUs are vastly overspecced for a home file server. The power savings can add up to 10-15W in a hurry on systems taking maybe 40-50W, which is important if you live where electricity's expensive like, say, Hawaii. The downside of these smaller supplies is that low-watt PSUs for computers can be rather expensive compared to even a 400W Corsair PSU, so if you're concerned about low power mostly because of total cost, well, I'd reconsider the "how small does this sucker have to be?" requirement.
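
For scale: 15W saved around the clock is 15W x 8,760 hours, or about 131 kWh a year, which at a Hawaii-ish $0.30/kWh works out to roughly $40 a year (the rate is just an assumption for illustration; plug in your own).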

Because of all these considerations, I'd recommend (for any home file server use case) the cheapo NASes like the D-Link I mentioned, an HP Microserver, or a giant fuck-off Norco RPC-4xxx 4U server that's still cost-effective for the storage capacity.

Longinus00
Dec 29, 2005
Ur-Quan

necrobobsledder posted:

The HP Microserver was basically made for this sort of common need. I generally shy away from Atom servers mostly because the CPU is so slow that it starts to affect performance adversely here and there. Furthermore, despite Atom being power efficient itself, most Atom supporting chipsets (that you could pick out and put in a DIY server) consume like 3x as much power as the CPU. However, you could always look at the Acer home server (it runs Windows Home Server and uses an Atom CPU) or one of the 1-2 drive D-Link DNS-323 units for cheaper.

The biggest tip I can give for people building low-power servers is to use an appropriately specced power supply. Even with 80 Plus Bronze and Silver power supplies around, if a system is expected to use maybe 20W, a 400W Silver PSU will probably be less efficient (meaning more power draw in this case) than a 65W PSU running around 40% load. Part of why so many people who take their old machines and use them as a NAS see high power draw is that the PSUs are vastly overspecced for a home file server. The power savings can add up to 10-15W in a hurry on systems taking maybe 40-50W, which is important if you live where electricity's expensive like, say, Hawaii. The downside of these smaller supplies is that low-watt PSUs for computers can be rather expensive compared to even a 400W Corsair PSU, so if you're concerned about low power mostly because of total cost, well, I'd reconsider the "how small does this sucker have to be?" requirement.

Because of all these considerations, I'd recommend (for any home file server use case) the cheapo NASes like the D-Link I mentioned, an HP Microserver, or a giant fuck-off Norco RPC-4xxx 4U server that's still cost-effective for the storage capacity.

The whole power supply market is inflated on wattage. The 80 Plus ratings only apply down to 20% load, and the reviews I've seen show that efficiency drops off to 70% or worse at 10% (basically the same as power supplies available 10 years ago). It's funny because all the high-power components (CPUs + GPUs) have such great power-saving ability that it's very likely most people with 500W+ power supplies spend most of their time in a sub-20% (100W) state unless they are overvolting like crazy.

Don't get too small a power supply for a NAS, though; hard drives use anywhere from 1 to 2 amps when spinning up, so make sure you either have a supply that can take that or enable staggered startup.
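
For scale: a hypothetical nine-drive box spinning everything up at once at 2 amps per drive is 18A on the 12V rail, or roughly 216W, before the CPU and motherboard draw anything; the drive count and per-drive current here are just illustrative numbers.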

Star War Sex Parrot
Oct 2, 2003



Muldoon

I made a thread to discuss the hard drive shortages:

http://forums.somethingawful.com/sh...hreadid=3445864

adorai
Nov 2, 2002

10/27/04 Never forget

Grimey Drawer

0x17h posted:

I'm guessing that it would be good to use an Atom-based system for this along with FreeNAS (which I have used before)

Zacate will be faster and use less power for the same money. Plus you have real PCIe slots for expansion.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


ZFS on Solaris question:

Is there a way to give a user permissions to run zfs snapshot and zfs send/receive without requiring sudo? I'm setting up two servers to mirror each other automatically, and I'd like to use an account other than root if possible. It will be run from cron and be piped through ssh (zfs send dataset | ssh host zfs receive)

E: Nevermind, found zfs allow:
http://blogs.oracle.com/marks/entry..._administration

FISHMANPET fucked around with this message at 01:09 on Oct 29, 2011
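
A rough sketch of what that delegation can look like; the user, pool, and host names are made up, and the exact permission list may need tweaking per the Oracle docs linked above:

code:
# On the sending box
zfs allow -u backupuser snapshot,send,mount tank/data
# On the receiving box
zfs allow -u backupuser create,mount,receive backup
# The cron job then runs without root
zfs snapshot tank/data@nightly
zfs send tank/data@nightly | ssh backuphost zfs receive backup/data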
