Methylethylaldehyde
Oct 23, 2004

BAKA BAKA


tboneDX posted:

I have a quick question:

I'm using OpenSolaris snv_118 and have a raidz1 pool of four WD 1TB Caviar Greens. I recently had a power failure, and I'm pretty sure it messed something up, as I/O performance hasn't been all that great lately. I'm currently running a scrub on tank, and it's 12h in, 4.51% done, and at 246h26m to go. Next to one of the drives it says '141K repaired'.

Should I assume that everything will be fixed once this scrub is done? Also, should I upgrade to a later version of OS? What about some of the other operating systems supporting ZFS (FreeBSD/Nexenta/others)?

I have had very few problems with this setup, but this latest incident has me a bit worried...

Go to the OpenIndiana website, update to the latest version using the in-place installer that uses pkg, then scrub. The scrub shouldn't take nearly that long, and it sounds suspiciously like shit I ran into back before I upgraded. Also, since it sets up a new Boot Environment, if you don't like OI, you can go right back to snv_118 when you're done.
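
For reference, the whole dance is only a handful of commands once pkg is pointed at the OpenIndiana publisher (their site covers that part); a rough sketch, assuming the pool is still named tank:
code:
zpool status -v tank   # check scrub progress and the per-drive "repaired" counters
pkg image-update       # in-place upgrade into a fresh Boot Environment
beadm list             # the old snv_118 BE sticks around as a fallback
zpool scrub tank       # kick off a fresh scrub after rebooting into the new BE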

Telex
Feb 11, 2003



well it was surprisingly easy to get sabnzbd and sickbeard running on FreeNAS..

http://sourceforge.net/apps/phpbb/f...php?f=15&t=8483

but I used those directions and now only the "sabuser" account has perms on the files that it downloads, which ain't gonna work. What (probably easy) thing do I have to do to give all the files something more usable like 644 so I can mess with them on my CIFS shares?

Is this a CIFS thing or a filesystem-level thing?
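
Most likely it's a filesystem-level thing: sabnzbd creates the files as sabuser with a restrictive umask, and CIFS just passes those permissions through. A minimal sketch of a fix, with the path as a placeholder and the ini key worth double-checking in the web UI (Config -> Folders):
code:
# loosen what's already been downloaded (point this at your actual download dir)
chmod -R u=rwX,go=rX /mnt/tank/downloads
# going forward, SABnzbd can apply permissions itself; in sabnzbd.ini under [misc]:
#   permissions = 755
# (directories need the execute bit to stay traversable, so plain 644 would break browsing)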

cage-free egghead
Mar 8, 2004

Ready to eat me, sir!


Drevoak posted:

Could you post some more details? Hard drive model/sizes, how long they've been used, etc.

Email me at lblitzer at gmail

mpeg4v3
Apr 8, 2004
that lurker in the corner

Anyone have any suggestions for ZFS performance degradation? I just moved from WHS to FreeBSD running ZFS on 3x 1.5TB in a raidz1 and 3x 2TB also in a raidz1. I'm having an (apparently common) problem wherein ZFS performance will start very fast, but as soon as (according to top) free memory drops below 200MB, it slows down ridiculously. I have a script running constantly now that will run a perl command to allocate 6GB of RAM and free it every 3 minutes to get it back to the proper performance, but I don't like that as a solution. The ARC size of the ZFS pool also drops down to a ridiculously low value (something like 8MB) when the performance drops, despite being 1-2GB just seconds before.

The server is running a Q6600 with 8GB of RAM. Hard drives are 2x 1.5TB Samsung HD154UI, 1x WD15EADS, 3x Samsung HD204UI. /boot/loader.conf is as such:
code:
# Memory tweaks (loader.conf tunables need the vm. prefix)
vm.kmem_size="12288m"
vm.kmem_size_max="12288m"

# ZFS tweaks
vfs.zfs.arc_min="2147483648"     # 2 GiB ARC floor
vfs.zfs.arc_max="4294967296"     # 4 GiB ARC cap
vfs.zfs.vdev.min_pending="1"
vfs.zfs.vdev.max_pending="8"
Quick edit: I also tried everything I could find online, including lowering kern.maxvnodes, but it didn't help. The only other thing I can think of is that somehow, transferring from an NTFS drive is causing some sort of weird conflict between ntfs-3g and zfs, where ZFS doesn't accurately see the memory as free after it's been used by ntfs-3g, or some such.
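
For what it's worth, the workaround loop is nothing fancy; a minimal sketch of the sort of thing described above (the exact perl one-liner and sleep values are assumptions, not the original script):
code:
#!/bin/sh
# touch ~6GB of RAM, let perl exit (freeing it), repeat every 3 minutes
while true; do
    perl -e '$x = "x" x (6 * 1024 * 1024 * 1024);'
    sleep 180
done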

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.


After using the suggestions ITT to look around more, I've decided to roll my own NAS but not reuse hardware (just because it's such power-hungry stuff). Instead I'm building a Mini-ITX server based off a Chenbro server case with hot-swap bays. Of course, the only Mini-ITX board I could find that had both USB 3.0 and at least 4 SATA ports also takes a nice big 73W TDP Core i3 (I will likely underclock it a bit), but hey, could be worse. I can run a GUI so I can click things and not care about using precious CPU cycles. Chenbro also does a version of the case with a 120W power supply if you want to do an Atom build.

Software-wise, I'm going to see if I can do the following: Ubuntu server installed on an mdadm RAID 0+1 partition set, with the rest of the space on the drives set as RAIDZ + hot spare via zfs-fuse. While a userspace filesystem driver is crappy for "real" server functions (many of which ZFS is crappy for, too, compared to ext4), it should work just fine for general fileserving. If it doesn't, or if I can't set it up for some reason, I'll just do RAID5 + hot spare via mdadm. If I'm *really* missing something, I'll just do one big array and partition with LVM. I nearly went with OpenIndiana or NexentaStor, but familiarity and zfs-fuse won the day for me.
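
The zfs-fuse side is stock zpool syntax, at least; a rough sketch with placeholder partitions, assuming three data disks plus a hot spare carved out of whatever is left after the mdadm/OS slices:
code:
zpool create tank raidz /dev/sdb3 /dev/sdc3 /dev/sdd3 spare /dev/sde3
zfs create tank/media
zpool status tank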

Side thought, anyone notice that a LOT of tech sites have the same type of generic "Good Design" design going on? It's nice from a usability standpoint, but it's not very exciting.

Factory Factory fucked around with this message at 09:07 on Nov 15, 2010

DLCinferno
Feb 22, 2003

Happy

Factory Factory posted:

After using the suggestions ITT to look around more, I've decided to roll my own NAS but not reuse hardware (just because it's such power-hungry stuff). Instead I'm building a Mini-ITX server based off a Chenbro server case with hot-swap bays. Of course, the only Mini-ITX board I could find that had both USB 3.0 and at least 4 SATA ports also takes a nice big 73W TDP Core i3 (I will likely underclock it a bit), but hey, could be worse. I can run a GUI so I can click things and not care about using precious CPU cycles. Chenbro also does a version of the case with a 120W power supply if you want to do an Atom build.

Software-wise, I'm going to see if I can do the following: Ubuntu server installed on an mdadm RAID 0+1 partition set, with the rest of the space on the drives set as RAIDZ + hot spare via zfs-fuse. While a userspace filesystem driver is crappy for "real" server functions (many of which ZFS is crappy for, too, compared to ext4), it should work just fine for general fileserving. If it doesn't, or if I can't set it up for some reason, I'll just do RAID5 + hot spare via mdadm. If I'm *really* missing something, I'll just do one big array and partition with LVM. I nearly went with OpenIndiana or NexentaStor, but familiarity and zfs-fuse won the day for me.

I'm not tracking on your proposed installation plan. You are installing the OS onto an mdadm array? What is going to run mdadm?

FWIW, I've got that exact same case for my backup server, and while I went with the Intel DG45FC board (no USB 3.0), I tried a couple different things to get a 5th drive into the case, which also has a dedicated space for a 2.5" drive. Tiny PCI single-drive SATA controller with a right-angle PCI express riser...worked for awhile, but was WAY too tight in the case. There's almost no room and it got pretty warm. Thought about wrapping an eSATA cable back into the case from the external port on the motherboard but that seemed sloppy. I ended up with a USB CF card reader plugged into one of the internal USB ports. Works great and 8 GB is more than enough for Ubuntu Server. If your motherboard supports booting from USB I'd recommend it.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.


Maybe mdadm is the wrong tool, but I'm under the impression that Ubuntu can put together soft RAID arrays during a live CD install, and since the driver is kernel mode, it will function just fine for /boot.
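
Under the hood it's the same thing you'd do by hand with mdadm; a minimal sketch with placeholder partitions (mdadm's raid10 level is its near-equivalent of 0+1, and /boot traditionally sits on plain RAID1 so the bootloader can read it):
code:
mdadm --create /dev/md0 --level=1  --raid-devices=2 /dev/sda1 /dev/sdb1               # /boot
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
mkfs.ext4 /dev/md1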

Laserface
Dec 24, 2004

Sipping harsh elixir,
the ice above us melting fast


Hey guys! I'm somewhat new to the idea of NAS!

I work in desktop support (Mac/PC) and have a deal with Netgear through my work that I get stuff really cheap (less than half retail).

I just bought a Netgear ReadyNAS NV4000+ and am wondering the best way to configure it for my needs. I was thinking RAID5 is the way to go, but would like some input based on what I'll be using it for primarily.

I have about 1TB of data (documents, movies, TV shows, music and multitracking/edits of home recording/video) and I don't really delete anything unless I don't like it or know I will never watch it again.

Primary use is going to be storing media to share between 2 PCs and streaming media to a network-capable media player and an Xbox 360, but I would like to get thoughts on how I can use it as a backup (currently using a 2TB WD external with EZbackitup).

All PCs are running Windows 7 Home Premium or greater.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Solaris Express 11 is out, FYI. Comes finally with ZFS encryption.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


Combat Pretzel posted:

Solaris Express 11 is out, FYI. Comes finally with ZFS encryption.

What's the licensing like on this? Is it shitty like Solaris 10, or is it better? AKA can I use it without throwing a fuck ton of money at Oracle?

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire


NAS folks:

So would it be cheaper to buy a pre-made mini-box like the ACER REVO for media center playing, then attach a JBOD array externally via eSATA or Cat6, or just build an HTPC with all of the big HDs I would have put in the JBOD internally, running something like Windows Home Server that can make an easy RAID of all of those disks into one?

I don't really want to get into the whole unix pro-RAID thing like most of you folks are doing in this thread, I just want a box that I can add another 2TB hard drive to when I run out of space without having to rebuild a raid or something.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

FISHMANPET posted:

What's the licensing like on this? Is it shitty like Solaris 10, or is it better? AKA can I use it without throwing a fuck ton of money at Oracle?
Free for development, testing and demonstration purposes. So unless you're running it in production in a bigco, I doubt anyone will give a shit.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.


jeeves posted:

NAS folks:

So would it be cheaper to buy a pre-made mini-box like the ACER REVO for media center playing, then attach a JBOD array externally via eSATA or Cat6, or just build a HTPC with all of the big HDs I would have put in the JBOD internally but running something like Windows Home Server that can make a easy raid of all of those discs into one?

I don't really want to get into the whole unix pro-RAID thing like most of you folks are doing in this thread, I just want a box that I can add another 2TB hard drive to when I run out of space without having to rebuild a raid or something.

I say do the nettop and a Windows Home Server box separately. Unless you need more than 4 internal hard drives/6-7 total, it's the cheapest way to make things as easy as possible and still do what you want. If you need more than that, it'll be significantly more expensive and at least a bit more complicated to set up. A single box will be faster since all the drives are internal, but a giant computer case holding 10 drives would look shitty as part of a home theater setup, and it would probably be obnoxiously loud.

For the record, we bother with the whole Unix pro-RAID thing because doing things in Windows and/or with pre-built computers can be difficult, expensive, and restrictive (Home Server is only a recent option). Things like ZFS on Solaris let us do "just put in a drive and grow the space we can use" without being locked into a tiny Windows server that may not give us all the options we want. Or we can avoid buying a $2,000 piece of dedicated NAS or DAS hardware just for the privilege of spending $1,000 on eight hard drives and still needing a computer to actually use it.

But anyway, rationales:

For cost - Honestly, it's probably cheaper to build one computer and have it handle both storage and media playback. But cheap, easy, and aesthetically-pleasing ways to hold hard drives in a small space stop existing around 4 internal drives, and it's tricky to get good-enough graphics in that space, too.

For ease of setting things up - WHS is king, but you can't use a WHS box as an HTPC, so get a nettop or whatever to use as the HTPC.

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire


Why can't the WHS be used as an HTPC? If you have a decent enough video card, can't you just have Media Player Classic run things? My girlfriend has gotten used to my laziness with not setting up a full HTPC environment with a remote/indexed whatever, so I would think WHS could run most stuff just fine?

Gilg
Oct 10, 2002



I realize this is more of a NAS thread than a generic storage thread, so excuse me if this is the wrong place, but I'm looking for recommendations on a USB drive like this. I'm primarily interested in a USB 2.0 drive, powered by the USB cable, and greater than 500 GB. Thanks.

Edit: Yeah, I plan on getting this one if there's not a compelling reason to get a different one. Thank you.

Gilg fucked around with this message at 02:24 on Nov 18, 2010

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.


Why not that one you linked? WD is a well-respected name around here.

jeeves posted:

Why can't the WHS be used as a HTPC? If you have a decent enough video card can't you just have Media Player Classic run things? My girlfriend has gotten used to my laziness with not setting up a full HTPC environment with a remote/indexed whatever, so I would think WHS could run most stuff just fine?



If you want to roll your own, sure, I guess it would have a way to plug into a TV. But it's not tuned for running applications you interact with via screen and keyboard; it's tuned for reacting to requests from a network, so no guarantees that it would work as well as a non-server OS on cheaper hardware.

Telex
Feb 11, 2003



jeeves posted:

Why can't the WHS be used as a HTPC? If you have a decent enough video card can't you just have Media Player Classic run things? My girlfriend has gotten used to my laziness with not setting up a full HTPC environment with a remote/indexed whatever, so I would think WHS could run most stuff just fine?

Even on a roll-your-own setup, which I did recently, disk I/O and network I/O take precedence over anything you try to do with the machine. I was running a rather beefy server-style machine (16GB RAM, Radeon HD 2900, dual-core processor) and playing videos was not an ideal scenario. Lots of skips and stutters.

Part of it is probably WHSv2 being sorta shitty right now, and part of it might have been the WD Advanced Format drives that seem to be total bullshit, but the entire experience was not great.

So I resigned myself to the idea of a NAS being a NAS and having a separate XBMC machine. Look into a Boxee Box, those look pretty nice as far as not having to have a real separate PC goes, and get a beefy enough server to handle the drives you want or whatever.

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire


That is really disheartening to hear. Right now I have a 2-year-old ASUS EEE-Box as my home server, which streams out (via an ethernet cord, not wireless) to my girlfriend's Mac laptop, which we have been using as our HTPC/media player for the TV. I keep most of my media archive on two terabyte drives inside my own desktop (separate from the ASUS), but since I like to turn it off a lot to save power, my girlfriend can't easily access the media on it; plus, due to crappy design on my desktop's case, I can't actually install any new hard drives while also having one of those comically large video cards that have become the norm.

I was hoping to migrate all of my archive drives out to an HTPC and use it for both expandable storage and media playing, but if WHS can't do that well then that is a huge letdown. I'd rather not have to buy another tiny PC just for playing, as I'm going to keep my ASUS box as my remote desktop/downloads/low-power 24/7 server and it's not powerful enough to play HD stuff; I was hoping to have a bit of a beefier machine for the archives/playing that I could turn off/hibernate to save on power when not in use.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



jeeves posted:

That is really disheartening to hear. Right now I have a 2 year old ASUS EEE-Box as my home server, which streams out (via an ethernet cord, not wireless) to my girlfriend's mac laptop which we have been using as our HTPC/media player to the tv. I keep most of my media archive on two terabyte drives inside my own desktop (seperate from the ASUS) but since I like to turn it off a lot to save power, my girlfriend can't easily access the media on it-- plus due to crappy design on my desktop's case, I can't actually install any new hard drives while also having one of those comically large nice video cards that have become the norm.

I was hoping to migrate all of my archive drives out to a HTPC, and use it for both expandable storage+media playing, but if WHS can't do that well then that is a huge let down. I'd rather not have to buy another tiny-PC just for playing, as my ASUS Box I'm going to keep as my remote desktop/downloads/low power-24/7 server and it is not powerful enough to play HD stuff, and I was hoping to have a bit of a beefier machine for the achieves/playing that I could turn off/hibernate to save on power when not in use.

Yeah, WHS just isn't built for this sort of scenario. You're going to be much better off with a dedicated server. If you're insistent on not doing that, I would probably just use a regular Win7 install instead of WHS for my HTPC and just share the drives you hook up to it.

I've been serving media off of a WHS to a little Atom-powered nettop hooked up to my HDTV for a long while and couldn't be happier.

movax
Aug 30, 2008



Any suggestions for a cheap PCI (or PCIe) SATA RAID card for my desktop? I'd like to just slap 2 2TB drives in RAID 1 as a solid scratch disk to have in conjunction with my NAS. Chipset is a P55, but I'm leery of doing soft-RAID.

Also, pretty sure I've settled on the 7K2000s, just going to wait for them to dip below the $100 mark.

adorai
Nov 2, 2002

10/27/04 Never forget

Grimey Drawer

movax posted:

Any suggestions for a cheap PCI (or PCIe) SATA RAID card for my desktop? I'd like to just slap 2 2TB drives in RAID 1 as a solid scratch disk to have in conjunction with my NAS. Chipset is a P55, but I'm leery of doing soft-RAID.

Also, pretty sure I've settled on the 7K2000s, just going to wait for them to dip below the $100 mark.
You can get some nice LSI Logic SAS cards on eBay for under $50 that support RAID 1 for sure.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.


movax posted:

Any suggestions for a cheap PCI (or PCIe) SATA RAID card for my desktop? I'd like to just slap 2 2TB drives in RAID 1 as a solid scratch disk to have in conjunction with my NAS. Chipset is a P55, but I'm leery of doing soft-RAID.

Also, pretty sure I've settled on the 7K2000s, just going to wait for them to dip below the $100 mark.

Watch out on those eBay cards. If they don't have a RAM cache on board, I can almost guarantee you that they are also softRAID cards.

e: Or it should say "Host based." If it doesn't say "Host based" and has no RAM cache, it's almost definitely softRAID.

movax
Aug 30, 2008



I will look into LSI, thanks! Should I stay far away from Highpoint/RocketRAID/Promise?

I've just gotten a bit uneasy lately with my JBOD in my desktops...a 750, a 1TB and a 640 that I use as a scratch dump for files downloaded and crap.

Telex
Feb 11, 2003



Factory Factory posted:

Watch out on those eBay cards. If they don't have a RAM cache on board, I can almost guarantee you that they are also softRAID cards.

e: Or it should say "Host based." If it doesn't say "Host based" and has no RAM cache, it's almost definitely softRAID.

So if you're on a machine using ZFS, how much does this matter?

I'd kinda like to consolidate and put all my internal drives on a single card, but I'm intending to stick with ZFS if possible. I can't really find any decent 8-port SATA cards, and not many good 4-port ones either...

If I get a raid card and I'm using zfs, does zfs even take any advantage whatsoever of any ram on a controller card?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.


Short answer: if the ZFS pool is built from individual disks, it doesn't matter at all, since there are no host-based controllers that do RAIDZ and no RAID(#) arrays to be accelerated.

If the ZFS pool is built out of multiple RAID arrays (like RAID5 or RAID50) for enterprise-type operations, you should already have host-based controllers, because otherwise the software overhead for storage would make the crazy setup unusable. But the RAID controllers still wouldn't affect ZFS's operations directly, just the arrays being added as disks in the pool. That would free some processor time and RAM cache for ZFS, but likely not tons.

And really, if you're using ZFS, the best reasons to consolidate individual drives onto a single controller are to make the cabling neater and to free a card slot. I mean, assuming the separate ones already being used aren't crap. Take that as you will.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA


Factory Factory posted:

Short answer: if the ZFS pool is built from individual disks, it matters none at all, since there are no host-based controllers that do RAIDZ and no RAID(#) arrays to be accelerated.

If the ZFS pool is built out of multiple RAID arrays (like RAID5 or RAID50) for enterprise-type operations, you should already have host-based controllers because otherwise software overhead for storage would make the crazy setup unusable. But the RAID controllers still wouldn't affect ZFS's operations directly, just the arrays being added as disks in the pool. Though that would free some processor time and RAM cache for ZFS, but likely not tons.

And really, if you're using ZFS, the best reasons to consolidate individual drives to a single controller is to make the cabling neater and free a card slot. I mean, assuming the separate ones already being used aren't crap. Take that as you will.

Uhh, you never ever want to have a hardware RAID with a ZFS setup. It fucks with ZFS's ability to intelligently deal with your disks, and generally just complicates things. The ZFS overhead is pretty marginal compared to anything else. A full rebuild of a disk ended up costing me like 13% CPU time on a single core for the length of the rebuild.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


Yeah, you don't want any hardware raid with ZFS. The LSI 2 port SAS card (which breaks out into 8 SATA ports) is the crowd favorite here. I've got one and it's a rock solid champion.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.


Methylethylaldehyde posted:

Uhh, you never ever want to have a hardware RAID with a ZFS setup. It fucks with ZFS's ability to intelligently deal with your disks, and generally just complicates things. The ZFS overhead is pretty marginal compared to anything else. A full rebuild of a disk ended up costing me like 13% CPU time on a single core for the length of the rebuild.

I thought that too, and I originally wrote my answer that way, but after looking over the ZFS best practices guide, I noticed that they specifically said that ZFS works well for iSCSI targets and RAID 5 or mirrored logical disks. vv

Another handy tip from the guide: Don't create a ZFS pool with more than 40 logical devices. I wouldn't be surprised if that were important to some people ITT.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


Factory Factory posted:

I thought that too, and I originally wrote my answer that way, but after looking over the ZFS best practices guide, I noticed that they specifically said that ZFS works well for iSCSI targets and RAID 5 or mirrored logical disks. vv

Another handy tip from the guide: Don't create a ZFS pool with more than 40 logical devices. I wouldn't be surprised if that were important to some people ITT.

I had to work really hard to talk my boss out of making our Thumper (Sun X4500 with 48 SATA drives) into one giant pool. I wanted 4; we compromised on 2.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA


FISHMANPET posted:

I had to work really hard to talk my boss out of making our Thumper (Sun X4500 with 48 SATA drives) into one giant pool. I wanted 4, we compromised on 2

One giant pool is fine, you just have to break the pool down into smaller vdevs, because the IOPS penalty for large vdevs starts to outweigh the sequential benefits. That, and resilvering a disk after a failure takes fucking forever. Most places I know actually do one big pool, because unless you're going to be changing the disk size or RAID geometry at some later point, there is no real reason not to. That, or you have some application that needs guaranteed IOPS and throughput, although you can finagle that using some bizarre options in the kernel config.

And while you can use ZFS on anything that presents itself as a HDD, there are some things to keep in mind. For instance, you don't want to use a hardware RAID card because it can fuck with ZFS's cache flush code; having your RAID card's 1GB cache flush every 7 seconds murders the fuck out of performance. There are a few things like this that happen with a hardware acceleration and abstraction engine like the one on a RAID card.
Sorta like metal in the microwave: you can do it, but the rules for doing so are so byzantine that it's way easier just to say "no, don't".
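
In zpool terms, "one pool, smaller vdevs" just looks like this (placeholder Solaris device names, with six-disk raidz2 vdevs as an example width):
code:
zpool create bigpool \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
  raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
  spare  c3t0d0 c3t1d0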

Methylethylaldehyde fucked around with this message at 17:51 on Nov 19, 2010

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


Methylethylaldehyde posted:

One giant pool is fine, you just have to break the pool down into smaller vdevs. That's because the IOPS penalty for large vdevs starts to outweigh the sequential benefits. That and re-silvering a disk after a failure takes fucking forever. Most places I know actually do that, because unless you're going to be changing the disk size or raid geometry at some later point, there is no real reason not to. That or you have some application that needs guaranteed IOPS and throughput, although you can finagle that using some bizarre options in the kernel config.

Erp, that's what I meant: one giant RAIDZ2 with 2 hot spares. Right now we've got two 22-disk RAIDZ2 vdevs and 2 hot spares.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



Any "gotchas" to using these drives in an mdadm raid5 array?

I know there have been some problems with WD Green drives and some sort of NAS solution that's popular in here, but I've lost track of the actual issues.

Shaocaholica
Oct 29, 2002

Fig. 5E


Not sure if this is the thread to ask but I couldn't find another one that was more suited.

Recently I bought two RAID5 DAS boxes and both have issues with my machine. They are both based on the Oxford 936 chipset and seem to be the best in their class.

One is the OWC Mercury Elite Qx2
http://eshop.macsales.com/item/Othe...ng/MEQX2KIT0GB/

The other is a Raidon GR5630-4S-WBS2
http://www.newegg.com/Product/Produ...2-003-_-Product

The hard drives I'm using are Samsung HD204UI 2TB drives, which have all passed a full surface scan with Samsung's ESTOOL and a low-level format.

The machine I'm connecting these two boxes to is a Shuttle SP35P2 Pro, which has an Intel ICH9 SATA controller running in AHCI mode. The Shuttle has the latest BIOS. I also did a fresh install of Win7 Pro 64-bit and installed the latest Intel SATA drivers (~March 2010).

Whenever I attach these boxes to my machine via eSATA, Windows shows them as 1.4TB disks even though they should have ~5.5TB with 4x2TB in RAID5. I made sure that any 2TB limiters were not enabled. However, if I attach the units via FireWire, I see them with the correct 5.5TB size and can format them (GPT, of course). When I reattach them via eSATA, they show up with the correct 5.5TB volume I made over FireWire.

Here's where it gets screwy. I can write up to ~2.5TB to the volume fine over eSATA, but when I go over that the whole array will stop responding and I'll be forced to power cycle the RAID box. Neither box reports any kind of error or disk failure when this happens. When I remount the array, the partition is just gone. I can predictably do this over and over, so it's not just a one-time thing.

Now I'm hesitant to get another box because I'm not sure if the problem is with my ICH9 or something else.

Any thoughts? I've kinda got my eye on the new 2nd-gen Drobo S, but it's hella pricey.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



Thermopyle posted:

So, this and two of these?

What's the performance like on these?

I think this got lost on the end of last page...

Just making sure this is the LSI card that everyone talks about (and that those are the right cables).

adorai
Nov 2, 2002

10/27/04 Never forget

Grimey Drawer

Thermopyle posted:

I think this got lost on the end of last page...

Just making sure this is the LSI card that everyone talks about (and that those are the right cables).
This is close to what I was referring to, though it has 3 ports instead of 2.

http://cgi.ebay.com/LSI-SAS3041E-HP...=item20b254402e

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.


Shaocaholica posted:

Big volume doesn't work in eSATA but does in FireWire

I Googled a bit, and I think it's related to why 3TB drives are being shipped with controller cards. The 2TB volume limit isn't just a software issue; it's part of many hardware controllers as well. The ICH9 southbridge seems to have such a limit; people have had trouble creating RAID arrays with usable space larger than 2 TB (the actual limit is ~2.19 TB). SoftRAID wouldn't be affected, as the chipset still deals with the drives as individual disks.

Best bet for now is probably to either stay on FireWire or get a controller card that offers eSATA without the 2 TB limit.
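
That ~2.19 TB figure lines up exactly with 32-bit sector addressing of 512-byte sectors:
code:
echo $(( 2 ** 32 * 512 ))    # 2199023255552 bytes, i.e. ~2.2 TB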

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


adorai posted:

This is close to what I was referring to, though it has 3 ports instead of 2.

http://cgi.ebay.com/LSI-SAS3041E-HP...=item20b254402e

Welp, that's a nice card. 4 ports on that though, not 3, so it can drive 16 SATA drives. Very nice.

dietcokefiend
Apr 28, 2004
HEY ILL HAV 2 TXT U L8TR I JUST DROVE IN 2 A DAYCARE AND SCRATCHED MY RAZR

Quick question for one of you. I built a 4-drive RAID5 using mdadm in Ubuntu and I'm getting about 60/90MB/s read/write. For some reason I swore software RAID5 would be slightly slower on write, but that read would have still been great.

EDIT: Share is over Samba to a Windows 7 system, and this is what I get testing the speeds on the system:

root@ububox:~# hdparm -Tt /dev/md0

/dev/md0:
Timing cached reads: 2520 MB in 2.00 seconds = 1260.28 MB/sec
Timing buffered disk reads: 1126 MB in 3.00 seconds = 375.23 MB/sec


Before this I had RAID1 on two 1TB drives and two 2TB drives, and I could pretty much saturate 1Gb LAN on read/write.
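
One knob worth checking for mdadm RAID5 write speed is md's stripe cache; a quick sketch (8192 is just an example value, and md0 is assumed):
code:
cat /proc/mdstat                                   # make sure the initial resync has finished
cat /sys/block/md0/md/stripe_cache_size            # default is usually 256
echo 8192 > /sys/block/md0/md/stripe_cache_size    # bump it and re-test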

dietcokefiend fucked around with this message at 19:30 on Nov 20, 2010

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



adorai posted:

This is close to what I was referring to, though it has 3 ports instead of 2.

http://cgi.ebay.com/LSI-SAS3041E-HP...=item20b254402e
Nice.

I think I'll get one of these and some fanout cables.

edit: Fanout cables are frickin' expensive, and it seems like there are different types...

edit2: Wait, HP's specs on that card say it has 4 internal SATA ports. The only fan-out cables I can find are Mini-SAS to SATA. I'm ignorant.

edit3: Ok, I don't think this card is what I was talking about. As far as I can tell, you have to have a card with Mini-SAS internal connectors to use fanout cables to multiple SATA drives. This is unfortunate, because I already ordered the card you linked to. :/

Thermopyle fucked around with this message at 22:51 on Nov 20, 2010

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



double post
