titaniumone posted:I can get a 4-Port Intel PRO/1000PT PCI-E on Ebay for about $100, and my switch (Cisco 3560g) supports LACP, so I should be able to set this up easily. Is that an x4 or an x8 card? I wouldn't mind doing the same, but I have no free slots for it. I guess I could team the two x1 Intel adapters I have for now.
movax posted:24GB of RAM and 8 threads, solely for file-serving at the moment. IT Guy posted:True, unless you LAGG your endpoint as well. I love how you blurb for 6 lines before mentioning this gem. evil_bunnY fucked around with this message at 16:39 on Apr 12, 2012
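For anyone wondering what "LAGG your endpoint" actually involves, here's a minimal sketch of an LACP aggregate on the FreeBSD/FreeNAS side; the interface names and address are placeholders, and the matching switch ports would need to be configured as an LACP channel group too.

```sh
# Load the link-aggregation driver and bring up the two Intel NICs
kldload if_lagg
ifconfig em0 up
ifconfig em1 up

# Bundle them into one LACP aggregate and assign the address to the bundle
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport em0 laggport em1 192.168.1.10/24 up
```

Worth remembering that LACP hashes each flow onto a single physical link, so one client copying a file still tops out at gigabit; the aggregate only helps when several clients hit the box at once.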
evil_bunnY posted:I love how you blurb for 6 lines before mentioning this gem. It was all so cheap
I'm sitting here idly thinking about how I'm going to handle migrating my 16TB of data once hard drive prices come back down a bit. You may or may not recall me talking in here months ago about how I accidentally discovered ext4 has a 16TB filesystem limit. Well, I need more storage, and I value having it all as one big blob instead of separate filesystems. My current setup is multiple mdadm RAID5 arrays joined with LVM, with an ext4 filesystem on top. My experience makes me lean towards keeping my Ubuntu server, sticking with the mdadm RAID5 + LVM setup, and just migrating to a new filesystem that supports >16TB.
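For anyone picturing the stack being described there, this is roughly how an mdadm RAID5 + LVM + ext4 layout like that gets built; the device names and layout are made-up examples, not the actual config.

```sh
# Two separate RAID5 arrays, each from its own set of disks (hypothetical devices)
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[fghi]

# Join both arrays into one volume group and carve a single big logical volume from it
pvcreate /dev/md0 /dev/md1
vgcreate storage /dev/md0 /dev/md1
lvcreate -l 100%FREE -n data storage

# One filesystem across the whole thing -- this is where ext4's 16TB ceiling bites
mkfs.ext4 /dev/storage/data
mount /dev/storage/data /srv/data
```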
Thermopyle posted:I'm sitting here idly thinking about how I'm going to handle migrating my 16TB of data once hard drive prices come back down a bit. You may or may not recall me talking in here months ago about how I accidentally discovered ext4 has a 16TB filesystem limit. EXT4 has a file size limit of 16TB, not volume size. Edit: After Googling it, it appears to be a partition size limit with e2fsprogs. IT Guy fucked around with this message at 16:53 on Apr 12, 2012
IT Guy posted:EXT4 has a file size limit of 16TB, not volume size. Yeah, this. Ext4 supports like an exabyte or something ridiculous volume-wise. No modern file system in use on NASes has a native limit you'll hit. Artificial ones like Nexenta Free, or maybe path depth/size issues, but not volume. FAT32 has a 16TB limit though
Thermopyle posted:I'm sitting here idly thinking about how I'm going to handle migrating my 16TB of data once hard drive prices come back down a bit. You may or may not recall me talking in here months ago about how I accidentally discovered ext4 has a 16TB filesystem limit. BTRFS can do an in-place conversion: https://btrfs.wiki.kernel.org/artic..._Ext3_6e03.html
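For reference, the conversion that wiki page covers is done with btrfs-convert; a rough sketch of the procedure (device and mount point are just examples), with the obvious caveat that you'd want a backup before trying it on 16TB of data:

```sh
# The filesystem must be unmounted and clean before converting
umount /srv/data
fsck.ext4 -f /dev/storage/data

# Convert the ext3/ext4 metadata to btrfs in place; the original filesystem is
# preserved as a read-only image under a subvolume named ext2_saved for rollback
btrfs-convert /dev/storage/data

# Mount it, and once you're satisfied, drop the saved image to reclaim the space
mount /dev/storage/data /srv/data
btrfs subvolume delete /srv/data/ext2_saved
```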
IT Guy posted:EXT4 has a file size limit of 16TB, not volume size. movax posted:Yeah, this. Ext4 supports like an exabyte or something ridiculous volume-wise. No modern file system in use on NASes has a native limit you'll hit. Artificial ones like Nexenta Free, or maybe path depth/size issues, but not volume. Yes. I talked to Theodore Ts'o about it at the time. He told me that I'm basically SOL if I don't want multiple partitions. Ext4 doesn't support >16TB partitions in practice. Specifically, the on-disk format as designed does support it, but none of the released tools in actual use can create or work with a filesystem that large. Ts'o posted:There isn't a way to get around this issue, I'm afraid. Support for > Thermopyle fucked around with this message at 16:57 on Apr 12, 2012
Wait, huh? What is going on? What the hell is a partition limit on a file system? That doesn't even make any sense to me.
Thermopyle posted:Yes. I talked to Theodore Ts'o about it at the time. He told me that I'm basically SOL if I don't want multiple partitions. Ext4 doesn't support >16TB partitions. Does he have plans to fix this in newer versions of his software or is there a physical limit he's hitting as well? edit: FISHMANPET posted:Wait, huh? What is going on? What the hell is a partition limit on a file system? That doesn't even make any sense to me. http://blog.ronnyegner-consulting.d...mit-now-solved/
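For what it's worth, that limit lives in the userspace tools rather than the on-disk format, and later e2fsprogs releases added support for ext4's 64bit feature, which is what lifts the 16TiB ceiling. A sketch of what that looks like with a new-enough toolchain (device and mount point are placeholders):

```sh
# Create a >16TiB ext4 filesystem directly (needs an e2fsprogs that knows the 64bit feature)
mkfs.ext4 -O 64bit /dev/storage/data

# Or take an existing filesystem offline, convert it to 64bit, then grow it past 16TiB
umount /srv/data
resize2fs -b /dev/storage/data
resize2fs /dev/storage/data
```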
Thermopyle posted:I'm sitting here idly thinking about how I'm going to handle migrating my 16TB of data once hard drive prices come back down a bit. You may or may not recall me talking in here months ago about how I accidentally discovered ext4 has a 16TB filesystem limit. You want XFS. FISHMANPET posted:Wait, huh? What is going on? What the hell is a partition limit on a file system? That doesn't even make any sense to me. He means volume size limit. Longinus00 fucked around with this message at 18:39 on Apr 12, 2012 |
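If XFS wins out, the migration would look something like this (paths and volume names are placeholders): build a fresh XFS filesystem on a new logical volume, copy everything across, and grow it later as more arrays are added, keeping in mind XFS can be grown but never shrunk.

```sh
# New XFS filesystem on a (hypothetical) new logical volume -- no 16TB ceiling here
mkfs.xfs /dev/storage/data_xfs
mount /dev/storage/data_xfs /srv/data_xfs

# Copy the data over, preserving permissions, hard links, ACLs and xattrs
rsync -aHAX /srv/data/ /srv/data_xfs/

# Later, after extending the LV onto new disks, grow the mounted filesystem in place
lvextend -l +100%FREE /dev/storage/data_xfs
xfs_growfs /srv/data_xfs
```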
Urgh, after seeing that $199 N40L I feel so dumb for buying 2 of them at $269. Oh well.
Sombrero! posted:Urgh, after seeing that $199 N40L I feel so dumb for buying 2 of them at $269. Oh well. Even at $269 it seems like a really great deal, compared to how much other devices on the market cost. (I bought one at $269)
Sometime today, while I wasn't at work, the N40L I have in place there locked up completely. Nobody could access any shares, etc. So I came in and found it was outputting a full screen of vertical black and white stripes and was completely unresponsive. My confidence in it is extremely shaken. This thing is supposed to work hassle-free for years once I finish setting it up...
DNova posted:Sometime today, while I wasn't at work, the N40L I have in place there locked up completely. Nobody could access any shares, etc. So I came in and found it was outputting a full screen of vertical black and white stripes and was completely unresponsive. Yeah, that sounds like a major hardware or memory problem. I'd run memtest on it, and if that shows clean, time to call HP and demand a new one.
UndyingShadow posted:Yeah, that sounds like a major hardware or memory problem. I'd run memtest on it, and if that shows clean, time to call HP and demand a new one.
It's just sitting on a cart out in the open. It would be nice if it was just the memory... I'll get memtest started on it in a few minutes. edit: first pass completed without errors... ugh. sleepy gary fucked around with this message at 08:05 on Apr 13, 2012 |
Anyone ever use a ReadyNAS X6? I got one for free a while back and am just about ready to pull the trigger on 4x 2TB drives for my media storage. Just wondering if it's worth using, or should I just pick up an N40L. Its use is strictly for TV/movie storage and that's it.
Is there any kind of virtualization software that you can run on top of FreeNAS?
Lowen SoDium posted:Is there any kind of virtualization software that you can run on top of FreeNAS? No but you can do it the other way around.
Lowen SoDium posted:Is there any kind of virtualization software that you can run on top of FreeNAS? VirtualBox mostly runs on FreeBSD, so it should run on FreeNAS as well. http://wiki.freebsd.org/VirtualBox
^^^ Thanks! I found this link discussing it more. DNova posted:No but you can do it the other way around. Yeah, but then I couldn't store the VM's disk files on the ZFS datastore. Lowen SoDium fucked around with this message at 14:48 on Apr 13, 2012
If you use, say, ESXi, you boot it from memory stick 1. Then you boot FreeNAS virtualized from memory stick 2, and pass your hard drives to it raw. Then you tie part of the volume back to ESXi as iSCSI or somesuch, and use it to store the other VMs.
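A rough sketch of the disk-passthrough half of that setup, assuming the drives are handed to the FreeNAS VM as raw device mappings; the device identifiers and paths below are placeholders, not anyone's actual hardware.

```sh
# On the ESXi host: create a physical-compatibility RDM pointer for each data disk,
# so the FreeNAS VM talks to the raw drive instead of a virtual disk
vmkfstools -z /vmfs/devices/disks/t10.ATA_____ExampleDisk1 \
    /vmfs/volumes/datastore1/freenas/disk1-rdm.vmdk
vmkfstools -z /vmfs/devices/disks/t10.ATA_____ExampleDisk2 \
    /vmfs/volumes/datastore1/freenas/disk2-rdm.vmdk

# Attach the *-rdm.vmdk files to the FreeNAS VM as existing disks, build the ZFS
# pool inside FreeNAS, then export a zvol over iSCSI and add it back to ESXi as
# a datastore for the rest of the VMs.
```

(As the posts below note, RDM passthrough like this can perform far worse than FreeNAS running on the bare metal, so treat it as a convenience trade-off rather than a free lunch.)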
Factory Factory posted:If you use, say, ESXi, you boot it from memory stick 1. Then you boot FreeNAS virtualized from memory stick 2, and pass your hard drives to it raw. Then you tie part of the volume back to ESXi as iSCSI or somesuch, and use it to store the other VMs. I know for a fact that this would work, but I would rather not do it this way.
Factory Factory posted:If you use, say, ESXi, you boot it from memory stick 1. Then you boot FreeNAS virtualized from memory stick 2, and pass your hard drives to it raw. Then you tie part of the volume back to ESXi as iSCSI or somesuch, and use it to store the other VMs. Just want to say that I just did this and got pathetic performance on brand new hardware, and while I constantly run into people saying to do it this way, they all magically never seem to see my follow-up post about the poor performance...
marketingman posted:Just want to say that I just did this and got pathetic performance on brand new hardware, and while I constantly run into people saying to do it this way they all magically never seem to see my follow up post about the poor performance... It must be a REALLY shitty USB card, because aside from swap space and booting, you don't really use the card at all.
marketingman posted:Just want to say that I just did this and got pathetic performance on brand new hardware, and while I constantly run into people saying to do it this way, they all magically never seem to see my follow-up post about the poor performance... LmaoTheKid posted:It must be a REALLY shitty USB card, because aside from swap space and booting, you don't really use the card at all. I don't think he's talking about root FS IO performance
evil_bunnY posted:I don't think he's talking about root FS IO performance Whoops! Didn't even think of that. Yeah dude, if you were running your VMs off of the stick, you're doing it wrong.
Mapping local disks as RDMs through to a NAS VM seems to give terrible performance for me, about 1/10th of what it should be. Running the same ZFS pool on the same hardware, booting (say) FreeNAS natively off a USB drive, gives exactly the performance expected. And basically, yes, I want this all to end, why am I doing this at home daslnfskldnfsdfjnslk Edit: Uhh, can you even create a datastore on a USB stick? The VM datastore for the initial NAS VM was a single SATA disk - not many IOPS, but as you noted, there's no need for them.
Anyone done any extensive testing on Windows 8 Storage Spaces? I was running WHS with Drive Extender, but my system drive is having issues, and rather than go through the trouble of trying to reinstall WHS on a new drive and rebuild my Drive Extender array, I'd rather just move away from the tech, since it's EOL. Storage Spaces sounds like exactly what I need, since I have various SATA and USB drives in different sizes, but I ran into some issues when first messing around with it. I set up a parity storage space with 4 drives: 2TB, 1TB, 750GB, and 60GB. Once the smallest drive in the space was full, the whole storage space froze, and the Storage Spaces management tool became unresponsive, even after I restarted Windows 8. I eventually got it to unfreeze after I just unplugged the 60GB drive. I was under the impression that a parity storage space could span different-size drives and wouldn't necessarily be limited by the size of the smallest drive. I just made a new array with the three bigger drives, but I'm afraid that once the 750GB fills up, I'll hit the same issue. I realize it's just a Consumer Preview, but I've found a few other people with the same problem, and it's a pretty serious one.
hucknbid posted:Anyone done any extensive testing on Windows 8 storage spaces? I was gonna say "hey, a friend of mine is having that same problem" and then, welp. I think if you had two 750GB drives it might work (with the system treating them essentially as one 1.5TB drive for parity), since that's the common-sense way to implement it, but who knows what Microsoft actually does.
Is there a major problem with mixing drive speeds in a RAID5? I have a set of WD EARS drives at 5400 and I got a set of 7200 Hitachis to replace a dying pair with. They have the same cache size, but I don't know if them being this different is a problem. Should I return these and get a set of 5400s or would it be fine to do it like this and just replace the other EARS drives later? Synology DS410j, if that makes a difference.
Echophonic posted:Is there a major problem with mixing drive speeds in a RAID5? I have a set of WD EARS drives at 5400 and I got a set of 7200 Hitachis to replace a dying pair with. They have the same cache size, but I don't know if them being this different is a problem. Should I return these and get a set of 5400s or would it be fine to do it like this and just replace the other EARS drives later? Synology DS410j, if that makes a difference. There's no major problem in the sense that it'll all work. Note, however, that without fancy RAID-drives, you'll be limited to the slowest drive's speed, so your 7200RPM Hitachis will be hamstrung by the EARS lower performance. But yeah, it'll all work.
Another N40L question: is the 2TB drive spec a hard limit or are there ways around it? Mine just arrived today... really pleased at how small it is compared to my HTPC.
2TB is not really a hard limit. It's all that's officially supported, but whatever. Many people have run 4x3TB drives in the N40L, and the only limitation seems to be it isn't happy if you try to boot off of such a drive setup, but you should be booting off a separate USB drive or the like anyhow, so that shouldn't be an issue.
DrDork posted:There's no major problem in the sense that it'll all work. Note, however, that without fancy RAID-drives, you'll be limited to the slowest drive's speed, so your 7200RPM Hitachis will be hamstrung by the EARS lower performance. But yeah, it'll all work. Alright, that sounds reasonable enough. I figured as much that I'd be limited to the slowest drive, but worst case I can just buy two more Hitachis this summer and replace the last two and get the speed up a little and find another use for those green drives. Maybe get a RAID1 caddy or something for backups to plug into the back of the NAS.
DEAD MAN'S SHOE posted:Another N40L question: is the 2TB drive spec a hard limit or are there ways around it? The chipset supports 3.2TB HDDs. So you'll be wasting space and money if you buy 4TB drives, but 3TB drives would be perfect. If you saw 2TB batted around in N40L conversations, it was probably in reference to the backup limits supported by WHS2011 - the drive gets partitioned into two.
3.2 TB is a very odd number that I've not encountered in drive capacity limits before. How did they end up there?
Another N40L question here: I've got mine coming in on Monday, with 4x 2TB drives (Samsung F4s) and a flash drive for the OS. My intent is to run SABnzbd, SickBeard, CouchPotato, and rTorrent on top of your typical NAS fileserving. I'm familiar with unix-based stuff, and the general consensus seems to be that ZFS/RAID-Z is super legit. So the question: is my best route, from a usability, speed, and safety standpoint (assume I'm also taking snapshots here), to go with a FreeNAS setup and set up RAID-Z1 on one big pool with the 4 drives? Anyone want to throw a better/different option in the hat before I roll this all out on Monday?
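For reference, here's a minimal sketch of what a single 4-disk RAID-Z1 pool looks like underneath; the disk, pool, and dataset names are placeholders, and the FreeNAS GUI does the equivalent of this for you.

```sh
# One RAID-Z1 vdev across all four 2TB disks: roughly 6TB usable, survives one disk failure
zpool create tank raidz1 ada0 ada1 ada2 ada3

# Separate datasets keep snapshots and share permissions easier to manage
zfs create tank/media
zfs create tank/downloads

# Snapshots are cheap and can be scheduled from the FreeNAS UI
zfs snapshot tank/media@2012-04-16
zfs list -t snapshot
```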
Star War Sex Parrot posted:3.2 TB is a very odd number that I've not encountered in drive capacity limits before. How did they end up there? http://forums.overclockers.com.au/s...958208&page=316 Others speculate that if you just use a more modern SATA PCI-E card, you'll be able to use the entire thing. 2TB drives still seem to be the price/capacity king, so that's what I'm sticking with for now.