Factory Factory posted:
32 GB of RAM isn't enough for more than 32 TB of ZFS storage anyway, is it? You'd want to go with another softRAID.

After about 8 GB you don't need to give too many shits about ZFS RAM usage, as long as you don't enable dedupe (never enable dedupe). I have 32GB in my machine, but I also have ~90TB of disk in it. It works fine. My ARC never gets more than 70% full because I just don't access enough data fast enough to keep it from expiring first.

Edit: The only time you really do want more RAM is if you have a lot of L2ARC, which nobody here would invest in enough of for it to make a difference. 'A lot of' here means several TB worth.
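If anyone wants to see where their own ARC sits, here's a quick sketch of what I'd poke at on a FreeBSD/FreeNAS box (the 24 GiB cap at the end is purely an example value, not a recommendation):

code:
# how big the ARC currently is vs. its ceiling, in bytes
sysctl kstat.zfs.misc.arcstats.size
sysctl vfs.zfs.arc_max

# hit/miss counters give a rough feel for whether more RAM would even help
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

# to cap the ARC (say, to leave room for VMs), set a loader tunable and reboot;
# the value is in bytes -- this example works out to 24 GiB
echo 'vfs.zfs.arc_max="25769803776"' >> /boot/loader.conf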
Ah, good to know. Thanks.
I've just about maxed out my current tower case with 22 drives in it. I'm considering building something, but I'm lazy, so I might just buy something. What are some options for something to hold more than 22 drives? I'd consider new if the price is right, but I'm picturing some server hardware or something that I could snag for cheap off eBay...
Methylethylaldehyde posted:
After about 8 GB you don't need to give too many shits about ZFS RAM usage, as long as you don't enable dedupe (never enable dedupe).

90TB? What in the unholy fuck. I only have 7 total, 4.5 usable. What size disks and how many do you have? What grade of board? ECC, I assume? Also, what kind of case do you have to hold all this?

Megaman fucked around with this message at 21:23 on Oct 13, 2014
Thermopyle posted:
I've just about maxed out my current tower case with 22 drives in it. I'm considering building something, but I'm lazy, so I might just buy something.

I think your only options at that point are going to be rackmount-style cases. Norco makes a 4U 24-bay case, or you could spend some real money and get a Supermicro. They make a 4U that will hold 36 drives plus the server itself, or the same thing set up to be used as a JBOD enclosure that will hold 45 drives.
IOwnCalculus posted:
I think your only options at that point are going to be rackmount-style cases. Norco makes a 4U 24-bay case, or you could spend some real money and get a Supermicro. They make a 4U that will hold 36 drives plus the server itself, or the same thing set up to be used as a JBOD enclosure that will hold 45 drives.

Having owned both the Supermicro 24-bay case and the Norco 24-bay case, the Supermicro is the one that will end up lasting the full five-ish years you'd expect hardware to last these days. The Norco can be hit or miss in terms of backplane quality and overall fit and finish. Do keep in mind that the 36-bay monster case only allows half-height cards in it; the server space is only 2.2-ish U high.

Currently I have 3 chassis in use. One is a 24-bay Supermicro 4U case, with 10 4TB drives and 10 3TB drives. Attached via a SAS expander is a Supermicro 2U 24-bay 2.5" case, with a mix of 20 320GB drives and 20 500GB drives (they were free). Last is my Norco 24-bay 4U case that has my Hyper-V host sitting inside it. 5 of the 6 rows are attached via a SAS expander to the original storage host; the last row is for local disks for the Hyper-V host to play with. The Norco box has a bunch of 1 and 2TB drives in it.

The storage server is a Xeon V2, with a Supermicro board and 32GB of ECC RAM. The Hyper-V host is a 3930K with an X79 Workstation board and 64GB of regular RAM.

Methylethylaldehyde fucked around with this message at 00:03 on Oct 14, 2014
Thermopyle posted:
I've just about maxed out my current tower case with 22 drives in it. I'm considering building something, but I'm lazy, so I might just buy something.

http://www.supermicro.com/products/...E16-R1K62B2.cfm ?
http://www.supermicro.com/products/...46E26-R1200.cfm

That's the one I got. You want one with an E16 or E26 in the name; those are the ones with the built-in expander. It costs you extra, but saves you from having to plug in 3 SAS cards to drive the case. That way you can get a fancier SAS card.
Methylethylaldehyde posted:
Do keep in mind that the 36-bay monster case only allows half-height cards in it; the server space is only 2.2-ish U high.

They probably make a version that uses a riser card to let you lay down expansion cards horizontally, but what full-height cards would you really need here? Just about any decent SAS controller can be had in a half-height form factor, same for 1G or 10G NICs.
SamDabbers posted:
You may consider the Lenovo server deal I posted on the previous page. Similar horsepower, similar price, but you get a sweet hotswap case with it.

I would like to add that for similar hardware there is value in purchasing a complete unit, in terms of warranty and support. It saves a lot of time dealing with hardware issues when it all comes from one vendor. How does IPMI on the ASRock compare to Lenovo?
IOwnCalculus posted:
They probably make a version that uses a riser card to let you lay down expansion cards horizontally, but what full-height cards would you really need here? Just about any decent SAS controller can be had in a half-height form factor, same for 1G or 10G NICs.

You can get versions of the case with a Supermicro mobo that includes a riser card. The 2.2U thing is generally for things like CPU coolers and the like. I ended up not picking it up because I used an aftermarket Noctua cooler to keep the processor cool, since I swapped the 80x25mm Delta screamers for slightly less leaf-blower-y fans.
I just noticed the one I posted is for 2.5" drives. That's what I get for skimming the product page before I post!
IOwnCalculus posted:
Newegg Business already has it at $380 but unfortunately, it's all Marvell chipsets for the NIC and SAS - so BSD support is apparently shit.

god damnit.
Damn, those are expensive. I might just build something. Shouldn't be terribly hard to make something out of aluminum channel or something.
Have you heard of Backblaze? If you're thinking about building your own chassis, you should check out their storage pod and see what they've done: 45 drives in a 4U enclosure.
FISHMANPET posted:
Have you heard of Backblaze? If you're thinking about building your own chassis, you should check out their storage pod and see what they've done: 45 drives in a 4U enclosure.

The only problem with a Backblaze-style setup is that, last I checked, they were still using some old-school SATA port-multiplier technology. It'll work, but I'm pretty sure the individual drives get a lot less bandwidth than they do in a SAS environment (even accounting for SAS expanders).

If you want to just roll your own external enclosure to attach to your existing 22-bay box, just get the chassis of your choice and add one of these. It's a little board that will handle powering the chassis on/off as well as the system fans. I suppose there are probably cheaper ways to do it, but this one is still a relatively clean solution. Add some appropriate SAS cables / internal-to-external SAS brackets and you have yourself a JBOD.

Edit: The latest version of the Backblaze pod has actually gotten rid of the SATA multipliers and now just uses massive SAS controllers. Nice.

IOwnCalculus fucked around with this message at 04:12 on Oct 14, 2014
For cheap SAS expanders, take a look at the SGI SE3016 models all over eBay for < $170.
Supermicro also makes the SC847DE26-R2K02JBOD, which can hold 90 3.5" drives in 45 bays in a 4U, and the best part is you don't have to actually slide the server out on its rack mounts to remove drives (I shudder to think about doing that on a running server full of drives, like the Backblaze Pod requires).
IOwnCalculus posted:
The only problem with a Backblaze-style setup is that, last I checked, they were still using some old-school SATA port-multiplier technology. It'll work, but I'm pretty sure the individual drives get a lot less bandwidth than they do in a SAS environment (even accounting for SAS expanders).

Yep, they've gone SAS, but mostly I was pointing it out for the case, since Thermopyle was ready to design and build his own drive enclosure.
So my idiot self completely forgot that I have an entire fleet of useless computers sitting at work. I decided to bring one home with me and try to set up FreeNAS on it. I've actually done alright, but I'm a little confused about how to set up the folder structure, and also why they are so hardcore about not using SMB or AFP outside of your home network. I set my root password to something ridiculously long thanks to 1Password, and restricted the share to my username, which also has a ridiculously long password. Is that not safe enough? I've had my Synology sharing for over 3 years and never had an issue.

I'm also having trouble with the AFP share, where it freezes my Mac if I try to add to the folder by going to the Shared section under Finder. I can download from the folder without a problem. If I swap over to SMB, this doesn't happen.

Also, I'm a bit confused about how to set up my file structure. On my Synology I only had 6 folders (Media, Photo, Music, etc). Is that sort of thing possible with sharing enabled on FreeNAS? Everything I saw in videos had users creating separate datasets for each individual thing.
suddenlyissoon posted:
So my idiot self completely forgot that I have an entire fleet of useless computers sitting at work. I decided to bring one home with me and try to set up FreeNAS on it. I've actually done alright, but I'm a little confused about how to set up the folder structure, and also why they are so hardcore about not using SMB or AFP outside of your home network. I set my root password to something ridiculously long thanks to 1Password, and restricted the share to my username, which also has a ridiculously long password. Is that not safe enough? I've had my Synology sharing for over 3 years and never had an issue. I'm also having trouble with the AFP share, where it freezes my Mac if I try to add to the folder by going to the Shared section under Finder. I can download from the folder without a problem. If I swap over to SMB, this doesn't happen.

Multiple datasets can give you finer control over some things, but it doesn't sound like you want or need any of that. Make one dataset, share it however you want, and then make whatever folders you want in there. As for exposing its services to the Internet, that's up to you.
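If you'd rather do it from the shell than the FreeNAS GUI, a minimal sketch looks something like this (pool name 'tank' and the folder names are just placeholders; FreeNAS mounts pools under /mnt):

code:
# one dataset for everything...
zfs create tank/share
zfs set compression=lz4 tank/share

# ...and plain folders inside it, exposed as a single SMB/AFP share
mkdir -p /mnt/tank/share/Media /mnt/tank/share/Photo /mnt/tank/share/Music

# you'd only bother with extra datasets if some folder needed different
# settings (quota, snapshots, compression) from the rest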
suddenlyissoon posted:
I'm a little confused about how to set up the folder structure, and also why they are so hardcore about not using SMB or AFP outside of your home network.

Honestly, though, is there a reason you want to use SMB/AFP on an internet-facing service, instead of something like FTP w/SSL?
DrDork posted:
It's because FreeNAS is made with the mentality that either you are (1) too dumb to know how to set up security correctly, in which case you should not be sharing anything outside the home network, or (2) you are smart enough to set up security correctly, in which case you should also be able to figure out how to get it to share stuff outside the home network.

Well, only two reasons. First, I like to connect my work computer to the drives for easy transfer if needed. Second, my MacBook relies on network drives heavily for storage.
That sounds more like you need to set up a VPN rather than open up random holes in your firewall to the Internet. Your firewall is an Internet condom: the fewer holes, the better.
necrobobsledder posted:
That sounds more like you need to set up a VPN rather than open up random holes in your firewall to the Internet. Your firewall is an Internet condom: the fewer holes, the better.

There's really only 1 hole that needs to exist: 22
Megaman posted:
There's really only 1 hole that needs to exist: 22

Doesn't even have to be the default SSH port.
Has anyone used the FreeNAS VirtualBox plugin much? I moved from FreeNAS to Linux, since Linux has OK ZFS support now, because I'm bad at BSD and couldn't get VirtualBox working properly in a jail, and I want to run some lightweight stuff in a TinyXP VM. Now there's an official VirtualBox plugin. If I move back to FreeNAS, can I expect this to pretty much work as advertised? Any reason I shouldn't?

e: In related news, I've got an old-ass raidz2 array with 5 7200rpm Seagate Barracudas in it that probably should never have been in RAID in the first place (previously RAID5, previously RAID1), but they've kept on trucking 24/7. Now they don't even make 1.5TB drives anymore, and the oldest one just died on me out of warranty with like 6.5 years of power-on time. Guess it's time to start working on replacing the whole array... oh hey, look, WD Reds are on sale at Amazon! I can replace the whole array with a sane 2x4TB RAID1, but for...

poverty goat fucked around with this message at 01:44 on Oct 16, 2014
Farmer Crack-Ass posted:
Doesn't even have to be the default SSH port.

But why change it? Security through obscurity is not security.
Megaman posted:
But why change it? Security through obscurity is not security.

It doesn't provide any additional security; nobody said that it did. It certainly removes some clutter from the logs, though, if you don't have script kiddies knocking on your front door all day every day:

code:
fletcher posted:
It doesn't provide any additional security; nobody said that it did.

Setting login timeout grace periods also helps. I suppose in a way you're right, but script kiddies no longer matter if you're using keys anyway; not sure why you'd even have to look at the logs at that point.
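For reference, all of the knobs being argued about here live in sshd_config; a hedged sketch of the usual OpenSSH settings (the port number and timeout are just example values):

code:
# /etc/ssh/sshd_config -- restart sshd after editing
Port 2222                   # any non-default port cuts the drive-by scanner noise
PasswordAuthentication no   # keys only, so the bots that do find you get nowhere
PermitRootLogin no
LoginGraceTime 20           # drop unauthenticated connections after 20 seconds
MaxAuthTries 3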
I'm feeling the need... the need for massive amounts of high-redundancy storage.

Is there a consensus on the best RAID enclosure? I'm looking for something that connects via both USB 3.0 and eSATA (in case I need to use it with my oldish laptop, though just USB is also fine), can hold four or more drives, and will have decent speed, as I'll be using it for video editing and photo editing (of enormous 700MB TIFFs). I've heard Drobos are good, but I've also heard they're terrible. I've also heard that I "should be using ZFS raid, you scrub," but I need something to connect directly to my Windows box, unless Synology makes something with an SFP+ cage in it for direct attach.

What's good?

edit: Do any of y'all have experience with an Oyen Mobius?

atomicthumbs fucked around with this message at 09:48 on Oct 16, 2014
External cases with SFF-8088 miniSAS, USB3, FW800, or eSATA connectors can be had from here. It's worth mentioning that iSCSI will give you a connection that appears as a logical block device to the OS, and both Synology and FreeNAS (SHR and ZFS respectively) offer that.

Additionally, since you mention high redundancy, are you thinking single, dual, or triple parity? ZFS and SHR provide all three options, as well as hot-spare functionality.

D. Ebdrup fucked around with this message at 12:43 on Oct 16, 2014
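To make the parity levels concrete, here's roughly what they look like as ZFS pool layouts (the pool name and the da0-da6 device names are made up; pick whichever single layout fits your drive count):

code:
# raidz1/2/3 = single/dual/triple parity; each level costs one more disk
zpool create tank raidz1 da0 da1 da2 da3            # survives 1 drive failure
# zpool create tank raidz2 da0 da1 da2 da3 da4      # survives 2
# zpool create tank raidz3 da0 da1 da2 da3 da4 da5  # survives 3

# optional hot spare the pool can resilver onto when a member drive dies
zpool add tank spare da6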
Anyone using XPEnology? How stable is it? It seems like the best of both worlds for me. I'm thinking about taking my new server and setting it up in SHR with nothing but Plex, and leaving my critical data on one drive of my Synology DS212j. Maybe leave the 212j behind the VPN and set up SABnzbd, Sickbeard, etc. on it.
D. Ebdrup posted:
External cases with SFF-8088 miniSAS, USB3, FW800, or eSATA connectors can be had from here. It's worth mentioning that iSCSI will give you a connection that appears as a logical block device to the OS, and both Synology and FreeNAS (SHR and ZFS respectively) offer that.

Dual or triple parity would be nice, but I doubt I can afford that many drives quite yet; NAS stuff isn't ideal for me as I'm on 802.11n without a good way to upgrade. SAS or a RAID controller would be nice, but I'm building a micro-ATX computer and my remaining slots are probably going to be occupied by video capture and wifi cards.

atomicthumbs fucked around with this message at 17:23 on Oct 16, 2014
I need to hook up a FreeNAS microserver to another Windows server as an iSCSI target to expand the storage. On the microserver, should I use the motherboard RAID, ZFS, or UFS RAID? I've heard a ZFS disk can be slow?
kiwid fucked around with this message at 19:23 on Oct 16, 2014
kiwid posted:
I need to hook up a FreeNAS microserver to another Windows server as an iSCSI target to expand the storage. On the microserver, should I use the motherboard RAID, ZFS, or UFS RAID? I've heard a ZFS disk can be slow?

ZFS
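To flesh that one-word answer out a bit: FreeNAS backs an iSCSI extent with either a file on a dataset or a zvol, and a zvol is a common choice for a Windows initiator. A rough sketch from the shell (pool name, zvol name, and size are placeholders; the actual extent/target setup happens in the FreeNAS GUI):

code:
# carve a block device out of the pool for the Windows box to format as NTFS
zfs create -V 2T -o volblocksize=16K tank/windows-lun

# the zvol shows up as a device node, which the iSCSI target then exports
ls -l /dev/zvol/tank/windows-lun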
After struggling for a few months and not reading some fine print, I've discovered that because OS X Mavericks implements tagging as POSIX extended attributes, your underlying file system needs to support xattr, and because FreeBSD doesn't support the xattr syscalls, FreeNAS doesn't either. Period. And given people have known about this for a good decade or longer, I don't expect FreeBSD to magically support xattr in the next few years either. This means that you can't use Spotlight indexing and tagging with volumes off of FreeNAS in general. Go ahead and try to run zfs set xattr=on <your.zfs.fs>, do it. And if you wanted to use Spotlight metadata and OS X tagging like me, weep as you realize you have to change OSes entirely.

So... what's the new hotness in ZFS on Linux distros? I'm planning for a CentOS 7 install with support services under LXC with Docker. Should be a mostly painless transition since the zpool versions should sync up nicely.
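For what it's worth, on ZFS on Linux that property does exist, including an xattr=sa mode that stores small xattrs in the dnode instead of hidden directories. A quick sketch of what the move buys you (the dataset name is a placeholder, and it assumes the attr tools are installed):

code:
# on a ZFS on Linux box (e.g. CentOS 7 with the zfsonlinux repo)
zfs get xattr tank/mac-share
zfs set xattr=sa tank/mac-share

# sanity check that extended attributes actually stick
touch /tank/mac-share/testfile
setfattr -n user.test -v hello /tank/mac-share/testfile
getfattr -d /tank/mac-share/testfile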
kiwid posted:
I need to hook up a FreeNAS microserver to another Windows server as an iSCSI target to expand the storage. On the microserver, should I use the motherboard RAID, ZFS, or UFS RAID? I've heard a ZFS disk can be slow?

necrobobsledder posted:
FreeBSD xattr on ZFS

D. Ebdrup fucked around with this message at 08:35 on Oct 17, 2014
PSA: 6TB WD Reds for $240. Holy crap, I'm tempted to replace all my drives with like half a dozen of these.

edit: WD says its new 10TB drives are going to have the lowest cost per gig of anything on the market, but that's probably an enormous pile of BS.

Straker fucked around with this message at 03:20 on Oct 18, 2014
Straker posted:
PSA: 6TB WD Reds for $240. Holy crap, I'm tempted to replace all my drives with like half a dozen of these.

Even if it's true, the 10TB drives likely won't make any sense for people running all but the largest arrays. I mean, even if it were 10% cheaper per GB (which is doubtful), a 10TB drive would run $360, so single-redundancy 10TB usable would be $720, which could get you 3 x 6TB drives for 12TB usable. At a 10% per-gig initial discount, 10TB drives don't overtake 6TB ones for single-drive redundancy until you have 5 of them ($1800 for 40TB usable = $0.045/GB vs $1680 for 36TB = $0.046/GB), and it's worse for multi-drive-redundant arrays. They'd be great for people running out of ports or case space, though.

DrDork fucked around with this message at 06:53 on Oct 18, 2014
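If anyone wants to rerun that math for other sizes or prices, the back-of-the-envelope formula for a single-parity array is just (drives x price) / (usable TB x 1000). A quick shell sketch using the numbers from the post (the $360 figure is the hypothetical 10%-discounted 10TB price):

code:
# $/GB for an n-drive single-parity array (one drive lost to redundancy)
cost_per_gb() {  # usage: cost_per_gb <num_drives> <tb_per_drive> <price_per_drive>
    awk -v n="$1" -v tb="$2" -v p="$3" \
        'BEGIN { printf "%.4f\n", (n * p) / ((n - 1) * tb * 1000) }'
}

cost_per_gb 5 10 360   # 5x 10TB @ $360 -> ~0.0450 $/GB across 40TB usable
cost_per_gb 7 6  240   # 7x  6TB @ $240 -> ~0.0467 $/GB across 36TB usable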