Fangs404 posted:Do you know how long the long test takes? I know the short test is on the order of a couple minutes, but the duration of the long test will probably determine how often I do them. You can manually run it on one drive and find out: code:
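  # A sketch only -- the exact command wasn't quoted in the post, and /dev/ada0 is an
  # assumed FreeBSD-style device name; point smartctl (from smartmontools) at your own drive.
  smartctl -t long /dev/ada0      # kick off the long self-test (runs in the drive's background)
  smartctl -l selftest /dev/ada0  # check progress and results in the self-test log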
I've read other forums and most people do a short test every day and a long test once a week. I don't believe SMART self-tests noticeably affect I/O operations, and I don't believe they wear the drive in any meaningful way. I'm pretty sure running them regularly is perfectly safe.
Awesome, thanks guys! You've all been a huge help getting this all set up.
Fangs404 posted:Awesome, thanks guys! You've all been a huge help getting this all set up. If you haven't already, you should schedule a cron job to scrub the ZFS pool somewhere between every two weeks and every month. Here's mine: on the first of every month it runs the command "zpool scrub data". Change "data" to whatever your volume name is.
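For reference, a crontab line that does the same thing might look like the sketch below; the 3 AM start time is an arbitrary assumption, and on FreeNAS you'd normally set this up through the web UI's cron task screen rather than editing crontab by hand.
code:
  # minute hour day-of-month month day-of-week  command
  0 3 1 * * /sbin/zpool scrub data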
IT Guy posted:If you haven't already, you should schedule a cron job to scrub the ZFS pool somewhere between every two weeks and every month. Oh, good call. I think I set it up to run every other Friday at 6. Is that right? Any other maintenance things I should be doing, or are SMART tests and scrubs it?
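Plain cron has no native "every other week" field, so if you're doing this from a crontab rather than the FreeNAS scheduler, one common trick is to run every Friday and let a date test skip alternate weeks. This is only a sketch: the 18:00 start time, the pool name, and which week it fires on are all assumptions.
code:
  # runs Fridays at 18:00; the week-number test makes it fire every other week
  # (swap -eq 0 for -eq 1 if it lands on the wrong Friday)
  0 18 * * 5 [ $(( $(date +\%s) / 604800 \% 2 )) -eq 0 ] && /sbin/zpool scrub data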
Fangs404 posted:Oh, good call. I think I set it up to run every other Friday at 6. Looks right to me. Other than ZFS snapshots (optional) or rsync backups, that's about it.
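If you do want snapshots on top of that, a cron entry along these lines is one way to do it; the schedule, the snapshot naming, and the pool name here are just assumptions for illustration:
code:
  # nightly recursive snapshot of the whole pool, named by date
  0 2 * * * /sbin/zfs snapshot -r data@auto-$(date +\%Y\%m\%d)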
Is anyone using the native (well, port, I guess) ZFS on Linux? I want to switch to Plex, and it's not available on FreeBSD in the ports (or to compile). I don't mind starting over; I plan to do a full backup and then copy my data back over. I was thinking of using the latest Ubuntu Server. Is it decently stable? I'm not a sperg about speed, but as long as it's faster than FUSE I'm fine with it.
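For what it's worth, the ZFS on Linux project was distributing Ubuntu packages through a PPA around this time; something along these lines should get you going, though the PPA and package names are from memory and worth double-checking, and "tank" is just a placeholder pool name:
code:
  sudo add-apt-repository ppa:zfs-native/stable
  sudo apt-get update && sudo apt-get install ubuntu-zfs
  sudo zpool import -f tank    # only if you're importing an existing pool rather than starting over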
I think the only major annoyance is the lack of kernel CIFS.
evil_bunnY posted:I think the only major annoyance is the lack of kernel CIFS. Which is fine for me since I'm using AFP.
I was super excited about in-kernel CIFS, but at least for the home environment, it's a pain in the ass. I keep using it out of some moral obligation or something, but fuck I hate it. Maybe it works really well in an enterprise (AD) environment, but it's pretty bad without one.
FISHMANPET posted:I was super excited about in-kernel CIFS, but at least for the home environment, it's a pain in the ass. I keep using it out of some moral obligation or something, but fuck I hate it. Maybe it works really well in an enterprise (AD) environment, but it's pretty bad without one. What don't you like about it?
evil_bunnY posted:What don't you like about it? Guest access, and other general permissions shit. On most of my computers it maps me to Guest, even though it should map to an actual user with write permissions, so I have to do most of my file changes from the command line (I think a lot of this has to do with ACLs and the way SABnzbd is writing files). On my HTPC, with the same username, it doesn't map to Guest, so I have to configure XBMC to explicitly connect as guest, or it gets permission denied. I guess half of my problems are with NFSv4 ACLs and not with the CIFS service itself, which is another reason it would probably work better in an AD environment: the ACLs would carry over to Windows as well.
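For anyone poking at the same thing: on an illumos/OpenIndiana box the NFSv4 ACLs live on the files themselves, so you can at least see why the in-kernel CIFS server is treating you the way it does. The path and group name below are made up for illustration, and the exact ACE you want will depend on your setup:
code:
  /bin/ls -V /tank/media    # show the full NFSv4 ACL on the share
  # grant a group read/write, inherited onto new files and directories
  chmod -R A+group:media:read_data/write_data/add_file/add_subdirectory/execute:file_inherit/dir_inherit:allow /tank/media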
Sounds more like you need to fix your specific implementation; I use nzbget and CIFS extensively on my OpenIndiana box with no trouble.
Well, so far FreeNAS has been...a fucking nightmare. The hardware in my HP MicroServer came together perfectly, but FreeNAS has what I consider a really stupid permission model (I guess all Unix-like systems do it this way). I created a group, created users for all my PCs, then added the group to each user as an auxiliary group. I created datasets, changed their permissions so the group "owned" the datasets, then created CIFS shares for the datasets. The issue is that Samba (I'm assuming that's what FreeNAS uses?) doesn't seem to pay attention. Permissions aren't respected, and when I started troubleshooting, I could delete users, change permissions, and even remove entire shares, and Samba just kept right on hosting them, even letting me write data to shares I'd deleted in the FreeNAS web interface. Anyone have any ideas on how I can make FreeNAS treat permissions more like Windows, with individual allow and deny entries?
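For reference, "the group owns the datasets" from the FreeNAS shell usually boils down to something like the lines below; the mount path and group name are invented for the example, and you'd normally do the same thing from the dataset permissions screen in the web UI:
code:
  chown -R root:mediausers /mnt/tank/media   # group-own everything in the dataset
  chmod -R 775 /mnt/tank/media               # owner and group get rwx, everyone else read-only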
UndyingShadow posted:Well, so far FreeNAS has been...a fucking nightmare. Are you restarting the Samba service? I don't have any experience with FreeNAS, but I know that any changes you do to the configuration file require a bounce of the smb service.
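If you'd rather bounce it from the shell than the web UI, it's typically something like this on a FreeBSD-based box; the exact service name can vary between FreeNAS versions, so treat it as a sketch:
code:
  service samba restart    # or stop/start the CIFS service from the web interface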
thideras posted:Are you restarting the Samba service? I don't have any experience with FreeNAS, but I know that any changes you do to the configuration file require a bounce of the smb service. Actually, I had no idea that was necessary. EDIT: And now it actually reflects my changes. Thanks!
I need some advice on entering this arena. I have many drives of varying sizes available, listed below. Currently they're distributed randomly across many machines. I'd also like to introduce some sort of data integrity with minimal capacity loss -- because currently I have none -- and to consolidate and sort my data so that I don't have duplicates across multiple machines, drives, or folders. code:
What's my most cost-effective option for consolidating them and making them available to all of my local machines, while also boasting a low power footprint? I'd also like a solution open to future expansion. I've looked at several of the pre-assembled NAS solutions and they are always very expensive, with a limited number of bays. I have an old Core Duo 2GHz w/ 2GB of RAM + a 750-watt PSU sitting around, but the Core Duo's board only has 2x SATA I ports, 3x PCI slots, and 100Mbit Ethernet. It'd also be great if the solution was able to seed all of my totally legitimate Linux distro torrents.
sini posted:...while also boasting a low power footprint. I'd also like a solution open to future expansion.

You need to make some choices. One, how much space do you actually need? Can you toss out all the sub-2TB drives and still have sufficient space? Two, how much are you willing to pay for a new setup? Three, do you have any specific performance requirements that you need to hit? Four, what sort of redundancy do you want? If you don't need/care for any, you can just JBOD it up and be done with it. If, however, you want some form of RAID, you will need to look into some of the options related to that: many RAID setups do not work well with drives of different sizes (different makes and models is usually fine), to the point where putting a 2TB and a 500GB drive together is entirely pointless. Some RAID setups, like BeyondRAID, work pretty well with different-sized drives, but have various implementation requirements that you may not be willing to deal with (like being on Linux).

As for implementation options, you have two main paths: roll your own, or a pre-made box from Synology or the like. The advantage of rolling your own is that you have much better options for upgradability. Even if you start out with your current C2D box, you can easily and cheaply toss in a SATA card and a gigabit NIC and power an almost arbitrary number of drives. The downside is a physically larger box and a higher power footprint--especially if you decide to use all of your drives. Setup can also be more complex if you opt for an OS like FreeNAS or Linux and are not already familiar with it. Or I suppose you could go for WHS, which is easy to set up, but costs $150 or so, which digs into the price advantage.

The advantage of a pre-made box--a drop-in-the-disks-and-go unit from Synology or Netgear or whomever--is ease of setup and very low power draw. Setup is usually just a few clicks on a web interface, and power draw is often 40W or below. The downside is noticeably higher up-front cost and limited upgradability: you're going to pay out the ass if you want anything with more than 4 drives in it.

The N40L is a thread-favorite "middle ground" option which gives you up to 5 drives for <$300, and then most people throw FreeNAS or the like on it as an OS. Not as easy a setup as a Synology, but a lot cheaper. For a size reference, 5x2TB drives in RAIDZ (similar to RAID5) give you about 7TB of usable formatted space and the ability to lose a drive with no data loss.
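That ~7TB figure is easy to sanity-check: RAIDZ1 gives up one drive's worth of capacity to parity, and the rest is the usual vendor-terabyte-to-TiB shrinkage (this ignores ZFS metadata overhead):
code:
  # 4 data drives x 2 TB, converted from 10^12-byte terabytes to binary TiB
  echo 'scale=2; 4 * 2 * 10^12 / 2^40' | bc    # ~7.27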
DrDork posted:You need to make some choices. One, how much space do you actually need? Can you toss out all the sub-2TB drives and still have sufficient space? Two, how much are you willing to pay for a new setup? Three, do you have any specific performance requirements that you need to hit? Four, what sort of redundancy do you want? If you don't need/care for any, you can just JBOD it up and be done with it. If, however, you want some form of RAID, you will need to look into some of the options related to that: many RAID setups do not work well with drives of different sizes (different makes and models is usually fine), to the point where putting a 2TB and a 500GB drive together is entirely pointless. Some RAID setups, like BeyondRAID, work pretty well with different-sized drives, but have various implementation requirements that you may not be willing to deal with (like being on Linux). The N40L can take 6 drives if you don't mind hunting down a bracket and an eSATA-to-SATA connector. You lose the eSATA port, but it's still pretty clean.
UndyingShadow posted:The N40L can take 6 drives if you don't mind hunting down a bracket and an eSATA-to-SATA connector. You lose the eSATA port, but it's still pretty clean. I suppose you could do so, but would a 6th full-sized drive even fit internally? I know you can easily put two 2.5" drives in the upper media bay, but anything more than a single 3.5" doesn't look like it would fit. Or would you just sit it on top (or use an external drive just wired to the internal port)?
No, you can put 6 drives inside there. I think some people have managed even more with more serious modding effort.
I forget where I saw the picture, but it involves putting drives sideways in or on top of the top drive bay. I think that picture also involved a 4-in-1 2.5"/5.25" bay adapter along with the desktop drives.
Don't hard drives typically take 30 watts of power? The PSU in the N40L is only 150 watts. 6 drives would be borderline over.
IT Guy posted:Don't hard drives typically take 30 watts of power? The PSU in the N40L is only 150 watts. 6 drives would be borderline over. More like 5-10 watts when active. edit: The WD20EARX (2TB Green) is rated for 10.1 watts. edit2: And it seems that 15,000 RPM SCSI drives average just under 20 watts.
Yep. Looking at the label of a WD Black 2TB drive on Newegg, it says 0.60A on 5V (3W) + 0.45A on 12V (5.4W) = 8.4W total. I picked a random 15K RPM Seagate drive, and that's 16W, but still nowhere near 30W. E: Wait, what? Why is the "green" drive more power-hungry than the high-performance drive? Fucking WD.
I don't know how to read this shit: http://www.hitachigst.com/internal-...deskstar-7k3000 What is the wattage on those drives (<3TB)?
IT Guy posted:I don't know how to read this shit: http://www.hitachigst.com/internal-...deskstar-7k3000 It's a bit vague but it seems to imply that maximum start-up power is 30 watts and that idle power (spinning but not doing much with writes/reads) is 5.2 watts.
Power (watts) = volts * amps. Based on the label image on Newegg, it's 0.45A on 5V and 0.85A on 12V. That's 12.45W. Peak power will always be higher, but that's what staggered spin-up is for. As long as the drive gets what it's rated for, it can cope.
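Same formula applied to the label values quoted above, if you want to check it quickly:
code:
  echo '0.45*5 + 0.85*12' | bc    # 12.45 (watts)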
Factory Factory posted:E: Wait, what? Why is the "green" drive more power-hungry than the high-performance drive? Fucking WD. Yeah I am a little confused... oh well. That figure came off a WD20EARX I had sitting in my desk drawer.
DrDork posted:I suppose you could do so, but would a 6th full-sized drive even fit internally? I know you can easily put two 2.5" drives in the upper media bay, but anything more than a single 3.5" doesn't look like it would fit. Or would you just sit it on top (or use an external drive just wired to the internal port)? Nope, it fits internally without any modding or cutting. Use one of these: http://www.frozencpu.com/products/8...se_Reducer.html I just built one with five Samsungs and one WD (all slow 5,400 RPM drives) and it had no problems.
DNova posted:Yeah I am a little confused... oh well. That figure came off a WD20EARX I had sitting in my desk drawer. The green drive is designed to idle lower and probably use less power on spinup. The power ratings on the back of the drive are also probably not giving the full story.
DrDork posted:You need to make some choices. One, how much space do you actually need? I'd like to keep my costs low, but a few hundred won't break the bank. I think I'll go the roll-your-own route. I'm fine with running Linux -- in fact it's preferable; a BSD variant is fine too. My storage needs are only going to increase. I'd like to be able to elastically expand by just adding new 2TB/3TB drives as I near capacity. I've currently got 6 drives that are >2TB, and most of them are at or near capacity. I'd like to set up a system that could hold up to 12-16 drives, and in an ideal world just go "ehhhh, I don't need this 600GB drive anymore," pull it, put a 3TB in its place, and have it expand/recover. Only 7TB usable out of 10TB isn't a bad sacrifice, but are there more effective methods? I thought I saw an unRAID build somewhere that had like 20 drives: 38TB of storage, 2TB of parity.
sini posted:I'd like to keep my costs low, but a few hundred won't break the bank. I think I'll go the roll-your-own route. I'm fine with running Linux -- in fact it's preferable; a BSD variant is fine too. My storage needs are only going to increase. I'd like to be able to elastically expand by just adding new 2TB/3TB drives as I near capacity. I've currently got 6 drives that are >2TB, and most of them are at or near capacity. I'd like to set up a system that could hold up to 12-16 drives, and in an ideal world just go "ehhhh, I don't need this 600GB drive anymore," pull it, put a 3TB in its place, and have it expand/recover. Only 7TB usable out of 10TB isn't a bad sacrifice, but are there more effective methods? I thought I saw an unRAID build somewhere that had like 20 drives: 38TB of storage, 2TB of parity. unRAID and those kinds of variants will do what you want, but there's no way in HELL I'd trust a 20-drive array to a single parity disk. The potential for multiple disks dying at once, or for UREs, is just too great. I'd have at least 2, and keep in mind that your parity drive(s) must match the largest drive in capacity.
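To put a rough number on the URE worry: consumer drives of that era are commonly specced at one unrecoverable read error per 10^14 bits read, and a single-parity rebuild means reading every other disk end to end. Assuming roughly 36TB of data has to be read back (both the spec and the fill level are assumptions), the back-of-the-envelope expectation is:
code:
  # expected UREs during a rebuild: bits read / bits-per-URE
  echo 'scale=2; 36 * 10^12 * 8 / 10^14' | bc    # ~2.88, i.e. don't count on a clean rebuild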
sini posted:I'd like to keep my costs low, but a few hundred won't break the bank. I think I'll go the roll-your-own route. I'm fine with running Linux -- in fact it's preferable; a BSD variant is fine too. My storage needs are only going to increase. I'd like to be able to elastically expand by just adding new 2TB/3TB drives as I near capacity. I've currently got 6 drives that are >2TB, and most of them are at or near capacity. I'd like to set up a system that could hold up to 12-16 drives, and in an ideal world just go "ehhhh, I don't need this 600GB drive anymore," pull it, put a 3TB in its place, and have it expand/recover. Only 7TB usable out of 10TB isn't a bad sacrifice, but are there more effective methods? I thought I saw an unRAID build somewhere that had like 20 drives: 38TB of storage, 2TB of parity. Just get a Norco 4224 case and call it a day.
I'd be all over the N40L if it wasn't plastic. And can anyone confirm the amount of HD seek noise it gives out? I have 4 x drives suspended with elastic in a Metal Silverstone HTPC case, so no complaints other than the damn size of the thing.
DEAD MAN'S SHOE posted:I'd be all over the N40L if it wasn't plastic. And can anyone confirm the amount of HD seek noise it gives out? If hard drive seek noise bothers you that much, you'll never be satisfied in life. That said, you can hear it if you listen for it. But it's not like you're driving down a highway with the windows down. It's extremely quiet in general.
sini posted:I saw an unRAID build somewhere that had like 20 drives: 38TB of storage, 2TB of parity.
DEAD MAN'S SHOE posted:I'd be all over the N40L if it wasn't plastic. And can anyone confirm the amount of HD seek noise it gives out? I have to see this, which could be one of the spergiest things ever made.
DEAD MAN'S SHOE posted:I'd be all over the N40L if it wasn't plastic. And can anyone confirm the amount of HD seek noise it gives out? Plastic? Also, it's a server - keep it in a closet or your garage or somewhere away from your desk. Still, it's a very quiet machine.
Mine sits on my desk and I barely notice when it's on. It makes a bit of noise when the drives first spin up during bootup but after that it's very quiet.
I just prefer pretty, quiet hardware.