IT Guy
Jan 12, 2010

You people drink like you don't want to live!

Fangs404 posted:

Do you know how long the long test takes? I know the short test is on the order of a couple minutes, but the duration of the long test will probably determine how often I do them.

On a side note, I ordered an 8gb Kingston thumb drive on Amazon for $6, and it arrived today. I love how easy FreeNAS made migrating.

You can manually run it on one drive and find out:

code:
[ryan@luna] /# smartctl -t long /dev/ada0
smartctl 5.41 2011-06-09 r3365 [FreeBSD 8.2-RELEASE-p6 amd64] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
Sending command: "Execute SMART Extended self-test routine immediately in off-line mode".
Drive command "Execute SMART Extended self-test routine immediately in off-line mode" successful.
Testing has begun.
Please wait 255 minutes for test to complete.
Test will complete after Fri Mar 30 18:57:20 2012

Use smartctl -X to abort test.
[ryan@luna] /#

run "smartctl -X /dev/ada0" to stop the test if you want.

I've read other forums and most people do a short every day and a long once a week.

I don't believe SMART tests affect IO operations, and I don't believe they wear the drive at all. I'm pretty sure SMART is perfectly safe to run.
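If you want to automate that schedule, a minimal crontab sketch would look something like this (device name, times, and the smartctl path are my assumptions -- adjust per drive):

code:
# daily short self-test at 02:00
0 2 * * * /usr/local/sbin/smartctl -t short /dev/ada0
# weekly long self-test, Sundays at 03:00
0 3 * * 0 /usr/local/sbin/smartctl -t long /dev/ada0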

IT Guy fucked around with this message at 19:48 on Mar 30, 2012

Fangs404
Dec 20, 2004

I time bomb.

Awesome, thanks guys! You've all been a huge help getting this all set up.

IT Guy
Jan 12, 2010

You people drink like you don't want to live!

Fangs404 posted:

Awesome, thanks guys! You've all been a huge help getting this all set up.

If you haven't already, you should schedule a cron job to scrub your ZFS pool somewhere between every 2 weeks and every month.

Here's mine:

[screenshot of the cron job setup]
On the first of every month it runs the command: "zpool scrub data".

Change "data" to whatever your volume name is.
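In plain crontab terms that works out to roughly (a sketch; the zpool path is an assumption):

code:
# midnight on the 1st of every month
0 0 1 * * /sbin/zpool scrub data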

Fangs404
Dec 20, 2004

I time bomb.

IT Guy posted:

If you haven't already, you should schedule a cron job to scrub your ZFS pool somewhere between every 2 weeks and every month.

Here's mine:
[screenshot of the cron job setup]

On the first of every month it runs the command: "zpool scrub data".

Change "data" to whatever your volume name is.

Oh, good call. I think I set it up to run every other Friday at 6m:
[screenshot of the cron job setup]

Is that right? Any other maintenance things I should be doing, or are SMART and scrubs it?

IT Guy
Jan 12, 2010

You people drink like you don't want to live!

Fangs404 posted:

Oh, good call. I think I set it up to run every other Friday at 6m:

[screenshot of the cron job setup]
Is that right? Any other maintenance things I should be doing, or are SMART and scrubs it?

Looks right to me.

Other than ZFS snapshots (optional) or rsync backups, that's it.
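If you do go the snapshot route, here's a hedged sketch of a weekly cron entry (dataset and host names are made up; note that cron needs % escaped):

code:
# weekly recursive snapshot named by date, Sundays at midnight
0 0 * * 0 /sbin/zfs snapshot -r data@auto-$(date +\%Y\%m\%d)
# or an rsync backup to another box an hour later
0 1 * * 0 rsync -a /mnt/data/ backuphost:/backup/data/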

Matt Zerella
Oct 7, 2002


Is anyone using the native (well, port I guess) ZFS on Linux? I want to switch to Plex and it's not available on FreeBSD in the ports (or to compile).

I don't mind starting over, I plan to do a full backup and then recopy my data over.

I was thinking of using the latest Ubuntu server.

Is it decently stable? I'm not a sperg about speed, but as long as it's faster than FUSE I'm fine with it.

evil_bunnY
Apr 2, 2003



I think the only major annoyance is the lack of kernel CIFS.

Matt Zerella
Oct 7, 2002


evil_bunnY posted:

I think the only major annoyance is the lack of kernel CIFS.

Which is fine for me since I'm using AFP.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


I was super excited about in kernel CIFS, but at least for the home environment, it's a pain in the ass. I keep using it out of some moral obligation or something, but fuck I hate it. Maybe it works really well in an enterprise (AD) environment, but it's pretty bad without.

evil_bunnY
Apr 2, 2003



FISHMANPET posted:

I was super excited about in kernel CIFS, but at least for the home environment, it's a pain in the ass. I keep using it out of some moral obligation or something, but fuck I hate it. Maybe it works really well in an enterprise (AD) environment, but it's pretty bad without.

What don't you like about it?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


evil_bunnY posted:

What don't you like about it?

Guest access, and other general permissions shit. On most of my computers, it maps me to Guest, even though it should map to an actual user with write permissions, so I have to do most of my file changes from the command line (I think a lot of this might have to do with ACLs and the way SABnzbd is writing files). On my HTPC, with the same username, it doesn't map to guest, so I have to configure XBMC to explicitly connect as guest, otherwise it gets permission denied.

I guess half of my problems are with NFSv4 ACLs, and not with the CIFS service itself. Another reason why it would probably work better in an AD environment, because the ACLs would carry over to Windows as well.
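For anyone fighting the same thing, NFSv4 ACLs on Solaris/OpenIndiana can at least be inspected and patched by hand; this is just a sketch (user and path are made up):

code:
# show the full NFSv4 ACL on a file
ls -v /tank/media/file.mkv
# add an entry granting a user read/write
chmod A+user:htpc:read_data/write_data:allow /tank/media/file.mkv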

adorai
Nov 2, 2002

10/27/04 Never forget

Grimey Drawer

Sounds more like you need to fix your specific implementation. I use nzbget and CIFS extensively on my OpenIndiana box with no trouble.

UndyingShadow
May 15, 2006
You're looking ESPECIALLY shadowy this evening, Sir

Well so far FreeNAS has been...a fucking nightmare.

The hardware in my HP microserver came together perfectly, but FreeNAS has what I consider a really stupid permission model (I guess all Unix-like systems do it this way).

I created a group, created users for all my PCs, then added the group to each user as an auxiliary group. I created datasets, changed their permissions so the group "owned" the datasets, then created CIFS shares for the datasets.

The issue is that Samba (I'm assuming that's what FreeNAS uses?) doesn't seem to pay attention. Permissions aren't respected, and when I started troubleshooting, I could delete users, change permissions, and even remove entire shares, and Samba just kept right on hosting the shares, even letting me write data on shares I'd deleted in the FreeNAS web interface.

Anyone have any ideas on how I can make FreeNAS treat permissions more like Windows, with individual allow and block?
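(For reference, what I did amounts to roughly this from a shell -- group, user, and dataset names here are made up:)

code:
# create the shared group and put a user in it as an auxiliary group
pw groupadd media
pw usermod alice -G media
# give the group ownership of the dataset, with group write access
chown -R root:media /mnt/tank/media
chmod -R 0775 /mnt/tank/media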

thideras
Oct 27, 2010

Fuck you, I'm a tree.


Fun Shoe

UndyingShadow posted:

Well so far FreeNAS has been...a fucking nightmare.

The hardware in my HP microserver came together perfectly, but FreeNAS has what I consider a really stupid permission model (I guess all Unix-like systems do it this way).

I created a group, created users for all my PCs, then added the group to each user as an auxiliary group. I created datasets, changed their permissions so the group "owned" the datasets, then created CIFS shares for the datasets.

The issue is that Samba (I'm assuming that's what FreeNAS uses?) doesn't seem to pay attention. Permissions aren't respected, and when I started troubleshooting, I could delete users, change permissions, and even remove entire shares, and Samba just kept right on hosting the shares, even letting me write data on shares I'd deleted in the FreeNAS web interface.

Anyone have any ideas on how I can make FreeNAS treat permissions more like Windows, with individual allow and block?

Are you restarting the Samba service? I don't have any experience with FreeNAS, but I know that any changes you do to the configuration file require a bounce of the smb service.
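On a FreeBSD-based box that's usually something like this (a sketch -- the rc.d script name varies between versions and ports):

code:
# restart Samba so it rereads its configuration
service samba restart
# or, where service(8) isn't available:
/usr/local/etc/rc.d/samba restart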

UndyingShadow
May 15, 2006
You're looking ESPECIALLY shadowy this evening, Sir

thideras posted:

Are you restarting the Samba service? I don't have any experience with FreeNAS, but I know that any changes you do to the configuration file require a bounce of the smb service.

Actually, I had no idea that was necessary. Thanks, let me try that.


EDIT: And that seems to actually be reflecting my changes. Thanks!

UndyingShadow fucked around with this message at 17:54 on Apr 1, 2012

sini
May 9, 2006


I need some advice on entering this arena. I have many drives available of varying sizes, listed below.

Currently they're distributed randomly across many machines. I'd also like to introduce some sort of data integrity with minimal capacity loss -- currently I have none -- and to consolidate and sort my data so that I don't have duplicates across multiple machines, drives, or folders.

code:
My data-only drives; SSDs + local storage not listed.
Sata 2:
2x 2tb (same model)
1x 1.5tb
1x 750gb
1x 600gb
1x 500gb (may be failing, Linux displays warnings when it's in my system -- fine with retiring it...)

Sata 3:
2x 2tb (same models)

USB:
1x 750 (2.5")
2x 1tb (3.5") (I am A-Okay with taking these out of their enclosures, presumably they are SATA)

Network storage:
2x 3tb drives (WD World Edition II -- transfer rates are a bit slow, even with striping)
--

What's my most cost-effective option for consolidating them and making them available to all of my local machines? I've looked at several of the pre-assembled NAS solutions and they are always very expensive, with a limited number of bays. I have an old Core Duo 2ghz w/ 2gb of ram + a 750 watt PSU sitting around, but the Core Duo's board only has 2x sata 1 ports and 3x pci slots, and 100mbit Ethernet. It'd also be great if the solution was able to seed all of my totally legitimate linux distro torrents and usenet downloads -- while also boasting a low power footprint. I'd also like a solution open to future expansion.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

sini posted:

...while also boasting a low power footprint. I'd also like a solution open to future expansion.

You need to make some choices. One, how much space do you actually need? Can you toss out all the sub-2TB drives and still have sufficient space? Two, how much are you willing to pay for a new setup? Three, do you have any specific performance requirements that you need to hit? Four, what sort of redundancy do you want? If you don't need/care for any, you can just JBOD it up and be done with it. If, however, you want some form of RAID, you will need to look into some of the options related to that: many RAID setups do not work well with drives of different sizes (different makes and models is usually fine), to the point where putting a 2TB and a 500GB drive together is entirely pointless. Some RAID setups, like BeyondRAID, work pretty well with different sized drives, but have various implementation requirements that you may not be willing to deal with (like being on Linux).

As for implementation options, you have two main paths: roll your own, and a pre-made box from Synology or the like.

The advantage of rolling your own is you have a lot better options for upgradability. Even if you start out with your current C2D box, you can easily and cheaply toss in a SATA card and a gigabit NIC and power an almost arbitrary number of drives. The downside to it is a physically larger box and a higher power footprint--especially if you decide to use all of your drives. Setup can also be more complex if you opt for an OS like FreeNAS or Linux and are not already familiar with it. Or I suppose you could go for WHS, which is easy to set up, but costs $150 or so, which digs into the price advantage.

The advantage of a pre-made box, be it a drop-in-the-disks-and-go pre-made one from Synology or Netgear or whomever, is ease of setup and very low power draw. Setup is usually just a few clicks on a web interface, and power draw is often 40W or below. The downside is noticeably higher up-front costs, and limited upgradability: you're going to pay out the ass if you want anything with more than 4 drives in it.

The N40L is a thread-favorite "middle ground" option which gives you up to 5 drives for <$300, and then most people throw FreeNAS or the like on it as an OS. Not as easy a setup as a Synology, but a lot cheaper. For size reference, 5x2TB drives in RAIDZ (similar to RAID5) gives you about 7TB of usable formatted space and the ability to lose a drive with no data loss.
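The rough math behind that figure, for anyone wondering:

code:
# RAIDZ (single parity) on 5 x 2 TB drives:
# raw:     5 x 2 TB        = 10 TB
# usable:  (5 - 1) x 2 TB  = 8 TB (marketing terabytes)
# in TiB:  8e12 / 2^40     = ~7.3 TiB, minus ZFS overhead = ~7 TB in practice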

DrDork fucked around with this message at 03:57 on Apr 2, 2012

UndyingShadow
May 15, 2006
You're looking ESPECIALLY shadowy this evening, Sir

DrDork posted:

You need to make some choices. One, how much space do you actually need? Can you toss out all the sub-2TB drives and still have sufficient space? Two, how much are you willing to pay for a new setup? Three, do you have any specific performance requirements that you need to hit? Four, what sort of redundancy do you want? If you don't need/care for any, you can just JBOD it up and be done with it. If, however, you want some form of RAID, you will need to look into some of the options related to that: many RAID setups do not work well with drives of different sizes (different makes and models is usually fine), to the point where putting a 2TB and a 500GB drive together is entirely pointless. Some RAID setups, like BeyondRAID, work pretty well with different sized drives, but have various implementation requirements that you may not be willing to deal with (like being on Linux).

As for implementation options, you have two main paths: roll your own, and a pre-made box from Synology or the like.

The advantage of rolling your own is you have a lot better options for upgradability. Even if you start out with your current C2D box, you can easily and cheaply toss in a SATA card and a gigabit NIC and power an almost arbitrary number of drives. The downside to it is a physically larger box and a higher power footprint--especially if you decide to use all of your drives. Setup can also be more complex if you opt for an OS like FreeNAS or Linux and are not already familiar with it. Or I suppose you could go for WHS, which is easy to set up, but costs $150 or so, which digs into the price advantage.

The advantage of a pre-made box, be it a drop-in-the-disks-and-go pre-made one from Synology or Netgear or whomever, is ease of setup and very low power draw. Setup is usually just a few clicks on a web interface, and power draw is often 40W or below. The downside is noticeably higher up-front costs, and limited upgradability: you're going to pay out the ass if you want anything with more than 4 drives in it.

The N40L is a thread-favorite "middle ground" option which gives you up to 5 drives for <$300, and then most people throw FreeNAS or the like on it as an OS. Not as easy a setup as a Synology, but a lot cheaper. For size reference, 5x2TB drives in RAIDZ (similar to RAID5) gives you about 7TB of usable formatted space and the ability to lose a drive with no data loss.

The N40L can take 6 drives if you don't mind hunting down a bracket and an esata to sata connector. You lose the esata port but it's still pretty clean.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

UndyingShadow posted:

The N40L can take 6 drives if you don't mind hunting down a bracket and an esata to sata connector. You lose the esata port but it's still pretty clean.

I suppose you could do so, but would a 6th full-sized drive even fit internally? I know you can easily put two 2.5" drives in the upper media bay, but anything more than a single 3.5" doesn't look like it would fit. Or would you just sit it on top (or use an external drive just wired to the internal port)?

sleepy gary
Jan 11, 2006



No, you can put 6 drives inside there. I think some people have managed even more with more serious modding effort.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.


I forget where I saw the picture, but it involves putting drives sideways in or on top of the top drive bay. I think that picture also involved a 4-in-1 2.5"/5.25" bay adapter along with the desktop drives.

IT Guy
Jan 12, 2010

You people drink like you don't want to live!

Don't hard drives typically take 30 watts of power? The PSU in the N40L is only 150 watts. 6 drives would be borderline over.

sleepy gary
Jan 11, 2006



IT Guy posted:

Don't hard drives typically take 30 watts of power? The PSU in the N40L is only 150 watts. 6 drives would be borderline over.

More like 5-10 watts when active.

edit: WD20EARX (2tb green) is rated for 10.1 watts.

edit2: And it seems that 15,000 rpm SCSI drives average just under 20 watts.

sleepy gary fucked around with this message at 13:32 on Apr 2, 2012

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.


Yep. Looking at the label of a WD Black 2TB drive on Newegg, it says 0.60A on 5V (3W) + 0.45A on 12V (5.4W) = 8.4W total.

I picked a random 15K RPM Seagate drive, and that's 16W, but still nowhere near 30W.

E: Wait, what? Why is the "green" drive more power-hungry than the high-performance drive? Fucking WD.

Factory Factory fucked around with this message at 13:35 on Apr 2, 2012

IT Guy
Jan 12, 2010

You people drink like you don't want to live!

I don't know how to read this shit: http://www.hitachigst.com/internal-...deskstar-7k3000

What is the wattage on those drives (<3TB)?

sleepy gary
Jan 11, 2006



IT Guy posted:

I don't know how to read this shit: http://www.hitachigst.com/internal-...deskstar-7k3000

What is the wattage on those drives (<3TB)?

It's a bit vague but it seems to imply that maximum start-up power is 30 watts and that idle power (spinning but not doing much with writes/reads) is 5.2 watts.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.


Power (watts) = volts * amps

Based off the label image on Newegg, it's 0.45A on 5V and 0.85A on 12V. That's 12.45W.

Peak power will always be higher, but that's what staggered spin-up is for. As long as the drive gets what it's rated for, it can cope.
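Plugged into the N40L question above, the budget looks fine (a rough sketch using the per-drive figures quoted in this thread):

code:
# P = V * A per rail, summed:
# 6 drives x ~10 W active        = ~60 W
# CPU + board + fans (ballpark)  = ~30-40 W
# steady-state total             = ~100 W, well under the 150 W PSU
# the ~30 W start-up peaks are what staggered spin-up is for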

sleepy gary
Jan 11, 2006



Factory Factory posted:

E: Wait, what? Why is the "green" drive more power-hungry than the high-performance drive? Fucking WD.

Yeah I am a little confused... oh well. That figure came off a WD20EARX I had sitting in my desk drawer.

UndyingShadow
May 15, 2006
You're looking ESPECIALLY shadowy this evening, Sir

DrDork posted:

I suppose you could do so, but would a 6th full sized drive even fit internally? I know you can easily put two 2.5" drives in the upper media bay, but anything more than a single 3.5" doesn't look like it would fit. Or would you just sit it on top (or use an external drive just wired to the internal port)?

Nope, it fits internally without any modding or cutting.

Use one of these: http://www.frozencpu.com/products/8...se_Reducer.html

I just built one with 5 Samsung and one WD (5400 RPM slow drives) and it had no problems.

Longinus00
Dec 29, 2005
Ur-Quan

DNova posted:

Yeah I am a little confused... oh well. That figure came off a WD20EARX I had sitting in my desk drawer.

The green drive is designed to idle lower and probably use less power on spin-up. The power ratings on the back of the drive are also probably not telling the full story.

sini
May 9, 2006


DrDork posted:

You need to make some choices. One, how much space do you actually need?
Can toss out all the sub-2TB drives and still have sufficient space?
Two, how much are you willing to pay for a new setup?
Three, do you have any specific performance requirements that you need to hit? Four, what sort of redundancy do you want? ...

I'd like to keep my costs low. But a few hundred won't break the bank. I think I'll go the roll-your-own route. I'm fine with running linux -- in fact it's preferable, a BSD variant is fine too. My storage needs are only going to increase. I'd like to be able to elastically expand by just adding in new 2tb/3tb drives as I near capacity. I've currently got 6 drives that are >2tb, and most of them are at or near capacity. I'd like to set up a system that could hold up to 12-16 drives, and in an ideal world just go ehhhh I don't need this 600gb drive anymore, pull it, put in a 3tb in its place, and have it expand/recover. Only 7tb usable out of 10tb isn't a bad sacrifice, but are there more effective methods? I thought I saw an unraid build somewhere that had like 20 drives, 38tb storage 2tb parity.

UndyingShadow
May 15, 2006
You're looking ESPECIALLY shadowy this evening, Sir

sini posted:

I'd like to keep my costs low. But a few hundred won't break the bank. I think I'll go the roll-your-own route. I'm fine with running linux -- in fact it's preferable, a BSD variant is fine too. My storage needs are only going to increase. I'd like to be able to elastically expand by just adding in new 2tb/3tb drives as I near capacity. I've currently got 6 drives that are >2tb, and most of them are at or near capacity. I'd like to set up a system that could hold up to 12-16 drives, and in an ideal world just go ehhhh I don't need this 600gb drive anymore, pull it, put in a 3tb in its place, and have it expand/recover. Only 7tb usable out of 10tb isn't a bad sacrifice, but are there more effective methods? I thought I saw an unraid build somewhere that had like 20 drives, 38tb storage 2tb parity.

UNRAID and those kinds of variants will do what you want, but there's no way in HELL I'd trust a 20 drive array with one parity disk. The potential for multiple disks dying at once or UREs is just too great. I'd have at least 2, and keep in mind that your parity drive(s) must match the largest drive in capacity.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


sini posted:

I'd like to keep my costs low. But a few hundred won't break the bank. I think I'll go the roll-your-own route. I'm fine with running linux -- in fact it's preferable, a BSD variant is fine too. My storage needs are only going to increase. I'd like to be able to elastically expand by just adding in new 2tb/3tb drives as I near capacity. I've currently got 6 drives that are >2tb, and most of them are at or near capacity. I'd like to set up a system that could hold up to 12-16 drives, and in an ideal world just go ehhhh I don't need this 600gb drive anymore, pull it, put in a 3tb in its place, and have it expand/recover. Only 7tb usable out of 10tb isn't a bad sacrifice, but are there more effective methods? I thought I saw an unraid build somewhere that had like 20 drives, 38tb storage 2tb parity.

Just get a Norco 4224 case and call it a day.

DEAD MAN'S SHOE
Nov 23, 2003

We will become evil and the stars will come alive


I'd be all over the N40L if it wasn't plastic. And can anyone confirm the amount of HD seek noise it gives out?

I have 4 x drives suspended with elastic in a Metal Silverstone HTPC case, so no complaints other than the damn size of the thing.

Fangs404
Dec 20, 2004

I time bomb.

DEAD MAN'S SHOE posted:

I'd be all over the N40L if it wasn't plastic. And can anyone confirm the amount of HD seek noise it gives out?

I have 4 x drives suspended with elastic in a Metal Silverstone HTPC case, so no complaints other than the damn size of the thing.

If hard drive seek noise bothers you that much, you'll never be satisfied in life. That said, you can hear it if you listen for it. But it's not like you're driving down a highway with the windows down. It's extremely quiet in general.

adorai
Nov 2, 2002

10/27/04 Never forget

Grimey Drawer

sini posted:

I saw an unraid build somewhere that had like 20 drives, 38tb storage 2tb parity.

You can do it, but would you really want to? I would not be comfortable with only 1 parity drive to 19 data drives.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


DEAD MAN'S SHOE posted:

I'd be all over the N40L if it wasn't plastic. And can anyone confirm the amount of HD seek noise it gives out?

I have 4 x drives suspended with elastic in a Metal Silverstone HTPC case, so no complaints other than the damn size of the thing.

I have to see this, which could be one of the spergiest things ever made.

FISHMANPET fucked around with this message at 03:02 on Apr 3, 2012

Civil
Apr 21, 2003

Do you see this? This means "Have a nice day".

DEAD MAN'S SHOE posted:

I'd be all over the N40L if it wasn't plastic. And can anyone confirm the amount of HD seek noise it gives out?

Plastic? Also, it's a server - keep it in a closet or your garage or somewhere away from your desk. Still, it's a very quiet machine.

Galler
Jan 28, 2008



Mine sits on my desk and I barely notice when it's on. It makes a bit of noise when the drives first spin up during bootup but after that it's very quiet.

DEAD MAN'S SHOE
Nov 23, 2003

We will become evil and the stars will come alive


I just prefer pretty, quiet hardware.
