|
Someone should crack one open, dump the SN into WD's warranty checker, and let the thread know for sure if it just comes up as a generic WD Red or not. Considering I've got an 8x4TB array, and 4TB WD Reds sell used on eBay for $90-$100, this is really tempting as a completely unnecessary way to double my space... DrDork fucked around with this message at 01:44 on Jul 24, 2017
|
Reddit indicates that historically, shucked drives are allowed to be RMA'd without the case. That said, once I've finished my burn-in on them, I'll be cracking them open one at a time to replace my existing array. When I do that, I'll check the serials for the RMA process and see what it says. Expect results on that in a week or more, because badblocks won't even be starting on these until almost noon tomorrow, and it's going to be a HELL of a long run.
|
|
What programs do you guys use to test drives when you first purchase them?
|
|
chocolateTHUNDER posted: What programs do you guys use to test drives when you first purchase them?
What G-Prime just mentioned; badblocks is fairly common, I believe.
|
|
Now that folks are back on the subject:
ChiralCondensate posted: Do y'all wait for all four passes of badblocks?
|
|
Honestly, I'm impatient. I set them up on nwipe for a few hours, and if they don't immediately start throwing errors, they're probably fine.
|
|
I'm using https://github.com/Spearfoot/disk-burnin-and-testing this time around. It's just a nice script to handle what the FreeNAS forums have been recommending for years. It does a short SMART test, an extended SMART test, a full run of badblocks (badblocks -b 4096 -wsv -o "$BB_File" /dev/"$Drive"), and then both SMART tests again, all logged to a file.
I'd suggest tuning the badblocks call a little with the -c flag, because it'll improve performance considerably as long as you have plenty of free RAM. See https://www.pantz.org/software/badb...locksusage.html for details on that part, but the important thing to know is that if you have the RAM available, you can read stuff in larger chunks (-c defaults to 64; I'm running it with 98304) and then process it, and testing has indicated that this reduces the run time of the test by a long shot.
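For reference, a minimal sketch of that tuned call; the device name and log path are purely illustrative (FreeBSD-style naming), not anything the script prescribes:
```
# Destructive write+read test: wipes the drive, so only run it on disks with nothing on them.
# -b 4096  : 4 KiB blocks, matching the drive's physical sector size
# -c 98304 : blocks tested per pass (bigger buffers mean fewer passes over the bus, at the cost of RAM)
# -w       : write-mode test; cycles through the patterns 0xaa, 0x55, 0xff, 0x00
# -s -v    : show progress and be verbose; -o logs any bad blocks found
badblocks -b 4096 -c 98304 -wsv -o /root/badblocks-ada1.log /dev/ada1
```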
|
|
Ziploc posted:
Progress. I used the CLI to try to export, and even though it didn't seem like it did anything, I was able to import it again with the WebGUI. However:
[screenshot]
All my disks are showing, so I'm not sure why this is.
|
|
Ziploc posted: All my disks are showing. So I'm not sure why this is.
This is, indeed, strange. Random question, but have you ensured you've updated FreeNAS to the latest stable build? Go to System -> Update -> Check Now and see if it turns up anything.
|
|
DrDork posted: This is, indeed, strange. Random question, but have you ensured you've updated FreeNAS to the latest stable build? Go to System -> Update -> Check Now and see if it turns up anything.
Confirmed. This is zpool status -v:
[screenshot]
EDIT: Oh shit. I just realized that my volume was not called 'Volume'; it was called 'Volume 1', and I'm having the same problem as this guy: https://forums.freenas.org/index.ph...e-import.56217/
Ziploc fucked around with this message at 13:25 on Jul 24, 2017
|
One more reason to go command-line only.
|
|
Okay. More progress. I exported "Volume 1" and re-imported it using the CLI, changing the name to "Volume", and the GUI was finally happy. However, I noticed that when I use the CLI to navigate, I still see /mnt/Volume 1/[my shares]. Why is that? I'm rebooting currently to see what effect that has.
EDIT: Restart didn't do anything. I also have this problem while trying to create a share so I can rescue my files:
[screenshot]
Ziploc fucked around with this message at 15:20 on Jul 24, 2017
|
Ziploc posted: Confirmed.
Just do a zpool import "Volume 1" -f from the command line?
|
|
Mr Shiny Pants posted: Just do a zpool import "Volume 1" -f from the command line?
That appears to not be an option.
[screenshot]
So I can rename it during import. But that seems to leave the /mnt/ path fucked up with the old volume name. Why is that?
|
|
Ziploc posted: That appears to not be an option.
You've got the syntax backwards. You want:
zpool import -f "Volume 1"
-or-
zpool import -f "Volume 1" Volume_1
or whatever else you want to try to rename it to. If that last screenshot is your current status, you haven't actually renamed it; it's still "Volume 1", which would explain why it shows up as such in the CLI. As much as this sucks for you, it's good to know that I should never try to give a pool a name with a space in it...
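To spell that sequence out, a minimal sketch using the pool and target names from this thread (the -f forces the import when ZFS thinks the pool is still in use or wasn't cleanly exported):
```
zpool import                       # no arguments: list pools that are available for import
zpool import -f "Volume 1"         # import under the existing name
zpool import -f "Volume 1" Volume  # or import and rename in one step: old name first, new name second
```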
|
|
DrDork posted: You've got the syntax backwards. You want:
I named it back to Volume 1 to try what he suggested.
[screenshot]
|
|
Ziploc posted: I named it back to Volume 1 to try what he suggested.
I screwed up the syntax; DrDork is right. Sorry.
|
|
Yeah, the -f flag doesn't help. I don't think the GUI will ever be happy. If I change the name (to Volume), the storage manager is happy and shows everything as good. However, the /mnt/ path doesn't change (it remains "/mnt/Volume 1"), and the GUI won't let me create a share, even though I can see the paths and folders. I guess I just have to copy my files off with the CLI and start fresh?
|
|
The people in the thread you posted say that you also need to reset all the settings, because that's the easiest way to completely get rid of the old volume name. So export the volume again and just start over.
|
|
Tamba posted: The people in the thread you posted say that you also need to reset all the settings, because that's the easiest way to completely get rid of the old volume name.
Can you define how to 'start over'? I'm not even sure how to reset all the settings.
|
|
In FreeNAS 9 it's System-->Settings-->Factory restore. No idea if it's still the same in version 11. The other option would be to just flash the USB stick again.
|
|
So: export, restore, import (while changing the name at the same time). Here goes nothin'!
|
|
The mountpoint property (zfs get mountpoint Volume_1/whatever) is probably wrong. Set it to the correct value.
|
|
Restored to factory settings.
zpool import "Volume 1" Volume
Storage settings look fine, but /mnt/ still has a reference to the old volume name.
[screenshot]
evol262 posted: zfs get mountpoint Volume_1/whatever is probably wrong. Set it to the correct value.
I'm going to do my best to figure out how to set that properly.
|
|
Ziploc posted: I'm going to do my best to figure out how to set that properly.
Here is the manpage for the zfs command.
|
|
The extra /mnt directory could just be a leftover that doesn't get wiped on a system reset.
Ziploc posted: That appears to not be an option.
zpool was giving you an error because it tried to import a pool named "-f"; put the flag before the pool name ('zpool import -f "Volume 1"') in the future.
|
|
I'm in a bit of a conundrum. I built my unRAID NAS a few years ago (https://pcpartpicker.com/user/Photex/saved/gZyj4D) and it's been working wonderfully, but now I think I'm stuck with no more room to expand in the box, physically and I/O-wise. I've been thinking of buying a PCI-E eSATA card and an external enclosure, unless there is a better way outside of dropping a ton of $$$ on a new build. I know the build isn't the flashiest, but it definitely suits my needs, running a few apps in Docker and housing all my media. Any enclosure suggestions, or am I just wrong?
|
|
Isn't the mountpoint one of the properties of a ZFS pool? Such that changing the name won't change the mountpoint?
|
|
FISHMANPET posted: Isn't the mountpoint one of the properties of a ZFS pool? Such that changing the name won't change the mountpoint?
Yes and no. It's a property of a ZFS filesystem, and changing the name of the pool won't change it. zpool properties are mostly about dedup, ashift, etc.; zfs properties are about ACLs, sharing properties, case sensitivity, etc. See:
zpool get all Volume_1
vs
zfs get all Volume_1
To set it, just:
zfs set mountpoint=/mnt/Volume_1 Volume_1
assuming Volume_1 is also the name of the filesystem (ZFS does this by default per pool, but you can check with `zfs list`).
|
|
Is it just as simple as using the rename command?
[screenshot]
It looks like the mountpoint is what needs to be changed. Following this guide won't fuck anything up, will it? https://www.itfromallangles.com/201...rename-a-zpool/
This should fix it?
zfs set mountpoint Volume /mnt/Volume
Ziploc fucked around with this message at 18:42 on Jul 24, 2017
|
You want "zfs set mountpoint=...". The = is important. No idea what freenas has shoved in those configs, though, or where they're mounted. Hopefully they're plaintext. "mountpoint=legacy" means they're shoved in fstab somewhere, so you may need to update those by hand.
|
|
I think I've officially spent more time on this than it's worth. There's not that much data on this volume anyway, and since I can access the data from the CLI, I'm just going to copy it to a local Windows share with the CLI. If setting the mountpoint fails, I'll just start from scratch. Thanks for your help, guys. I did learn a lot and do feel more comfortable with FreeNAS now. And so I don't have to go through this again, what's the most foolproof way to back up FreeNAS USB keys?
|
|
Ziploc posted: And so I don't have to go through this again, what's the most foolproof way to back up FreeNAS USB keys?
You can shove two keys in and use them as a RAID1. You can also get everything all set up and then download the config file. Then, if you manage to lose both keys, you can reinstall from scratch, upload the config file, and it should pick everything up like it never happened. Or you can use the disk imaging software of your choice to back up a complete copy every now and then. Even though it won't be recognized by Win10 as a valid filesystem, I'm pretty sure Windows tools like Acronis will still allow you to do a raw device image backup.
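If you'd rather take the raw image from a Unix box than a Windows tool, a minimal sketch (the device node and output path are illustrative; triple-check the device first, since dd will happily overwrite the wrong disk):
```
# Image the whole boot key to a file, then write it back to a replacement key if needed.
dd if=/dev/da0 of=/mnt/tank/backups/freenas-boot.img bs=1M
dd if=/mnt/tank/backups/freenas-boot.img of=/dev/da1 bs=1M
```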
|
|
Alright, cracked open one of the easystores. With the serial number on the drive inside, WD's site recognizes that it's an easystore, and gives it a warranty of just slightly over 2 years. The serial on the case and the one on the drive itself are identical, so I'm going to have to keep the cases on the off chance that I need to RMA.
Edit: Got it out of the case without breaking any of the clips. The drive requires a T10 screwdriver to remove the rubber mounts.
Edit 2: One more thing of note: WD changed the bottom screw hole positioning on the 8TB and up drives. There are no middle holes, only front and back. Relevant to the mounting in some tray styles for certain cases. I left the rubber grommets on my middle holes in the tray, and used the mounts from the external case to fill the remaining two holes.
G-Prime fucked around with this message at 03:33 on Jul 25, 2017
|
People on hardocp are saying there's a cache difference depending on whether it was made in Taiwan or China, apparently. Taiwan is double; I got a China one from a store today because I'm a sucker for a deal.
|
|
So, a problem statement for the NAS gods up in here:
I'm currently running an entry-level QNAP 2-bay (TS-212p), and while it technically does do all the things I do, it's CPU-limited to the point where neither SABnzbd, Transmission nor Download Station can manage over 3.5MB/sec on a 100mbit connection, let alone any transcoding or other business. I currently have a single 4TB and a single 3TB Red in there running separate volumes with shares divvied up between them, and I'm down to about 500GB free on both. So say 5.5TB of data I want to retain.
I'm looking to build a FreeNAS box to serve media and store some ISOs, run Plex, a torrent client, SABnzbd, maybe Sonarr/CouchPotato if I can figure them out. I have a spare i5 2400, a Z68 board with 6 SATA, about 20GB of DDR3 and a few 120GB M.2 SSDs (2 with SATA enclosures) lying around doing nothing, and a case with 7 3.5" bays, so the plan is to use this, and maybe upgrade to a Denverton setup when they come out if I'm having any issues with the lack of threads/ECC.
The biggest issue as I see it is that my budget will stretch to maybe one 4TB Red but not two of them right now, so redundancy is definitely out of the question - but none of the data in question is stuff that I can't just download again, so I'm not really bothered. I was thinking the best approach would be to buy a new 4TB, create a single-disk vdev and a new zpool, copy the contents of the other 4TB over to that, create another single-disk vdev and add it to the zpool, etc. until both of my current disks and data are on there (see the sketch below). Based on my understanding, this way if a single disk fails I lose that vdev, but not the whole zpool, and I take a performance hit for my particular use case, as bigger vdevs are apparently better for HD media, as I understand it. As I will only be streaming to maybe 2 clients at a time, this shouldn't matter.
I can't stress enough that I REALLY don't need redundancy for this stuff, but this setup seems better than JBOD or a single vdev without parity for what I'm doing. I can use CrashPlan or something if I decide I really don't wanna lose a chunk of it to disk failure. I may consider migrating it all to a Denverton setup when that comes about, to gain the advantages of more cores and ECC, but I'm mostly just serving media to Kodi with this thing. I do want the ability to add more disks as I can afford them, and would prefer to avoid the risk of losing the whole lot if just one carks it.
Let me know if there's a better way to do this in budget, I guess. Or if this is a really stupid idea even for Linux ISOs.
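For what it's worth, the grow-as-you-go flow described above maps to something like this; pool and device names are purely illustrative, and each zpool add bolts on another single-disk top-level vdev with no redundancy:
```
zpool create media ada1   # new 4TB as a single-disk vdev
# ...copy the contents of the old 4TB onto the pool, wipe the old disk, then fold it in...
zpool add media ada2      # second single-disk vdev; new writes stripe across both
```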
|
|
G-Prime posted:
Bottom and side middle holes missing, or just bottom? Seagate pulled both sets of middle holes on the Ironwolf, so there must be some advantage to doing so.
|
|
Don Dongington posted: So a problem statement for the NAS gods up in here:
I'm rebuilding my storage around a HP Z400 CAD workstation with a Xeon W3565 (Bloomfield) and 16 GB of ECC. You can get similar machines on NewEgg or eBay or other surplus channels pretty cheaply; they go for about $250 nowadays, I think. Triple-channel RAM, not sure about PCIe lanes. Big downside: only PCIe 2.0. Also, it uses what looks like a standard 24-pin ATX connector, but it actually has a few pins swapped and won't boot with an off-the-shelf PSU, although IIRC you can find adapters.
How big a pool can you actually get on 16 GB before performance tails off? I was actually wondering: if you are doing an IO-oriented setup, then the chipset's SATA lanes will bottleneck through the QPI link, right? (probably 2.0x4) So to actually fully exploit a multi-drive NAS I'd want to use a SATA or SAS controller? (Totally doesn't matter for this guy's use case, though; go hog wild with the onboard, it's got something stupid like 6 SATA ports.)
Ye olde off-lease ThinkServer TS140 is probably still a better deal for low-end users, but the config is usually very basic (4 GB) and ECC RAM adds up (and I was given this for free years ago - I used to game on it). But yes, literally any random old PC will make a very fine NAS server/SAB station. Going from a Raspberry Pi to an Athlon 5350 was a major leap ahead, and an older high-power PC will greatly outperform lower-power/embedded stuff (and some NASs really aren't much more than a Pi). It'll handle that shit no problem; extracts won't be lightning quick, but it'll happily churn through serving a Samba share and downloading via SAB and an extract at the same time. With a small SSD as a boot/scratch drive it's pretty nice. You do need a SATA/SAS card because it only comes with 2 ports by default (my Z400 came with one, I ganked it), but other than that it was a pretty cute basic fileserver you could pick up for $40 (CPU+mobo), and it maxed out around 35W at full load.
In terms of software, nothing makes you run ZFS in a mirrored mode, or use snapshots, or whatever. Just allocate one volume at 100% of the pool size, mount, and away you go. You still passively benefit from scrubbing, although it does take a little bit of RAM. The downside is it's supposed to be harder to expand a pool once it's started (not clear on why). The alternative is something like LVM, which gives you the same functionality minus the data integrity checking (and system requirements). One 11 TB volume is much nicer than three smaller drives either way. If you back up any important stuff externally or whatever, then sure, go hog wild; worst case is you lose your precious animes.
With some M.2 NVMe drives you could do L2ARC cache for ZFS or have a bunch of swap space - this would basically be ZFS transparently caching 256 GB of the most-accessed files. Old boards like this don't have native M.2 ports, but you can get PCIe adapter sleds for real cheap (like $20) - although you will only get 2.0x4 speed, which will bottleneck the drive a bit. In fact you can even get double/quad carriers for a x8 or x16 port. Also, there's probably no reason you can't use SATA SSDs for that too (on a SATA/SAS controller); it's just not gonna be quite as fast as NVMe.
Also, if you need something faster than SAB, there's always NZBget, which is a native C++ implementation. It's leagues faster than Python - but in my experience you need to be very sure to stop it nicely and shut it down. I used it on a Raspberry Pi for a while, and it made it slightly tolerable if I wasn't doing anything else, but the Pi's tendency for hard poweroffs doesn't mix well with NZBget's native C++ structures and the potential buffering of flash writes. The app state used to regularly shit itself and lose your queue, and get into all kinds of quirky behavior. Usually you'd have to nuke the files and start over.
Paul MaudDib fucked around with this message at 04:47 on Jul 25, 2017
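For the L2ARC idea above, attaching (and later detaching) an SSD as a cache device is a one-liner; a minimal sketch, with pool and device names purely illustrative:
```
zpool add tank cache nvd0   # add an NVMe SSD as L2ARC for pool "tank"
zpool iostat -v tank        # the cache device shows up in its own section with its own stats
zpool remove tank nvd0      # cache devices can be removed again at any time
```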
|
EL BROMANCE posted: People on hardocp are saying there's a cache difference depending on whether it was made in Taiwan or China, apparently. Taiwan is double; I got a China one from a store today because I'm a sucker for a deal.
It's Thailand and China, but yes, the Thailand ones have 256MB, and China only has 128. I got 7 Thailand and 1 China out of the bunch I grabbed.
IOwnCalculus posted: Bottom and side middle holes missing, or just bottom?
Bottom only, I think. The bottom ones prescribe locations for 6 holes, but they're only required to put any 4 on there. The advantage is that it frees up a little bit of extra space for the additional platter. I'll double-check the side holes when I open up the next case.
I started my burn-in test this morning and came home to find that a) the PC I had all the drives connected to had hard-locked during the run, and b) when I rebooted it and started it all over again, the performance was so awful that it was going to be roughly 28 days just for the write portion of the badblocks runs for all the drives running in parallel. Yay USB2. I didn't have any feasible way to connect them up via USB3. At this point, given that the warranty is intact, I've decided to just live dangerously since the SMART tests came back clean, and I put in the first drive to resilver on my array. It's going well so far, ~300M/s after I did some tuning on kernel parameters. If all goes well, it'll only take a bit over a week to get all these drives live and double the capacity of my array.
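The swap-one-drive-at-a-time resilver described here is, in ZFS terms, roughly this per disk (pool and device names are illustrative):
```
zpool replace tank ada3 ada8   # swap the old disk for the new one; ZFS resilvers onto ada8
zpool status tank              # shows resilver progress, throughput, and estimated time left
```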
|
|
IOwnCalculus posted:Bottom and side middle holes missing, or just bottom? Middle holes get in the way of using the entire 1" height of the 3.5" drive form factor for platters. Laptop drives went through a similar mounting hole location redesign a while back for the same reason, but a bit more forced because once you go down to 9.5mm height you really have to ditch the middle holes. The original mechanical design for this stuff was done so long ago that everyone involved expected that of course the entire bottom of the drive would be a PCB packed with electronics forever so why wouldn't you put mounting screw holes wherever the fuck you felt like, there's no way to fill that space with platters anyways.
|