Meh. I went through some annoying times getting everything to work, but as usual I learned a load more stuff. Whether I retain that knowledge for next time is another matter.
IOwnCalculus posted:I've never had a problem with import -f, it's just annoying to go: Generic Monk posted:holy fuck ain't no kill like overkill
Does anyone know if SMB3 benefits from Windows' caching? It's file- and not block-based, so I'm wondering whether the network side benefits from it. From what I've been able to find out, SMB3 seems to have slightly higher performance than iSCSI, so I've been looking at moving my Steam games over to a regular network share instead of an iSCSI drive. --edit: Meanwhile I've arrived at that oplocks stuff. Under Linux, I know for sure it uses the kernel's pagecache, but there's still nothing definite for Windows. I'd assume it uses the cache manager over there, too. Combat Pretzel fucked around with this message at 12:26 on Jul 1, 2017
Although not exclusive to SMB3, there are directory leases and BranchCache, as well as several _UNBUFFERED flags on syscalls, all of which would indicate that there's both client- and server-side caching. Anecdotally, I've also noticed fluctuations in memory use when moving large sets of data.
Hmmm OK. I guess I should load some larger files into memory and see if the "Cached" section in Task Manager grows. Also, I found out Samba doesn't do RDMA yet, so my InfiniBand idea is iced anyway. Let alone on FreeBSD/FreeNAS.
apropos man posted:Meh. I went through some annoying times getting everything to work, but as usual I learned a load more stuff. Whether I retain that knowledge for next time is another matter. true enough! yeah that's the issue with stuff that's intended to be set-and-forget I guess - you almost have to have it be unstable just to retain a decent working knowledge of it. unless you janitor for a living i guess, in which case a truckload of valium is probably a less stressful option than a custom home server
With SMB3 on FreeNAS, performance metrics are interesting coming from iSCSI. On Q32T1, sequential writes went from 151MB/s to 197MB/s and 4K reads from 146MB/s to 186MB/s. But 4K writes dropped from 160MB/s to 45MB/s. The rest is the same within a small error margin.
Anyone know about SMB Direct? What is it? Something a home hoarder can use?
It is a way of using RDMA to send data directly from system memory to the NIC, to achieve higher network speeds when the OS network stack isn't fast enough - similar to iSCSI RDMA / iSER, NFSoRDMA, Storage Spaces Direct, TSO/RSO/GSO, GRE offload and whatever else NICs allow these days. Unless you're running Windows on both the client and server side, I don't believe you'll be able to use it, as it isn't implemented in Samba (similar to SMB Multichannel, which isn't implemented either, and which SMB Direct might depend on in some way). It's not a new idea either - it's been in use at least as far back as AGP, which could access textures in system memory. EDIT: So to answer your other question - with the theme of this thread being overkill, I wouldn't be surprised if someone used it just for the hell of it. D. Ebdrup fucked around with this message at 15:19 on Jul 2, 2017
redeyes posted:Anyone know about SMB Direct? What is it? Something a home hoarder can use? --edit: Too bad the SSD cache layer in Storage Spaces doesn't work* like the L2ARC, otherwise I could section off like 32GB of my SSD, use it as cache, and bullshit my way around it by sticking the iSCSI extents into Storage Spaces instead of running NTFS on them directly. --edit2: *At least that's how I understand it, that it's balanced offline, altho some random TechNet info suggests otherwise. Given it's all closed source, shit is kinda muddy. Combat Pretzel fucked around with this message at 15:29 on Jul 2, 2017
God, that Storage Spaces stuff is so fucking confusing. Is S2D just a clustering extension to regular Storage Spaces, i.e. caching and shit works the same in lowly regular Storage Spaces now, or do I need to enable S2D to get that different data handling? gg Microsoft for being so fucking confusing about things. --edit: Meh, Storage Spaces works as it did before in Windows 8.x/Server 2012. All the new stuff seems to be an abstraction in the cluster shared volumes driver. Internally there's still storage tiering going on, from what I can tell from experimenting. Combat Pretzel fucked around with this message at 17:10 on Jul 2, 2017
Well, that's suspicious. All eight of my ~2.5-year-old 4TB drives (a mix of WD Red and HGST) showed a few checksum errors within ~6 hours of each other, which makes me suspect it was something else and not the individual drives all deciding to fail on the same day. It's an old desktop without ECC RAM, but I do trust the RAM inside it. Everything irreplaceable is backed up, but if my array just up and died it'd take me a long time to recover. I'm running a scrub, then I'll zpool clear and run another, and I've ordered two of the 8TB drives I was planning to use in my next NAS so I can make additional local copies. I wish the Denverton stuff would hurry up and come out, but if these drives (or this old desktop) are really failing I'll have to grab an old Avoton board. Not a good time to be buying hardware for the server or desktop.
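For anyone wanting to follow along at home, the scrub/clear cycle I'm describing is roughly this; the pool name `tank` is a placeholder for whatever yours is called:

```shell
# First scrub: walk every block in the pool and log any checksum errors
zpool scrub tank
zpool status -v tank   # watch the CKSUM column per drive while it runs

# Once it finishes, reset the error counters and scrub again; a clean
# second pass suggests a transient cause (cable, HBA, RAM) rather than
# all eight drives failing at once
zpool clear tank
zpool scrub tank
```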
Grabbed 4x 6TB Seagate Ironwolf drives off of Amazon. They delivered so quickly they didn't even have time to get hot in the back of the delivery truck. Of course, one of them is DOA, won't even spin. The other three are running through nwipe to burn them in.
Ugh, I'm inclined to think these Ironwolf drives are garbage. nwipe is only pulling off about 2MB/sec on each drive, while an ancient 200GB drive I threw in for shits and grins is going about 20x faster. I'll go ahead and build a new zpool with them when the replacement fourth drive gets here but I get the feeling they're all going back to Amazon.
IOwnCalculus posted:Ugh, I'm inclined to think these Ironwolf drives are garbage. nwipe is only pulling off about 2MB/sec on each drive, while an ancient 200GB drive I threw in for shits and grins is going about 20x faster. I'll go ahead and build a new zpool with them when the replacement fourth drive gets here but I get the feeling they're all going back to Amazon. Rotten. Might want to go HGST this time. Never had a bad one. You probably got a box that got kicked/dropped/smashed.
IOwnCalculus posted:Ugh, I'm inclined to think these Ironwolf drives are garbage. Thanks for the review. On paper they look nice what with the 7200 RPM and big cache, but Seagate gonna Seagate I guess
redeyes posted:Rotten. Might want to go HGST this time. Never had a bad one. You probably got a box that got kicked/dropped/smashed. HGST drives come in boxes that can't be kicked?
The three that weren't hard DOA aren't making any unusual noises or anything, but they're racking up tons of ECC corrections in SMART. They're plugged into the same backplane, same breakout cables, and same controllers as my other drives. I'm gonna go with terrible Seagate quality control. If I hadn't already put in for the first one to be replaced, I'd already queue them all up for a refund.
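If anyone wants to check their own drives for the same thing, smartctl from smartmontools will dump those counters; `/dev/ada1` is just a placeholder device name here:

```shell
# Print the vendor SMART attribute table; on Seagate drives the
# interesting ones are Raw_Read_Error_Rate (ID 1) and
# Hardware_ECC_Recovered (ID 195) - watch whether the raw values climb
smartctl -A /dev/ada1

# Overall health verdict plus the drive's internal error log
smartctl -H -l error /dev/ada1
```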
apropos man posted:HGST drives come in boxes that can't be kicked? They come in retail boxes with good packing, so... a bit less chance of dying when they get kicked.
Does anyone else try to order the same size and model of drive from different stores so you'll be pulling from different batches? My biggest worry is that if one drive starts going, the other drives might start going as well at the most critical time: resilvering. I just remember Seagate drives getting the click of death at a certain read/write count for some damn reason.
EVIL Gibson posted:Does anyone else try to order the same size and model drive from different stores so you'll be pulling from different batches? Yes, lots of people in this thread have explicitly done that, for that very reason - especially once you're ordering drives in quantities where fractions of (or whole) dozens make sense as a unit of measure. As to the IronWolf drives, I'd be interested to see if the replacement one IOwnCalculus gets fares any better. Not to white knight for Seagate, but fucked-up batches happen, as do abused boxes and shipping companies more interested in throwing a box onto your lawn as fast as possible than worrying about the contents of said box. NewEgg is semi-famous for their poor HDD shipping procedures, and if the drives were bought as "sold by X and fulfilled by Amazon" instead of direct from Amazon, who knows what their story is.
Hmmm, my 2TB WD RE4 drive in the NAS is about 4.5 years old. At the beginning of scrubs, there are funny fast periodic seeking noises. I can't decide whether ZFS' elevator sorting is having a ball with me due to how the scrub does things, but at some point early in the scrub it stops doing that. I guess I should just replace it. Also, the new FreeNAS 11 UI seems to be partly lipstick on a pig. It doesn't appear to have the async stuff Corral had.
Combat Pretzel posted:Also, FreeNAS 11 new UI seems to be party lipstick on a pig. Doesn't appear to have the async stuff Corral had. It's pretty much just a minorly revamped UI layered on top of 9.x, yeah. Most of the "good stuff" from Corral is actively being ported over to 11, but not until 11.3 and beyond last I checked.
Yeah, they outright said they were just going to put a nice shine on 9.10 to fill in a couple minor gaps, and that there's going to be significant development effort later this year. So I'm going to continue sitting on my perfectly working Corral install for another 6 months or so, or until I can afford to build a brand new NAS and port all my data over, and then worry about it. For all of the flaws Corral had, it's been flawlessly stable for my use case since the week after I upgraded.
DrDork posted:Yes, lots of people in this thread have explicitly done that, for that very reason. Especially when you're starting to order drives in sizes where fractions (or whole) dozens make sense as a unit of measure. Sold by Amazon in this case, though they are infamous for mixing inventory. They were packed in sealed plain Seagate boxes, but it almost seems like the plastic carriers aren't quite big enough for the boxes.
Yeah my Corral setup has been fine, so I'm sitting on it at least until they cook up their Docker solution. I don't wanna ad hoc something and then have to migrate again, when there's not really a pressing need.
does anyone make anything comparable to the hp microserver anymore? not really in the market for anything - mine's been rock solid and for file/media server tasks it's more than powerful enough - but it's kind of a shame that my gen8 is the last model they put out. seems lenovo will sell you something in a tower case that does the same job in twice the space which is pretty lame.
HP Gen10 microserver with AMD APU is coming out: https://www.servethehome.com/new-hp...ron-x3000-apus/ I guess they skipped gen9?
EVIL Gibson posted:Does anyone else try to order the same size and model drive from different stores so you'll be pulling from different batches? Instead of spreading my purchases over retailers, I spread mine over time. When I've got a few extra bucks in the budget, I upgrade a drive in one of my ZFS pools. Over many months, I've upgraded them all in a pool and my available space bumps up.
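In ZFS terms that rolling upgrade is just a series of replaces; a rough sketch, with `tank` and the device names as placeholders:

```shell
# Let the pool grow on its own once the last small drive is swapped out
zpool set autoexpand=on tank

# Replace one drive at a time, waiting for each resilver to complete
# before touching the next one
zpool replace tank ada2 ada6   # old small drive -> new bigger drive
zpool status tank              # shows resilver progress
```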
priznat posted:HP Gen10 microserver with AMD APU is coming out: oh shit why did this not show up on goog this looks like a nice improvement; an APU is kind of a weird choice but could be good for transcoding I guess, although that depends on a shedload of software people getting their shit together
So after not really using my Corral install for very long, I noticed I couldn't get to the GUI and power cycled the machine. Still nothing. Power cycled again with IPMI and... [screenshot of console boot output] I confuse.
Speaking of packaging, Amazon UK once sent me a bare drive in an anti static bag inside a thin cardboard mailer that looked like it was designed for a paperback book. Unsurprisingly, it lasted less than 24 hours before completely dying on me. Absolutely ridiculous.
Can you be more clear on what you're confused about? The first line doesn't look confusing to me at all. Second one makes perfect sense because Corral uses a ZFS partition for boot. Third is just referencing /dev/random having enough entropy built up to be usable, pretty sure.
The last two lines are a bit confusing to me. (Mostly because I'm a scrub, I suspect.) The shares aren't discoverable and I can't seem to get to the WebGUI. I can ping it, though.
EVIL Gibson posted:Does anyone else try to order the same size and model drive from different stores so you'll be pulling from different batches? Assuming you're talking about what I think you're talking about, that last one wasn't a QC or manufacturing defect, it was a firmware bug. quote:The firmware issue is that the end boundary of the event log circular buffer (320) was set incorrectly. During Event Log initialization, the boundary condition that defines the end of the Event Log is off by one. During power up, if the Event Log counter is at entry 320, or a multiple of (320 + x*256), and if a particular data pattern (dependent on the type of tester used during the drive manufacturing test process) had been present in the reserved-area system tracks when the drive's reserved-area file system was created during manufacturing, firmware will increment the Event Log pointer past the end of the event log data structure. This error is detected and results in an "Assert Failure", which causes the drive to hang as a failsafe measure. When the drive enters failsafe, further updates to the counter become impossible and the condition will remain through subsequent power cycles. The problem only arises if a power cycle initialization occurs when the Event Log is at 320 or some multiple of 256 thereafter. Once a drive is in this state, there is no path to resolve/recover existing failed drives without Seagate technical intervention. For a drive to be susceptible to this issue, it must have both the firmware that contains the issue and have been tested through the specific manufacturing process.
Aha. I was more patient and got this through IPMI: [screenshot of console output] Still don't know wtf though.
That looks like your boot media took a shit. Got the fourth drive installed, threw it right into a four-drive raidz... and I'm currently getting <10MB/sec writes on an rsync from my 9-drive raidz2 pool, along with rapidly increasing SMART values on all four drives. These are in the same system, so there's no network limitation at play. I realize bad batches and all that, but I've never had effectively 100% DOA on any one order of drives. The reviews on the IronWolf are pretty poor at most sites too, with at least one other review describing the exact behavior of the one I had that wouldn't spin up (it emitted a high-pitched 'beep' every few seconds). So, yeah, don't buy these.
That message makes me think there's a hardware failure that's starting to happen on one of your drives. You could always try booting with one or more disks gone to try process of elimination.
IOwnCalculus posted:That looks like your boot media took a shit. This. Hopefully you've got a backup of your config, or can roll back to a previous boot environment on the same drive. If you have a backup, wipe the drive and reinstall and then restore the config (or replace the drive and do the same). You can try the rollback without doing anything crazy if it's available from the boot menu when you power on.
BobHoward posted:Assuming you're talking about what I think you're talking about, that last one wasn't a QC or manufacturing defect, it was a firmware bug. Then this was the best-kept secret, because me and several others had no clue wtf was going on in 2006 or so. Tech support was awful to look at back then, and downloads weren't any better, since they all looked like Geocities pages with miles and miles of manufacturer firmwares, and you considered yourself lucky if there was also a link to download the tool to flash the firmware hah EVIL Gibson fucked around with this message at 20:42 on Jul 3, 2017