|
SCheeseman posted:
> I threw together a Ryzen 3 1200-based Linux server a few years ago that I've been using for Plex etc. as well as a QNAP NAS that I want to retire. The way storage is set up is a bit of a mess at the moment, so the idea is to set up a software RAID5 with 9x8TB hard drives on the server (eventually expanding to 10 or 11 drives). I already have 6x8TB drives that aren't in an array (four in the NAS and two in the server) and I'm going to buy another 3x8TB drives, create the initial array, then transfer stuff over, adding drives to the array as I empty them. They're SMR drives, so I imagine this will be slow as hell and I understand there will be a speed penalty during RAID rebuilds, though the NAS is only really used to store video files for streaming, so speed requirements aren't high. Is this a terrible idea?

9 drives and 1 parity? And SMR? And that much rebuilding? Terrrrrrrible idea in every single way.
|
|
I need to work with what I have, so suggestions would be useful. The other choice is just leaving them all as discrete disks and accepting data loss as inevitable, which seems kind of worse?
|
migrate to unraid?
|
|
Might be doable actually, I have a spare G4560+mobo which should presumably work fine?
|
|
Should be fine if you're not doing multiple transcoding streams for plex (if you are I'd get a cheap nvidia card to handle it anyway).
|
|
SCheeseman posted:
> I need to work with what I have, so suggestions would be useful.

He's not really wrong, though. With a 9x RAID5 SMR setup, if you lose one drive you're really rolling the dice on turning that into losing the entire array, or at the very least having a hell of a time getting it back into a healthy state. And repeatedly rebuilding as part of your transition process is, well, there's got to be another way. How much data do you have now? I'd say the ideal end result would be that you buy an additional 8TB drive and do a pair of 5-drive RAID5s, or one 5-drive and one 4-drive, if you can't/don't want a 10th drive.
|
|
Oh snap, I received all 4 of my 10TB PMR hard drives, and my QNAP's out for delivery! I think I'll configure a RAID 5 for these 4 drives.
|
eames posted:
> Digitimes reports that WD is planning to increase the price of enterprise drives. You know, the ones they recommended to consumers who don't want SMR.

Both the price increase and the inclusion of SMR drives in lower-capacity consumer HDDs have the same root cause: as much production capacity as possible is being allocated to large-capacity enterprise drives, and even then supply is lower than demand. This is exacerbated by lower production yields of newer 14+ TB products. These products have higher margins and FAANGs et al. are buying them all up, so it doesn't make sense to allocate production capital to other product segments (nor does it make sense to expand factories just to build consumer HDDs).

Even now, virtually everything being built (from the component level up) is for high-capacity enterprise drives. Lower-capacity desktop/consumer HDDs are being phased out of product roadmaps entirely, with the plan essentially being that anything not going to one of the supermajor datacenter big dicks will just be leftovers, with consumer/desktop PC HDDs being the last in that process.
|
|
Dropbox has been moving to SMR and I'm sure the others are too, at least for some workloads
|
|
Bob Morales posted:
> Dropbox has been moving to SMR and I'm sure the others are too, at least for some workloads

Yes, large customers use both SMR and CMR drives, and many/most of them use internally developed host-management strategies specific to their needs. There's nothing inherently bad about SMR; it's just ill-suited for some tasks. But the point is, the drives sold for home/consumer use are increasingly going to be just "whatever".
|
|
Morbus posted:
> Yes, large customers use both SMR and CMR drives, and many/most of them use internally developed host-management strategies specific to their needs. There's nothing inherently bad about SMR; it's just ill-suited for some tasks. But the point is, the drives sold for home/consumer use are increasingly going to be just "whatever".

I certainly can't speak for everyone, but my whole beef with the "whatever" is that it's exactly the attitude you don't want in a drive sold for things other than typical home use. I'd pretty much expect manufacturers to give no fucks about the equivalent of a WD Blue / Green, but the Red was just a slap in the face. And, yes, SMR is not inherently bad - but the solutions most of us in here are using for RAID / drive pooling aren't really able to accommodate SMR yet.
|
|
eames posted:Digitimes reports
|
|
IOwnCalculus posted:
> I certainly can't speak for everyone, but my whole beef with the "whatever" is that it's exactly the attitude you don't want in a drive sold for things other than typical home use. I'd pretty much expect manufacturers to give no fucks about the equivalent of a WD Blue / Green, but the Red was just a slap in the face.

No question that there are a lot of consumer use cases that differ a lot from "typical home use", which is exactly why people are upset. The problem is, at the technology/product level, close to zero effort is being directed towards anything other than large-cap enterprise, and the drives sold for *any* consumer/non-datacenter use are just cobbled together from what's there.
|
|
DrDork posted:
> Depends entirely on how comfortable you are with a DIY system. If you are, we can give some recommendations, and in that case a 6-drive setup isn't crazy.

I'm OK with building a system. I just didn't know if there was something recommended that was cheaper/easier. As far as space goes, I just figured if I'm doing it now I might as well make it big so I don't really have to worry about space for a while.
|
|
SCheeseman posted:
> I threw together a Ryzen 3 1200-based Linux server a few years ago that I've been using for Plex etc. as well as a QNAP NAS that I want to retire. The way storage is set up is a bit of a mess at the moment, so the idea is to set up a software RAID5 with 9x8TB hard drives on the server (eventually expanding to 10 or 11 drives). I already have 6x8TB drives that aren't in an array (four in the NAS and two in the server) and I'm going to buy another 3x8TB drives, create the initial array, then transfer stuff over, adding drives to the array as I empty them. They're SMR drives, so I imagine this will be slow as hell and I understand there will be a speed penalty during RAID rebuilds, though the NAS is only really used to store video files for streaming, so speed requirements aren't high. Is this a terrible idea?

Okay, instead of just saying it's a bad idea, here's why it's a bad idea:

1) Rebuilding is a very likely point of failure. Rebuilding 6 times is insanity.
2) RAID5 only gives you 1 drive of parity. You can only afford to lose 1 drive before you lose everything. Any failure during a rebuild will be the end.
3) SMR drives in RAID will not rebuild or calculate parity properly, and rebuilding to/from an SMR drive is just more data loss.
4) Software RAID is usually not portable. That means if you have to reinstall your OS, your RAID won't go with it. Usually this matters when your hardware dies and you have to buy new parts.
5) If you are buying new drives for a RAID, don't buy SMR drives.
6) Rebuilds take a very long time. Aim to minimize rebuilds.

And what I would suggest, given that you have a spare motherboard/CPU:

1) Buy new CMR server/NAS-specific drives. For example, the WD Easystores or My Books (and pull them out of the enclosures), or the WD Red 8TB+, Seagate IronWolf, Toshiba something-or-other drives, etc. The 8TB Easystores are usually on sale, but the 10, 12, and 14TB ones go on sale pretty regularly too.
2) Install FreeNAS onto said spare motherboard/CPU. If you have a spare SATA port, use an old hard drive or SSD for this; it's only for the OS, not data. If you don't have a spare SATA port, a SATA-to-USB enclosure would be the route to take.
3) Make a new pool with your new hard drives. Use as much parity as you think you will need; if you are using 3 drives, a raidz1 should be OK.
4) Make sure to set up regular scrub and snapshot tasks (rough commands sketched below).
5) Copy over the data from a few drives first, then once they are empty, set up a new pool with those drives. Repeat.
6) Make sure to save your FreeNAS configuration on something that isn't on that storage.

What drives do you have now? Are you sure they are SMR drives?

Obviously, these are just some basic steps with few specifics.

- I have no idea how unraid works, so someone else will have to tell you about it.
- The best part about using ZFS and FreeNAS is that the storage info is stored on the drives, so even if you need to replace your CPU/motherboard/FreeNAS boot drive and you have no backup of your configuration, you can simply import the disks and your data is still there.
- The speed penalty during a RAID rebuild is that your drives will basically be unusable during that time. You should strive to avoid rebuilds if you can help it.
- Running VMs is a completely different can of worms.
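For reference, here's a rough sketch of what steps 3 through 6 look like at the ZFS command line. FreeNAS normally drives all of this from its web UI, so treat this as a sketch of the underlying commands only; the pool name "tank" and the da1/da2/da3 device names are placeholders, not anything specific to your box.

```
# Minimal sketch, assuming a pool called "tank" built from disks da1/da2/da3.

# Create a 3-drive raidz1 pool (one drive's worth of parity):
zpool create tank raidz1 da1 da2 da3

# Check pool health at any time:
zpool status tank

# Periodic scrub (FreeNAS schedules this as a task; by hand it's just):
zpool scrub tank

# Recursive snapshot of the pool, e.g. nightly from cron:
zfs snapshot -r tank@nightly-$(date +%Y-%m-%d)

# Later, on a rebuilt system: the pool metadata lives on the disks themselves,
# so importing is all it takes to get the data back.
zpool import tank
```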
|
|
Toshiba telling people about their SMR drives... https://toshiba.semicon-storage.com...20200428-1.html
|
|
Wild EEPROM posted:
> Okay, instead of just saying it's a bad idea, here's why it's a bad idea:

Agree with most of your points, but why should parity calculation (or even rebuilding) be an issue with DM-SMR drives?
|
|
Morbus posted:
> Agree with most of your points, but why should parity calculation (or even rebuilding) be an issue with DM-SMR drives?

Because during a rebuild, the drive is going to experience what it's most poorly suited to handle: a prolonged, very large write. This can cause the drive to basically act unresponsive while it catches up, which most RAID implementations take as "oh, drive's fucked, drop it and start over".
|
|
IOwnCalculus posted:
> Because during a rebuild, the drive is going to experience what it's most poorly suited to handle: a prolonged, very large write. This can cause the drive to basically act unresponsive while it catches up, which most RAID implementations take as "oh, drive's fucked, drop it and start over".

Long sequential writes shouldn't be an issue, though, especially if you are starting from a blank state and don't care about what was previously on the drive. In fact, even random writes shouldn't be an issue if you free up every sector on the drive beforehand (which should happen if you are rebuilding, right?). Also, I still don't see how parity calculation would be a problem.

Don't get me wrong, I appreciate how a RAID implementation not specifically designed to deal with the write characteristics of SMR drives could end up having problems in any task involving prolonged writing; it's just that the kind of writing involved in a RAID rebuild seems ideally suited to avoiding those problems (mostly sustained, sequential writes; no need to care about previously written data).
|
|
DM-SMR drives write all incoming data to a CMR cache, and when that cache fills, the drive stalls for an extremely long time while it dumps that cache. The host has no control over how the data is written or when the cache is rewritten at a higher density. They're designed for short bursts of write activity, not sustained writes.

Host-managed SMR is a different beast entirely, and not available on these disks AFAIK, considering that's actually an enterprise feature.
|
Is there a way to make ZFS not fail the drive for up to like 10 minutes or something (FreeBSD or Linux)? That would give you enough cushion to ride over the "stalls".

In other news, my cool lifehack of using a USB-to-SATA bridge to boot my root filesystem on my whitebox microserver, to free up a full 8 ports for storage, has ended poorly. After 2 years the bridge has died, or at least I'm assuming it's the bridge and not the SSD. It unmounted the root filesystem during boot, with 937 errors on the root filesystem (which is independent from my data volumes) when I last checked before it dropped off again. In the dmesg logs, my "scratch" NVMe drive apparently has a "missing GPT table, using secondary GEOM" message, although I may have forgotten to init one before passing it to ZFS, so that may have always been there. I did rsync everything off and nothing was showing up damaged in the scrub anyway.

I've been thinking about moving to a different fileserver with an eye towards eventually setting up a rack; guess I'll be putting that together before too long. Booting from the NVMe will be fine, I just liked the idea of having boot on a cheap SSD.
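For what it's worth, the only knobs I know of are below the ZFS layer, which is why I'm asking whether there's a ZFS-level one. A Linux-side sketch of those, with /dev/sdX as a placeholder for the actual member disk (FreeBSD has its own sysctls that I won't pretend to remember):

```
# Sketch only; /dev/sdX is a placeholder. These affect how long the kernel and
# the drive itself retry before giving up, which is what usually gets an SMR
# disk kicked out of an array during a long stall.

# Kernel side: how many seconds the SCSI layer waits on a command before
# resetting the device (default is typically 30):
cat /sys/block/sdX/device/timeout
echo 180 > /sys/block/sdX/device/timeout

# Drive side: SCT Error Recovery Control, in units of 100 ms (here 7 seconds
# for both reads and writes). Many desktop drives don't support it:
smartctl -l scterc,70,70 /dev/sdX
```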
|
|
Riding out 10 minute stalls does not sound like my ideal recovery scenario
|
|
corgski posted:
> DM-SMR drives write all incoming data to a CMR cache, and when that cache fills, the drive stalls for an extremely long time while it dumps that cache. The host has no control over how the data is written or when the cache is rewritten at a higher density. They're designed for short bursts of write activity, not sustained writes.

That makes sense, although I think the largest market for DM-SMR drives is video surveillance, which I would think involves long sustained writes. Probably they can handle sustained writes up to a certain data rate, which is exceeded during a RAID rebuild.

Seems dumb that they just fill up their cache and then stall until they can dump it, as opposed to throttling to a painfully slow but steady data rate. Also seems dumb that they bother with a CMR cache at all for writes that don't require one, especially considering that, in terms of intrinsic sustained write capability, SMR tends to actually be faster than CMR. Then again, DM-SMR has been a shitshow since day 1 and mostly an afterthought, so it's not too surprising that the device management sucks.
|
|
I can't speak for other manufacturers, but as it stands WD Purples are all CMR.
|
|
corgski posted:
> I can't speak for other manufacturers, but as it stands WD Purples are all CMR.

Yeah, seems like some larger customers are using host-managed SMR for bulk data / video (which makes sense, it's an ideal use case if you aren't dumb about how you do it), but the branded stuff is CMR. That makes sense if the approach to DM-SMR is "lol write to a CMR cache then yolo when it fills up".
|
|
If you're in this thread asking for advice and you're wanting to run "cheap" SMR drives or old-school RAID, more power to you, but it's the wrong choice in TYOOL 2020. ESPECIALLY if you're just starting out.
|
|
The ideal use case for cheap SMR drives is pooling them together to serve up all your Linux ISOs. If you lose one, you just re-download everything that was on that disk.
|
|
What are you going to do when you want to install all the Linux versions at the same time and your NAS can't keep up? vvv shit, oops
|
taqueso posted:
> What are you going to do when you want to install all the Linux versions at the same time and your NAS can't keep up?

Thought SMR was a write-impacting tech, not read...
|
|
If you can afford to license the patent, the best way to destroy an SSD is a blender and a pulverizing agent.
|
|
xarph posted:
> If you can afford to license the patent, the best way to destroy an SSD is a blender and a pulverizing agent.

Wanna meet the guy who successfully recovered data from SSD particles three millimetres in diameter.
|
|
Wild EEPROM posted:
> - I have no idea how unraid works, so someone else will have to tell you about it.

License tied to a USB stick is kind of a weird limitation tho.
|
|
After looking more deeply into it I think I'll go with unraid. Thanks for the help!
|
|
SCheeseman posted:
> After looking more deeply into it I think I'll go with unraid. Thanks for the help!

YOU WON'T REGRET THIS*

*I've not yet had a drive go bad in unraid, and while I don't really give a shit because it's all Linux ISOs, I also can't comment on recovery from the parity disk. In the meantime it's made heaps of automation really easy, and I've had good success with GPU passthrough to VMs and dockers. I've built this kind of setup from scratch in the past on Debian and hated the admin overhead; unraid largely just works, and it's worth the price if your use case is simple media storage and a few dockers/VMs on top of the media, imo. Get the Community Apps thing as soon as possible, and anything 'binhex'-prefixed I rate as working really well (sonarr, plexpass, deluge etc.)
|
Thanks for the advice! I was pretty sure RAID5 was stupid but I was finding it hard to get solid information on this stuff from internet searches, just people with unsubstantiated opinions and vague documentation.
|
|
SCheeseman posted:
> Thanks for the advice! I was pretty sure RAID5 was stupid but I was finding it hard to get solid information on this stuff from internet searches, just people with unsubstantiated opinions and vague documentation.

RAID5 isn't inherently stupid; it's actually quite reasonable for many situations. It's just not an ideal setup for large arrays due to the relatively low amount of protection it provides. "Common wisdom" says to limit a RAID5 array size to 4-5 drives. At 6+ you'd preferably be using something that gives you at least 2 parity drives, or breaking it all up into smaller arrays (like 2x 3-drive RAID5s).
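To put rough numbers on it for the 8TB drives being discussed (raw drive capacity only, ignoring filesystem overhead and the TB-vs-TiB difference):

```
usable ≈ (drives - parity drives) × drive size

9-wide RAID5 (1 parity):           (9-1) × 8 TB = 64 TB, survives 1 drive failure
2× 5-wide RAID5 (1 parity each): 2×(5-1) × 8 TB = 64 TB, survives 1 failure per group
9-wide RAIDZ2 (2 parity):          (9-2) × 8 TB = 56 TB, survives any 2 failures
```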
|
|
I meant stupid in the context of my use case. I don't really feel knowledgeable or passionate enough to hold a strong opinion on different kinds of RAID, except for RAID2: fuck you, you have no reason to exist.
|
Wild EEPROM posted:
> 4) Software RAID is usually not portable. That means if you have to reinstall your OS, your RAID won't go with it. Usually this matters when your hardware dies and you have to buy new parts.

I have personally moved Windows dynamic disks and Linux md arrays between systems with no problems, and as far as I'm aware OS X's disk sets are equally portable. Windows won't automatically mount the array (it'll flag it as foreign by default), but that's a matter of two clicks in Disk Management to import it. Likewise on Linux, you have to do an mdadm scan for it to identify a newly attached array, but we're not talking about rocket science here.

Are you maybe thinking of those setups often bundled with "gamer" motherboards that are some proprietary softraid pretending to be hardware RAID? Those are somewhat tied to the hardware of course, but they're easy enough to just not use. I'm pretty sure Linux md is actually able to mount a lot of those as well, as long as the array layout is stored on disk somewhere and not just in an EEPROM on the motherboard.
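For anyone curious, the Linux side of that is roughly the following. The md superblock on the member disks is what makes it work; /dev/md0 and the member partitions here are just illustrative names, not anything from the posts above.

```
# Rough sketch of bringing an existing md array up on a fresh Linux install.
# Array and device names are illustrative; substitute your own.

# Scan attached disks and assemble any arrays mdadm recognizes:
mdadm --assemble --scan

# Or assemble a specific array from known members:
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1

# Inspect what was found:
mdadm --detail /dev/md0

# Make it come up automatically on boot (config path varies by distro):
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u   # Debian/Ubuntu
```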
|
|
He's probably thinking of SoftRAID
|
|
Morbus posted:
> Wanna meet the guy who successfully recovered data from SSD particles three millimetres in diameter.

But, but, that's millions of your bits all hooked together, just waiting to be intercepted by an alien species with a Dyson sphere to power their reconstruction computers.
|