IMO for home stuff, while FreeNAS/TrueNAS is technically better with ZFS, there's a lot to be said for Unraid's JBOD mechanic.
Maybe if you hate your data.
DrDork posted:"only one can have DHCP enabled" Absolutely fucking infuriating. Not enough for me to change now that I'm in the ecosystem and used to it but god damn.
Crunchy Black posted:Absolutely fucking infuriating. Not enough for me to change now that I'm in the ecosystem and used to it but god damn. Seriously. It's not a big deal for me because I can DHCP my management interface and then just static-assign the bulk data interface, but anything more than 2 gets obnoxious real fast.
Buff Hardback posted:IMO for home stuff, while FreeNAS/TrueNAS is technically better with ZFS, there's a lot to be said for Unraid's JBOD mechanic. I agree. It's much more flexible, and all the scenarios where people run into pinch points with traditional RAID/ZFS simply don't exist in UnRAID. I also prefer to lose random files as opposed to random blocks; at least the data you have left after parity+1 failures can be trusted. That said, of course it's not an enterprise solution, it suffers from bottlenecks, and ultimately it wouldn't be something you'd recommend at the high end. But for the home user, it's pretty much the ideal thing - simple to use and administer, standard hardware, mix and match drives, upgrade as you want to, up to double parity protection. That's a good solution for a LOT of home users. Are checksumming and snapshotting awesome? Hell yes. Are they on the list of features that could be considered absolutely critical for storing a bunch of Linux ISOs? Not so much.
D. Ebdrup posted:Maybe if you hate your data.
I just use RAID 6, sorry not sorry.
HalloKitty posted:I agree. It's much more flexible, That's how I see it too. I don't need or even necessarily desire the absolute blistering speeds or perfect resiliency that other solutions may bring for my Linux ISOs. Anything important is backed up (like it is on every system, right? Not by raid). On checksumming: you can have regular filesystem checksumming done by one of the plugins to detect rot. Detect only, not resolve. That's not the same as the detect-and-restore you may find on other filesystems, but for the foregoing reasons it's enough. Also, remember folks, these are personal/consumer devices. If you want to build a 24-matched-drive ZFS array on Free/TrueNAS, all power to you. If you want to use 10-year-old drives in a used eBay server with a graphics card that's more powerful than the port it's connected to, be our guest!
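To make "detect only, not resolve" concrete, here's a minimal sketch of how that style of checksummer works: hash every file into a manifest on the first pass, then re-hash on later passes and report anything that changed. This is not the actual plugin's code - the manifest name and paths are made up for illustration - but the shape is the same.

```python
import hashlib
import json
import os

MANIFEST = "checksums.json"  # hypothetical manifest location

def sha256(path, bufsize=1 << 20):
    """Stream a file through SHA-256 so large ISOs don't load into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root):
    """First pass: record a hash for every file under root."""
    manifest = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            manifest[path] = sha256(path)
    with open(MANIFEST, "w") as f:
        json.dump(manifest, f, indent=2)

def verify(root):
    """Later passes: re-hash and *report* mismatches - detect only, no repair."""
    with open(MANIFEST) as f:
        manifest = json.load(f)
    for path, expected in manifest.items():
        if not os.path.exists(path):
            print(f"MISSING  {path}")
        elif sha256(path) != expected:
            print(f"ROTTED?  {path}")  # restore this one from backup
```

Fixing a flagged file is on you, typically by restoring it from the backups mentioned above - which is exactly the trade-off being described.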
Heners_UK posted:Anything important is backed up (like it is on every system, right? Not by raid). Exactly.
Yay, the T7810 from Shamino arrives today. I'm planning to roll UnRaid on it. Would love it if someone could confirm this is how I should plan my drives. Here are the drives:
1 - 512GB PCIe M.2 SSD (I'm guessing this is where I install the OS and use as the cache?)
2 - 6TB parity drive - I'm assuming you want the largest drive for this?
Pool on day 1: 6TB, 4TB
Additional drives for the pool once I get it up and running and transfer over my data: 6TB, 5TB
Anyone have a good UnRaid setup guide to recommend? I'm sure they're easy to find but always appreciate a good goon recommendation!
TraderStav posted:Yay, the T7810 from Shamino arrives today. I'm planning to roll UnRaid on it. IIRC you can't install it to a drive, so you'll need a USB stick to install the OS to. OS on the USB, SSD = cache. Your spinning rust layout looks fine; remember that no data drive can be larger than the parity drive, so you always want parity to be your largest disk. Honestly, just follow the directions from UnRAID, they're more than fine. Your server will be available as tower.local in a browser once set up. I highly recommend setting a static IP, either manually or with a DHCP reservation.
Matt Zerella posted:IIRC you can't install it to a drive, so you'll need a USB stick to install the OS to. Oh I hadn't realized that. I hope I have a big enough USB stick sitting around here. I do have an external USB drive with a 200GB old laptop drive that I could potentially use, but that doesn't seem ideal. Can I 'upgrade' my parity drive down the line if I want to start throwing larger drives in? Meaning install a 14TB, make it the new parity, and move the old 6TB into the pool? Thanks!
TraderStav posted:Oh I hadn't realized that. I hope I have a big enough USB stick sitting around here. I do have an external USB drive with a 200GB old laptop drive that I could potentially use, but that doesn't seem ideal. I use an 8GB drive so you don't have to go nuts. Yes, you can upgrade the cache down the line, but honestly, unless you're running a ton of VMs, you should be ok. Their forums are incredibly helpful and not full of shitty elitists like the FreeNAS ones. You can find pretty much anything you need there. And check out SpaceInvaderOne on YouTube, he's the tutorial master and walks you through anything.
Matt Zerella posted:I use an 8GB drive so you don't have to go nuts. Just to be clear, you meant I can upgrade the 6TB Parity drive. The 512GB SSD Cache is sufficient unless I want to run a bunch of VMs? I'm guessing that cache SSD could be upgraded down the line if needed also. Thanks!
TraderStav posted:Just to be clear, you meant I can upgrade the 6TB Parity drive. The 512GB SSD Cache is sufficient unless I want to run a bunch of VMs? I'm guessing that cache SSD could be upgraded down the line if needed also. You can upgrade either of them as needed. There are steps to do both. And yes, that cache is fine; a nice-to-have down the road is to add a second matching drive for RAID1 on the cache, which makes it more durable, but it's not essential.
TraderStav posted:Oh I hadn't realized that. I hope I have a big enough USB stick sitting around here. Pretty much anything above the minimum specs is fine. I think mine is only 8GB. If it's outside the case, consider one of the small ones that doesn't stick out much.
Heners_UK posted:Pretty much anything above the minimum specs is fine. I think mine is only 8gb. If it's outside the case, consider one of the small ones that doesn't stick out much. I've got one of these guys, although I think they used the term USB 3.0 a lot differently back then. https://www.newegg.com/patriot-mode...N82E16820220769 Looking forward to playing with this over the weekend!
This guy has a lot of good videos covering basic setup in his older uploads; his newer videos are a lot more niche. https://www.youtube.com/channel/UCZ...vMqTOrtA/videos
TraderStav posted:Oh I hadn't realized that. I hope I have a big enough USB stick sitting around here. UnRAID needs to boot from a USB stick, in fact, that's how they licence it - it's locked to the USB stick's serial number, which is admittedly kind of odd, but there you go. It takes up very little space. I run with an 8GB stick on an internal USB header and it's barely even used at all.
TraderStav posted:Yay, the T7810 from Shamino arrives today. I'm planning to roll UnRaid on it. Some of his guides may be out of date, but Spaceinvader One has a ton of really informative Unraid videos that walk users through setting up any number of things within Unraid, including VMs, Docker, and Community Apps. https://www.youtube.com/channel/UCZ...4N0WeAPvMqTOrtA
Matt Zerella posted:You can upgrade either of them as needed. There are steps to do both. And yes, that cache is fine; a nice-to-have down the road is to add a second matching drive for RAID1 on the cache, which makes it more durable, but it's not essential. Raid1 cache means that you can use it as a write cache instead of just a read cache. This is a more dangerous situation in general, but these days the overlays are very stable (back in my day the bcache overlay was hilariously new and even caused problems on reads). Maintain backups of things that you don't want to lose, as always.
H110Hawk posted:Raid1 cache means that you can use it as a write cache instead of just a read cache. This is a more dangerous situation in general but these days the overlays are very stable. (back in my day the bcache overlay was hilariously new and even causing problems on reads It doesn't mean that in unRaid. The cache is read/write no matter the mode.
Matt Zerella posted:It doesn't mean that in unRaid. The cache is read/write no matter the mode. That's horrifying. And makes me think less of unraid.
H110Hawk posted:That's horrifying. And makes me think less of unraid. You don't lose anything except what's in the cache before the mover runs at night. It doesn't affect the main array at all.
Matt Zerella posted:You don't lose anything except what's in the cache before the mover runs at night. It doesn't affect the main array at all. That's not durable storage in my book. It might be acceptable risk for your animes but if you can't disable it for important folders it's dangerous. Can you? (I assumed they were using Linux mainline kernel bcache and not a home rolled solution.)
H110Hawk posted:That's not durable storage in my book. It might be acceptable risk for your animes but if you can't disable it for important folders it's dangerous. Can you? (I assumed they were using Linux mainline kernel bcache and not a home rolled solution.) You set cache on a per share basis, yes.
Unraid's cache drive is really more akin to a (no smarts) tiered storage model, although it's not actually tiering and it deals entirely with whole files, so in theory you can lose either the primary array or the cache and the data on the remaining one is intact. Per share, you can pin data entirely to the primary array so it never touches the cache, prefer the cache, pin to the cache, or simply use the cache as a write landing zone to be batch-moved to the primary array on an overnight schedule (and/or once high-water marks are hit, and so on).

Common usage is to pin VM/Docker files to the cache (that is, some flavor of performant storage) and have your Linux ISOs go to the primary array (spinning rust), using either of the cache options above. It alleviates some of the performance issues the primary array has with the dedicated parity drives being a bottleneck, keeps the stuff that needs speed on the disks that can handle it, and works pretty well, near as I can tell, for the ISO server use case.

I wouldn't recommend enabling Unraid's cache without a minimum of 2 disks in it. Spend the extra 60 bucks and pick up another 500GB SSD, or just don't use it - it's entirely optional. If you aren't going to run the primary array without parity, don't run the cache without a second disk, for the exact same reasons.
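For anyone wondering what "batch-moved on an overnight schedule" looks like mechanically, here's a rough sketch of a mover: walk the cache copy of a share, recreate each directory on the array, and move whole files across. It's an illustration of the concept, not Unraid's actual mover, and the two mount points are stand-ins.

```python
import os
import shutil

CACHE_ROOT = "/mnt/cache"  # stand-in for the cache pool mount
ARRAY_ROOT = "/mnt/array"  # stand-in for the parity-protected array

def run_mover(share):
    """Move whole files for one share from cache to array, keeping paths."""
    src_root = os.path.join(CACHE_ROOT, share)
    for dirpath, _, files in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        dst_dir = os.path.normpath(os.path.join(ARRAY_ROOT, share, rel))
        os.makedirs(dst_dir, exist_ok=True)
        for name in files:
            # Whole files move, never blocks, so at any moment a file
            # lives entirely on the cache OR entirely on the array.
            shutil.move(os.path.join(dirpath, name),
                        os.path.join(dst_dir, name))

# e.g. cron this overnight: run_mover("linux-isos")
```

Because entire files move, losing one side leaves every file on the other side fully readable - which is the "lose the array or the cache, the rest is intact" property described above.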
Woah nelly, I'm seeing a lot of misinformation on what Unraid's cache does. The cache is for writing only, never for reads (unless you're reading something that was written to the cache prior to the mover running and shifting it to spinning rust). Once something is moved from the cache to the spinning rust, it realistically will never go back onto the cache (there's no smart access-based file moving or anything like that). IMO you're never going to have a situation where flash fails completely and suddenly without warning, so I run my cache in RAID0, but I also have a higher risk tolerance than others.
The meme where the crying guy is talking about durable storage and the other guy is plex machine go brrrrr
Matt Zerella posted:The meme where the crying guy is talking about durable storage and the other guy is plex machine go brrrrr In the middle is me, the incredibly calm genius running a Plex machine with galactic amounts of redundancy and future proofing
The Unraid cache space isn't really needed like it used to be. Before they really optimized their code you'd get T E R R I B L E throughput to the array, like 15MB/s write and 40MB/s read at best. You used a cache drive for quick writes, and it spent time flushing to the array later, overnight or whenever you scheduled it, since the array was so slow. Since then, things are "faster" - not at the level of other solutions, but at least closer to saturating gigabit LAN speeds on reads, with reasonable writes. Most people (including myself) just disable the "mover script" and use the cache drive as scratch space to run Docker apps, download and extract files, and things like that. Stuff that really benefits from a small SSD vs some 10TB HDDs you'd rather not keep spun up. But yeah, different strokes for different folks.
Netapp disk shelf owners: The Milkman posted:In the middle is me, the incredibly calm genius running a Plex machine with galactic amounts of redundancy and future proofing also, you forgot handsome
For the cache array, do the SSDs need to be identically sized? I have a 512GB in there now and have a spare 256GB that I pulled out of my Thinkpad when I first bought it; could I add that to the cache array? Unraid was so stupid easy to install last night. It has taken about 11 hours for the initial parity sync to complete on the 6TB drive, so I haven't done much else with it, but the next steps are to start transferring data over and installing Plex/etc. My biggest challenge is going to be finding additional ways to add drives securely to the T7810. Two out-of-the-box bays, and you can add a third with some parts officially designed for the 5 1/4" bay. I have a few more drives to add so shit is going to get ghetto.
If the drives are mismatched then the RAID1 is limited to the size of the smallest one. I'd probably disable the cache for the initial transfer as it's not needed. Once you've got everything moved over, pin your Dockers to it (appdata).
Matt Zerella posted:If the drives are mismatched then the RAID1 is limited to the size of the smallest one. I'd probably disable the cache for the initial transfer as it's not needed. Ah okay, since I'd effectively end up with the same amount of cache but spread over two drives, wouldn't that be preferable to just one SSD for the cache, as Fancy_Lad said above?
TraderStav posted:Ah okay, since I'd effectively end up with the same amount of cache but spread over two drives, wouldn't that be preferable to just one SSD for the cache, as Fancy_Lad said above? I'll leave that up to you. I've been running a single SSD as a cache for 2 years, so I'm not the best to advise here.
At the end of the day, most of us don't run our OS drives in RAID1, and yet we're not freaking out every day. Understanding what the use case is, how important the data is, and taking regular backups should be a part of any storage plan. Would it be recommended to run your cache drives in RAID1? Of course. But it's a choice you can make. Again, we're not talking enterprise solutions.
I have a single cache drive in my Unraid box with a plugin that moves the stuff on the cache drive over to a backup folder on the array once a week. I only have a Windows VM for Blue Iris NVR and some Docker containers that are easy to recreate, though, so no big loss if it all goes haywire.
As I don't have enough free space to transfer all my data at once, is there any downside to having one drive in the array, filling it up, adding the drive I just copied from to the array (now I have 2 drives in the array), and then moving on to the next drive of my source data? I'll probably have 3-4 total drives at the end of it, but I wasn't sure how it balances the data over time if a lot is initially loaded onto one drive. Will it 'spread' the data over to the other drives as they're added over time?
TraderStav posted:As I don't have enough free space to transfer all my data at once, is there any downside to having one drive in the array, filling it up, adding the drive I just copied from to the array (now I have 2 drives in the array), and then moving on to the next drive of my source data? You're fine. As long as you keep the default high-water allocation, it'll try to spread things out as much as possible. With that in mind, however, spreading only happens on writes, so stuff written to drive 1 won't get unbunched onto drives 2-n (unless you use a plugin to scatter a share).
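For the curious, here's a toy model of how high-water allocation picks a disk, as I understand the scheme (the threshold logic is my paraphrase, so treat it as illustrative rather than Unraid's exact algorithm): start the water mark at half the largest data disk, write to the first disk whose free space is still above the mark, and halve the mark once every disk has dropped below it.

```python
def pick_disk_high_water(free_bytes, largest_disk_bytes):
    """Return the index of the data disk a new file should land on.

    free_bytes: free space per data disk, in disk-number order.
    Toy model of high-water allocation: fill the first disk until it
    drops below the mark, then the next, halving the mark each round.
    """
    mark = largest_disk_bytes // 2
    while mark > 0:
        for idx, free in enumerate(free_bytes):
            if free > mark:
                return idx              # first disk still above the mark
        mark //= 2                      # all disks below: lower the bar
    # Everything is essentially full; fall back to the emptiest disk.
    return max(range(len(free_bytes)), key=lambda i: free_bytes[i])

# Three 6TB disks, the first two mostly full:
# pick_disk_high_water([200_000_000, 400_000_000, 5_900_000_000_000],
#                      6_000_000_000_000)  # -> 2
```

This also shows why the answer above holds: the selection only runs when a file is written, so nothing already sitting on drive 1 gets redistributed just because drives 2-n were added later.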