|
I am planning to get a NAS but currently I am unsure whether I want to buy or build. I've spent the last week or two doing research and I've been vacillating between a Synology DS918+, an HPE Microserver Gen 10 Plus, and a custom Supermicro build. I've only ever built desktops before, so everything I know about non-desktop hardware comes from those last two weeks.

My original intention was to have relatively low-power storage that would also be able to do some stuff on top: running Docker containers, a BitTorrent client, maybe a VM or two, and Plex. It would be nice to have HW transcoding available, but I specifically bought an ODROID-N2 and put Kodi on it so I don't have to give a shit about it (I'm the only Plex user). Other devices include an iPad with Infuse Pro and sometimes my Android phone. If I went without HW transcoding, I could get the Xeon version of the Microserver Gen 10 Plus for around the same price as the custom build but with less RAM, and I assume upgrading it in the future is out of the question because it's limited to a 180 W power brick and custom components inside.

The main reason I've even started considering a custom solution instead of a ready-made one is that I like to tinker and am unsure which OS I want to run on the device. The main reason I haven't bought anything yet is that I lack the self-confidence to go ahead with it because of my lack of knowledge. I put a tentative build together, but it's possible I made a mistake somewhere.

As for the OS, I am really intrigued by FreeNAS/ZFS, but I am a Linux user and my daily job is also Linux-related, so FreeBSD is something I've never used. Reading people's opinions in this thread made me want to try ZFS and get ECC RAM instead of just winging it with a desktop-like build in the first place.

I am also vacillating on the size of storage. My initial instinct was to go with 4x4TB because I have around 5TB of data right now, but the prices are always teasing me to go higher. One thing that is stopping me from considering bigger disks is some people in this thread mentioning that RAID5 with only a few huge disks is a bad idea. I don't want the whole thing to crash and burn during a rebuild.

Anyway, this is what I have right now for the custom build:

Chassis: https://supermicro.com/en/products/chassis/tower/721/sc721tq-250b
Motherboard: https://www.supermicro.com/en/products/motherboard/X11SCL-IF
CPU: https://ark.intel.com/content/www/us/en/ark/products/191037/intel-xeon-e-2224g-processor-8m-cache-3-50-ghz.html
RAM (x2): https://memory.net/product/hma82gu8afr8n-uh-sk-hynix-1x-16gb-ddr4-2400-ecc-udimm-pc4-19200t-e-dual-rank-x8-module/

(The RAM was chosen from a list on Supermicro's website.) I have also added a passive cooler as a placeholder because I have no idea how hot this thing will get if it's doing anything but idling. Prices here in Europe are probably higher than elsewhere: [screenshot of part prices]

One thing that would be sweet is the possibility of adding something faster than 1GbE and being able to manage the server remotely. I've read about things like iLO (HPE) and IPMI, but I think for the Microserver I need another module, and with the custom build I'm not sure at all. I know it's supported by the motherboard, but I don't know what else is needed to make it work.

Please feel free to tell me whether I'm dumb. Any advice welcome!

lordfrikk fucked around with this message at 17:31 on Jun 18, 2020
|
fletcher posted:Interesting! I didn't know that Windows supported NFSv4. It sounds like Windows doesn't have an official NFSv4 client though, only server. Are you using the client from University of Michigan? I am only planning on accessing these datasets from Windows machines both now and in the future, so I was planning on using the SMB share type on the new NAS.

Go hog wild with samba sharing, though.

fletcher posted:I didn't really consider the compression setting, thinking about my data maybe it makes sense to disable compression on all my datasets.

fletcher posted:Good to know! I guess one of the advantages with netcat is that it was already available on both machines. I'll see if there's a way to get it on my ancient version of NAS4Free.

fletcher posted:"scrub repaired 213M in 47h1m with 0 errors" hopefully it makes it through the big transfers! Thanks for the tips D. Ebdrup.

Munkeymon posted:OK but a folder isn't automatically a new dataset, as far as I can tell, and "alias mkdir to 'zfs create'" is a neat idea but not something that's going to occur to most people. Probably not hard to replace a directory hierarchy with a bunch of hived-off datasets after the fact, but still not the obvious thing to do when you're a home

Less Fat Luke posted:As long as moving files between datasets is a full read, write and delete then I wouldn't go that route, it's way too slow.
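For anyone trying to picture the dataset-per-directory idea, here's a rough sketch - the pool name (tank) and the zmkdir wrapper are made up for illustration, not something FreeNAS ships with:

code:
# Instead of: mkdir /tank/media/movies
zfs create tank/media/movies

# The payoff is per-dataset properties, e.g. turning compression off for
# already-compressed video while leaving it on elsewhere:
zfs set compression=off tank/media/movies

# A crude stand-in for "alias mkdir to zfs create", assuming default
# mountpoints that mirror the dataset hierarchy:
zmkdir() {
    zfs create -p "${1#/}"    # /tank/media/tv -> tank/media/tv
}

The caveat Less Fat Luke raises still applies: moving existing files into a freshly created dataset is a copy plus delete, not a rename.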
|
|
I'm looking for a decent case to upgrade my home NAS - I'm currently sitting on four drives in the case and four drives in an external USB enclosure, and I hate that. The 4U 15-bay Rosewills seem to all be gone or ridiculously expensive, and that's basically exactly what I want - so I'm looking for something with a pile of 3.5" bays, or a lot of contiguous 5.25" bays so I can toss 3.5" hot-swap cages in them (though I'd prefer just a bunch of 3.5" bays). Any suggestions? Noise isn't a huge factor, this thing will live in my basement.
|
|
D. Ebdrup posted:ZFS is copy-on-write, mv doesn't magically not make it a CoW-filesystem.
|
|
SolusLunes posted:I'm looking for a decent case to upgrade my home NAS- I'm currently sitting on four drives in the case, and four drives in an external USB enclosure and I hate that.

I got a stack of 15 caddies with interposers for $10 on ebay just yesterday, so those are cheap too.

Less Fat Luke posted:What? I'm aware it's copy-on-write, but I'm saying the reason people still gravitate towards one large dataset instead of one per directory is that it's a full disk copy to move files from one dataset to another, CoW or not, whereas moving within a dataset is instantaneous.

Anything I can think of that benefits from being moved around (i.e. FreeBSD torrents and other protocols that don't write sequential data while downloading to a temporary location) is also of the kind that causes a lot more fragmentation on any CoW filesystem.
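To make the mv distinction concrete, a tiny illustration - tank/a and tank/b are made-up datasets on the same pool:

code:
# Within one dataset, mv is a metadata-only rename and returns instantly:
mv /tank/a/big.iso /tank/a/archive/big.iso

# Across dataset boundaries, even on the same pool, mv falls back to
# copy-then-delete, so the whole file is re-read and re-written:
mv /tank/a/big.iso /tank/b/big.iso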
|
|
D. Ebdrup posted:But what's the purpose of moving data around if you're not moving it off the array for a backup using zfs send|receive? Also, unrelated: code:
|
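For reference, the zfs send|receive backup being referred to is roughly this shape - pool names, snapshot names, and the backup host below are placeholders, not anything from the thread:

code:
# Take a recursive snapshot and ship it to a pool on another machine.
zfs snapshot -r tank/data@2020-06-18
zfs send -R tank/data@2020-06-18 | ssh backuphost zfs receive -uF backup/data

# Later runs only send the delta between the last snapshot and a new one:
zfs snapshot -r tank/data@2020-06-19
zfs send -R -i @2020-06-18 tank/data@2020-06-19 | ssh backuphost zfs receive -uF backup/data

The netcat trick mentioned earlier in the thread can stand in for the ssh pipe when both machines are on a trusted LAN.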
|
D. Ebdrup posted:No, you do need to alias it, as you suggest - I don't know why it doesn't occur to more people, it's blindingly obvious at least to me. Because most people are far more used to traditional filesystems and don't automatically think in terms of datasets like that. Further, unless you've done a lot of research into the topic, most people would presume that the designers of ZFS, BSD, etc., had reason to make mkdir and zfs create different commands and not aliased out of the box, and that they should leave well enough alone. Basically, your question ends up being "I don't know why more people aren't experts on ZFS," the answer to which I should think is immediately obvious. I'm not saying your idea is wrong, just that you're assuming a fairly non-trivial amount of knowledge from users, most of whom are using ZFS for a specific task and aren't typically interested in diving into ZFS arcana if they aren't forced to in order to complete said task.
|
|
There's also very little advantage in home use to hundreds of different datasets; like, pirated TV and Movies are different datasets. Cool? I guess.

Edit: And by that I mean Ubuntu and Debian ISOs

Less Fat Luke fucked around with this message at 21:26 on Jun 18, 2020
|
lordfrikk posted:I am planning to get a NAS but currently I am unsure of whether I want to buy or build. I've spent the last week or two on doing reasearch and I've been vacillating between Synology DS918+, HPE Microserver Gen 10 Plus and a custom Supermicro build. I've only ever built desktops before so all I know about non-desktop is literally from those last two weeks. Is that Xeon really only 4C/4T? I'm not sure it's a problem but I built an Unraid system with 5 year old Xeons recently that were 12C/24T. Seems like a lot of money but again I'm not sure what advantages the newer Xeons have.
|
|
There's no reason to buy the Xeon over an i3 9100 or 9300; the i3s support ECC too and are cheaper/boost higher. The advantage of the E3 is that it boosts higher than an old E5, has better IPC, lower power, and hardware transcoding support.
|
|
I know things in Europe are more expensive, but that looks exceedingly expensive, especially for the Xeon. Considering that for ~350 USD (310 euros or so) Dell will usually sell you a full Precision workstation with a Xeon and some RAM, and it works out of the box.
|
|
I compared the i3-9300 and E-2224G and the differences are minuscule (they don't have a higher base freq but boost higher). All in all I can save around 70 EUR without a huge impact, from what I can tell? My reference for what processor to put in a build of this size was the non-G E-2224 Xeon in the Microserver. A 12-core Xeon is way overkill for my purposes and would double the TDP.

Has anyone built a similarly specced NAS/server, or at least used the same chassis? I wonder about the motherboard mainly.

Wild EEPROM posted:I know things in Europe are more expensive but that looks exceedingly expensive, especially for the Xeon.

The CPU is not cheap for what it is, yeah. And I'm probably paying extra for the form factor, not to mention things like IPMI, hot-swap bays, etc. I don't want another big computer (I already have a gaming PC). That's why I'm deciding between the Synology, the Microserver and this similarly sized build.

lordfrikk fucked around with this message at 08:41 on Jun 19, 2020
|
D. Ebdrup posted:No, you do need to alias it, as you suggest - I don't know why it doesn't occur to more people, it's blindingly obvious at least to me.

That was a(n attempt at a) joke dude :P Respect the hell out of your ZFS knowledge, but most of us just read enough of the man page to Make It Go.
|
|
lordfrikk posted:I compared the i3-9300 and E-2224G and the differences are minuscule (they don't have a higher base freq but boost higher). All in all I can save around 70 EUR without a huge impact, from what I can tell?

I have an X11SSH with an i3 7100 for most of the same reasons; it's nice. If you don't want to pass it through to a guest or use hardware transcoding, you don't technically need the integrated graphics. The IPMI acts as the integrated graphics.

The other way you can go is something like an Asrock Rack X470D4U, which lets you use a 3600 or similar, and then you can use software encoding instead. More power but better transcode quality. Zen 2 is really efficient, so it makes up most/all of the difference from jumping to more cores.

Paul MaudDib fucked around with this message at 03:26 on Jun 20, 2020
|
Paul MaudDib posted:
I just did this exact thing, added a few spare parts I had lying around, and slapped together a little proxmox virt server/nas/plex all-in-one. IPMI is so cool for being able to jam it into the basement and never needing to physically touch the box again.
|
|
Is any kind of storage snapshot possible in unRaid, and if so, is it beneficial to set up some kind of rolling snapshot system to protect against an rm -rf /mnt/user situation?
|
|
I think so - but it's CLI only. People mostly seem to want it for VMs, but the discussion here might be the right direction - https://forums.unraid.net/topic/517...#comment-523800
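As a rough idea of what a rolling snapshot setup could look like from the CLI - this assumes the share lives on a btrfs-formatted disk and the folder is itself a btrfs subvolume, which are both assumptions, so adjust paths and retention to your own layout:

code:
#!/bin/sh
# Hypothetical daily snapshot job, e.g. run from cron.
SRC=/mnt/disk1/data          # assumed to be a btrfs subvolume
DST=/mnt/disk1/.snapshots
KEEP=14                      # how many daily snapshots to keep

mkdir -p "$DST"
btrfs subvolume snapshot -r "$SRC" "$DST/data-$(date +%F)"

# Delete everything older than the newest $KEEP snapshots.
ls -1d "$DST"/data-* 2>/dev/null | head -n -"$KEEP" | while read -r snap; do
    btrfs subvolume delete "$snap"
done

A read-only snapshot won't save you from a dead disk, but it does give you something to copy back after an accidental rm -rf.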
|
|
Sir Bobert Fishbone posted:I just did this exact thing, added a few spare parts I had lying around, and slapped together a little proxmox virt server/nas/plex all-in-one. IPMI is so cool for being able to jam it into the basement and never needing to physically touch the box again.

It also means you don't waste one of your three slots (or whatever) on a GPU that you don't really care about but is required to boot. Or if you need to do an install, or boot memtest or some other thing, and you don't want to make a live-USB, you can just boot from an image served over IPMI.

I'm still waiting for the B-type HDD trays to show up (yes, I know you can hack up an A-type tray, I'd rather just wait) but the mobo is used and I wanted to test it ASAP. All in all just a super convenient thing; never have to fuck around with having to plug a malfunctioning machine into a monitor again.

Paul MaudDib fucked around with this message at 03:39 on Jun 20, 2020
|
Apparently you can install FreeBSD on a QNAP TS-459. I wonder if that's true for more QNAP devices.
|
|
One day after I install Windows version 2004, Microsoft publishes a notice that Storage Spaces is bugged now and RIP your data. I only found 5 files filled with zeroes on my storage (how many partially zeroed ones, though?). If you use Storage Spaces, watch out!

The issue seems to be that writes made to parity spaces do not make it from the cache to long-term storage - only blocks full of zeroes do.

EssOEss fucked around with this message at 08:55 on Jun 20, 2020
|
EssOEss posted:1 day after I install Windows version 2004, Microsoft publishes a notice that Storage Spaces is bugged now and RIP your data. I'm really glad I settled on a JBOD with drivepool instead of storage spaces now.
|
|
EssOEss posted:One day after I install Windows version 2004, Microsoft publishes a notice that Storage Spaces is bugged now and RIP your data.

LOLOLOL. That's fucking hilarious. Hilariously bad. Congrats to Microsoft for completely fucking their customers again.

Microsoft posted:There is currently no full mitigation for this issue. To prevent issues with the data on your Storage Spaces, you can use the following instructions to mark them as read only:

Again: RAID is not backup.

sharkytm fucked around with this message at 10:44 on Jun 20, 2020
|
Windows Server version 2004 - does that mean that Windows Server 2019 is or is not affected?
|
|
H110Hawk posted:Windows Server version 2004, does that mean that windows server 2019 is or is not affected? 2004 is the version number, so it's possible server 2019 is affected.
|
|
Nulldevice posted:2004 is the version number, so it's possible server 2019 is affected.

So do 2016 and 2019 both have the same version? I don't understand how to use the new secret decoder ring. Is that the new Windows Server delivered like the Windows desktop OS, where they just keep lumping on patches and ambiguous feature packs over time?
|
|
It depends on which windows update you're on.
|
|
H110Hawk posted:So do 2016 and 2019 both have the same version? I don't understand how to use the new secret decoder ring. Is that the new Windows Server delivered like the Windows desktop OS, where they just keep lumping on patches and ambiguous feature packs over time?

No, Server 2016 is a separate entity entirely.

Windows Server 2016's latest build is v1607 (e: you can pay for a more updated version, up to v1909 IIRC, but most don't bother)
Windows Server 2019's latest build is v2004
Windows 10's latest build is v2004

I haven't seen anyone on Server 19 complaining of the bug yet, but there's a solid chance that it's quietly hiding in there, especially since I doubt many people running Server 19 have bothered to upgrade yet.
|
|
DrDork posted:No, Server 2016 is a separate entity entirely.

I'm asking from a corporate "we do pay the protection money" standpoint, because Microsoft's insistence on fragmenting their market makes me want to scream. So Windows Server v2004 is the same as Windows Server 2019 with all the updates applied? Or not? If you apply all the updates to Windows Server 2016, does it become v1607 or v1909?

Charles posted:It depends on which windows update you're on.

This makes my blood boil so much.
|
|
It's not that complicated. Windows Server 2019 is Windows Server 2019. What build you are on is what's being discussed. Software has had version numbers since the beginning of time.
|
|
Charles posted:It's not that complicated. Windows Server 2019 is Windows Server 2019. What build you are on is what's being discussed. Software has had version numbers since the beginning of time.

Correct, but Microsoft does in fact have a third train you can download just called "Windows Server", compared to "Windows Server 2016" and "Windows Server 2019". The further versioning has usually just meant patch level - if you apply all the Windows updates, you get onto the latest version of "2019" or "2016". So far no one has stated "yes" or "no" as to whether Windows Server 2019 is impacted if you're on the latest updates.

The help article says "all editions", which I interpret to mean "No Date Standard" + "No Date Datacenter", not "2016 Standard" and "2016 Datacenter" and "2019 Standard" and "No Date Standard" and "No Date Datacenter".

[screenshot of the available Windows Server downloads]

This is why I'm confused. Is the last one there just "2019", but it will automatically Windows Update itself to "2022" or whatever when that comes out?
|
|
There are two types of numbers here. One is YYYY and denotes which Windows you are on, e.g. 2019. The other is YYMM and denotes which 'service pack' that Windows is updated to, e.g. 2004 (planned to have been released in April 2020). The terminology isn't accurate for this age, but maybe it helps you understand.
|
|
No. You pay Microsoft for Server 2016, 2019, etc., and without paying for an upgrade, one will never become the other from a labeling standpoint. Even paying the protection money to MS, I think 2016 only goes up to v1909. Even though it shares most of the codebase with Server 2019, it has featuresets that are different from 2019 regardless of build version. IIRC, the "Windows Server" there in the screenshot is just 2019 with the semi-annual channel selected for patch updates. It will never become Server 2022 or whatever, because they'll want you to pay $$$ to get whatever new features 2023 introduces. No one has reported (AFAIK) Storage Spaces issues with Server 2019 on v2004, but since it's largely the same codebase as Win10 (Home/Pro), it wouldn't surprise me if it did have an issue. Which, I suppose, is part of why most enterprise users don't upgrade for months after a new patch drops.
|
|
I have recently set up a small personal Pi 4 webserver and I'm looking at storage options. It will run 24/7 but under very little load, so it's gonna be NAS HDDs, but for the moment I won't mind if it goes down for a day or so, so I'll skip RAID and just set up rsync backups. The Pi can power an external USB HDD but not two, so I will need either a 2-disk bay or two enclosures. The bay would be MUCH nicer in terms of cable management and convenience, and the built-in clone button could come in handy, but I'm worried about it being a single point of failure. More specifically, I don't care if it dies and the server goes down for a while, but I'm worried it could potentially somehow brick both hard disks at the same time. Is that a valid concern at all?
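A sketch of what the rsync-instead-of-RAID plan could look like, assuming the two disks end up mounted at /mnt/data and /mnt/backup (made-up paths). Using --link-dest keeps dated copies, so an accidental deletion doesn't immediately propagate into the only backup:

code:
#!/bin/sh
# Hypothetical nightly backup job for the Pi.
DAY=$(date +%F)
rsync -aH --delete --link-dest=/mnt/backup/latest /mnt/data/ "/mnt/backup/$DAY/"
# Unchanged files are hard-linked against the previous run, so each dated
# directory looks complete but only changed files take extra space.
ln -sfn "/mnt/backup/$DAY" /mnt/backup/latest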
|
|
NihilCredo posted:I have recently set up a small personal Pi 4 webserver and I'm looking at storage options. It will run 24/7 but under very little load, so it's gonna be NAS HDDs, but for the moment I won't mind if it goes down for a day or so, so I'll skip RAID and just set up rsync backups.

Regarding powering disks, you can get something like this and a USB wall adapter with 3 sockets and enough current to power the RPI3 as well as the disks. Combined with a USB multiplug AC adapter/USB charging station, you should be able to get everything powered.

D. Ebdrup fucked around with this message at 12:51 on Jun 21, 2020
|
D. Ebdrup posted:The RPI is not capable of overcurrenting in any way that can damage the disks, so that doesn't seem like something you need to worry about.

NAS-grade hard drives are only available in 3.5" *, so I can't power them through any form of USB as they require 12V instead of 5V.

* except for WD Reds, but the 2.5" model is only 1 TB, so at that price I'd just buy an SSD

I wouldn't use a 2.5" HDD because I already tried and it died after a week. Maybe it was a fluke, and I'm sure plenty of people run desktop drives in their NAS, but I'd rather replace it with something that's actually rated for 24/7.

You did give me a good idea though: I've found a similar SATA-USB adapter that takes an optional 12V power input for 3.5" disks, and https://www.amazon.com/Uninterrupti...s/dp/B07GDL4LQ8 Only thing is, the disks would be lying out in the open, circuitry exposed and all, and vibrating freely. With a bay or enclosure they would be at least a little protected, hmm.

EDIT: Scratch that, it only provides 1A out, and the Pi alone needs up to 3. Guess I'm still looking... Might end up going back to a 2.5" HDD and putting the extra budget into a Backblaze account lol.

NihilCredo fucked around with this message at 14:09 on Jun 21, 2020
![]() |
|
NihilCredo posted:I have recently set up a small personal Pi 4 webserver and I'm looking at storage options. It will run 24/7 but under very little load, so it's gonna be NAS HDDs, but for the moment I won't mind if it goes down for a day or so, so I'll skip RAID and just set up rsync backups. So I ran a similar setup for a couple years, using a pi 3b and a pair of external USB hdds on a powered hub and set up samba for network access. Honestly, the point of failure here is the pi itself, unfortunately. The filesystem is very sensitive to corruption from unscheduled power interruptions. After the first few times my various high-quality SD cards died, I moved to booting off a USB thumb drive and that gave me the longest period of stability, but even it eventually corrupted and died too. With a webserver or any 24/7 program that writes continuously (think logs), you run a real risk of filesystem corruption if the power supply varies even a little. That said, every time the pi died, the data on my HDDs was completely fine, so it really wasn't much impact overall. Get everything set up and then image your SD card so you can just clone it when the install corrupts and get back up and running quickly.
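For the "image your SD card" step, a minimal example - /dev/sdX is a placeholder, so double-check the device name with lsblk before running anything:

code:
# Image the Pi's SD card from another Linux box:
sudo dd if=/dev/sdX of=pi-root.img bs=4M status=progress conv=fsync

# Write the image back to a fresh card after the install corrupts:
sudo dd if=pi-root.img of=/dev/sdX bs=4M status=progress conv=fsync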
|
|
NihilCredo posted:I wouldn't use a 2.5 HDD because I already tried and it died after a week. Maybe it was a fluke, I'm sure plenty of people run desktop drives in their NAS, but I'd rather replace it with something that's actually are rated for 24/7. This was a bit of a fluke / mostly "bathtub curve" turned reality. Drives are most likely to fail when they are very new or very old. How much capacity do you need out of this, anyway?
|
|
Good advice! I had similar concerns, so I made the slightly questionable choice of mounting as many log folders as I could under tmpfs (I didn't care about losing them in a possible power failure). Though a better solution might be to mount them to a disposable USB pendrive.

IOwnCalculus posted:This was a bit of a fluke / mostly "bathtub curve" turned reality. Drives are most likely to fail when they are very new or very old. How much capacity do you need out of this, anyway?

Not much at all by this thread's standards; 4-8 TB should be good enough to last me until I replace the drives. Honestly, looking at the costs of setting up proper NAS HDDs, I'm thinking of just buying another cheap external 2.5" HDD (gonna be a Toshiba this time though!) and using the rest of the budget on a Backblaze subscription. I'm not thrilled at the idea of needing a week to restore a cloud backup, but I guess it's a worthwhile trade in exchange for protection against acts of God.
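In case it's useful to anyone else, the tmpfs-for-logs trick boils down to fstab entries roughly like these - the sizes and directories are assumptions, and anything stored there disappears on reboot by design:

code:
# Hypothetical /etc/fstab entries for keeping chatty log/temp dirs in RAM:
#   tmpfs  /var/log  tmpfs  defaults,noatime,size=64m  0  0
#   tmpfs  /var/tmp  tmpfs  defaults,noatime,size=32m  0  0

# Or mount one ad hoc to try it out before committing it to fstab:
sudo mount -t tmpfs -o size=64m,noatime tmpfs /var/log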
|
|
Oh right, you said Pi4 - yeah, with its four ports on two USB3 hubs, you should be able to run anywhere from 1 to 16 drives depending on your choice of DAS.
|
|
NihilCredo posted:I have recently set up a small personal Pi 4 webserver and I'm looking at storage options.

brains posted:So I ran a similar setup for a couple years, using a pi 3b and a pair of external USB hdds on a powered hub and set up samba for network access. Honestly, the point of failure here is the pi itself, unfortunately. The filesystem is very sensitive to corruption from unscheduled power interruptions. After the first few times my various high-quality SD cards died, I moved to booting off a USB thumb drive and that gave me the longest period of stability, but even it eventually corrupted and died too. With a webserver or any 24/7 program that writes continuously (think logs), you run a real risk of filesystem corruption if the power supply varies even a little.

Ya - the RPi isn't great for anything you want to set and forget. I actually bought a cheap little Celeron-based NUC that I do this with. Something similar to this: https://www.amazon.com/dp/B07XRG5YL8/

I run a DNS-over-TLS Unbound forwarder and a WireGuard VPN server off of it. I'd recommend springing for something like that - it's very well suited to those tasks and won't suffer the power-related downfalls of the RPi.
|