Crunchy Black
Oct 24, 2017

CASTOR: Uh, it was all fine and you don't remember?
VINDMAN: No, it was bad and I do remember.




I'd just like to say that it finally got down to ~30 degrees so I left my office window open last night and holy shit it's nice in here. It's not 15 degrees hotter than the rest of the house!

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.


Clapping Larry

Someone needs to get a rack-dedicated room.

suddenlyissoon
Feb 17, 2002

Don't be sad that I am gone.


Crunchy Black posted:

I'd just like to say that it finally got down to ~30 degrees so I left my office window open last night and holy shit it's nice in here. It's not 15 degrees hotter than the rest of the house!

Yeah, I finally got to turn my PC on again thanks to the cool weather. I've tried almost everything to keep the media room that has my server box in it cool. Next up is to just get a single-zone ductless AC and be done with it.

Roundboy
Oct 21, 2008


Question: Is there such a thing as a wall-mounted 4-post rack, or am I stuck with one on the floor?

I have a 12U 2-post that I wall-mounted, but I have a single server to rack that I'm holding with a shelf + screws, and I just don't trust a heavier server going on next year.

I don't want a floor unit, because that means 25-42U and I really only need like, 10.

Or I put a small one on a table, but I don't trust that it can handle equipment being pulled out. I know the back posts are typically flush, but maybe some sort of standoff? Also wondering about threaded holes vs. cage nuts down the line.

Am I being stupid, or what are people doing for home server use and networks?

H110Hawk
Dec 28, 2006


Roundboy posted:

Question: Is there such a thing as a wall-mounted 4-post rack, or am I stuck with one on the floor?

I have a 12U 2-post that I wall-mounted, but I have a single server to rack that I'm holding with a shelf + screws, and I just don't trust a heavier server going on next year.

I don't want a floor unit, because that means 25-42U and I really only need like, 10.

Or I put a small one on a table, but I don't trust that it can handle equipment being pulled out. I know the back posts are typically flush, but maybe some sort of standoff? Also wondering about threaded holes vs. cage nuts down the line.

Am I being stupid, or what are people doing for home server use and networks?

Probably not for the money I assume you want to spend. How deep do you need it, and how many pounds does it need to hold? They make 4-post racks in a huge variety of depths. You can get them on casters! Cage nuts are the only way to go, but they cost a lot more in the fastener column. Given you only need about nine per server (assuming it uses a set screw), it's not a ton of absolute dollars, though.

Check out the major manufacturers' catalogs on their own websites (Tripp Lite, Damac, APC, etc.), not CDW or whatever, and figure out whether what you want even exists.

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!



Roundboy posted:

Question: Is there such a thing as a wall-mounted 4-post rack, or am I stuck with one on the floor?

I have a 12U 2-post that I wall-mounted, but I have a single server to rack that I'm holding with a shelf + screws, and I just don't trust a heavier server going on next year.

I don't want a floor unit, because that means 25-42U and I really only need like, 10.

Or I put a small one on a table, but I don't trust that it can handle equipment being pulled out. I know the back posts are typically flush, but maybe some sort of standoff? Also wondering about threaded holes vs. cage nuts down the line.

Am I being stupid, or what are people doing for home server use and networks?

I think for limited home use a lot of people modify the Ikea Lack table into a LackRack:
https://wiki.eth0.nl/index.php/LackRack

For longer servers the "enterprise edition" is the lack coffee table with a shelf on it:
https://www.ikea.com/us/en/p/lack-c...brown-00104291/

If you want something really sturdy and four-post, you're mostly looking at floor-standing racks. There are smaller ones, though, like this 13U or this 12U with casters; it's just not as cheap as Ikea's cardboard-based furniture.

BeastOfExmoor
Aug 19, 2003

I will be gone, but not forever.


Amazon has 8TB Elements for $125, so I finally hit buy on a couple to add to the two I currently have, to build a ZFS array on FreeNAS inside a Proxmox VM with an HBA passed through, since I already run Proxmox as the hypervisor on my server. A few questions:

1. Proxmox does have native ZFS support, but I'm thinking the FreeNAS UI will be a little more intuitive than the fairly meager Proxmox UI. I'd be open to hearing opinions on whether this is the right path.

2. I'm planning on using RAID-Z2 rather than mirroring, with the idea that I can move from 16TB usable to 24TB usable at some point down the line if needed. Is there a significant downside to this vs. mirroring when I'm using 4x 8TB drives?

3. Any advice on picking an HBA? I'm just planning on picking up a server-pull LSI card off eBay, but there are so many options, and I'd be interested to hear if there's a meaningful advantage to a newer or more expensive card.

4. Are there any guides I should read on setting up ZFS/FreeNAS for the first time?

D. Ebdrup
Mar 13, 2009



BeastOfExmoor posted:

Amazon has 8TB Elements for $125, so I finally hit buy on a couple to add to the two I currently have, to build a ZFS array on FreeNAS inside a Proxmox VM with an HBA passed through, since I already run Proxmox as the hypervisor on my server. A few questions:

1. Proxmox does have native ZFS support, but I'm thinking the FreeNAS UI will be a little more intuitive than the fairly meager Proxmox UI. I'd be open to hearing opinions on whether this is the right path.

2. I'm planning on using RAID-Z2 rather than mirroring, with the idea that I can move from 16TB usable to 24TB usable at some point down the line if needed. Is there a significant downside to this vs. mirroring when I'm using 4x 8TB drives?

3. Any advice on picking an HBA? I'm just planning on picking up a server-pull LSI card off eBay, but there are so many options, and I'd be interested to hear if there's a meaningful advantage to a newer or more expensive card.

4. Are there any guides I should read on setting up ZFS/FreeNAS for the first time?

Be aware that striped mirrors are a LOT faster in terms of IOPS, because you're reading from and writing to many disks at once - so I hope your VM guests don't do a lot of disk I/O.
Get an HBA that can do IT mode, aka Initiator Target mode, aka SATA passthrough.
FreeNAS maintains a guide, but since I switched back to plain FreeBSD I haven't had to use it for years and years. It looks like they keep it up to date for a given version, though.

Guitarchitect
Nov 8, 2003



Alright, I need to get a little more serious about backups and stuff. I need help!
My situation: I'm a part-time professional photographer and do my own architecture projects on the side. I need to back all of that up.
I also have a household Plex server - I don't care so much if I lose all of that, so I don't necessarily need it backed up - but I do need more space.
Currently I've got a WD USB drive to back up to, and a couple of hard drives in my computer (M.2 OS drive, 500GB WD Black working drive, and a 4TB WD Green media drive that's split in two, with half of it used as backup).
I suspect it would be nice to have a clean, expandable 8TB for my Plex stuff, and a good 2TB for my work with error prevention (RAID?), a local backup (USB, isolated from the computer in case of a break-in), and a cloud backup, so that I can't possibly lose all those priceless family photos and pro photo work. I've already seen a couple of my older computer drives go bad, so I'm realizing I need a better solution than "put it in a box and someday I'll clean it off."

So - what am I looking at requiring? I'm running Windows 10, and I have a separate RPi 3B acting as a HomeSeer hub (and hopefully Pi-hole when I have the time). Doing my own Pi thing would be fine, but with two kids and a massive honey-do list I also don't mind an off-the-shelf solution. It's not as though money is no object, but I don't need anything nutty and over-specified either. And if anyone can recommend a cloud backup service I would be very happy to know - I've got a basic Google One membership right now, which only gets me 100GB, and that's it...

Hughlander
May 11, 2005



BeastOfExmoor posted:

Amazon has 8TB Elements for $125, so I finally hit buy on a couple to add to the two I currently have, to build a ZFS array on FreeNAS inside a Proxmox VM with an HBA passed through, since I already run Proxmox as the hypervisor on my server. A few questions:

1. Proxmox does have native ZFS support, but I'm thinking the FreeNAS UI will be a little more intuitive than the fairly meager Proxmox UI. I'd be open to hearing opinions on whether this is the right path.

2. I'm planning on using RAID-Z2 rather than mirroring, with the idea that I can move from 16TB usable to 24TB usable at some point down the line if needed. Is there a significant downside to this vs. mirroring when I'm using 4x 8TB drives?

3. Any advice on picking an HBA? I'm just planning on picking up a server-pull LSI card off eBay, but there are so many options, and I'd be interested to hear if there's a meaningful advantage to a newer or more expensive card.

4. Are there any guides I should read on setting up ZFS/FreeNAS for the first time?

I've gone this route over the past 4 years. Proxmox is superior to FreeNAS for me, as I do almost everything in Docker and don't pay the 2x memory commit, by having Proxmox run Docker in an LXC.

I started with Z2, and after 3 years reformatted as Z1 when I realized that, with backups, I was comfortable with the chance of a dual-drive failure before I could rebuild.

The guide I used is now 8 years old... http://sysadmin-talk.org/2011/04/cr...-using-freenas/

ROJO
Jan 14, 2006




Oven Wrangler


Worth highlighting that you also seem to get 15% back if you buy with an Amazon Prime credit card... that seems like a smoking deal.

BeastOfExmoor
Aug 19, 2003

I will be gone, but not forever.


ROJO posted:

Worth highlighting that you also seem to get 15% back if you buy with an Amazon Prime credit card... that seems like a smoking deal.

Yes - I didn't notice this when I ordered this morning, but once I saw it, I ordered a third drive. I doubt we're going to see Black Friday pricing below the effective $106.25.

D. Ebdrup posted:

Be aware that striped mirrors are a LOT faster in terms of IOPS, because you're reading from and writing to many disks at once - so I hope your VM guests don't do a lot of disk I/O.

Perhaps I'm not using the right search terms, but I'm not finding any benchmarks that could give me a rough estimate of the performance impact we're talking about. I don't have high I/O needs; my "server" is basically just VMs running the standard Usenet/Plex/etc. workloads, plus whatever else I feel like spinning up a VM to dick around with. The VMs run off SSDs and will only touch the ZFS pool to read/write media, etc.

All my other use will be from other computers connecting over Ethernet, and thus limited to gigabit speeds anyway until 10GbE comes down in price.



Hughlander posted:

I've gone this route over the past 4 years. Proxmox is superior to FreeNAS for me, as I do almost everything in Docker and don't pay the 2x memory commit, by having Proxmox run Docker in an LXC.

I started with Z2, and after 3 years reformatted as Z1 when I realized that, with backups, I was comfortable with the chance of a dual-drive failure before I could rebuild.

The guide I used is now 8 years old... http://sysadmin-talk.org/2011/04/cr...-using-freenas/

Thanks. That domain has long since expired, but Archive.org still has it:
https://web.archive.org/web/2014100...-using-freenas/

Hughlander
May 11, 2005



If the images aren't there, I saved the page to Evernote and can send you a copy, images included, if you care. Though I can't recommend Proxmox/Docker enough.

D. Ebdrup
Mar 13, 2009



BeastOfExmoor posted:

Yes - I didn't notice this when I ordered this morning, but once I saw it, I ordered a third drive. I doubt we're going to see Black Friday pricing below the effective $106.25.


Perhaps I'm not using the right search terms, but I'm not finding any benchmarks that could give me a rough estimate of the performance impact we're talking about. I don't have high I/O needs; my "server" is basically just VMs running the standard Usenet/Plex/etc. workloads, plus whatever else I feel like spinning up a VM to dick around with. The VMs run off SSDs and will only touch the ZFS pool to read/write media, etc.

All my other use will be from other computers connecting over Ethernet, and thus limited to gigabit speeds anyway until 10GbE comes down in price.


Thanks. That domain has long since expired, but Archive.org still has it:
https://web.archive.org/web/2014100...-using-freenas/
EDIT: Nope, I was wrong. I just tested with a couple of memory devices: RAIDz1 scales with one disk's worth of IOPS, RAIDz2 with two disks' worth, and RAIDz3 with three disks' worth.
Bandwidth scales with however many disks you have, minus the space taken up by distributed parity (which can be looked up in these charts), in a given vdev.

D. Ebdrup fucked around with this message at 23:28 on Nov 3, 2019
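To put rough numbers on the mirror-vs-RAIDz capacity side of this exchange, here's a back-of-the-envelope sketch in Python (the function name is mine; it ignores ZFS metadata, padding, and slop space, which the linked charts do account for):

```python
def usable_tb(n_disks, disk_tb, layout):
    """Rough usable capacity of a single vdev, ignoring ZFS overhead."""
    if layout == "mirror":  # striped 2-way mirror pairs
        return (n_disks // 2) * disk_tb
    if layout.startswith("raidz"):  # raidz1 / raidz2 / raidz3
        parity = int(layout[-1])  # trailing digit = parity disks
        return (n_disks - parity) * disk_tb
    raise ValueError(f"unknown layout: {layout}")

print(usable_tb(4, 8, "raidz2"))  # 16 - same space as mirrors at 4 disks
print(usable_tb(4, 8, "mirror"))  # 16
print(usable_tb(5, 8, "raidz2"))  # 24 - the expansion target upthread
```

At four drives, RAID-Z2 and striped mirrors cost the same space; the real difference is IOPS and which combinations of failures you survive.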

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll

Nap Ghost

Anyone have any recommendations on a 16x 3.5" bay or higher-capacity SAS chassis for use in an externally cabled DAS configuration for some ZFS zpools? I know about the NetApp DS4246 and similar, but from what I can tell they're proprietary and can't use SATA drives as JBODs the way ZFS prefers, or there may potentially be some protocol issues with the SMART data. The PowerVault MD1200 and friends seem like a go-to, and I'm fine with ~250W idle for my 16+ drives. I'm kind of tired of having drives stuck in storage and would rather have them accessible, power usage be damned (also, $0.09/kWh doesn't hurt either).

IOwnCalculus
Apr 2, 2003





ROJO posted:

Worth highlighting that you also seem to get 15% back if you buy with an Amazon Prime credit card... that seems like a smoking deal.

Yeah I doubt it will be better than that. Grabbed four so I can start retiring some ancient 3TB drives.

Edit: also free returns in case a better deal pops up.

IOwnCalculus fucked around with this message at 01:59 on Nov 4, 2019

IOwnCalculus
Apr 2, 2003





necrobobsledder posted:

Anyone have any recommendations on a 16x 3.5" bay or higher-capacity SAS chassis for use in an externally cabled DAS configuration for some ZFS zpools? I know about the NetApp DS4246 and similar, but from what I can tell they're proprietary and can't use SATA drives as JBODs the way ZFS prefers, or there may potentially be some protocol issues with the SMART data. The PowerVault MD1200 and friends seem like a go-to, and I'm fine with ~250W idle for my 16+ drives. I'm kind of tired of having drives stuck in storage and would rather have them accessible, power usage be damned (also, $0.09/kWh doesn't hurt either).

The only protocol issue with the NetApp is if you use the SATA/SAS transposers that install in the drive sleds. Supposedly they're there to allow mixing SAS and SATA in the same enclosure, but I removed all of mine, have a mix of SAS and SATA, and it works fine. The drives work either way, but yes, SMART won't see as much when working through the transposer.

Former Human
Oct 15, 2001



How much louder are enterprise/data center HDDs compared to regular consumer drives? Like if I wanted to install an HGST 10TB helium drive in my PC case (not a NAS) would the noise be insufferable?

Paul MaudDib
May 3, 2006

"Tell me of your home world, Usul"


Former Human posted:

How much louder are enterprise/data center HDDs compared to regular consumer drives? Like if I wanted to install an HGST 10TB helium drive in my PC case (not a NAS) would the noise be insufferable?

Bear in mind that some of the EasyStore drives are HGST heliums.

I have 8 shucked 5400rpm drives in my NAS in hot-swap bays (less noise damping). It's obvious that it's on - maybe comparable to some medium-noisy fans in push/pull on an AIO or something. It's not shrill at all, though. I think it's less obnoxious than a blower GPU at, say, 50% load; maybe comparable to 30%.

I have some 7200rpm Toshiba X300s, same deal pretty much. Maybe fractionally noisier, albeit only 4 drives and inside a tower chassis, not a NAS chassis.

Clear as mud, sorry. I think if your room is quiet you'll know they're on, just like the whoosh of a gaming PC. I don't think they're going to be anywhere near as bad as bringing an R710 into your room or whatever.

Paul MaudDib fucked around with this message at 04:50 on Nov 4, 2019

Hed
Mar 31, 2004



Fun Shoe

Hughlander posted:

If the images aren’t there and are still valid I saved it to Evernote and can send a copy with them if you care. Though I can’t recommend proxmox/ docker enough.

I must be missing something, how are you guys running docker on proxmox? I thought it only supported LXC containers, so that’s what I run. It would be neat to be able to run both.

Are you just installing docker on top? Is there a proxmox UI extension for it?

Hughlander
May 11, 2005



Hed posted:

I must be missing something, how are you guys running docker on proxmox? I thought it only supported LXC containers, so that’s what I run. It would be neat to be able to run both.

Are you just installing docker on top? Is there a proxmox UI extension for it?

Yep! My first LXC is a bog-standard Ubuntu that I installed docker-ce (now docker-cli, I guess?!) onto. Config looks like:
code:
arch: amd64
cores: 4
hostname: jefferson
memory: 32768
mp0: /Main-Volume/archived,mp=/media/archived
mp1: /Main-Volume/comics,mp=/media/comics
mp2: /datastore/Media,mp=/media/public
mp3: /Main-Volume/old,mp=/media/old
mp4: /Main-Volume/docker/volumes,mp=/var/lib/docker/volumes
mp5: /datastore/code,mp=/media/code
mp6: /,mp=/mnt/tso,ro=1
net0: name=eth0,bridge=vmbr0,hwaddr=26:B8:75:62:D2:85,ip=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: Main-Volume:subvol-100-disk-1,size=512G
swap: 512
#lxc.aa_profile: lxc-default-with-nesting
#lxc.apparmor.profile: lxc-container-default-with-nesting
lxc.apparmor.profile: unconfined
lxc.cgroup.devices
The reason I did that vs. other solutions is that it has the full 32 gigs of the system available to it, and thus to the Docker containers. As opposed to before, where with hardware passthrough FreeNAS had to have X memory reserved, then ESXi took some, then the VM overcommit, then...
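On newer Proxmox versions, the knobs that make Docker-in-LXC work are exposed as a features line in the container config, rather than the commented-out apparmor overrides in the config above. A sketch (the CT id and path are illustrative):

```
# /etc/pve/lxc/100.conf -- added alongside the options shown above
features: keyctl=1,nesting=1
```

nesting=1 lets the container run its own container runtime; keyctl=1 is needed by some Docker/systemd setups for kernel keyring calls.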

D. Ebdrup
Mar 13, 2009



necrobobsledder posted:

Anyone have any recommendations on a 16x 3.5" bay or higher-capacity SAS chassis for use in an externally cabled DAS configuration for some ZFS zpools? I know about the NetApp DS4246 and similar, but from what I can tell they're proprietary and can't use SATA drives as JBODs the way ZFS prefers, or there may potentially be some protocol issues with the SMART data. The PowerVault MD1200 and friends seem like a go-to, and I'm fine with ~250W idle for my 16+ drives. I'm kind of tired of having drives stuck in storage and would rather have them accessible, power usage be damned (also, $0.09/kWh doesn't hurt either).

Look for SAS expander enclosures - besides the ones from Dell/EMC/Isilon and NetApp, there's a company called RAID Machines which makes some pretty good ones.

IOwnCalculus
Apr 2, 2003





Paul MaudDib posted:

Clear as mud, sorry. I think if your room is quiet you'll know they're on, just like the whoosh of a gaming PC. I don't think they're going to be anywhere near as bad as bringing an R710 into your room or whatever.

Outside of extremely unusual cases I've never heard any 5400/7200RPM drive that I considered any louder than any other, regardless of intended purpose. Even the loudest drives don't come close to the fans in a 2U server.

Also, glad I've got those 8TB drives ordered. Another one of my ancient 3TB drives decided to shit the bed today.

Gay Retard
Jun 7, 2003



Another shucked Seagate from 2010-2012 died over the weekend. I can't wait until the last one finally dies so I can replace it with a shucked 10 TB WD Red.

Edit: Noise-wise, my Fractal Design R5 with 5x 10 TB WD Reds is whisper quiet - I can't even hear the drives being written/read. Best $55 case I've ever gotten.

Gay Retard fucked around with this message at 19:59 on Nov 4, 2019

Atomizer
Jun 24, 2007

Bote McBoteface. so what


Alright, I've been away from the thread for a couple weeks...

Crunchy Black posted:

While we *do* tend to trend towards being data nerds in this thread, as it seems most of us have some amount of industry experience, and its fun to get into the weeds, Atomizer actually answered your question, realistically, in the most direct and succinct way possible lol

BobHoward posted:

My dude you are massively overreacting, I don't think you read Atomizer's tone correctly at all

...and I appreciate most of you guys sticking up for me; I knew I wasn't giving him unreasonable advice. The thing is, D&C is a known shitpoaster (you should really check his rap sheet if you don't believe me...); I vaguely remember us having to deal with him in the WoT thread but thankfully he's been gone for a while. I've resisted responding to him, (and no, he's not going to "change [my] future behavior") lol.

Twerk from Home posted:

Have you guys had SSDs die? What's expected lifespan for consumer SSDs? This guy kicked the bucket this week.



I will say, that looks impressive for what I'm pretty sure is a DRAM-less SSD, although I'm guessing that CDI isn't reading the NAND Write value correctly; a few months ago I posted (here, I think) about a cheap HP 120 GB SSD (S700 non-Pro, IIRC) that seemed to be writing an absurd amount of data for just a simple Win10 installation on my dad's Sandy Bridge laptop. The explanation was that CDI was reading a value and displaying it as GB written when that wasn't actually the unit being reported.

It's hard to find accurate specs for the SanDisk Plus in particular as it's a BOM drive, but 80 TBW for that capacity seems ballpark for at least one version of the drive sold under that name. It wouldn't surprise me if it lasted longer than that, but it most certainly didn't actually write 2 PB to the NAND flash.

H110Hawk posted:

It was a different time back then. It hasn't improved much. (newer versions are less dumb, but truly until ssd's most of the failure conditions were pretty binary. In a hdd a single remapped means that the disk is as good as dead. In a ssd it's Tuesday.)

That's a bit of an exaggeration. I've certainly seen HDDs with dozens of bad/remapped sectors exhibiting anomalous behavior, such that even if they still work they should be retired immediately, but I've also used drives (in non-critical scenarios) with a few bad sectors that are stable and work fine. For example, I bought 3x 1TB WD Greens in 2007, and while they've been boxed up for most of their life since being pulled from the PC they were originally in, I threw one of them into a secondary desktop where it's been working perfectly for the last few years despite having a half-dozen remapped sectors. It only holds games (and still doesn't run 24/7), so it's totally fine for such an unimportant workload. I also use ancient drives for a Steam content-caching server, which is another perfect use for otherwise garbage drives, because you lose essentially nothing if and when they die.
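If you'd rather track remapped-sector counts like these without eyeballing a GUI tool, the raw value is easy to pull out of `smartctl -A` text output; a minimal sketch (the helper is mine, and assumes the attribute table layout smartmontools prints for most ATA drives, where RAW_VALUE is the last column):

```python
def reallocated_sectors(smartctl_output: str) -> int:
    """Return the raw Reallocated_Sector_Ct from 'smartctl -A' text."""
    for line in smartctl_output.splitlines():
        if "Reallocated_Sector_Ct" in line:
            return int(line.split()[-1])  # RAW_VALUE is the last column
    return 0  # attribute not present (e.g. NVMe drives report differently)

sample = ("  5 Reallocated_Sector_Ct   0x0033   199   199   140"
          "    Pre-fail  Always       -       6")
print(reallocated_sectors(sample))  # 6
```

A nonzero-but-stable count on a games-only drive is the "keep an eye on it" case described above; a climbing count is the "retire it" case.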

IOwnCalculus posted:

Outside of extremely unusual cases I've never heard any 5400/7200RPM drive that I considered any louder than any other, regardless of intended purpose. Even the loudest drives don't come close to the fans in a 2U server.

Also, glad I've got those 8TB drives ordered. Another one of my ancient 3TB drives decided to shit the bed today.

In my experience the consumer-oriented drives are quieter than NAS/enterprise/etc. drives, at least as far as audible actuator clicking is concerned. A WD Green/Blue will be less audible than a Red/Gold. I have a Seagate FireCuda and an HGST He8 in my main gaming desktop; the former is more or less inaudible, but the latter is easily the loudest component when it's active. Also, really old drives are similarly loud and clicky regardless of speed, platters, capacity, etc. I got a bunch of the aforementioned "ancient" HDDs (including PATA ones) by pulling them out of old PCs so I could more easily wipe them, and you'd be surprised how clunky an old 4.3 GB or whatever PATA drive is when you haven't used one in two decades!


This is the one thing I wanted to point out to the thread: there's a bunch of storage (and other tech components), including those 8TB external drives, eligible for the 15% back via Amazon Prime Rewards.

Heners_UK
Jun 1, 2002


6TB Seagate External Drive for CAD $120 on Amazon Canada: STEB6000403 https://www.amazon.ca/dp/B07CX8QBG4...i_pNKWDb1ENWQ09

Pros: It's in Canada for $20/TB
Cons: SMR

disaster pastor
May 1, 2007




Grimey Drawer

Stupid newbie alert!

I'm looking at finally setting up a home NAS as a holiday project this year, primarily for Plex. Last year the project was upgrading my PC, so I have a bunch of spare parts I'd like to start with.

Case: Fractal Design Define Silent (six HDD bays)
Motherboard: ASRock H97M Pro4 LGA 1150
CPU: Core i5-4570
RAM: gave it away, but 8GB of DDR3 is pretty cheap

My PC currently has a 500GB SSD and two 4TB HDDs. My plan is to get a 1TB M.2 and 2TB 2.5" SSD for my PC to eliminate the spinning platters, and then use the HDDs in this (and possibly the 500GB SSD as the boot drive) and move all my media over.

Roughly speaking, what should I be looking at doing here? Is the hardware old enough that I should just start fresh instead? Are the 4TB drives so small that they're not worth using? Is migrating a Plex server a pain?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

The hardware is more than sufficient for running something simple like Plex, especially if you're not going to be streaming it to more than 2-3 people at a time.

2x4TB drives is still a good chunk of space to play with, and the 500GB SSD can serve handily as either a boot drive or a scratch disk. If you have Amazon Prime and are thinking you might need more space in the near future, they're currently running fantastic deals on 8TB drives (see previous posts).

Moving a Plex library isn't particularly difficult: most of the "brains" of it (watched history, for example) are contained within one file. Most of it can just be drag-and-dropped into the new install location and it'll figure itself out.

One big question you should be asking yourself is what do you want to base this server on? Windows? Linux? FreeBSD? There are reasons to pick any of them, and where you go from here heavily depends on what direction you want to take.
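To make the drag-and-drop concrete, a minimal sketch of moving a Plex data directory (the function and paths are hypothetical - the actual data-directory location varies by OS - and you'd stop Plex on both machines first). The watched-history database mentioned above lives under "Plug-in Support/Databases":

```python
import shutil
from pathlib import Path

def migrate_plex(old_data_dir, new_data_dir):
    """Copy the whole Plex data directory; return the path of the main
    library database (watched history, etc.) in the new location."""
    src, dst = Path(old_data_dir), Path(new_data_dir)
    shutil.copytree(src, dst, dirs_exist_ok=True)
    return dst / "Plug-in Support" / "Databases" / "com.plexapp.plugins.library.db"
```

After copying, point the new install at the same media paths and it should pick the library up.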

disaster pastor
May 1, 2007




Grimey Drawer

DrDork posted:

The hardware is more than sufficient for running something simple like Plex, especially if you're not going to be streaming it to more than 2-3 people at a time.

Yeah, the vast majority of time it'd be streaming to one place, and only two other people have access to my library anyway, so I feel like this will be fine. I might run some VMs for fun/to fuck around with stuff, but nothing that would be a major resource consumer.

DrDork posted:

2x4TB drives is still a good chunk of space to play with, and the 500GB SSD can serve handily as either a boot drive or a scratch disk. If you have Amazon Prime and are thinking you might need more space in the near future, they're currently running fantastic deals on 8TB drives (see previous posts).

What I don't quite understand is mixing and matching drives. I have no experience with RAID, so if I get two 8TB drives and run them alongside my 4TBs, how much usable storage would I have, and how bad would it hurt if a 4TB drive died?

DrDork posted:

Moving a Plex library isn't particularly difficult: most of the "brains" of it (watched history, for example) are contained within one file. Most of it can just be drag-and-dropped into the new install location and it'll figure itself out.

Cool, thank you.

DrDork posted:

One big question you should be asking yourself is what do you want to base this server on? Windows? Linux? FreeBSD? There are reasons to pick any of them, and where you go from here heavily depends on what direction you want to take.

That is an excellent question and one I'd appreciate guidance on. Looking into this in the past, people have always recommended Unraid, but I really am a stupid newbie to this kind of thing and don't know if that's overkill, or not as good as the press it gets, or what.

unknown
Nov 16, 2002
Ain't got no stinking title yet!


Roundboy posted:

Question: Is there such a thing as a wall-mounted 4-post rack, or am I stuck with one on the floor?

I have a 12U 2-post that I wall-mounted, but I have a single server to rack that I'm holding with a shelf + screws, and I just don't trust a heavier server going on next year.

I don't want a floor unit, because that means 25-42U and I really only need like, 10.

Or I put a small one on a table, but I don't trust that it can handle equipment being pulled out. I know the back posts are typically flush, but maybe some sort of standoff? Also wondering about threaded holes vs. cage nuts down the line.

Am I being stupid, or what are people doing for home server use and networks?

https://www.startech.com/ca/Server-...cket~RK419WALLV

Makes the server flush with the wall and out of the way. There are 1U, 2U, 4U, and 8U versions from multiple companies, even lockable cabinets, etc. There are no issues for the hardware as long as the tabs are capable of holding the gear.

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!



I think this is old news by internet standards but I don't remember seeing anyone mention the QNAP specific malware that's out there. Patch your QNAP NAS if you have one of those:
https://www.zdnet.com/article/thous...snatch-malware/


Also, Amazon's got the 8TB Elements at $125:
https://smile.amazon.com/gp/product/B07D5V2ZXD/

Newegg also has it for $120 after $5 off coupon valid today (Nov 6th):
93XPD3
https://www.newegg.com/black-wd-ele...N82E16822234349

Matt Zerella
Oct 7, 2002


disaster pastor posted:

That is an excellent question and one I'd appreciate guidance on. Looking into this in the past, people have always recommended Unraid, but I really am a stupid newbie to this kind of thing and don't know if that's overkill, or not as good as the press it gets, or what.

Unraid is absolutely perfect for you. It's completely web-based and uses Docker like a plugin system.

It's nice and flexible, too, in terms of being able to run multiple different-sized disks.

Get a USB stick for Unraid, use your SSD as a cache drive, and put your hard drives in it. Only thing is, you might want a third 4TB drive, since one will be used for parity.
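For the mixed-drive-sizes question upthread, Unraid-style capacity math is simple enough to sketch (function name is mine; single parity, ignoring filesystem overhead): the parity drive must be at least as large as the biggest data drive, and usable space is just the sum of the data drives.

```python
def unraid_usable_tb(drive_sizes_tb):
    """Return (parity_tb, usable_tb) for a single-parity Unraid-style array.
    The largest drive becomes parity; the rest are data drives."""
    drives = sorted(drive_sizes_tb, reverse=True)
    parity, data = drives[0], drives[1:]
    return parity, sum(data)

print(unraid_usable_tb([8, 8, 4, 4]))  # (8, 16): 8TB parity, 16TB usable
```

So with two 8TB drives alongside two 4TBs, one 8TB becomes parity and you get 16TB usable, and any single drive (4TB or 8TB) can die without data loss.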

Enos Cabell
Nov 3, 2004



Matt Zerella posted:

Unraid is absolutely perfect for you. It's completely web-based and uses Docker like a plugin system.

It's nice and flexible, too, in terms of being able to run multiple different-sized disks.

Get a USB stick for Unraid, use your SSD as a cache drive, and put your hard drives in it. Only thing is, you might want a third 4TB drive, since one will be used for parity.

This is exactly what I'd suggest, very minimal setup or janitoring required and it just works.

Matt Zerella
Oct 7, 2002


OR get a 10TB drive and use it for parity, so you can expand above 4TB per drive down the line - it kind of future-proofs you. Tough pill to swallow, since you don't get to use those extra 6TB, but it's something to think about.

disaster pastor
May 1, 2007




Grimey Drawer

Thanks, all!

Matt Zerella posted:

OR get a 10TB drive and use it for parity, so you can expand above 4TB per drive down the line - it kind of future-proofs you. Tough pill to swallow, since you don't get to use those extra 6TB, but it's something to think about.

Yeah, I was thinking this. If I'm going to have bigger drives, I've got to start somewhere, and I'm not going to toss the 4TB drives aside (until I eventually run out of bays, I guess).

Mindblast
Jun 28, 2006

Moving at the speed of death.




So I've been reading this thread a bit and have been thinking about getting babby's first NAS. I'd essentially want to move everything I'd normally keep on a regular platter drive into this NAS. So I'd not just store movies/pictures there, but also run applications that don't really benefit from a fast SSD. So basically "fast like a regular internal drive, but not in the PC. Also more capacity." So I guess I need more than just a gigabit ethernet port, but not a million sleeves.

Synology and QNAP have a ton of choices, but I'm not sure which one fits my bill the best. Or maybe I should look elsewhere entirely.

Actuarial Fables
Jul 29, 2014



Taco Defender

Mindblast posted:

So I've been reading this thread a bit and have been thinking about getting babby's first NAS. I'd essentially want to move everything I'd normally keep on a regular platter drive into this NAS. So I'd not just store movies/pictures there, but also run applications that don't really benefit from a fast SSD. So basically "fast like a regular internal drive, but not in the PC. Also more capacity." So I guess I need more than just a gigabit ethernet port, but not a million sleeves.

Synology and QNAP have a ton of choices, but I'm not sure which one fits my bill the best. Or maybe I should look elsewhere entirely.

There are a lot of choices with many different feature sets. Could you elaborate on what you want to use the NAS for (what applications specifically? how much data stored? how many users? do you want something beefy that can run its own apps or just something to hold data?)

Mindblast
Jun 28, 2006

Moving at the speed of death.




Thanks for the fast reply!

Many of the applications would be games (both modern and older emulation things), since many of them just don't benefit that much from SSD-tier speeds but can still hog quite a bit of space. I'd still like speeds in the range of a modern platter disk, though. In terms of space I'm still figuring it out. I currently have about 5TB spread across a few drives. I have a few TB free, but I'd prefer to both scale up and leave room for future growth. So maybe roughly double what I've got, with room left in the NAS for the future? User-wise it will be just a few clients: my main PC and my future HTPC. Good that you mentioned apps; I didn't consider those, but they could be handy. I'd guess the HTPC would be able to decode on its own, but having support for Plex couldn't hurt. And if you can run apps, I don't doubt that includes download managers, which would be pretty damn handy.
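On the "more than gigabit" question, some ballpark numbers (assumed typical figures, not measurements of any particular hardware): gigabit Ethernet is 125 MB/s on the wire and roughly 110-115 MB/s in practice after protocol overhead, while a modern 7200rpm drive can push maybe 150-250 MB/s sequential. So for sequential reads a single gigabit link can indeed be the bottleneck, but for random I/O (small files, game asset loads) the drive itself is far slower than the network.

```python
# Ballpark throughput comparison in MB/s (assumed figures, not measured)
gigabit_theoretical = 1000 / 8   # 125 MB/s on the wire
gigabit_practical = 112          # rough ceiling after TCP/SMB overhead
hdd_sequential = 200             # modern 7200rpm sequential, rough midpoint
hdd_random_4k = 1.5              # 4K random I/O, rough order of magnitude

# Sequential reads off a single HDD can outrun gigabit Ethernet...
print(hdd_sequential > gigabit_practical)  # True
# ...but random workloads won't come close to saturating the link:
print(hdd_random_4k < gigabit_practical)   # True
```

In other words, whether a NAS "feels like a local platter drive" over gigabit depends mostly on the access pattern, not just the port speed.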

Crunchy Black
Oct 24, 2017

CASTOR: Uh, it was all fine and you don't remember?
VINDMAN: No, it was bad and I do remember.




I'm well known as someone who likes overkill and high-availability shit, but, honestly? Why wouldn't you just buy an SSD for your games, throw it in your PC, and host media/HTPC duties on a QNAP or something? It seems like you're way overthinking it, especially considering how cheap SSDs have gotten lately.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Crunchy Black posted:

I'm well known as someone who likes overkill and high-availability shit, but, honestly? Why wouldn't you just buy an SSD for your games, throw it in your PC, and host media/HTPC duties on a QNAP or something? It seems like you're way overthinking it, especially considering how cheap SSDs have gotten lately.

Or, if you really don't need the speed of a SSD, a big HDD. Hell, you could get two 8TB drives to throw into a mirror for ~$100/ea right now. Any NAS is gonna cost more than that just in hardware for the base machine.

Where a NAS really comes in is if you're serving files/programs up to multiple clients--a PC + laptop + HTPC + whatever, for example, or if you want a machine you don't feel bad about leaving on 24/7 for torrents, remote access, or whatever else. If you do want that sort of thing, cool--all of that is pretty lightweight and should be manageable by most modern NAS (both roll-your-own and Synology/QNAP/etc) without much trouble.
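To put numbers on the two-8TB-mirror option (using the ~$100/drive figure from this thread; prices obviously move around): a mirror keeps identical copies on every drive, so usable space is one drive's worth no matter how many copies you run, and it survives losing any one drive.

```python
def mirror_usable_tb(drive_tb, copies=2):
    """A mirror stores identical copies on each drive, so usable
    capacity is a single drive's worth regardless of copy count."""
    assert copies >= 2, "a mirror needs at least two drives"
    return drive_tb

drive_tb, drive_cost, copies = 8, 100, 2
usable = mirror_usable_tb(drive_tb, copies)
print(usable)                            # -> 8
print(copies * drive_cost / usable)      # $/usable TB -> 25.0
```

So ~$200 buys 8TB of mirrored storage at $25 per usable TB, before you spend anything on the machine serving it.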
