|
Atomizer posted: That, uh, completely defeats the purpose of RAID1, but ok!

RAID is not backup. If you haven't touched enough computers to see a RAID controller shit the bed and write garbage to all of your mirrored replicas, then please just trust in the mantra: RAID is not backup.
|
|
Atomizer posted: That, uh, completely defeats the purpose of RAID1, but ok!

I use RAID1 myself, and it's good if a drive fails: it's easy to build a new mirror and keep everything running in the meantime. Read performance from two disks is also very good, and my storage is primarily about reads.
|
|
If I'm away from home for a long period of time, how feasible is it to manage and use a Synology box 100% remotely? And are there any potential security issues with doing this that I need to be aware of?
|
|
H110Hawk posted: RAID is not backup. If you haven't touched enough computers to see a RAID controller shit the bed and write garbage to all of your mirrored replicas, then please just trust in the mantra: RAID is not backup.

Yeah, I know, I wrote basically that about RAID on the last page; I've emphasized the importance of backups totally unrelated to the concept of RAID. The point I was making was that RAID1 in particular is supposed to leave you with one good drive when the other goes bad (in a two-drive setup, of course) so you can rebuild the mirror from the good drive. Obviously there's a worst-case scenario where the whole array gets destroyed, but it's probably not as common an occurrence as you're making it out to be. And yes, back up everything anyway, but if you have a RAID1, a drive dies, and you replace it but manually restore from backup, then that defeats half the point of the array in the first place (the other half being availability).

Devian666 posted: I use RAID1 myself, and it's good if a drive fails: it's easy to build a new mirror and keep everything running in the meantime. Read performance from two disks is also very good, and my storage is primarily about reads.

This sounds like a normal experience; I think Hawk is just being super-pessimistic, which is understandable when you're talking about data integrity.
|
|
H110Hawk posted: RAID is not backup. If you haven't touched enough computers to see a RAID controller shit the bed and write garbage to all of your mirrored replicas, then please just trust in the mantra: RAID is not backup.

I back up all of my critical files onto a striped 1000-drive 1.44MB floppy array.
|
|
forbidden dialectics posted: I back up all of my critical files onto a striped 1000-drive 1.44MB floppy array.

It technically counts. Also, I missed who you were and was being a bit hyperbolic on purpose. Hard to keep track of who has what dumb ideas on the internet.
|
|
Oldie but goodie: https://youtu.be/gSrnXgAmK8k

A 2-bay RAID 1 is alright, but a 4-bay RAID 5 is more ideal for main storage.

caberham fucked around with this message at 04:23 on Nov 8, 2018
|
I still laugh my balls off over a stripe of 3 raid5 arrays. What a goddamn moron.
|
|
That one never gets old lmao
|
|
Matt Zerella posted: I still laugh my balls off over a stripe of 3 raid5 arrays. What a goddamn moron.

Well, it is a good technique for increasing the probability of failure. It's a top-notch effort to put them all on one RAID controller and have it fail.
|
|
Hey thread, need some advice. I have an old (2013 or so?) HP MicroServer Gen8 that I want to use partially as a NAS. It runs the HP image of ESXi 6.0 and has the disks configured as RAID1 using the onboard B120i RAID controller (software based). Since I currently have just 10GB of (ECC) RAM in the device, and I run two Linux servers on it, I'd prefer not to spend more than about 2GB of RAM on the NAS VM. People typically recommend FreeNAS for home NAS stuff, but I don't think it's a great fit for me due to both the RAM limit and the fact that I'm already using RAID1, since ZFS prefers having direct knowledge of the underlying disks. I would be OK with just using ext4 or something. What is the preferred non-FreeNAS distro for doing home NAS stuff?
|
|
forbidden dialectics posted: I back up all of my critical files onto a striped 1000-drive 1.44MB floppy array.

Can't use ZFS on floppies; it needs at least 64MB of free space for uberblock allocation and other things.
|
|
D. Ebdrup posted: Can't use ZFS on floppies; it needs at least 64MB of free space for uberblock allocation and other things.

Use LVM to stripe sets of ~256 floppies and use those for ZFS.
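Purely to commit to the bit, here's what that might look like as a sketch - every device name below is hypothetical (a real machine has maybe two floppy drives), and nobody should actually do this:

```
# Hypothetical: stripe 256 floppy drives into one LVM logical volume, then hand it to ZFS.
pvcreate /dev/fd{0..255}                               # tag each floppy as an LVM physical volume
vgcreate floppyvg /dev/fd{0..255}                      # pool them into one volume group
lvcreate -i 256 -I 64 -l 100%FREE -n floppylv floppyvg # 256-way stripe, 64KiB stripe size
zpool create doompool /dev/floppyvg/floppylv           # ~360MB pool, clears the 64MB minimum
```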
|
|
D. Ebdrup posted: Can't use ZFS on floppies; it needs at least 64MB of free space for uberblock allocation and other things.

Daisy-chained parallel-port Zip disks, then?
|
|
Best Buy has 10TB Easystores for $180 right now.
|
|
taqueso posted: Use LVM to stripe sets of ~256 floppies and use those for ZFS.

Methylethylaldehyde posted: Daisy-chained parallel-port Zip disks, then?

Even with GEOM, I wouldn't do that. I remember how often a brand-new set of floppies for MS-DOS and Win3.1x would fail - and that was with roughly a factor of 10 fewer drives.
|
|
D. Ebdrup posted: Even with GEOM, I wouldn't do that. I remember how often a brand-new set of floppies for MS-DOS and Win3.1x would fail - and that was with roughly a factor of 10 fewer drives.

Go to enough Storage Wars-style storage unit auctions and you're bound to run into one full of mid-2000s-era hoarded computer stuff. A huge pile of random I/O cards, a heaping pile of CRTs, maybe some VCRs, or one of those big TVs on a cart!
|
|
Quick question: is a Core i3-4150 good enough to repurpose into a simple unRAID Plex/file server? All I need is something that can handle one or two streams at a time and host drives for a handful of nightly automated backups.
|
|
DIEGETIC SPACEMAN posted: Quick question: is a Core i3-4150 good enough to repurpose into a simple unRAID Plex/file server? All I need is something that can handle one or two streams at a time and host drives for a handful of nightly automated backups.

Absolutely. My Haswell i5 handles way more like a champ.
|
|
For those of you feeling the itch for more hard drives, Best Buy has the WD EasyStore 10TB for $180 as part of their early Black Friday deals. Not quite the deal the 8TB for <$140 were (and probably will be again), but it's a very good price for 10TB.
|
|
$180 for 10TB is the equivalent of $144 for 8, with the added benefit of increased density if you're limited by bays.
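Spelling out the per-TB arithmetic (nothing here beyond the numbers already in the thread):

```
# $180 for 10TB -> $18/TB; at the same rate an 8TB drive would cost 8 x $18 = $144
echo "scale=2; 180/10" | bc       # 18.00 dollars per TB
echo "scale=2; (180/10)*8" | bc   # 144.00 equivalent price for 8TB
```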
|
|
I have an HP EX920 1TB NVMe drive. For a while there's been a question about whether x2 actually hurts consumer workloads significantly. Give me some iozone or fio command-line arguments. My proposed protocol is that I'll run the commands in my x4 slot, then in the x2, then again in the x4 just for a drive-load comparison.
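Since fio arguments were requested, here is a hedged sketch of the sort of invocations that should expose a difference if one exists - the filename, test size, and runtimes are placeholders, not anything agreed on in the thread:

```
# Sequential reads, 1MiB blocks, queue depth 32: the workload most likely to show x2 vs. x4.
fio --name=seqread --filename=/mnt/ex920/testfile --size=8G \
    --rw=read --bs=1M --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based

# 4KiB random reads at queue depth 1: closer to a consumer workload, where the narrower
# link usually doesn't matter.
fio --name=randread --filename=/mnt/ex920/testfile --size=8G \
    --rw=randread --bs=4k --iodepth=1 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based
```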
|
|
I want to sell some of my old drives; is there an easy-to-use tool that'll allow me to permanently wipe the files?

EDIT: forgot to mention, I'm looking for a Windows-based solution.

Incessant Excess fucked around with this message at 18:36 on Nov 12, 2018
|
Incessant Excess posted: I want to sell some of my old drives; is there an easy-to-use tool that'll allow me to permanently wipe the files?

What OS? In Linux it's as easy as a single command.
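For example (a generic sketch: /dev/sdX is a placeholder for the disk you actually intend to wipe, and either command irreversibly overwrites whatever it points at):

```
# One pass of zeros over the whole device (not just the filesystem):
dd if=/dev/zero of=/dev/sdX bs=1M status=progress
# or the same idea with shred: zero random passes, then a final pass of zeros:
shred -v -n 0 -z /dev/sdX
```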
|
|
Incessant Excess posted: I want to sell some of my old drives; is there an easy-to-use tool that'll allow me to permanently wipe the files?

DBAN, one pass, all 0s, if they're rotational. If they're SSDs, you must use SATA/SCSI secure erase (it's a protocol-level command issued to the drive, which then handles the whole thing) to guarantee all of the data is erased. If close enough is good enough for you, see above; I would do close enough. Disconnect all disks (including things exposed over iSCSI, etc.) that you do not want erased. Anything else is FUD.
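For the SSD case, the usual route (from a live Linux USB, since the poster is on Windows) is hdparm's ATA security commands - a sketch, with /dev/sdX and the throwaway password as placeholders; note that many drives come up "frozen" and need a suspend/resume cycle before this works:

```
# Confirm the drive supports the security feature set and isn't frozen
hdparm -I /dev/sdX | grep -iA8 security
# Set a temporary password (required by the ATA spec), then issue the erase
hdparm --user-master u --security-set-pass temppass /dev/sdX
hdparm --user-master u --security-erase temppass /dev/sdX
```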
|
|
Don't read this recent change to FreeBSD's rm if you don't wanna know way too much about disks, controllers, caches, filesystems, and handling of files, or how much systems programmers care about trying to do it right.
|
|
Will a 200-watt power supply be fine for two 4TB drives? And do drives alone generate enough heat to be an issue in a really small case, one that isn't designed to hold two full-sized drives? I have this i3 with 16GB of RAM that I don't do anything with, so I was thinking of adding two large drives and FreeNAS (or unRAID). This is what it looks like: https://imgur.com/a/E4Dw5DC
|
|
200W is fine for a few drives, assuming you don't also have a gaming GPU shoved in there.
|
|
Rusty posted: Will a 200-watt power supply be fine for two 4TB drives? And do drives alone generate enough heat to be an issue in a really small case, one that isn't designed to hold two full-sized drives? I have this i3 with 16GB of RAM that I don't do anything with, so I was thinking of adding two large drives and FreeNAS (or unRAID).

It's the RAM you want to be worried about.
|
|
Rusty posted: Will a 200-watt power supply be fine for two 4TB drives? And do drives alone generate enough heat to be an issue in a really small case, one that isn't designed to hold two full-sized drives? I have this i3 with 16GB of RAM that I don't do anything with, so I was thinking of adding two large drives and FreeNAS (or unRAID).

Yes, and probably not, respectively. The last time I was working with an external 3.5" HDD connected to my Kill-A-Watt, it read about 10W, IIRC. Everything else is fine with that PSU. I'd be more concerned about figuring out how you're gonna fit two drives in there than the amount of power/heat.

H110Hawk posted: It's the RAM you want to be worried about.
|
|
Atomizer posted: Yes, and probably not, respectively. The last time I was working with an external 3.5" HDD connected to my Kill-A-Watt, it read about 10W, IIRC. Everything else is fine with that PSU. I'd be more concerned about figuring out how you're gonna fit two drives in there than the amount of power/heat.

Thank you, yes, my thoughts as well. I have two full-sized drives I can test first before I buy two large drives. It seems like it will fit; it has two drives in it now at the bottom, but obviously not full-sized, and I think I can mount one in the DVD enclosure.
|
|
Rusty posted: Thank you, yes, my thoughts as well. I have two full-sized drives I can test first before I buy two large drives. It seems like it will fit; it has two drives in it now at the bottom, but obviously not full-sized, and I think I can mount one in the DVD enclosure.

Ah, if you have a spare full-size 5.25" bay you can use something like this (and I have that exact one in a Shuttle XPC) to get a proper 3.5" mounting point (plus a couple of 2.5" ones!). They also make a 4x2.5"-to-5.25" version if you have enough SATA ports to just use laptop-size HDDs and SSDs.
|
|
What? RAM and CPUs draw way more peak power than two spinning disks. As you said, ~20W for both disks vs. easily 100W for the CPU + RAM. I don't think they've made poor life choices with 16GB of RAM.
|
|
H110Hawk posted: What? RAM and CPUs draw way more peak power than two spinning disks. As you said, ~20W for both disks vs. easily 100W for the CPU + RAM. I don't think they've made poor life choices with 16GB of RAM.

You didn't say CPU and RAM, you said RAM.

Internet Explorer fucked around with this message at 06:58 on Nov 13, 2018
|
I have a Netgear GS108T managed gigabit switch. Four of its ports are on VLAN 1, which is sort of my default VLAN, and the other four are on VLAN 2, which is my guest network. For some reason VLAN 2 only gets about 25MB/s while VLAN 1 gets the full gigabit, around 100MB/s. Does anyone know wtf might be going on? Crap switch?
|
|
H110Hawk posted: What? RAM and CPUs draw way more peak power than two spinning disks. As you said, ~20W for both disks vs. easily 100W for the CPU + RAM. I don't think they've made poor life choices with 16GB of RAM.

Your 100W there breaks down to roughly 95W for the CPU and 5W for the RAM; it's pretty safe to just worry about the former, as the spinning disks will actually draw more power than the RAM.
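A rough back-of-the-envelope for the box in question, with every per-component figure a ballpark assumption rather than a measurement:

```
#!/bin/sh
# Approximate load power budget: i3 + 16GB RAM + two 3.5" drives on a 200W PSU
cpu=65         # desktop i3 TDP, roughly worst case
ram=5          # ~2.5W per 8GB DDR3 DIMM, two DIMMs
disks=20       # ~10W per spinning 3.5" drive, two drives
board_misc=25  # motherboard, fans, USB, PSU inefficiency headroom
echo "total ~$((cpu + ram + disks + board_misc))W against a 200W supply"   # ~115W
```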
|
|
D. Ebdrup posted: Don't read this recent change to FreeBSD's rm if you don't wanna know way too much about disks, controllers, caches, filesystems, and handling of files, or how much systems programmers care about trying to do it right.

People don't need to connect to the internet to get on the internet. So we replaced the eth0 up command to actually connect you to a loopback device that replies to every single packet with a "nice!" and "Rick and Morty is a good show."
|
|
EVIL Gibson posted: People don't need to connect to the internet to get on the internet. So we replaced the eth0 up command to actually connect you to a loopback device that replies to every single packet with a "nice!" and "Rick and Morty is a good show."

You joke. On Solaris you had to "plumb" interfaces before they would work.

Eletriarnation posted: Your 100W there breaks down to roughly 95W for the CPU and 5W for the RAM; it's pretty safe to just worry about the former, as the spinning disks will actually draw more power than the RAM.

Look at me, I am wrong on the internet. I was remembering back to anecdotal evidence from five(?) years ago, when upgrading RAM in a rack of servers caused them to blow breakers. We had added amps to the rack by doubling the DIMM count. Guess it was a red herring, or voltages/types have dropped dramatically in wattage. Could it also have been that they were able to work harder, so their CPUs drew more power? Now I know.
|
|
Enterprise workloads are so different from consumer or home NASes that it's not fair to compare them much, I'd argue. The biggest issues in enterprise DC designs are related to power efficiency, memory throughput metrics (Facebook's biggest issue, according to Brendan Gregg at least), and raw network throughput beyond 100 GbE speeds. The biggest issues for consumers in a home storage system may be power related, but usually for cost reasons rather than density. Meanwhile, Google folks have been saying they're running into electrical code issues where they can't just add more racks; it's not that power cost or cooling itself is the issue - they've hit a wall of bureaucracy/code, which explains the expansion into alternative power beyond the liberal brownie points.

At home, I'm more concerned about the Best Buy EasyStore sales than springing for some Gold drives that won't bust on me (I treat my non-work hours as $0/hr billable, which is really inaccurate, probably). Then again, at work we're budget strapped in odd ways, so the $25k/mo we spend on some bare metal critical for the business is scrutinized more than the $70k+/mo we blow in AWS on an 80% overprovisioned environment.
|
|
H110Hawk posted: Look at me, I am wrong on the internet. I was remembering back to anecdotal evidence from five(?) years ago, when upgrading RAM in a rack of servers caused them to blow breakers. We had added amps to the rack by doubling the DIMM count. Guess it was a red herring, or voltages/types have dropped dramatically in wattage. Could it also have been that they were able to work harder, so their CPUs drew more power?

It's an understandable mixup, considering that servers have a lot more RAM, RDIMMs (or FBDIMMs) use a lot more power, and if your old servers were old enough to use DDR2 or DDR1, then between higher current and higher voltage your wattage goes up substantially from that too. Standard DDR3-1600 @ 1.5V is about 2.5-3W for an 8GB DIMM, and DDR4 is going to be less, from what I am seeing.

e:

necrobobsledder posted: Google folks have been saying they're running into electrical code issues where they can't just add more racks; it's not that power cost or cooling itself is the issue - they've hit a wall of bureaucracy/code, which explains the expansion into alternative power beyond the liberal brownie points.

I work for a networking vendor with lots of labs full of 10kW+ boxes, and I was told several years ago when I started that at this location we were basically drawing as much power into our buildings (at least the ones with large labs) as the local power company would allow. I cannot imagine this situation has improved much, considering how power density has increased per RU. The labs also run into occasional infrastructure issues with how much power they can deliver to a given area, because they were designed several years ago around a lot less average draw per rack. I've tripped a breaker before by rebooting two full-rack chassis at once and causing all eight fan trays to spin up to full speed at the same time.

Eletriarnation fucked around with this message at 18:06 on Nov 13, 2018