|
Skandranon posted:Keep in mind that case has very little room for expansion, and if you have plans to expand, it's a lot cheaper to be able to add more hard drives than upgrading the size of your drives. I have an M1015, three 3.5" drives, and four SSDs in mine using an Icy Dock. It has served me plenty well.
|
|
Skandranon posted:Keep in mind that case has very little room for expansion, and if you have plans to expand, it's a lot cheaper to be able to add more hard drives than upgrading the size of your drives. Yeah, that would be the problem. Right now I'm at 1TB + 2TB + 3TB with 3TB available in LVM parity. I figure the next upgrade would be to ditch the 1 + 2 and get another 3TB, but of course I'd be at the same capacity... I would mount something in place of the DVD drive, though. Say I go with the TS140: could I just use it to boot off a USB key with something like NAS4Free? I need Linux so it recognizes my LVM setup.
|
|
Tiger.Bomb posted:Say I go with the TS140: could I just use it to boot off a USB key with something like NAS4Free? I need Linux so it recognizes my LVM setup. Booting off a USB drive is the recommended method for running NAS4Free/FreeNAS.
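Writing the install image to the key from any other unix box is just a pipe into dd. A sketch only: the image filename and the /dev/da0 device node below are placeholders, so check the name of whatever you actually downloaded and confirm the key's device node in dmesg before writing to it.
code:
# decompress the downloaded image and stream it straight onto the USB key
xzcat FreeNAS-9.3-RELEASE-x64.img.xz | dd of=/dev/da0 bs=64k
Then point the BIOS at the key and boot.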
|
|
DrDork posted:Booting off a USB drive is the recommended method for running NAS4Free/FreeNAS. Fantastic. I forgot that I have a Raspberry Pi 2 that I haven't been using; I will use it as the HTPC for now. As long as I can run Sick Beard and such, I think the TS140 + NAS4Free will do what I need. Thanks for the help!
|
|
So I just set up a Lenovo TS140 ThinkServer with FreeNAS and four WD Red 3TB drives, and it's been fantastic so far. Great performance, whisper quiet, low power usage. The case is really meant to hold two drives, as there are only two drive caddies in it. However, the unused floppy disk drive bay happens to be a millimeter-perfect fit for a third WD Red. I slid the third drive in there and found a small hole in the metal chassis to anchor the drive down with a screw. The fourth drive was a little trickier. I had a generic drive caddy laying around, so I installed the fourth drive in it and slid the caddy into the larger bay just under the disc drive. It was way too narrow for that slot, though, so I cut some foam out of the foam holders that shipped with the case and used that to secure the drive in the bay. And that's it: the case now holds four drives easily!
|
|
GreatGreen posted:So I just set up a Lenovo TS140 ThinkServer with FreeNAS and four WD Red 3TB drives, and it's been fantastic so far. Great performance, whisper quiet, low power usage. Glad you like it, it is a nice machine.
|
|
I'm asking again because I have new comparisons. I'm looking at the NETGEAR RN31400 or the Synology DS415. The primary use is going to be always-on mass storage that also handles my Usenet/Sick Beard downloading on the NAS. The Netgear has a dual-core Intel Atom CPU at 2.1GHz w/2GB RAM; the Synology has a Marvell Armada XP dual-core at 1.3GHz w/1GB. Most other things are equal, it seems, although the Netgear is a little noisier (which doesn't really factor in, as it's in another room). It suffers on small bulk file writes in comparison to the Synology, but not to a degree I'm worried about. The Netgear is also about $150 cheaper for me. I'm happy to pay extra for quality, but it's not like the UI is going to justify the extra cash. I have used Netgear previously and Synology is what we use at work, so I am familiar with both. From a Usenet operation perspective, doesn't it make more sense to have the Intel CPU and extra RAM?
|
|
Laserface posted:I'm asking again because I have new comparisons. If Sick Beard and SAB run on the Synology: will you actually be waiting on the PAR and RAR operations? If it's a background task, what's an extra 20 minutes or so?
|
|
Yeah, all the automated downloads occur in the middle of the day, but the spontaneous downloads while I'm home would be annoying to wait on while they unpack.
|
|
Laserface posted:Yeah, all the automated downloads occur in the middle of the day, but the spontaneous downloads while I'm home would be annoying to wait on while they unpack. Yeah, some more horsepower will be nice in that case. I know I hated waiting on my little server when my own workstation could PAR in a couple of minutes what the server took 10-20 minutes to do.
|
|
Tiger.Bomb posted:Looking to build/buy a new NAS/HTPC hybrid. If you are primarily using the storage locally and not sharing it with any other devices, then having everything on the same machine sounds like a great idea. You avoid having to set up the media streaming device to pull files over the network, you don't have to set up any networking on the storage side, and you get full disk speed without the overhead of the sharing protocols. I second the recommendation for a TS140 for storage + media. If you have a smart TV you can use that for the front-end player, or get something like a Roku/Apple TV.
|
|
Anyone venture to guess which of these 2.5" HDDs is more reliable/better than the other? Seagate/Samsung Spinpoint M9T 2TB (9.5mm, 667GB/platter, 32MB cache) vs. WD Green WD20NPVX 2TB (15mm, 500GB/platter, 8MB cache). Both are found in common consumer 2.5" externals, the WD being thicker due to its lower platter density.
|
|
Are 3TBs still ass when it comes to reliability? That's what Backblaze says, anyway.
skooma512 fucked around with this message at 22:13 on May 6, 2015
|
skooma512 posted:Are 3TBs still ass when it comes to reliability? That's what Backblaze says, anyway. Depends which 3TBs. They've put out a few more articles that elaborate on the topic. The 3TB WD Reds are good, as are the Hitachis. It's the Seagate DM001 3TBs that are terrible, supposedly because they used cheap parts after the flooding in Thailand wiped out HD production a few years back.
|
|
Skandranon posted:Depends which 3TBs. The 3TB WD Reds are good, as are the Hitachis. It's the Seagate DM001 3TBs that are terrible. Ah good, I've been eyeing HGST 3TBs on Amazon.
|
|
I have a very odd SSH problem that I need help fixing. I have 2 FreeNAS arrays; one is a backup array that I recently started copying to. In the middle of the copy it failed, and naturally I went to investigate why. It turns out the network to the first array died, and when I dove into why, it turns out that when I reinitialize the network and SSH to the second array (or from the second array to the first), the network drops out. Other than that the arrays appear fine. Has anyone seen or experienced this type of problem? Could there be an underlying hardware issue here? How can I diagnose this?
|
|
Are they both trying to claim the same IP?
|
|
IOwnCalculus posted:Are they both trying to claim the same IP? They are not
|
|
Megaman posted:They are not Are both FreeNAS installations named the same thing (freenas.local, maybe)?
|
|
GreatGreen posted:Are both FreeNAS installations named the same thing (freenas.local, maybe)? They are not
|
|
SSH to each and give us the output of ifconfig -a. And does the network fail only on the first one, or does it alternate between them?
|
|
My NAS was making some unusual noises last night. code:
|
|
G-Prime posted:SSH to each and give us the output of ifconfig -a. And does the network fail only on the first one, or does it alternate between them? Note that while I do all of this, the backup array is fine; it's something wrong with the main array. Main array: code:
code:
I'm supposing that because it can get a connection when I unplug and plug back in, this is not ultimately a hardware problem but purely a software problem? edit - It also appears that I can SSH from the main array to other machines just fine; it's just SSHing to the backup array that's a problem. Megaman fucked around with this message at 20:20 on May 7, 2015
|
skooma512 posted:Ah good, I've been eyeing HGST 3TBs on Amazon. I'm cheaping out and am looking at some Toshiba drives instead. RAID6 / RAIDZ2 is for drives being unreliable, right? Megaman posted:Note that while I do all of this, the backup array is fine; it's something wrong with the main array. What does your routing table look like from each box? netstat -r or netstat -rn on FreeBSD.
|
|
necrobobsledder posted:I'm cheaping out and am looking at some Toshiba drives instead. RAID6 / RAIDZ2 is for drives being unreliable, right? It's really for more than "drive is unreliable"; it's for any decent-sized array past 4-5 drives, most especially if you're using drives 4TB and larger, since rebuild times get to be so long with those.
|
|
Megaman posted:Note while I do all of this the backup array is fine, it's something wrong with the main array. I'd say, then, do "ssh -vvvvv <ip>" and give us the output from that. I don't know how many verbose flags it is on FreeBSD, so just do 5, and hopefully we'll get all the output. There might be something screamingly obvious there. If you can give us one with it failing, and then one where you pull the cable and plug it back in with success, that could be helpful too.
|
|
necrobobsledder posted:What does your routing table look like from each box? netstat -r or netstat -rn on FreeBSD. Google-fu yo. Primary array (SSH from here stops its network): code:
code:
G-Prime posted:I'd say, then, do "ssh -vvvvv <ip>" and give us the output from that. I don't know how many verbose flags it is on FreeBSD, so just do 5, and hopefully we'll get all the output. There might be something screamingly obvious there. If you can give us one with it failing, and then one where you pull the cable and plug it back in with success, that could be helpful too. SSH from primary array to backup array: code:
I've also just tried reinstalling FreeNAS fresh on a new key, not even mounting my drives, and still, just SSHing from the main FreeNAS to the FreeNAS backup drops the main box off the network.
|
|
necrobobsledder posted:I'm cheaping out and am looking at some Toshiba drives instead. RAID6 / RAIDZ2 is for drives being unreliable, right? Dual parity is more due to the fact that, with very large drives, the chances of another failure during rebuild go up, so you need more parity to even be able to rebuild reliably. With single parity, a second drive fails during the rebuild and it all falls apart. If you want to be cheap, I'd rather go with good drives in RAID5 than crap drives in RAID6. You don't actually ever want to have to rebuild your array; the parity is there for the worst-case scenario.
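To put a rough number on that rebuild risk (back-of-envelope only, assuming the commonly quoted consumer-drive spec of one unrecoverable read error per 10^14 bits read; real-world rates vary a lot):
code:
# Rebuilding a 4x3TB RAID5 means reading the 3 surviving drives in full.
awk 'BEGIN {
  bits = 3 * 3e12 * 8;                    # ~9TB read back, in bits
  p = 1 - exp(bits * log(1 - 1e-14));     # P(at least one URE in that many reads)
  printf "P(URE during rebuild) ~ %.0f%%\n", p * 100;
}'
That comes out to roughly a coin flip, which is why single parity gets scary at these drive sizes.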
|
|
Also, on the backup array machine, what does your /var/log/messages say? Other logs are of help too; it's not clear to me at present what "drop" means. The routes are super simple.
|
|
So I ordered the TS140 and a little USB key. I probably also need some Cat6 to take advantage of my gigabit NICs, but I ran into a problem: I thought NAS4Free was still on Linux. Right now my 3 HDDs are in a (1 + 2) + (3) LVM setup, and I don't think FreeBSD will mount LVM. I am OK with transitioning to something else (e.g. ZFS, which seems all the rage, plus software RAID1), but I don't have the spare drives to transfer everything over (plus it took like 16 hours last time). Any suggestions?
|
|
In my 2 servers, I have a smaller mirrored array along with the main array; one is 2x3TB, the other 2x1.5TB. When moving data around like that, I prefer to copy data to the smaller array, rebuild the new array from scratch, then copy back. That always keeps the data in at least one redundancy-protected array. However, this requires more hardware than you have available. How much data do you have? If you are slightly able to tolerate risk, you could copy your data to a single backup drive, then onto your new array. The correct answer to your question is "get more drives to move data with", but I suspect you can't/won't do this. All other solutions entail some degree of risk of data loss.
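The copy-out/copy-back shuffle itself is simple. A sketch, with /mnt/tank and /mnt/scratch as placeholder mount points for the main and temporary arrays:
code:
# copy out, preserving permissions and hardlinks; confirm it finishes clean
rsync -aHv --progress /mnt/tank/ /mnt/scratch/
# ...destroy and rebuild the main array here...
# then copy everything back
rsync -aHv --progress /mnt/scratch/ /mnt/tank/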
|
|
Are there any tricks to getting printing to work in XPenology? The printer (Brother HL-2240D) shows up under External Devices; the exact model's driver isn't available, but I'm trying a generic PCL driver that is known to work with this model. Nothing happens with "print test page". I hooked it up to Google Cloud Print and it shows up as available, but any print job goes from Queued to In Progress to Finished without actually printing anything. I've tried many sequences of rebooting and reconnecting, and a few other drivers, but nothing seems to work (however, it does at least show as unavailable in Google Cloud Print with some drivers). Could it be that these drivers are binaries that aren't x86-compatible? I don't know how Linux print drivers work, so I have no idea.
|
|
Skandranon posted:When moving data around like that, I prefer to copy data to the smaller array, rebuild the new array from scratch, then copy back. I have about 1.75TB used of the 3TB capacity. I could take out the 1TB and 2TB volumes, convert them to a ZFS volume, transfer from LVM (so this would require I first install Linux on the server), then convert the 3TB to ZFS. The problem is that the 1TB drive is a few years old and the whole reason I got the 3TB, so I'd like to avoid the point in time where my data is only on the 1 + 2 (the 3TB is brand new). Does ZFS work like LVM, in that I can add the physical volumes and present them as a single volume? The other alternative I have is to buy another 3TB, copy everything over to it, then convert the first 3TB to ZFS, copy it over, and finally convert the new 3TB to ZFS for redundancy. You mentioned posting articles earlier... do you have a link? I'd like to read up on this more. It was scary/stressful setting up LVM, so I'd like to be more confident about ZFS and also have better justification for making the switch (Ubuntu Server would probably work fine, I'm comfortable in the console, and the LVMs should be detected automatically).
|
|
ZFS works kind of differently. It is a load-balanced JBOD of virtual devices (vdevs). The vdevs can be made up of anything: single drives, 2-3 way mirrors, or RAIDZ of varying parity levels. Once a vdev has been added, you can't remove or replace it anymore (although Illumos is finally working on this), just the drives inside the vdev. So if you've added ZFS's equivalent of a RAID6 array to the pool, it'll be what it is. You can't expand or shrink it, nor replace it with, say, a mirror. You can only switch out the drives one after another with bigger ones to grow it in size. Once Illumos has its shit working, you'll be able to drop vdevs and redo them, if sufficient free space is available on the remainder of the pool to do so. Who knows when that code will be ready, though.
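A quick illustration of the vdev model (device names da0 through da4 are made-up placeholders):
code:
zpool create tank mirror da0 da1   # pool starts as a single mirror vdev
zpool add tank mirror da2 da3      # stripe in a second vdev; it can never be removed
zpool replace tank da0 da4         # swapping a disk inside a vdev is fine, though
zpool status tank                  # shows the pool's vdev layout
The pool load-balances new writes across both mirrors, which is the JBOD-of-vdevs behavior described above.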
|
|
I've never been a fan of some of the more complex RAID strategies, like ZFS, Drobo, or even RAID5. If you lose more than your parity, you either lose the entire array or recovery is unimaginably difficult. The same goes for most NAS appliances: if the hardware itself fails, you are again in a sticky spot where you replace the NAS with exactly the same model and hope it will recognize the array. I prefer either RAID1, as recovery is dead simple (use the drive that still works), or things like Unraid/SnapRAID. If you blow past your parity protection, you still have all the drives that work, and they can be read from any system that supports the partition type (Unraid uses ReiserFS, SnapRAID uses NTFS/EXT4). You could even read them via a USB enclosure if you wanted.
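For anyone curious, a SnapRAID setup really is just a small config mapping data disks to a parity disk, plus a periodic sync. The paths below are illustrative placeholders, not a recommendation:
code:
cat > /etc/snapraid.conf <<'EOF'
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
EOF
snapraid sync   # computes parity; each data disk stays a plain, independently readable filesystem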
|
|
Skandranon posted:I've never been a fan of some of the more complex RAID strategies, like ZFS, Drobo, or even RAID5. If you lose more than your parity, you either lose the entire array or recovery is unimaginably difficult. Recovery is actually pretty easy: replace dead hardware, create a new array, restore from backup.
|
|
DNova posted:Recovery is actually pretty easy: replace dead hardware, create a new array, restore from backup. That's not recovering your data, that's starting from scratch. It doesn't work if your array is actually larger than your capacity to back up.
|
|
Skandranon posted:I've never been a fan of some of the more complex RAID strategies, like ZFS, Drobo, or even RAID5. Strictly speaking you mean RAIDZ, since ZFS can do mirroring. I do like mirroring for performance and reliability reasons, but past a certain point it's not necessarily superior to RAID variations with more redundancy. Consider an eight-disk array. I could format it as four mirrored pairs and get four drives' worth of capacity, or as a RAIDZ3 volume and get five drives' worth of capacity. In RAIDZ3 I can lose any three drives, replace them, resilver, and be whole again. With the mirrored topology it's possible that I could lose as many as four drives and still be whole, but it's also entirely possible that losing two or more devices means I lose a mirrored pair and have to restore from tape. In my mind RAIDZ3 is the clear winner.
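Putting numbers on that is just counting, if we assume failures hit drives uniformly at random (a sketch of the eight-disk comparison above):
code:
awk 'BEGIN {
  # 2 simultaneous failures: C(8,2)=28 combos, 4 of them kill a whole pair
  printf "2 failures: pairs survive %.1f%%, RAIDZ3 100%%\n", 100 * (1 - 4/28);
  # 3 failures: survivors pick 3 of the 4 pairs and one disk from each,
  # C(4,3)*2^3 = 32 out of C(8,3) = 56 combos
  printf "3 failures: pairs survive %.1f%%, RAIDZ3 100%%\n", 100 * 32/56;
}'
So RAIDZ3 rides out any three failures, while the mirrored layout survives three random failures only about 57% of the time.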
|
|
Skandranon posted:That's not recovering your data, that's starting from scratch. It doesn't work if your array is actually larger than your capacity to back up. You're right, it doesn't work if you have no backups. But... Zorak of Michigan posted:With the mirrored topology it's possible that I could lose as many as four drives and still be whole, but it's also entirely possible that losing two or more devices means I lose a mirrored pair and have to restore from tape. ...having a bunch of independent mirrored pairs instead of a distributed-parity array is absurd for the reason quoted above, among others. Most importantly, get a backup strategy.
|
|
DNova posted:You're right, it doesn't work if you have no backups. I don't think I suggested anywhere a bunch of independent mirrored pairs. For large arrays I was suggesting Unraid/SnapRAID, for the reasons specified in my post; I was suggesting mirrors for simple things like boot drives or temp storage. As to my backup strategy, I have 20+TB of media in my arrays. My only backup option would be to build ANOTHER array of similar size, which is not quite in my budget yet. For me, better fault tolerance is a much more important feature, and it would be anyway, as the fault tolerance I already have with SnapRAID would be easier to deal with than restoring 20TB from backup.
|