Moey posted:
Damn, the N40L is on sale for $200 at MacMall. I may have to grab one for a file server now, then get drives once they come back down to normal.

I'm thinking the same thing, but I don't see prices falling any time soon. It sucks that you can buy the whole computer for less than a single disk.
Anyone else having problems getting ahold of a DS411? I placed an order for one 3 weeks ago and it's STILL backordered. Didn't realize that a NAS was such a hot Christmas gift.
Moey posted:
Damn, the N40L is on sale for $200 at MacMall. I may have to grab one for a file server now, then get drives once they come back down to normal.

Argh, I should have jumped on this when I saw it last night... back to the standard price again. I'm fairly happy with my current setup, but that was a great price for that little box!
zero0ne posted:
I don't see drives going down to what they used to be for a WHILE. Today they are going for 7-10 cents/GB; less than a year ago they were going for as little as 2-4 cents/GB.

As important a metric as $/GB is, the real driver of falling prices is storage density. Historically, multi-platter drives haven't been sold consistently for less than about $75 a drive, simply due to the expensive materials and the cost of manufacturing, transport, packaging, etc. Look back: the sweet spot for hard disks in $/GB has usually been somewhere between $75 and $150ish. There is a reason 1TB drives are not cheaper per GB than 2TB drives. I'm not saying I'm not gunshy about the high cost of storage, but even without the tragedy in Southeast Asia, you're not going much below $70 for 2TB. What will allow a better ratio is the continued development and production of larger disks. With the new 4TB drives being announced, we will see the price per TB creep lower only as those disks become cheaper.
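The fixed-floor argument can be sketched with toy numbers (the $75 floor and the per-TB increment below are illustrative assumptions for this thread's argument, not real cost data):

```python
# Toy model of drive pricing: an assumed fixed per-unit floor (materials,
# assembly, shipping) plus an assumed marginal cost per extra terabyte.
FIXED_FLOOR_USD = 75.0   # hypothetical fixed cost per physical drive
PER_TB_USD = 15.0        # hypothetical marginal cost per terabyte

def cents_per_gb(capacity_tb):
    """Price per GB (in cents) under the fixed-floor model."""
    price = FIXED_FLOOR_USD + PER_TB_USD * capacity_tb
    return price / (capacity_tb * 1000) * 100  # dollars/GB -> cents/GB

for tb in (1, 2, 3, 4):
    print(f"{tb} TB: {cents_per_gb(tb):.2f} cents/GB")
```

Under any model with a fixed per-unit floor, amortizing that floor over more capacity pulls $/GB down, which is why bigger disks, rather than cheaper small disks, drive the ratio lower.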
New DSM stuff; looks like they are bringing out USB 3.0 models too: http://homeservershow.com/new-synol...+Server+Show%29
I just realized, does the flood impact just production or research as well?
Wheelchair Stunts posted:I just realized, does the flood impact just production or research as well?
Wheelchair Stunts posted:
I just realized, does the flood impact just production or research as well?

Define 'impact.' http://venturebeat.com/2011/12/01/w...hailand-floods/

Most articles I've read have focused on production facilities. Unlike the Japanese earthquake/tsunami/nuclear crisis that shut down much of Japan, the population center of Bangkok was not as badly affected. http://www.cnngo.com/bangkok/life/t...tourists-883113 Researchers based in the region would have been unlikely to be impacted unless they were doing research specifically on manufacturing techniques and practices. WD and Seagate are American companies, Samsung is based in Korea, and Hitachi and Toshiba are in Japan. Much of the impact is focused on WD's plant in Bang Pa-In, which produced 25% of the 'sliders' for the entire planet and was completely submerged; it has since reopened, and another plant is expected to come online in March.

In the long term it may even be good for the industry and consumers, as it will likely lead to temporarily increased profit margins, which means more money for R&D and a greater willingness to expand and increase capacity.

http://www.nytimes.com/2011/11/07/business/global/07iht-floods07.html?pagewanted=all posted:
The shortage is not entirely bad news for the disk-drive business, especially for those companies whose facilities were not damaged, such as Seagate, which has a factory high and dry on a plateau in northeastern Thailand. Mr. Monroe said price increases will help lift industry profit margin to about 30 percent from about 20 percent before the floods.

This reminds me of a similar incident a few years ago with DRAM chips: a disruption in supply caused the price of memory to double in a few months. That eventually worked itself out. Be patient and eventually we will see things stabilize.
I thought with DRAM the issue was that prices got so retardedly low that some manufacturers just stopped production for a long time until demand caught up. To be fair, drives were heading the same way. 3TB drives were actually extremely close to $-per-GB parity with 2TB-and-smaller drives right before the floods, and I cannot remember a single time when the biggest-capacity drive on the market was price-competitive on a per-GB basis.
IOwnCalculus posted:
I thought with DRAM the issue was that prices got so retardedly low that some manufacturers just stopped production for a long time until demand caught up.

I was talking about the 2001-2002 time frame, when RAM stopped dropping on a nearly daily basis and instead went up and stayed up for a while. If I remember right there were reports of it being typhoon-related in SE Asia, but it turned out to be price fixing: http://www.anandtech.com/show/1255 But you are correct, this could actually help the long-term health of the market a little bit.
KennyG posted:
Define 'impact.'

This is about what I was thinking. I was also hoping that when production comes back online, capacities would be about where we'd have figured: as in, having 4TB drives (or whatever the new hotness would be at the time) had R&D not stalled.
My old ReadyNAS is being RMA'd - a very old pre-NETGEAR unit with an awesome existing warranty. Should I stick with my original plan and get a Pro 6, or is there a better fit for me?

Other than keeping files and streaming stuff (mostly music, incl. an old Squeezebox) I didn't use the old box for much, but I'd love to have a decent iSCSI solution with snapshots for directly storing my WMC recordings (with a 4-tuner InfiniTV recording simultaneously) and my VM LUNs, and to finally have some automated cloud-based backup (not subscription-based but my own private one, e.g. JungleDisk, S3, etc.)

As far as I can tell almost every unit runs on some shitty underpowered mobile CPU, so since I want to keep this box for many years again (my X6 still runs fine, it's just that one disk channel is dead) I decided not to touch the Atom-based ones. Photo software, USB 3.0, etc. are nice but gimmicks for me (my home desktop has 3.0, I use Lightroom, etc.), but I would like to see decent plugin support. I definitely want 5-6 drives, fast iSCSI and CIFS reads, decent write speed, and enough reserve CPU if I want to write my own plugin or go crazy with available ones in the future (e.g. Plex transcoding). Price should be under $1k (which means $600-700 on top of my NV+ selling price.)

My first choice, the ReadyNAS Pro 6, still runs the same fugly old web UI (nowadays well known to be pretty awful compared to others), but I'm content with it; it gets the job done, and I don't plan to visit it 2-3x a day, rather once or twice a week or less. It's also got a decent CPU - some older dual-core 2.6GHz, 65W Intel, an E5300? - a gig of RAM, etc., and it's around $950...

...or should I be looking at something else? TIA
Hope this is the right thread, but I'm doing a trade on Craigslist for a Western Digital My Passport Essential SE 1TB USB 3.0 portable external hard drive. Can anyone point me to some good software that will quickly inspect the drive for faults? I'll have a few minutes to plug it in and mess around to make sure it's in good working order, but beyond filling it up and then pulling all the data back, is there a faster way to ensure integrity?
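A crude version of the "fill it up and pull it back" check can be scripted; this sketch (an illustration, not a substitute for real diagnostics) writes pseudorandom data and compares hashes on read-back. Note the read may be served from the OS cache rather than the platters, so a SMART self-test or a full badblocks pass is a stronger check:

```python
import hashlib
import os

def verify_drive(mount_point, size_mb=64, chunk=1024 * 1024):
    """Write pseudorandom data to the drive, read it back, compare hashes."""
    path = os.path.join(mount_point, "verify.tmp")
    h_write = hashlib.sha256()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            block = os.urandom(chunk)
            h_write.update(block)
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # push the data out of application/OS buffers
    h_read = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h_read.update(block)
    os.remove(path)
    return h_write.hexdigest() == h_read.hexdigest()
```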
DNova posted:
Someone in this thread a while ago was asking about portable NAS units with RAID. I'm not sure if he ever got an answer but I just stumbled across this thing: http://newertech.com/products/gmaxmini.php

How is this a NAS? It doesn't have any network connectivity.
Bucket Joneses posted:
How is this a NAS? It doesn't have any network connectivity.

I couldn't find the post and I wasn't sure if network connectivity was a real requirement or an assumption. From what I remembered of his requirements, that device would fit them. If it was you who originally posted that question, sorry for getting that wrong.
For all those people who insist on running a Windows file server, rejoice: you now have a non-shit (i.e. non-NTFS) filesystem to choose from if you shell out for the appropriate server version. http://blogs.msdn.com/b/b8/archive/...ndows-refs.aspx

Unlike the Storage Spaces thing, this is actually like ZFS, or more accurately BTRFS (everything is in checksummed B+ trees, EVERYTHING). Basically, Storage Spaces is the volume manager and ReFS is the filesystem you'd want to stick under it. Before you guys start spouting nonsense again: it's not exactly like ZFS, because ZFS is an all-in-one combined volume-manager/filesystem deal which implements COW very differently and has even more features, like deduplication. From a quick read-through of the article, ReFS is very much a Windows implementation of the BTRFS idea.

Now if only Apple would abandon HFS+ already.
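The "everything checksummed" design is easy to sketch in miniature: store a checksum beside each block on write and verify it on read, so silent corruption surfaces as an error instead of bad data. This is a toy model only, nothing like the actual ReFS or BTRFS on-disk formats:

```python
import zlib

class ChecksummedStore:
    """Toy block store: every block carries a CRC32, verified on read."""

    def __init__(self):
        self.blocks = {}  # block_id -> (checksum, data)

    def write(self, block_id, data):
        # Checksum is computed once at write time and stored with the data.
        self.blocks[block_id] = (zlib.crc32(data), data)

    def read(self, block_id):
        checksum, data = self.blocks[block_id]
        if zlib.crc32(data) != checksum:
            # A real filesystem would try a redundant copy here instead.
            raise IOError(f"checksum mismatch on block {block_id}")
        return data
```

The point is that a bit flip in `data` no longer goes unnoticed: the next read raises instead of handing corrupt bytes to the application.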
I suspect Apple stepped away from the ZFS ball because Oracle was involved, and Apple has a history of avoiding anything enterprise-impacting unless it aligns with a handful of its policies (such as the iPad and iPhone deployment set of docs). The other big thing about Windows-style redundancy is the per-folder duplication setting. ZFS and every other RAID on the planet don't really do that, as a rule, given that in an enterprise an entire array is considered important or non-important, not just a folder. It's why I'm implementing tiered storage at home too.
Can someone point me to a reasonable explanation of the differences between ZFS and BTRFS?
KennyG posted:
Can someone point me to a reasonable explanation of the differences between ZFS and BTRFS?

btrfs = new and improved (and untested), GPL

They are essentially the same class of FS with different implementations. ZFS has more bells and whistles, but that will likely not be true in the near future.
KennyG posted:
Can someone point me to a reasonable explanation of the differences between ZFS and BTRFS?

Here's a good start by an ex-ZFS developer: http://lwn.net/Articles/342892/ The article is obviously considerably out of date, but it covers the basics.
I want BTRFS so bad: all the great stuff of ZFS, but natively on a Linux system, because Solaris is such a pain in the ass. At this point, though, I've been with Solaris so long I've got Stockholm syndrome.
FISHMANPET posted:
I want BTRFS so bad, because I want all the great stuff of ZFS but natively on a Linux system because Solaris is such a pain in the ass, but at this point I've been with Solaris so long I've got Stockholm syndrome.

Is the performance that different from FreeBSD (aside from SMB being awful)? 9.0 is out and on v28 of ZFS. It's got a hell of a lot more software than Solaris and is a lot easier to use, IMO. But I hear you; when BTRFS matures it'll be great to go back to the Linux world, where there's even more software.
For me it makes sense to slog through Solaris, because we run Solaris at work. Also, the newest features aren't in BSD. I've got Sickbeard, Transmission, SABnzbd, and a Minecraft server running on my server. I seem to be doing well enough.
Hail storage goons. I'm looking to build a 6-10TB NAS/iSCSI target for our small office: mostly video storage, SQL backups, and possibly some VMs run from the iSCSI. I've got a license for Storage Server 2008, a couple of decent dual-port gigabit cards, an i5 CPU/mobo available (MSI Z68MA-ED55, i5-2500K), and around $2300 budgeted to complete the project.

I was thinking about something like this card: Intel RAID SATA 8 internal port w/ 256MB cache memory PCI-E 2.0 x8 Controller Card (RT3WB080) http://www.newegg.com/Product/Produ...N82E16816117214

Eight of these drives: HITACHI Deskstar 5K3000 HDS5C3020ALA632 (0F12117) 2TB 32MB Cache SATA 6.0Gb/s 3.5" Internal Hard Drive http://www.newegg.com/Product/Produ...N82E16822145475

This case: Athena Power CA-SWH01BH8 Black 1.2mm Metal, ABS Plastic (front bezel) Pedestal Server Case http://www.newegg.com/Product/Produ...N82E16811192058

This PSU: SeaSonic X Series X650 Gold (SS-650KM Active PFC F3) 650W ATX12V V2.3/EPS 12V V2.91 http://www.newegg.com/Product/Produ...N82E16817151088

Any thoughts or suggestions? The total for the listed parts comes to just over $2k, so I have a little headroom if something should be changed. I figure I'll put the OS on a pair of smaller mirrored drives using the motherboard's RAID, and RAID 6 the eight 2TB drives.
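As a sanity check on the capacity math (RAID 6 gives up two drives' worth of space to parity):

```python
def raid6_usable_tb(n_drives, drive_tb):
    """Usable capacity of a RAID 6 array: two drives' worth goes to parity."""
    assert n_drives >= 4, "RAID 6 needs at least four drives"
    return (n_drives - 2) * drive_tb

# Eight 2TB drives leave 12TB usable, comfortably inside the 6-10TB target.
print(raid6_usable_tb(8, 2))
```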
If you're going to spend that kind of dough on a RAID card, check out what Areca has to offer. I have an ARC-1222 (8-port SATA/SAS, PCIe x8) with BBU and it has been working great since about June 2011. I have used the online RAID level change and online volume expansion without any problems, and what's great is that I did it all through the web interface (that's why it has an Ethernet port). It's a great card with some pretty nice extra features for $9 more than the Intel card you're looking at. It's also based on an Intel chipset, if that matters to you.
DigitalMocking posted:
Hail storage goons. I'm looking to build a 6 - 10 tb NAS/iSCSI target for our small office, mostly video storage, SQL backups, possibly run some VMs from the iSCSI. I've got a license for 2008 storage server, I have a couple of decent dual port gigabit cards and an i5 cpu/mobo available(MSI Z68MA-ED55, I5-2500k) and around $2300 budgeted to complete the project.

Make sure you buy a UPS and configure it correctly.
LmaoTheKid posted:
Make sure you buy a UPS and configure it correctly.

Good point. I'll add the battery backup unit for the card and an extra UPS; I don't think the one in our office has any spare capacity right now.
necrobobsledder posted:
I suspect Apple stepped away from the ZFS ball because Oracle was involved and Apple has a history of being very strictly non-enterprise impacting unless aligned with a bunch of policies (such as the iPad and iPhone deployment set of docs).

Apple has a long history of hating to pay anyone while ripping off open source: building features on top of it and selling them in expensive products.
Longinus00 posted:
For all those people who insist on running a windows file server rejoice, you now have a non shit(ntfs) filesystem to choose from if you shell out for the appropriate server version.

It's going to be Server-only, it still uses shitty NTFS stuff, and it's non-bootable. Yawn, MS is really a boring, incompetent company.
szlevi posted:
Apple has a long history of hating to pay anyone while ripping off open source by building features based on them and selling these in expensive products.

Almost every single enterprise software vendor has ripped off open source somewhere, at least for internal use - say, everything in the Apache software portfolio. There's almost no Fortune 500 company not guilty of utilizing open source to pad their margins and accelerate development.
I need some goon help with the moment of truth. My NAS is showing that I have a degraded RAID, but I can't figure out which drive is the culprit. The good news is that all my data is still there and I can access it still. However, how can I diagnose which drive has the problem and how serious it is without losing all of my data? What should my next steps be to return to normal redundancy status? This is what I see in the GUI: http://awesomescreenshot.com/033sazj81
Anyone successfully install ZNC (Or other IRC bouncer) on their Synology?
qutius posted:argh, I should have jumped on this when I saw it last night...back to the standard price again.
DigitalMocking posted:
Good point, I'll add the backup unit for the card and an extra UPS, I don't think the one in our office has any more capacity right now.

You don't even have to break the bank; I got this for Xmas, and even though you're using it in an office, it should be fine. Remember to properly configure PowerChute/apcupsd depending on what platform you install on the server, and make sure it will shut down your VMs properly. http://www.amazon.com/APC-BR1500G-B...U/ref=pd_cp_e_2
I have a question regarding NAS enclosures. I currently have a Thecus N4100 Pro and I'm getting worried about drive lifetime. The NAS specs claim the disks shouldn't run above 40C for optimal operation, but when I check drive temperatures in the NAS interface, most of them are closer to 50C than 40C. One of the drives recently died (not detectable), and while the rest of the drives seem to be doing fine, I'm worried the high temps might kill them too. The drives are 2TB WD20EADS in RAID 5. I just updated to the latest firmware, which claims to keep the fan at max speed at all times. The room where the NAS sits usually stays at 28C to 30C. Should I be worried about the high temperatures? The data is not mission-critical, but I would be pretty bummed to lose it. (I keep the enclosure off while I wait for the replacement drive to arrive.)
TerryLennox posted:
The room where the NAS is usually remains at 28C to 30C.

That's 82F to 86F for us Yanks. Anything you can do about that? If it's in a server room, can you move it to a lower position? Aside from that, keep the fans clean and make backups.
BnT posted:
That's 82F to 86F for us yanks. Anything you can do about that? If it's in a server room can you move it to a lower position? Aside from that, keep the fans clean and make backups.

Well, I could leave the AC running, but that would skyrocket my power bill. It's my room, and I'm in the tropics, so ambient temps are in that range. Would shutting the NAS down except while I'm using it be a better idea?
TerryLennox posted:
Well I could leave the AC running but that would skyrocket my power bill. Its my room and I'm in the tropics so ambient temps are in that range.

http://tech.blorge.com/Structure:%2...s-once-thought/ This article may make you feel better about drive temps. I'm sure if you search you can find the paper the article is based on and determine exactly how hot they were running their drives.
The actual study is here. The lowest failure rates are actually associated with their moderate-temperature drives, 30-35 and 35-40 degrees C. That said, they do show a considerable failure-rate increase at the three-year mark and beyond once drive temps climb past 45 degrees C, but very little increase before then. Drives below 30 degrees C actually fail more often than drives over 45 degrees C until the three-year mark.

Anecdotal evidence? My 7200RPM drives have all been running at that kind of temperature for ages, but it took them a few years to start failing. I've got one just shy of three solid years of powered-on hours that's sitting at 46 degrees right now, and it's been over 50 degrees in the past. My newer 1.5TB drives are 5400RPM, though, and they run a bit cooler.

IOwnCalculus fucked around with this message at 21:55 on Jan 20, 2012
I wish they'd post a follow-up to that study. It's five years old and based on drives that are even older. Drive manufacturers have done a lot to both the hardware and firmware of hard drives to extend their lifespan and I'm curious if there are tangible results.