|
DNova posted: OK... but it let you upgrade and retained all your settings and all that good stuff?

No, it never would upgrade via the GUI. I tried about five times, and each time I left the "please wait" screen for 30+ minutes; the first time I left it for a few hours. Eventually I did upgrade it by booting a CD, and it retained all the settings. I had a backup of the settings anyway, so I wasn't worried about losing them.
|
IT Guy posted: No, it never would upgrade via the GUI. I tried about 5 times, each time I left the "please wait" screen for 30+ minutes. The first time I left it for a few hours.

Yeah, that's what I meant: that it at least went OK via the CD and you weren't forced to do a complete reinstall.
|
DNova posted: Yeah that's what I meant.. that it at least went ok with the CD and you weren't forced to do a complete reinstall.

Oh, then yes, it worked. I don't even think the GUI did anything past creating a folder called .freeNAS in the mount point I chose; it hung after that. I might try using IE8 next time I have to upgrade. What browsers do you guys use to do the upgrade?
|
I've never done it before. I suppose I'll try very soon with the latest FF. I don't want to do the upgrade before the array has finished resilvering.
|
Star War Sex Parrot posted: I've found QNAP to have the best offerings at the moment if you want proper enterprise stuff.

Have you used a lot of them? I'm torn between an 859 and an 879; technically I don't really have enough budgeted for an 879, but I could make it work if it's really worth it. It's going to be a backup target for ghetto VCB copies of four servers and a couple of other machines from a couple of Essentials hosts. It's not really enterprise, but I've POC'd the whole thing on a Synology, and combined with Backup Exec to cover SQL and Exchange, it makes me relatively confident of our disaster recovery options for critical systems.
|
bob arctor posted:Have you used a lot of them
|
bob arctor posted: Have you used a lot of them? I'm torn between an 859 and an 879... It's going to be a backup target for ghetto VCB copies of four servers and a couple of other machines from a couple of Essentials hosts.

Any specific questions? I am currently reconfiguring two of our TS-809U-RP units to be the new backup targets for our Veeam backups. We also have a TS-1079 with 10Gb fiber that is populated with 10x 3TB drives. No real plans for that thing yet; probably going to keep copies of every backup of everything I can think of (Nth copies of Veeam backups, switch dumps, firewall dumps, maybe some syslog stuff?).
|
Moey posted: Any specific questions?

Is performance such that it's actually worth using 10 gig on it? What RAID level do you use? I'm thinking eight 1TB drives in RAID 6.
|
I'm having trouble with a Buffalo LinkStation after replacing a failed drive. The NAS is a Duo model and was set up as a mirrored array. I replaced the drive and powered it back up, and it immediately started repairing. I let it do its thing, but it appears that once it's done repairing it starts over and begins again. It's been doing this for about 16 hours, and the drives are only 500GB. Is anything wrong, or am I overreacting about the length of the rebuild? The only reason I think something is up is that the web GUI lists a percent complete and time remaining, and I watched it go from nearly done back to single digits complete. Thanks for any help!
|
bob arctor posted: is performance such that it's actually worth using 10 gig on it? What RAID level do you use?

Right now it is RAID 5 + hot spare. If we ever did lose a drive, I think the rebuild time would be a week+. This was a call my boss made, but I'm not too worried, as it's going to be used for backups of backups. Once I get a chance (this weekend or early next week) I'll fire up some IOmeter tests over the 10Gb link and let you know what it can do. Any specific test types you want me to run?
|
I just moved from FreeNAS to OpenSolaris/napp-it and I'm having issues reconfiguring my shares. Is there a way to share an entire ZFS pool over CIFS and NFS through the web GUI? I have a zpool mounted at /data and I just want to share the entire /data folder, but the web GUI only seems to let me share individual ZFS folders underneath /data rather than the entire pool.

edit: I ended up just doing it from the command line.

astr0man fucked around with this message at 19:12 on May 12, 2012
|
So I switched out my U-Verse RG like two months ago, and I think I forgot to re-add the port forwards for CrashPlan, because I can't seem to back up to it remotely anymore. Is it actually OK to forward the control port (Telnet-based, I think; 4242), or is opening that to the world a horrible idea?
|
movax posted: Is it actually OK to forward the control port (Telnet based I think, 4242), or is opening that to the world a horrible idea?

I wouldn't knowingly forward a port that sends any authentication/control data in plaintext. You could always tunnel it through SSH or OpenVPN.
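For what it's worth, the SSH-tunnel version is a one-liner. A minimal sketch, where the user and hostname are placeholders for whatever box sits in front of the NAS:

```shell
# Forward local port 4242 over SSH to port 4242 on the NAS's network,
# so the CrashPlan client can connect to localhost:4242 instead of an
# internet-exposed port. "user" and "home-nas.example.com" are placeholders.
ssh -N -L 4242:localhost:4242 user@home-nas.example.com
```

With that running, nothing on the RG needs to be forwarded except SSH itself.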
|
If you open telnet to the world in the year of our lord 2012, I will personally come plant an axe in your brains.
|
It's some kind of proprietary encrypted protocol; an anonymous Telnet client can get far enough to check connectivity, and then it dies. I might have tracked the problem down to something else, though, so maybe I don't have to open it.
|
movax posted: It's some kind of proprietary encrypted protocol, an anon Telnet client can get far enough to check connectivity and then it dies.

By that definition every port is "telnet based", because they all send data over a port.
|
I'm trying to set up FreeNAS 0.7 with SABnzbd, Sick Beard, etc. I keep reading that I can add packages such as SABnzbd through the web GUI: apparently you go to System > Packages and do it from there. That button doesn't exist on my FreeNAS. I'm pretty confused.
|
FISHMANPET posted: By that definition every port is "telnet based" because they all send data over a port.

Well... troubleshooting a shitload of things DOES involve "telnet to the port and see if you get a response". I mean, honestly, telnet is just raw TCP to a port, isn't it? Not the cleanest way to refer to it, but it gets the job done.
|
Telex posted: well... troubleshooting a shitload of things DOES involve "telnet to the port and see if you get a response".

Yes, the telnet client is very simple and can be used to test TCP ports and the like. However, telnet is its own standard, and it has historically been used as a login mechanism that sends credentials in plaintext, so some assumptions are to be expected.
|
tijag posted: The difference as it currently stands [about $350] makes the choice between the two even easier IMO.
|
evil_bunnY posted: These days telnet is for people and devices too dumb to use encryption libraries.

Also every industrial device ever. And I guess 4242 is safe to open; it isn't telnet-related whatsoever.
|
movax posted: Also every industrial device ever.

Also terrible: any kind of scientific instrument.

evil_bunnY fucked around with this message at 15:40 on May 14, 2012
|
Anyone have experience with Buffalo NAS devices, specifically a TS-RXL6E6? We recently reclaimed this 8TB device from our datacenter. I've tried to configure it as a RAID 10 NFS share for some VMware hosts, but the maximum speed I can get out of it is 10Mb/s using NFS on the latest firmware. I've tried CIFS/SMB as well with a Windows box, and the performance is just as hideous. So far Buffalo is 0 for 2 on NAS devices in our business; they seem to be slow and crappy.
|
My last workplace tried to get me to use a Buffalo TeraStation or whatever for my ISO image storage. I think physical CD-ROMs went faster than the NAS over both NFS and CIFS/SMB. It was in the process of being decommissioned, at least. Oh wait, maybe you got our old NAS.
|
Shane-O-Mac posted: I'm trying to set up FreeNAS 0.7 with Sabnzbd, Sickbeard, etc. I keep reading that I can add packages such as Sabnzbd through the web GUI.

FreeNAS 7 is ancient now; try getting 8.0.4-multimedia and follow this guide.
|
MOLLUSC posted: FreeNAS 7 is ancient now, try getting 8.0.4-multimedia and follow this guide.

Thanks. I took a look at it, and I'm not tech-savvy enough to translate that guide into a guide for SAB. However, there's a SAB plugin; it just isn't usable until FreeNAS 8.2 is released. I may just end up waiting until then, unless anybody else knows what to do.
|
I'm going to start backing up my webhost to my ZFS-based NAS, and I have a crazy idea; let me know if it's crazy or if I'm missing something. I want to use a cron script to pull the current web page/database/svn repository via sftp or rsync to a dedicated pool, and take a snapshot every time. This means I'll have a historical record (I can roll back to any arbitrary backup if I need to), and the snapshots should be relatively low-cost compared to separate dated backups, especially considering that only a small number of files will change between backups.

Is doing that many snapshots a bad idea? I haven't done much with snapshots on ZFS yet; can you pull out several historical revisions, or once you roll back do you lose all of the later snapshots?
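For the record, the cron job for that scheme can be tiny. A sketch under assumed names: the dataset `tank/webbackup` and the host `user@webhost` are placeholders, not anything from the thread:

```shell
#!/bin/sh
# Pull the current site down, then freeze the result in a dated snapshot.
# "user@webhost", /var/www, and tank/webbackup are all placeholder names.
rsync -az --delete user@webhost:/var/www/ /tank/webbackup/www/
zfs snapshot "tank/webbackup@$(date +%Y-%m-%d_%H%M)"
```

Because ZFS snapshots are copy-on-write, each run only costs the blocks that changed since the previous one.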
|
On a load like that it'll become unmanageable for you before it has any impact you could perceive.
|
Delta-Wye posted: I'm going to start backing up my webhost to my zfs-based NAS, and i have a crazy idea. Let me know if it's crazy or if I'm missing something.

rdiff-backup is what you want: it's very resource-efficient with small changes, and it can dump a snapshot of any revision. Actually saving a shitton of standalone backups in a directory or something would become a nightmare very quickly.
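A rough sketch of the rdiff-backup workflow, with host and paths as placeholders:

```shell
# Mirror the remote web root; rdiff-backup keeps the current copy plus
# reverse increments for every prior run. Host and paths are placeholders.
rdiff-backup user@webhost::/var/www /tank/backups/www

# Restore a single file as it existed 10 days ago:
rdiff-backup -r 10D /tank/backups/www/index.html /tmp/index.html.10days
```

The `-r` (restore-as-of) flag takes either a relative age like `10D` or an absolute date, so any revision is reachable without touching the others.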
|
Or you just copy them to different directories on a dedupped pool, presto.
|
evil_bunnY posted: Or you just copy them to different directories on a dedupped pool, presto.

I don't know much about ZFS; is there any way to roll back to any increment in the backup history if you do this?
|
He means not using snapshots - i.e. every day you copy a backup to /tank/backups/YYYYMMDD, and you have dedup enabled on tank so that any files that are common between any of those backups are automatically deduped. To pull up any version from any day, you just go to the folder for the appropriate date.
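The dated-directory scheme needs only two pieces: turning dedup on once, and a dated copy each run. A sketch, where the pool name `tank` and the staging path are placeholders:

```shell
# One-time: enable dedup on the pool (or on a specific dataset).
# "tank" is a placeholder pool name.
zfs set dedup=on tank

# Each run: land the day's backup in its own YYYYMMDD directory.
# Identical blocks across days are stored only once by the pool.
cp -a /staging/site-backup "/tank/backups/$(date +%Y%m%d)"
```

Restoring any day is then just a plain copy out of the matching directory.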
|
evil_bunnY posted: Or you just copy them to different directories on a dedupped pool, presto.

I didn't even know ZFS had this feature! File-level dedup is exactly what I'm looking for, and dropping the files into different directories in the script would be trivial. Thanks.
|
Delta-Wye posted: I didn't even know ZFS had this feature! A file-level dedup is exactly what I am looking for, and dropping the files into different directories in the script would be trivial. Thanks.

Just remember it's block based and increases RAM utilization by a large amount.
|
PUNCHITCHEWIE posted:I don't know much about zfs, is there any way to roll back to any increment in the backup history if you do this?
|
adorai posted: just remember it's block based and increases RAM utilization by a large amount.

Shit, I should have continued reading. I read this (https://blogs.oracle.com/bonwick/entry/zfs_dedup):

quote: What is it?

quote: ZFS provides block-level deduplication because this is the finest granularity that makes sense for a general-purpose storage system. Block-level dedup also maps naturally to ZFS's 256-bit block checksums, which provide unique block signatures for all blocks in a storage pool, as long as the checksum function is cryptographically strong (e.g. SHA256).
|
Snapshots would work. The way snapshots work is that there's a hidden .zfs directory in the root of your filesystem; in there is a snapshot folder, and in there is one folder for each snapshot. You can go into that directory and pull your files out. I'm not sure how reverting snapshots works.
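A quick sketch of both operations; the dataset mountpoint and snapshot names below are examples, not anything from this thread:

```shell
# Browse available snapshots for a dataset mounted at /tank/webbackup:
ls /tank/webbackup/.zfs/snapshot/

# Pull a single file out of one snapshot; nothing is reverted:
cp /tank/webbackup/.zfs/snapshot/2012-05-14_0300/index.html /tmp/

# Reverting is the destructive path: zfs rollback only goes to the most
# recent snapshot, and rolling back further requires -r, which destroys
# every snapshot newer than the target.
zfs rollback -r tank/webbackup@2012-05-10_0300
```

So for the "grab an old revision" use case, copying out of .zfs/snapshot is the safe route; rollback is for when you genuinely want to discard everything since that point.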
|
Wait why wouldn't dedup work? It's a trivial amount of data.
|
evil_bunnY posted: Wait why wouldn't dedup work? It's a trivial amount of data.

It probably would. Alas, my version of ZFS is way out of date and doesn't support dedup; the cobbler's shoes and all that. Time to upgrade Solaris, or jump ship to FreeBSD, or something, I guess.
|
Dedup would be a lot more CPU- and memory-intensive than snapshots, so I'd want to explore snapshots a little more. When you talk about pulling snapshots out, what exactly are you looking to do? I ask because if you just need to copy files from a snapshot to the web host, that's easy and totally non-disruptive, as FISHMANPET says: you just go to the snapshot directory, find what you want, and copy it over. In your particular use case I don't think you'd ever want to actually restore a whole snapshot, so a lot of the situations that I have trouble wrapping my head around would never come up anyway.
|