derk posted: You can boot off SSD for FreeNAS as well, but I am a new FreeNAS-to-Ubuntu-server convert. It's been a while since I had to type that many commands, but I still had to type quite a bit in FreeNAS too, since the plugin system was so far behind on updated versions of said plugins that it was just easier to do manual jails and keep the stuff updated that way.

Yeah, FreeNAS will boot from an SSD just fine, but there's not really any benefit to doing so, since it doesn't write anything back to the SSD aside from system updates and config files. And 100% agreed across the board; same reason (plus the whole Corral clusterfuck) I moved off of FreeNAS.
FreeBSD is solid; it's different from Linux, but overall it's fine. The big problem for me is that it doesn't support Docker natively, and I am not a big fan of jails. But that's just my personal opinion.
Matt Zerella posted: FreeBSD is solid; it's different from Linux, but overall it's fine. The big problem for me is that it doesn't support Docker natively, and I am not a big fan of jails. But that's just my personal opinion.

With Ubuntu you can do containers via LXD/LXC: basically just like jails, sharing the kernel and getting their own IP addresses like jails in FreeNAS. It takes some time to set up, but it is great.
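If anyone wants to try it, the moving parts look roughly like this on a stock Ubuntu box; the container name, image version, and parent NIC below are just placeholder assumptions, not anyone's exact setup:

    # one-time install and interactive setup (storage backend, default bridge)
    sudo snap install lxd
    sudo lxd init

    # launch a container; "files1" and the 18.04 image are just examples
    lxc launch ubuntu:18.04 files1

    # optional: give it its own address on the LAN, jail-style, via macvlan
    # (parent NIC name is an assumption; note the host can't reach a macvlan container directly)
    lxc config device add files1 eth0 nic nictype=macvlan parent=enp3s0 name=eth0
    lxc restart files1

    # get a shell inside it
    lxc exec files1 -- bash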
derk posted: With Ubuntu you can do containers via LXD/LXC: basically just like jails, sharing the kernel and getting their own IP addresses like jails in FreeNAS. It takes some time to set up, but it is great.

Also true, but I just love docker-compose so much. I'm honestly annoyed I'm tethered to Unraid now. I'd probably be on an LTS Ubuntu with btrfs now that the RAID bug is apparently fixed. E: welp, looks like the bug isn't fixed.
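For anyone who hasn't tried compose, the appeal is that one YAML file describes the whole container; a rough sketch (the service, image, timezone, and host paths here are just illustrative assumptions):

    # write out a minimal compose file -- paths and timezone are placeholders
    cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      plex:
        image: linuxserver/plex
        restart: unless-stopped
        network_mode: host
        environment:
          - TZ=America/New_York
        volumes:
          - /srv/plex/config:/config
          - /tank/media:/media
    EOF

    # bring it up, and later update in place
    docker-compose up -d
    docker-compose pull && docker-compose up -d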
You might want to look at Unraid; I have been really happy with it. It also makes it really easy to expand your pool, supports cache drives, and can have up to two parity drives.
Nthing Unraid, really happy with my setup. What motherboard did you put that 2200G on? Any additional SATA cards required?
I worry about Unraid lately because it's built on Slackware, which is apparently a one-man show now and not doing well. But yeah, it's the best hands-off roll-your-own NAS option IMO, unless you care about ZFS.
Matt Zerella posted: I worry about Unraid lately because it's built on Slackware, which is apparently a one-man show now and not doing well. But yeah, it's the best hands-off roll-your-own NAS option IMO, unless you care about ZFS.

How crazy would it be to switch distribution bases? Curious if there are any precedents for that. Since it's USB-stick based, a migration tool might be as easy as: plug the new stick into the Unraid server, it copies all the settings over, and you boot from that, with the old stick as a backup.
On FreeNAS, if I switch from the 11-STABLE train to the 11.2-STABLE train to get the 11.2 BETA 2, what happens when/after 11.2 is released? I've got a little time coming up to mess with my setup and redo my jails, but I just want to get up and running with the new UI, not be continually running betas.
priznat posted: How crazy would it be to switch distribution bases? Curious if there are any precedents for that. Since it's USB-stick based, a migration tool might be as easy as: plug the new stick into the Unraid server, it copies all the settings over, and you boot from that, with the old stick as a backup.

If it's ZFS and you configured it to use the whole disk instead of a partition at setup, I really see no issues with exporting and then reimporting, as long as you're not trying to import a higher ZFS level (version?) into a system with a lower ZFS level. You can test this by creating a small box with the distro you want to go to, taking out one of your parity drives or a hot spare, and seeing if ZFS can read the disk and device info.
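Something like this, assuming a pool named tank (the name is a placeholder) and ZFS installed on the test box:

    # on the old system: cleanly export the pool (or just pull the test disk)
    zpool export tank

    # on the test box: list what ZFS can see without importing anything;
    # a single disk out of a raidz won't actually import on its own, but showing
    # up here proves the labels and device info are readable
    zpool import

    # if the whole pool is present and its feature flags are supported, bring it in
    # (-f only if it wasn't cleanly exported, e.g. disks pulled from a live system)
    zpool import -f tank
    zpool status tank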
EVIL Gibson posted: If it's ZFS and you configured it to use the whole disk instead of a partition at setup, I really see no issues with exporting and then reimporting, as long as you're not trying to import a higher ZFS level (version?) into a system with a lower ZFS level.

I would do a zpool scrub just to have it go over every block. Does zpool have a read-only mode where you could import it, scrub it, and look at the output before having it "fix" anything? (Just in case every sector is "bad", as it were.)
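There is a read-only import, for what it's worth; a sketch with a placeholder pool name, and with the caveat that (as I understand it) a scrub wants the pool imported read-write, since it repairs blocks as it goes:

    # import without allowing any writes, just to look around
    zpool import -o readonly=on tank
    zpool status -v tank
    zfs list -r tank

    # to actually scrub, re-import read-write
    zpool export tank
    zpool import tank
    zpool scrub tank
    zpool status -v tank    # shows scrub progress and any errors found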
Unraid doesn't use ZFS.
EVIL Gibson posted: If it's ZFS and you configured it to use the whole disk instead of a partition at setup, I really see no issues with exporting and then reimporting, as long as you're not trying to import a higher ZFS level (version?) into a system with a lower ZFS level.

Vaguely related: if you use a whole disk in ZFS and then don't fully wipe it afterwards before using it in something else, you can end up with a disk where GParted and Clonezilla see it as a single ZFS member, while parted/fdisk/Ubuntu all natively see the DOS MBR and the individual partitions on it.
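That happens because ZFS writes labels at both the start and the end of a member device, so a quick repartition misses the back copies. A sketch of a fuller wipe (the device name is a placeholder, and all of this is destructive):

    # clear the ZFS labels from a device that is no longer part of any pool;
    # point it at the disk or at the ZFS data partition (often /dev/sdX1 when
    # ZFS partitioned the whole disk itself)
    zpool labelclear -f /dev/sdX

    # belt and braces: remove remaining filesystem/RAID signatures and GPT/MBR structures
    wipefs -a /dev/sdX
    sgdisk --zap-all /dev/sdX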
The Milkman posted: On FreeNAS, if I switch from the 11-STABLE train to the 11.2-STABLE train to get the 11.2 BETA 2, what happens when/after 11.2 is released? I've got a little time coming up to mess with my setup and redo my jails, but I just want to get up and running with the new UI, not be continually running betas.

If you mean how you can change from Nightly to Stable, you can do either of two things:

(1) Revert back to 11-STABLE, or whatever your last non-nightly build was, and then upgrade to 11.2-STABLE.

(2) Force a direct move from the nightly train to a stable train via the CLI:

    # freenas-update -v -T FreeNAS-11.2-STABLE update

But since 11.2-BETA2 is actually released under the 11.2-STABLE train, you shouldn't have to do anything at all: when the RC and final release versions drop, they should drop in the same 11.2-STABLE train, and you'd just upgrade to them directly like you would any other dot-release.

DrDork fucked around with this message at 00:51 on Aug 10, 2018
Matt Zerella posted: Also true, but I just love docker-compose so much. I'm honestly annoyed I'm tethered to Unraid now. I'd probably be on an LTS Ubuntu with btrfs now that the RAID bug is apparently fixed.

I'll pimp my solution again: Proxmox for ZFS and an LXC/KVM GUI, then an LXC container with the appropriate AppArmor settings running Docker, with all the containers in compose. I use LXC for lpd, Plex, and the Docker host, and KVM for Windows 10 and Server 2016.
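For reference, the usual recipe for Docker-in-LXC on Proxmox looks something like this; the container ID is an example, the exact keys vary by Proxmox/LXC version, and this is the blunt unconfined variant rather than a tailored AppArmor profile, so treat it as a starting point only:

    # loosen the container enough for docker to run inside it
    cat >> /etc/pve/lxc/101.conf <<'EOF'
    lxc.apparmor.profile: unconfined
    lxc.cgroup.devices.allow: a
    lxc.cap.drop:
    EOF

    # newer Proxmox releases wrap the same idea in a flag instead:
    #   pct set 101 --features nesting=1,keyctl=1

    # docker then installs inside the container like on any other Debian/Ubuntu box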
Let's say I'm using ZFS. I have mainpool/myfs@001 and then I do a send/recv to copy it to backuppool. At some future date I now have mainpool/myfs@003. How do I send the new snapshot to backuppool such that it knows they share a block history, etc.? Do I still need to have myfs@001 on the main pool (a shared point in history), or is it sufficient to have myfs@003 and it knows they belong to the same filesystem? Is there some kind of global UUID assigned to a filesystem at init time being used to match this?

Paul MaudDib fucked around with this message at 05:29 on Aug 11, 2018
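For reference, the incremental form looks like this, using the names from the post and assuming the common snapshot (@001 here) still exists on both pools; a sketch, not gospel:

    # initial copy, as already done
    zfs send mainpool/myfs@001 | zfs recv backuppool/myfs

    # later: send only the blocks that changed between the two snapshots;
    # this needs the common base snapshot (@001) to still exist on BOTH pools
    zfs send -i mainpool/myfs@001 mainpool/myfs@003 | zfs recv backuppool/myfs

    # -I sends every intermediate snapshot as well; -R sends a full replication stream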
priznat posted: Nthing Unraid, really happy with my setup.

Gigabyte GA-AX370-Gaming motherboard: an ATX board with 8 SATA ports and a few PCIe slots as well. I'm only running 4 hard drives, and I only have the SSD in there because I don't have another use for it right now. All of that is packed into an Antec 300 case. I'll have to check out Unraid. If you get that motherboard, though, prepare to use a loaner CPU to update the BIOS prior to using any Ryzen 2000-series chips.
MonkeyFit posted: Gigabyte GA-AX370-Gaming motherboard: an ATX board with 8 SATA ports and a few PCIe slots as well. I'm only running 4 hard drives, and I only have the SSD in there because I don't have another use for it right now.

Good tip. That's a nice thing about a lot of Asus boards: you can update the BIOS right off a USB stick with no CPU in the socket.
priznat posted: Good tip. That's a nice thing about a lot of Asus boards: you can update the BIOS right off a USB stick with no CPU in the socket.

Fuck. I was deciding between this one and an Asus, but went with this one to save $30. If I'm not able to return this processor, then I'll be out $30 over just going with the Asus in the first place.
MonkeyFit posted: Fuck. I was deciding between this one and an Asus, but went with this one to save $30. If I'm not able to return this processor, then I'll be out $30 over just going with the Asus in the first place.

Actually, looking at the Asus models, it seems that function (BIOS Flashback) was only on the HEDT motherboards like Intel X99, etc., so the X370/X470 boards probably wouldn't have it anyway. There was a button on the back panel above a USB port where you insert the USB key and press it, and the BIOS gets updated. I had to update an X99 board to support Broadwell, and it worked great once I found out that was the issue.
Hughlander posted: I'll pimp my solution again: Proxmox for ZFS and an LXC/KVM GUI, then an LXC container with the appropriate AppArmor settings running Docker, with all the containers in compose.

You run Docker inside LXC? I might have to try that.
Paul MaudDib posted: Let's say I'm using ZFS. I have mainpool/myfs@001 and then I do a send/recv to copy it to backuppool. At some future date I now have mainpool/myfs@003. How do I send the new snapshot to backuppool such that it knows they share a block history, etc.?

As to your other question, everything in ZFS is GUIDs; they can be found by using zdb (-C for the pool, but datasets, snapshots, and everything else have them as well). Another interesting use of zdb is the -h switch.

Mr Shiny Pants posted: You run Docker inside LXC? I might have to try that.

EDIT: That's how it's theoretically possible for someone to write an API-compatible version of Docker or Kubernetes that can use FreeBSD jails with SysVIPC and VIMAGE separation (some have tried, but I don't think they ever succeeded, because Docker and Kubernetes are fast-moving targets unless you have a big team).

D. Ebdrup fucked around with this message at 12:29 on Aug 11, 2018
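In case it's useful, a quick sketch of where those GUIDs show up; pool and dataset names are placeholders:

    # pool-level config, including the pool GUID and each vdev's GUID
    zdb -C tank

    # datasets and snapshots expose theirs as a read-only property
    zfs get guid tank/myfs
    zfs get guid tank/myfs@001

    # pool command history (the -h switch mentioned above)
    zdb -h tank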
I have a shitload of DDR3 ECC RAM and an SC846 chassis. What's the most "recent" motherboard/Xeon CPU I can put in the chassis?
Don't they fit micro-ATX, ATX and EATX? Pretty sure you can put just about any motherboard you care to mention in that, maybe with the exception of Denverton.
Wow, there are many variants of the SC846. What do you have, the 4U one with 24 front drive bays? Finding a Supermicro mobo that'll take DDR3 RDIMMs will run about $300 on eBay, like an X9SRi-F plus a Xeon E5-2630v2 2.60GHz six-core (those procs are cheap, ~$60-80). You could even just put it in an ATX case and give it a Noctua tower cooler to get rid of that damn fan noise. But if you plan to reuse that chassis, find its exact variant and look it up on Supermicro's support site. Get the dimensions and/or the list of compatible mobo types.

Tapedump fucked around with this message at 02:57 on Aug 13, 2018
Where can I find a crash course in iostat and the ZFS stat commands? I have a gut feeling that my RAID is underperforming, but I don't know how to measure it. It saturates my 1G link, but so what. I have 2 zpools, both RAIDZ1: one is 10 × 8TB Reds, the other is 6 × 4TB Reds. The 4TBs are directly connected to a Supermicro MBD-X10SL7-F-O that has an LSI 2308 in IT mode. The 8TBs are attached via an LSI LSI00188 (a PCI Express low-profile SATA/SAS 9200-8e controller card, also flashed to IT mode) and then a Sans Digital TR8X6G JBOD enclosure. Again, everything works, everything 'seems' fine. It just 'feels' slow. But if I can't measure it, I can't explain it. I'd assume that since everything is 6Gb/sec, I should be able to do a dd and time how long it takes to write to each of the arrays, and then maybe read from one and write to the other?
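For anyone following along, a rough starting point; the pool name and paths are placeholders, and note that dd from /dev/zero will flatter you if compression is enabled on the dataset:

    # per-vdev and per-disk throughput/IOPS for a pool, refreshed every second
    zpool iostat -v tank8tb 1

    # per-device utilisation, queue depth and latency (from the sysstat package)
    iostat -xm 1

    # crude sequential-write test against one pool; fdatasync makes dd wait for
    # the data to actually reach disk instead of just the ARC/page cache
    dd if=/dev/zero of=/tank8tb/ddtest bs=1M count=8192 conv=fdatasync
    rm /tank8tb/ddtest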
Despite SATA3 being 6Gbps (750MBps), you'll be lucky if you can sustain more than 100MBps on spinning rust once you've exhausted its cache. Even SATA3 SSDs top out at 550MBps (which they've been capable of doing for a long time; the Intel 520 480GB that I bought new in Q1'12 can read and write at those speeds). If you're on FreeBSD, you can use 'systat -iostat 1' to get the individual drive I/O, which cumulatively adds up to the speed your pool is written at. It's also not inconsequential what CPU you have, since you're doing distributed parity calculations and checksumming, which presumably isn't being offloaded. What sort of speeds are you seeing for what workload (i.e. what's doing the reading/writing)?
If a RAID controller lists JBOD, does that mean it does IT mode? I run FreeNAS at home. I am looking at a used server auction which lists the server's RAID controller as "MegaRAID SAS 9261-8i LSI SAS2108 ROC." I've found old posts saying IT mode can't be flashed onto the MegaRAID 9261, and other posts saying the SAS2108 is flashable. Any goons know about this?
D. Ebdrup posted: Despite SATA3 being 6Gbps (750MBps), you'll be lucky if you can sustain more than 100MBps on spinning rust once you've exhausted its cache. Even SATA3 SSDs top out at 550MBps (which they've been capable of doing for a long time; the Intel 520 480GB that I bought new in Q1'12 can read and write at those speeds).

Sure, each drive can do 100MBps, but when it's striped across 5 or 9 of them I figured on higher. I'm on Debian ZFSOnLinux with a 6-core Xeon. And part of my challenge was getting actual good read/write numbers. I didn't save the stats, but I did a test of Duplicacy to see how it handles millions of files and large datasets by making a new dataset on the 10-drive zpool and backing up from the 6-drive zpool, and iowait was like 40% on the system while it was running. Which makes sense: it should be I/O-bound, not CPU-bound, when doing that, but it got me thinking that I need to look at this more.
You guys talking about personal cloud storage in this thread? I checked the old "RAID is not backup" thread and it's been archived.
JacksAngryBiome posted: If a RAID controller lists JBOD, does that mean it does IT mode? I run FreeNAS at home. I am looking at a used server auction which lists the server's RAID controller as "MegaRAID SAS 9261-8i LSI SAS2108 ROC."

JBOD != IT mode. JBOD is basically the worst form of RAID0 possible, with all the failure and none of the performance. I have a Supermicro SAS2108-based controller in my work lab. It won't take any IT-mode firmware, so I created a bunch of single-disk RAID0 arrays and put ZFS on top of them; don't do this.
IOwnCalculus posted: I have a Supermicro SAS2108-based controller in my work lab. It won't take any IT-mode firmware, so I created a bunch of single-disk RAID0 arrays and put ZFS on top of them; don't do this.

I did that too (well, actually my old boss did), and to add insult to injury, ZFS on Linux doesn't support TRIM, so it needs to be reformatted regularly. Even worse, the RAID card doesn't support TRIM either, so the drives have to be physically removed and wiped on another system!
IOwnCalculus posted: JBOD != IT mode. JBOD is basically the worst form of RAID0 possible, with all the failure and none of the performance.

What's the difference? One is raw device pass-through and the other is going through the RAID abstraction layers?
Perplx posted: I did that too (well, actually my old boss did), and to add insult to injury, ZFS on Linux doesn't support TRIM, so it needs to be reformatted regularly. Even worse, the RAID card doesn't support TRIM either, so the drives have to be physically removed and wiped on another system!

Is TRIM relevant on spindles? If so, maybe that's why the performance of this thing sucks balls.

H110Hawk posted: What's the difference? One is raw device pass-through and the other is going through the RAID abstraction layers?

Yeah. IT mode literally has the drive show up the same as if it were plugged into your motherboard's own SATA controller. IR mode with a bunch of RAID0 volumes means that your ZFS is not technically writing to the raw disk, but to what the RAID controller says is a raw disk. In theory this could cause some bad behaviors and possibly missed data corruption. In practice it probably doesn't matter *that* much, but if you're going to buy a controller, buy the right one. My lab will deal with this because I'm not putting a cent into it and that's the hardware I have. If something went shitty and I had to completely blow away the storage, it would just be a few hours of work to recover anyway.

IOwnCalculus fucked around with this message at 19:51 on Aug 15, 2018
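One practical place the difference shows up is SMART: with IT mode the OS just sees the disks, while behind a RAID0-per-disk setup you have to ask the controller to pass the queries through. A sketch (the device name and megaraid slot index are placeholders):

    # IT mode / plain HBA: the drive is a normal block device
    smartctl -a /dev/sda

    # behind a MegaRAID-style controller you address the physical slot explicitly
    smartctl -d megaraid,0 -a /dev/sda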
IOwnCalculus posted: Is TRIM relevant on spindles? If so, maybe that's why the performance of this thing sucks balls.

It doesn't matter for spindles, but for my all-flash array it matters a lot. Also, hard drives are just trash for random I/O if that's your workload: the best spinning rust gets about 200 IOPS, and a 970 EVO gets half a million.
I'm hoping someone can shine some light on this. I posted about it on reddit (unfortunately). I want to run bare-metal Hyper-V Server with a Server (GUI) VM. I have 2x 4TB drives for data storage that I want passed to the VM. The VM will maintain the shared folders, file downloads, etc. I don't know enough about Storage Spaces and Hyper-V disk management to determine the best approach. From all of the research I've done so far, it sounds like this may be the best approach, but it leaves a bunch of questions and ifs I have about it:

Hyper-V Server 2016 -> Create VHDX files equal to the size of the 4TB disks and save the VHDX files to those disks.
Create the Storage Pool on Hyper-V Server 2016 using the VHDX files.
Offline the resulting Storage Pool drive and configure the Server GUI VM to use the Storage Pool.
Online the Storage Pool on the VM as one drive (so let's say drive X:\).

Does this make sense? My experience so far has been that straight pass-through of the physical disks and then doing all of the pool creation, etc. in the VM is not only unsupported, it also broke entirely when I tried it. https://www.reddit.com/r/homelab/co..._best_practice/
IOwnCalculus posted: Is TRIM relevant on spindles?

For some SMR drives, yes.
Hughlander posted: Sure, each drive can do 100MBps, but when it's striped across 5 or 9 of them I figured on higher. I'm on Debian ZFSOnLinux with a 6-core Xeon. And part of my challenge was getting actual good read/write numbers. I didn't save the stats, but I did a test of Duplicacy to see how it handles millions of files and large datasets by making a new dataset on the 10-drive zpool and backing up from the 6-drive zpool, and iowait was like 40% on the system while it was running. Which makes sense: it should be I/O-bound, not CPU-bound, when doing that, but it got me thinking that I need to look at this more.

The best recommendation I have is to familiarize yourself with Linux's new tracing framework, eBPF (which Brendan Gregg of Solaris/DTrace fame has put a lot of work into getting useful information out of on behalf of Netflix, since their front-end servers run Linux and they had performance issues on them). In general, though, when it comes to benchmarks, I like to keep in mind the BUGS section of the diskinfo man page in FreeBSD. Are you sure what you're hitting isn't the performance loss associated with the wide stripe widths of RAIDZn? Also, is there a reason you created the pool layout like this, or is it just that it grew that way over time?

D. Ebdrup fucked around with this message at 00:05 on Aug 16, 2018
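For reference, the packaged bcc collection is the easy way into that eBPF tooling on Debian/Ubuntu; the package and tool names vary a bit by release, so treat these as assumptions:

    # Debian/Ubuntu packaging of the bcc tools (Brendan Gregg et al.)
    apt install bpfcc-tools

    # block I/O latency histograms, a per-I/O trace, and a per-disk "top"
    biolatency-bpfcc
    biosnoop-bpfcc
    biotop-bpfcc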
Star War Sex Parrot posted: For some SMR drives, yes.

These aren't those. In the category of $free.99, it's a pool of a bunch of ancient 1TB and 2TB SATA disks with 40-60k hours on them. It was originally two separate four-disk pools of two mirrors each; now it's one pool with four mirrors, and performance is slightly less shit. I'm actually slightly amazed none of the drives have taken a shit yet.
Star War Sex Parrot posted: For some SMR drives, yes.

Why is that? I thought everything related to SMR was handled by the drive's controller/firmware?