Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.


GokieKS posted:

Got in my Norco RPC-4020 and the 120mm fan plate this week, and my file server migration is well underway. A few thoughts:

1. I tested out the stock 6 x 80mm fan configuration just to see how loud it is, and it was hilariously so. The 3 x 120mm fan plate might well be a must-have for this thing. Since I discovered that I actually had 3 120mm fans lying around - 2 NZXT fans which I had previously bought when I was trying to make a DIY "air conditioner" (don't ask) and the stock Corsair H60 fan - I figured I'd use those and see how they work. The Corsair is PWM, while the NZXTs are not, but all three are actually very quiet - quiet enough that I didn't order any 120mm fans and won't replace them unless I find they're insufficient as I add more drives. I did remove the 2 rear 80mm fans altogether and ordered 2 Arctic F8 PWMs, which should hopefully be fairly quiet as well.

2. I'd forgotten how terrible the mounting system for Intel's stock HSF is. On the upside, it's quiet enough under low load that I don't need to go out and buy another Cooler Master TX3 (which will fit in a 4U and is carried by my local Micro Center) immediately, and can decide whether I want to stick with that (not my first choice, as the use of 92mm fans is very limiting), find a 120mm tower HSF that will fit (there are only a few), or get a top-down cooler (the Noctua NH-C14 would probably work great, but is a tad overkill). I also used my old PC Power & Cooling 610W PSU, and despite it not being modular, the size of the case meant that cabling was really not much of an issue at all, even with 14 SATA cables. So I really don't regret not getting the 4220 at all.



3. Supermicro's IPMI is pretty great... when it works. Which, for me, it does not in any capacity other than the web GUI, because the other components are Java programs that refuse to work (either at all, or properly) on any platform I tried (OS X, W7 in a VM, WinXP in a VM, W7). It probably requires a specific older version of the JRE, but I'll be damned if I can be bothered to try a bunch of them to find out. The lesson, as always, is: Fuck Java. And I couldn't get Virtual Media to work for the life of me either, whether with my W7 machine or my old file server hosting the share. Along the way of trying to get it to work, I managed to screw up the web GUI too, and had to use a Linux live USB disk to run the IPMI configuration utility to reset it.

4. Installed ESXi 5.5 (had to customize it to get proper NIC drivers) before deciding that virtualizing my storage was probably a dumb idea all things considered (it just adds more places where things can get screwed up), so I nuked it and just installed Ubuntu LTS. Installing the necessary packages (Samba, Netatalk, ZFS on Linux) went without a hitch, as did importing my old zpool (roughly the sequence sketched below). So now it's chugging along doing a full scrub of the two 3x2TB RAIDZ1 vdevs, which should take about 14 hours.

5. Ordered 8 WD Red 3TBs - I was a bit hesitant to get them from Newegg, but they had a 10% off coupon that meant saving over $100 total, and that was hard to pass up. And maybe by ordering that many at once they'll ship them well? Not sure if 16GB of RAM will be sufficient once I add those 8 - I wanted to wait for 16GB sticks of ECC UDIMM to become widely available (at non-absurd prices) to bring it up to 48GB total, but who knows when that'll happen, so I might just have to get another two 8GB sticks. Also, still not sure if I want to add an SSD to use as a SLOG/ZIL device.
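(The import-and-scrub sequence on ZFS on Linux is roughly this - "tank" is a stand-in pool name, not necessarily the one used here:)

code:
# list pools visible on the attached disks, then import by name
zpool import
zpool import tank

# kick off a full scrub and check on its progress
zpool scrub tank
zpool status tank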

It looks like one or more of the clips on your heatsink/fan are rotated incorrectly.

Rev. Sam
Mar 2, 2014



i ordered 3 hard drives from amazon and they all came in their own boxes, inside boxes.

just wanted to let you know great transaction, will buy again, +5.

now i need some1 to come build my computer for me.

GokieKS
Dec 15, 2012

Mostly Harmless.


Don Lapre posted:

It looks like one or more of the clips on your heatsink/fan are rotated incorrectly.

The stock Intel HSF doesn't need the clips to be rotated to be installed - just pushed through. The rotation is to make it easier to prepare to push through or remove. My HSF is installed securely and working fine (CPU at 35C, fan spinning at ~1000RPM).

Also, it's coming off soon anyway.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.


GokieKS posted:

The stock Intel HSF doesn't need the clips to be rotated to be installed - just pushed through. The rotation is to make it easier to prepare to push through or remove. My HSF is installed securely and working fine (CPU at 35C, fan spinning at ~1000RPM).

Also, it's coming off soon anyway.

You rotate the clips to remove the heatsink. If they're rotated incorrectly they can pop out. There's a lock, and a specific position they're supposed to be rotated to.

https://www.youtube.com/watch?v=6abFUpPPCfI#t=123s

Don Lapre fucked around with this message at 00:38 on Mar 3, 2014

Legdiian
Jul 14, 2004


Arob1000 posted:

All Synology machines had a vulnerability that someone used to install a rootkit + Bitcoin miner on a ton of them. Updating to the newest DSM version is supposed to fix it: http://forum.synology.com/enu/viewtopic.php?f=1&t=81316 (also appears as a 'lolz' directory in /etc)

Does anyone know how the rootkit was installed? I want to know before I reinstall third-party packages, but Synology locked the discussion thread when they posted the announcement and fix.

I got hit with this on my 1513+ and I too would like to know how it was done. My first clue that something was amiss was that SABnzbd started spitting out errors saying "Unpacking failed, See log". When I saw "lolz" in the error log I knew it was a bad thing. I followed these directions from the Synology forums and was up and running in about 20 minutes without losing any data or packages.

quote:

OK, following the posts from Mark and Mads I finally managed to get my NAS back up and running without losing anything. All the apps and configurations are still there.
Here is how you do it:
1. Shut down the NAS
2. Remove all the hard drives from the NAS
3. Find a spare hard drive that you will not mind wiping and insert it into the NAS
4. Use Synology Assistant to find the NAS and install the latest DSM onto this spare hard drive (use the file DSM_DS410_3827.pat from Synology)
5. When the DSM is fully running on this spare hard drive, shut down the NAS from the web management console.
6. Remove the spare drive and insert ALL your original drives.
7. Power up the NAS and wait patiently. If all goes well after about a minute you will hear a long beep and the NAS will come online.
8. Use Synology Assistant to find the NAS. It should now be visible with the status "migratable".
9. From Synology Assistant choose to install DSM to the NAS, use the same file you used in step 4 and specify the same name and IP address as it was before the crash.
10. Because the NAS is recognized as "migratable", the DSM installation will NOT wipe out the data on either the system partition or the data partition.
11. After a few minutes, the installation will finish and you will be able to log in to your NAS with your original credentials.

In my case, after logging in, the DSM detected that the system partition had crashed and started repairing it automatically. After about 10-30 seconds it was all over and the system was back to a healthy state.

Good luck and thanks to all the members in this community that helped resolve this!

eddiewalker
Apr 28, 2004


Synology question: due to some shuffling around, I now only have a Volume 2. No Volume 1. That wouldn't be a problem, except a bunch of bootstrap/ipkg tools are hardcoded to install on Volume 1 and it's kind of a pain.

How can I rename my Volume 2 to "1" without messing DSM settings up? I hoped that rebooting would fix it automatically, but it didn't, nor did deleting the Volume 1 mount point via ssh.

Edit: found a big "Not even possible." on the syno forums. Looks like the only solution is shuffling disks around to make a new Volume 1 and move a few terabytes. Again.
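(One stopgap that gets suggested for the hardcoded-path problem - not a real rename, untested here, and no guarantee DSM tolerates it - is symlinking the old mount point to the surviving volume:)

code:
# assumes /volume1 is now just an empty leftover directory
rmdir /volume1
ln -s /volume2 /volume1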

eddiewalker fucked around with this message at 03:43 on Mar 3, 2014

madhatter160
Aug 6, 2004
Mad as a hatter

I'm looking for advice on a RAID configuration for a home NAS/media server I'm building. I will be installing FreeNAS for the OS and using ZFS for the file system. The motherboard and case can accommodate up to 10 internal HDDs. In order to keep the initial investment down, I am ordering two 4TB WD Red drives.

My plan was to create a vdev that mirrors the two drives to start. Then, when I want to expand capacity, I will add additional two-drive mirrored vdevs to the zpool.
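(In raw zpool terms - FreeNAS drives this through the GUI, and the device names here are illustrative - that grow-by-mirrors plan looks like:)

code:
# initial pool: a single two-way mirror vdev
zpool create tank mirror ada0 ada1

# later expansion: stripe another mirror vdev into the same pool
zpool add tank mirror ada2 ada3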

The alternative I was considering is RAIDZ2. The downside is that I'd have to purchase additional HDDs right now. The upside is having more redundancy within the vdev. I am not concerned with write speeds, but boosting read speeds would be nice since the server could be streaming to up to 4 devices at once.

Does anyone have any advice on which way I should go?

adorai
Nov 2, 2002

10/27/04 Never forget

Grimey Drawer

madhatter160 posted:

Does anyone have any advice on which way I should go?
Looking at Newegg prices, it looks like you will pay around $370 for a pair of 4TB Reds, which will give you 4TB usable in a mirrored vdev. Alternatively you could spend $300 and get 5x 1TB drives, which in raidz2 would give you 3TB usable. You can add 5 more later to double the size, or replace the existing disks one at a time to add capacity. You could also spend $360 to buy 4x 2TB drives, run a 2+1 raidz1 with a hot spare, and then add more 2+1 vdevs later. The hot spare does not provide the same level of protection from data loss as an additional parity disk, but can be shared among vdevs. When you are all done you could have 12TB of storage for $900, vs $1110 for 3 mirrored pairs of 4TB disks.
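(Working that arithmetic out, at the prices quoted:)

code:
2+1 raidz1 of 2TB drives   = 2 data drives x 2TB = 4TB usable per vdev
3 such vdevs               = 12TB usable
9 drives + 1 shared spare  = 10 drives x ~$90    = ~$900

3 mirrored pairs of 4TB    = 3 x 4TB             = 12TB usable
                             6 drives x ~$185    = ~$1110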

madhatter160
Aug 6, 2004
Mad as a hatter

adorai posted:

Looking at Newegg prices, it looks like you will pay around $370 for a pair of 4TB Reds, which will give you 4TB usable in a mirrored vdev. Alternatively you could spend $300 and get 5x 1TB drives, which in raidz2 would give you 3TB usable. You can add 5 more later to double the size, or replace the existing disks one at a time to add capacity. You could also spend $360 to buy 4x 2TB drives, run a 2+1 raidz1 with a hot spare, and then add more 2+1 vdevs later. The hot spare does not provide the same level of protection from data loss as an additional parity disk, but can be shared among vdevs. When you are all done you could have 12TB of storage for $900, vs $1110 for 3 mirrored pairs of 4TB disks.

Thanks for the reply. There are a lot of articles against raidz1 (RAID5) due to the vulnerability to a second disk failure during a rebuild. So, I'm a bit leery of going that route. It seems like doing the 5x raidz2 is going to give me the most storage and greater redundancy inside of a vdev. How concerned should I be about losing a third HDD during a rebuild?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

madhatter160 posted:

Thanks for the reply. There are a lot of articles against raidz1 (RAID5) due to the vulnerability to a second disk failure during a rebuild. So, I'm a bit leery of going that route. It seems like doing the 5x raidz2 is going to give me the most storage and greater redundancy inside of a vdev. How concerned should I be about losing a third HDD during a rebuild?
It's possible, but not particularly likely. You really need to balance it out against what you're storing. Chances are exceptionally high that if you're mostly storing media (which can be replaced--it may be obnoxious to re-download/rip/whatever it, but it's not like it's gone forever), the cost of anything higher than a reasonably sized Z1 or Z2 is overkill. Doubly so if you have your personal documents and other hard-to-replace stuff backed up elsewhere (cloud, off-site, etc. RAID is not a backup, yadda yadda). Now, if it's mission-critical files that absolutely cannot be replaced, or the time to recover them from backups would be unacceptable, then sure, maybe worry about it.

tl;dr for a home-server scenario you're way over-thinking HDD loss.

DrDork fucked around with this message at 02:34 on Mar 4, 2014

adorai
Nov 2, 2002

10/27/04 Never forget

Grimey Drawer

madhatter160 posted:

Thanks for the reply. There are a lot of articles against raidz1 (RAID5) due to the vulnerability to a second disk failure during a rebuild. So, I'm a bit leery of going that route. It seems like doing the 5x raidz2 is going to give me the most storage and greater redundancy inside of a vdev. How concerned should I be about losing a third HDD during a rebuild?
First up, the concern you mention is really about an Unrecoverable Read Error (URE). The chances of getting a URE on a rebuild of a 2+1 array are pretty slim, but not zero. Luckily, since I am going to assume that most of your data is not irreplaceable, the magic of ZFS is that if you do happen to get a URE it will continue to rebuild, you will potentially have a few corrupted files, and you will know which ones. If you go with a raidz2 array, you should not be concerned about data loss until you get to much larger raid group sizes.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



My server has drives in a mdadm array and ZFS arrays. I'm about to move them all to a new motherboard/CPU/RAM.

I think I know this already, but I want to confirm:

ZFS and mdadm can handle the fact that they'll be plugged in to different ports, correct?

Is there any feature that I need to confirm is enabled for either tech to ensure that they figure out what's what after the move?

Also, the current system only has 4 GB of memory. Is there anything I need to configure to help ZFS take advantage of the 12 GB in the new system?

evol262
Nov 30, 2010
#!/usr/bin/perl

Thermopyle posted:

My server has drives in a mdadm array and ZFS arrays. I'm about to move them all to a new motherboard/CPU/RAM.

I think I know this already, but I want to confirm:

ZFS and mdadm can handle the fact that they'll be plugged in to different ports, correct?

Is there any feature that I need to confirm is enabled for either tech to ensure that they figure out what's what after the move?

Also, the current system only has 4 GB of memory. Is there anything I need to configure to help ZFS take advantage of the 12 GB in the new system?

ZFS handles this gracefully.

mdadm not so much, sometimes. It'll probably come up as md127 or something, and you'll need to go through the "mdadm --detail --scan >> /etc/mdadm.conf" bit, then it should be fine.

Nothing you have to do to tell ZFS to eat more memory.
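(Spelled out a bit, with standard mdadm tooling - array names and config paths vary by distro:)

code:
# after the move, see what the kernel auto-assembled (often md127)
cat /proc/mdstat

# regenerate the array definitions and append them to the config
mdadm --detail --scan >> /etc/mdadm.conf

# on Debian/Ubuntu the file is /etc/mdadm/mdadm.conf instead, and
# the initramfs should be rebuilt afterwards: update-initramfs -u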

SamDabbers
May 26, 2003



Fallen Rib

Thermopyle posted:

My server has drives in a mdadm array and ZFS arrays. I'm about to move them all to a new motherboard/CPU/RAM.

I think I know this already, but I want to confirm:

ZFS and mdadm can handle the fact that they'll be plugged in to different ports, correct?

Is there any feature that I need to confirm is enabled for either tech to ensure that they figure out what's what after the move?

Also, the current system only has 4 GB of memory. Is there anything I need to configure to help ZFS take advantage of the 12 GB in the new system?

Both mdadm and ZFS use UUIDs to determine which drives are part of a particular array, so no, neither cares which physical port a drive is plugged into.

You should do a zpool export of your pools before moving them to the new system. If you forget, you'll have to use zpool import -f to bypass the complaint that they weren't exported before the move, but it won't actually hurt anything.

I'm not sure about ZFS on Linux, but on FreeBSD/IllumOS it'll automatically use all the RAM you can throw at it. IIRC the ARC auto-sizing on Linux needs manual configuration since it's not integrated with the rest of the kernel caches. Google should help on this one.
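(For what it's worth, on ZFS on Linux the usual knob is the zfs_arc_max module parameter - the 8GiB figure here is just an example for a 12GB box:)

code:
# cap the ARC at 8GiB (8 * 1024^3 bytes)
echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf

# or set it live without a reboot
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max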

SamDabbers fucked around with this message at 16:54 on Mar 4, 2014

sleepy gary
Jan 11, 2006



I wish I had caught the fact that FreeNAS was setting aside 2GB per disk for swap space. It's a trivial loss of storage, but it bothers me because the system will never need anywhere close to 8GB of swap.

Tamba
Apr 5, 2010



What's the process for reinstalling FreeNAS? Can I just export the config, set up a new flash drive and load the config from there, or do I have to export the ZFS pool first?
I'm currently in a weird situation where an upgrade failed, and now it's still working but can't be upgraded any further. I'm hoping a reinstall can fix that.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Both. Export the zpool, back up your config file, wipe and reinstall/upgrade, load the config file, import the zpools.
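(Roughly, with "tank" as a stand-in pool name - the config save/restore and the import both have equivalents in the FreeNAS web GUI:)

code:
# 1. save the config from the web GUI (under the System settings)
# 2. cleanly detach the pool before wiping the flash drive:
zpool export tank
# 3. write the new FreeNAS image, boot it, upload the saved config
# 4. re-attach the pool (or use the GUI's volume auto-import):
zpool import tank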

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



evol262 posted:

ZFS handles this gracefully.

mdadm not so much, sometimes. It'll probably come up as md127 or something, and you'll need to go through the "mdadm --detail --scan >> /etc/mdadm.conf" bit, then it should be fine.

Nothing you have to do to tell ZFS to eat more memory.

What's the appropriate way to handle this if I boot off of this mdadm array?

To be clear, this isn't a new OS install that I'm wanting to use an existing array with. I'm going to move the array that the OS is installed on, into a new system.

evol262
Nov 30, 2010
#!/usr/bin/perl

Thermopyle posted:

What's the appropriate way to handle this if I boot off of this mdadm array?

To be clear, this isn't a new OS install that I'm wanting to use an existing array with. I'm going to move the array that the OS is installed on, into a new system.

Booting should be OK. You may have problems when it comes time to pivot root if the devices are different. mdadm is OK with drive ordering changing, but is not OK with /dev/sda,/dev/sdb,/dev/sdc changing to /dev/sdd,/dev/sde,/dev/sdf. In the latter case, it'll fail to assemble, mdX won't start, and you'll have to use a recovery CD (mdadm --assemble --scan && mount /dev/mdX /mnt/recover && mdadm --detail --scan >> /mnt/recover/etc/mdadm.conf && reboot).
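(That one-liner, unpacked - device names and mount point are illustrative:)

code:
# from a live/recovery CD:
mdadm --assemble --scan      # find and assemble the array anyway
mount /dev/md0 /mnt/recover  # mount the root filesystem from it
# persist the new array layout into the installed system's config
mdadm --detail --scan >> /mnt/recover/etc/mdadm.conf
reboot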

Comatoast
Aug 1, 2003


Are there any strong opinions about purchasing refurbished hard drives on Amazon? The 3TB WD Reds are $10 less, and no state sales tax for me, if I go with the Amazon-fulfilled refurb.

SamDabbers
May 26, 2003



Fallen Rib

Comatoast posted:

Are there any strong opinions about purchasing refurbished hard drives on Amazon? The 3TB WD Reds are $10 less, and no state sales tax for me, if I go with the Amazon-fulfilled refurb.

Only $10? I'd probably only go for it if I was buying several and the warranty was the same as a new drive. I don't think it's worth it otherwise, especially without the same warranty.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell



Thanks for the help SamDabbers and evol262.

I didn't have a thing to worry about. Moved drives to new motherboard. Booted. Done.

Comatoast
Aug 1, 2003



It's more about keeping the great state of Texas's grubby hands as far away as possible. A matter of principal, if you will.

IOwnCalculus
Apr 2, 2003





Thermopyle posted:

Thanks for the help SamDabbers and evol262.

I didn't have a thing to worry about. Moved drives to new motherboard. Booted. Done.

This is why software RAID is awesome.

SamDabbers
May 26, 2003



Fallen Rib

Comatoast posted:

It's more about keeping the great state of Texas's grubby hands as far away as possible. A matter of principal, if you will.

Well if it's a matter of principle, then you should buy them even if they cost more!

If it's a matter of principal then you may end up losing more value in the end if your refurb drive craps out and you're just out of the (e.g.) 90-day warranty.

D. Ebdrup
Mar 13, 2009



Please be aware that if the number of non-parity drives in a vdev doesn't divide evenly into the record size (recordsize defaults to 128KB but is tunable per dataset - and has to follow the binary value progression), you may get unfortunate results - the formula is something like: recordsize / (nr_of_drives - parity_drives) = maximum variable stripe size. So if you have 5 drives in raidz2, you will end up with a repeating decimal - and storing that in 512-byte or 4096-byte sectors will lead to performance problems both at write and at read.
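(A quick worked example of that formula - the arithmetic is mine:)

code:
5-drive raidz2: 5 - 2 = 3 data drives
128KiB / 3 = 131072 / 3 ≈ 43690.67 bytes per data drive
43690.67 / 4096 ≈ 10.67 -> not a whole number of 4K sectors,
so each drive's share of a full record can't land on clean
sector boundaries without padding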


The more time I spend looking things up for the tips and notes I post from time to time (even when, as now, I can't find the source again), the more convinced I become that ZFS is arcane magic that involves blood sacrifices and virgins, possibly of the goat variety.

D. Ebdrup fucked around with this message at 23:50 on Mar 5, 2014

Ninja Rope
Oct 22, 2005

Wee.


I've posted about this before in the thread, but the performance problems aren't that bad and would only show up under really specific workloads (lots of fsync'd writes of a size that doesn't match the number of data drives * stripe size), and even then the penalty is that they'd perform as if the write were one stripe larger in size.

This is definitely something to be avoided if you're going to hit that use case (e.g. a write-heavy, ACID-compliant database), but for most home NASes it will never be noticeable.

Bob Morales
Aug 18, 2006

I love the succulent taste of cop boots

What the flying fuck is a 2.5TB HD?

Got one from WD as a refurb in exchange for a busted 2.0TB drive. Is it a 3.0TB drive with half a busted platter?

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.


Bob Morales posted:

What the flying fuck is a 2.5TB HD?

Got one from WD as a refurb in exchange for a busted 2.0TB drive. Is it a 3.0TB drive with half a busted platter?

It's a drive with 4 640GB platters.

FCKGW
May 21, 2006

aaaaaaaaaa
AAAAAAAAAAA
HHHHHHHHH!!!!!!!!




Bob Morales posted:

What the flying fuck is a 2.5TB HD?

Got one from WD as a refurb in exchange for a busted 2.0TB drive. Is it a 3.0TB drive with half a busted platter?

HDD platters come in 500GB, 640GB, 1TB and 1.25TB sizes. They're then "destroked" to fit whatever capacity your drive is supposed to be.

Minty Swagger
Sep 8, 2005

Ribbit Ribbit Real Good


Bob Morales posted:

What the flying fuck is a 2.5TB HD?

Got one from WD as a refurb in exchange for a busted 2.0TB drive. Is it a 3.0TB drive with half a busted platter?

Hahah, I got the exact same thing a while back from an RMA! Unfortunately it threw my array for a spin, and I was actually kind of pissed they didn't give me an identically sized disk.

GokieKS
Dec 15, 2012

Mostly Harmless.


HDDs from Newegg arrived. While my 8 drives did in fact come individually in smaller boxes, 1 drive each, instead of the styrofoam holders (which probably require buying 12 drives?), they actually seem pretty well packaged, with a pretty rigid air-bubble holder that keeps the drive from moving at all within the small box:



Time to test and hope they're all good!

E: Well, one of them was DOA. Makes a high-pitched buzzing noise and isn't being detected by the system. The rest were detected properly, and the SMART data looks good. Now running badblocks on them; hopefully I'll only have to RMA the one drive. Might just request a refund and pick up a drive from Micro Center since they have them for $125 this month - paying the sales tax is probably worth not having to wait for the new drive to arrive.
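(For anyone following along, a destructive badblocks pass on a brand-new, empty drive looks like this - device name illustrative, and -w destroys any data on it:)

code:
# four-pattern destructive write test, with progress (-s) and
# verbose output (-v)
badblocks -wsv /dev/sdb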

GokieKS fucked around with this message at 20:40 on Mar 6, 2014

Moey
Oct 22, 2010

I LIKE TO MOVE IT


TIMG that shit, por favor.

Megaman
May 8, 2004
I didn't read the thread BUT...

DrDork posted:

Both. Export the zpool, backup your config file, wipe and reinstall/upgrade, load the config file, import the zpools.

Two questions: if I want to upgrade FreeNAS and don't care about the previous config, I assume I don't need to back it up? And also, what is the purpose of exporting a zpool? Doesn't FreeNAS have auto-importing of pools/volumes?

MMD3
May 16, 2006

Montmartre -> Portland

About to pick up a new NAS and dump my eSATA/USB3 Drobo on craigslist. It took days to get my files off of the drobo onto an external drive.

Is the Synology 1513+ still the most recommended 5-bay NAS? I want to do Smart RAID or whatever they call it so I can add drives later. I'm mostly going to be housing a bunch of raw photography that I access w/ Lightroom and then video media files.

Just want to make sure I'm still up to date before I pull the trigger.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Megaman posted:

Two questions: if I want to upgrade FreeNAS and don't care about the previous config, I assume I don't need to back it up? And also, what is the purpose of exporting a zpool? Doesn't FreeNAS have auto-importing of pools/volumes?
(1) Correct. Just upgrade and start from scratch (which is probably a better plan than importing an old config, anyhow, if you don't have anything fancy set up).
(2) It's just being nice to the zpool. Yes, FreeNAS/FreeBSD can force an import of a not-correctly-exported zpool. In your case not exporting is unlikely to harm anything, since all it really does is force-flush anything that needed to be written to the pool and gracefully remove it from the OS. Since you're unlikely to be writing stuff to it when you pull it, and you don't care about the OS you're leaving behind, it really doesn't matter. Good habit, though.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.


MMD3 posted:

About to pick up a new NAS and dump my eSATA/USB3 Drobo on craigslist. It took days to get my files off of the drobo onto an external drive.

Is the Synology 1513+ still the most recommended 5-bay NAS? I want to do Smart RAID or whatever they call it so I can add drives later. I'm mostly going to be housing a bunch of raw photography that I access w/ Lightroom and then video media files.

Just want to make sure I'm still up to date before I pull the trigger.

Yeah, as long as you don't need transcoding the 1513+ is great, and it can be expanded with the DX513 to a 10 or 15 bay.

Novo
May 13, 2003

Stercorem pro cerebro habes

Soiled Meat

ZFS on Linux appears to be shaping up quite nicely. I got sick of running SmartOS at home and decided to install Debian instead. I had forgotten to export the pool but a force import worked just fine.

MrMoo
Sep 14, 2000



Conversely, Btrfs is almost ready too, and going by the usual articles it should be better than ZFS.

sellouts
Apr 23, 2003



MMD3 posted:

About to pick up a new NAS and dump my eSATA/USB3 Drobo on craigslist. It took days to get my files off of the drobo onto an external drive.

Is the Synology 1513+ still the most recommended 5-bay NAS? I want to do Smart RAID or whatever they call it so I can add drives later. I'm mostly going to be housing a bunch of raw photography that I access w/ Lightroom and then video media files.

Just want to make sure I'm still up to date before I pull the trigger.

I got the 1813+ locally via eBay (make offer, offer cash + pickup, cancel ebay auction) for the cost of a 1513+. So you may want to keep your eyes open. Brand new in sealed box -- guy got it in exchange for some IT work he did and ended up flipping it.
