FreeBSD10 Stable + ZFS + PostgreSQL + SSD performance drop < 24 hours
trafdev
trafdev at mail.ru
Tue Jun 13 13:37:14 UTC 2017
> Tested on half a dozen machines with different models of SSDs
Do they all share the same motherboard (MB) model?
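In case it helps with comparing the boxes, on FreeBSD the board model is
usually visible straight from the SMBIOS data the loader exposes
(dmidecode from ports gives more detail):

# list the motherboard maker/product strings
kenv | grep smbios.planar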
I have a similar setup (an OVH Enterprise SP-128-S dedicated server with
128GB RAM, 480GB SSDs in a ZFS mirror, and a manually installed stock
FreeBSD 10.3 image):
robert at sqldb:~ % uname -a
FreeBSD xxx.xxx.xxx 10.3-RELEASE-p7 FreeBSD 10.3-RELEASE-p7 #0: Thu Aug
11 18:38:15 UTC 2016
root at amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64
robert at sqldb:~ % uptime
6:27AM up 95 days, 9:41, 1 user, load averages: 3.29, 4.26, 5.28
ZFS dataset created with:
zfs create -o recordsize=128k -o primarycache=all zroot/ara/sqldb/pgsql
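For reference (my addition, not part of the original setup), the
properties actually in effect on that dataset can be confirmed with:

# PostgreSQL's own page size is 8k, so recordsize is worth double-checking
zfs get recordsize,primarycache,compression,atime zroot/ara/sqldb/pgsql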
Custom parameter in /etc/sysctl.conf:
vfs.zfs.metaslab.lba_weighting_enabled=0
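The same knob can be read and flipped at runtime too; /etc/sysctl.conf
only applies it at boot:

# check the current value, then set it without rebooting
sysctl vfs.zfs.metaslab.lba_weighting_enabled
sysctl vfs.zfs.metaslab.lba_weighting_enabled=0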
robert at sqldb:~ % sudo dd if=/dev/urandom of=/ara/sqldb/pgsql/test.bin
bs=1M count=16000
16000+0 records in
16000+0 records out
16777216000 bytes transferred in 283.185773 secs (59244558 bytes/sec)
robert at sqldb:~ % dd if=/ara/sqldb/pgsql/test.bin of=/dev/null bs=1m
16000+0 records in
16000+0 records out
16777216000 bytes transferred in 33.517116 secs (500556670 bytes/sec)
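Note the write figure above is bounded by /dev/urandom, and with 128GB of
RAM a 16GB file can be read largely from ARC. A rough way (an assumption
on my part, not something I ran for the numbers above) to keep ARC mostly
out of the read measurement:

# cache only metadata for this dataset, re-run the read, then restore;
# blocks already sitting in ARC may still be served until they are evicted
zfs set primarycache=metadata zroot/ara/sqldb/pgsql
dd if=/ara/sqldb/pgsql/test.bin of=/dev/null bs=1m
zfs set primarycache=all zroot/ara/sqldb/pgsql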
robert at sqldb:~ % sudo diskinfo -c -t -v ada0
ada0
512 # sectorsize
480103981056 # mediasize in bytes (447G)
937703088 # mediasize in sectors
4096 # stripesize
0 # stripeoffset
930261 # Cylinders according to firmware.
16 # Heads according to firmware.
63 # Sectors according to firmware.
PHWA629405UP480FGN # Disk ident.
I/O command overhead:
time to read 10MB block 0.285341 sec = 0.014 msec/sector
time to read 20480 sectors 2.641372 sec = 0.129 msec/sector
calculated command overhead = 0.115 msec/sector
Seek times:
Full stroke: 250 iter in 0.016943 sec = 0.068 msec
Half stroke: 250 iter in 0.016189 sec = 0.065 msec
Quarter stroke: 500 iter in 0.022226 sec = 0.044 msec
Short forward: 400 iter in 0.018208 sec = 0.046 msec
Short backward: 400 iter in 0.019637 sec = 0.049 msec
Seq outer: 2048 iter in 0.066197 sec = 0.032 msec
Seq inner: 2048 iter in 0.054291 sec = 0.027 msec
Transfer rates:
outside: 102400 kbytes in 0.671285 sec = 152543 kbytes/sec
middle: 102400 kbytes in 0.640391 sec = 159902 kbytes/sec
inside: 102400 kbytes in 0.328650 sec = 311578 kbytes/sec
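When the slowdown actually hits, it may be worth watching the disks and
the pool directly with the stock tools, e.g.:

# per-provider latency and queue depth, sampled every second
gstat -p -I 1s
# per-vdev throughput for the pool
zpool iostat -v zroot 1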
On 06/10/17 09:25, Caza, Aaron wrote:
> Gents,
>
> I'm experiencing an issue where iterating over a PostgreSQL table of ~21.5 million rows (select count(*)) goes from ~35 seconds to ~635 seconds on Intel 540 SSDs. This is on a FreeBSD 10 amd64 stable kernel from January 2017. The SSDs are two drives in a ZFS mirrored zpool, and I'm using PostgreSQL 9.5.7.
>
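For anyone trying to reproduce the timing, the test presumably boils down
to something like this (the database and table names below are made up,
they are not in the original mail):

time psql -d proddb -c "SELECT count(*) FROM the_big_table;"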
> I've tried:
>
> * Using the FreeBSD10 amd64 stable kernel snapshot of May 25, 2017.
>
> * Tested on half a dozen machines with different models of SSDs:
>
> o Intel 510s (120GB) in ZFS mirrored pair
>
> o Intel 520s (120GB) in ZFS mirrored pair
>
> o Intel 540s (120GB) in ZFS mirrored pair
>
> o Samsung 850 Pros (256GB) in ZFS mirrored pair
>
> * Using bonnie++ to remove Postgres from the equation; performance does indeed still drop.
>
> * Rebooting server and immediately re-running test and performance is back to original.
>
> * Tried Karl Denninger's patch from PR187594 (it took some work to find a FreeBSD 10 kernel against which the patch would both apply and compile cleanly).
>
> * Tried disabling ZFS lz4 compression.
>
> * Ran the same test on a FreeBSD9.0 amd64 system using PostgreSQL 9.1.3 with 2 Intel 520s in a ZFS mirrored pair. The system had 165 days of uptime and the test took ~80 seconds; after a reboot and re-run it was still at ~80 seconds (older processor and memory in this system).
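On the bonnie++ point above: for anyone repeating that run, an invocation
along these lines should do (path, size and user are placeholders, not
from the original mail):

# -s should be comfortably larger than RAM so ARC can't hide the disks
bonnie++ -d /ara/sqldb/pgsql/bench -s 256g -u pgsql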
>
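And on the lz4 item, the setting is easy to confirm before and after
toggling it (the dataset name here is from my box above, adjust as needed):

zfs get compression,compressratio zroot/ara/sqldb/pgsql
zfs set compression=off zroot/ara/sqldb/pgsql   # only affects newly written data
zfs set compression=lz4 zroot/ara/sqldb/pgsql   # revert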
> I realize that there's a whole lot of info I'm not including (dmesg, zfs-stats -a, gstat, et cetera): I'm hoping some enlightened individual will be able to point me to a solution with only the above to go on.
>
> Cheers,
> Aaron