MySQL performance concern

Ivan Voras ivoras at freebsd.org
Sun Oct 3 18:20:08 UTC 2010


On 10/02/10 22:18, Rumen Telbizov wrote:
>   pool: tank
> config:
> 
>         NAME           STATE     READ WRITE CKSUM
>         tank           ONLINE       0     0     0
>           mirror       ONLINE       0     0     0
>             gpt/tank0  ONLINE       0     0     0
>             gpt/tank1  ONLINE       0     0     0
>           mirror       ONLINE       0     0     0
>             gpt/tank2  ONLINE       0     0     0
>             gpt/tank3  ONLINE       0     0     0
>         logs           ONLINE       0     0     0
>           mirror       ONLINE       0     0     0
>             gpt/zil0   ONLINE       0     0     0
>             gpt/zil1   ONLINE       0     0     0
>         cache
>           gpt/l2arc0   ONLINE       0     0     0
>           gpt/l2arc1   ONLINE       0     0     0
> 
>   pool: zroot
> config:
> 
>         NAME            STATE     READ WRITE CKSUM
>         zroot           ONLINE       0     0     0
>           mirror        ONLINE       0     0     0
>             gpt/zroot0  ONLINE       0     0     0
>             gpt/zroot1  ONLINE       0     0     0
>
> zroot is a couple of small partitions from two of the same SAS disks. zil
> and l2arc are 8 and 22G partitions from 32G SSDs

This looks a bit overly complex (your recovery procedure if one of the
drives goes bad will include re-creating the partition layout - see the
gpart note after the list below), but it probably shouldn't affect
performance. Just to check, mapped to physical drives it looks like this
("gpt/" prefix omitted for brevity):

 * tank0..tank3 : on SAS drives
 * zroot0, zroot1 : on some of the same SAS drives as above
 * zil0, zil1 : on SSD drives
 * l2arc0, l2arc1 : on the same SSD drives as above
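
To make that recovery step less painful you could save the partition
layout up front. Just a sketch, with placeholder device names - and
"gpart backup"/"gpart restore" may not be in your 8.1 base, in which
case keep a copy of the "gpart show" output instead:

# gpart show da0 > /root/da0.layout
# gpart backup da0 > /root/da0.backup

and later "gpart restore -F daN < /root/da0.backup" on the replacement
disk.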

The L2ARC and the ZIL have very different IO characteristics; I don't
know whether they will interfere with each other when sharing the same
SSDs.

Can you spend some time looking at the output of "gstat" while the
database task is running and see if there's anything odd - for example,
the "%busy" column going near 100% for some of the devices? What IO
bandwidth and ops/s are you getting?
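
Something like this is enough (a sketch; "-a" limits the output to
providers that are actually doing IO):

# gstat -a

The columns to watch for the gpt/* providers and the underlying disks
are r/s, w/s, kBps and %busy.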

> I pretty much have no zfs tuning done since from what I've found there
> shouldn't be any needed since I'm running 8.1 on a 64bit machine.
> Let me know if you'd like me to experiment with any ...
> 
> Some additional information:
> # sysctl vm.kmem_size
> vm.kmem_size: 5539958784
> # sysctl vm.kmem_size_max
> vm.kmem_size_max: 329853485875
> # sysctl vfs.zfs.arc_max
> vfs.zfs.arc_max: 4466216960

I have done some digging myself and it seems that two settings have a
noticeable impact on MySQL load:

* the zfs block size (the "recordsize" property) - you need to re-create
all MySQL files to change this, since it only applies to newly written
files; set it to 8 KiB (or whatever MyISAM uses for block size) - see
the sketch after this list
* reducing vfs.zfs.txg.timeout to about 5 seconds
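
For example (a sketch - "tank/mysql" is a placeholder for whatever
dataset holds the database files; copy the files afterwards so they are
re-written with the new record size):

# zfs set recordsize=8K tank/mysql

vfs.zfs.txg.timeout is, as far as I remember, a loader tunable on 8.x,
so it goes into /boot/loader.conf and takes effect after a reboot:

vfs.zfs.txg.timeout="5"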

Are you using ZFS compression?
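(You can check with e.g. "zfs get compression tank".)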

See http://jp.planet.mysql.com/entry/?id=19489 for more ideas.

Other than that, your CPUs are:

New: 2 x Dual Core Xeon E5502 @ 1.87 GHz
Old: 2 x Quad Core Xeon E5410 @ 2.33 GHz

You can see here how different they are:
http://en.wikipedia.org/wiki/List_of_Intel_Xeon_microprocessors

Specifically, since you are using a single-threaded client, you *need*
the additional GHz of the old server. You are quoting 30% CPU usage on
the new server - I assume this is the "total" CPU as reported by
utilities like "top", "iostat", "vmstat", etc. - meaning that if the
system has four CPU cores, one of them can be 100% busy (i.e. 25% of the
total) while another is about 20% used.
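
You can confirm this with per-CPU statistics, e.g. (assuming your top
supports the -P flag):

# top -P

If one core sits near 100% while the others are mostly idle, the
single-threaded client is CPU-bound on that core.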


