need help with ZFS
Mikhail (Plus Plus)
m at plus-plus.su
Mon Aug 31 12:09:31 UTC 2009
Pawel Jakub Dawidek wrote:
> I'm running your test on pretty low-end h/w (i386, 1 GB of RAM, two cores)
> and have not been able to reproduce the problem for a few hours now. The
> only tuning I did was to set vm.kmem_size to 1 GB. You still need this
> tuning even on amd64.
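For reference, the vm.kmem_size tuning Pawel mentions is a loader tunable, not a runtime sysctl, so it goes in /boot/loader.conf and takes effect at the next boot. A sketch (the 1G value matches his i386 box; the arc_max line is an additional, optional cap and not something he stated he set):

```shell
# /boot/loader.conf -- illustrative ZFS memory tunables
# Size of the kernel memory map; 1G matches the i386 test box above.
vm.kmem_size="1G"
vm.kmem_size_max="1G"
# Optionally cap the ARC to leave headroom for the rest of the kernel
# (hypothetical value, tune to your workload):
vfs.zfs.arc_max="512M"
```

After rebooting, the effective values can be checked with `sysctl vm.kmem_size vfs.zfs.arc_max`, as in the output below.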
Thanks for your response.
I just opened the server case, and one possible reason for the system
panics could be faulty hardware: right now I can see one SATA controller
that is not seated properly in its slot, possibly due to rough handling
in transit from the colo DC.
I'm going to fix these small hardware issues and then re-run the tests.
Below are the settings you requested:
> # sysctl vm.kmem_size
vm.kmem_size: 2753769472
> # sysctl vm.kmem_size_max
vm.kmem_size_max: 329853485875
> # sysctl vfs.zfs
vfs.zfs.arc_meta_limit: 430276480
vfs.zfs.arc_meta_used: 1534208
vfs.zfs.mdcomp_disable: 0
vfs.zfs.arc_min: 215138240
vfs.zfs.arc_max: 1721105920
vfs.zfs.zfetch.array_rd_sz: 1048576
vfs.zfs.zfetch.block_cap: 256
vfs.zfs.zfetch.min_sec_reap: 2
vfs.zfs.zfetch.max_streams: 8
vfs.zfs.prefetch_disable: 0
vfs.zfs.recover: 0
vfs.zfs.txg.synctime: 5
vfs.zfs.txg.timeout: 30
vfs.zfs.scrub_limit: 10
vfs.zfs.vdev.cache.bshift: 16
vfs.zfs.vdev.cache.size: 10485760
vfs.zfs.vdev.cache.max: 16384
vfs.zfs.vdev.aggregation_limit: 131072
vfs.zfs.vdev.ramp_rate: 2
vfs.zfs.vdev.time_shift: 6
vfs.zfs.vdev.min_pending: 4
vfs.zfs.vdev.max_pending: 35
vfs.zfs.cache_flush_disable: 0
vfs.zfs.zil_disable: 0
vfs.zfs.version.zpl: 3
vfs.zfs.version.vdev_boot: 1
vfs.zfs.version.spa: 13
vfs.zfs.version.dmu_backup_stream: 1
vfs.zfs.version.dmu_backup_header: 2
vfs.zfs.version.acl: 1
vfs.zfs.debug: 0
vfs.zfs.super_owner: 0
> # zpool status
  pool: mp3pool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mp3pool     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad24    ONLINE       0     0     0
            ad8     ONLINE       0     0     0
            ad18    ONLINE       0     0     0
            ad20    ONLINE       0     0     0
            ad22    ONLINE       0     0     0
            ad10    ONLINE       0     0     0
        spares
          ad26      AVAIL

errors: No known data errors
> # zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
mp3pool 5.44T 3.54T 1.90T 65% ONLINE -
> # zfs get all <your_test_fs>
NAME PROPERTY VALUE SOURCE
mp3pool type filesystem -
mp3pool creation Thu Feb 12 23:02 2009 -
mp3pool used 2.94T -
mp3pool available 1.51T -
mp3pool referenced 2.94T -
mp3pool compressratio 1.00x -
mp3pool mounted yes -
mp3pool quota none default
mp3pool reservation none default
mp3pool recordsize 128K default
mp3pool mountpoint /mp3pool default
mp3pool sharenfs off default
mp3pool checksum on default
mp3pool compression off default
mp3pool atime on default
mp3pool devices on default
mp3pool exec on default
mp3pool setuid on default
mp3pool readonly off default
mp3pool jailed off default
mp3pool snapdir hidden default
mp3pool aclmode groupmask default
mp3pool aclinherit restricted default
mp3pool canmount on default
mp3pool shareiscsi off default
mp3pool xattr off temporary
mp3pool copies 1 default
mp3pool version 3 -
mp3pool utf8only off -
mp3pool normalization none -
mp3pool casesensitivity sensitive -
mp3pool vscan off default
mp3pool nbmand off default
mp3pool sharesmb off default
mp3pool refquota none default
mp3pool refreservation none default
mp3pool primarycache all default
mp3pool secondarycache all default
>
> And place /var/run/dmesg.boot somewhere?
http://91.206.231.132/~miha/zfs.dmesg.boot
Thanks,
Mikhail.