zfs, a directory that used to hold a lot of files and listing pause
Eugene M. Zheganin
emz at norma.perm.ru
Thu Oct 20 15:34:59 UTC 2016
Hi.
On 20.10.2016 18:54, Nicolas Gilles wrote:
> Looks like it's not taking up any processing time, so my guess is
> the lag probably comes from stalled I/O ... bad disk?
Well, I cannot rule this out completely, but the first time I saw this
lag on this particular server was about two months ago, and two months
should be enough time for ZFS on a redundant pool to accumulate errors
if a disk were failing. But as you can see:
]# zpool status
  pool: zroot
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: resilvered 5.74G in 0h31m with 0 errors on Wed Jun  8 11:54:14 2016
config:

        NAME            STATE     READ WRITE CKSUM
        zroot           ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            gpt/zroot0  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gpt/zroot1  ONLINE       0     0     0

errors: No known data errors
there are none. Yup, the disks have different sector sizes, but this
issue happens with one particular directory, not with all of them, so I
guess the block size is irrelevant here.
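That said, a bad disk would not necessarily show up as checksum errors,
so one thing I can still do is watch per-device latency while repeating
the listing, something along these lines (just a rough sketch):

]# zpool iostat -v zroot 1   # per-vdev ops and bandwidth, 1-second intervals
]# gstat                     # GEOM-level %busy and ms per transaction for each disk

If one mirror member showed long service times while the other sat
idle, that would point at the disk rather than at ZFS.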
> Does a second "ls" return immediately (i.e. the metadata has been
> cached)?
Nope, although the lag varies slightly:
4.79s real 0.00s user 0.02s sys
5.51s real 0.00s user 0.02s sys
4.78s real 0.00s user 0.02s sys
6.88s real 0.00s user 0.02s sys
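(For reference, those numbers come from repeated runs of something like
the command below; the path is only a placeholder for the affected
directory, and the output is redirected so terminal rendering does not
inflate the figures.)

]# /usr/bin/time ls /the/affected/directory > /dev/null   # path is a placeholder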
Thanks.
Eugene.