This disk failure should not panic a system, but just disconnect the disk from ZFS
Willem Jan Withagen
wjw at digiware.nl
Sat Jun 20 14:50:58 UTC 2015
Hi,
Found my system rebooted this morning:
Jun 20 05:28:33 zfs kernel: sonewconn: pcb 0xfffff8011b6da498: Listen queue overflow: 8 already in queue awaiting acceptance (48 occurrences)
Jun 20 05:28:33 zfs kernel: panic: I/O to pool 'zfsraid' appears to be hung on vdev guid 18180224580327100979 at '/dev/da0'.
Jun 20 05:28:33 zfs kernel: cpuid = 0
Jun 20 05:28:33 zfs kernel: Uptime: 8d9h7m9s
Jun 20 05:28:33 zfs kernel: Dumping 6445 out of 8174 MB:..1%..11%..21%..31%..41%..51%..61%..71%..81%..91%
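Aside: as far as I can tell, that panic string comes from ZFS's "deadman" timer, which declares an I/O hung once it has been outstanding longer than a tunable limit and then deliberately panics. A sketch of the loader tunables involved, assuming the names and defaults as on FreeBSD 10.x:
----
# /boot/loader.conf -- sketch, tunable names as on FreeBSD 10.x
vfs.zfs.deadman_enabled=0            # 0 = log hung I/O instead of panicking
vfs.zfs.deadman_synctime_ms=1000000  # ms an I/O may be outstanding before it counts as hung
vfs.zfs.deadman_checktime_ms=5000    # ms between deadman checks
----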
Which leads me to believe that /dev/da0 went out on vacation, leaving
ZFS in trouble.... But the array is:
----
NAME                 SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfsraid             32.5T  13.3T  19.2T         -     7%    41%  1.00x  ONLINE  -
  raidz2            16.2T  6.67T  9.58T         -     8%    41%
    da0                 -      -      -         -      -      -
    da1                 -      -      -         -      -      -
    da2                 -      -      -         -      -      -
    da3                 -      -      -         -      -      -
    da4                 -      -      -         -      -      -
    da5                 -      -      -         -      -      -
  raidz2            16.2T  6.67T  9.58T         -     7%    41%
    da6                 -      -      -         -      -      -
    da7                 -      -      -         -      -      -
    ada4                -      -      -         -      -      -
    ada5                -      -      -         -      -      -
    ada6                -      -      -         -      -      -
    ada7                -      -      -         -      -      -
  mirror             504M  1.73M   502M         -    39%     0%
    gpt/log0            -      -      -         -      -      -
    gpt/log1            -      -      -         -      -      -
cache                   -      -      -         -      -      -
  gpt/raidcache0     109G  1.34G   107G         -     0%     1%
  gpt/raidcache1     109G   787M   108G         -     0%     0%
----
And thus I would have expected ZFS to disconnect /dev/da0, switch the
pool to DEGRADED state, and continue, letting the operator fix the
broken disk.
Instead it chooses to panic, which is not a nice thing to do. :)
Or do I have too high hopes of ZFS?
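What I would have expected to be able to do at that point, roughly (a sketch; da0 stands in for whichever disk actually failed):
----
# take the suspect disk out of service; the raidz2 keeps running DEGRADED
zpool offline zfsraid da0
# after physically swapping the drive, resilver onto the replacement
zpool replace zfsraid da0
----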
The next question to answer is why this WD Red on:
----
arcmsr0 at pci0:7:14:0: class=0x010400 card=0x112017d3 chip=0x112017d3 rev=0x00 hdr=0x00
    vendor   = 'Areca Technology Corp.'
    device   = 'ARC-1120 8-Port PCI-X to SATA RAID Controller'
    class    = mass storage
    subclass = RAID
----
got hung, while nothing about it shows up in SMART....
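In case it helps anyone repeat the SMART check: smartmontools can talk to disks behind an Areca controller through the arcmsr device (a sketch; the areca,1 slot number is an assumption and has to match the port da0 actually sits on):
----
# full SMART dump of the disk in Areca slot 1, queried through the controller
smartctl -a -d areca,1 /dev/arcmsr0
----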
Thanx,
--WjW
(vmcore available if needed)