Data corruption seen on the pool when an active path is pulled from geom_multipath device while running I/O
Sowmya L
sowmya at cloudbyte.co
Thu Jul 18 07:08:50 UTC 2013
Hi,
Read/write errors are recorded when an active path of the geom_multipath
device is pulled while running I/O on a dataset created on the pool.
The I/O load on the dataset is generated with dd.
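For reference, the load was roughly of the following form (the dataset
path, block size and count below are placeholders, not the exact values
used in the test):

    dd if=/dev/zero of=/poola/dataset1/testfile bs=1m count=10240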
FreeBSD version: 9.0
Patches imported from stable/9: r229303, r234916
zpool status:

  pool: poola
 state: ONLINE
  scan: none requested
config:

        NAME                    STATE     READ WRITE CKSUM
        poola                   ONLINE       0     0     0
          mirror-0              ONLINE       0     0     0
            multipath/newdisk4  ONLINE       0     0     0
            multipath/newdisk2  ONLINE       0     0     0

errors: No known data errors
gmultipath status:
* *
Name Status Components
multipath/newdisk2 OPTIMAL da7 (ACTIVE)
da2 (PASSIVE)
multipath/newdisk1 OPTIMAL da6 (ACTIVE)
da1 (PASSIVE)
multipath/newdisk4 OPTIMAL da3 (ACTIVE)
da4 (PASSIVE)
multipath/newdisk OPTIMAL da0 (ACTIVE)
da5 (PASSIVE)
multipath/newdisk3 OPTIMAL da8 (ACTIVE)
da9 (PASSIVE)
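In case it helps anyone trying to reproduce this without physically
pulling a cable, marking the active component as failed while dd is
running should exercise the same failover path (a sketch only; the
device names are taken from the output above and the exact commands
used in the test may differ):

    gmultipath fail newdisk4 da3      # mark active path da3 as failed
    # ... keep dd running against the dataset ...
    gmultipath restore newdisk4 da3   # bring the path back afterwards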
zpool status after pulling the active path of the g_multipath device:
  pool: mypool1
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
  scan: resilvered 27.2M in 0h0m with 0 errors on Thu Jul 4 19:47:44 2013
config:

        NAME                    STATE     READ WRITE CKSUM
        mypool1                 ONLINE       0     0     0
          mirror-0              ONLINE       0    12     0
            multipath/newdisk4  ONLINE       0    27     0
            multipath/newdisk2  ONLINE       0    12     0
        spares
          multipath/newdisk     AVAIL

errors: No known data errors
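Once failover to the passive path completes and the resilver finishes,
the write errors shown above should be clearable as the status text
itself suggests (assuming the underlying devices are otherwise healthy):

    zpool clear mypool1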
Are there any dependencies for the patches picked from stable/9 as
mentioned above?
I will be waiting for your reply.
--
Thanks & Regards,
Sowmya L