Re: zfs mirror pool online but drives have read errors
- In reply to: Bram Van Steenlandt: "zfs mirror pool online but drives have read errors"
Date: Sun, 27 Mar 2022 06:02:25 UTC
First, make and test some backups with whatever you prefer: zfs send,
rsync, tar...

ZFS got errors, but the pool had enough read redundancy for scrubs and
resilvering to work; the drives are failing underneath. The drives can also
be tried in another box to confirm the failures in isolation.

Test a full drive read with:

  dd if=/dev/drive of=/dev/null bs=1m conv=noerror

Test a full drive write with:

  dd if=/dev/random of=/dev/drive bs=1m

With camcontrol you can also try the standard and manufacturer drive-based
sanitize / security erase / initialize functions. These write operations may
map out the drive's bad sectors, so the next dd read and write tests may
come back clean. Then trust the drive again, or not.

> It did repair 54M on the last scrub, I did another scrub today and again
> repairs are needed (only 128K this time).

More failures.

> smartctl does see the errors (but still says SMART overall-health
> self-assessment test result: PASSED ):

It's an estimate. Hardware failure is unpredictable.

> -zfs doesn't remove the drives because...

Likely.

> -Both drives are unreliable...

Likely.

> Could more expensive ssd's have made a difference here ?

Maybe not; people hammer ZFS arrays built on cheap USB sticks with
thousands of write cycles for fun.
https://www.youtube.com/watch?v=7z526m1jvls

zfs set atime=off compression=on ... extends drive life too.

> 1200TBW / 2TB = 600 cycles

> "zfs send > imgfile"
> what would have happened here if more and
> more read errors would occur ?

zpool failmode=wait

If the send completed ok, the image should be good. Users are supposed to
test their backups anyway.
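
One way to test a "zfs send > imgfile" style backup, as a minimal sketch
(pool/fs@snap, /backup/fs.img and scratch/restore are placeholders for
whatever you actually use):

  zfs send pool/fs@snap > /backup/fs.img
  # Dry-run receive (-n): parse the stream and show what a real receive
  # would do (-v), without writing anything:
  zfs receive -n -v scratch/restore < /backup/fs.img
  # Or inspect the stream headers (zstreamdump on older releases,
  # "zstream dump" on newer OpenZFS):
  zstreamdump < /backup/fs.img | head

A real receive into a scratch dataset is still the only full test.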
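
For the camcontrol functions mentioned above, a rough sketch, assuming the
suspect SSD shows up as ada0 (substitute your device, check camcontrol(8)
on your release first, and note these commands destroy all data on it):

  camcontrol security ada0                    # show security capabilities
  camcontrol security ada0 -U user -s tmppw   # set a temporary user password
  camcontrol security ada0 -U user -e tmppw   # ATA security erase
  # Or, if the drive supports the sanitize feature set:
  camcontrol sanitize ada0 -a block           # block-erase sanitize
  # Re-run the dd read test afterwards; if it now comes back clean, the
  # firmware probably remapped the bad sectors:
  dd if=/dev/ada0 of=/dev/null bs=1m conv=noerror status=progress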