[Bug 211381] L2ARC degraded, repeatedly, on Samsung SSD 950 Pro nvme

bugzilla-noreply at freebsd.org
Wed Jul 27 16:14:05 UTC 2016


https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=211381

--- Comment #7 from braddeicide at hotmail.com ---
Yes, it's on a geli provider.

dtrace -n 'sdt:::l2arc-iodone /args[0]->io_error != 0/ {
    printf("io_error = %d io_offset = %d io_size = %d",
        args[0]->io_error, args[0]->io_offset, args[0]->io_size); }'
dtrace: description 'sdt:::l2arc-iodone ' matched 1 probe
dtrace: buffer size lowered to 512k
CPU     ID                    FUNCTION:NAME
  2  57400                none:l2arc-iodone io_error = 22 io_offset = 0 io_size = 0
  1  57400                none:l2arc-iodone io_error = 22 io_offset = 0 io_size = 0
  0  57400                none:l2arc-iodone io_error = 22 io_offset = 0 io_size = 0
  1  57400                none:l2arc-iodone io_error = 22 io_offset = 0 io_size = 0
  0  57400                none:l2arc-iodone io_error = 22 io_offset = 0 io_size = 0
  1  57400                none:l2arc-iodone io_error = 22 io_offset = 0 io_size = 0
  0  57400                none:l2arc-iodone io_error = 22 io_offset = 0 io_size = 0
  1  57400                none:l2arc-iodone io_error = 22 io_offset = 0 io_size = 0
  2  57400                none:l2arc-iodone io_error = 22 io_offset = 0 io_size = 0
  0  57400                none:l2arc-iodone io_error = 22 io_offset = 0 io_size = 0
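
Error 22 is EINVAL ("Invalid argument"), which is consistent with GEOM rejecting
I/O that isn't aligned to the provider's sector size. For reference, the errno
mapping (the header path is standard on FreeBSD):

# grep -w EINVAL /usr/include/sys/errno.h
#define EINVAL          22              /* Invalid argument */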

# diskinfo -v /dev/nvd0p3.eli
/dev/nvd0p3.eli
        4096            # sectorsize
        483117756416    # mediasize in bytes (450G)
        117948671       # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        7341            # Cylinders according to firmware.
        255             # Heads according to firmware.
        63              # Sectors according to firmware.
        S2GMNCAGA01093W # Disk ident.

Oh, 4k. The device itself is pretty dead set on being 512:

# diskinfo -v /dev/nvd0p3
/dev/nvd0p3
        512             # sectorsize
        483117760512    # mediasize in bytes (450G)
        943589376       # mediasize in sectors
        512             # stripesize
        0               # stripeoffset
        58735           # Cylinders according to firmware.
        255             # Heads according to firmware.
        63              # Sectors according to firmware.
        S2GMNCAGA01093W # Disk ident.
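
As a cross-check, geli itself reports both sector sizes, the .eli provider's and
the underlying consumer's (a sketch; output trimmed to the relevant lines):

# geli list nvd0p3.eli | grep Sectorsize
        Sectorsize: 4096
        Sectorsize: 512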

I dropped the cache and the dtrace errors stopped, so that's the source all
right. I detached the geli provider, reattached it, and added it back as cache,
and the errors started again. Then I dropped the cache, detached geli, and
added nvd0p3 as cache without geli: no errors, and the cache started building
(the sequence is sketched below the iostat output).

                  capacity     operations    bandwidth
pool           alloc   free   read  write   read  write
cache              -      -      -      -      -      -
  nvd0p3        514M   449G      0      0      0      0
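
For the record, the sequence was roughly this (pool name "tank" and key
handling are placeholders, not my actual setup):

# zpool remove tank nvd0p3.eli        (drop the geli-backed cache; errors stop)
# geli detach nvd0p3.eli
# geli attach nvd0p3                  (reattach and re-add; errors return)
# zpool add tank cache nvd0p3.eli
# zpool remove tank nvd0p3.eli
# geli detach nvd0p3.eli
# zpool add tank cache nvd0p3         (raw partition; no errors, cache builds)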

I recreated the geli provider with a 512-byte sector size and reattached it,
and so far there are no errors, but it worked for a while the first time too.
The cache is building, no dtrace errors yet; I'll keep watching.
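
For reference, the recreation was along these lines (again, pool name and key
handling are placeholders; -s 512 is the part that matters):

# zpool remove tank nvd0p3
# geli init -s 512 nvd0p3
# geli attach nvd0p3
# zpool add tank cache nvd0p3.eli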

# diskinfo -v /dev/nvd0p3.eli
/dev/nvd0p3.eli
        512             # sectorsize
        483117760000    # mediasize in bytes (450G)
        943589375       # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        58735           # Cylinders according to firmware.
        255             # Heads according to firmware.
        63              # Sectors according to firmware.
        S2GMNCAGA01093W # Disk ident.

Re: r300039, yes, that looks relevant. A 4k-sector geli provider gives the
cache vdev ashift=12, and EINVAL fits L2ARC issuing writes that aren't aligned
to the device's ashift:
------------------------------------------------------------------------
r300039 | avg | 2016-05-17 18:43:50 +1000 (Tue, 17 May 2016) | 5 lines

MFC r297848: l2arc: make sure that all writes honor ashift of a cache device

Note: no MFC stable/9 because it has become quite out of date with head,
so the merge would be quite laborious and, thus, risky.

------------------------------------------------------------------------
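
One way to check whether a given system already has that fix, assuming
/usr/src is an svn checkout of the branch in use, is to compare the
checked-out revision against 300039:

# svnlite info /usr/src | grep Revision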
