From nobody Fri Feb 23 19:16:20 2024 X-Original-To: freebsd-fs@mlmmj.nyi.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1]) by mlmmj.nyi.freebsd.org (Postfix) with ESMTP id 4ThKW55nYzz5BLLv for ; Fri, 23 Feb 2024 19:16:33 +0000 (UTC) (envelope-from wlosh@bsdimp.com) Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com [IPv6:2a00:1450:4864:20::62a]) (using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256 client-signature RSA-PSS (2048 bits) client-digest SHA256) (Client CN "smtp.gmail.com", Issuer "GTS CA 1D4" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 4ThKW550kwz51Yf for ; Fri, 23 Feb 2024 19:16:33 +0000 (UTC) (envelope-from wlosh@bsdimp.com) Authentication-Results: mx1.freebsd.org; none Received: by mail-ej1-x62a.google.com with SMTP id a640c23a62f3a-a3d484a58f6so172312466b.3 for ; Fri, 23 Feb 2024 11:16:33 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bsdimp-com.20230601.gappssmtp.com; s=20230601; t=1708715792; x=1709320592; darn=freebsd.org; h=cc:to:subject:message-id:date:from:in-reply-to:references :mime-version:from:to:cc:subject:date:message-id:reply-to; bh=XJYSV+WedQR/vT9X9dfAh0RZcMsRRnABxjNFMp6G5NE=; b=bE2H7Jd4uFrOXT0e0KQ/a/gzPHk4ynd91TdJuYbbHtQW0moFJ1TdpSKnn+j20DRYhL DtjZLrVs3unKxQ8MtPm9HUDgHdn1XwnXO1m+Xj0yW9SRmV0PftCr6rkiK9Wh/a4N68mh EhyydZ6hq83gQYGeGw72Ei3b4O2HxPztJyiJ5eUwph15RcBdoKpliBSGc61/KqXJouGY 3nG+5eu8TRLE+KFKA0e27XxorT6q0fnZQ79b7OpU4IHXub2kwJfLWrmX6UtDwJ1mmc75 wiov/hLcMHi1LABxJbpVrxRauPwdL89rQw+GsYFLkmLb5oVvAVHwHPS+QV0zmWqZl8fP QvYg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1708715792; x=1709320592; h=cc:to:subject:message-id:date:from:in-reply-to:references :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=XJYSV+WedQR/vT9X9dfAh0RZcMsRRnABxjNFMp6G5NE=; b=WYU/MDdP3J/5a7WIuvVUj74YbHaZ1H+Wbh7HWK2O7B7+8a1OO5Mm0ok/KLE5ioXOse Bu/5dmkZhn6MiafYCr+Jep4NkxXhEC/8d9JQN6ngEcQ88fpnRQWxH6icHP47MeNST6/f 070nluk97bSKa1iGTlZ2IiMRxy0fHDndN4r3mhQp2o1MI7B+5Dqsdz6johr2WYkv6syX g/fEeMFhyFyZnn4UrQ+T67OPT0Iw1+XtjmSG3RHgU3nltEHcd8Qntk5hUmYphdt/gbkR jnlSfcbHLxvnJsvbaJEcF8dM9Ld5/YJtDrLPYZ530rliM/S0nUd7CeGq3pKE1XfwuTXk gxBQ== X-Gm-Message-State: AOJu0YyJn0t1kwT2tu7k2O90PNN8VetbhzdAE8Wt0iesuFbcTpLX+oEs 0yyoUdvD6yKWsqU1qLyyJM4bttTDVdTO8GtUnRoIqPEE3Pl8QgnBXxMNdBrOlDlbJoM+elqxJOB NI6q70DuQ1d+YMcSDCJXu9RVWeaeAjcfXYUORx/o8eBERP8GwZkw= X-Google-Smtp-Source: AGHT+IEpxKKAXM89bQ5fLProebdLTrqoRdFrMEgN6eOyEPwQnvb8ZZAQmFd2/yEuH/CNFEM9GWBzstvwDH4IR0h1tDM= X-Received: by 2002:a17:906:d930:b0:a42:808b:1e49 with SMTP id rn16-20020a170906d93000b00a42808b1e49mr200646ejb.38.1708715791947; Fri, 23 Feb 2024 11:16:31 -0800 (PST) List-Id: Filesystems List-Archive: https://lists.freebsd.org/archives/freebsd-fs List-Help: List-Post: List-Subscribe: List-Unsubscribe: Sender: owner-freebsd-fs@freebsd.org MIME-Version: 1.0 References: <5n117579-8259-3on3-90pn-79o8n52q4q32@yvfgf.mnoonqbm.arg> In-Reply-To: From: Warner Losh Date: Fri, 23 Feb 2024 12:16:20 -0700 Message-ID: Subject: Re: really slow problem with nvme To: "Bjoern A. Zeeb" Cc: FreeBSD FS Content-Type: multipart/alternative; boundary="000000000000cd2a3f06121165e0" X-Spamd-Bar: ---- X-Rspamd-Pre-Result: action=no action; module=replies; Message is reply to one we originated X-Spamd-Result: default: False [-4.00 / 15.00]; REPLY(-4.00)[]; ASN(0.00)[asn:15169, ipnet:2a00:1450::/32, country:US] X-Rspamd-Queue-Id: 4ThKW550kwz51Yf --000000000000cd2a3f06121165e0 Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable On Fri, Feb 23, 2024 at 12:03=E2=80=AFPM Bjoern A. Zeeb < bzeeb-lists@lists.zabbadoz.net> wrote: > On Fri, 23 Feb 2024, Warner Losh wrote: > > > On Fri, Feb 23, 2024, 10:46=E2=80=AFAM Bjoern A. Zeeb < > > bzeeb-lists@lists.zabbadoz.net> wrote: > > > >> Hi, > >> > >> this is a Samsung SSD 970 EVO Plus 1TB nvme and gpart and newfs > >> were already slow (it took like two hours for newfs). > >> > >> Here's another example now: > >> > >> # /usr/bin/time mkdir foo > >> 1.82 real 0.00 user 0.00 sys > >> > >> How does one debug this? > >> > > > > What filesystem? Sounds like UFS but just making sure. . > > yes, ufs > > > So what's the link speed and number of lanes? If it's bad i might rese= at > > (though that might not help) that looks good... > > pciconf I had checked: > > nvme0@pci4:1:0:0: class=3D0x010802 rev=3D0x00 hdr=3D0x00 vendor=3D0= x144d > device=3D0xa808 subvendor=3D0x144d subdevice=3D0xa801 > class =3D mass storage > subclass =3D NVM > bar [10] =3D type Memory, range 64, base 0x40000000, size 16384, > enabled > cap 01[40] =3D powerspec 3 supports D0 D3 current D0 > cap 05[50] =3D MSI supports 1 message, 64 bit > cap 10[70] =3D PCI-Express 2 endpoint max data 128(256) FLR RO NS > max read 512 > link x2(x4) speed 8.0(8.0) ASPM disabled(L1) ClockPM > disabled > cap 11[b0] =3D MSI-X supports 33 messages, enabled > Table in map 0x10[0x3000], PBA in map 0x10[0x2000] > ecap 0001[100] =3D AER 2 0 fatal 0 non-fatal 0 corrected > ecap 0003[148] =3D Serial 1 0000000000000000 > ecap 0004[158] =3D Power Budgeting 1 > ecap 0019[168] =3D PCIe Sec 1 lane errors 0 > ecap 0018[188] =3D LTR 1 > ecap 001e[190] =3D L1 PM Substates 1 > x4 card in a x2 slot. If that's intentional, then this looks good. > > > Though I'd bet money that this is an interrupt issue. I'd do a vmstat. = -i > > to watch how quickly they accumulate... > > That I am waiting for a full world to get onto it. I wish I could have > netbooted but not possible there currently. > > Only took 15 minutes to extract the tar now. Should have used ddb... > hadn't thought of that before... > > # vmstat -ai | grep nvme > its0,0: nvme0:admin 0 0 > its0,1: nvme0:io0 0 0 > its0,2: nvme0:io1 0 0 > its0,3: nvme0:io2 0 0 > its0,4: nvme0:io3 0 0 > its0,5: nvme0:io4 0 0 > its0,6: nvme0:io5 0 0 > its0,7: nvme0:io6 0 0 > its0,8: nvme0:io7 0 0 > > How does this even work? Do we poll? > Yes. We poll, and poll slowly. You have an interrupt problem. On an ARM platform. Fun. ITS and I are old.... foes? Friends? frenemies? As for why, I don't know. I've been fortunate never to have to chase interrupts not working on arm problems.... > And before you ask: > > [1.000407] nvme0: mem 0x40000000-0x40003fff at > device 0.0 on pci5 > [1.000409] nvme0: attempting to allocate 9 MSI-X vectors (33 supported) > [1.000410] nvme0: using IRQs 106-114 for MSI-X > [1.000411] nvme0: CapLo: 0x3c033fff: MQES 16383, CQR, AMS WRRwUPC, TO 60 > [1.000412] nvme0: CapHi: 0x00000030: DSTRD 0, NSSRS, CSS 1, CPS 0, MPSMIN > 0, MPSMAX 0 > [1.000413] nvme0: Version: 0x00010300: 1.3 > Yea, that's what I'd expect. > > How old is the drive? Fresh install? Do other drives have this same iss= ue > > in the same slot? Dies this drive have issues in other maxhines or slot= s? > > The drive is a few months old but only in the box until it went on this > board. > > I checked nvmecontrol for anything obvious but didn't see. > OK. So not 'super old nand in its death throes being slow" > > Oh, and what's its temperature? Any message in dmesg? > > Nothing in dmesg, temp seems not too bad. Took a while to get > smartmontools; > we have no way to see this in nvmecontrol in human readable form, do we? > > Temperature Sensor 1: 51 Celsius > Temperature Sensor 2: 48 Celsius > A little warm, but not terrible. 50 is where I start to worry a bit, but the card won't thermal throttle until more like 60. We don't currently have a nvmecontrol identify field to tell you this (I should add it, this is the second time in as many weeks I've wanted it). Ok I got a 2nd identical machine netbooted remotely (pressue with > problems often helps) -- slightly different freebsd version and kernel, > same baord, same type of nvme bought together: > > # /usr/bin/time dd if=3D/dev/zero of=3D/dev/nda0 bs=3D1M count=3D1024 > 1024+0 records in > 1024+0 records out > 1073741824 bytes transferred in 1.657316 secs (647879880 bytes/sec) > 1.66 real 0.00 user 0.94 sys > > and ddb> show intrcnt > .. > its0,0: nvme0:admin 24 > its0,1: nvme0:io0 126 > its0,2: nvme0:io1 143 > its0,3: nvme0:io2 131 > its0,4: nvme0:io3 128 > its0,5: nvme0:io4 135 > its0,6: nvme0:io5 147 > its0,7: nvme0:io6 143 > its0,8: nvme0:io7 144 Yea, that's what I'd expect/ Dozens to hundreds of interrupts. I'll try to make sure I can safely access both over the weekend remotely > from a more comforting place and I know where to start looking now... > > Thanks! > No problem! Warner --000000000000cd2a3f06121165e0 Content-Type: text/html; charset="UTF-8" Content-Transfer-Encoding: quoted-printable


=
On Fri, Feb 23, 2024 at 12:03=E2=80= =AFPM Bjoern A. Zeeb <= bzeeb-lists@lists.zabbadoz.net> wrote:
On Fri, 23 Feb 2024, Warner Losh wrote:

> On Fri, Feb 23, 2024, 10:46=E2=80=AFAM Bjoern A. Zeeb <
> bz= eeb-lists@lists.zabbadoz.net> wrote:
>
>> Hi,
>>
>> this is a Samsung SSD 970 EVO Plus 1TB nvme and gpart and newfs >> were already slow (it took like two hours for newfs).
>>
>> Here's another example now:
>>
>> # /usr/bin/time mkdir foo
>>=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 1.82 real=C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A00.00 user=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A00.00 sys
>>
>> How does one debug this?
>>
>
> What filesystem? Sounds like UFS but just making sure. .

yes, ufs

> So what's the link speed and number of lanes?=C2=A0 If it's ba= d i might reseat
> (though that might not help) that looks good...

pciconf I had checked:

nvme0@pci4:1:0:0:=C2=A0 =C2=A0 =C2=A0 =C2=A0class=3D0x010802 rev=3D0x00 hdr= =3D0x00 vendor=3D0x144d device=3D0xa808 subvendor=3D0x144d subdevice=3D0xa8= 01
=C2=A0 =C2=A0 =C2=A0class=C2=A0 =C2=A0 =C2=A0 =3D mass storage
=C2=A0 =C2=A0 =C2=A0subclass=C2=A0 =C2=A0=3D NVM
=C2=A0 =C2=A0 =C2=A0bar=C2=A0 =C2=A0[10] =3D type Memory, range 64, base 0x= 40000000, size 16384, enabled
=C2=A0 =C2=A0 =C2=A0cap 01[40] =3D powerspec 3=C2=A0 supports D0 D3=C2=A0 c= urrent D0
=C2=A0 =C2=A0 =C2=A0cap 05[50] =3D MSI supports 1 message, 64 bit
=C2=A0 =C2=A0 =C2=A0cap 10[70] =3D PCI-Express 2 endpoint max data 128(256)= FLR RO NS
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 max read 512=
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 link x2(x4) = speed 8.0(8.0) ASPM disabled(L1) ClockPM disabled
=C2=A0 =C2=A0 =C2=A0cap 11[b0] =3D MSI-X supports 33 messages, enabled
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 Table in map= 0x10[0x3000], PBA in map 0x10[0x2000]
=C2=A0 =C2=A0 =C2=A0ecap 0001[100] =3D AER 2 0 fatal 0 non-fatal 0 correcte= d
=C2=A0 =C2=A0 =C2=A0ecap 0003[148] =3D Serial 1 0000000000000000
=C2=A0 =C2=A0 =C2=A0ecap 0004[158] =3D Power Budgeting 1
=C2=A0 =C2=A0 =C2=A0ecap 0019[168] =3D PCIe Sec 1 lane errors 0
=C2=A0 =C2=A0 =C2=A0ecap 0018[188] =3D LTR 1
=C2=A0 =C2=A0 =C2=A0ecap 001e[190] =3D L1 PM Substates 1

x4 card in a x2 slot. If that's intentional, then thi= s looks good.
=C2=A0

> Though I'd bet money that this is an interrupt issue. I'd do a= vmstat. -i
> to watch how quickly they accumulate...

That I am waiting for a full world to get onto it.=C2=A0 I wish I could hav= e
netbooted but not possible there currently.

Only took 15 minutes to extract the tar now.=C2=A0 Should have used ddb...<= br> hadn't thought of that before...

# vmstat -ai | grep nvme
its0,0: nvme0:admin=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A00=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 0
its0,1: nvme0:io0=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A00=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 0
its0,2: nvme0:io1=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A00=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 0
its0,3: nvme0:io2=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A00=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 0
its0,4: nvme0:io3=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A00=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 0
its0,5: nvme0:io4=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A00=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 0
its0,6: nvme0:io5=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A00=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 0
its0,7: nvme0:io6=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A00=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 0
its0,8: nvme0:io7=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A00=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 0

How does this even work?=C2=A0 Do we poll?

<= div>Yes. We poll, and poll slowly.=C2=A0 You have an interrupt problem.

On an ARM platform. Fun. ITS and I are old.... foes? = Friends? frenemies?

As for why, I don't know. = I've been fortunate never to have to chase
interrupts not wor= king on arm problems....
=C2=A0
And before you ask:

[1.000407] nvme0: <Generic NVMe Device> mem 0x40000000-0x40003fff at = device 0.0 on pci5
[1.000409] nvme0: attempting to allocate 9 MSI-X vectors (33 supported)
[1.000410] nvme0: using IRQs 106-114 for MSI-X
[1.000411] nvme0: CapLo: 0x3c033fff: MQES 16383, CQR, AMS WRRwUPC, TO 60 [1.000412] nvme0: CapHi: 0x00000030: DSTRD 0, NSSRS, CSS 1, CPS 0, MPSMIN 0= , MPSMAX 0
[1.000413] nvme0: Version: 0x00010300: 1.3

<= div>Yea, that's what I'd expect.
=C2=A0
> How old is the drive? Fresh install? Do other drives have this same is= sue
> in the same slot? Dies this drive have issues in other maxhines or slo= ts?

The drive is a few months old but only in the box until it went on this
board.

I checked nvmecontrol for anything obvious but didn't see.

OK. So not 'super old nand in its death throes = being slow"
=C2=A0
> Oh, and what's its temperature? Any message in dmesg?

Nothing in dmesg, temp seems not too bad.=C2=A0 Took a while to get smartmo= ntools;
we have no way to see this in nvmecontrol in human readable form, do we?
Temperature Sensor 1:=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A051 Celsius
Temperature Sensor 2:=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A048 Celsius

A little warm, but not te= rrible. 50 is where I start to worry a bit, but the card won't thermal<= /div>
throttle until more like 60. We don't currently have a nvmeco= ntrol identify field to tell you this
(I should add it, this is t= he second time in as many weeks I've wanted it).

Ok I got a 2nd identical machine netbooted remotely (pressue with
problems often helps) -- slightly different freebsd version and kernel,
same baord, same type of nvme bought together:

# /usr/bin/time dd if=3D/dev/zero of=3D/dev/nda0 bs=3D1M count=3D1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 1.657316 secs (647879880 bytes/sec)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A01.66 real=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A00.00 user=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A00.94 sys

and ddb> show intrcnt
..
its0,0: nvme0:admin=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A024
its0,1: nvme0:io0=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0126
its0,2: nvme0:io1=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0143
its0,3: nvme0:io2=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0131
its0,4: nvme0:io3=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0128
its0,5: nvme0:io4=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0135
its0,6: nvme0:io5=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0147
its0,7: nvme0:io6=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0143
its0,8: nvme0:io7=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0144

Yea, that= 9;s what I'd expect/ Dozens to hundreds of interrupts.

I'll try to make sure I can safely access both over the weekend remotel= y
from a more comforting place and I know where to start looking now...

Thanks!

No problem!

Warner
--000000000000cd2a3f06121165e0--