panic: non-current pmap on RPI3 on CURRENT (GENERIC) #4 r356366
Mark Millard
marklmi at yahoo.com
Mon Jun 29 00:21:38 UTC 2020
On 2020-Jun-28, at 12:50, bob prohaska <fbsd at www.zefox.net> wrote:
> On Thu, Jan 09, 2020 at 09:23:14AM -0800, bob prohaska wrote:
>> On Thu, Jan 09, 2020 at 01:51:23PM +0200, Konstantin Belousov wrote:
>>>
>>> It would be useful to see both the curcpu pc_curpmap content,
>>> and dump both *(struct pmap *)0xfffffd000385f5a0 and *pc_curpmap
>>> from the vmcore.
>
> The Pi3 is now up to r362283 and just reported:
>
> panic: non-current pmap 0xfffffd000142d440
> cpuid = 0
> time = 1593368952
> KDB: stack backtrace:
> db_trace_self() at db_trace_self_wrapper+0x28
> pc = 0xffff00000075e24c lr = 0xffff00000010a468
> sp = 0xffff00005a86d2e0 fp = 0xffff00005a86d4e0
>
> db_trace_self_wrapper() at vpanic+0x194
> pc = 0xffff00000010a468 lr = 0xffff000000419dcc
> sp = 0xffff00005a86d4f0 fp = 0xffff00005a86d540
>
> vpanic() at panic+0x44
> pc = 0xffff000000419dcc lr = 0xffff000000419b74
> sp = 0xffff00005a86d550 fp = 0xffff00005a86d600
>
> panic() at pmap_remove_pages+0x908
> pc = 0xffff000000419b74 lr = 0xffff000000776e00
> sp = 0xffff00005a86d610 fp = 0xffff00005a86d680
>
> pmap_remove_pages() at vmspace_exit+0x104
> pc = 0xffff000000776e00 lr = 0xffff0000006f7024
> sp = 0xffff00005a86d690 fp = 0xffff00005a86d6e0
>
> vmspace_exit() at exit1+0x48c
> pc = 0xffff0000006f7024 lr = 0xffff0000003d13fc
> sp = 0xffff00005a86d6f0 fp = 0xffff00005a86d750
>
> exit1() at sys_sys_exit+0x10
> pc = 0xffff0000003d13fc lr = 0xffff0000003d0f6c
> sp = 0xffff00005a86d760 fp = 0xffff00005a86d7b0
>
> sys_sys_exit() at do_el0_sync+0x3f8
> pc = 0xffff0000003d0f6c lr = 0xffff00000077dac8
> sp = 0xffff00005a86d7c0 fp = 0xffff00005a86d830
>
> do_el0_sync() at handle_el0_sync+0x90
> pc = 0xffff00000077dac8 lr = 0xffff000000760a24
> sp = 0xffff00005a86d840 fp = 0xffff00005a86d980
>
> handle_el0_sync() at 0x404bd678
> pc = 0xffff000000760a24 lr = 0x00000000404bd678
> sp = 0xffff00005a86d990 fp = 0x0000ffffffffe960
>
> KDB: enter: panic
> [ thread pid 42572 tid 100137 ]
> Stopped at 0x4053fcfc
> db>
>
>
> This time it was in the early stages of compiling www/chromium.
> Boot and root are from a mechanical hard disk. The last top(1)
> display before the session died was:
>
> last pid: 42562; load averages: 1.40, 1.37, 1.38 up 8+22:05:11 11:29:10
> 47 processes: 3 running, 44 sleeping
> CPU: 27.1% user, 0.0% nice, 11.4% system, 0.4% interrupt, 61.0% idle
> Mem: 92M Active, 237M Inact, 1468K Laundry, 158M Wired, 77M Buf, 415M Free
> Swap: 6042M Total, 194M Used, 5849M Free, 3% Inuse
> packet_write_wait: Connection to 50.1.20.28 port 22: Broken pipe
>   PID USERNAME THR PRI NICE  SIZE   RES STATE   C   TIME    WCPU COMMAND
> 42514 root 1 88 0 111M 63M CPU2 2 0:08 100.21% c++
> 81775 bob 1 52 0 13M 352K wait 0 9:50 0.35% sh
> 29366 bob 1 20 0 14M 1340K CPU0 0 3:00 0.22% top
> 29351 bob 1 20 0 20M 936K select 2 0:15 0.03% sshd
> 639 root 1 20 0 13M 972K select 3 0:28 0.01% syslogd
> 30908 root 1 52 0 194M 40M select 1 1:52 0.00% ninja
> 46086 bob 1 20 0 20M 312K select 0 1:48 0.00% sshd
> ......
>
> I'll update the OS sources and try again. If somebody can tell me
> how to capture more useful information, I'll try that.
Do you have your system set up to dump to the
swap/paging space on panic, and then to copy the
dump into the /var/crash/ area during the next
boot? Konstantin B. was asking for information
from such a dump.
Note: a dump can be requested at the db> prompt
by typing the "dump" command, provided a dump
device has been configured, usually a swap/paging
partition. If it works, the next boot will take
some time writing material into /var/crash/.
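If not, a minimal sketch of the usual setup, using
the stock rc.conf(5) knobs (adjust the device if
AUTO picks the wrong one):

# In /etc/rc.conf:
dumpdev="AUTO"        # pick a suitable swap device for kernel dumps
dumpdir="/var/crash"  # where savecore(8) writes vmcore.* at next boot

# Then activate without rebooting, and verify:
# service dumpon restart
# dumpon -l

After that, "dump" at the db> prompt has somewhere
to write.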
An example of such materials in /var/crash/
(here from 2 dumps):
# ls -ldT /var/crash/*
-rw-r--r-- 1 root wheel 2 Jun 16 19:58:17 2020 /var/crash/bounds
-rw-r--r-- 1 root wheel 32484 Jun 11 20:34:35 2020 /var/crash/core.txt.3
-rw-r--r-- 1 root wheel 32498 Jun 16 19:58:47 2020 /var/crash/core.txt.4
-rw------- 1 root wheel 561 Jun 11 20:34:04 2020 /var/crash/info.3
-rw------- 1 root wheel 562 Jun 16 19:58:17 2020 /var/crash/info.4
lrwxr-xr-x 1 root wheel 6 Jun 16 19:58:17 2020 /var/crash/info.last -> info.4
-rw-r--r-- 1 root wheel 5 Feb 22 02:37:33 2016 /var/crash/minfree
-rw------- 1 root wheel 9424896 Jun 11 20:34:04 2020 /var/crash/vmcore.3
-rw------- 1 root wheel 9424896 Jun 16 19:58:17 2020 /var/crash/vmcore.4
lrwxr-xr-x 1 root wheel 8 Jun 16 19:58:17 2020 /var/crash/vmcore.last -> vmcore.4
Do you have devel/gdb installed? It supplies a
/usr/local/bin/kgdb for looking at such vmcore.*
files.
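For example, a sketch assuming the matching kernel
and its debug files are still under /boot/kernel/
(and /usr/lib/debug/boot/kernel/), and that the
__pcpu[] per-CPU array symbol is visible on arm64
(I've not double-checked that name there):

# kgdb /boot/kernel/kernel /var/crash/vmcore.last
(kgdb) p *(struct pmap *)0xfffffd000142d440
(kgdb) p __pcpu[0].pc_curpmap
(kgdb) p *__pcpu[0].pc_curpmap

The first print is the pmap address from your panic
message; the other two use cpuid = 0 from the panic
report to get pc_curpmap, which is what Konstantin
asked to compare.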
As I understand it, the kernel debug information
must still match the vmcore, even if that means
booting a different, sufficiently-working kernel
(one that does not match the debug information)
just to get the /var/crash materials in place and
to inspect them.
I'm not sure you could do what Konstantin requested
from a non-debug kernel built the usual way, even
with debug information present.

Are you using a non-debug kernel? A debug kernel?
You might need to try reproducing the panic with a
debug kernel. (But that will likely make builds
take longer.)
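To check which you are running and to switch, a
sketch assuming the stock /usr/src layout (as I
recall, head's stock GENERIC already has WITNESS
and INVARIANTS, and GENERIC-NODEBUG is the variant
without them):

# uname -i
(prints the kernel ident, e.g. GENERIC vs. GENERIC-NODEBUG)
# cd /usr/src
# make -j4 buildkernel KERNCONF=GENERIC
# make installkernel KERNCONF=GENERIC
# shutdown -r now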
===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)