FreeBSD-main-amd64-test - Build #20500 - Still Failing

From: <jenkins-admin_at_FreeBSD.org>
Date: Fri, 28 Jan 2022 11:08:00 UTC
FreeBSD-main-amd64-test - Build #20500 (263660c061ac76d449cbca7bdd0db2ecdfad76d9) - Still Failing

Build information: https://ci.FreeBSD.org/job/FreeBSD-main-amd64-test/20500/
Full change log: https://ci.FreeBSD.org/job/FreeBSD-main-amd64-test/20500/changes
Full build log: https://ci.FreeBSD.org/job/FreeBSD-main-amd64-test/20500/console

Status explanation:
"Failure" - the build is suspected being broken by the following changes
"Still Failing" - the build has not been fixed by the following changes and
                  this is a notification to note that these changes have
                  not been fully tested by the CI system

Change summaries:
(These commits are likely, but not certainly, responsible)

02db4a1234b3bd9cf153e567827fd387cf91bfb2 by bapt:
bsddialog: import version 0.1



The end of the build log:

[...truncated 3.93 MB...]
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN
passed  [27.664s]
sys/netpfil/common/dummynet:ipfw_pipe_v6  ->  epair0a: Ethernet address: 02:a3:17:ff:0a:0a
epair0b: Ethernet address: 02:a3:17:ff:0a:0b
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN
passed  [24.617s]
sys/netpfil/common/dummynet:ipfw_queue  ->  epair0a: Ethernet address: 02:dd:88:ce:fe:0a
epair0b: Ethernet address: 02:dd:88:ce:fe:0b
epair0a: link state changed to UP
epair0b: link state changed to UP
Limiting icmp ping response from 242 to 200 packets/sec
Limiting icmp ping response from 267 to 200 packets/sec
Limiting icmp ping response from 291 to 200 packets/sec
Limiting icmp ping response from 256 to 200 packets/sec
Limiting icmp ping response from 272 to 200 packets/sec
Limiting icmp ping response from 264 to 200 packets/sec
Limiting icmp ping response from 259 to 200 packets/sec
Limiting icmp ping response from 261 to 200 packets/sec
Limiting icmp ping response from 251 to 200 packets/sec
Limiting closed port RST response from 4078 to 200 packets/sec
Limiting icmp ping response from 258 to 200 packets/sec
Limiting icmp ping response from 253 to 200 packets/sec
Limiting icmp ping response from 257 to 200 packets/sec
Limiting icmp ping response from 246 to 200 packets/sec
Limiting icmp ping response from 257 to 200 packets/sec
Limiting icmp ping response from 252 to 200 packets/sec
Limiting icmp ping response from 257 to 200 packets/sec
Limiting icmp ping response from 242 to 200 packets/sec
Limiting icmp ping response from 256 to 200 packets/sec
Limiting icmp ping response from 264 to 200 packets/sec
Limiting icmp ping response from 264 to 200 packets/sec
Limiting icmp ping response from 260 to 200 packets/sec
Limiting icmp ping response from 262 to 200 packets/sec
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN
passed  [18.294s]
sys/netpfil/common/dummynet:ipfw_queue_v6  ->  epair0a: Ethernet address: 02:76:93:6d:11:0a
epair0b: Ethernet address: 02:76:93:6d:11:0b
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN
passed  [14.068s]
sys/netpfil/common/dummynet:pf_interface_removal  ->  epair0a: Ethernet address: 02:a7:43:63:f9:0a
epair0b: Ethernet address: 02:a7:43:63:f9:0b
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
passed  [5.936s]
sys/netpfil/common/dummynet:pf_nat  ->  epair0a: Ethernet address: 02:f6:47:6e:ea:0a
epair0b: Ethernet address: 02:f6:47:6e:ea:0b
epair0a: link state changed to UP
epair0b: link state changed to UP
epair1a: Ethernet address: 02:1d:f6:5b:3b:0a
epair1b: Ethernet address: 02:1d:f6:5b:3b:0b
epair1a: link state changed to UP
epair1b: link state changed to UP
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN
epair1a: link state changed to DOWN
epair1b: link state changed to DOWN
passed  [0.796s]
sys/netpfil/common/dummynet:pf_pipe  ->  epair0a: Ethernet address: 02:9d:dc:42:fb:0a
epair0b: Ethernet address: 02:9d:dc:42:fb:0b
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN
passed  [24.823s]
sys/netpfil/common/dummynet:pf_pipe_v6  ->  epair0a: Ethernet address: 02:8a:a7:57:53:0a
epair0b: Ethernet address: 02:8a:a7:57:53:0b
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN
passed  [22.568s]
sys/netpfil/common/dummynet:pf_queue  ->  epair0a: Ethernet address: 02:ce:ed:57:6b:0a
epair0b: Ethernet address: 02:ce:ed:57:6b:0b
epair0a: link state changed to UP
epair0b: link state changed to UP
Limiting icmp ping response from 242 to 200 packets/sec
Limiting icmp ping response from 217 to 200 packets/sec
Limiting icmp ping response from 287 to 200 packets/sec
Limiting icmp ping response from 265 to 200 packets/sec
Limiting icmp ping response from 268 to 200 packets/sec
Limiting icmp ping response from 264 to 200 packets/sec
Limiting icmp ping response from 264 to 200 packets/sec
Limiting icmp ping response from 261 to 200 packets/sec
Limiting icmp ping response from 264 to 200 packets/sec
Limiting icmp ping response from 257 to 200 packets/sec
Limiting icmp ping response from 245 to 200 packets/sec
Limiting icmp ping response from 260 to 200 packets/sec
Limiting icmp ping response from 264 to 200 packets/sec
Limiting icmp ping response from 257 to 200 packets/sec
Limiting icmp ping response from 250 to 200 packets/sec
Limiting icmp ping response from 255 to 200 packets/sec
Limiting icmp ping response from 260 to 200 packets/sec
Limiting icmp ping response from 263 to 200 packets/sec
Limiting icmp ping response from 261 to 200 packets/sec
Limiting icmp ping response from 261 to 200 packets/sec
Limiting icmp ping response from 261 to 200 packets/sec
Limiting icmp ping response from 259 to 200 packets/sec
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
pf_test: kif == NULL, if_xname epair0b
passed  [18.029s]
sys/netpfil/common/dummynet:pf_queue_v6  ->  epair0a: Ethernet address: 02:89:9c:a0:dc:0a
epair0b: Ethernet address: 02:89:9c:a0:dc:0b
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN


Fatal trap 12: page fault while in kernel mode
cpuid = 1; apic id = 01
fault virtual address	= 0x10
fault code		= supervisor read data, page not present
instruction pointer	= 0x20:0xffffffff80e2ac3f
stack pointer	        = 0x28:0xfffffe00cf3a0c40
frame pointer	        = 0x28:0xfffffe00cf3a0d10
code segment		= base 0x0, limit 0xfffff, type 0x1b
			= DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags	= interrupt enabled, resume, IOPL = 0
current process		= 0 (dummynet)
trap number		= 12
panic: page fault
cpuid = 1
time = 1643368078
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe00cf3a0a00
vpanic() at vpanic+0x17f/frame 0xfffffe00cf3a0a50
panic() at panic+0x43/frame 0xfffffe00cf3a0ab0
trap_fatal() at trap_fatal+0x385/frame 0xfffffe00cf3a0b10
trap_pfault() at trap_pfault+0xab/frame 0xfffffe00cf3a0b70
calltrap() at calltrap+0x8/frame 0xfffffe00cf3a0b70
--- trap 0xc, rip = 0xffffffff80e2ac3f, rsp = 0xfffffe00cf3a0c40, rbp = 0xfffffe00cf3a0d10 ---
ip6_input() at ip6_input+0x4f/frame 0xfffffe00cf3a0d10
netisr_dispatch_src() at netisr_dispatch_src+0xaf/frame 0xfffffe00cf3a0d70
dummynet_send() at dummynet_send+0x1dd/frame 0xfffffe00cf3a0db0
dummynet_task() at dummynet_task+0x36d/frame 0xfffffe00cf3a0e40
taskqueue_run_locked() at taskqueue_run_locked+0xaa/frame 0xfffffe00cf3a0ec0
taskqueue_thread_loop() at taskqueue_thread_loop+0xc2/frame 0xfffffe00cf3a0ef0
fork_exit() at fork_exit+0x80/frame 0xfffffe00cf3a0f30
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe00cf3a0f30
--- trap 0, rip = 0, rsp = 0, rbp = 0 ---
KDB: enter: panic
[ thread pid 0 tid 100106 ]
Stopped at      kdb_enter+0x37: movq    $0,0x1283a3e(%rip)
db:0:kdb.enter.panic> show pcpu
cpuid        = 1
dynamic pcpu = 0xfffffe008f59b600
curthread    = 0xfffffe00987f7740: pid 0 tid 100106 critnest 1 "dummynet"
curpcb       = 0xfffffe00987f7c50
fpcurthread  = none
idlethread   = 0xfffffe001122ec80: tid 100004 "idle: cpu1"
self         = 0xffffffff82411000
curpmap      = 0xffffffff81e8c258
tssp         = 0xffffffff82411384
rsp0         = 0xfffffe00cf3a1000
kcr3         = 0x8000000002363003
ucr3         = 0xffffffffffffffff
scr3         = 0x1ddbf5b2d
gs32p        = 0xffffffff82411404
ldt          = 0xffffffff82411444
tss          = 0xffffffff82411434
curvnet      = 0xfffff8001f8d4940
spin locks held:
db:0:kdb.enter.panic>  reset
cpu_reset: Restarting BSP
cpu_reset_proxy: Stopped CPU 1
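
A note on reading the trap frame above: a fault virtual address of 0x10 together
with "supervisor read data, page not present" usually means the kernel read a
struct member at a small offset through a NULL pointer, here while ip6_input()
processed a packet handed off by dummynet_task() via netisr_dispatch_src().
The sketch below only illustrates that pattern; the structure and field names
are hypothetical and are not taken from the FreeBSD sources.

/*
 * Illustrative sketch only -- not FreeBSD code. A member that sits at
 * offset 0x10 of a structure, read through a NULL pointer, produces a
 * page fault at virtual address 0x10, which the kernel reports as a
 * fatal trap 12 and turns into the panic seen above.
 */
#include <stddef.h>
#include <stdio.h>

struct pkt_meta {               /* hypothetical layout on amd64 */
	void	*a;             /* offset 0x00 */
	void	*b;             /* offset 0x08 */
	void	*ifp;           /* offset 0x10 -- the faulting offset */
};

int
main(void)
{
	struct pkt_meta *m = NULL;

	printf("offsetof(ifp) = %#zx\n", offsetof(struct pkt_meta, ifp));
	/* Dereferencing m->ifp here would fault at virtual address 0x10. */
	(void)m;
	return (0);
}

Given that the epair interfaces had just been torn down while dummynet still
held a queued packet, a use-after-teardown race is a plausible reading of the
backtrace, but the log alone does not confirm it.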
+ rc=0
+ echo 'bhyve return code = 0'
bhyve return code = 0
+ sudo /usr/sbin/bhyvectl '--vm=testvm-main-amd64-20500' --destroy
+ sh -ex freebsd-ci/scripts/test/extract-meta.sh
+ METAOUTDIR=meta-out
+ rm -fr meta-out
+ mkdir meta-out
+ tar xvf meta.tar -C meta-out
x ./
x ./disable-notyet-tests.sh
x ./run.sh
x ./run-kyua.sh
x ./disable-dtrace-tests.sh
x ./auto-shutdown
x ./disable-zfs-tests.sh
+ rm -f 'test-report.*'
+ mv 'meta-out/test-report.*' .
mv: rename meta-out/test-report.* to ./test-report.*: No such file or directory
+ report=test-report.xml
+ [ -e freebsd-ci/jobs/FreeBSD-main-amd64-test/xfail-list -a -e test-report.xml ]
+ rm -f disk-cam
+ jot 5
+ rm -f disk1
+ rm -f disk2
+ rm -f disk3
+ rm -f disk4
+ rm -f disk5
+ rm -f disk-test.img
[PostBuildScript] - [INFO] Executing post build scripts.
[FreeBSD-main-amd64-test] $ /bin/sh -xe /tmp/jenkins8068827180427493056.sh
+ ./freebsd-ci/artifact/post-link.py
Post link: {'job_name': 'FreeBSD-main-amd64-test', 'commit': '263660c061ac76d449cbca7bdd0db2ecdfad76d9', 'branch': 'main', 'target': 'amd64', 'target_arch': 'amd64', 'link_type': 'latest_tested'}
"Link created: main/latest_tested/amd64/amd64 -> ../../263660c061ac76d449cbca7bdd0db2ecdfad76d9/amd64/amd64\n"
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error?
Checking for post-build
Performing post-build step
Checking if email needs to be generated
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Sending mail from default account using System Admin e-mail address