FreeBSD-main-amd64-test - Build #20504 - Still Failing
Date: Fri, 28 Jan 2022 17:35:15 UTC
FreeBSD-main-amd64-test - Build #20504 (1a0dde338df8b493d74dcb2f7bbaaa6c02cab371) - Still Failing

Build information: https://ci.FreeBSD.org/job/FreeBSD-main-amd64-test/20504/
Full change log: https://ci.FreeBSD.org/job/FreeBSD-main-amd64-test/20504/changes
Full build log: https://ci.FreeBSD.org/job/FreeBSD-main-amd64-test/20504/console

Status explanation:
"Failure" - the build is suspected of having been broken by the following changes
"Still Failing" - the build has not been fixed by the following changes; this is a notification that these changes have not been fully tested by the CI system

Change summaries:
(These commits are likely, but not certainly, responsible)

1a0dde338df8b493d74dcb2f7bbaaa6c02cab371 by emaste:
dma: limit lines to 998 characters

The end of the build log:

[...truncated 4.20 MB...]
epair1a: link state changed to UP
epair1b: link state changed to UP
epair2a: Ethernet address: 02:52:19:44:48:0a
epair2b: Ethernet address: 02:52:19:44:48:0b
epair2a: link state changed to UP
epair2b: link state changed to UP
epair3a: Ethernet address: 02:f8:f6:20:60:0a
epair3b: Ethernet address: 02:f8:f6:20:60:0b
epair3a: link state changed to UP
epair3b: link state changed to UP
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
epair1a: link state changed to DOWN
epair1b: link state changed to DOWN
epair3b: link state changed to DOWN
epair3a: link state changed to DOWN
epair2b: link state changed to DOWN
epair2a: link state changed to DOWN
passed [0.498s]
sys/netpfil/common/dummynet:ipfw_interface_removal ->
epair0a: Ethernet address: 02:a5:2c:68:2c:0a
epair0b: Ethernet address: 02:a5:2c:68:2c:0b
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
passed [7.394s]
sys/netpfil/common/dummynet:ipfw_pipe ->
epair0a: Ethernet address: 02:23:75:2b:53:0a
epair0b: Ethernet address: 02:23:75:2b:53:0b
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN
passed [27.651s]
sys/netpfil/common/dummynet:ipfw_pipe_v6 ->
epair0a: Ethernet address: 02:bb:8e:3b:0a:0a
epair0b: Ethernet address: 02:bb:8e:3b:0a:0b
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN
passed [23.879s]
sys/netpfil/common/dummynet:ipfw_queue ->
epair0a: Ethernet address: 02:69:e1:7d:1b:0a
epair0b: Ethernet address: 02:69:e1:7d:1b:0b
epair0a: link state changed to UP
epair0b: link state changed to UP
Limiting icmp ping response from 243 to 200 packets/sec
Limiting icmp ping response from 268 to 200 packets/sec
Limiting icmp ping response from 257 to 200 packets/sec
Limiting icmp ping response from 214 to 200 packets/sec
Limiting icmp ping response from 285 to 200 packets/sec
Limiting icmp ping response from 268 to 200 packets/sec
Limiting icmp ping response from 271 to 200 packets/sec
Limiting icmp ping response from 261 to 200 packets/sec
Limiting icmp ping response from 265 to 200 packets/sec
Limiting icmp ping response from 264 to 200 packets/sec
Limiting icmp ping response from 264 to 200 packets/sec
Limiting closed port RST response from 15279 to 200 packets/sec
Limiting icmp ping response from 266 to 200 packets/sec
Limiting icmp ping response from 265 to 200 packets/sec
Limiting icmp ping response from 264 to 200 packets/sec
Limiting icmp ping response from 263 to 200 packets/sec
Limiting icmp ping response from 264 to 200 packets/sec
Limiting icmp ping response from 259 to 200 packets/sec
Limiting icmp ping response from 265 to 200 packets/sec
Limiting icmp ping response from 256 to 200 packets/sec
Limiting icmp ping response from 266 to 200 packets/sec
Limiting icmp ping response from 261 to 200 packets/sec
Limiting icmp ping response from 263 to 200 packets/sec
Limiting icmp ping response from 264 to 200 packets/sec
Limiting icmp ping response from 270 to 200 packets/sec
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN
passed [18.468s]
sys/netpfil/common/dummynet:ipfw_queue_v6 ->
epair0a: Ethernet address: 02:27:78:81:da:0a
epair0b: Ethernet address: 02:27:78:81:da:0b
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN
passed [14.088s]
sys/netpfil/common/dummynet:pf_interface_removal ->
epair0a: Ethernet address: 02:16:be:a4:e2:0a
epair0b: Ethernet address: 02:16:be:a4:e2:0b
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0a: link state changed to DOWN
epair0b: link state changed to DOWN
passed [5.916s]
sys/netpfil/common/dummynet:pf_nat ->
epair0a: Ethernet address: 02:63:d0:ae:14:0a
epair0b: Ethernet address: 02:63:d0:ae:14:0b
epair0a: link state changed to UP
epair0b: link state changed to UP
epair1a: Ethernet address: 02:1b:b4:0d:f7:0a
epair1b: Ethernet address: 02:1b:b4:0d:f7:0b
epair1a: link state changed to UP
epair1b: link state changed to UP
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN
epair1a: link state changed to DOWN
epair1b: link state changed to DOWN
passed [0.832s]
sys/netpfil/common/dummynet:pf_pipe ->
epair0a: Ethernet address: 02:ab:bc:82:f4:0a
epair0b: Ethernet address: 02:ab:bc:82:f4:0b
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN
passed [24.741s]
sys/netpfil/common/dummynet:pf_pipe_v6 ->
epair0a: Ethernet address: 02:85:e4:26:b9:0a
epair0b: Ethernet address: 02:85:e4:26:b9:0b
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN
passed [22.528s]
sys/netpfil/common/dummynet:pf_queue ->
epair0a: Ethernet address: 02:8f:2f:81:86:0a
epair0b: Ethernet address: 02:8f:2f:81:86:0b
epair0a: link state changed to UP
epair0b: link state changed to UP
Limiting icmp ping response from 243 to 200 packets/sec
Limiting icmp ping response from 215 to 200 packets/sec
Limiting icmp ping response from 286 to 200 packets/sec
Limiting icmp ping response from 267 to 200 packets/sec
Limiting icmp ping response from 271 to 200 packets/sec
Limiting icmp ping response from 257 to 200 packets/sec
Limiting icmp ping response from 261 to 200 packets/sec
Limiting icmp ping response from 258 to 200 packets/sec
Limiting icmp ping response from 257 to 200 packets/sec
Limiting icmp ping response from 259 to 200 packets/sec
Limiting icmp ping response from 256 to 200 packets/sec
Limiting icmp ping response from 261 to 200 packets/sec
Limiting icmp ping response from 261 to 200 packets/sec
Limiting icmp ping response from 264 to 200 packets/sec
Limiting icmp ping response from 258 to 200 packets/sec
Limiting icmp ping response from 254 to 200 packets/sec
Limiting icmp ping response from 262 to 200 packets/sec
Limiting icmp ping response from 257 to 200 packets/sec
Limiting icmp ping response from 264 to 200 packets/sec
Limiting icmp ping response from 267 to 200 packets/sec
Limiting icmp ping response from 265 to 200 packets/sec
Limiting icmp ping response from 250 to 200 packets/sec
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN
passed [18.012s]
sys/netpfil/common/dummynet:pf_queue_v6 ->
epair0a: Ethernet address: 02:9e:70:6a:e2:0a
epair0b: Ethernet address: 02:9e:70:6a:e2:0b
epair0a: link state changed to UP
epair0b: link state changed to UP
epair0b: link state changed to DOWN
epair0a: link state changed to DOWN

Fatal trap 12: page fault while in kernel mode
cpuid = 1; apic id = 01
fault virtual address   = 0x10
fault code              = supervisor read data, page not present
instruction pointer     = 0x20:0xffffffff80e2ac3f
stack pointer           = 0x28:0xfffffe00c2ddec40
frame pointer           = 0x28:0xfffffe00c2dded10
code segment            = base 0x0, limit 0xfffff, type 0x1b
                        = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 0 (dummynet)
trap number             = 12
panic: page fault
cpuid = 1
time = 1643391313
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe00c2ddea00
vpanic() at vpanic+0x17f/frame 0xfffffe00c2ddea50
panic() at panic+0x43/frame 0xfffffe00c2ddeab0
trap_fatal() at trap_fatal+0x385/frame 0xfffffe00c2ddeb10
trap_pfault() at trap_pfault+0xab/frame 0xfffffe00c2ddeb70
calltrap() at calltrap+0x8/frame 0xfffffe00c2ddeb70
--- trap 0xc, rip = 0xffffffff80e2ac3f, rsp = 0xfffffe00c2ddec40, rbp = 0xfffffe00c2dded10 ---
ip6_input() at ip6_input+0x4f/frame 0xfffffe00c2dded10
netisr_dispatch_src() at netisr_dispatch_src+0xaf/frame 0xfffffe00c2dded70
dummynet_send() at dummynet_send+0x1dd/frame 0xfffffe00c2ddedb0
dummynet_task() at dummynet_task+0x36d/frame 0xfffffe00c2ddee40
taskqueue_run_locked() at taskqueue_run_locked+0xaa/frame 0xfffffe00c2ddeec0
taskqueue_thread_loop() at taskqueue_thread_loop+0xc2/frame 0xfffffe00c2ddeef0
fork_exit() at fork_exit+0x80/frame 0xfffffe00c2ddef30
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe00c2ddef30
--- trap 0, rip = 0, rsp = 0, rbp = 0 ---
KDB: enter: panic
[ thread pid 0 tid 100106 ]
Stopped at      kdb_enter+0x37: movq    $0,0x1283a3e(%rip)
db:0:kdb.enter.panic> show pcpu
cpuid        = 1
dynamic pcpu = 0xfffffe008f59b600
curthread    = 0xfffffe00987e6000: pid 0 tid 100106 critnest 1 "dummynet"
curpcb       = 0xfffffe00987e6510
fpcurthread  = none
idlethread   = 0xfffffe001122ec80: tid 100004 "idle: cpu1"
self         = 0xffffffff82411000
curpmap      = 0xffffffff81e8c258
tssp         = 0xffffffff82411384
rsp0         = 0xfffffe00c2ddf000
kcr3         = 0x8000000002363002
ucr3         = 0xffffffffffffffff
scr3         = 0x1280d6ebb
gs32p        = 0xffffffff82411404
ldt          = 0xffffffff82411444
tss          = 0xffffffff82411434
curvnet      = 0xfffff8010b0c8700
spin locks held:
db:0:kdb.enter.panic> reset
cpu_reset: Restarting BSP
cpu_reset_proxy: Stopped CPU 1
+ rc=0
+ echo 'bhyve return code = 0'
bhyve return code = 0
+ sudo /usr/sbin/bhyvectl '--vm=testvm-main-amd64-20504' --destroy
+ sh -ex freebsd-ci/scripts/test/extract-meta.sh
+ METAOUTDIR=meta-out
+ rm -fr meta-out
+ mkdir meta-out
+ tar xvf meta.tar -C meta-out
x ./
x ./disable-notyet-tests.sh
x ./run-kyua.sh
x ./auto-shutdown
x ./run.sh
x ./disable-dtrace-tests.sh
x ./disable-zfs-tests.sh
+ rm -f 'test-report.*'
+ mv 'meta-out/test-report.*' .
mv: rename meta-out/test-report.* to ./test-report.*: No such file or directory
+ report=test-report.xml
+ [ -e freebsd-ci/jobs/FreeBSD-main-amd64-test/xfail-list -a -e test-report.xml ]
+ rm -f disk-cam
+ jot 5
+ rm -f disk1
+ rm -f disk2
+ rm -f disk3
+ rm -f disk4
+ rm -f disk5
+ rm -f disk-test.img
[PostBuildScript] - [INFO] Executing post build scripts.
[FreeBSD-main-amd64-test] $ /bin/sh -xe /tmp/jenkins6442957992039035687.sh
+ ./freebsd-ci/artifact/post-link.py
Post link: {'job_name': 'FreeBSD-main-amd64-test', 'commit': '1a0dde338df8b493d74dcb2f7bbaaa6c02cab371', 'branch': 'main', 'target': 'amd64', 'target_arch': 'amd64', 'link_type': 'latest_tested'}
"Link created: main/latest_tested/amd64/amd64 -> ../../1a0dde338df8b493d74dcb2f7bbaaa6c02cab371/amd64/amd64\n"
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error?
Checking for post-build
Performing post-build step
Checking if email needs to be generated
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Sending mail from default account using System Admin e-mail address