Subject: Re: devel/llvm13 failed to reclaim memory on 8 GB Pi4 running -current [UFS context: used the whole swap space too]
From: Mark Millard <marklmi@yahoo.com>
Date: Fri, 28 Jan 2022 15:05:06 -0800
To: bob prohaska
Cc: Free BSD
List-Id: Porting FreeBSD to ARM processors
List-Archive: https://lists.freebsd.org/archives/freebsd-arm
References: <20220127164512.GA51200@www.zefox.net> <2C7E741F-4703-4E41-93FE-72E1F16B60E2@yahoo.com> <20220127214801.GA51710@www.zefox.net> <5E861D46-128A-4E09-A3CF-736195163B17@yahoo.com> <20220127233048.GA51951@www.zefox.net> <6528ED25-A3C6-4277-B951-1F58ADA2D803@yahoo.com> <10B4E2F0-6219-4674-875F-A7B01CA6671C@yahoo.com> <54CD0806-3902-4B9C-AA30-5ED003DE4D41@yahoo.com> <9771EB33-037E-403E-8A77-7E8E98DCF375@yahoo.com>

On 2022-Jan-28, at 00:31, Mark Millard wrote:

>> . . .
>
> UFS context:
>
> . . .; load averages: . . . MaxObs: 5.47, 4.99, 4.82
> . . . threads: . . ., 14 MaxObsRunning
> . . .
> Mem: . . ., 6457Mi MaxObsActive, 1263Mi MaxObsWired, 7830Mi MaxObs(Act+Wir+Lndry)
> Swap: 8192Mi Total, 8192Mi Used, K Free, 100% Inuse, 8192Mi MaxObsUsed, 14758Mi MaxObs(Act+Lndry+SwapUsed), 16017Mi MaxObs(Act+Wir+Lndry+SwapUsed)
>
>
> Console:
>
> swap_pager: out of swap space
> swp_pager_getswapspace(4): failed
> swp_pager_getswapspace(1): failed
> swp_pager_getswapspace(1): failed
> swp_pager_getswapspace(2): failed
> swp_pager_getswapspace(2): failed
> swp_pager_getswapspace(4): failed
> swp_pager_getswapspace(1): failed
> swp_pager_getswapspace(9): failed
> swp_pager_getswapspace(4): failed
> swp_pager_getswapspace(7): failed
> swp_pager_getswapspace(29): failed
> swp_pager_getswapspace(9): failed
> swp_pager_getswapspace(1): failed
> swp_pager_getswapspace(2): failed
> swp_pager_getswapspace(1): failed
> swp_pager_getswapspace(4): failed
> swp_pager_getswapspace(1): failed
> swp_pager_getswapspace(10): failed
>
> . . . Then some time with no messages . . .
>
> vm_pageout_mightbe_oom: kill context: v_free_count: 7740, v_inactive_count: 1
> Jan 27 23:01:07 CA72_UFS kernel: pid 57238 (c++), jid 3, uid 0, was killed: failed to reclaim memory
> swp_pager_getswapspace(2): failed
>
>
> Note: The "vm_pageout_mightbe_oom: kill context:" notice is one of
> the few parts of an old reporting patch Mark J. had supplied (long
> ago) that still fits in the modern code (or that I was able to keep
> updated enough to fit, anyway). It is another of the personal
> updates that I keep in my source trees, such as in /usr/main-src/ .
>
> diff --git a/sys/vm/vm_pageout.c b/sys/vm/vm_pageout.c
> index 36d5f3275800..f345e2d4a2d4 100644
> --- a/sys/vm/vm_pageout.c
> +++ b/sys/vm/vm_pageout.c
> @@ -1828,6 +1828,8 @@ vm_pageout_mightbe_oom(struct vm_domain *vmd, int page_shortage,
>  	 * start OOM. Initiate the selection and signaling of the
>  	 * victim.
>  	 */
> +	printf("vm_pageout_mightbe_oom: kill context: v_free_count: %u, v_inactive_count: %u\n",
> +	    vmd->vmd_free_count, vmd->vmd_pagequeues[PQ_INACTIVE].pq_cnt);
>  	vm_pageout_oom(VM_OOM_MEM);
>
>  	/*
>
>
> Again, I'd used vm.pfault_oom_attempts inappropriately for running
> out of swap (although with UFS it did do a kill fairly soon):
>
> # Delay when persistent low free RAM leads to
> # Out Of Memory killing of processes:
> vm.pageout_oom_seq=120
> #
> # For plenty of swap/paging space (will not
> # run out), avoid pageout delays leading to
> # Out Of Memory killing of processes:
> vm.pfault_oom_attempts=-1
> #
> # For possibly insufficient swap/paging space
> # (might run out), increase the pageout delay
> # that leads to Out Of Memory killing of
> # processes (showing defaults at the time):
> #vm.pfault_oom_attempts= 3
> #vm.pfault_oom_wait= 10
> # (The multiplication is the total but there
> # are other potential tradeoffs in the factors
> # multiplied, even for nearly the same total.)
>
> I'll change:
>
> vm.pfault_oom_attempts
> vm.pfault_oom_wait
>
> and reboot --and start the bulk somewhat before going to bed.
>
>
> For reference:
>
> [00:02:13] [01] [00:00:00] Building devel/llvm13 | llvm13-13.0.0_3
> [07:37:05] [01] [07:34:52] Finished devel/llvm13 | llvm13-13.0.0_3: Failed: build
>
> [ 65% 4728/7265] . . . flang/lib/Evaluate/fold-designator.cpp
> [ 65% 4729/7265] . . . flang/lib/Evaluate/fold-integer.cpp
> FAILED: tools/flang/lib/Evaluate/CMakeFiles/obj.FortranEvaluate.dir/fold-integer.cpp.o
> [ 65% 4729/7265] . . . flang/lib/Evaluate/fold-logical.cpp
> [ 65% 4729/7265] . . . flang/lib/Evaluate/fold-complex.cpp
> [ 65% 4729/7265] . . . flang/lib/Evaluate/fold-real.cpp
>
> So the flang/lib/Evaluate/fold-integer.cpp compile was the one killed.
>
> Notably, the specific sources being compiled are different than in
> the ZFS context report. But this might be because of my killing
> ninja explicitly in the ZFS context, before killing the running
> compilers.
>
> Again, using the port options to avoid building the Fortran compiler
> probably avoids such memory use --if you do not need the Fortran
> compiler.

UFS context, this time based on using the following (instead of
vm.pfault_oom_attempts=-1):

vm.pfault_oom_attempts= 3
vm.pfault_oom_wait= 10

It reached swap-space-full:

. . .; load averages: . . . MaxObs: 5.42, 4.98, 4.80
. . . threads: . . ., 11 MaxObsRunning
. . .
Mem: . . ., 6482Mi MaxObsActive, 1275Mi MaxObsWired, 7832Mi MaxObs(Act+Wir+Lndry)
Swap: 8192Mi Total, 8192Mi Used, K Free, 100% Inuse, 4096B In, 81920B Out, 8192Mi MaxObsUsed, 14733Mi MaxObs(Act+Lndry+SwapUsed), 16007Mi MaxObs(Act+Wir+Lndry+SwapUsed)

swap_pager: out of swap space
swp_pager_getswapspace(5): failed
swp_pager_getswapspace(25): failed
swp_pager_getswapspace(1): failed
swp_pager_getswapspace(31): failed
swp_pager_getswapspace(6): failed
swp_pager_getswapspace(1): failed
swp_pager_getswapspace(25): failed
swp_pager_getswapspace(10): failed
swp_pager_getswapspace(17): failed
swp_pager_getswapspace(27): failed
swp_pager_getswapspace(5): failed
swp_pager_getswapspace(11): failed
swp_pager_getswapspace(9): failed
swp_pager_getswapspace(29): failed
swp_pager_getswapspace(2): failed
swp_pager_getswapspace(1): failed
swp_pager_getswapspace(9): failed
swp_pager_getswapspace(20): failed
swp_pager_getswapspace(4): failed
swp_pager_getswapspace(21): failed
swp_pager_getswapspace(11): failed
swp_pager_getswapspace(2): failed
swp_pager_getswapspace(21): failed
swp_pager_getswapspace(2): failed
swp_pager_getswapspace(1): failed
swp_pager_getswapspace(2): failed
swp_pager_getswapspace(3): failed
swp_pager_getswapspace(3): failed
swp_pager_getswapspace(2): failed
swp_pager_getswapspace(1): failed
swp_pager_getswapspace(20): failed
swp_pager_getswapspace(2): failed
swp_pager_getswapspace(1): failed
swp_pager_getswapspace(16): failed
swp_pager_getswapspace(6): failed
swap_pager: out of swap space
swp_pager_getswapspace(4): failed
swp_pager_getswapspace(9): failed
swp_pager_getswapspace(17): failed
swp_pager_getswapspace(30): failed
swp_pager_getswapspace(1): failed

. . . Then some time with no messages . . .

vm_pageout_mightbe_oom: kill context: v_free_count: 7875, v_inactive_count: 1
Jan 28 14:36:44 CA72_UFS kernel: pid 55178 (c++), jid 3, uid 0, was killed: failed to reclaim memory
swp_pager_getswapspace(11): failed

So, not all that much different from how the vm.pfault_oom_attempts=-1
example looked.
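
As an aside for anyone repeating this kind of experiment: the values
actually in effect on the running kernel can be read back from
userland. A minimal C sketch, assuming only sysctlbyname(3) and the
stock integer sysctls named above (vm.pageout_oom_seq,
vm.pfault_oom_attempts, vm.pfault_oom_wait); it is an illustration,
not part of the build logs:

/* oomknobs.c: print the OOM-related VM tunables currently in effect.
 * Build: cc -o oomknobs oomknobs.c
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

static void
show(const char *name)
{
	int val;
	size_t len = sizeof(val);

	/* All three knobs are plain integer sysctls. */
	if (sysctlbyname(name, &val, &len, NULL, 0) == -1)
		printf("%s: <unavailable>\n", name);
	else
		printf("%s=%d\n", name, val);
}

int
main(void)
{
	show("vm.pageout_oom_seq");
	show("vm.pfault_oom_attempts");
	show("vm.pfault_oom_wait");
	return (0);
}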
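
The two figures that the kill-context printf reports also have
userland counterparts, vm.stats.vm.v_free_count and
vm.stats.vm.v_inactive_count, so roughly the same numbers can be
watched during a build without carrying the kernel patch. A small
polling sketch along those lines (again an illustration; the 10
second interval is arbitrary):

/* vmwatch.c: periodically print the free and inactive page counts,
 * the counters the vm_pageout_mightbe_oom kill-context printf shows.
 * Build: cc -o vmwatch vmwatch.c
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>
#include <unistd.h>

static u_int
rd(const char *name)
{
	u_int val = 0;
	size_t len = sizeof(val);

	(void)sysctlbyname(name, &val, &len, NULL, 0);
	return (val);
}

int
main(void)
{
	for (;;) {
		printf("v_free_count: %u, v_inactive_count: %u\n",
		    rd("vm.stats.vm.v_free_count"),
		    rd("vm.stats.vm.v_inactive_count"));
		sleep(10);	/* poll every 10 seconds */
	}
}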
The bulk build summary for this attempt:

[00:01:00] [01] [00:00:00] Building devel/llvm13 | llvm13-13.0.0_3
[07:41:39] [01] [07:40:39] Finished devel/llvm13 | llvm13-13.0.0_3: Failed: build

Again it killed:

FAILED: tools/flang/lib/Evaluate/CMakeFiles/obj.FortranEvaluate.dir/fold-integer.cpp.o

So, basically the same stopping area as for the vm.pfault_oom_attempts=-1
example.

I'll set things up for swap totaling 30 GiBytes, reboot, and start it
again. This will hopefully let me see and report the MaxObs??? figures
for a successful build when there is RAM+SWAP: 38 GiBytes. So: more than
9 GiBytes per compiler instance (mean), given the up-to-4 compiler
instances running in parallel on the 4 cores.

===
Mark Millard
marklmi at yahoo.com