From: Mark Millard <marklmi@yahoo.com>
Subject: Re: devel/llvm13 failed to reclaim memory on 8 GB Pi4 running -current [ZFS context: similar to UFS]
Date: Sat, 29 Jan 2022 03:59:40 -0800
To: bob prohaska
Cc: Free BSD
List-Id: Porting FreeBSD to ARM processors
List-Archive: https://lists.freebsd.org/archives/freebsd-arm
On 2022-Jan-28, at 19:20, Mark Millard wrote:

> On 2022-Jan-28, at 15:05, Mark Millard wrote:
> 
>> On 2022-Jan-28, at 00:31, Mark Millard wrote:
>> 
>>>> . . .
>>> 
>>> UFS context:
>>> 
>>> . . .; load averages: . . . MaxObs: 5.47, 4.99, 4.82
>>> . . . threads: . . ., 14 MaxObsRunning
>>> . . .
>>> Mem: . . ., 6457Mi MaxObsActive, 1263Mi MaxObsWired, 7830Mi MaxObs(Act+Wir+Lndry)
>>> Swap: 8192Mi Total, 8192Mi Used, K Free, 100% Inuse, 8192Mi MaxObsUsed, 14758Mi MaxObs(Act+Lndry+SwapUsed), 16017Mi MaxObs(Act+Wir+Lndry+SwapUsed)
>>> 
>>> 
>>> Console:
>>> 
>>> swap_pager: out of swap space
>>> swp_pager_getswapspace(4): failed
>>> swp_pager_getswapspace(1): failed
>>> swp_pager_getswapspace(1): failed
>>> swp_pager_getswapspace(2): failed
>>> swp_pager_getswapspace(2): failed
>>> swp_pager_getswapspace(4): failed
>>> swp_pager_getswapspace(1): failed
>>> swp_pager_getswapspace(9): failed
>>> swp_pager_getswapspace(4): failed
>>> swp_pager_getswapspace(7): failed
>>> swp_pager_getswapspace(29): failed
>>> swp_pager_getswapspace(9): failed
>>> swp_pager_getswapspace(1): failed
>>> swp_pager_getswapspace(2): failed
>>> swp_pager_getswapspace(1): failed
>>> swp_pager_getswapspace(4): failed
>>> swp_pager_getswapspace(1): failed
>>> swp_pager_getswapspace(10): failed
>>> 
>>> . . . Then some time with no messages . . .
>>> 
>>> vm_pageout_mightbe_oom: kill context: v_free_count: 7740, v_inactive_count: 1
>>> Jan 27 23:01:07 CA72_UFS kernel: pid 57238 (c++), jid 3, uid 0, was killed: failed to reclaim memory
>>> swp_pager_getswapspace(2): failed
>>> 
>>> 
>>> Note: The "vm_pageout_mightbe_oom: kill context:"
>>> notice is one of the few parts of an old reporting
>>> patch Mark J. had supplied (long ago) that still
>>> fits in the modern code (or that I was able to keep
>>> updated enough to fit, anyway). It is another of the
>>> personal updates that I keep in my source trees,
>>> such as in /usr/main-src/ .
>>> 
>>> diff --git a/sys/vm/vm_pageout.c b/sys/vm/vm_pageout.c
>>> index 36d5f3275800..f345e2d4a2d4 100644
>>> --- a/sys/vm/vm_pageout.c
>>> +++ b/sys/vm/vm_pageout.c
>>> @@ -1828,6 +1828,8 @@ vm_pageout_mightbe_oom(struct vm_domain *vmd, int page_shortage,
>>>   * start OOM.  Initiate the selection and signaling of the
>>>   * victim.
>>>   */
>>> + printf("vm_pageout_mightbe_oom: kill context: v_free_count: %u, v_inactive_count: %u\n",
>>> +     vmd->vmd_free_count, vmd->vmd_pagequeues[PQ_INACTIVE].pq_cnt);
>>>   vm_pageout_oom(VM_OOM_MEM);
>>> 
>>>   /*
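
For scale (assuming the usual 4 KiByte base pages on this arm64
kernel): the figures the added printf reports are page counts, so
the kill context quoted above amounts to roughly

    v_free_count:     7740 pages * 4 KiByte/page =~ 30 MiByte free RAM
    v_inactive_count: 1 page left on the inactive queue

at the moment of the kill.
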
>>> 
>>> 
>>> Again, I'd used vm.pfault_oom_attempts inappropriately
>>> for running out of swap (although with UFS it did do
>>> a kill fairly soon):
>>> 
>>> # Delay when persistent low free RAM leads to
>>> # Out Of Memory killing of processes:
>>> vm.pageout_oom_seq=120
>>> #
>>> # For plenty of swap/paging space (will not
>>> # run out), avoid pageout delays leading to
>>> # Out Of Memory killing of processes:
>>> vm.pfault_oom_attempts=-1
>>> #
>>> # For possibly insufficient swap/paging space
>>> # (might run out), increase the pageout delay
>>> # that leads to Out Of Memory killing of
>>> # processes (showing defaults at the time):
>>> #vm.pfault_oom_attempts= 3
>>> #vm.pfault_oom_wait= 10
>>> # (The multiplication is the total but there
>>> # are other potential tradeoffs in the factors
>>> # multiplied, even for nearly the same total.)
>>> 
>>> I'll change:
>>> 
>>> vm.pfault_oom_attempts
>>> vm.pfault_oom_wait
>>> 
>>> and reboot --and start the bulk somewhat before
>>> going to bed.
>>> 
>>> 
>>> For reference:
>>> 
>>> [00:02:13] [01] [00:00:00] Building devel/llvm13 | llvm13-13.0.0_3
>>> [07:37:05] [01] [07:34:52] Finished devel/llvm13 | llvm13-13.0.0_3: Failed: build
>>> 
>>> 
>>> [ 65% 4728/7265] . . . flang/lib/Evaluate/fold-designator.cpp
>>> [ 65% 4729/7265] . . . flang/lib/Evaluate/fold-integer.cpp
>>> FAILED: tools/flang/lib/Evaluate/CMakeFiles/obj.FortranEvaluate.dir/fold-integer.cpp.o
>>> [ 65% 4729/7265] . . . flang/lib/Evaluate/fold-logical.cpp
>>> [ 65% 4729/7265] . . . flang/lib/Evaluate/fold-complex.cpp
>>> [ 65% 4729/7265] . . . flang/lib/Evaluate/fold-real.cpp
>>> 
>>> So the flang/lib/Evaluate/fold-integer.cpp one was the one killed.
>>> 
>>> Notably, the specific sources being compiled are different
>>> than in the ZFS context report. But this might be because
>>> of my killing ninja explicitly in the ZFS context, before
>>> killing the running compilers.
>>> 
>>> Again, using the options to avoid building the Fortran
>>> compiler probably avoids such memory use --if you do not
>>> need the Fortran compiler.
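
A rough sketch of what "using the options" can look like, assuming
the devel/llvm13 port exposes its Fortran front end as a FLANG
option (check the port's actual option list before relying on that
name):

    # interactively, for a plain ports build:
    make -C /usr/ports/devel/llvm13 config
    # or for a poudriere setup:
    poudriere options devel/llvm13
    # or non-interactively, in the make.conf the build uses:
    OPTIONS_UNSET+=FLANG

Turning the option off only helps if nothing else being built
needs the Fortran compiler.
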
>> 
>> 
>> UFS based on instead using (not vm.pfault_oom_attempts=-1):
>> 
>> vm.pfault_oom_attempts= 3
>> vm.pfault_oom_wait= 10
>> 
>> It reached swap-space-full:
>> 
>> . . .; load averages: . . . MaxObs: 5.42, 4.98, 4.80
>> . . . threads: . . ., 11 MaxObsRunning
>> . . .
>> Mem: . . ., 6482Mi MaxObsActive, 1275Mi MaxObsWired, 7832Mi MaxObs(Act+Wir+Lndry)
>> Swap: 8192Mi Total, 8192Mi Used, K Free, 100% Inuse, 4096B In, 81920B Out, 8192Mi MaxObsUsed, 14733Mi MaxObs(Act+Lndry+SwapUsed), 16007Mi MaxObs(Act+Wir+Lndry+SwapUsed)
>> 
>> 
>> swap_pager: out of swap space
>> swp_pager_getswapspace(5): failed
>> swp_pager_getswapspace(25): failed
>> swp_pager_getswapspace(1): failed
>> swp_pager_getswapspace(31): failed
>> swp_pager_getswapspace(6): failed
>> swp_pager_getswapspace(1): failed
>> swp_pager_getswapspace(25): failed
>> swp_pager_getswapspace(10): failed
>> swp_pager_getswapspace(17): failed
>> swp_pager_getswapspace(27): failed
>> swp_pager_getswapspace(5): failed
>> swp_pager_getswapspace(11): failed
>> swp_pager_getswapspace(9): failed
>> swp_pager_getswapspace(29): failed
>> swp_pager_getswapspace(2): failed
>> swp_pager_getswapspace(1): failed
>> swp_pager_getswapspace(9): failed
>> swp_pager_getswapspace(20): failed
>> swp_pager_getswapspace(4): failed
>> swp_pager_getswapspace(21): failed
>> swp_pager_getswapspace(11): failed
>> swp_pager_getswapspace(2): failed
>> swp_pager_getswapspace(21): failed
>> swp_pager_getswapspace(2): failed
>> swp_pager_getswapspace(1): failed
>> swp_pager_getswapspace(2): failed
>> swp_pager_getswapspace(3): failed
>> swp_pager_getswapspace(3): failed
>> swp_pager_getswapspace(2): failed
>> swp_pager_getswapspace(1): failed
>> swp_pager_getswapspace(20): failed
>> swp_pager_getswapspace(2): failed
>> swp_pager_getswapspace(1): failed
>> swp_pager_getswapspace(16): failed
>> swp_pager_getswapspace(6): failed
>> swap_pager: out of swap space
>> swp_pager_getswapspace(4): failed
>> swp_pager_getswapspace(9): failed
>> swp_pager_getswapspace(17): failed
>> swp_pager_getswapspace(30): failed
>> swp_pager_getswapspace(1): failed
>> 
>> . . . Then some time with no messages . . .
>> 
>> vm_pageout_mightbe_oom: kill context: v_free_count: 7875, v_inactive_count: 1
>> Jan 28 14:36:44 CA72_UFS kernel: pid 55178 (c++), jid 3, uid 0, was killed: failed to reclaim memory
>> swp_pager_getswapspace(11): failed
>> 
>> 
>> So, not all that much different from how the
>> vm.pfault_oom_attempts=-1 example looked.
>> 
>> 
>> [00:01:00] [01] [00:00:00] Building devel/llvm13 | llvm13-13.0.0_3
>> [07:41:39] [01] [07:40:39] Finished devel/llvm13 | llvm13-13.0.0_3: Failed: build
>> 
>> Again it killed:
>> 
>> FAILED: tools/flang/lib/Evaluate/CMakeFiles/obj.FortranEvaluate.dir/fold-integer.cpp.o
>> 
>> So, basically the same stopping area as for the
>> vm.pfault_oom_attempts=-1 example.
>> 
>> 
>> I'll set things up for swap totaling to 30 GiBytes, reboot,
>> and start it again. This will hopefully let me see and
>> report MaxObs??? figures for a successful build when there
>> is RAM+SWAP: 38 GiBytes. So: more than 9 GiBytes per compiler
>> instance (mean).
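
For reference, one common way to add that much swap without
repartitioning is an md(4)-backed swap file, roughly as in the
handbook (the size and path here are just an illustration):

    dd if=/dev/zero of=/usr/swap0 bs=1m count=22528    # ~22 GiByte more
    chmod 0600 /usr/swap0
    # then add to /etc/fstab:
    #   md99  none  swap  sw,file=/usr/swap0,late  0  0
    swapon -aL    # or reboot

A dedicated swap partition on the USB3 SSD avoids the file-backed
md overhead, which matters once the system is paging this heavily.
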
> 
> The analogous ZFS test with:
> 
> vm.pfault_oom_attempts= 3
> vm.pfault_oom_wait= 10
> 
> got:
> 
> . . .; load averages: . . . MaxObs: 5.90, 5.07, 4.80
> . . . threads: . . ., 11 MaxObsRunning
> . . .
> Mem: . . ., 6006Mi MaxObsActive
> . . .
> Swap: 8192Mi Total, 8192Mi Used, 32768B Free, 99% Inuse, 28984Ki In, 4792Ki Out, 8192Mi MaxObsUsed, 14282Mi MaxObs(Act+Lndry+SwapUsed), 16009Mi MaxObs(Act+Wir+Lndry+SwapUsed)
> 
> (I got that slightly early, before the 100% showed up.)
> 
> 
> swap_pager: out of swap space
> swp_pager_getswapspace(10): failed
> swp_pager_getswapspace(1): failed
> swp_pager_getswapspace(4): failed
> swp_pager_getswapspace(16): failed
> swp_pager_getswapspace(5): failed
> swp_pager_getswapspace(2): failed
> swp_pager_getswapspace(8): failed
> swp_pager_getswapspace(12): failed
> swp_pager_getswapspace(1): failed
> swp_pager_getswapspace(32): failed
> swp_pager_getswapspace(4): failed
> swp_pager_getswapspace(9): failed
> swp_pager_getswapspace(4): failed
> swp_pager_getswapspace(17): failed
> swp_pager_getswapspace(21): failed
> swp_pager_getswapspace(10): failed
> swp_pager_getswapspace(18): failed
> swp_pager_getswapspace(6): failed
> swp_pager_getswapspace(2): failed
> swp_pager_getswapspace(14): failed
> swp_pager_getswapspace(1): failed
> swp_pager_getswapspace(5): failed
> swp_pager_getswapspace(25): failed
> swp_pager_getswapspace(12): failed
> swp_pager_getswapspace(5): failed
> swp_pager_getswapspace(7): failed
> swp_pager_getswapspace(10): failed
> swp_pager_getswapspace(3): failed
> swp_pager_getswapspace(24): failed
> swap_pager: out of swap space
> swp_pager_getswapspace(11): failed
> swap_pager: out of swap space
> swp_pager_getswapspace(17): failed
> swp_pager_getswapspace(5): failed
> swp_pager_getswapspace(1): failed
> swp_pager_getswapspace(32): failed
> swp_pager_getswapspace(15): failed
> swp_pager_getswapspace(19): failed
> swp_pager_getswapspace(1): failed
> swp_pager_getswapspace(25): failed
> swp_pager_getswapspace(11): failed
> swp_pager_getswapspace(1): failed
> swp_pager_getswapspace(15): failed
> swp_pager_getswapspace(1): failed
> swp_pager_getswapspace(8): failed
> swp_pager_getswapspace(31): failed
> swp_pager_getswapspace(26): failed
> swp_pager_getswapspace(1): failed
> swp_pager_getswapspace(20): failed
> swp_pager_getswapspace(4): failed
> swp_pager_getswapspace(3): failed
> swp_pager_getswapspace(3): failed
> swp_pager_getswapspace(9): failed
> swp_pager_getswapspace(1): failed
> swp_pager_getswapspace(15): failed
> swp_pager_getswapspace(3): failed
> swp_pager_getswapspace(7): failed
> swp_pager_getswapspace(8): failed
> swp_pager_getswapspace(17): failed
> swp_pager_getswapspace(2): failed
> swp_pager_getswapspace(10): failed
> swp_pager_getswapspace(6): failed
> swp_pager_getswapspace(2): failed
> swp_pager_getswapspace(11): failed
> swp_pager_getswapspace(21): failed
> swp_pager_getswapspace(1): failed
> swp_pager_getswapspace(9): failed
> swp_pager_getswapspace(32): failed
> swp_pager_getswapspace(2): failed
> swp_pager_getswapspace(32): failed
> swp_pager_getswapspace(25): failed
> swp_pager_getswapspace(21): failed
> swp_pager_getswapspace(22): failed
> swp_pager_getswapspace(14): failed
> swp_pager_getswapspace(10): failed
> swap_pager: out of swap space
> swp_pager_getswapspace(1): failed
> swp_pager_getswapspace(28): failed
> swp_pager_getswapspace(2): failed
> swp_pager_getswapspace(13): failed
> swp_pager_getswapspace(3): failed
> swp_pager_getswapspace(31): failed
> swp_pager_getswapspace(20): failed
> swp_pager_getswapspace(2): failed
> vm_pageout_mightbe_oom: kill context: v_free_count: 8186, v_inactive_count: 1
> Jan 28 18:42:42 CA72_4c8G_ZFS kernel: pid 98734 (c++), jid 3, uid 0, was killed: failed to reclaim memory
> 
> [00:00:49] [01] [00:00:00] Building devel/llvm13 | llvm13-13.0.0_3
> [08:06:09] [01] [08:05:20] Finished devel/llvm13 | llvm13-13.0.0_3: Failed: build
> 
> FAILED: tools/flang/lib/Evaluate/CMakeFiles/obj.FortranEvaluate.dir/fold-complex.cpp.o
> 
> and flang/lib/Evaluate/fold-integer.cpp was one of the compiles going on.
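
Both runs above vary the same three OOM-related tunables. A minimal
sketch for checking and adjusting them at runtime, assuming a recent
main kernel where all three are ordinary read/write sysctls (they
can then be made permanent via /etc/sysctl.conf):

    sysctl vm.pageout_oom_seq vm.pfault_oom_attempts vm.pfault_oom_wait
    sysctl vm.pageout_oom_seq=120
    sysctl vm.pfault_oom_attempts=3 vm.pfault_oom_wait=10

With attempts=3 and wait=10 the page-fault path tolerates roughly
3 * 10 = 30 seconds of failed attempts before contributing to an
Out Of Memory kill; -1 disables that particular trigger, leaving
the vm.pageout_oom_seq-based one.
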

Finally, what a successful build of devel/llvm13 on UFS was
like on the 8 GiByte RPi4B (overclocked, USB3 NVMe based SSD):

[00:00:57] [01] [00:00:00] Building devel/llvm13 | llvm13-13.0.0_3
[12:25:40] [01] [12:24:43] Finished devel/llvm13 | llvm13-13.0.0_3: Success

where its Maximum Observed figures were:

. . .; load averages: . . . MaxObs: 6.15, 5.71, 5.31
. . . threads: . . ., 11 MaxObsRunning
. . .
Mem: . . ., 6465Mi MaxObsActive, 1355Mi MaxObsWired, 7832Mi MaxObs(Act+Wir+Lndry)
Swap: . . ., 10429Mi MaxObsUsed, 16799Mi MaxObs(Act+Lndry+SwapUsed), 18072Mi MaxObs(Act+Wir+Lndry+SwapUsed)

But 18072Mi MaxObs(Act+Wir+Lndry+SwapUsed) == 17.6484375 GiByte,
so more than 17.6484375 GiByte for RAM+SWAP, depending on how
much room for inactive and margin one chooses. Probably 20+
GiBytes, so 12+ GiBytes of swap for 8 GiBytes of RAM.

(Reminder: maximum of sum <= sum of maximums.)


===
Mark Millard
marklmi at yahoo.com