Subject: Re: I've started collecting tmpfs usage figures from a poudriere-devel bulk -a for later publishing some of the top ones (handy for TMPFS_BLACKLIST judgments)
From: Mark Millard <marklmi@yahoo.com>
Date: Sat, 4 May 2024 09:23:22 -0700
To: FreeBSD Mailing List, FreeBSD Toolchain
Cc: Nuno Teixeira
List-Id: Porting software to FreeBSD
List-Archive: https://lists.freebsd.org/archives/freebsd-ports

On May 2, 2024, at 13:42, Mark Millard wrote:

> On Apr 30, 2024, at 19:08, Mark Millard wrote:
>
>> On Apr 28, 2024, at 18:48, Mark Millard wrote:
>>
>>> I've modified my local poudriere-devel to have Success and Failure lines also
>>> report the tmpfs size at that point.
>>> Using, say, script(1) to log the output to a file allows later sorting and
>>> listing of the TMPFS usage figures. (The context is an amd64 one that has
>>> the resources to do a full bulk -a with USE_TMPFS=all involved.) An example
>>> (from an in-progress bulk -a that has a long way to go):
>>>
>>> # grep TMPFS: ~/mmjnk-bulk-a-output.txt | sort -n -r -k11
>>>
>>> [00:36:37] [18] [00:26:53] Finished print/texlive-docs | texlive-docs-20230313: Success ending TMPFS: 10.67 GiB
>>> [01:04:41] [07] [00:06:58] Finished net-mgmt/telegraf | telegraf-1.30.1_1: Success ending TMPFS: 10.52 GiB
>>> [01:03:32] [25] [00:06:09] Finished security/trivy | trivy-0.50.1_1: Success ending TMPFS: 10.10 GiB
>>> . . .
>>> [01:15:56] [20] [00:00:54] Finished databases/pg_tileserv | pg_tileserv-1.0.9_12: Failed: build TMPFS: 2.61 GiB
>>> . . .
>>>
>>> Note that the design is for sort with -k11 to work for Success and
>>> for Failure. (This is why "ending" is in place for Success.) I
>>> choose to use poudriere -N (no coloring) for this kind of activity.
>>>
>>> This helps for figuring out what all is appropriate for listing in
>>> TMPFS_BLACKLIST for a poudriere-devel configuration to avoid tmpfs
>>> competing too much for RAM+SWAP. (But appropriate free file system
>>> space is needed.)
>>>
>>>
>>> "<" below is what is new, ">" is what was original, in
>>> /usr/local/share/poudriere/common.sh :
>>>
>>> 5928,5934d5927
>>> < tmpfs_at_end="$(env BLOCKSIZE=512 df -t tmpfs \
>>> < ${MASTERMNTROOT}/${MY_JOBID}/ \
>>> < ${MASTERMNTROOT}/${MY_JOBID}/.p/ \
>>> < ${MASTERMNTROOT}/${MY_JOBID}/usr/local/ \
>>> < 2>/dev/null | tail -3 \
>>> < | awk '{ tmpfs_use += $3; } END { printf "%s %.2f %s", "TMPFS:", tmpfs_use*512/(1024**3), "GiB" }')"
>>> <
>>> 5942c5935
>>> < "Success${COLOR_RESET} ending ${tmpfs_at_end}"
>>> ---
>>>> "Success"
>>> 5968c5961
>>> < "Failed: ${COLOR_PHASE}${failed_phase}${COLOR_RESET} ${tmpfs_at_end}"
>>> ---
>>>> "Failed: ${COLOR_PHASE}${failed_phase}"
>>>
>>>
>>> The form of use that I've done also involves (over?) use of
>>> MUTUALLY_EXCLUSIVE_BUILD_PACKAGES . The data is not as good
>>> for this other use, but the same .txt file can be processed
>>> with:
>>>
>>> # grep TMPFS: ~/mmjnk-bulk-a-output.txt | sort -r -k3 | more
>>> [01:42:09] [04] [00:48:16] Finished lang/erlang-runtime21 | erlang-runtime21-21.3.8.24_3: Success ending TMPFS: 1.92 GiB
>>> [01:38:39] [28] [00:44:41] Finished lang/erlang-runtime22 | erlang-runtime22-22.3.4.27: Success ending TMPFS: 1.92 GiB
>>> [01:05:41] [02] [00:34:54] Finished lang/erlang-runtime26 | erlang-runtime26-26.2.4: Success ending TMPFS: 2.02 GiB
>>> . . .
>>>
>>> to find longer running package builds. This is subject to
>>> significant variation based on what other builders are
>>> running in parallel at the time and what sort of load
>>> averages are involved over the period in question. The
>>> MUTUALLY_EXCLUSIVE_BUILD_PACKAGES that I've used will
>>> limit that to some extent. But the result is comparisons
>>> of some builds that have no activity in parallel by other
>>> builders vs. builds that have extensive parallel activity
>>> by other builders (of not-huge packages).
>>>
>>> Note: In modern times the [1D: notation and the like for
>>> what the -k3 compares are not well placed in the overall
>>> list compared to the likes of, say, [20: . The day vs.
>>> hour comparison is not a straightforward thing to sort
>>> on.
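[One possible workaround for that sorting limitation, as a sketch only (it assumes the duration field always looks like [HH:MM:SS] or [xD:HH:MM:SS] and reuses the same log file name as above): convert the duration to seconds before sorting, e.g.

# grep TMPFS: ~/mmjnk-bulk-a-output.txt | awk '
    {
        t = $3
        gsub(/[][D]/, "", t)            # "[1D:05:25:13]" -> "1:05:25:13"
        n = split(t, p, ":")
        if (n == 4)
            secs = p[1]*86400 + p[2]*3600 + p[3]*60 + p[4]
        else
            secs = p[1]*3600 + p[2]*60 + p[3]
        printf "%8d %s\n", secs, $0
    }' | sort -rn | head

The leading seconds column can be ignored or stripped once the ordering has been checked.]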
>>>
>>> Hopefully in a few days I'll be able to list off example
>>> top tmpfs usage for USE_TMPFS=all and top build times as
>>> well (such as they are).
>>>
>>> Note:
>>> This is from my personal environment. I've not tried to
>>> simulate how FreeBSD's official package builders are set
>>> up to operate.
>>
>> I've decided that the "build times" list is not reasonable
>> for use: too much variability of context.
>>
>> But here is a list of 148 or so of the bigger USE_TMPFS=all
>> tmpfs-usage builds from a poudriere-devel "-N bulk -c -a"
>> run. It covers the > 7.66 GiByte tmpfs examples for how I
>> build.
>>
>> # grep TMPFS: mmjnk-bulk-a-output.txt | sort -n -r -k11 | head -84
>> [1D:10:04:32] [25] [02:14:20] Finished www/chromium | chromium-124.0.6367.60: Success ending TMPFS: 31.76 GiB
> . . .
>> [1D:11:05:43] [04] [00:16:26] Finished astro/kstars | kstars-3.6.6_1,1: Success ending TMPFS: 7.69 GiB
>> . . .
>
> Note: I had not put ttk into the TMPFS_BLACKLIST as I had intended.
>
>> For reference: Queued: 34535 Built: 33990 Failed: 159 Skipped: 100 Ignored: 286 Fetched: 0
>>
>> After rebooting, I'm going to re-run a "-N bulk -c -a" based on
>> TMPFS_BLACKLIST having a hopefully good approximation to the
>> above list matching and TMPFS_BLACKLIST_TMPDIR also being
>> defined. Also, use of ALLOW_MAKE_JOBS=yes but no use of
>> MUTUALLY_EXCLUSIVE_BUILD_PACKAGES :
>> RAM+SWAP: 192 GiBytes + 512 GiBytes == 704 GiBytes.
>> Media: PCIe Optane 1.4T has the file system and the swap.
>> System: 7950X3D based.
>
> For the 32 hardware thread, 32 builder context, with USE_TMPFS=all
> --but also using the TMPFS_BLACKLIST-- the result was:
>
> [1D:16:49:31] Stopping 32 builders
> . . .
> [main-amd64-bulk_a-default] [2024-05-01_02h27m06s] [committing] Queued: 34535 Built: 33985 Failed: 162 Skipped: 101 Ignored: 287 Fetched: 0 Tobuild: 0 Time: 1D:16:50:42
>
> So: a little under 41 hrs for a "-N bulk -c -a".
>
> My adjusted version of top that keeps track of and reports some
> Maximum Observed figures (MaxObsYYY naming) reported Swap having
> 50674Mi MaxObsUsed. So SystemRAM+MaxObsSwapUsed of slightly
> under 241.5 GiBytes.
>
> So the system SystemRAM+SystemSWAP being 704 GiBytes is a little
> under 3 times the example observed usage, giving margin for
> larger needs over time.
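[For context on where those knobs live: they are all poudriere.conf settings. The following is only a sketch (the blacklist globs and the tmpdir path are placeholder examples, not the exact configuration used for the run above, and poudriere.conf.sample is the reference for the precise matching semantics):

USE_TMPFS=all
ALLOW_MAKE_JOBS=yes
# Builds matching these globs get an on-disk WRKDIR instead of tmpfs:
TMPFS_BLACKLIST="chromium* iridium-browser* electron* dotnet*"
# Where those blacklisted WRKDIRs are placed instead of tmpfs:
TMPFS_BLACKLIST_TMPDIR=/usr/local/poudriere/data/cache/tmpfs_blacklist
]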
>
> Note the below build finish times and durations that indicate
> the overlapping time frames for the parallel builds:
>
> [1D:08:08:33] [15] [05:25:13] Finished devel/electron27 | electron27-27.3.11: Success ending TMPFS: 5.15 GiB
> [1D:08:24:30] [12] [05:37:16] Finished devel/electron29 | electron29-29.3.1: Success ending TMPFS: 5.21 GiB
> [1D:08:49:26] [04] [05:42:10] Finished devel/electron25 | electron25-25.9.8_3: Success ending TMPFS: 4.91 GiB
> [1D:11:53:32] [16] [07:50:11] Finished www/ungoogled-chromium | ungoogled-chromium-123.0.6312.122: Success ending TMPFS: 8.08 GiB
> [1D:11:54:22] [20] [07:51:01] Finished www/iridium | iridium-browser-2024.01.120_1: Success ending TMPFS: 6.65 GiB
> [1D:11:57:03] [07] [07:53:42] Finished www/chromium | chromium-124.0.6367.60: Success ending TMPFS: 8.06 GiB
> [1D:12:01:01] [05] [06:49:56] Finished devel/electron28 | electron28-28.3.1: Success ending TMPFS: 5.13 GiB
> [1D:13:01:04] [10] [07:23:00] Finished lang/dotnet | dotnet-8.0.0: Success ending TMPFS: 4.65 GiB
>
> But I'll note that all 32 builders were active over those periods
> as well, all using ALLOW_MAKE_JOBS=yes without any use of the likes
> of MAKE_JOBS_NUMBER_LIMIT. It is a high load average style of
> building that basically almost always has work pending that can be
> immediately scheduled on any hardware thread that is no longer doing
> other work. Still, vastly less than 30hr+ for the 2 *chromium, 25hr+
> for iridium, and 20hr+ for electron25. (27..29 are blacklisted on
> the build server for main-amd64 and there is no 26).
>
> Reminder: prior experimentation had already fetched everything,
> thus the "Fetched: 0". Fetching would add to the elapsed time.
>
> I make no claim that build servers should be pushed that hard on
> any sort of sustained basis --but the above does indicate what a
> 7950X3D class 16-core/32-hwthread system can do for "bulk -c -a"
> activity: somewhat under 2 days for such a "bulk -c -a" to build
> 33985 packages (plus 162 failures). (The PCIe 1.4T Optane media
> helps as well.)
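[As an aside on turning lists like the one below (or the earlier > 7.66 GiByte one) into TMPFS_BLACKLIST candidates: a one-liner of roughly this shape pulls out the origins. A sketch only; the 8 GiB threshold is arbitrary and the field positions assume exactly the "Finished <origin> | <pkg>: ... TMPFS: N.NN GiB" format shown in these logs:

# grep TMPFS: ~/mmjnk-bulk-a-output.txt | awk '$(NF-1) > 8.0 { print $5 }' | sort -u

The resulting origins (or the corresponding package-name globs, depending on what TMPFS_BLACKLIST matches against) are then candidates for the blacklist.]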
>
> Based on the TMPFS_BLACKLIST that I used, the resulting largest
> tmpfs usage examples were (ttk had been omitted from TMPFS_BLACKLIST):
>
> [1D:13:21:23] [12] [00:14:48] Finished science/dynare | dynare-5.4_7: Success ending TMPFS: 10.74 GiB
> [1D:12:55:42] [07] [00:40:11] Finished math/octave | octave-9.1.0: Success ending TMPFS: 10.27 GiB
> [1D:16:34:13] [25] [00:05:04] Finished x11/kde5 | kde5-5.27.11.23.08.5_2: Success ending TMPFS: 9.95 GiB
> [1D:15:29:52] [12] [00:09:15] Finished science/ttk | ttk-1.2.0_1: Success ending TMPFS: 9.59 GiB
> [1D:16:30:49] [08] [00:02:04] Finished biology/biostar-tools | biostar-tools-1.10.1_3: Success ending TMPFS: 9.56 GiB
> [1D:16:01:32] [32] [00:25:56] Finished science/code_saturne | code_saturne-8.1.1: Success ending TMPFS: 9.50 GiB
> [1D:11:16:08] [15] [00:48:29] Finished emulators/virtualbox-ose | virtualbox-ose-6.1.50_1: Success ending TMPFS: 9.48 GiB
> [1D:13:41:14] [32] [00:19:20] Finished deskutils/nextcloudclient | nextcloudclient-3.12.3: Success ending TMPFS: 9.21 GiB
> [1D:12:14:10] [24] [00:16:20] Finished print/tex-xetex | tex-xetex-0.99993_5: Success ending TMPFS: 9.15 GiB
> [1D:15:20:33] [18] [01:14:02] Finished science/paraview | paraview-5.12.0_2: Success ending TMPFS: 9.14 GiB
> [1D:11:55:04] [32] [00:06:42] Finished print/tex-dvipdfmx | tex-dvipdfmx-20230313_1: Success ending TMPFS: 9.14 GiB
> [1D:14:04:21] [20] [00:01:13] Finished print/texlive-full | texlive-full-20230313: Success ending TMPFS: 9.10 GiB
> [1D:13:55:09] [20] [00:05:39] Finished science/libghemical | libghemical-3.0.0_20: Success ending TMPFS: 8.98 GiB
> [1D:11:21:28] [02] [00:53:57] Finished print/tex-luatex | tex-luatex-1.16.0_1: Success ending TMPFS: 8.92 GiB
> [1D:10:21:01] [31] [00:45:03] Finished emulators/virtualbox-ose-legacy | virtualbox-ose-legacy-5.2.44_17: Success ending TMPFS: 8.90 GiB
> [1D:14:26:10] [17] [01:13:36] Finished graphics/qgis-ltr | qgis-ltr-3.34.6: Success ending TMPFS: 8.87 GiB
> [1D:14:20:59] [06] [01:08:24] Finished graphics/qgis | qgis-3.36.2: Success ending TMPFS: 8.87 GiB
> [1D:16:13:36] [02] [00:41:13] Finished math/octave-forge | octave-forge-20240403: Success ending TMPFS: 8.56 GiB
> . . .
>
>
>
> An additional note for *chromium/iridium/electron* (and dotnet):
>
> I also previously had done a "-N bulk -a" with those and
> more listed in MUTUALLY_EXCLUSIVE_BUILD_PACKAGES and no
> TMPFS_BLACKLIST use (but ALLOW_MAKE_JOBS=yes in use):
>
> [1D:01:50:13] [03] [00:33:53] Finished lang/dotnet | dotnet-8.0.0: Success ending TMPFS: 21.23 GiB
> [1D:10:04:32] [25] [02:14:20] Finished www/chromium | chromium-124.0.6367.60: Success ending TMPFS: 31.76 GiB
> [1D:12:02:49] [27] [01:57:58] Finished devel/electron25 | electron25-25.9.8_3: Success ending TMPFS: 20.55 GiB
> [1D:13:51:47] [11] [01:48:15] Finished devel/electron27 | electron27-27.3.11: Success ending TMPFS: 22.51 GiB
> [1D:14:53:51] [01] [00:57:16] Finished devel/electron28 | electron28-28.3.1: Success ending TMPFS: 22.66 GiB
> [1D:15:51:46] [02] [00:57:52] Finished devel/electron29 | electron29-29.3.1: Success ending TMPFS: 22.49 GiB
> [1D:17:19:08] [01] [01:14:40] Finished www/iridium | iridium-browser-2024.01.120_1: Success ending TMPFS: 25.66 GiB
> [1D:22:36:10] [02] [01:09:31] Finished www/ungoogled-chromium | ungoogled-chromium-123.0.6312.122: Success ending TMPFS: 21.43 GiB

For comparison/contrast: I tried it in my UFS-based boot media
on the same system . . .
[I later give notes about graphics/pinta getting stuck, much like
print/miktex had in the ZFS experiment.]

For the 32 hardware thread, 32 builder context, with USE_TMPFS=all
--but also using the TMPFS_BLACKLIST-- the result was:

[1D:16:42:48] Stopping 32 builders
. . .
[main-amd64-bulk_a-default] [2024-05-02_23h01m58s] [committing] Queued: 34535 Built: 33990 Failed: 157 Skipped: 101 Ignored: 287 Fetched: 0 Tobuild: 0 Time: 1D:16:43:59

ZFS had been: Time: 1D:16:50:42

My adjusted version of top that keeps track of and reports some
Maximum Observed figures (MaxObsYYY naming) reported Swap having
45241Mi MaxObsUsed. So SystemRAM+MaxObsSwapUsed of slightly
under 236.2 GiBytes. (The ZFS example had been: 241.5 GiBytes.)

So the system SystemRAM+SystemSWAP being 704 GiBytes was a little
under 3 times the example observed usage in both cases, giving
margin for larger needs over time.

As for examples of builders getting stuck . . .

graphics/pinta in the UFS experiment had gotten stuck in threads of
/usr/local/bin/mono (mono-sgen):

[05] 15:31:47 graphics/pinta | pinta-1.7.1_4                 stage  15:28:31  2.30 GiB   0%   0%

# procstat -k -k 93415
  PID    TID COMM                TDNAME              KSTACK
93415 671706 mono-sgen           -                   mi_switch+0xba sleepq_catch_signals+0x2c6 sleepq_wait_sig+0x9 _sleep+0x1ae umtxq_sleep+0x2cd do_lock_umutex+0x6a6 __umtx_op_wait_umutex+0x49 sys__umtx_op+0x7e amd64_syscall+0x115 fast_syscall_common+0xf8
93415 678651 mono-sgen           SGen worker         mi_switch+0xba sleepq_catch_signals+0x2c6 sleepq_wait_sig+0x9 _sleep+0x1ae umtxq_sleep+0x2cd do_wait+0x244 __umtx_op_wait_uint_private+0x54 sys__umtx_op+0x7e amd64_syscall+0x115 fast_syscall_common+0xf8
93415 678652 mono-sgen           Finalizer           mi_switch+0xba sleepq_catch_signals+0x2c6 sleepq_wait_sig+0x9 _sleep+0x1ae umtxq_sleep+0x2cd __umtx_op_sem2_wait+0x49a sys__umtx_op+0x7e amd64_syscall+0x115 fast_syscall_common+0xf8
93415 678655 mono-sgen           -                   mi_switch+0xba sleepq_catch_signals+0x2c6 sleepq_wait_sig+0x9 _sleep+0x1ae umtxq_sleep+0x2cd do_wait+0x244 __umtx_op_wait_uint_private+0x54 sys__umtx_op+0x7e amd64_syscall+0x115 fast_syscall_common+0xf8
93415 678660 mono-sgen           Thread Pool Wor     mi_switch+0xba sleepq_catch_signals+0x2c6 sleepq_wait_sig+0x9 _sleep+0x1ae umtxq_sleep+0x2cd do_lock_umutex+0x6a6 __umtx_op_wait_umutex+0x49 sys__umtx_op+0x7e amd64_syscall+0x115 fast_syscall_common+0xf8

So I did a kill -9 93415 to let the bulk run complete.
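[When a builder looks hung like that, one quick way to check every matching process at once is a command of this sort (a sketch; mono-sgen is just the name from this particular incident, and pgrep may match more than the one wedged instance):

# procstat -k -k `pgrep mono-sgen`

followed by kill -9 on the wedged PID once the stacks show nothing but the kind of umtx sleeps listed above.]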
I then removed my ADDITION of BROKEN to print/miktex that had
gotten stuck in the ZFS experiment and tried, in the now tiny
load average context:

bulk print/miktex graphics/pinta

They both worked just fine, not getting stuck (UFS context):

[00:00:50] [02] [00:00:25] Finished graphics/pinta | pinta-1.7.1_4: Success ending TMPFS: 2.30 GiB
[00:14:11] [01] [00:13:47] Finished print/miktex | miktex-23.9_3: Success ending TMPFS: 3.21 GiB

I'll note that the procstat for the stuck print/miktex in the ZFS
context had looked like:

# procstat -k -k 70121
  PID    TID COMM                TDNAME              KSTACK
70121 409420 miktex-ctangle      -                   mi_switch+0xba sleepq_catch_signals+0x2c6 sleepq_wait_sig+0x9 _sleep+0x1ae umtxq_sleep+0x2cd do_wait+0x244 __umtx_op_wait+0x53 sys__umtx_op+0x7e amd64_syscall+0x115 fast_syscall_common+0xf8
70121 646547 miktex-ctangle      -                   mi_switch+0xba sleepq_catch_signals+0x2c6 sleepq_wait_sig+0x9 _sleep+0x1ae kqueue_scan+0x9f1 kqueue_kevent+0x13b kern_kevent_fp+0x4b kern_kevent_generic+0xd6 sys_kevent+0x61 amd64_syscall+0x115 fast_syscall_common+0xf8
70121 646548 miktex-ctangle      -                   mi_switch+0xba sleepq_catch_signals+0x2c6 sleepq_wait_sig+0x9 _sleep+0x1ae umtxq_sleep+0x2cd do_wait+0x244 __umtx_op_wait_uint_private+0x54 sys__umtx_op+0x7e amd64_syscall+0x115 fast_syscall_common+0xf8

It looks like there is some form of failing race condition that can
occur --and does rarely occur-- in high load average contexts.

===
Mark Millard
marklmi at yahoo.com