Re: High swap use building Kyuafile on Pi3
- In reply to: bob prohaska : "Re: High swap use building Kyuafile on Pi3"
Date: Sat, 09 Sep 2023 00:43:48 UTC
On Sep 8, 2023, at 16:42, bob prohaska <fbsd@www.zefox.net> wrote:

> On Fri, Sep 08, 2023 at 01:32:06PM -0700, Mark Millard wrote:
>> On Sep 8, 2023, at 10:58, Mark Millard <marklmi@yahoo.com> wrote:
>>
>>> On Sep 8, 2023, at 09:14, bob prohaska <fbsd@www.zefox.net> wrote:
>>>
>>>> While building a -current world on Pi3 using -DWITH_META_MODE it appears that
>>>> swap use is quite heavy (~2GB) well after clang finishes compiling.
>>>>
>>>> The tail of the build log shows
>>>> Building /usr/obj/usr/src/arm64.aarch64/lib/googletest/tests/gmock_main/Kyuafile
>>>> as the last entry, suggesting something in tests is the cause.
>>>>
>>>> The machine reports
>>>> FreeBSD pelorus.zefox.org 15.0-CURRENT FreeBSD 15.0-CURRENT aarch64 1500000 #49 main-n265134-4a9cd9fc22d7: Mon Sep 4 10:08:30 PDT 2023 bob@pelorus.zefox.org:/usr/obj/usr/src/arm64.aarch64/sys/GENERIC arm64
>>>>
>>>> The build command is
>>>> make -j3 -DWITH_META_MODE buildworld > buildworld.log
>>>
>>> So up to 3 builders can be active at the same time.
>>> You seem to have described only 1 builder's activity.
>>>
>>> Was it the only active builder? If other builders were
>>> active at the time you also need to check on what they
>>> were doing. The ~2GB is the total across all activity,
>>> including the (up to) 3 builders.
>>>
>>> A command that would show the active builders would be:
>>>
>>> # poudriere status -b
>
> I'm lost at this point. No poudriere use is involved,

Gack, I substituted the wrong context. Sorry. None of the poudriere details that I referenced apply.

> it's simply a -j3 buildworld in the "building everything" phase.

However, buildworld and buildkernel with -jN do perform parallel build activities (for example, parallel compiles/links), with up to N at a time. Do you know how many and which commands were active at the time: 1, 2, or 3 active for a sustained, overlapping period (if more than 1)?

> Normally swap use peaks while building clang and then
> diminishes markedly in the building everything stage.
I'll remind you of the history: a Google Test build step used to block your builds until the optimization level was lowered to reduce the memory-space resource use. LLVM-related build activity is likely to account for the longest stretch of notable memory-space use, but it need not be the only example of notable memory-space use overall. I'll also note that the early LLVM activity builds library code; later, the actual compiler, linker, and lldb are built (using that library code to do so). These activities are likely of shorter duration than the library builds, but that need not mean much about where the memory-space usage peaks.

> Previously, by then a -j3 build isn't swap-bound.

Monitoring what is running during the ~2GB swap-space use would be appropriate. top, sorted in some appropriate order, may give a clue, for example. More detail about what is running would seem to be needed.

> Buildworld was still running, with three jobs, two of which
> were over 1GB each in total size, though the RES numbers
> totaled only about 700 MB IIRC.

What was each of the 3 jobs doing over the time frame leading up to and spanning the ~2GB? (I assume USE_TMPFS=no and other avoidance of having tmpfs competing for RAM, for example.)

RPi3B variant: so 1 GiByte of RAM or so, with 2 GiByte of swap space or so in use, giving (1+2) GiByte of RAM+SWAP or so (based on the little detail I have). Not all of the RAM is actually available, so that is on the low side overall: the kernel and other processes use RAM too. RES only gives you (incomplete) information about the RAM part of RAM+SWAP.

Note that, supposing there were only the 3 jobs and the kernel: 3 GiByte/4 is about 750 MiByte of RAM+SWAP as a mean. You indicate two jobs of possibly 1 GB each, so those might total about 512 MiByte more than the mean scaled to 2 jobs. The figures do not suggest vastly less than ~2GB of swap space use for the whole system. It still suggests the original reporting over-focused on one job, apparently one whose command had already completed.
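The rough RAM+SWAP budget arithmetic above can be sketched as follows. This is only an illustration of the estimate, under the stated assumptions (1 GiB of RAM on the RPi3B, ~2 GiB of swap in use, and 4 consumers: the 3 build jobs plus the kernel/other processes):

```python
# Sketch of the RAM+SWAP budget estimate, using the approximate
# figures from this thread (all sizes in MiB).

GIB = 1024  # MiB per GiB

ram_mib = 1 * GIB           # RPi3B: about 1 GiByte of RAM
swap_mib = 2 * GIB          # observed swap use, approximately
total_mib = ram_mib + swap_mib

consumers = 4               # assumption: 3 build jobs + kernel/other
mean_mib = total_mib / consumers
print(f"mean RAM+SWAP per consumer: {mean_mib:.0f} MiB")  # about 750 MiB

# Two jobs were reported at roughly 1 GB total size each:
two_jobs_mib = 2 * GIB
excess_over_mean = two_jobs_mib - 2 * mean_mib
print(f"excess of the two large jobs over the scaled mean: {excess_over_mean:.0f} MiB")
```

The point of the arithmetic: the per-process RES figures (~700 MB total) understate the picture, since RES covers only the RAM-resident part, while the mean RAM+SWAP share per consumer is already near 768 MiB and the two large jobs sit about 512 MiB above that.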
Evidence over a more complete span is likely required, possibly covering the time leading up to as well as during the ~2GB of swap use.

===
Mark Millard
marklmi at yahoo.com