Re: More swap trouble with armv7, was Re: -current on armv7 stuck with flashing disk light
Date: Wed, 05 Jul 2023 08:50:09 UTC
On Tue, 4 Jul 2023 18:02:31 -0700
bob prohaska <fbsd@www.zefox.net> wrote:

> On Wed, Jul 05, 2023 at 08:22:43AM +0900, Tatsuki Makino wrote:
> > Hello.
> >
> > It may be possible to set stricter restrictions on selected ports
> > in /usr/local/etc/poudriere.d/make.conf. For example,
> >
> > # normally
> > .if ${.CURDIR:tA} == "/usr/ports/devel/llvm15"
> > MAKE_JOBS_NUMBER= 1
> > .endif
> >
> > # pattern matching can be performed
> > .if !empty(.CURDIR:tA:M/usr/ports/devel/llvm15) || \
> >     !empty(.CURDIR:tA:M/usr/ports/devel/llvm*)
> > MAKE_JOBS_NUMBER= 1
> > .endif
> >
> > # not limited to /usr/ports
> > .if !empty(.CURDIR:tA:T:Mllvm*) && !empty(.CURDIR:tA:H:T:Mdevel)
> > MAKE_JOBS_NUMBER= 1
> > .endif
> >
> > If we write this on an individual port basis, we can use the
> > resources to the very limit where they don't overflow :)
> >
>
> I just tried to turn off parallel jobs entirely by omitting
> ALLOW_MAKE_JOBS from /usr/local/etc/poudriere.conf. The machine ran
> out of swap as usual, in about the same time, despite having only
> two processes running that were visibly related to poudriere, with a
> total size of ~250 MB. The number of threads roughly halved, but the
> time to swap exhaustion didn't.
>
> While poudriere makes devel/llvm15 by itself, top reports
>
> last pid: 15623;  load averages: 0.88, 0.88, 0.75   up 0+00:26:27  17:39:54
> 34 processes:  2 running, 32 sleeping
> CPU: 27.9% user,  0.0% nice,  7.3% system,  0.0% interrupt, 64.8% idle
> Mem: 274M Active, 219M Inact, 177M Laundry, 221M Wired, 97M Buf, 22M Free
> Swap: 2048M Total, 1032M Used, 1016M Free, 50% Inuse
>
>   PID USERNAME  THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
> 14989 root        1  20    0    41M    25M select   2   0:10   0.01% pkg-static
>  1080 bob         1  20    0    14M  1460K select   0   0:00   0.00% sshd
>  1077 root        1  31    0    14M  1420K select   0   0:00   0.00% sshd
>  1029 root        1  23    0    13M  1600K select   1   0:00   0.00% sshd
>  1042 root        1  20    0    10M  1616K select   0   0:00   0.00% sendmail
> 14988 root        1  68    0    10M  2976K wait     0   0:00   0.00% pkg-static
>  1045 smmsp       1  68    0    10M  1344K pause    0   0:00   0.00% sendmail
> 15405 root        1 126    0  9432K  6636K CPU1     1   0:15 100.04% makewhatis
>  1081 bob         1  48    0  6852K  1016K pause    1   0:00   0.00% tcsh
>  1162 bob         1  52    0  6824K  1016K pause    3   0:00   0.00% tcsh
>  1166 bob         1  20    0  6688K  1784K CPU3     3   0:05   0.30% top
>   726 root        1  20    0  6612K  1348K select   3   0:00   0.00% devd
> 11515 root        1  68    0  6568K  2812K wait     3   0:01   0.00% make
>  1399 root        1  68    0  6212K  1712K nanslp   1   0:33   4.92% sh
>  8353 root        1  53    0  5820K  1632K piperd   0   0:00   0.00% sh
>  1099 root        1  20    0  5820K  1624K select   1   0:10   0.00% sh
>  8360 root        1  68    0  5820K  1588K wait     3   0:00   0.00% sh
>  1086 root        1  20    0  5584K  1048K ttyin    2   0:00   0.00% sh
> 11543 root        1  68    0  5480K  1548K wait     1   0:00   0.00% sh
>  1076 root        1  21    0  5424K  1120K wait     2   0:00   0.00% login
>  1085 bob         1  24    0  5380K  1116K wait     3   0:00   0.00% su
>
> The SIZE numbers in relation to swap used are puzzling. Shouldn't
> swap in use be roughly the total of SIZE minus the total of RES?
> That's not the case, unless my eyeball math is way off.
>
> The poudriere run just failed, in the same way as before, with 1228 MB
> of swap in use.
>
> It's tempting to try running make in /usr/ports/devel/llvm15, just
> to see if there's a difference in behavior.
>
> Thanks for reading,
>
> bob prohaska

Are you using swap-backed tmpfs for ${TMPDIR}? (This is usually the
default on recent installations.) If so, heavy use of ${TMPDIR} will
eat into swap space. Or is ${WRKDIR} on tmpfs (poudriere has options
to do so)? That causes the same problem.
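A quick way to check is something like the sketch below (this assumes
the default TMPDIR of /tmp and poudriere.conf in /usr/local/etc;
adjust the paths if your setup differs):

  # show the filesystem type backing /tmp, and list all tmpfs mounts
  df -hT /tmp
  mount -t tmpfs

  # show how poudriere is configured to use tmpfs
  grep -E '^(USE_TMPFS|TMPFS_BLACKLIST)' /usr/local/etc/poudriere.conf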
In these cases, try setting TMPDIR to somewhere on a regular
(disk-backed) partition. Poudriere also has options to control whether
tmpfs is used, so for poudriere, try setting TMPFS_BLACKLIST="llvm15"
in its configuration file; at least poudriere-devel has the option. If
non-devel poudriere does not yet support it, consider building
devel/llvm15 alone by naming it on the command line, with the tmpfs
option disabled (see the sketch below). HTH.

--
Tomoaki AOKI <junchoon@dec.sakura.ne.jp>
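For reference, a minimal sketch of the relevant poudriere.conf knobs
and a one-off build of just llvm15; the jail name "armv7" and ports
tree "default" are only examples, and TMPFS_BLACKLIST may require
poudriere-devel as noted above:

  # /usr/local/etc/poudriere.conf -- example values only
  USE_TMPFS=yes                 # keep tmpfs for work directories in general...
  TMPFS_BLACKLIST="llvm15"      # ...but not for the port that exhausts swap
  # or disable tmpfs for builds entirely:
  # USE_TMPFS=no

  # build devel/llvm15 on its own
  poudriere bulk -j armv7 -p default devel/llvm15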