RE: How to explain high memory consumption of a jail after all large processes in it have finished?

From: Mark Millard <marklmi_at_yahoo.com>
Date: Fri, 13 Sep 2024 00:19:18 UTC
Yuri <yuri_at_FreeBSD.org> wrote on
Date: Thu, 12 Sep 2024 18:45:16 UTC :
 
> I noticed that when the port lang/rust is building in the poudriere jail 
> the memory consumption of the host system remains high all the way into 
> the packaging phase when the pkg-static process is the only active 
> process and it consumes very little memory.
> 
> 
> During build a lot of memory is consumed, which is understandable. The 
> system remains at ~500MB of free memory through the build process, 
> according to top(1).
> 
> 
> But once the build is finished, poudriere goes into the "packaging" 
> phase which only runs a small pkg-static process that compresses the 
> built files. pkg-static is the only active process in the poudriere jail.
> 
> 
> What looks strange to me is that the host system's memory consumption 
> remains high through the "packaging" phase which itself is low in 
> memory, and only goes down when the jail is destroyed.
> 
> 
> How to explain the high memory consumption of a jail after all large 
> processes have finished?

You have not given much information about the configuration
properties that contribute to memory usage patterns, nor
about how to interpret what you are seeing.

How many CPUs does FreeBSD report for the system?

How much RAM?
What does the boot output (dmesg -a) show for:
real memory  = ??? (??? MB)
avail memory = ??? (??? MB)

What do the likes of:

# swapinfo -m

show for "1M-blocks" (Total) once SWAP is fully set up?


ZFS in use? (Any tuning/configuration of note?)
Only UFS in use (no ZFS anyway)?


/usr/local/etc/poudriere.conf (or analogous command line
settings):

How many poudriere bulk builders have been or are active
before or during the build? (The likes of
PARALLEL_JOBS= . . . can contribute to the answer.)

What is USE_TMPFS= . . . set to?
(all? yes? wrkdir? data? localbase? no? some list of such?)

What is TMPFS_BLACKLIST= . . . set to (if anything)?
(What is TMPFS_BLACKLIST_TMPDIR= . . . set to [if anything]?)

What is ALLOW_MAKE_JOBS= . . . set to?
What is ALLOW_MAKE_JOBS_PACKAGES= . . . set to (if anything)?
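For reference, the poudriere.conf settings asked about above
might look like the following. This is a hypothetical fragment
with illustrative values, not a recommendation:

```sh
# /usr/local/etc/poudriere.conf -- illustrative values only
PARALLEL_JOBS=2                 # concurrent package builders
USE_TMPFS="wrkdir data"         # which areas are RAM-backed tmpfs
#TMPFS_BLACKLIST="rust"         # ports whose wrkdir should skip tmpfs
#TMPFS_BLACKLIST_TMPDIR=/usr/local/poudriere/data/wrkdirs
ALLOW_MAKE_JOBS=yes             # let each builder run parallel make jobs
#ALLOW_MAKE_JOBS_PACKAGES="rust" # or: parallel make only for these
```

Each of these shifts the trade-off between build speed and
RAM+SWAP pressure, which is why the values matter here.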


/usr/local/etc/poudriere.d/*make.conf (or analogous command
line settings):

What is MAKE_JOBS_NUMBER_LIMIT= . . . set to (if anything)?
(Or analogous settings: there are several related ones.)

What is TMPFS_LIMIT= . . . set to (if anything)?
What is MAX_MEMORY= . . . set to (if anything)?

What is CCACHE_DIR= . . . set to (if anything)?
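The settings just listed might look like this; the values,
and which file each one belongs in (poudriere.conf vs. a
poudriere.d make.conf), are illustrative assumptions only:

```sh
MAKE_JOBS_NUMBER_LIMIT=4        # cap on make -jN per build
TMPFS_LIMIT=8                   # GiB cap on each builder's tmpfs
MAX_MEMORY=16                   # GiB memory cap per builder
CCACHE_DIR=/var/cache/ccache    # compiler cache location, if used
```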

Outside such configuration files:

There are other figures that do not stand well on their own.
For example: "system remains at ~500MB of free memory". Is
the amount of SWAP space usage varying? For "Mem:"
Active varying?
Inact varying?
Laundry varying? (Is top even showing it? If not: zero.)
Wired varying?
(You are indicating Free is roughly constant at ~500MB for
a type of context.)

(Example figures for comparisons/contrasts could prove
interesting.)

ZFS's ARC Total varying? (if zfs ARC is in use; contributes
mostly to Wired)


Other things for your context:

When rust is building, is its builder the only active
builder?

What do the likes of:

# df -mt tmpfs | sort -k6,6

show during, say, the packaging stage?
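To reduce that df output to a single tmpfs total, the "Used"
column can be summed with awk. The lines below pipe in
hypothetical df output for illustration; on a live system you
would feed `df -mt tmpfs` into the same awk command instead:

```shell
# Sum the Used column (MB) across all tmpfs mounts.
# The printf lines stand in for real `df -mt tmpfs` output.
printf '%s\n' \
  'Filesystem 1M-blocks Used Avail Capacity Mounted on' \
  'tmpfs      16384     9200 7184  56%      /poudriere/wrkdirs' \
  'tmpfs      16384     1100 15284 7%       /poudriere/data' |
awk 'NR > 1 { sum += $3 } END { print sum " MB used by tmpfs" }'
# prints: 10300 MB used by tmpfs
```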


Note:

tmpfs RAM usage can stick around longer than one might
expect, including while a builder is idle between jobs
or after its last job.

One notable thing about rust builds is their large file
system usage, with materials that stick around even during
the packaging phase. Thus, there are consequences for
RAM+SWAP space competition/usage if such materials end up
being handled via tmpfs for rust builds.
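If tmpfs competition from rust's large work directory turns
out to be the issue, one common mitigation is to keep rust's
wrkdir off tmpfs via poudriere.conf. A sketch, with an
assumed disk-backed path:

```sh
# /usr/local/etc/poudriere.conf -- sketch; the path is an assumption
TMPFS_BLACKLIST="rust"          # build rust's wrkdir on disk, not tmpfs
TMPFS_BLACKLIST_TMPDIR=/usr/local/poudriere/data/blacklist-wrkdirs
```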


There are contexts in which (Active+Inact+Laundry+Wired+Free)
shrinks over time compared to "avail memory", even to the
point of failure/hangup. I have no clue what is at issue if
significant shrinkage of that kind is happening in your
context. (No claim that it is happening, but it is something
the data might show.)

Note: top's "Buf" overlaps with data in the other
categories and would double-count some RAM usage if
included.


===
Mark Millard
marklmi at yahoo.com