Re: pid xxxx (progname), jid 0, uid 0, was killed: failed to reclaim memory
- In reply to: Daniel : "Re: pid xxxx (progname), jid 0, uid 0, was killed: failed to reclaim memory"
Date: Tue, 02 May 2023 10:20:03 UTC
On May 2, 2023, at 01:35, Daniel <freebsd-arm@c0decafe.de> wrote:

> On 5/2/23 09:25, Mark Millard wrote:
>
>> On May 1, 2023, at 23:55, Daniel <freebsd-arm@c0decafe.de> wrote:
>>
>>> I noticed that on my aarch64 boards I recently get a lot of 'pid xxxx
>>> (progname), jid 0, uid 0, was killed: failed to reclaim memory' messages
>>> in syslog.
>>>
>>> This happens on a rockpro64 as well as a raspberry pi 3b, both running
>>> 13.2. Neither of the boards is near its memory capacity (more like less
>>> than 50% used).
>>
>> Are you counting RAM+SWAP as "memory capacity"? Just RAM?
>
> With memory capacity I mean just RAM. Take this vmstat from the pi as an
> example:
>
> # vmstat
>  procs    memory    page                      disks     faults      cpu
>  r b w    avm  fre  flt  re  pi  po  fr  sr mm0 md0    in    sy    cs us sy id
>  0 0 0   864M 511M 2.2K   0  15   0 2.4K  72   0   0 19730  1.8K  6.1K  4  3 93

When was that command done?

A) Somewhat or just before the process was killed?
B) After the notice was displayed about the kill (or, at least, after the
   process was killed)?

If (B), it is too late to see the memory usage conditions that lead to the
kill: that RAM was already freed by the kill. One needs to be monitoring the
memory usage pattern/sequence that leads up to the kill. If one looks too
early, the conditions that lead to the kill need not have happened yet.

It may be more reliable to get an idea by monitoring free memory over time
via, say, top, spanning from somewhat before the problem occurs to after the
problem. "systat -vmstat" is another display that can be used for monitoring.
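If leaving top or systat running unattended is not practical, a small logging
loop can capture the lead-up to a kill for later review. A rough, untested
sketch (the 30 second interval and the /var/log/memwatch.log path are just
placeholders to adjust; the counters are the usual vm.stats.vm.* free-page
figures, reported in pages, not bytes):

#!/bin/sh
# Append a timestamped snapshot of free-RAM page counts and swap use
# every 30 seconds, so the minutes before a kill can be reviewed later.
while true
do
    date
    sysctl vm.stats.vm.v_free_count vm.stats.vm.v_free_target vm.stats.vm.v_free_min
    swapinfo -h
    sleep 30
done >> /var/log/memwatch.log 2>&1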
A question can be: ZFS in use vs. not?

> yes, there is a memdisk (as unionfs overlay for the way too frequently
> dying and now ro sdcard) but it's barely used:
>
> # df -h
> Filesystem    Size    Used   Avail Capacity  Mounted on
> [...]
> /dev/md0      496M    4.6M    451M     1%    /rwroot

Off the top of my head, I do not know whether it is the Size or the Used
above that better indicates the (virtual) memory space use.

> still I see processes being killed with the above message.
>
> on the rockpro64 I had a fairly huge swap (4G) on an nvme that never
> really got filled (~500megs maybe).

Swap usage is not directly relevant. The kills can happen with no swap in
use (despite swap space having been configured) based on one or more
processes that stay runnable and that keep sufficiently large working sets
active.

Large swap spaces (ones that avoid the warning about possible mistuning) are
something like 3.6 or so times the RAM. This can be too small for tmpfs use
in poudriere for the likes of poudriere's USE_TMPFS=all : rust can use over
10 GiBytes of file space in its build.
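For a rough check of where a system stands relative to that 3.6 or so
figure, an untested sketch like the following reports the configured swap as
a multiple of RAM (hw.physmem is the RAM the kernel sees, so the ratio is
approximate):

#!/bin/sh
# Report total configured swap space as a multiple of RAM.
ram_bytes=$(sysctl -n hw.physmem)
swap_kib=$(swapinfo -k | awk 'NR > 1 && $1 != "Total" { sum += $2 } END { print sum }')
awk -v r="$ram_bytes" -v s="$swap_kib" \
    'BEGIN { printf "swap/RAM ratio: %.2f\n", (s * 1024) / r }'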
> I'll try your suggestions below, thanks!
>
> Do you know of any recent changes to memory mgmt, oom conditions that
> might trigger this?

No. But I also have no good understanding of the complete workload on the
systems that get the notice.

> I've been running this setup (slapd, radiusd, smtpd) for quite some time
> on the pi now without any problems, before going to 13.2

I'm not familiar with slapd, radiusd, or the like.

> Thanks!
>
>> The message is strictly about maintaining a certain amount of free
>> RAM. It turns out swap does not automatically avoid the issue for
>> all contexts.
>>
>> QUOTE from back in 2022-Dec for another context with the problem:
>> (I've not reworked the wording to your context but the points
>> will probably be clear anyway.)
>>
>> This is the FreeBSD kernel complaining about the configuration
>> not well matching the RPi3B+ workload. In essence, it was unable
>> to achieve its targeted minimum amount of free RAM in the sort of
>> time frame (really: effort) it is configured for. Depending on
>> what you do, the FreeBSD defaults do not work well for 1 GiByte
>> of RAM. Swap space alone is insufficient because FreeBSD does
>> not swap out processes that stay runnable. Just one process that
>> stays runnable using a working set that is as large as what fits
>> in RAM for overall operation will lead to such "failed to reclaim
>> memory" kills.
>>
>> But, if you are getting this, you will almost certainly need
>> a non-trivial swap space anyway.
>>
>> I have a starting point to recommend, configuring some
>> settings. As I've no detailed clue for your context,
>> I'll just provide the general description.
>>
>>
>> A) I recommend a swap space something like shown in
>> the below (from gpart show output):
>>
>> =>        40  1953525088  da0  GPT  (932G)
>>           40      532480    1  efi  (260M)
>>       532520        2008       - free -  (1.0M)
>>       534528     7340032    2  freebsd-swap  (3.5G)
>>           . . .
>>     67643392  1740636160    5  freebsd-ufs  (830G)
>>   1808279552   145245576       - free -  (69G)
>>
>> This size (3.5 GiBytes or so) is somewhat below
>> where FreeBSD starts to complain about potential
>> mistuning from a large swap space, given the 1
>> GiByte of RAM. (I boot the same boot media on a
>> variety of machines and have other swap partitions
>> to match up with RAM sizes. But I omitted showing
>> them.)
>>
>> It is easy to have things like buildworld or
>> building ports end up with individual processes
>> that are temporarily bigger than the 1 GiByte RAM.
>> Getting multiple cores going can also lead to
>> not fitting and needing to page.
>>
>> I'll note that I normally use USB3 NVMe media that
>> also works with USB2 ports. My alternate is USB3
>> SSD media that works with USB2 ports. I avoid
>> spinning rust and microsd cards. This limits what
>> I can usefully comment on for some aspects of
>> configuration related to the alternatives.
>>
>>
>> B) /boot/loader.conf content:
>>
>> #
>> # Delay when persistent low free RAM leads to
>> # Out Of Memory killing of processes:
>> vm.pageout_oom_seq=120
>> #
>> # For plenty of swap/paging space (will not
>> # run out), avoid pageout delays leading to
>> # Out Of Memory killing of processes:
>> vm.pfault_oom_attempts=-1
>> #
>> # For possibly insufficient swap/paging space
>> # (might run out), increase the pageout delay
>> # that leads to Out Of Memory killing of
>> # processes (showing defaults at the time):
>> #vm.pfault_oom_attempts= 3
>> #vm.pfault_oom_wait= 10
>> # (The multiplication is the total but there
>> # are other potential tradeoffs in the factors
>> # multiplied, even for nearly the same total.)
>>
>> If use of vm.pfault_oom_attempts=-1 is going to
>> be inappropriate, I do not have background with
>> figuring out a good combination of settings for
>> vm.pfault_oom_attempts and vm.pfault_oom_wait .
>>
>> I'll note that vm.pageout_oom_seq is not a time
>> --more like how many insufficient tries to
>> reclaim RAM happen in sequence before an OOM
>> kill is started (effort). 120 is 10 times the
>> default. While nothing disables such criteria,
>> larger figures can be used if needed. (I've
>> never had to but others have.)
>>
>>
>> C) /etc/sysctl.conf content:
>>
>> #
>> # Together this pair avoids swapping out the process kernel stacks.
>> # This avoids one way for processes interacting with the system
>> # to end up hung.
>> vm.swap_enabled=0
>> vm.swap_idle_enabled=0
>>
>>
>> D) I strictly avoid having tmpfs compete for RAM
>> in this kind of context. tmpfs use just makes
>> "failed to reclaim memory" kills more difficult
>> to avoid. (As various folks have run into despite
>> having vastly more RAM than an RPi3B+.) So my
>> /usr/local/etc/poudriere.conf has:
>>
>> USE_TMPFS=no
>>
>> There are examples, like building rust, where
>> anything but "no" or "data" leads to huge 10
>> GiByte+ tmpfs spaces for poudriere's build
>> activity. Not a good match to an RPi3B+ .
>>
>>
>> That is it for the recommendations of a starting
>> point configuration.
>>
>> With such measures, I've been able to run poudriere
>> with -j4 while also using ALLOW_MAKE_JOBS= without
>> using the likes of MAKE_JOBS_NUMBER to limit it. (So
>> the load average could be around 16 a fair amount of
>> the time but still not get "failed to reclaim memory"
>> kills.)
>>
>> Note: I'm not claiming above that -j4 is the best
>> setting to use from, say, an elapsed time point of
>> view for my poudriere bulk activity.
>>
>> END QUOTE
>>
>>> It started on the rockpro64 first, where I did a bit of fiddling, e.g.
>>> en/disable swap, replace zfs with ufs, etc. Nothing helped in the end.
>>> I thought the board might be defective but now I start seeing the same
>>> thing on the raspi as well.
>>>
>>> Any ideas what this could be or how to debug this further?
>>>

===
Mark Millard
marklmi at yahoo.com