Re: pid xxxx (progname), jid 0, uid 0, was killed: failed to reclaim memory
- In reply to: Daniel : "pid xxxx (progname), jid 0, uid 0, was killed: failed to reclaim memory"
Date: Tue, 02 May 2023 07:25:21 UTC
On May 1, 2023, at 23:55, Daniel <freebsd-arm@c0decafe.de> wrote:

> I noticed that on my aarch64 boards I recently get a lot of 'pid xxxx (progname), jid 0, uid 0, was killed: failed to reclaim memory' messages in syslog.
>
> This happens on a rockpro64 as well as a raspberry pi 3b, both running 13.2. Neither of the boards is near its memory capacity (more like less than 50% used).

Are you counting RAM+SWAP as "memory capacity"? Just RAM? The message is strictly about maintaining a certain amount of free RAM. It turns out that swap space does not automatically avoid the issue in all contexts.

QUOTE from back in 2022-Dec, for another context with the problem (I've not reworked the wording for your context, but the points will probably be clear anyway):

This is the FreeBSD kernel complaining about the configuration not being a good match for the RPi3B+ workload. In essence, it was unable to achieve its targeted minimum amount of free RAM in the sort of time frame (really: effort) it is configured for. Depending on what you do, the FreeBSD defaults do not work well for 1 GiByte of RAM. Swap space alone is insufficient because FreeBSD does not swap out processes that stay runnable. Just one process that stays runnable, with a working set as large as what fits in RAM for overall operation, will lead to such "failed to reclaim memory" kills. But if you are getting this, you will almost certainly need a non-trivial swap space anyway.

I have a starting point to recommend, configuring some settings. As I've no detailed clue about your context, I'll just provide the general description.

A) I recommend a swap space something like what is shown below (from gpart show output):

=>        40  1953525088  da0  GPT  (932G)
          40      532480    1  efi  (260M)
      532520        2008       - free -  (1.0M)
      534528     7340032    2  freebsd-swap  (3.5G)
         . . .
    67643392  1740636160    5  freebsd-ufs  (830G)
  1808279552   145245576       - free -  (69G)

This size (3.5 GiBytes or so) is somewhat below where FreeBSD starts to complain about potential mistuning from a large swap space, given the 1 GiByte of RAM. (I boot the same boot media on a variety of machines and have other swap partitions to match up with the RAM sizes. But I omitted showing them.)

It is easy to have things like buildworld or building ports end up with individual processes that are temporarily bigger than the 1 GiByte of RAM. Getting multiple cores going can also lead to not fitting and needing to page.

I'll note that I normally use USB3 NVMe media that also works with USB2 ports. My alternative is USB3 SSD media that works with USB2 ports. I avoid spinning rust and microsd cards. This limits what I can usefully comment on for some aspects of configuration related to the alternatives.

B) /boot/loader.conf content:

#
# Delay when persistent low free RAM leads to
# Out Of Memory killing of processes:
vm.pageout_oom_seq=120
#
# For plenty of swap/paging space (will not
# run out), avoid pageout delays leading to
# Out Of Memory killing of processes:
vm.pfault_oom_attempts=-1
#
# For possibly insufficient swap/paging space
# (might run out), increase the pageout delay
# that leads to Out Of Memory killing of
# processes (showing defaults at the time):
#vm.pfault_oom_attempts= 3
#vm.pfault_oom_wait= 10
# (The multiplication is the total, but there
# are other potential tradeoffs in the factors
# multiplied, even for nearly the same total.)

If use of vm.pfault_oom_attempts=-1 is going to be inappropriate, I do not have the background for figuring out a good combination of settings for vm.pfault_oom_attempts and vm.pfault_oom_wait.

I'll note that vm.pageout_oom_seq is not a time: it is more like how many insufficient tries to reclaim RAM happen in sequence before an OOM kill is started (effort). 120 is 10 times the default. While nothing disables such criteria, larger figures can be used if needed. (I've never had to, but others have.)
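As far as I know, these are normal read/write sysctls as well as loader tunables, so you can inspect the active values and experiment at runtime before committing anything to /boot/loader.conf. A minimal sketch (run as root; the values shown are just the ones discussed above, not something I've tuned for your boards):

# Show the currently active values:
sysctl vm.pageout_oom_seq vm.pfault_oom_attempts vm.pfault_oom_wait
#
# Try values at runtime before making them
# permanent in /boot/loader.conf:
sysctl vm.pageout_oom_seq=120
sysctl vm.pfault_oom_attempts=-1
#
# Watch how much of the swap space is actually in use:
swapinfo -m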
C) /etc/sysctl.conf content:

#
# Together this pair avoids swapping out the process kernel stacks.
# This avoids one way for processes that interact with the system
# to end up hung.
vm.swap_enabled=0
vm.swap_idle_enabled=0

D) I strictly avoid having tmpfs compete for RAM in this kind of context. tmpfs use just makes "failed to reclaim memory" kills more difficult to avoid. (As various folks have run into, despite having vastly more RAM than an RPi3B+.) So my /usr/local/etc/poudriere.conf has:

USE_TMPFS=no

There are examples, like building rust, where anything but "no" or "data" leads to huge 10 GiByte+ tmpfs spaces for poudriere's build activity. Not a good match to an RPi3B+.

That is it for the recommendations of a starting point configuration.

With such measures, I've been able to run poudriere with -j4 while also using ALLOW_MAKE_JOBS= without the likes of MAKE_JOBS_NUMBER limiting it. (So the load average could be around 16 a fair amount of the time, but still no "failed to reclaim memory" kills.) Note: I'm not claiming above that -j4 is the best setting to use from, say, an elapsed-time point of view for my poudriere bulk activity.

END QUOTE

> It started on the rockpro64 first, where I did a bit of fiddling, e.g. en/disabling swap, replacing zfs with ufs, etc. Nothing helped in the end. I thought the board might be defective, but now I am starting to see the same thing on the raspi as well.
>
> Any ideas what this could be or how to debug this further?

===
Mark Millard
marklmi at yahoo.com