Re: 14.0-CURRENT failed to reclaim memory error in RPi 3B build
Date: Tue, 08 Nov 2022 06:50:19 UTC
On Tue, Nov 8, 2022 at 11:25 AM Mark Millard <marklmi@yahoo.com> wrote:

> On Nov 7, 2022, at 18:40, Archimedes Gaviola <archimedes.gaviola@gmail.com>
> wrote:
>
> > . . .
> >
> > Hi Mark,
> >
> > With this set of build commands now,
> >
> > # cd /usr/src; make -j3 KERNCONF=ARM TARGET_ARCH=aarch64 buildworld
> > kernel-toolchain buildkernel installworld installkernel distribution
> > DESTDIR=/home/freebsd/rpi3b
> >
> > on the RPi 3B, I encountered the other OOM error, 'a thread waited too
> > long to allocate a page'. It occurred on every build I ran, though the
> > first error, 'failed to reclaim memory', never appeared again. Below are
> > the error logs.
> >
> > ...
> > swap_pager: indefinite wait buffer: bufobj: 0, blkno: 256929, size: 4096
> > swap_pager: indefinite wait buffer: bufobj: 0, blkno: 3628, size: 4096
> > swap_pager: indefinite wait buffer: bufobj: 0, blkno: 255839, size: 40960
> > pid 46153 (c++), jid 0, uid 0, was killed: a thread waited too long to
> > allocate a page
> > swap_pager: indefinite wait buffer: bufobj: 0, blkno: 255857, size: 28672
> > swap_pager: indefinite wait buffer: bufobj: 0, blkno: 3634, size: 8192
> > swap_pager: indefinite wait buffer: bufobj: 0, blkno: 256037, size: 4096
> > swap_pager: indefinite wait buffer: bufobj: 0, blkno: 255320, size: 8192
> >
> > Perhaps some further tweaks are needed in the system, so I set aside my
> > RPi 3B temporarily and switched over to my RPi 4B, using the same microSD
> > card and USB flash drive (3.5 GB swap partition device), and the build
> > completed successfully. It took around 30 hours. The RPi 4B has 2 GB of
> > RAM while the RPi 3B has 1 GB. From here, I'll keep looking into system
> > tunables for the RPi 3B, which has the smaller RAM capacity.

Hi Mark,

> Given that you have added enough swap/paging
> space to avoid needing more:
>
> #
> # For plenty of swap/paging space (will not
> # run out), avoid pageout delays leading to
> # Out Of Memory killing of processes:
> vm.pfault_oom_attempts=-1
>
> With the above setting, if you did run out of
> swap/paging space and needed more, deadlocks
> would be possible as I understand it. The above
> disables that type of OOM kill completely but,
> effectively, a deadlock is a less-controlled
> form of kill.

Okay, confirmed the existing value is 3,

root@generic:~ # sysctl vm.pfault_oom_attempts
vm.pfault_oom_attempts: 3

and it is writable, as tested:

root@generic:~ # sysctl vm.pfault_oom_attempts=-1
vm.pfault_oom_attempts: 3 -> -1

> There is an alternative, but I've no clue how to
> find what values to set for any specific context.
> I just know the names and default values (as of
> when I last checked such defaults):
>
> #
> # For possibly insufficient swap/paging space
> # (might run out), increase the pageout delay
> # that leads to Out Of Memory killing of
> # processes (showing defaults at the time):
> #vm.pfault_oom_attempts= 3
> #vm.pfault_oom_wait= 10
> # (The multiplication is the total but there
> # are other potential tradeoffs in the factors
> # multiplied, even for nearly the same total.)
>
> (Yes, one of those names is the same as was set
> to -1 in the earlier suggestion above. -1
> disables making attempts and just waits as long
> as it takes. That makes vm.pfault_oom_wait
> irrelevant in that kind of context.)
>
> As for where the settings can be placed . . .
>
> # sysctl -T vm.pfault_oom_attempts
> vm.pfault_oom_attempts: -1
>
> # sysctl -T vm.pfault_oom_wait
> vm.pfault_oom_wait: 10
>
> (So /boot/loader.conf is appropriate: loader tunables.)

Okay, noted.

> # sysctl -W vm.pfault_oom_attempts
> vm.pfault_oom_attempts: -1
>
> # sysctl -W vm.pfault_oom_wait
> vm.pfault_oom_wait: 10

Checking these values...

root@generic:~ # sysctl -T vm.pfault_oom_attempts
vm.pfault_oom_attempts: -1
root@generic:~ # sysctl -T vm.pfault_oom_wait
vm.pfault_oom_wait: 10

> (So /etc/sysctl.conf or the like is an alternative:
> also writable.)

Okay, this is noted as well. A quick sketch of both placements is in the
P.S. below.

Thanks and best regards,
Archimedes
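
P.S. For my own reference, a minimal sketch of the two placements discussed
above, using the vm.pfault_oom_attempts=-1 setting from this thread. The
file names and the tunable come from Mark's notes; the comments and exact
quoting style are my own (following loader.conf(5) and sysctl.conf(5)
conventions), so treat this as illustrative rather than authoritative.

As a loader tunable, in /boot/loader.conf:

# Avoid the pageout-delay OOM kills; only appropriate while swap/paging
# space cannot run out (otherwise deadlocks become possible).
vm.pfault_oom_attempts="-1"

Or applied at boot by sysctl(8), in /etc/sysctl.conf:

# Same effect, set once the kernel is up rather than by the loader.
vm.pfault_oom_attempts=-1

The same assignment also works live, as tested above:
sysctl vm.pfault_oom_attempts=-1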