Sudden growth of memory in "Laundry" state
Robert
robert.ayrapetyan at gmail.com
Tue Sep 11 05:23:05 UTC 2018
sysctl vm.stats
vm.stats.object.bypasses: 44686
vm.stats.object.collapses: 1635786
vm.stats.misc.cnt_prezero: 0
vm.stats.misc.zero_page_count: 29511
vm.stats.vm.v_kthreadpages: 0
vm.stats.vm.v_rforkpages: 0
vm.stats.vm.v_vforkpages: 738592
vm.stats.vm.v_forkpages: 15331959
vm.stats.vm.v_kthreads: 25
vm.stats.vm.v_rforks: 0
vm.stats.vm.v_vforks: 21915
vm.stats.vm.v_forks: 378768
vm.stats.vm.v_interrupt_free_min: 2
vm.stats.vm.v_pageout_free_min: 34
vm.stats.vm.v_cache_count: 0
vm.stats.vm.v_laundry_count: 6196772
vm.stats.vm.v_inactive_count: 2205526
vm.stats.vm.v_inactive_target: 390661
vm.stats.vm.v_active_count: 3163069
vm.stats.vm.v_wire_count: 556447
vm.stats.vm.v_free_count: 101235
vm.stats.vm.v_free_min: 77096
vm.stats.vm.v_free_target: 260441
vm.stats.vm.v_free_reserved: 15981
vm.stats.vm.v_page_count: 12223372
vm.stats.vm.v_page_size: 4096
vm.stats.vm.v_tfree: 61213188
vm.stats.vm.v_pfree: 24438917
vm.stats.vm.v_dfree: 1936826
vm.stats.vm.v_tcached: 0
vm.stats.vm.v_pdshortfalls: 12
vm.stats.vm.v_pdpages: 1536983413
vm.stats.vm.v_pdwakeups: 3
vm.stats.vm.v_reactivated: 2621520
vm.stats.vm.v_intrans: 12150
vm.stats.vm.v_vnodepgsout: 0
vm.stats.vm.v_vnodepgsin: 16016
vm.stats.vm.v_vnodeout: 0
vm.stats.vm.v_vnodein: 1782
vm.stats.vm.v_swappgsout: 1682860
vm.stats.vm.v_swappgsin: 6368
vm.stats.vm.v_swapout: 61678
vm.stats.vm.v_swapin: 1763
vm.stats.vm.v_ozfod: 21498
vm.stats.vm.v_zfod: 36072114
vm.stats.vm.v_cow_optim: 5912
vm.stats.vm.v_cow_faults: 18880051
vm.stats.vm.v_io_faults: 3165
vm.stats.vm.v_vm_faults: 705101188
vm.stats.sys.v_soft: 470906002
vm.stats.sys.v_intr: 3743337461
vm.stats.sys.v_syscall: 3134154383
vm.stats.sys.v_trap: 590473243
vm.stats.sys.v_swtch: 1037209739
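
For scale, the laundry-queue count above is on the order of the 24GB
shared-memory allocation described in the quoted messages below; a quick
back-of-the-envelope conversion (a minimal sketch using only the sysctl
names shown above) is:

    # 6196772 pages * 4096 bytes/page ~= 23.6 GiB
    sysctl -n vm.stats.vm.v_laundry_count vm.stats.vm.v_page_size | \
        paste - - | awk '{ printf "laundry: %.1f GiB\n", $1 * $2 / (1024*1024*1024) }'
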
On 09/10/18 22:18, Robert wrote:
> Hi, if I understood correctly, "written back to the swap device" means
> the pages came from swap at some point, but that's not the case (see the
> attached graph).
>
> Swap was 16GB, and it shrank slightly when pages rapidly started to move
> from the free (or "Inactive"?) queue into the "Laundry" queue.
>
> Here is vmstat output:
>
> vmstat -s
> 821885826 cpu context switches
> 3668809349 device interrupts
> 470487370 software interrupts
> 589970984 traps
> 3010410552 system calls
> 25 kernel threads created
> 378438 fork() calls
> 21904 vfork() calls
> 0 rfork() calls
> 1762 swap pager pageins
> 6367 swap pager pages paged in
> 61678 swap pager pageouts
> 1682860 swap pager pages paged out
> 1782 vnode pager pageins
> 16016 vnode pager pages paged in
> 0 vnode pager pageouts
> 0 vnode pager pages paged out
> 3 page daemon wakeups
> 1535368624 pages examined by the page daemon
> 12 clean page reclamation shortfalls
> 2621520 pages reactivated by the page daemon
> 18865126 copy-on-write faults
> 5910 copy-on-write optimized faults
> 36063024 zero fill pages zeroed
> 21137 zero fill pages prezeroed
> 12149 intransit blocking page faults
> 704496861 total VM faults taken
> 3164 page faults requiring I/O
> 0 pages affected by kernel thread creation
> 15318548 pages affected by fork()
> 738228 pages affected by vfork()
> 0 pages affected by rfork()
> 61175662 pages freed
> 1936826 pages freed by daemon
> 24420300 pages freed by exiting processes
> 3164850 pages active
> 2203028 pages inactive
> 6196772 pages in the laundry queue
> 555637 pages wired down
> 102762 pages free
> 4096 bytes per page
> 2493686705 total name lookups
> cache hits (99% pos + 0% neg) system 0% per-directory
> deletions 0%, falsehits 0%, toolong 0%
>
> What do you think? How could pages be moved into "Laundry" without
> being in Swap?
>
> Thanks.
>
>
> On 09/10/18 17:54, Mark Johnston wrote:
>> On Mon, Sep 10, 2018 at 11:44:52AM -0700, Robert wrote:
>>> Hi, I have a server with FreeBSD 11.2 and 48 Gigs of RAM where an app
>>> with extensive usage of shared memory (24GB allocation) is running.
>>>
>>> After some random amount of time (usually a few days of running), there
>>> is a sudden increase in "Laundry" memory (from zero to 24G in a few
>>> minutes).
>>>
>>> Then it slowly decreases.
>>>
>>> Are the described symptoms normal and expected? I've never noticed anything
>>> like that on 11.1.
>> The laundry queue contains dirty inactive pages, which need to be
>> written back to the swap device or a filesystem before they can be
>> reused. When the system is short of free pages, it will scan the
>> inactive queue looking for clean pages, which can be freed cheaply.
>> Dirty pages are moved to the laundry queue. My guess is that the
>> system was running without a page shortage for a long time, and
>> suddenly experienced some memory pressure. This caused lots of
>> pages to move from the inactive queue to the laundry queue. Demand
>> for free pages will then cause pages in the laundry queue to be
>> written back and freed, or requeued if the page was referenced after
>> being placed in the laundry queue. "vmstat -s" and "sysctl vm.stats"
>> output might make things more clear.
>>
>> All this is to say that there's nothing particularly abnormal about what
>> you're observing, though it's not clear what effects this behaviour has
>> on your workload, if any. I can't think of any direct reason this would
>> happen on 11.2 but not 11.1.
>
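
Given Mark's explanation of how dirty pages move from the inactive queue to
the laundry queue and are then laundered out to swap, one way to watch the
queue drain is to sample the same counters periodically. A minimal sketch
(the counter selection and the 10-second interval are arbitrary choices):

    while :; do
        date
        sysctl vm.stats.vm.v_laundry_count vm.stats.vm.v_inactive_count \
               vm.stats.vm.v_free_count vm.stats.vm.v_swappgsout
        sleep 10
    done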