Re: How to watch Active pagequeue transitions with DTrace in the vm layer

From: Shrikanth Kamath <shrikanth07_at_gmail.com>
Date: Fri, 04 Aug 2023 08:31:04 UTC
Thanks, Mark, I appreciate your response. A follow-up query: the system was
presumably at some earlier point in a state with no pages in the laundry
queue and none backed by swap (refer to the top snapshot below). The two
heavy applications' ~12G of resident memory, plus Wired and Buf, had
already caused Free to drop close to the minimum threshold. Any further
memory demand would push pages of these applications to Inactive or
Laundry (and possibly out to swap); later, when those pages were
referenced again, the pagedaemon would move them back to Active. Is that a
correct understanding? (A rough DTrace sketch for watching this follows
the snapshot below.)

last pid: 20494;  load averages:  0.38,  0.73,  0.80  up 0+01:49:05  21:14:49
Mem: 9439M Active, 3638M Inact, 2644M Wired, 888M Buf, 413M Free

Swap: 8192M Total, 8192M Free

  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
12043 root          5  22    0  9069M  7752M kqread   2  49:37   6.25% app1
12051 root          1  20    0  2704M  1964M select   3   0:41   0.00% app2
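
If that understanding is correct, I was thinking a rough sketch along
these lines (assuming, though I have not verified it, that these
vm_page_* functions have fbt entry probes on 12.1) could count the
transitions in each direction:

/* Count page queue transitions per direction and per initiating thread.
 * Assumes vm_page_deactivate/vm_page_launder/vm_page_activate are
 * traceable via fbt on 12.1. */
fbt::vm_page_deactivate:entry { @xfer["-> Inactive", execname] = count(); }
fbt::vm_page_launder:entry    { @xfer["-> Laundry", execname]  = count(); }
fbt::vm_page_activate:entry   { @xfer["-> Active", execname]   = count(); }

tick-10sec
{
        printa(@xfer);
        clear(@xfer);
}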

So if I run a DTrace probe on vm_page_enqueue, I will probably see that
pagedaemon is the thread that moved all those pages to Active? Is there a
way to associate these enqueues with the process that referenced the pages?
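
For example (and this is just a sketch on my part, assuming fbt gives
typed args so that args[0] is the vm_page_t, and that PQ_ACTIVE is 1 on
12.1), would keying the aggregation on the page's backing VM object at
least hint at the owner even when pagedaemon performs the enqueue?

fbt::vm_page_enqueue:entry
/args[1] == 1 /* PQ_ACTIVE *//
{
        /* execname is whoever runs the enqueue (often pagedaemon);
         * the page's object pointer hints at which mapping owns it. */
        @owner[execname, (uintptr_t)args[0]->object] = count();
}

tick-10sec
{
        trunc(@owner, 20);      /* keep the 20 hottest pairs */
        printa(@owner);
        clear(@owner);
}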

Regards,
--
Shrikanth R K

On Thu, Aug 3, 2023 at 8:14 AM Mark Johnston <markj@freebsd.org> wrote:

> On Thu, Aug 03, 2023 at 12:32:22AM -0700, Shrikanth Kamath wrote:
> > Some background on the query:
> >
> > I am trying to catch a memory “spike” trigger using DTrace. Refer to the
> > two “top” snapshots below, captured during a two-minute window.
> >
> > last pid: 89900;  load averages:  0.75,  0.91,  0.94  up 39+00:37:30  20:03:14
> > Mem: 5575M Active, 2152M Inact, 4731M Laundry, 3044M Wired, 1151M Buf, 382M Free
> > Swap: 8192M Total, 1058M Used, 7134M Free, 12% Inuse
> >
> >   PID USERNAME    THR PRI NICE   SIZE    RES STATE    C    TIME    WCPU COMMAND
> > 12043 root          5  35    0    11G  9747M kqread   3  128.8H  23.34% app1
> > 12051 root          1  20    0  3089M  2274M select   1   22:51    0.00% app2
> >
> > last pid: 90442;  load averages:  1.50,  1.12,  1.02  up 39+00:39:37  20:05:21
> > Mem: 8549M Active, 631M Inact, 3340M Laundry, 3159M Wired, 1252M Buf, 359M Free
> > Swap: 8192M Total, 1894M Used, 6298M Free, 23% Inuse
> >
> >   PID USERNAME    THR PRI NICE   SIZE    RES STATE    C    TIME    WCPU COMMAND
> > 12043 root          5  24    0    11G  9445M kqread   2  128.8H  10.45% app1
> > 12051 root          1  20    0  3089M  2173M select   3   22:51    0.00% app2
> >
> > The spike is ~3G in Active pages; the two large applications have a
> > combined resident size of ~12G, which hasn’t changed between these two
> > readings. However, a tar archive and gzip ran over a large directory
> > during that window, likely causing a reshuffle. If I count the page
> > allocations and dequeues by execname with DTrace, I see tar and vmstat,
> > which probably allocate and quickly dequeue pages, along with large
> > dequeues undertaken by bufdaemon and pagedaemon.
> >
> > fbt::vm_page_alloc*:entry
> > {
> >         @cnt[execname] = count();
> > }
> >
> > fbt::vm_page_dequeue:entry
> > {
> >         @dcnt[execname] = count();
> > }
> >
> > Page Alloc
> >   vmstat          20222
> >   tar             21284
> >
> > Page Dequeue
> >   vmstat          20114
> >   bufdaemon       21402
> >   tar             21635
> >   pagedaemon     360387
> >
> > Since tar and vmstat will not hold their pages in Active, I need to find
> > out which application had its pages queued on the Active page queue.
>
> One possibility is that the inactive and laundry queues had previously
> contained many referenced pages.  Then, when some memory pressure
> occurred, the pagedaemon scanned the queues and moved a large number of
> pages into the active queue.
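>
> To confirm that, a rough sketch along these lines (assuming the scan
> code reaches vm_page_activate() for referenced pages) would show which
> code paths the reactivations come from:
>
> fbt::vm_page_activate:entry
> /execname == "pagedaemon"/
> {
>         @scans[stack()] = count();
> }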
>
> > Is it possible that the system is just moving the LRU pages of these two
> > large applications into the inactive queue prior to addressing memory
> > pressure?  Do these applications then need to activate those pages later,
> > bringing them back into the Active queue?  How do I watch this in action
> > using DTrace?  Will the following probe catch this trigger?
> >
> > fbt::vm_page_activate:entry
> > {
> >         @cnt[execname, pid] = count();
> > }
> >
> > tick-10sec
> > {
> >         printa(@cnt);
> >         printf("ACTIVE[%d] pages\n", `vm_dom[0].vmd_pagequeues[1].pq_cnt);
> > }
> >
> > *** This system is running only one vm domain (# sysctl vm.ndomains ->
> > vm.ndomains: 1).
> >
> > *** Running release 12.1 on an amd64 kernel.  The installed physical
> > memory is 16G.
>
> In 12.1, you'd probably want something like:
>
> fbt::vm_page_enqueue:entry
> /args[1] == 1 /* PQ_ACTIVE *//
> {
> ...
> }
>
> since vm_page_unwire(m, PQ_ACTIVE) will also move a page into the active
> queue, but your D script above won't catch that.
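>
> Filled in minimally, that skeleton might look like this (again assuming
> PQ_ACTIVE is 1 on your kernel):
>
> fbt::vm_page_enqueue:entry
> /args[1] == 1 /* PQ_ACTIVE *//
> {
>         @act[execname, pid] = count();
> }
>
> tick-10sec
> {
>         printa(@act);
>         clear(@act);
> }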
>
> I would also look at the "pages reactivated by the page daemon" counter
> that appears in vmstat -s output.  That'll tell you how many times the
> page daemon moved a page from PQ_INACTIVE/PQ_LAUNDRY to PQ_ACTIVE
> because it found that the page had been referenced.
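>
> For instance (assuming the counter keeps the same wording on 12.1):
>
> # vmstat -s | grep reactivated
> # sysctl vm.stats.vm.v_reactivated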
>


-- 
Shrikanth R K