physical memory chunk statistics?

From: Bjoern A. Zeeb <bzeeb-lists_at_lists.zabbadoz.net>
Date: Sat, 21 Dec 2024 17:03:45 UTC
Hi,

upon boot we display our physical memory chunks nicely.

Physical memory chunk(s):
0x0000000000001000 - 0x000000000009ffff, 651264 bytes (159 pages)
0x0000000000101000 - 0x0000000012df5fff, 315576320 bytes (77045 pages)
0x0000000013c00000 - 0x0000000013d87fff, 1605632 bytes (392 pages)
0x0000000016401000 - 0x000000001e920fff, 139591680 bytes (34080 pages)
0x000000001eb1b000 - 0x000000001f73afff, 12713984 bytes (3104 pages)


Do we have any way on a running system to export some statistics on
how much of each of them is used up?  Something like [Use, Requests]?
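
The closest I am aware of (and it only partly answers this, since it
shows free space rather than per-segment usage or request counts): the
vm.phys_segs and vm.phys_free string sysctls from sys/vm/vm_phys.c,
assuming they are present on the kernel in question.  A minimal
userland sketch that dumps both:

/*
 * physstat.c -- dump vm.phys_segs and vm.phys_free via sysctl.
 * A sketch only; assumes both string sysctls exist on this kernel.
 * Build: cc -o physstat physstat.c
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <err.h>
#include <stdio.h>
#include <stdlib.h>

static void
dump(const char *name)
{
	size_t len;
	char *buf;

	/* First call with a NULL buffer returns the required size. */
	if (sysctlbyname(name, NULL, &len, NULL, 0) == -1)
		err(1, "sysctlbyname(%s)", name);
	if ((buf = malloc(len)) == NULL)
		err(1, "malloc");
	if (sysctlbyname(name, buf, &len, NULL, 0) == -1)
		err(1, "sysctlbyname(%s)", name);
	printf("=== %s ===\n%.*s\n", name, (int)len, buf);
	free(buf);
}

int
main(void)
{
	dump("vm.phys_segs");	/* physical segment ranges */
	dump("vm.phys_free");	/* free blocks per freelist/pool/order */
	return (0);
}

That still leaves "Requests" unanswered; counting those would need new
counters in the kernel.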


Say one wanted to debug the good old lower-4GB contigmalloc failure
problem (an example, but also something I am just facing again).
How would one do that?  The immediate questions are:
(a) how much of the available physical memory below 4G is used?
(b) how much fragmentation is there roughly?
(c) what's the largest contiguous chunk available?

Given that (c) is likely harder and more expensive, (a) and (b) could
at least give an idea.  Did we ever solve that?
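
For (b) and (c) there is at least a back-of-the-envelope reading,
assuming free-block counts per buddy order as vm.phys_free prints
them: a block of order k covers 2^k pages, so the highest order with
a nonzero count is a lower bound on the largest contiguous free
chunk, and the spread across the orders is itself a rough
fragmentation picture.  Sketch of the arithmetic (the counts below
are made up, and VM_NFREEORDER is 13 on amd64 if I remember right):

#include <stdio.h>

#define	NFREEORDER	13	/* VM_NFREEORDER on amd64, IIRC */
#define	PAGE_SIZE	4096

int
main(void)
{
	/* Hypothetical free-block counts, indexed by buddy order. */
	unsigned long cnt[NFREEORDER] = {
		512, 300, 150, 80, 40, 20, 10, 4, 2, 1, 0, 0, 0
	};
	unsigned long freepages = 0;
	int maxorder = -1;

	for (int o = 0; o < NFREEORDER; o++) {
		/* An order-o block holds 2^o pages. */
		freepages += cnt[o] << o;
		if (cnt[o] != 0)
			maxorder = o;
	}
	printf("free: %lu pages (%lu bytes)\n",
	    freepages, freepages * PAGE_SIZE);
	if (maxorder >= 0)
		printf("largest contiguous free run: >= %lu bytes "
		    "(order %d)\n",
		    (1UL << maxorder) * PAGE_SIZE, maxorder);
	return (0);
}

That would cover free-side fragmentation per freelist, but not which
segment the blocks live in, so (a) still seems open.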


/bz

-- 
Bjoern A. Zeeb                                                     r15:7