PV entry limit
John Capo
jc at irbs.com
Tue Jan 23 19:09:21 UTC 2007
Hopefully answering one of my own questions will save someone else
some time.
> If there is enough KVA now, what kept more PV entries from being
> allocated when needed? Used plus free PV entries has been stuck
> at 2520365 for days.
The PV ENTRY limit shown with sysctl vm is off by vm_page_array_size:
pv_entry_high_water is 90% of (PV LIMIT - vm_page_array_size), and that
is where used plus free PV entries sticks.
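As a sanity check, here is a back-of-envelope C snippet that plugs the
two numbers from the output below into that relationship and solves for
the implied vm_page_array_size; the derived page count is my own
arithmetic, not something read out of the kernel:

    #include <stdio.h>

    int
    main(void)
    {
        long pv_limit   = 3749470;  /* PV ENTRY zone LIMIT from sysctl vm */
        long high_water = 2520365;  /* where used + free PV entries sticks */

        /* high_water = 90% of (pv_limit - vm_page_array_size), so: */
        long page_array = pv_limit - (high_water * 10) / 9;

        /* 4 KB pages -> MB */
        printf("implied vm_page_array_size: %ld pages (~%ld MB)\n",
            page_array, page_array * 4 / 1024);
        return (0);
    }

That comes out to roughly 949065 pages, a bit over 3.6 GB, which looks
plausible for a 4 GB box once memory reserved below 4 GB for devices is
subtracted.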
Still puzzling over whether I need more KVA, more KMEM, or both.
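On the KVA side, my understanding (an assumption, not checked against
the 4.11 source) is that KVA_PAGES on i386 counts 4 MB page directory
slots, so KVA_PAGES=384 should give about 1.5 GB of kernel address
space.  Comparing that with the vm.kvm_size and vm.kvm_free numbers
quoted below:

    #include <stdio.h>

    int
    main(void)
    {
        long kva_pages = 384;          /* options KVA_PAGES=384 */
        long kvm_size  = 1606414336;   /* vm.kvm_size from the output below */
        long kvm_free  = 494923776;    /* vm.kvm_free from the output below */

        printf("KVA_PAGES * 4 MB = %ld MB\n", kva_pages * 4);
        printf("vm.kvm_size      = %ld MB\n", kvm_size / (1024 * 1024));
        printf("vm.kvm_free      = %ld MB\n", kvm_free / (1024 * 1024));
        return (0);
    }

That prints 1536 MB against a reported ~1531 MB, with roughly 470 MB of
it still free, which suggests the PV entry ceiling here is the
high-water mark rather than exhausted kernel address space.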
Quoting John Capo (jc at irbs.com):
> I got the infamous "pmap_collect: collecting pv entries" message 5
> times within 4 hours, 12 days ago.  The machine is a Cyrus IMAP
> server with 4 GB of memory, peaking at around 2000 IMAP processes
> and about 200 other processes.  It is running 4.11 with these
> compile tweaks.
>
> options PMAP_SHPGPERPROC=300
> options KVA_PAGES=384
>
> and one sysctl boot tweak.
>
> kern.maxfiles=60000
>
> Open files are 30K or so with about 200MB of shared files mmapped
> into each IMAP process.
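Regarding the PMAP_SHPGPERPROC tweak above: if I remember the 4.x pmap
code right (an assumption; I have not re-checked sys/i386/i386/pmap.c),
the PV entry limit is sized roughly as PMAP_SHPGPERPROC * maxproc plus
vm_page_array_size.  A quick sketch of what a given setting buys, using
a made-up maxproc rather than this box's actual value:

    #include <stdio.h>

    int
    main(void)
    {
        long shpgperproc = 300;      /* options PMAP_SHPGPERPROC=300 */
        long maxproc     = 6000;     /* hypothetical kern.maxproc value */
        long page_array  = 949065;   /* vm_page_array_size estimate from above */

        /* assumed sizing: PMAP_SHPGPERPROC * maxproc + vm_page_array_size */
        printf("estimated PV entry limit: %ld\n",
            shpgperproc * maxproc + page_array);
        return (0);
    }

If that sizing is right, the limit scales linearly with either
PMAP_SHPGPERPROC or maxproc, which is why bumping PMAP_SHPGPERPROC is
the usual response to the pmap_collect message.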
>
> sysctl vm | grep PV every 10 seconds shows this.
>
> PV ENTRY: 28, 3749470, 2501404, 18961, 296551065
> PV ENTRY: 28, 3749470, 2501472, 18893, 296563037
> PV ENTRY: 28, 3749470, 2513275, 7090, 296601432
> PV ENTRY: 28, 3749470, 151597, 2368768, 296689650
> PV ENTRY: 28, 3749470, 211052, 2309313, 296783099
> PV ENTRY: 28, 3749470, 283356, 2237009, 296896244
>
> Used plus free PV entries is 2520365, which is considerably less
> than the 3749470 PV entry limit.
>
> AFAICT, I have plenty of kernel space available to allocate more
> PV entries from.
>
> ITEM SIZE LIMIT USED FREE REQUESTS
>
> PIPE: 160, 0, 522, 294, 22806126
> SWAPMETA: 160, 233016, 14858, 29455, 3964171
> unpcb: 160, 0, 2659, 4541, 15805192
> ripcb: 192, 12328, 2, 40, 31222
> syncache: 160, 15359, 0, 76, 8077824
> tcpcb: 576, 12328, 1391, 2889, 10030866
> udpcb: 192, 12328, 33, 73, 11663049
> socket: 224, 12328, 4086, 6848, 37530493
> DIRHASH: 1024, 0, 1684, 292, 8344073
> KNOTE: 64, 0, 2, 126, 9094626
> NFSNODE: 352, 0, 6, 14118, 105689
> NFSMOUNT: 544, 0, 3, 11, 6
> VNODE: 192, 0, 216941, 41, 216941
> NAMEI: 1024, 0, 1, 255, 3156465410
> VMSPACE: 192, 0, 1352, 2168, 9106225
> PROC: 416, 0, 1403, 2223, 15245401
> DP fakepg: 64, 0, 0, 0, 0
> PV ENTRY: 28, 3749470, 2263440, 256925, 801513627
> MAP ENTRY: 48, 0, 64374, 99039, 862331760
> KMAP ENTRY: 48, 73807, 8234, 256, 30468915
> MAP: 108, 0, 7, 3, 7
> VM OBJECT: 92, 0, 215656, 64792, 403861231
> vm.zone_kmem_pages: 13773
> vm.zone_kmem_kvaspace: 136269824
> vm.zone_kern_pages: 21225
> vm.kvm_size: 1606414336
> vm.kvm_free: 494923776
>
> Obviously I need to bump PMAP_SHPGPERPROC some more.
>
> Do I need more KVA also?
>
> If there is enough KVA now, what kept more PV entries from being
> allocated when needed? Used plus free PV entries has been stuck
> at 2520365 for days.
>
> I know 4.11 is EOL, but switching to [56].something is just not an
> option right now.
>
> Thanks,
> John