PowerMac G5 "4 core" (system total): some from-power-off booting observations of the modern VM_MAX_KERNEL_ADDRESS value mixed with usefdt=1

Mark Millard marklmi at yahoo.com
Thu Jan 31 20:20:37 UTC 2019


[Adding sysctl -a output of some of the differences for old vs. modern
VM_MAX_KERNEL_ADDRESS figures being in use. I tried to pick out
static figures rather than active ones, unless the difference was
notably larger for the distinct VM_MAX_KERNEL_ADDRESS figures. Each
sysctl -a was taken shortly after booting.]
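[A minimal sh sketch, with made-up file names, of the sort of capture
and diff that produces such a comparison:

sysctl -a > /var/tmp/sysctl-oldvmmax.txt  # after booting the old-figure kernel
sysctl -a > /var/tmp/sysctl-newvmmax.txt  # after booting the modern-figure kernel
diff /var/tmp/sysctl-oldvmmax.txt /var/tmp/sysctl-newvmmax.txt

The -/+ lines later in this message are hand-picked from such a diff.]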

On 2019-Jan-30, at 17:18, Mark Millard <marklmi at yahoo.com> wrote:

> [Where boot -v output is different between booting to completion vs.
> hanging up: actual text.]
> 
> On 2019-Jan-29, at 17:52, Mark Millard <marklmi at yahoo.com> wrote:
> 
>> For the modern VM_MAX_KERNEL_ADDRESS value and also use of the usefdt=1 case:
>> 
>> This usually hangs during boot, in the "Waking up CPU" message sequence.
>> 
>> But not always. Powering off and retrying, sometimes just a few times and
>> other times dozens of times, the system that I have access to does eventually
>> boot with this combination. So is this some sort of race condition or lack
>> of stable initialization?
>> 
>> When it does boot, SMP seems to be set up and working.
>> 
>> Once booted, it is usually not very long until the fans are going wild,
>> other than an occasional, temporary lull.
>> 
>> 
>> 
>> For shutting down, the following applies to both VM_MAX_KERNEL_ADDRESS
>> values when a usefdt=1 type of context is in use:
>> 
>> When I've kept explicit track, I've never had an example of all of the:
>> 
>> Waiting (max 60 seconds) for system thread `bufdaemon' to stop...
>> Waiting (max 60 seconds) for system thread `bufspacedaemon-1' to stop...
>> Waiting (max 60 seconds) for system thread `bufspacedaemon-0' to stop...
>> . . .
>> 
>> messages reaching "done": instead, one or more of them time out. Which ones
>> and how many vary.
>> 
>> The fans tend to take off for both VM_MAX_KERNEL_ADDRESS values. The
>> buf*daemon timeouts happen even if the fans have not taken off.
>> 
> 
> With VM_MAX_KERNEL_ADDRESS reverted or a successful
> boot with the modern value:
> 
> Adding CPU 0, hwref=cd38, awake=1
> Waking up CPU 3 (dev=c480)
> Adding CPU 3, hwref=c480, awake=1
> Waking up CPU 2 (dev=c768)
> Adding CPU 2, hwref=c768, awake=1
> Waking up CPU 1 (dev=ca50)
> Adding CPU 1, hwref=ca50, awake=1
> SMP: AP CPU #3 launched
> SMP: AP CPU #2 launched
> SMP: AP CPU #1 launched
> Trying to mount root from ufs:/dev/ufs/FBSDG5L2rootfs [rw,noatime]...
> 
> 
> With the modern VM_MAX_KERNEL_ADDRESS value for a boot attempt
> that failed, an example (typed from a picture of the screen) is:
> 
> Adding CPU 0, hwref=cd38, awake=1
> Waking up CPU 3 (dev=c480)
> 
> Another is:
> 
> Adding CPU 0, hwref=cd38, awake=1
> Waking up CPU 3 (dev=c480)
> Waking up CPU 2 (dev=c768)
> 
> (Both examples have no more output.)
> 
> So CPUs 1..3 do not get "Adding CPU" messages. Also:
> I do not remember seeing all 3 "Waking up CPU" messages,
> just 1 or 2 of them.
> 
> (Sometimes the "Trying to mount root from" message is in
> the mix as I remember.)
> 
> 
> One point of difference that is consistently observable for
> the old vs. modern VM_MAX_KERNEL_ADDRESS values is how many
> bufspacedaemon-* threads there are:
> 
> old VM_MAX_KERNEL_ADDRESS value: 0..2
> new VM_MAX_KERNEL_ADDRESS value: 0..6
> 
> 
> I have had many boot attempts in a row succeed
> for the modern VM_MAX_KERNEL_ADDRESS value,
> though not as many in a row as the dozens of
> failures in a row that I have also seen. The
> behavior is highly variable across lots of testing.
> 
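[Re the bufspacedaemon-* thread counts quoted above: a quick way to
count the threads on a booted system, assuming the thread names show
up in kernel-thread listings:

procstat -at | grep -c bufspacedaemon
# or, alternatively:
ps -axHww | grep -c bufspacedaemon

Just a sketch of one way to check.]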

Do all the increases below make sense for the 16 GiByte
RAM G5 example context? (Other G5s may have less RAM.)

-: old VM_MAX_KERNEL_ADDRESS figure in kernel build
+: modern VM_MAX_KERNEL_ADDRESS figure in kernel build
(The context is not using zfs, just ufs.)

-kern.maxvnodes: 188433
+kern.maxvnodes: 337606

-kern.ipc.maxpipekva: 119537663
+kern.ipc.maxpipekva: 267718656

-kern.ipc.maxmbufmem: 1530083328
+kern.ipc.maxmbufmem: 2741362688

-kern.ipc.nmbclusters: 186778
+kern.ipc.nmbclusters: 334640

-kern.ipc.nmbjumbop: 93388
+kern.ipc.nmbjumbop: 167319

-kern.ipc.nmbjumbo9: 27670
+kern.ipc.nmbjumbo9: 49576

-kern.ipc.nmbjumbo16: 15564
+kern.ipc.nmbjumbo16: 27886

-kern.ipc.nmbufs: 1195380
+kern.ipc.nmbufs: 2141700

-kern.minvnodes: 47108
+kern.minvnodes: 84401

-kern.nbuf: 47358
+kern.nbuf: 105243

-vm.max_kernel_address: 16140901072146268159
+vm.max_kernel_address: 16140901098855596031
(included for reference; a hex view of these two figures is sketched
in a note after this list)

-vm.kmem_size: 3060166656
+vm.kmem_size: 5482725376

-vm.kmem_size_max: 3060164198
+vm.kmem_size_max: 13743895347

-vm.kmem_map_size: 44638208
+vm.kmem_map_size: 51691520

-vm.kmem_map_free: 3015528448
+vm.kmem_map_free: 5431033856

-vfs.ufs.dirhash_maxmem: 12115968
+vfs.ufs.dirhash_maxmem: 26935296

-vfs.wantfreevnodes: 47108
+vfs.wantfreevnodes: 84401

-vfs.maxbufspace: 775913472
+vfs.maxbufspace: 1724301312

-vfs.maxmallocbufspace: 38762905
+vfs.maxmallocbufspace: 86182297

-vfs.lobufspace: 736495195
+vfs.lobufspace: 1637463643

-vfs.hibufspace: 775258112
+vfs.hibufspace: 1723645952

-vfs.bufspacethresh: 755876653
+vfs.bufspacethresh: 1680554797

-vfs.lorunningspace: 8126464
+vfs.lorunningspace: 11206656

-vfs.hirunningspace: 12124160
+vfs.hirunningspace: 16777216

-vfs.lodirtybuffers: 5929
+vfs.lodirtybuffers: 13165

-vfs.hidirtybuffers: 11859
+vfs.hidirtybuffers: 26330

-vfs.dirtybufthresh: 10673
+vfs.dirtybufthresh: 23697

-vfs.numfreebuffers: 47358
+vfs.numfreebuffers: 105243

-vfs.nfsd.request_space_low: 63753556
+vfs.nfsd.request_space_low: 114223786

-vfs.nfsd.request_space_high: 95630336
+vfs.nfsd.request_space_high: 171335680

-net.inet.ip.maxfrags: 5836
+net.inet.ip.maxfrags: 10457

-net.inet.ip.maxfragpackets: 5893
+net.inet.ip.maxfragpackets: 10508

-net.inet.tcp.reass.maxsegments: 11703
+net.inet.tcp.reass.maxsegments: 20916

-net.inet.sctp.maxchunks: 23347
+net.inet.sctp.maxchunks: 41830

-net.inet6.ip6.maxfragpackets: 5836
+net.inet6.ip6.maxfragpackets: 10457

-net.inet6.ip6.maxfrags: 5836
+net.inet6.ip6.maxfrags: 10457

-net.inet6.ip6.maxfragbucketsize: 11
+net.inet6.ip6.maxfragbucketsize: 20

-debug.softdep.max_softdeps: 753732
+debug.softdep.max_softdeps: 1350424

-machdep.moea64_pte_valid: 148909
+machdep.moea64_pte_valid: 160636
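[For reference: converting the two vm.max_kernel_address figures to
hex makes the kernel virtual address range change easier to see. A
minimal sh sketch (the figures fit in uintmax_t, so printf(1) can
render them):

printf '%#x\n' 16140901072146268159   # old: 0xe0000001c7ffffff
printf '%#x\n' 16140901098855596031   # new: 0xe0000007ffffffff

Relative to the powerpc64 VM_MIN_KERNEL_ADDRESS of 0xe000000000000000
(my understanding, not a figure from the sysctl output), that is
roughly 7.1 GiByte vs. exactly 32 GiByte of kernel virtual address
space, which would fit the large vm.kmem_size_max jump above.]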



===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)


