The first 2 handle_kernel_slb_spill calls on the 2-socket/2-cores-each G5 example context: as expected? (short)

Mark Millard marklmi at yahoo.com
Sat May 4 08:35:53 UTC 2019


[I'm just showing the code that got the handle_kernel_slb_spill
reports.]

On 2019-May-4, at 00:03, Mark Millard <marklmi at yahoo.com> wrote:

> [I forgot to show where I always stop the enable of the
> reporting.]
> 
> On 2019-May-3, at 23:52, Mark Millard <marklmi at yahoo.com> wrote:
> 
>> [A correction --and interesting information from a somewhat later
>> time frame.]
>> 
>> On 2019-May-3, at 20:22, Mark Millard <marklmi at yahoo.com> wrote:
>> 
>>> [This is from the -r347003 experiment context, not my
>>> normal environment.]
>>> 
>>> I stuck a printf in handle_kernel_slb_spill, reporting the type,
>>> dar, and srr0 values. The resultant build does not get far
>>> booting but does report the first 2 calls. Typed from a screen
>>> picture:
>>> 
>>> KDB: debugger backends: ddb
>>> KDB: current backend: ddb
>>> handle_kernel_slb_spill: type=0x380 dar=0x3d99348 srr0=0xa869bc
>>> handle_kernel_slb_spill: type=0x380 dar=0x10000000 srr0=0xa869bc
>>> 
>>> That is as far as it gets, as far as output goes, with that
>>> unconditional printf in place.
>>> 
>>> (I was not sure I'd get anything from this experiment.)
>>> 
>>> This suggests that the slb is partially(?) populated in the
>>> hardware before the (adjusted) loop that I've been testing with
>>> tries to establish coverage of part of the KVA space. The two
>>> examples reported are from neither the Direct-Map space nor the
>>> Kernel-Virtual-Address space.
>>> 
>>> Are these expected? Is their presence handled?
>>> 
>> 
>> I made the printf in handle_kernel_slb_spill conditional
>> on a global so I could control when it would try to
>> print.
>> 
>> I learned that I guessed the ordering wrong on the initial
>> report:
>> 
>> QUOTE
>>       #ifdef __powerpc64__
>>       i = 0;
>>       for (va = virtual_avail; va < virtual_end && i<(n_slbs-1)/2; va += SEGMENT_LENGTH, i++)
>>               moea64_bootstrap_slb_prefault(va, 0);
>>       #endif
>> enable_handle_kernel_slb_spill_reporting= 1;
>> END QUOTE
>> 
>> gets the lines I originally showed:
>> 
>> handle_kernel_slb_spill: type=0x380 dar=0x3d99348 srr0=0xa869bc
>> handle_kernel_slb_spill: type=0x380 dar=0x10000000 srr0=0xa869bc
>> 
>> So these were after the loop, not before.
>> 
>> Note: So far, with this enable placement, those messages
>> have always displayed and then things hung up.
>> 
>> 
>> I then commented that enable out and added a
>> printf:
>> 
>>       pa = moea64_bootstrap_alloc(kstack_pages * PAGE_SIZE, PAGE_SIZE);
>>       va = virtual_avail + KSTACK_GUARD_PAGES * PAGE_SIZE;
>>       virtual_avail = va + kstack_pages * PAGE_SIZE;
>>       CTR2(KTR_PMAP, "moea64_bootstrap: kstack0 at %#x (%#x)", pa, va);
>> printf("moea64_bootstrap: kstack0 at %#x (%#x)\n", pa, va);
>> 
>> and also set up an enable just before dpcpu_init's
>> call:
>> 
>> enable_handle_kernel_slb_spill_reporting= 1;
>>       dpcpu_init(dpcpu, curcpu);
>> 
>> The result, when it did not boot, was as below,
>> again showing a couple of handle_kernel_slb_spill
>> lines for not-very-large addresses, and no more
>> lines after that:
>> 
>> KDB: debugger backends: ddb
>> KDB: current backend: ddb
>> moea64_bootstrap: kstack0 at 0x3000 (0x1000)
>> handle_kernel_slb_spill: type=0x380 dar=0x22ef8 srr0=0xa86690
>> handle_kernel_slb_spill: type=0x480 dar=0x22ef8 srr0=0xa86690
>> 
>> It is the same address but two distinct types. It
>> also would seem to be the same segment as for:
>> 
>> handle_kernel_slb_spill: type=0x380 dar=0x3d99348 srr0=0xa869bc
>> (from the earlier placement)
>> 
>> 
>> By contrast, interestingly, it did sometimes boot for
>> this later enable placement, and, when it did boot,
>> there were no handle_kernel_slb_spill lines output:
>> 
>> KDB: debugger backends: ddb
>> KDB: current backend: ddb
>> moea64_bootstrap: kstack0 at 0x3000 (0x1000)
>> ---<<BOOT>>---
>> 
>> (and so on.)
>> 
>> 
>> This means that the type=0x?80 dar=0x22ef8 srr0=0xa86690
>> slb-misses are intermittent for this testing context.
>> 
>> 
>> Of course, with more testing I might see more variability.
> 
> 
> I forgot to show that I used:
> 
>        /* Bring up virtual memory */
>        moea64_late_bootstrap(mmup, kernelstart, kernelend);
> enable_handle_kernel_slb_spill_reporting= 0; // hangs without printf display first when this late
> }
> 
> It did no good to enable it this late, so I set
> it as a disable point instead. Trying to use the
> handle_kernel_slb_spill printf after this point
> seems to just result in silently hanging up.
> 
> So this disable was involved in the cases that
> booted for enabling just before dpcpu_init .
> (It is not clear just how far the non-booting
> cases got internally.)


For:
handle_kernel_slb_spill: type=0x380 dar=0x3d99348 srr0=0xa869bc
handle_kernel_slb_spill: type=0x380 dar=0x10000000 srr0=0xa869bc

both seem to involve the stbx instruction in:

0000000000a869bc <.memset+0x20> stbx    r4,r9,r3
0000000000a869c0 <.memset+0x24> addi    r9,r9,1
0000000000a869c4 <.memset+0x28> bdnz    0000000000a869bc <.memset+0x20>


For:

handle_kernel_slb_spill: type=0x380 dar=0x22ef8 srr0=0xa86690
handle_kernel_slb_spill: type=0x480 dar=0x22ef8 srr0=0xa86690

both seem to involve the stdu instruction in:

0000000000a8668c <.memcpy+0x140> ldu     r0,-8(r9)
0000000000a86690 <.memcpy+0x144> stdu    r0,-8(r11)
0000000000a86694 <.memcpy+0x148> bdnz    0000000000a8668c <.memcpy+0x140>

although the first is for an EXC_DSE (Data Segment Exception)
and the second is for an EXC_ISE (Instruction Segment Exception).

The effective addresses reported for srr0 seem to match
what objdump shows for the kernel file.

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)
