PowerMac G5 live slb entries before moea64_mid_bootstrap for bsp vs. later: are duplications of ESID's avoided?

Mark Millard marklmi at yahoo.com
Tue May 7 09:15:47 UTC 2019


moea64_mid_bootstrap has:

        for (i = 0; i < 64; i++) {
                pcpup->pc_aim.slb[i].slbv = 0;
                pcpup->pc_aim.slb[i].slbe = 0;
        }

which does not try to use slbmfee and slbmfev
to copy the live information left by the prior
context, such as Apple's openfirmware. (There
seems to be no use of slbmfev at all, though
there is use of slbmfee.) One means of dealing
with what this note is about might be to fill
in the live entries here, as in the sketch
below. (But the hard-coded 64 might not match
the number of SLB entries that could actually
hold live values on a given CPU.)
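
For example, something like the following
untested sketch could capture the live contents
(the inline asm usage and relying on n_slbs for
the SLB size are my assumptions, not existing
code):

        /*
         * Untested sketch: read back whatever the prior context
         * (e.g. openfirmware) left live in the SLB instead of only
         * zeroing the per-CPU shadow copy.  Assumes n_slbs gives
         * the implemented SLB size for this CPU.
         */
        struct slb live;
        int i;

        for (i = 0; i < n_slbs; i++) {
                __asm __volatile ("slbmfee %0,%1" : "=r"(live.slbe) : "r"(i));
                __asm __volatile ("slbmfev %0,%1" : "=r"(live.slbv) : "r"(i));
                pcpup->pc_aim.slb[i] = live;
        }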

Is there a presumption that openfirmware, the
loader, and such all ran strictly with MSR.IR=0
and MSR.DR=0? Is that really true of Apple's
openfirmware?

(Even usefdt mode would have had openfirmware in
use before disabling it, so there would likely
still be previously established live slb entries
around from running with MSR.IR=1 or MSR.DR=1.)

It also appears that the only comparisons checking
for already-existing ESIDs are against the
pcpup->pc_aim.slb[?].slbe content, with no check of
the live entries (via slbfee).

That includes handle_kernel_slb_spill itself, which
effectively does not check for duplication of a
live ESID with V=1.
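
Something along the following lines (an untested
sketch; the function name is made up, and I am
assuming the existing SLBE_VALID and
SLBE_ESID_MASK definitions) would check the live
entries directly:

        /*
         * Untested sketch (hypothetical helper): scan the live SLB
         * via slbmfee for a valid (V=1) entry that already carries
         * the same ESID, rather than trusting only the cached
         * per-CPU copy.
         */
        static bool
        live_slb_has_esid(uint64_t slbe_wanted)
        {
                uint64_t slbe;
                int i;

                for (i = 0; i < n_slbs; i++) {
                        __asm __volatile ("slbmfee %0,%1" : "=r"(slbe) : "r"(i));
                        if ((slbe & SLBE_VALID) != 0 &&
                            (slbe & SLBE_ESID_MASK) ==
                            (slbe_wanted & SLBE_ESID_MASK))
                                return (true);
                }
                return (false);
        }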

(As I remember, the architecture documentation
explicitly reports that such duplications mean
undefined behavior.)

This would appear to mean that actually assigning
the kernel's first slb entries needs to happen
after a slbia (in a MSR.IR=0 context until things
are re-established), so that the additions cannot
duplicate other ESIDs that are still valid in the
live context. But that is not done.
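
In other words, something like the following
sequence (again just an untested sketch, with
kern_slb as a placeholder for whatever entry is
being installed) before the kernel's own entries
go in, while MSR.IR=0 and MSR.DR=0:

        /*
         * Untested sketch: invalidate the prior context's live
         * entries before the kernel installs its own, while
         * MSR.IR=0/MSR.DR=0.  (slbia may leave SLB entry 0 in
         * place on some implementations, so entry 0 might need
         * separate handling.)
         */
        __asm __volatile ("slbia");
        __asm __volatile ("isync");
        /* then, for each kernel entry to install
           (kern_slb: placeholder struct slb): */
        __asm __volatile ("slbmte %0, %1" ::
            "r"(kern_slb.slbv), "r"(kern_slb.slbe));
        __asm __volatile ("isync");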

So it appears to me that in moea64_late_bootstrap
and its:

        mtmsr(mfmsr() | PSL_DR | PSL_IR);
        pmap_bootstrapped++;
. . .
        virtual_avail = VM_MIN_KERNEL_ADDRESS;
        virtual_end = VM_MAX_SAFE_KERNEL_ADDRESS;
. . .
        i = 0;
        for (va = virtual_avail; va < virtual_end && i<n_slbs-1; va += SEGMENT_LENGTH, i++)
                moea64_bootstrap_slb_prefault(va, 0);

there is no check to avoid duplicates. (Not that
I'm claiming duplications are likely. But there is
more to this than duplications . . .)
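
A duplicate-aware variant of the loop might look
roughly like the following (again a sketch; it
reuses the hypothetical live_slb_has_esid() from
above and assumes the existing ADDR_SR_SHFT and
SLBE_ESID_SHIFT definitions):

        /*
         * Untested sketch: skip prefaulting a segment whose ESID
         * is already live in the SLB (uses the hypothetical
         * live_slb_has_esid() above).
         */
        uint64_t slbe;

        i = 0;
        for (va = virtual_avail; va < virtual_end && i < n_slbs - 1;
            va += SEGMENT_LENGTH, i++) {
                slbe = (((uint64_t)va >> ADDR_SR_SHFT) << SLBE_ESID_SHIFT) |
                    SLBE_VALID;
                if (live_slb_has_esid(slbe))
                        continue;
                moea64_bootstrap_slb_prefault(va, 0);
        }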

Also, the loop ends up triggering
handle_kernel_slb_spill use, which replaces the slb
entry(s) for the ESIDs involved in the instruction
fetches (for example). This means that the loop
establishes fewer pre-faulted entries than it
suggests.

(I've actually observed handle_kernel_slb_spill use
across the loop by having handle_kernel_slb_spill
count its activity and sampling the count before
vs. after.)
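
The counting was along the following lines (not
the exact change used; the counter name and the
sysctl are just illustrative):

        /* Illustrative instrumentation sketch, not the exact patch. */
        static u_long slb_spill_count;
        SYSCTL_ULONG(_debug, OID_AUTO, slb_spill_count, CTLFLAG_RD,
            &slb_spill_count, 0, "handle_kernel_slb_spill invocations");

        /* ... and inside handle_kernel_slb_spill(): */
        atomic_add_long(&slb_spill_count, 1);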

Overall, it does not look like the kernel is fully
dealing with the transition from the prior
activity's live slb entries to just its own.


===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)


