cvs commit: src/sys/ia64/ia64 elf_machdep.c exception.S machdep.c ptrace_machdep.c syscall.S trap.c vm_machdep.c

Marcel Moolenaar
marcel at FreeBSD.org
Tue Oct 28 11:38:28 PST 2003

marcel      2003/10/28 11:38:26 PST

  FreeBSD src repository

  Modified files:
    sys/ia64/ia64    elf_machdep.c exception.S machdep.c
                     ptrace_machdep.c syscall.S trap.c
                     vm_machdep.c
Log:
When switching the RSE to use the kernel stack as backing store, keep
the RNAT bit index constant. The net effect of this is that there's
no discontinuity WRT NaT collections which greatly simplifies certain
operations. The cost of this is that there can be up to 504 bytes of
unused stack between the true base of the kernel stack and the start
of the RSE backing store. The cost of adjusting the backing store
pointer to keep the RNAT bit index constant, for each kernel entry,
is negligible.
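
To make the adjustment concrete, here is a minimal C sketch (not the
committed code; the helper name and its arguments are hypothetical) of
how a kernel backing store pointer can be chosen so that its RNAT bit
index -- bits 3..8 of the address -- matches that of the interrupted
user bspstore:

    #include <stdint.h>

    #define IA64_RNAT_BITS  0x1f8UL  /* bits 3..8: slot index within a
                                        512-byte (64-slot) group; the RSE
                                        stores an RNAT collection whenever
                                        these bits are all ones */

    static uint64_t
    kern_bspstore(uint64_t kstack_lowest, uint64_t user_bspstore)
    {
            uint64_t kndx = kstack_lowest & IA64_RNAT_BITS;
            uint64_t undx = user_bspstore & IA64_RNAT_BITS;

            /*
             * Skip forward 0..63 slots so that the RNAT bit index of
             * the kernel backing store matches the user's; the
             * subtraction is modulo the 512-byte group, so wrap-around
             * is handled.
             */
            return (kstack_lowest + ((undx - kndx) & IA64_RNAT_BITS));
    }

The maximum skip, 63 slots of 8 bytes each, accounts for the up to 504
bytes of unused stack mentioned above.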
The primary reasons for this change are:
1. Asynchronous contexts in KSE processes have the disadvantage of
having to copy the dirty registers from the kernel stack onto the
user stack. The implementation we had so far copied the registers
one at a time without calculating NaT collection values. A process
that used speculation would not work. Now that the RNAT bit index
is constant, we can block-copy the registers from the kernel stack
to the user stack without having to worry about NaT collections;
they will be in the right place on the user stack (a sketch of this
copy follows the list).
2. The ndirty field in the trapframe is now also usable in userland.
This was previously not the case because ndirty also includes the
space occupied by NaT collections. The value could be off by 8,
depending on the discontinuity. Now that the RNAT bit index is
constant, we have exactly the same number of NaT collection points
on the kernel stack as we would have had on the user stack if we
didn't switch backing stores.
3. Debuggers and other applications that use ptrace(2) can now copy
the dirty registers from the kernel stack (using ptrace(2)) and
copy them wherever they want them (onto the user stack of the
inferior as might be the case for gdb) without having to worry
about NaT collections in the same way the kernel doesn't have to
worry about them.
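
As an illustration of points 1 and 3 (a hedged sketch under assumed
names, not the actual KSE or ptrace(2) code), the copy of the dirty
region reduces to a single memcpy(3) once the RNAT bit index is
constant:

    #include <stdint.h>
    #include <string.h>

    /*
     * Copy ndirty bytes of dirty registers, including the embedded
     * NaT collections, from the kernel stack to the user backing
     * store. Because both regions have the same RNAT bit index, the
     * NaT collections inside the region land exactly at the user
     * stack's collection points; no per-register recomputation is
     * needed.
     */
    static void
    copy_dirty(void *user_bspstore, const void *kstack, uint64_t ndirty)
    {
            memcpy(user_bspstore, kstack, ndirty);
    }

The ndirty value taken from the trapframe can be used directly as the
copy size because, per point 2, it now counts exactly the bytes that
belong on the user stack.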
There is also a second-order effect: because the adjustment depends
on the number of dirty registers the processor happened to have at
the time of entry into the kernel, the base of the backing store is
effectively randomized. This should give the RSE better cache
utilization than a backing store that is always aligned at page
boundaries. This has not been measured, however, and may in practice
be only minimally beneficial, if measurable at all.
Revision  Changes   Path
1.15      +14 -18   src/sys/ia64/ia64/elf_machdep.c
1.53      +3 -2     src/sys/ia64/ia64/exception.S
1.163     +14 -22   src/sys/ia64/ia64/machdep.c
1.3       +4 -2     src/sys/ia64/ia64/ptrace_machdep.c
1.9       +14 -7    src/sys/ia64/ia64/syscall.S
1.93      +2 -1     src/sys/ia64/ia64/trap.c
1.73      +8 -7     src/sys/ia64/ia64/vm_machdep.c