cvs commit: src/sys/kern kern_mutex.c
John Baldwin
jhb at freebsd.org
Fri Jun 8 16:10:56 UTC 2007
On Friday 08 June 2007 10:50:15 am Bruce Evans wrote:
> On Thu, 7 Jun 2007, John Baldwin wrote:
>
> > On Thursday 07 June 2007 04:15:13 am Bruce Evans wrote:
> >>> The next run will have pagezero resetting its priority when this
> >>> priority gets clobbered.
> >>
> >> That gave mainly just more voluntary context switches (13.5+ million
> >> instead of the best observed value of 1.3+ million, or the value of
> >> 2.9+ million without priority resetting). It reduced the pagezero time
> >> from 30 seconds to 24. It didn't change the real time significantly.
> >
> > Hmm, one problem with the non-preemption page zero is that it doesn't
> > yield the lock when it voluntarily yields. I wonder if something like
> > this patch would help things for the non-preemption case:
>
> This works well. It fixed the extra 1.4 million voluntary context switches
> and even reduced the number by 10-100k. The real runtime is still up a bit,
> but user+sys+pgzero time is down a bit.
>
> > Index: vm_zeroidle.c
> > ===================================================================
> > RCS file: /usr/cvs/src/sys/vm/vm_zeroidle.c,v
> > retrieving revision 1.45
> > diff -u -r1.45 vm_zeroidle.c
> > --- vm_zeroidle.c 18 May 2007 07:10:50 -0000 1.45
> > +++ vm_zeroidle.c 7 Jun 2007 14:56:02 -0000
> > @@ -147,8 +147,10 @@
> > #ifndef PREEMPTION
> > if (sched_runnable()) {
> > mtx_lock_spin(&sched_lock);
> > + mtx_unlock(&vm_page_queue_free_mtx);
> > mi_switch(SW_VOL, NULL);
> > mtx_unlock_spin(&sched_lock);
> > + mtx_lock(&vm_page_queue_free_mtx);
> > }
> > #endif
> > } else {
>
> The sched_locks are now of course thread_locks. I put the vm unlock
> before the thread lock since the above order seems to risk a LOR. That
> may have been a mistake -- we would prefer not to be switched after
> deciding to do it ourselves.
Actually, the order is on purpose so that we don't waste time doing a
preemption when we're about to switch anyway.
> > We could simulate this behavior some by using a critical section to
> > control when preemptions happen, so that we wait until we drop the
> > lock before allowing preemptions. Something like this:
>
> Simulate? Do you mean simulate the previous behaviour?
Well, simulate the non-PREEMPTION behavior (with the above patch) in the
PREEMPTION case.
> I think the voluntary context switch in the above only worked well
> because it rarely happened. Apparently, switches away from pagezero
> normally happened due to the previous behaviour when the vm lock is
> released.
>
> > Index: vm_zeroidle.c
> > ===================================================================
> > RCS file: /usr/cvs/src/sys/vm/vm_zeroidle.c,v
> > retrieving revision 1.45
> > diff -u -r1.45 vm_zeroidle.c
> > --- vm_zeroidle.c 18 May 2007 07:10:50 -0000 1.45
> > +++ vm_zeroidle.c 7 Jun 2007 14:58:39 -0000
> > @@ -110,8 +110,10 @@
> > if (m != NULL && (m->flags & PG_ZERO) == 0) {
> > vm_pageq_remove_nowakeup(m);
> > mtx_unlock(&vm_page_queue_free_mtx);
> > + critical_exit();
> > pmap_zero_page_idle(m);
> > mtx_lock(&vm_page_queue_free_mtx);
> > + critical_enter();
> > m->flags |= PG_ZERO;
> > vm_pageq_enqueue(PQ_FREE + m->pc, m);
> > ++vm_page_zero_count;
>
> Next I will try this. I put the critical_exit() before the vm unlock.
> mtx_unlock() should be allowed to switch if it wants. However, we
> would prefer to keep context switches disabled in the above -- just drop
> the lock so that other CPUs can proceed.
Again, the order here is on purpose to make sure we don't preempt while
holding the lock.
> > @@ -141,20 +143,25 @@
> > idlezero_enable = idlezero_enable_default;
> >
> > mtx_lock(&vm_page_queue_free_mtx);
> > + critical_enter();
> > for (;;) {
> > if (vm_page_zero_check()) {
> > vm_page_zero_idle();
> > #ifndef PREEMPTION
> > if (sched_runnable()) {
> > mtx_lock_spin(&sched_lock);
> > + mtx_unlock(&vm_page_queue_free_mtx);
> > mi_switch(SW_VOL, NULL);
> > mtx_unlock_spin(&sched_lock);
> > + mtx_lock(&vm_page_queue_free_mtx);
> > }
> > #endif
>
> I added the necessary critical_exit/enter() calls here.
They aren't needed in the !PREEMPTION case since the context switch there
is explicit. The critical sections are really only needed in the
PREEMPTION case.
> > } else {
> > + critical_exit();
> > wakeup_needed = TRUE;
> > msleep(&zero_state, &vm_page_queue_free_mtx, 0,
> > "pgzero", hz * 300);
> > + critical_enter();
> > }
> > }
> > }
>
> Bruce
>
--
John Baldwin