Re: adaptive spinning: fall back to sleep / block?

From: Mateusz Guzik <mjguzik_at_gmail.com>
Date: Mon, 17 May 2021 12:49:08 UTC
On 5/17/21, Andriy Gapon <avg@freebsd.org> wrote:
> On 17/05/2021 13:23, Mateusz Guzik wrote:
>> On 4/9/21, Andriy Gapon <avg@freebsd.org> wrote:
>>>
>>>
>>> Until I recently looked at the actual code, I was under the
>>> impression that the adaptive spinning is bounded and that after some
>>> time / number of spins a thread would go to a sleep queue or a
>>> turnstile.  But it looks like the spinning is actually unbounded as
>>> long as its conditions hold (some other thread owns the lock and
>>> that thread is running; the owner can be changing too).
>>>
>>> In my opinion, it does not make sense to spin for "too long".  If
>>> there was no opportunity to take the lock quickly, then it is better
>>> to block waiting for it rather than keep occupying a processor.  For
>>> instance, the spinning can prevent another runnable thread from
>>> running.
>>>
>>> I think that if a lock is heavily contended or its hold times are on
>>> the longer side (or both), then the adaptive spinning can make the
>>> system behavior (performance, responsiveness) worse.
>>>
>>> Finally, I was under the impression that 'adaptive' meant some
>>> heuristic on whether and when to do the spinning.  _The lock owner
>>> is running_ seems too simple to qualify as 'adaptive'.
>>>
>>> As an example, this looks like a relatively sophisticated
>>> implementation of the "adaptiveness":
>>> http://hg.openjdk.java.net/jdk8/jdk8/hotspot/file/87ee5ee27509/src/share/vm/runtime/objectMonitor.cpp#l1919
>>> But, IMHO, simply having a hard limit on the spin count would be
>>> better than what we have now.
>>>
>>
>> There is no clear-cut answer to this that I'm aware of. Ultimately
>> all behavior in the face of contention is about damage control.
>>
>> It's not hard to make a counterpoint to going off cpu after a
>> timeout:
>> 1. going off cpu is also a serializing operation, that is, threads
>> can contend on doing so, albeit less than on the original lock
>> 2. the existence of blocked waiters makes it more expensive to
>> release the lock, making contention worse, and even then it is
>> unclear what wake-up policy you would want to enact (e.g., do you
>> hand off the lock to the oldest sleeper? do you wake everyone and
>> let them fight it out? all choices suffer problems)
>> 3. consider a thread which holds foo_lock and now contends on
>> bar_lock and decides to go off cpu. if there is a thread which waits
>> on foo_lock, it will also go off cpu. This kind of behavior easily
>> leads to a dramatic collapse in throughput, even on top of whatever
>> problems are present due to the contention to begin with.
>>
>> Now, locking primitives in the FreeBSD kernel are problematic in at
>> least 3 ways:
>> 1. there are no fairness guarantees; for example, a constant stream
>> of threads can end up excluding some threads for a long time
>> 2. there is no support for locking under kvm (as in, the kernel does
>> not take advantage of it)
>> 3. rw locking uses cas loops instead of add
>>
>> imo, if doing anything serious with locks, the 3 above problems need
>> to be solved, in that order.
>>
>> I can't stress enough the lack of fairness, which arbitrarily going
>> to sleep will only exacerbate. As noted earlier, I don't know if a
>> timeout sleep is an inherently good or bad idea, but I'm confident
>> that playing with it in this situation is on the bad side.
>
> I agree with you.  And it looks like we do not disagree in general.
> I just want to make a clarification, maybe redundant, that you seem
> to be mostly concerned about effects on threads that contend on the
> same locks or related lock chains.  I am more concerned about effects
> on completely unrelated threads.
>

You can't ignore the other kind. Regardless, even the "unrelated"
threads get affected by what's happening in the kernel.

> Yes, sleeping and waking introduces overhead (and not only overhead)
> on the threads doing them and on threads depending on those threads.
> But spinning, when it happens on a large share of CPUs (e.g., ~100%
> of them), introduces a penalty on the whole system.
>

And going off cpu can easily introduce an even bigger penalty once you
wake the threads back up.

> At work, where a substantial part of our application lives in the
> kernel, I regularly debug issues that seem to happen because of CPU
> starvation.  And a common theme among them is that many CPUs are tied
> up in lock spinning.

If you have so much contention that you run into this, the actual bug
is that the workload itself does not scale. As noted earlier, playing
with locking primitives is only a damage control measure.

That said, even if you can't fix the scalability issues per se, you
may get some wins as follows:
- play with the sysctl debug.lock.delay_max; in particular, try
lowering it significantly (see the example after this list)
- adding should_yield checks around lock_delay calls, so that you go
off cpu if you have waited long enough, may act as a quasi-timeout
(sketch below)
- at least make sure the contended locks are not falsely shared (e.g.,
with __exclusive_cache_line, but warning: it only helps for in-kernel
code, not modules; otherwise you will have to pad by hand, see the
last example below)
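
For the delay_max knob, something like this (the value here is
arbitrary, just a starting point; measure your workload before and
after):

	# check the current setting, then try a significantly lower value
	sysctl debug.lock.delay_max
	sysctl debug.lock.delay_max=256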
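
For the should_yield idea, the shape would roughly be as below. This
is an untested sketch of the pattern, not the verbatim spin loop from
the mutex code; should_yield() and kern_yield() are the existing
kernel routines:

	/* inside the adaptive spin loop, next to the lock_delay() call */
	lock_delay(&lda);
	if (should_yield()) {
		/* we spun past our slice; take a voluntary trip off
		 * cpu instead of continuing to burn it */
		kern_yield(PRI_UNCHANGED);
	}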
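
As for the false sharing, a minimal sketch (the "foo" names are made
up):

	/* in-kernel code: the annotation gives the lock its own
	 * cache line */
	static struct mtx foo_mtx __exclusive_cache_line;

	/* in a module, pad by hand instead; __aligned rounds the
	 * struct size up to a multiple of CACHE_LINE_SIZE */
	static struct {
		struct mtx mtx;
	} __aligned(CACHE_LINE_SIZE) foo_padded;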

-- 
Mateusz Guzik <mjguzik gmail.com>