locks and kernel randomness...
Harrison Grundy
harrison.grundy at astrodoggroup.com
Tue Feb 24 17:08:05 UTC 2015
On 02/24/15 06:58, Warner Losh wrote:
>
>> On Feb 23, 2015, at 9:36 PM, Harrison Grundy
>> <harrison.grundy at astrodoggroup.com> wrote:
>>
>>
>>
>> On 02/23/15 18:42, Konstantin Belousov wrote:
>>> On Mon, Feb 23, 2015 at 06:04:12PM -0800, Harrison Grundy
>>> wrote:
>>>>
>>>>
>>>> On 02/23/15 17:57, Konstantin Belousov wrote:
>>>>> On Mon, Feb 23, 2015 at 05:20:26PM -0800, John-Mark Gurney
>>>>> wrote:
>>>>>> I'm working on simplifying kernel randomness interfaces.
>>>>>> I would like to get rid of all weak random generators,
>>>>>> and this means replacing read_random and random(9) w/
>>>>>> effectively arc4rand(9) (to be replaced by ChaCha or
>>>>>> Keccak in the future).
>>>>>>
>>>>>> The issue is that random(9) is called from any number of
>>>>>> contexts, such as the scheduler. This makes locking a
>>>>>> bit more interesting. Currently, both arc4rand(9) and
>>>>>> yarrow/fortuna use a default mtx lock to protect their
>>>>>> state. This obviously isn't compatible w/ the scheduler,
>>>>>> and possibly other calling contexts.
>>>>>>
>>>>>> I have a patch[1] that unifies the random interface. It
>>>>>> converts a few of the locks from mtx default to mtx spin
>>>>>> to deal w/ this.
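
(For anyone skimming: the lock conversion being described is roughly the
following shape. This is a sketch from memory with made-up names, not the
actual diff from the patch.)

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    /* Illustrative only; not the real lock or function names. */
    static struct mtx example_rng_mtx;

    static void
    example_rng_init(void)
    {
            /*
             * Before: a default mutex.  Acquiring it may block on
             * contention, which is not allowed from the scheduler.
             */
            /* mtx_init(&example_rng_mtx, "rng state", NULL, MTX_DEF); */

            /*
             * After: a spin mutex.  It busy-waits with interrupts
             * disabled, so the scheduler and other spinlock-held paths
             * can call in, at the cost of holding off interrupts while
             * the lock is held.
             */
            mtx_init(&example_rng_mtx, "rng state", NULL, MTX_SPIN);
    }

    static void
    example_rng_generate(void)
    {
            mtx_lock_spin(&example_rng_mtx);
            /* ... touch the generator state ... */
            mtx_unlock_spin(&example_rng_mtx);
    }
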
>>>>> This is definitely overkill. The rebalancer's minor use of
>>>>> randomness absolutely does not require cryptographic-strength
>>>>> randomness to select a moment to rebalance the thread queues.
>>>>> Imposing a spin lock on the whole random machinery just to let
>>>>> the same random-gathering code be used for balance_ticks is
>>>>> detrimental to system responsiveness. The scheduler is fine
>>>>> even with congruential generators, as you can see in
>>>>> cpu_search(); look for the '69069'.
>>>>>
>>>>> Please do not enforce yet another spinlock for the system.
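
(Side note on the '69069': that's the classic Marsaglia linear congruential
multiplier. Here's a minimal userland illustration of that style of
generator, just to show how cheap and lock-free it is; this is not copied
from cpu_search().)

    #include <stdint.h>
    #include <stdio.h>

    /*
     * A 69069 linear congruential generator in the style referred to
     * above: one word of state, a multiply and an add per output, and
     * no lock needed when each consumer keeps its own state.
     */
    static uint32_t lcg_state = 1;

    static uint32_t
    lcg_next(void)
    {
            lcg_state = lcg_state * 69069 + 1;
            return (lcg_state);
    }

    int
    main(void)
    {
            for (int i = 0; i < 4; i++)
                    printf("%u\n", (unsigned)lcg_next());
            return (0);
    }
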
>>>>
>>>> The patch attached to
>>>> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=197922
>>>> switches sched_balance to use get_cyclecount, which is also a
>>>> suitable source of entropy for this purpose.
>>>>
>>>> It would also be possible to make the scheduler deterministic
>>>> here, using cpuid or some such thing to make sure all CPUs
>>>> don't fire the balancer at the same time.
>>>>
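
(One way to read that "deterministic via cpuid" suggestion, as a sketch
only: this is not in the attached patch, and it assumes sched_ule.c's
balance_ticks and balance_interval globals plus the usual PCPU_GET(cpuid)
and mp_ncpus symbols.)

    /*
     * Hypothetical: offset the next balance tick deterministically by
     * CPU id instead of by a random() term, so CPUs don't all fire the
     * balancer in the same tick.  Sketch only, not part of the PR.
     */
    static void
    sched_balance_stagger(void)
    {
            balance_ticks = max(balance_interval / 2, 1) +
                (PCPU_GET(cpuid) * balance_interval) / mp_ncpus;
    }
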
>>>
>>> The patch in the PR is probably a step in the right direction,
>>> but it might be too simple, unless somebody dispels my
>>> misconception. I remember seeing claims that on very low-end
>>> embedded devices the get_cyclecount() method may be
>>> non-functional, i.e. return some constant, probably 0. I somehow
>>> associate the MIPS arch with this problem.
>>>
>>
>> Talking to some of the arm and MIPS developers, it appears
>> get_cyclecount() may be slow on some older ARM hardware... (In
>> particular, hardware that doesn't support SMP anyway.)
>
> It simply doesn’t exist on older ARM hardware. Some SoCs have
> something similar to a real-time clock that you can read, but
> that’s not reliable for this use.
>
>> However, after a quick test on some machines here, I don't think
>> this function actually needs randomness, due to the large number
>> of other pathways ULE uses to balance load.
>>
>> New patch attached to the PR that simply removes the randomness
>> entirely.
>
> Are you sure about that?
I'm testing on an 8-core AMD Bulldozer machine without any noticeable
issues. You could game the scheduler a bit by running near the
"beginning" of the balance interval, but preemption, idle stealing,
and the migrations caused by priority recalculation pretty much wipe
out the effect.
That being said, get_cyclecount is pretty cheap, and this code doesn't
run *that* often, so if there's a rare edge case I'm not running into
that benefits from it, I suspect it's worth keeping the... 'faux'
randomness in there somewhere. Anyone else want to test it?
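
For anyone who wants to try it without digging through the PR, the code in
question is roughly the following (quoting sched_balance() in sched_ule.c
from memory, so treat it as a sketch rather than exact source):

    /*
     * Pick the next balance tick somewhere between 0.5x and 1.5x of
     * balance_interval.  The random() term is the 'faux' randomness
     * discussed above, and is the part the new patch drops.
     */
    balance_ticks = max(balance_interval / 2, 1);
    balance_ticks += random() % balance_interval;
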
--- Harrison
>
> Warner
>