svn commit: r239569 - head/etc/rc.d
David O'Brien
obrien at FreeBSD.org
Tue Sep 11 08:23:10 UTC 2012
> On 09/10/2012 23:46, David O'Brien wrote:
> > In what way did I suggest we don't need to seed the PRNG?
> > I simply removed an outdated and incorrect statement.
>
> Yes, the comment as it stood was out of date. I'm not sure that removing
> it (rather than rephrasing it) was the right call.
Doug, you're a FreeBSD committer; you know how to use an editor and
'svn diff'. Where is your patch suggesting a rephrase?
> > In fact, writing into /dev/random CANNOT "seed" Yarrow. All /dev/random
> > input is untrusted and is assumed to have _0_ entropy:
> >
> > void
> > random_yarrow_write(void *buf, int count)
> > {
> > ...
> >         random_harvest_internal(get_cyclecount(), (char *)buf + i,
> >             chunk, 0, 0, RANDOM_WRITE);
>
> You're taking that out of context. The 0 there is just an estimate, but
> it's added to the tailq anyway.
Yes, the input written to /dev/random is put into the generator
(provided there is seed-buffer space).
The "0, 0" is the 'bits' and 'frac' argument to
random_harvest_internal(), which become 'event->bits' and 'event->frac'.
Follow the code from there and point out how I am wrong.
What overrides the estimate then? This is discussed in the yarrow paper.
Have you read it yet?
> > So we have two issues -- (1) is how yarrow is operating per the design
> > with its checks on "seeded",
>
> I am specifically avoiding that issue as it is out of scope for the
> rc.d-related discussion. There is room for a larger discussion on
> whether or not we should make .seeded dynamic again.
>
> But regardless of that decision, it's unquestionable that we need to
> seed the device at boot time, which is what I am interested in.
Unquestionable in what regard? Unquestionable in that we must do so to
get any useful output from /dev/random?
Unquestionable in that FreeBSD will not boot? As I mentioned, I tested
that. The system booted up fine with no delays. Scary.
> ... and something that I pointed out that with the current defaults is
> close enough to impossible not to be a threat model we need to spend
> much time on.
Oh? You've done sufficient research? You've gathered 100,000 keys from
random FreeBSD machines across the Internet? I am aware of research
that has. I'm not saying FreeBSD has a problem as Debian did; but
you seem to be quickly dismissing something you've spent little
time investigating or thinking about.
> > Also, both jhb <201209050944.38042.jhb at freebsd.org> and RW
> > <20120905021248.5a17ace9 at gumby.homeunix.com> feel this likely does
> > happen just from reading the code. Please explain from either
> > (1) a code reading, or (2) your own instrumented kernel that dropping
> > of input to /dev/random does not occur.
>
> Once again, you're the one asserting that there is a problem with a
> system that has worked well for 12 years, so the burden of proof is on
> you. That said, I'm interested in Arthur's evidence.
Are you not a sufficiently capable C programmer to hack this up
yourself in the amount of time you've spent arguing it? Create a
couple-MB buffer and copy the internal RANDOM_WRITE seed buffers into it
when they are processed in random_kthread() or some other suitable
routine. You'll have a running stream of successive /dev/random writes.
Look at the output and match it to what was written into /dev/random.
This is not rocket science.
> >> The use of dd to feed the entropy in with 2k chunks is specifically to
> >> address this issue.
> >
> > Maybe I'm missing something... The code in 'initrandom' is
> > "| dd of=/dev/random bs=8k". Where are you getting 2k chunks from that?
>
> You're right, I didn't have a chance to look over the code when I wrote
> that response, and was going by my (obviously faulty) memory on that
> trivial point.
This seems to be one of your problems: you don't seem to be reading the
code or the papers before replying.
> My understanding is that Arthur's tests were with the
> current defaults. It would be interesting to see what happens if we
> reduce that to 4k (to match the input buffer size),
What do you think is the size of ${entropy_file}?
> or perhaps even lower.
Just how much do you expect the write(2) to be slowed down by breaking up
the 4k write into 2 or 3 chunks?
--
-- David (obrien at FreeBSD.org)