cvs commit: src/sys/kern kern_proc.c
M. Warner Losh
imp at bsdimp.com
Wed Jun 9 18:19:44 GMT 2004
In message: <20040609161818.GA25348 at freefall.freebsd.org>
Bosko Milekic <bmilekic at FreeBSD.org> writes:
: Frankly, I think it is obvious.
If it were obvious, then people wouldn't be making this mistake.
: In the BAD case, you decrement safely, but you check whether the
: refcount has hit zero without holding the lock that protects the
: count. This opens a race window between dropping the lock and
: checking for zero, in which another racing thread can drop the
: refcount to zero and reach the same check. Briefly, this can happen:
:
: - refcount of 'foo' is 2.
: - thread 1 enters BAD code, lowers refcount to 1, releases lock.
: - thread 2 enters BAD code, lowers refcount to 0, releases lock.
: - thread 1 checks whether the refcount is zero, decides it is,
: and proceeds to destroy(foo).
: - thread 2 checks whether the refcount is zero, decides it is,
: and proceeds to destroy(foo) as well.
:
: Conclusion: foo is destroyed twice.
Yes. That's correct. I understood that, but only after I sent the
message and thought about it for a while. That tends to be my metric
for 'needs a comment' or 'needs to be documented.'
: The GOOD code does not suffer from this problem. Here is a way to
: handle this sort of race if your reference counter is instead
: manipulated atomically (as opposed to protected by a mutex):
: [From Mbuf-related code]
:
: MEXT_REM_REF(m);	/* Atomic decrement of m->m_ext.ref_cnt */
: if (atomic_cmpset_int(m->m_ext.ref_cnt, 0, 1)) {
: 	/* Do the free here... */
: }
: return;
Maybe we need a locking section in the handbook that talks about these
techniques. The more people who know and understand them, the better
the locking in the rest of the kernel will be.
Warner