rmlock(9) two additions
Max Laier
max at laiers.net
Mon Aug 23 22:57:53 UTC 2010
On Sunday 22 August 2010 21:05:47 Max Laier wrote:
> On Saturday 21 August 2010 23:18:50 Stephan Uphoff wrote:
> > Max Laier wrote:
> > ...
> > The rm_try_rlock() will never succeed when the lock was last locked as a
> > writer.
> > Maybe add:
> >
> > void
> > _rm_wunlock(struct rmlock *rm)
> > {
> > +	rm->rm_noreadtoken = 0;
> > 	mtx_unlock(&rm->rm_lock);
> > }
> >
> > But then
> >
> > _rm_wlock(struct rmlock *rm)
> >
> > always needs to use IPIs - even when the lock was used last as a write
> > lock.
>
> I don't think this is a big problem - I can't see many use cases for
> rmlocks where you'd routinely see repeated wlocks without rlocks between
> them. However, I think there should be a release memory barrier
> before/while clearing rm_noreadtoken, otherwise readers may not see the
> data writes that are supposed to be protected by the lock?!?
> > ...
> I believe either solution will work. #1 is a bit more in the spirit of the
> rmlock - i.e. make the read case cheap and the write case expensive. I'm
> just not sure about the lock semantics.
>
> I guess a
>
> atomic_store_rel_int(&rm->rm_noreadtoken, 0);
>
> should work.
Thinking about this for a while makes me wonder: are readers really guaranteed
to see all of a writer's updates - even in the current version?
Example:
writer thread:
	rm_wlock();              // lock mtx, IPI, wait for reader drain
	modify_data();
	rm_wunlock();            // unlock mtx (this does an atomic_*_rel)

reader thread #1:
	// failed to get the lock, spinning/waiting on mtx
	mtx_lock();              // this does an atomic_*_acq -> this CPU sees the new data
	rm->rm_noreadtoken = 0;  // now new readers can enter quickly
	...

reader thread #2 (on a different CPU from reader #1):
	// enters rm_rlock() "after" rm_noreadtoken was reset -> no memory barrier
	// does this thread see the modifications?
I realize this is a somewhat pathological case, but would it be possible in
theory? Or is the compiler_memory_barrier() actually enough?
Otherwise, I think we need an IPI in rm_wunlock() that does an atomic_*_acq on
every CPU.
Thoughts?
Max
More information about the freebsd-arch mailing list