svn commit: r367713 - head/sys/kern
Mateusz Guzik
mjguzik at gmail.com
Wed Nov 18 15:20:36 UTC 2020
On 11/18/20, Konstantin Belousov <kostikbel at gmail.com> wrote:
> On Tue, Nov 17, 2020 at 03:36:31PM +0100, Mateusz Guzik wrote:
>> On 11/17/20, Konstantin Belousov <kostikbel at gmail.com> wrote:
>> > On Tue, Nov 17, 2020 at 04:15:12AM +0100, Mateusz Guzik wrote:
>> >> On 11/17/20, Konstantin Belousov <kostikbel at gmail.com> wrote:
>> >> > On Mon, Nov 16, 2020 at 03:09:19AM +0000, Mateusz Guzik wrote:
>> >> >> Author: mjg
>> >> >> Date: Mon Nov 16 03:09:18 2020
>> >> >> New Revision: 367713
>> >> >> URL: https://svnweb.freebsd.org/changeset/base/367713
>> >> >>
>> >> >> Log:
>> >> >> select: replace reference counting with memory barriers in selfd
>> >> >>
>> >> >> Refcounting was added to combat a race between selfdfree and
>> >> >> doselwakeup, but it adds avoidable overhead.
>> >> >>
>> >> >> selfdfree detects it can free the object by ->sf_si == NULL,
>> >> >> thus we can ensure that the condition only holds after all
>> >> >> accesses are completed.
>> >> >>
>> >> >> Modified:
>> >> >> head/sys/kern/sys_generic.c
>> >> >>
>> >> >> Modified: head/sys/kern/sys_generic.c
>> >> >> ==============================================================================
>> >> >> --- head/sys/kern/sys_generic.c Sun Nov 15 22:49:28 2020 (r367712)
>> >> >> +++ head/sys/kern/sys_generic.c Mon Nov 16 03:09:18 2020 (r367713)
>> >> >> @@ -156,7 +156,6 @@ struct selfd {
>> >> >>  	struct mtx	*sf_mtx;	/* Pointer to selinfo mtx. */
>> >> >>  	struct seltd	*sf_td;		/* (k) owning seltd. */
>> >> >>  	void		*sf_cookie;	/* (k) fd or pollfd. */
>> >> >> -	u_int		sf_refs;
>> >> >>  };
>> >> >>
>> >> >>  MALLOC_DEFINE(M_SELFD, "selfd", "selfd");
>> >> >> @@ -1704,16 +1703,17 @@ static void
>> >> >>  selfdfree(struct seltd *stp, struct selfd *sfp)
>> >> >>  {
>> >> >>  	STAILQ_REMOVE(&stp->st_selq, sfp, selfd, sf_link);
>> >> >> -	if (sfp->sf_si != NULL) {
>> >> >> +	/*
>> >> >> +	 * Paired with doselwakeup.
>> >> >> +	 */
>> >> >> +	if (atomic_load_acq_ptr((uintptr_t *)&sfp->sf_si) != (uintptr_t)NULL) {
>> >> > This could be != 0.
>> >> >
>> >> >>  		mtx_lock(sfp->sf_mtx);
>> >> >>  		if (sfp->sf_si != NULL) {
>> >> >>  			TAILQ_REMOVE(&sfp->sf_si->si_tdlist, sfp, sf_threads);
>> >> >> -			refcount_release(&sfp->sf_refs);
>> >> >>  		}
>> >> >>  		mtx_unlock(sfp->sf_mtx);
>> >> >>  	}
>> >> >> -	if (refcount_release(&sfp->sf_refs))
>> >> >> -		free(sfp, M_SELFD);
>> >> >> +	free(sfp, M_SELFD);
>> >> > What guarantees that doselwakeup() finished with sfp ?
>> >> >
>> >>
>> >> Release semantics provided by atomic_store_rel_ptr -- it means the
>> >> NULL store is the last access; neither the CPU nor the compiler is
>> >> going to reorder preceding loads/stores past it.
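>> >>
>> >> Schematically (a minimal sketch of the pairing):
>> >>
>> >> 	/* doselwakeup: the release store is the last access to *sfp */
>> >> 	atomic_store_rel_ptr((uintptr_t *)&sfp->sf_si, (uintptr_t)NULL);
>> >>
>> >> 	/* selfdfree: the acquire load pairs with the store above; if
>> >> 	 * it observes NULL, all of doselwakeup's accesses are done */
>> >> 	if (atomic_load_acq_ptr((uintptr_t *)&sfp->sf_si) == (uintptr_t)NULL)
>> >> 		free(sfp, M_SELFD);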
>> > It only guarantees that if we observed NULL as the result of the load.
>> >
>>
>> If that did not happen, selfdfree takes the lock. If the entry is
>> still there, it removes it, and doselwakeup will not see it after it
>> takes the lock. If the entry is not there, doselwakeup is done with
>> it, because it has already released the lock.
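>>
>> Annotating the locked path from the diff with those two cases:
>>
>> 	mtx_lock(sfp->sf_mtx);
>> 	if (sfp->sf_si != NULL) {
>> 		/* doselwakeup has not reached sfp; unlink it so the
>> 		 * wakeup side can never find it again */
>> 		TAILQ_REMOVE(&sfp->sf_si->si_tdlist, sfp, sf_threads);
>> 	}
>> 	/* else: doselwakeup already unlinked sfp, and its release
>> 	 * store of NULL was its last access to it */
>> 	mtx_unlock(sfp->sf_mtx);
>> 	free(sfp, M_SELFD);	/* safe in both cases */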
>
> I still do not understand it. selfdfree() indeed takes sf_mtx and rechecks
> sf_si. But what prevents doselwakeup() from storing NULL into sf_si at any
> moment, e.g. after free() is done? selfdfree() takes the seltd mutex, not
> the selinfo one.
>
That's the same lock.
In selrecord:
	mtxp = sip->si_mtx;
	if (mtxp == NULL)
		mtxp = mtx_pool_find(mtxpool_select, sip);
	/*
	 * Initialize the sfp and queue it in the thread.
	 */
	sfp->sf_si = sip;
	sfp->sf_mtx = mtxp;
	STAILQ_INSERT_TAIL(&stp->st_selq, sfp, sf_link);
	/*
	 * Now that we've locked the sip, check for initialization.
	 */
	mtx_lock(mtxp);
	if (sip->si_mtx == NULL) {
		sip->si_mtx = mtxp;
		TAILQ_INIT(&sip->si_tdlist);
	}
Both sfp->sf_mtx and sip->si_mtx end up pointing at the same pool mutex,
so mtx_lock(sip->si_mtx) in doselwakeup serializes against
mtx_lock(sfp->sf_mtx) in selfdfree.
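
For reference, the wakeup side takes that same mutex, and after r367713
its last access to sfp is the release store (a condensed sketch of the
doselwakeup loop; the thread-wakeup details are abbreviated):

	mtx_lock(sip->si_mtx);
	TAILQ_FOREACH_SAFE(sfp, &sip->si_tdlist, sf_threads, sfn) {
		TAILQ_REMOVE(&sip->si_tdlist, sfp, sf_threads);
		/* ... mark the owning seltd pending and wake it ... */
		/*
		 * Paired with selfdfree: nothing touches *sfp after
		 * this store.
		 */
		atomic_store_rel_ptr((uintptr_t *)&sfp->sf_si,
		    (uintptr_t)NULL);
	}
	mtx_unlock(sip->si_mtx);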
--
Mateusz Guzik <mjguzik gmail.com>