From: Mateusz Guzik <mjguzik@gmail.com>
Date: Tue, 7 Jun 2022 16:52:30 +0200
Subject: Re: git: eff9ee7c0c8e - main - hwpmc: Increase thread priority while iterating CPUs.
To: Alexander Motin
Cc: src-committers@freebsd.org, dev-commits-src-all@freebsd.org,
    dev-commits-src-main@freebsd.org

On 6/7/22, Alexander Motin wrote:
> The branch main has been updated by mav:
>
> URL: https://cgit.FreeBSD.org/src/commit/?id=eff9ee7c0c8e1fe782d6c01a29bb23224b02a847
>
> commit eff9ee7c0c8e1fe782d6c01a29bb23224b02a847
> Author:     Alexander Motin
> AuthorDate: 2022-06-07 02:36:16 +0000
> Commit:     Alexander Motin
> CommitDate: 2022-06-07 02:51:01 +0000
>
>     hwpmc: Increase thread priority while iterating CPUs.
>
>     This allows to profile already running high-priority threads, that
>     otherwise by blocking thread migration to respective CPUs blocked PMC
>     management, i.e. profiling could start only when workload completed.
>
>     While there, return the thread to its original CPU after iterating
>     the list. Otherwise all threads using PMC end up on the last CPU.

The iteration is a bogus scheme and needs to be replaced with something
similar to what rms locks are doing right now, see rms_rlock and
rms_wlock. Maybe I'll hack it up this month.
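To illustrate the direction (a rough sketch only -- the helpers below are
hypothetical, not existing hwpmc or rms code, and they assume the hwpmc
structures touched by the diff below): instead of sched_bind()ing the
current thread to every CPU in turn, a tiny handler can run on each CPU
via smp_rendezvous() and the expensive part can be done afterwards from
the calling thread:

/*
 * Hypothetical sketch.  Needs <sys/param.h>, <sys/smp.h> and <sys/pcpu.h>
 * for MAXCPU, smp_rendezvous(), mp_ncpus and curcpu.
 */
static struct pmclog_buffer *percpu_bufs[MAXCPU];

static void
collect_curbuf(void *arg)
{
    struct pmc_owner *po = arg;

    /* Runs on every CPU with interrupts disabled; only detach a pointer. */
    percpu_bufs[curcpu] = po->po_curbuf[curcpu];
    po->po_curbuf[curcpu] = NULL;
}

static void
pmclog_release_percpu_bufs(struct pmc_owner *po)
{
    struct pmclog_buffer *lb;
    int i;

    /* One IPI round trip instead of mp_ncpus thread migrations. */
    smp_rendezvous(NULL, collect_curbuf, NULL, po);

    /* Release the buffers here, where sleeping and mutexes are fine. */
    for (i = 0; i < mp_ncpus; i++) {
        if ((lb = percpu_bufs[i]) != NULL) {
            percpu_bufs[i] = NULL;
            PMCLOG_RESET_BUFFER_DESCRIPTOR(lb);
            pmc_plb_rele(lb);
        }
    }
}

The rms write side synchronizes with its per-CPU readers in a conceptually
similar way, without bouncing the calling thread across CPUs.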
>
>     MFC after:      1 month
> ---
>  sys/dev/hwpmc/hwpmc_logging.c | 21 +++++++++------------
>  sys/dev/hwpmc/hwpmc_mod.c     | 17 ++++++++---------
>  sys/sys/pmc.h                 |  4 ++++
>  3 files changed, 21 insertions(+), 21 deletions(-)
>
> diff --git a/sys/dev/hwpmc/hwpmc_logging.c b/sys/dev/hwpmc/hwpmc_logging.c
> index c16adff8c842..8d9015af78e9 100644
> --- a/sys/dev/hwpmc/hwpmc_logging.c
> +++ b/sys/dev/hwpmc/hwpmc_logging.c
> @@ -764,6 +764,7 @@ pmclog_deconfigure_log(struct pmc_owner *po)
>  {
>      int error;
>      struct pmclog_buffer *lb;
> +    struct pmc_binding pb;
>
>      PMCDBG1(LOG,CFG,1, "de-config po=%p", po);
>
> @@ -787,19 +788,16 @@ pmclog_deconfigure_log(struct pmc_owner *po)
>          PMCLOG_RESET_BUFFER_DESCRIPTOR(lb);
>          pmc_plb_rele(lb);
>      }
> +    pmc_save_cpu_binding(&pb);
>      for (int i = 0; i < mp_ncpus; i++) {
> -        thread_lock(curthread);
> -        sched_bind(curthread, i);
> -        thread_unlock(curthread);
> +        pmc_select_cpu(i);
>          /* return the 'current' buffer to the global pool */
>          if ((lb = po->po_curbuf[curcpu]) != NULL) {
>              PMCLOG_RESET_BUFFER_DESCRIPTOR(lb);
>              pmc_plb_rele(lb);
>          }
>      }
> -    thread_lock(curthread);
> -    sched_unbind(curthread);
> -    thread_unlock(curthread);
> +    pmc_restore_cpu_binding(&pb);
>
>      /* drop a reference to the fd */
>      if (po->po_file != NULL) {
> @@ -869,18 +867,17 @@ pmclog_schedule_one_cond(struct pmc_owner *po)
>  static void
>  pmclog_schedule_all(struct pmc_owner *po)
>  {
> +    struct pmc_binding pb;
> +
>      /*
>       * Schedule the current buffer if any and not empty.
>       */
> +    pmc_save_cpu_binding(&pb);
>      for (int i = 0; i < mp_ncpus; i++) {
> -        thread_lock(curthread);
> -        sched_bind(curthread, i);
> -        thread_unlock(curthread);
> +        pmc_select_cpu(i);
>          pmclog_schedule_one_cond(po);
>      }
> -    thread_lock(curthread);
> -    sched_unbind(curthread);
> -    thread_unlock(curthread);
> +    pmc_restore_cpu_binding(&pb);
>  }
>
>  int
> diff --git a/sys/dev/hwpmc/hwpmc_mod.c b/sys/dev/hwpmc/hwpmc_mod.c
> index e94527041af8..18b8bb1674a3 100644
> --- a/sys/dev/hwpmc/hwpmc_mod.c
> +++ b/sys/dev/hwpmc/hwpmc_mod.c
> @@ -249,9 +249,6 @@ static void pmc_process_thread_delete(struct thread *td);
>  static void pmc_process_thread_userret(struct thread *td);
>  static void pmc_remove_owner(struct pmc_owner *po);
>  static void pmc_remove_process_descriptor(struct pmc_process *pp);
> -static void pmc_restore_cpu_binding(struct pmc_binding *pb);
> -static void pmc_save_cpu_binding(struct pmc_binding *pb);
> -static void pmc_select_cpu(int cpu);
>  static int pmc_start(struct pmc *pm);
>  static int pmc_stop(struct pmc *pm);
>  static int pmc_syscall_handler(struct thread *td, void *syscall_args);
> @@ -739,13 +736,14 @@ pmc_ri_to_classdep(struct pmc_mdep *md, int ri, int *adjri)
>   * save the cpu binding of the current kthread
>   */
>
> -static void
> +void
>  pmc_save_cpu_binding(struct pmc_binding *pb)
>  {
>      PMCDBG0(CPU,BND,2, "save-cpu");
>      thread_lock(curthread);
>      pb->pb_bound = sched_is_bound(curthread);
>      pb->pb_cpu = curthread->td_oncpu;
> +    pb->pb_priority = curthread->td_priority;
>      thread_unlock(curthread);
>      PMCDBG1(CPU,BND,2, "save-cpu cpu=%d", pb->pb_cpu);
>  }
> @@ -754,16 +752,16 @@ pmc_save_cpu_binding(struct pmc_binding *pb)
>   * restore the cpu binding of the current thread
>   */
>
> -static void
> +void
>  pmc_restore_cpu_binding(struct pmc_binding *pb)
>  {
>      PMCDBG2(CPU,BND,2, "restore-cpu curcpu=%d restore=%d",
>          curthread->td_oncpu, pb->pb_cpu);
>      thread_lock(curthread);
> -    if (pb->pb_bound)
> -        sched_bind(curthread, pb->pb_cpu);
> -    else
> +    sched_bind(curthread, pb->pb_cpu);
> +    if (!pb->pb_bound)
>          sched_unbind(curthread);
> +    sched_prio(curthread, pb->pb_priority);
>      thread_unlock(curthread);
>      PMCDBG0(CPU,BND,2, "restore-cpu done");
>  }
> @@ -772,7 +770,7 @@ pmc_restore_cpu_binding(struct pmc_binding *pb)
>   * move execution over the specified cpu and bind it there.
>   */
>
> -static void
> +void
>  pmc_select_cpu(int cpu)
>  {
>      KASSERT(cpu >= 0 && cpu < pmc_cpu_max(),
> @@ -784,6 +782,7 @@ pmc_select_cpu(int cpu)
>
>      PMCDBG1(CPU,SEL,2, "select-cpu cpu=%d", cpu);
>      thread_lock(curthread);
> +    sched_prio(curthread, PRI_MIN);
>      sched_bind(curthread, cpu);
>      thread_unlock(curthread);
>
> diff --git a/sys/sys/pmc.h b/sys/sys/pmc.h
> index 372e77ecdee7..18c38ca36659 100644
> --- a/sys/sys/pmc.h
> +++ b/sys/sys/pmc.h
> @@ -997,6 +997,7 @@ struct pmc_cpu {
>  struct pmc_binding {
>      int     pb_bound;       /* is bound? */
>      int     pb_cpu;         /* if so, to which CPU */
> +    u_char  pb_priority;    /* Thread active priority. */
>  };
>
>  struct pmc_mdep;
> @@ -1225,6 +1226,9 @@ int pmc_save_kernel_callchain(uintptr_t *_cc, int _maxsamples,
>      struct trapframe *_tf);
>  int pmc_save_user_callchain(uintptr_t *_cc, int _maxsamples,
>      struct trapframe *_tf);
> +void pmc_restore_cpu_binding(struct pmc_binding *pb);
> +void pmc_save_cpu_binding(struct pmc_binding *pb);
> +void pmc_select_cpu(int cpu);
>  struct pmc_mdep *pmc_mdep_alloc(int nclasses);
>  void pmc_mdep_free(struct pmc_mdep *md);
>  uint64_t pmc_rdtsc(void);
>

-- 
Mateusz Guzik