From nobody Thu Jan 16 18:08:02 2025
Date: Thu, 16 Jan 2025 18:08:02 GMT
Message-Id: <202501161808.50GI82RY091155@gitrepo.freebsd.org>
To: src-committers@FreeBSD.org, dev-commits-src-all@FreeBSD.org,
    dev-commits-src-branches@FreeBSD.org
From: Olivier Certner
Subject: git: 1fc5db8e9f4b - stable/14 - atomics: Constify loads
List-Id: Commit messages for all branches of the src repository
List-Archive: https://lists.freebsd.org/archives/dev-commits-src-all
X-BeenThere: dev-commits-src-all@freebsd.org
Sender: owner-dev-commits-src-all@FreeBSD.org
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
X-Git-Committer: olce
X-Git-Repository: src
X-Git-Refname: refs/heads/stable/14
X-Git-Reftype: branch
X-Git-Commit: 1fc5db8e9f4b10fed01fc1c07d0e5725c4089b97
Auto-Submitted: auto-generated

The branch stable/14 has been updated by olce:

URL: https://cgit.FreeBSD.org/src/commit/?id=1fc5db8e9f4b10fed01fc1c07d0e5725c4089b97

commit 1fc5db8e9f4b10fed01fc1c07d0e5725c4089b97
Author:     Olivier Certner
AuthorDate: 2024-07-19 15:23:19 +0000
Commit:     Olivier Certner
CommitDate: 2025-01-16 18:06:53 +0000

    atomics: Constify loads

    In order to match reality, allow using these functions with pointers to
    const objects, and bring us closer to C11.

    Remove the '+' modifier in atomic_load_acq_64_i586()'s inline asm
    statement's constraint for '*p' (the value to load).  CMPXCHG8B always
    writes back some value, even when the value exchange does not happen,
    in which case what was read is written back.  atomic_load_acq_64_i586()
    further takes care of the operation atomically writing back the same
    value that was read in any case.  All in all, this makes the inline
    asm's write back undetectable by any other code, whether executing on
    other CPUs or on the same CPU before and after the call to
    atomic_load_acq_64_i586(), except for the fact that CMPXCHG8B will
    trigger a #GP(0) if the memory address is part of a read-only mapping.
    This unfortunate property is however out of scope of the C abstract
    machine, and in particular independent of whether the 'uint64_t'
    pointed to is declared 'const' or not.
    Approved by:            markj (mentor)
    MFC after:              5 days
    Sponsored by:           The FreeBSD Foundation
    Differential Revision:  https://reviews.freebsd.org/D46887

    (cherry picked from commit 5e9a82e898d55816c366cfa3ffbca84f02569fe5)
---
 sys/amd64/include/atomic.h   |  2 +-
 sys/arm/include/atomic.h     |  8 ++++----
 sys/arm64/include/atomic.h   |  2 +-
 sys/i386/include/atomic.h    | 28 ++++++++++++++++------------
 sys/powerpc/include/atomic.h |  6 +++---
 sys/riscv/include/atomic.h   |  4 ++--
 sys/sys/_atomic64e.h         |  2 +-
 sys/sys/_atomic_subword.h    |  4 ++--
 sys/sys/atomic_common.h      | 20 ++++++++++----------
 9 files changed, 40 insertions(+), 36 deletions(-)

diff --git a/sys/amd64/include/atomic.h b/sys/amd64/include/atomic.h
index 8eb3fddb0478..91a4398a0815 100644
--- a/sys/amd64/include/atomic.h
+++ b/sys/amd64/include/atomic.h
@@ -304,7 +304,7 @@ __storeload_barrier(void)
 
 #define ATOMIC_LOAD(TYPE)                                       \
 static __inline u_##TYPE                                        \
-atomic_load_acq_##TYPE(volatile u_##TYPE *p)                    \
+atomic_load_acq_##TYPE(const volatile u_##TYPE *p)              \
 {                                                               \
         u_##TYPE res;                                           \
                                                                 \
diff --git a/sys/arm/include/atomic.h b/sys/arm/include/atomic.h
index 73e769885f71..a7b5ed76c0ad 100644
--- a/sys/arm/include/atomic.h
+++ b/sys/arm/include/atomic.h
@@ -614,7 +614,7 @@ atomic_fetchadd_long(volatile u_long *p, u_long val)
 }
 
 static __inline uint32_t
-atomic_load_acq_32(volatile uint32_t *p)
+atomic_load_acq_32(const volatile uint32_t *p)
 {
         uint32_t v;
 
@@ -624,7 +624,7 @@ atomic_load_acq_32(volatile uint32_t *p)
 }
 
 static __inline uint64_t
-atomic_load_64(volatile uint64_t *p)
+atomic_load_64(const volatile uint64_t *p)
 {
         uint64_t ret;
 
@@ -643,7 +643,7 @@ atomic_load_64(volatile uint64_t *p)
 }
 
 static __inline uint64_t
-atomic_load_acq_64(volatile uint64_t *p)
+atomic_load_acq_64(const volatile uint64_t *p)
 {
         uint64_t ret;
 
@@ -653,7 +653,7 @@ atomic_load_acq_64(volatile uint64_t *p)
 }
 
 static __inline u_long
-atomic_load_acq_long(volatile u_long *p)
+atomic_load_acq_long(const volatile u_long *p)
 {
         u_long v;
 
diff --git a/sys/arm64/include/atomic.h b/sys/arm64/include/atomic.h
index f7018f2f9e0b..998a49c02e60 100644
--- a/sys/arm64/include/atomic.h
+++ b/sys/arm64/include/atomic.h
@@ -465,7 +465,7 @@ _ATOMIC_TEST_OP(set, orr, set)
 
 #define _ATOMIC_LOAD_ACQ_IMPL(t, w, s)                          \
 static __inline uint##t##_t                                     \
-atomic_load_acq_##t(volatile uint##t##_t *p)                    \
+atomic_load_acq_##t(const volatile uint##t##_t *p)              \
 {                                                               \
         uint##t##_t ret;                                        \
                                                                 \
diff --git a/sys/i386/include/atomic.h b/sys/i386/include/atomic.h
index 71aa09c57007..e68af0454130 100644
--- a/sys/i386/include/atomic.h
+++ b/sys/i386/include/atomic.h
@@ -249,7 +249,7 @@ atomic_testandclear_int(volatile u_int *p, u_int v)
 
 #define ATOMIC_LOAD(TYPE)                                       \
 static __inline u_##TYPE                                        \
-atomic_load_acq_##TYPE(volatile u_##TYPE *p)                    \
+atomic_load_acq_##TYPE(const volatile u_##TYPE *p)              \
 {                                                               \
         u_##TYPE res;                                           \
                                                                 \
@@ -302,8 +302,8 @@ atomic_thread_fence_seq_cst(void)
 #ifdef WANT_FUNCTIONS
 int             atomic_cmpset_64_i386(volatile uint64_t *, uint64_t, uint64_t);
 int             atomic_cmpset_64_i586(volatile uint64_t *, uint64_t, uint64_t);
-uint64_t        atomic_load_acq_64_i386(volatile uint64_t *);
-uint64_t        atomic_load_acq_64_i586(volatile uint64_t *);
+uint64_t        atomic_load_acq_64_i386(const volatile uint64_t *);
+uint64_t        atomic_load_acq_64_i586(const volatile uint64_t *);
 void            atomic_store_rel_64_i386(volatile uint64_t *, uint64_t);
 void            atomic_store_rel_64_i586(volatile uint64_t *, uint64_t);
 uint64_t        atomic_swap_64_i386(volatile uint64_t *, uint64_t);
@@ -353,12 +353,12 @@ atomic_fcmpset_64_i386(volatile uint64_t *dst, uint64_t *expect, uint64_t src)
 }
 
 static __inline uint64_t
-atomic_load_acq_64_i386(volatile uint64_t *p)
+atomic_load_acq_64_i386(const volatile uint64_t *p)
 {
-        volatile uint32_t *q;
+        const volatile uint32_t *q;
         uint64_t res;
 
-        q = (volatile uint32_t *)p;
+        q = (const volatile uint32_t *)p;
         __asm __volatile(
         "       pushfl ;                "
         "       cli ;                   "
@@ -447,8 +447,12 @@ atomic_fcmpset_64_i586(volatile uint64_t *dst, uint64_t *expect, uint64_t src)
         return (res);
 }
 
+/*
+ * Architecturally always writes back some value to '*p' so will trigger
+ * a #GP(0) on read-only mappings.
+ */
 static __inline uint64_t
-atomic_load_acq_64_i586(volatile uint64_t *p)
+atomic_load_acq_64_i586(const volatile uint64_t *p)
 {
         uint64_t res;
 
@@ -456,9 +460,9 @@ atomic_fcmpset_64_i586(volatile uint64_t *dst, uint64_t *expect, uint64_t src)
         "       movl %%ebx,%%eax ;     "
         "       movl %%ecx,%%edx ;     "
         "       lock; cmpxchg8b %1"
-        : "=&A" (res),                  /* 0 */
-          "+m" (*p)                     /* 1 */
-        : : "memory", "cc");
+        : "=&A" (res)                   /* 0 */
+        : "m" (*p)                      /* 1 */
+        : "memory", "cc");
 
         return (res);
 }
@@ -514,7 +518,7 @@ atomic_fcmpset_64(volatile uint64_t *dst, uint64_t *expect, uint64_t src)
 }
 
 static __inline uint64_t
-atomic_load_acq_64(volatile uint64_t *p)
+atomic_load_acq_64(const volatile uint64_t *p)
 {
 
         if ((cpu_feature & CPUID_CX8) == 0)
@@ -842,7 +846,7 @@ atomic_swap_long(volatile u_long *p, u_long v)
 #define atomic_subtract_rel_ptr(p, v) \
         atomic_subtract_rel_int((volatile u_int *)(p), (u_int)(v))
 #define atomic_load_acq_ptr(p) \
-        atomic_load_acq_int((volatile u_int *)(p))
+        atomic_load_acq_int((const volatile u_int *)(p))
 #define atomic_store_rel_ptr(p, v) \
         atomic_store_rel_int((volatile u_int *)(p), (v))
 #define atomic_cmpset_ptr(dst, old, new) \
diff --git a/sys/powerpc/include/atomic.h b/sys/powerpc/include/atomic.h
index 0c3a57698342..015a283e2de7 100644
--- a/sys/powerpc/include/atomic.h
+++ b/sys/powerpc/include/atomic.h
@@ -502,7 +502,7 @@ atomic_readandclear_long(volatile u_long *addr)
  */
 #define ATOMIC_STORE_LOAD(TYPE)                                 \
 static __inline u_##TYPE                                        \
-atomic_load_acq_##TYPE(volatile u_##TYPE *p)                    \
+atomic_load_acq_##TYPE(const volatile u_##TYPE *p)              \
 {                                                               \
         u_##TYPE v;                                             \
                                                                 \
@@ -534,10 +534,10 @@ ATOMIC_STORE_LOAD(long)
 #define atomic_store_rel_ptr   atomic_store_rel_long
 #else
 static __inline u_long
-atomic_load_acq_long(volatile u_long *addr)
+atomic_load_acq_long(const volatile u_long *addr)
 {
 
-        return ((u_long)atomic_load_acq_int((volatile u_int *)addr));
+        return ((u_long)atomic_load_acq_int((const volatile u_int *)addr));
 }
 
 static __inline void
diff --git a/sys/riscv/include/atomic.h b/sys/riscv/include/atomic.h
index aaa7add6894b..bf9c42453d8b 100644
--- a/sys/riscv/include/atomic.h
+++ b/sys/riscv/include/atomic.h
@@ -121,7 +121,7 @@ ATOMIC_FCMPSET_ACQ_REL(16);
 
 #define atomic_load_acq_16     atomic_load_acq_16
 static __inline uint16_t
-atomic_load_acq_16(volatile uint16_t *p)
+atomic_load_acq_16(const volatile uint16_t *p)
 {
         uint16_t ret;
 
@@ -312,7 +312,7 @@ ATOMIC_CMPSET_ACQ_REL(32);
 ATOMIC_FCMPSET_ACQ_REL(32);
 
 static __inline uint32_t
-atomic_load_acq_32(volatile uint32_t *p)
+atomic_load_acq_32(const volatile uint32_t *p)
 {
         uint32_t ret;
 
diff --git a/sys/sys/_atomic64e.h b/sys/sys/_atomic64e.h
index f7245dafb98a..82fe817f307b 100644
--- a/sys/sys/_atomic64e.h
+++ b/sys/sys/_atomic64e.h
@@ -55,7 +55,7 @@ int  atomic_fcmpset_64(volatile u_int64_t *, u_int64_t *, u_int64_t);
 
 u_int64_t       atomic_fetchadd_64(volatile u_int64_t *, u_int64_t);
 
-u_int64_t       atomic_load_64(volatile u_int64_t *);
+u_int64_t       atomic_load_64(const volatile u_int64_t *);
 #define atomic_load_acq_64     atomic_load_64
 
 void    atomic_readandclear_64(volatile u_int64_t *);
diff --git a/sys/sys/_atomic_subword.h b/sys/sys/_atomic_subword.h
index dad23383f642..dee5a3bed871 100644
--- a/sys/sys/_atomic_subword.h
+++ b/sys/sys/_atomic_subword.h
@@ -176,7 +176,7 @@ atomic_fcmpset_16(__volatile uint16_t *addr, uint16_t *old, uint16_t val)
 
 #ifndef atomic_load_acq_8
 static __inline uint8_t
-atomic_load_acq_8(volatile uint8_t *p)
+atomic_load_acq_8(const volatile uint8_t *p)
 {
         int shift;
         uint8_t ret;
@@ -189,7 +189,7 @@ atomic_load_acq_8(volatile uint8_t *p)
 
 #ifndef atomic_load_acq_16
 static __inline uint16_t
-atomic_load_acq_16(volatile uint16_t *p)
+atomic_load_acq_16(const volatile uint16_t *p)
 {
         int shift;
         uint16_t ret;
diff --git a/sys/sys/atomic_common.h b/sys/sys/atomic_common.h
index 83e0d5af583d..e03cd93c2d4a 100644
--- a/sys/sys/atomic_common.h
+++ b/sys/sys/atomic_common.h
@@ -36,18 +36,18 @@
 
 #include
 
-#define __atomic_load_bool_relaxed(p)  (*(volatile _Bool *)(p))
+#define __atomic_load_bool_relaxed(p)  (*(const volatile _Bool *)(p))
 #define __atomic_store_bool_relaxed(p, v)      \
     (*(volatile _Bool *)(p) = (_Bool)(v))
 
-#define __atomic_load_char_relaxed(p)  (*(volatile u_char *)(p))
-#define __atomic_load_short_relaxed(p) (*(volatile u_short *)(p))
-#define __atomic_load_int_relaxed(p)   (*(volatile u_int *)(p))
-#define __atomic_load_long_relaxed(p)  (*(volatile u_long *)(p))
-#define __atomic_load_8_relaxed(p)     (*(volatile uint8_t *)(p))
-#define __atomic_load_16_relaxed(p)    (*(volatile uint16_t *)(p))
-#define __atomic_load_32_relaxed(p)    (*(volatile uint32_t *)(p))
-#define __atomic_load_64_relaxed(p)    (*(volatile uint64_t *)(p))
+#define __atomic_load_char_relaxed(p)  (*(const volatile u_char *)(p))
+#define __atomic_load_short_relaxed(p) (*(const volatile u_short *)(p))
+#define __atomic_load_int_relaxed(p)   (*(const volatile u_int *)(p))
+#define __atomic_load_long_relaxed(p)  (*(const volatile u_long *)(p))
+#define __atomic_load_8_relaxed(p)     (*(const volatile uint8_t *)(p))
+#define __atomic_load_16_relaxed(p)    (*(const volatile uint16_t *)(p))
+#define __atomic_load_32_relaxed(p)    (*(const volatile uint32_t *)(p))
+#define __atomic_load_64_relaxed(p)    (*(const volatile uint64_t *)(p))
 
 #define __atomic_store_char_relaxed(p, v)       \
         (*(volatile u_char *)(p) = (u_char)(v))
@@ -124,7 +124,7 @@ __atomic_store_generic(p, v, int64_t, uint64_t, 64)
 #endif
 
-#define atomic_load_ptr(p)      (*(volatile __typeof(*p) *)(p))
+#define atomic_load_ptr(p)      (*(const volatile __typeof(*p) *)(p))
 #define atomic_store_ptr(p, v)  (*(volatile __typeof(*p) *)(p) = (v))
 
 /*
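
[Editorial illustration; not part of the original mail or commit.] The sketch below shows what the constified prototypes permit: in-kernel code that only ever reads a shared object can hold a pointer to const and still call the atomic(9) load primitives directly, without casting the qualifier away. The structure and function names are hypothetical and the code is uncompiled; atomic_load_acq_int() is the real primitive whose parameter becomes 'const volatile u_int *' with this change.

/*
 * Hypothetical, uncompiled kernel-style sketch.
 */
#include <sys/types.h>
#include <machine/atomic.h>

struct pkt_stats {                      /* hypothetical structure */
        u_int           ps_drops;       /* updated atomically by writers */
        uint64_t        ps_bytes;
};

static u_int
pkt_stats_drops(const struct pkt_stats *sp)
{
        /*
         * '&sp->ps_drops' has type 'const u_int *'.  After this commit it
         * converts implicitly to the 'const volatile u_int *' parameter;
         * before, the call required casting away 'const'.
         */
        return (atomic_load_acq_int(&sp->ps_drops));
}

As the commit message notes, the i386 CMPXCHG8B-based 64-bit load remains the one case where 'const' at the C level does not imply the absence of a hardware-level write: performing that load on memory backed by a genuinely read-only mapping can still fault.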