core/sync/atomic.rs

//! Atomic types
//!
//! Atomic types provide primitive shared-memory communication between
//! threads, and are the building blocks of other concurrent
//! types.
//!
//! This module defines atomic versions of a select number of primitive
//! types, including [`AtomicBool`], [`AtomicIsize`], [`AtomicUsize`],
//! [`AtomicI8`], [`AtomicU16`], etc.
//! Atomic types present operations that, when used correctly, synchronize
//! updates between threads.
//!
//! Atomic variables are safe to share between threads (they implement [`Sync`])
//! but they do not themselves provide the mechanism for sharing and follow the
//! [threading model](../../../std/thread/index.html#the-threading-model) of Rust.
//! The most common way to share an atomic variable is to put it into an [`Arc`][arc] (an
//! atomically-reference-counted shared pointer).
//!
//! [arc]: ../../../std/sync/struct.Arc.html
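//!
//! For example, a counter shared with a worker thread might look like this
//! (a minimal sketch):
//!
//! ```
//! use std::sync::Arc;
//! use std::sync::atomic::{AtomicUsize, Ordering};
//! use std::thread;
//!
//! let counter = Arc::new(AtomicUsize::new(0));
//! let worker = {
//!     let counter = Arc::clone(&counter);
//!     thread::spawn(move || counter.fetch_add(1, Ordering::Relaxed))
//! };
//! worker.join().unwrap();
//! assert_eq!(counter.load(Ordering::Relaxed), 1);
//! ```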
//!
//! Atomic types may be stored in static variables, initialized using
//! the constant initializers like [`AtomicBool::new`]. Atomic statics
//! are often used for lazy global initialization.
//!
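//! For instance, a `static` atomic can guard one-time setup (a minimal sketch;
//! the names here are purely illustrative):
//!
//! ```
//! use std::sync::atomic::{AtomicBool, Ordering};
//!
//! static STARTED: AtomicBool = AtomicBool::new(false);
//!
//! fn start_once() {
//!     // `swap` returns the previous value, so only the first caller
//!     // observes `false` and runs the one-time setup.
//!     if !STARTED.swap(true, Ordering::AcqRel) {
//!         // ... one-time setup would go here ...
//!     }
//! }
//!
//! start_once();
//! start_once(); // does nothing the second time
//! ```
//!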
//! ## Memory model for atomic accesses
//!
//! Rust atomics currently follow the same rules as [C++20 atomics][cpp], specifically the rules
//! from the [`intro.races`][cpp-intro.races] section, without the "consume" memory ordering. Since
//! C++ uses an object-based memory model whereas Rust is access-based, a bit of translation work
//! has to be done to apply the C++ rules to Rust: whenever C++ talks about "the value of an
//! object", we understand that to mean the resulting bytes obtained when doing a read. When the C++
//! standard talks about "the value of an atomic object", this refers to the result of doing an
//! atomic load (via the operations provided in this module). A "modification of an atomic object"
//! refers to an atomic store.
//!
//! The end result is *almost* equivalent to saying that creating a *shared reference* to one of the
//! Rust atomic types corresponds to creating an `atomic_ref` in C++, with the `atomic_ref` being
//! destroyed when the lifetime of the shared reference ends. The main difference is that Rust
//! permits concurrent atomic and non-atomic reads to the same memory; those cause no issue in the
//! C++ memory model and are forbidden in C++ only because memory is partitioned into "atomic
//! objects" and "non-atomic objects" (with `atomic_ref` temporarily converting a non-atomic object
//! into an atomic object).
//!
//! The most important aspect of this model is that *data races* are undefined behavior. A data race
//! is defined as conflicting non-synchronized accesses where at least one of the accesses is
//! non-atomic. Here, accesses are *conflicting* if they affect overlapping regions of memory and at
//! least one of them is a write. (A `compare_exchange` or `compare_exchange_weak` that does not
//! succeed is not considered a write.) They are *non-synchronized* if neither of them
//! *happens-before* the other, according to the happens-before order of the memory model.
//!
//! The other possible cause of undefined behavior in the memory model is mixed-size accesses: Rust
//! inherits the C++ limitation that non-synchronized conflicting atomic accesses may not partially
//! overlap. In other words, every pair of non-synchronized atomic accesses must be either disjoint,
//! access the exact same memory (including using the same access size), or both be reads.
//!
//! Each atomic access takes an [`Ordering`] which defines how the operation interacts with the
//! happens-before order. These orderings behave the same as the corresponding [C++20 atomic
//! orderings][cpp_memory_order]. For more information, see the [nomicon].
//!
//! [cpp]: https://en.cppreference.com/w/cpp/atomic
//! [cpp-intro.races]: https://timsong-cpp.github.io/cppwp/n4868/intro.multithread#intro.races
//! [cpp_memory_order]: https://en.cppreference.com/w/cpp/atomic/memory_order
//! [nomicon]: ../../../nomicon/atomics.html
//!
//! ```rust,no_run undefined_behavior
//! use std::sync::atomic::{AtomicU16, AtomicU8, Ordering};
//! use std::mem::transmute;
//! use std::thread;
//!
//! let atomic = AtomicU16::new(0);
//!
//! thread::scope(|s| {
//!     // This is UB: conflicting non-synchronized accesses, at least one of which is non-atomic.
//!     s.spawn(|| atomic.store(1, Ordering::Relaxed)); // atomic store
//!     s.spawn(|| unsafe { atomic.as_ptr().write(2) }); // non-atomic write
//! });
//!
//! thread::scope(|s| {
//!     // This is fine: the accesses do not conflict (as none of them performs any modification).
//!     // In C++ this would be disallowed since creating an `atomic_ref` precludes
//!     // further non-atomic accesses, but Rust does not have that limitation.
//!     s.spawn(|| atomic.load(Ordering::Relaxed)); // atomic load
//!     s.spawn(|| unsafe { atomic.as_ptr().read() }); // non-atomic read
//! });
//!
//! thread::scope(|s| {
//!     // This is fine: `join` synchronizes the code in a way such that the atomic
//!     // store happens-before the non-atomic write.
//!     let handle = s.spawn(|| atomic.store(1, Ordering::Relaxed)); // atomic store
//!     handle.join().expect("thread won't panic"); // synchronize
//!     s.spawn(|| unsafe { atomic.as_ptr().write(2) }); // non-atomic write
//! });
//!
//! thread::scope(|s| {
//!     // This is UB: non-synchronized conflicting differently-sized atomic accesses.
//!     s.spawn(|| atomic.store(1, Ordering::Relaxed));
//!     s.spawn(|| unsafe {
//!         let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
//!         differently_sized.store(2, Ordering::Relaxed);
//!     });
//! });
//!
//! thread::scope(|s| {
//!     // This is fine: `join` synchronizes the code in a way such that
//!     // the 1-byte store happens-before the 2-byte store.
//!     let handle = s.spawn(|| atomic.store(1, Ordering::Relaxed));
//!     handle.join().expect("thread won't panic");
//!     s.spawn(|| unsafe {
//!         let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
//!         differently_sized.store(2, Ordering::Relaxed);
//!     });
//! });
//! ```
//!
//! # Portability
//!
//! All atomic types in this module are guaranteed to be [lock-free] if they're
//! available. This means they don't internally acquire a global mutex. Atomic
//! types and operations are not guaranteed to be wait-free. This means that
//! operations like `fetch_or` may be implemented with a compare-and-swap loop.
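//!
//! As an illustration, such a compare-and-swap loop might look like the following
//! (a sketch only; it is not necessarily how any given target implements `fetch_or`):
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! fn fetch_or(a: &AtomicUsize, val: usize, order: Ordering) -> usize {
//!     let mut old = a.load(Ordering::Relaxed);
//!     loop {
//!         match a.compare_exchange_weak(old, old | val, order, Ordering::Relaxed) {
//!             Ok(prev) => return prev,
//!             Err(prev) => old = prev,
//!         }
//!     }
//! }
//!
//! let x = AtomicUsize::new(0b01);
//! assert_eq!(fetch_or(&x, 0b10, Ordering::Relaxed), 0b01);
//! assert_eq!(x.load(Ordering::Relaxed), 0b11);
//! ```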
//!
//! Atomic operations may be implemented at the instruction level with
//! larger-size atomics. For example, some platforms use 4-byte atomic
//! instructions to implement `AtomicI8`. Note that this emulation should not
//! affect the correctness of code; it's just something to be aware of.
//!
//! The atomic types in this module might not be available on all platforms. The
//! atomic types here are all widely available, however, and can generally be
//! relied upon to exist. Some notable exceptions are:
//!
//! * PowerPC and MIPS platforms with 32-bit pointers do not have `AtomicU64` or
//!   `AtomicI64` types.
//! * ARM platforms like `armv5te` that aren't for Linux only provide `load`
//!   and `store` operations, and do not support Compare and Swap (CAS)
//!   operations, such as `swap`, `fetch_add`, etc. Additionally on Linux,
//!   these CAS operations are implemented via [operating system support], which
//!   may come with a performance penalty.
//! * ARM targets with `thumbv6m` only provide `load` and `store` operations,
//!   and do not support Compare and Swap (CAS) operations, such as `swap`,
//!   `fetch_add`, etc.
//!
//! [operating system support]: https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt
//!
//! Note that future platforms may be added that also do not have support for
//! some atomic operations. Maximally portable code will want to be careful
//! about which atomic types are used. `AtomicUsize` and `AtomicIsize` are
//! generally the most portable, but even then they're not available everywhere.
//! For reference, the `std` library requires `AtomicBool`s and pointer-sized atomics, although
//! `core` does not.
//!
//! The `#[cfg(target_has_atomic)]` attribute can be used to conditionally
//! compile based on the target's supported bit widths. It is a key-value
//! option set for each supported size, with values "8", "16", "32", "64",
//! "128", and "ptr" for pointer-sized atomics.
//!
//! [lock-free]: https://en.wikipedia.org/wiki/Non-blocking_algorithm
//!
//! # Atomic accesses to read-only memory
//!
//! In general, *all* atomic accesses on read-only memory are undefined behavior. For instance, attempting
//! to do a `compare_exchange` that will definitely fail (making it conceptually a read-only
//! operation) can still cause a segmentation fault if the underlying memory page is mapped read-only. Since
//! atomic `load`s might be implemented using compare-exchange operations, even a `load` can fault
//! on read-only memory.
//!
//! For the purpose of this section, "read-only memory" is defined as memory that is read-only in
//! the underlying target, i.e., the pages are mapped with a read-only flag and any attempt to write
//! will cause a page fault. In particular, an `&u128` reference that points to memory that is
//! read-write mapped is *not* considered to point to "read-only memory". In Rust, almost all memory
//! is read-write; the only exceptions are memory created by `const` items or `static` items without
//! interior mutability, and memory that was specifically marked as read-only by the operating
//! system via platform-specific APIs.
//!
//! As an exception from the general rule stated above, "sufficiently small" atomic loads with
//! `Ordering::Relaxed` are implemented in a way that works on read-only memory, and are hence not
//! undefined behavior. The exact size limit for what makes a load "sufficiently small" varies
//! depending on the target:
//!
//! | `target_arch` | Size limit |
//! |---------------|------------|
//! | `x86`, `arm`, `loongarch32`, `mips`, `mips32r6`, `powerpc`, `riscv32`, `sparc`, `hexagon` | 4 bytes |
//! | `x86_64`, `aarch64`, `loongarch64`, `mips64`, `mips64r6`, `powerpc64`, `riscv64`, `sparc64`, `s390x` | 8 bytes |
//!
//! Atomic loads that are larger than this limit, atomic loads with an ordering other
//! than `Relaxed`, and *all* atomic loads on targets not listed in the table might still work on
//! read-only memory under certain conditions, but that is not a stable guarantee and should not be
//! relied upon.
//!
//! If you need to do an acquire load on read-only memory, you can do a relaxed load followed by an
//! acquire fence instead.
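//!
//! For example (a sketch):
//!
//! ```
//! use std::sync::atomic::{fence, AtomicU32, Ordering};
//!
//! fn acquire_load(flag: &AtomicU32) -> u32 {
//!     // The relaxed load itself performs no write, so it is permitted even on
//!     // (sufficiently small) atomics in read-only memory...
//!     let value = flag.load(Ordering::Relaxed);
//!     // ...and the fence then gives the load acquire semantics.
//!     fence(Ordering::Acquire);
//!     value
//! }
//!
//! let ready = AtomicU32::new(1);
//! assert_eq!(acquire_load(&ready), 1);
//! ```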
//!
//! # Examples
//!
//! A simple spinlock:
//!
//! ```ignore-wasm
//! use std::sync::Arc;
//! use std::sync::atomic::{AtomicUsize, Ordering};
//! use std::{hint, thread};
//!
//! fn main() {
//!     let spinlock = Arc::new(AtomicUsize::new(1));
//!
//!     let spinlock_clone = Arc::clone(&spinlock);
//!
//!     let thread = thread::spawn(move || {
//!         spinlock_clone.store(0, Ordering::Release);
//!     });
//!
//!     // Wait for the other thread to release the lock
//!     while spinlock.load(Ordering::Acquire) != 0 {
//!         hint::spin_loop();
//!     }
//!
//!     if let Err(panic) = thread.join() {
//!         println!("Thread had an error: {panic:?}");
//!     }
//! }
//! ```
//!
//! Keep a global count of live threads:
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! static GLOBAL_THREAD_COUNT: AtomicUsize = AtomicUsize::new(0);
//!
//! // Note that Relaxed ordering doesn't synchronize anything
//! // except the global thread counter itself.
//! let old_thread_count = GLOBAL_THREAD_COUNT.fetch_add(1, Ordering::Relaxed);
//! // Note that this number may not be true at the moment of printing
//! // because some other thread may have changed the static value already.
//! println!("live threads: {}", old_thread_count + 1);
//! ```

236#![stable(feature = "rust1", since = "1.0.0")]
237#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(dead_code))]
238#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(unused_imports))]
239#![rustc_diagnostic_item = "atomic_mod"]
240// Clippy complains about the pattern of "safe function calling unsafe function taking pointers".
241// This happens with AtomicPtr intrinsics but is fine, as the pointers clippy is concerned about
242// are just normal values that get loaded/stored, but not dereferenced.
243#![allow(clippy::not_unsafe_ptr_arg_deref)]
244
245use self::Ordering::*;
246use crate::cell::UnsafeCell;
247use crate::hint::spin_loop;
248use crate::intrinsics::AtomicOrdering as AO;
249use crate::{fmt, intrinsics};
250
251trait Sealed {}
252
253/// A marker trait for primitive types which can be modified atomically.
254///
255/// This is an implementation detail for <code>[Atomic]\<T></code> which may disappear or be replaced at any time.
256///
257/// # Safety
258///
259/// Types implementing this trait must be primitives that can be modified atomically.
260///
261/// The associated `Self::AtomicInner` type must have the same size and bit validity as `Self`,
262/// but may have a higher alignment requirement, so the following `transmute`s are sound:
263///
264/// - `&mut Self::AtomicInner` as `&mut Self`
265/// - `Self` as `Self::AtomicInner` or the reverse
266#[unstable(
267    feature = "atomic_internals",
268    reason = "implementation detail which may disappear or be replaced at any time",
269    issue = "none"
270)]
271#[expect(private_bounds)]
272pub unsafe trait AtomicPrimitive: Sized + Copy + Sealed {
273    /// Temporary implementation detail.
274    type AtomicInner: Sized;
275}
276
277macro impl_atomic_primitive(
278    $Atom:ident $(<$T:ident>)? ($Primitive:ty),
279    size($size:literal),
280    align($align:literal) $(,)?
281) {
282    impl $(<$T>)? Sealed for $Primitive {}
283
284    #[unstable(
285        feature = "atomic_internals",
286        reason = "implementation detail which may disappear or be replaced at any time",
287        issue = "none"
288    )]
289    #[cfg(target_has_atomic_load_store = $size)]
290    unsafe impl $(<$T>)? AtomicPrimitive for $Primitive {
291        type AtomicInner = $Atom $(<$T>)?;
292    }
293}
294
295impl_atomic_primitive!(AtomicBool(bool), size("8"), align(1));
296impl_atomic_primitive!(AtomicI8(i8), size("8"), align(1));
297impl_atomic_primitive!(AtomicU8(u8), size("8"), align(1));
298impl_atomic_primitive!(AtomicI16(i16), size("16"), align(2));
299impl_atomic_primitive!(AtomicU16(u16), size("16"), align(2));
300impl_atomic_primitive!(AtomicI32(i32), size("32"), align(4));
301impl_atomic_primitive!(AtomicU32(u32), size("32"), align(4));
302impl_atomic_primitive!(AtomicI64(i64), size("64"), align(8));
303impl_atomic_primitive!(AtomicU64(u64), size("64"), align(8));
304impl_atomic_primitive!(AtomicI128(i128), size("128"), align(16));
305impl_atomic_primitive!(AtomicU128(u128), size("128"), align(16));
306
307#[cfg(target_pointer_width = "16")]
308impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(2));
309#[cfg(target_pointer_width = "32")]
310impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(4));
311#[cfg(target_pointer_width = "64")]
312impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(8));
313
314#[cfg(target_pointer_width = "16")]
315impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(2));
316#[cfg(target_pointer_width = "32")]
317impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(4));
318#[cfg(target_pointer_width = "64")]
319impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(8));
320
321#[cfg(target_pointer_width = "16")]
322impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(2));
323#[cfg(target_pointer_width = "32")]
324impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(4));
325#[cfg(target_pointer_width = "64")]
326impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(8));
327
328/// A memory location which can be safely modified from multiple threads.
329///
330/// This has the same size and bit validity as the underlying type `T`. However,
331/// the alignment of this type is always equal to its size, even on targets where
332/// `T` has alignment less than its size.
333///
334/// For more about the differences between atomic types and non-atomic types as
335/// well as information about the portability of this type, please see the
336/// [module-level documentation].
337///
338/// **Note:** This type is only available on platforms that support atomic loads
339/// and stores of `T`.
340///
341/// [module-level documentation]: crate::sync::atomic
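///
/// A minimal sketch of using the alias (nightly-only, since the `generic_atomic`
/// feature is unstable):
///
/// ```
/// #![feature(generic_atomic)]
/// use std::sync::atomic::{Atomic, AtomicU32, Ordering};
///
/// // `Atomic<u32>` is the same type as `AtomicU32`.
/// let counter: Atomic<u32> = AtomicU32::new(0);
/// counter.fetch_add(1, Ordering::Relaxed);
/// assert_eq!(counter.load(Ordering::Relaxed), 1);
/// ```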
342#[unstable(feature = "generic_atomic", issue = "130539")]
343pub type Atomic<T> = <T as AtomicPrimitive>::AtomicInner;
344
345// Some architectures don't have byte-sized atomics, which results in LLVM
346// emulating them using a LL/SC loop. However for AtomicBool we can take
347// advantage of the fact that it only ever contains 0 or 1 and use atomic OR/AND
348// instead, which LLVM can emulate using a larger atomic OR/AND operation.
349//
350// This list should only contain architectures which have word-sized atomic-or/
351// atomic-and instructions but don't natively support byte-sized atomics.
352#[cfg(target_has_atomic = "8")]
353const EMULATE_ATOMIC_BOOL: bool = cfg!(any(
354    target_arch = "riscv32",
355    target_arch = "riscv64",
356    target_arch = "loongarch32",
357    target_arch = "loongarch64"
358));
359
360/// A boolean type which can be safely shared between threads.
361///
362/// This type has the same size, alignment, and bit validity as a [`bool`].
363///
364/// **Note**: This type is only available on platforms that support atomic
365/// loads and stores of `u8`.
366#[cfg(target_has_atomic_load_store = "8")]
367#[stable(feature = "rust1", since = "1.0.0")]
368#[rustc_diagnostic_item = "AtomicBool"]
369#[repr(C, align(1))]
370pub struct AtomicBool {
371    v: UnsafeCell<u8>,
372}
373
374#[cfg(target_has_atomic_load_store = "8")]
375#[stable(feature = "rust1", since = "1.0.0")]
376impl Default for AtomicBool {
377    /// Creates an `AtomicBool` initialized to `false`.
378    #[inline]
379    fn default() -> Self {
380        Self::new(false)
381    }
382}
383
384// Send is implicitly implemented for AtomicBool.
385#[cfg(target_has_atomic_load_store = "8")]
386#[stable(feature = "rust1", since = "1.0.0")]
387unsafe impl Sync for AtomicBool {}
388
389/// A raw pointer type which can be safely shared between threads.
390///
391/// This type has the same size and bit validity as a `*mut T`.
392///
393/// **Note**: This type is only available on platforms that support atomic
394/// loads and stores of pointers. Its size depends on the target pointer's size.
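///
/// # Examples
///
/// A minimal usage sketch:
///
/// ```
/// use std::sync::atomic::{AtomicPtr, Ordering};
///
/// let ptr = &mut 5;
/// let some_ptr = AtomicPtr::new(ptr);
///
/// let other_ptr = &mut 10;
///
/// some_ptr.store(other_ptr, Ordering::Relaxed);
/// ```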
395#[cfg(target_has_atomic_load_store = "ptr")]
396#[stable(feature = "rust1", since = "1.0.0")]
397#[rustc_diagnostic_item = "AtomicPtr"]
398#[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
399#[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
400#[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
401pub struct AtomicPtr<T> {
402    p: UnsafeCell<*mut T>,
403}
404
405#[cfg(target_has_atomic_load_store = "ptr")]
406#[stable(feature = "rust1", since = "1.0.0")]
407impl<T> Default for AtomicPtr<T> {
408    /// Creates a null `AtomicPtr<T>`.
409    fn default() -> AtomicPtr<T> {
410        AtomicPtr::new(crate::ptr::null_mut())
411    }
412}
413
414#[cfg(target_has_atomic_load_store = "ptr")]
415#[stable(feature = "rust1", since = "1.0.0")]
416unsafe impl<T> Send for AtomicPtr<T> {}
417#[cfg(target_has_atomic_load_store = "ptr")]
418#[stable(feature = "rust1", since = "1.0.0")]
419unsafe impl<T> Sync for AtomicPtr<T> {}
420
421/// Atomic memory orderings
422///
423/// Memory orderings specify the way atomic operations synchronize memory.
424/// In its weakest [`Ordering::Relaxed`], only the memory directly touched by the
425/// operation is synchronized. On the other hand, a store-load pair of [`Ordering::SeqCst`]
426/// operations synchronize other memory while additionally preserving a total order of such
427/// operations across all threads.
428///
429/// Rust's memory orderings are [the same as those of
430/// C++20](https://en.cppreference.com/w/cpp/atomic/memory_order).
431///
432/// For more information see the [nomicon].
433///
434/// [nomicon]: ../../../nomicon/atomics.html
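///
/// For example, a release store pairing with an acquire load can publish data
/// from one thread to another (a minimal sketch):
///
/// ```
/// use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
/// use std::thread;
///
/// static DATA: AtomicUsize = AtomicUsize::new(0);
/// static READY: AtomicBool = AtomicBool::new(false);
///
/// let producer = thread::spawn(|| {
///     DATA.store(42, Ordering::Relaxed);
///     // The `Release` store pairs with the `Acquire` load below: once the
///     // consumer observes `true`, the write to `DATA` is visible to it.
///     READY.store(true, Ordering::Release);
/// });
///
/// while !READY.load(Ordering::Acquire) {
///     std::hint::spin_loop();
/// }
/// assert_eq!(DATA.load(Ordering::Relaxed), 42);
/// producer.join().unwrap();
/// ```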
435#[stable(feature = "rust1", since = "1.0.0")]
436#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
437#[non_exhaustive]
438#[rustc_diagnostic_item = "Ordering"]
439pub enum Ordering {
440    /// No ordering constraints, only atomic operations.
441    ///
442    /// Corresponds to [`memory_order_relaxed`] in C++20.
443    ///
444    /// [`memory_order_relaxed`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Relaxed_ordering
445    #[stable(feature = "rust1", since = "1.0.0")]
446    Relaxed,
447    /// When coupled with a store, all previous operations become ordered
448    /// before any load of this value with [`Acquire`] (or stronger) ordering.
449    /// In particular, all previous writes become visible to all threads
450    /// that perform an [`Acquire`] (or stronger) load of this value.
451    ///
452    /// Notice that using this ordering for an operation that combines loads
453    /// and stores leads to a [`Relaxed`] load operation!
454    ///
455    /// This ordering is only applicable for operations that can perform a store.
456    ///
457    /// Corresponds to [`memory_order_release`] in C++20.
458    ///
459    /// [`memory_order_release`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
460    #[stable(feature = "rust1", since = "1.0.0")]
461    Release,
462    /// When coupled with a load, if the loaded value was written by a store operation with
463    /// [`Release`] (or stronger) ordering, then all subsequent operations
464    /// become ordered after that store. In particular, all subsequent loads will see data
465    /// written before the store.
466    ///
467    /// Notice that using this ordering for an operation that combines loads
468    /// and stores leads to a [`Relaxed`] store operation!
469    ///
470    /// This ordering is only applicable for operations that can perform a load.
471    ///
472    /// Corresponds to [`memory_order_acquire`] in C++20.
473    ///
474    /// [`memory_order_acquire`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
475    #[stable(feature = "rust1", since = "1.0.0")]
476    Acquire,
477    /// Has the effects of both [`Acquire`] and [`Release`] together:
478    /// For loads it uses [`Acquire`] ordering. For stores it uses the [`Release`] ordering.
479    ///
480    /// Notice that in the case of `compare_and_swap`, it is possible that the operation ends up
481    /// not performing any store and hence it has just [`Acquire`] ordering. However,
482    /// `AcqRel` will never perform [`Relaxed`] accesses.
483    ///
484    /// This ordering is only applicable for operations that combine both loads and stores.
485    ///
486    /// Corresponds to [`memory_order_acq_rel`] in C++20.
487    ///
488    /// [`memory_order_acq_rel`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
489    #[stable(feature = "rust1", since = "1.0.0")]
490    AcqRel,
491    /// Like [`Acquire`]/[`Release`]/[`AcqRel`] (for load, store, and load-with-store
492    /// operations, respectively) with the additional guarantee that all threads see all
493    /// sequentially consistent operations in the same order.
494    ///
495    /// Corresponds to [`memory_order_seq_cst`] in C++20.
496    ///
497    /// [`memory_order_seq_cst`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Sequentially-consistent_ordering
498    #[stable(feature = "rust1", since = "1.0.0")]
499    SeqCst,
500}
501
502/// An [`AtomicBool`] initialized to `false`.
503#[cfg(target_has_atomic_load_store = "8")]
504#[stable(feature = "rust1", since = "1.0.0")]
505#[deprecated(
506    since = "1.34.0",
507    note = "the `new` function is now preferred",
508    suggestion = "AtomicBool::new(false)"
509)]
510pub const ATOMIC_BOOL_INIT: AtomicBool = AtomicBool::new(false);
511
512#[cfg(target_has_atomic_load_store = "8")]
513impl AtomicBool {
514    /// Creates a new `AtomicBool`.
515    ///
516    /// # Examples
517    ///
518    /// ```
519    /// use std::sync::atomic::AtomicBool;
520    ///
521    /// let atomic_true = AtomicBool::new(true);
522    /// let atomic_false = AtomicBool::new(false);
523    /// ```
524    #[inline]
525    #[stable(feature = "rust1", since = "1.0.0")]
526    #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
527    #[must_use]
528    pub const fn new(v: bool) -> AtomicBool {
529        AtomicBool { v: UnsafeCell::new(v as u8) }
530    }
531
532    /// Creates a new `AtomicBool` from a pointer.
533    ///
534    /// # Examples
535    ///
536    /// ```
537    /// use std::sync::atomic::{self, AtomicBool};
538    ///
539    /// // Get a pointer to an allocated value
540    /// let ptr: *mut bool = Box::into_raw(Box::new(false));
541    ///
542    /// assert!(ptr.cast::<AtomicBool>().is_aligned());
543    ///
544    /// {
545    ///     // Create an atomic view of the allocated value
546    ///     let atomic = unsafe { AtomicBool::from_ptr(ptr) };
547    ///
548    ///     // Use `atomic` for atomic operations, possibly share it with other threads
549    ///     atomic.store(true, atomic::Ordering::Relaxed);
550    /// }
551    ///
552    /// // It's ok to non-atomically access the value behind `ptr`,
553    /// // since the reference to the atomic ended its lifetime in the block above
554    /// assert_eq!(unsafe { *ptr }, true);
555    ///
556    /// // Deallocate the value
557    /// unsafe { drop(Box::from_raw(ptr)) }
558    /// ```
559    ///
560    /// # Safety
561    ///
562    /// * `ptr` must be aligned to `align_of::<AtomicBool>()` (note that this is always true, since
563    ///   `align_of::<AtomicBool>() == 1`).
564    /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
565    /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
566    ///   allowed to mix atomic and non-atomic accesses, or atomic accesses of different sizes,
567    ///   without synchronization.
568    ///
569    /// [valid]: crate::ptr#safety
570    /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
571    #[inline]
572    #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
573    #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
574    pub const unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a AtomicBool {
575        // SAFETY: guaranteed by the caller
576        unsafe { &*ptr.cast() }
577    }
578
579    /// Returns a mutable reference to the underlying [`bool`].
580    ///
581    /// This is safe because the mutable reference guarantees that no other threads are
582    /// concurrently accessing the atomic data.
583    ///
584    /// # Examples
585    ///
586    /// ```
587    /// use std::sync::atomic::{AtomicBool, Ordering};
588    ///
589    /// let mut some_bool = AtomicBool::new(true);
590    /// assert_eq!(*some_bool.get_mut(), true);
591    /// *some_bool.get_mut() = false;
592    /// assert_eq!(some_bool.load(Ordering::SeqCst), false);
593    /// ```
594    #[inline]
595    #[stable(feature = "atomic_access", since = "1.15.0")]
596    pub fn get_mut(&mut self) -> &mut bool {
597        // SAFETY: the mutable reference guarantees unique ownership.
598        unsafe { &mut *(self.v.get() as *mut bool) }
599    }
600
601    /// Gets atomic access to a `&mut bool`.
602    ///
603    /// # Examples
604    ///
605    /// ```
606    /// #![feature(atomic_from_mut)]
607    /// use std::sync::atomic::{AtomicBool, Ordering};
608    ///
609    /// let mut some_bool = true;
610    /// let a = AtomicBool::from_mut(&mut some_bool);
611    /// a.store(false, Ordering::Relaxed);
612    /// assert_eq!(some_bool, false);
613    /// ```
614    #[inline]
615    #[cfg(target_has_atomic_equal_alignment = "8")]
616    #[unstable(feature = "atomic_from_mut", issue = "76314")]
617    pub fn from_mut(v: &mut bool) -> &mut Self {
618        // SAFETY: the mutable reference guarantees unique ownership, and
619        // alignment of both `bool` and `Self` is 1.
620        unsafe { &mut *(v as *mut bool as *mut Self) }
621    }
622
623    /// Gets non-atomic access to a `&mut [AtomicBool]` slice.
624    ///
625    /// This is safe because the mutable reference guarantees that no other threads are
626    /// concurrently accessing the atomic data.
627    ///
628    /// # Examples
629    ///
630    /// ```ignore-wasm
631    /// #![feature(atomic_from_mut)]
632    /// use std::sync::atomic::{AtomicBool, Ordering};
633    ///
634    /// let mut some_bools = [const { AtomicBool::new(false) }; 10];
635    ///
636    /// let view: &mut [bool] = AtomicBool::get_mut_slice(&mut some_bools);
637    /// assert_eq!(view, [false; 10]);
638    /// view[..5].copy_from_slice(&[true; 5]);
639    ///
640    /// std::thread::scope(|s| {
641    ///     for t in &some_bools[..5] {
642    ///         s.spawn(move || assert_eq!(t.load(Ordering::Relaxed), true));
643    ///     }
644    ///
645    ///     for f in &some_bools[5..] {
646    ///         s.spawn(move || assert_eq!(f.load(Ordering::Relaxed), false));
647    ///     }
648    /// });
649    /// ```
650    #[inline]
651    #[unstable(feature = "atomic_from_mut", issue = "76314")]
652    pub fn get_mut_slice(this: &mut [Self]) -> &mut [bool] {
653        // SAFETY: the mutable reference guarantees unique ownership.
654        unsafe { &mut *(this as *mut [Self] as *mut [bool]) }
655    }
656
657    /// Gets atomic access to a `&mut [bool]` slice.
658    ///
659    /// # Examples
660    ///
661    /// ```rust,ignore-wasm
662    /// #![feature(atomic_from_mut)]
663    /// use std::sync::atomic::{AtomicBool, Ordering};
664    ///
665    /// let mut some_bools = [false; 10];
666    /// let a = &*AtomicBool::from_mut_slice(&mut some_bools);
667    /// std::thread::scope(|s| {
668    ///     for i in 0..a.len() {
669    ///         s.spawn(move || a[i].store(true, Ordering::Relaxed));
670    ///     }
671    /// });
672    /// assert_eq!(some_bools, [true; 10]);
673    /// ```
674    #[inline]
675    #[cfg(target_has_atomic_equal_alignment = "8")]
676    #[unstable(feature = "atomic_from_mut", issue = "76314")]
677    pub fn from_mut_slice(v: &mut [bool]) -> &mut [Self] {
678        // SAFETY: the mutable reference guarantees unique ownership, and
679        // alignment of both `bool` and `Self` is 1.
680        unsafe { &mut *(v as *mut [bool] as *mut [Self]) }
681    }
682
683    /// Consumes the atomic and returns the contained value.
684    ///
685    /// This is safe because passing `self` by value guarantees that no other threads are
686    /// concurrently accessing the atomic data.
687    ///
688    /// # Examples
689    ///
690    /// ```
691    /// use std::sync::atomic::AtomicBool;
692    ///
693    /// let some_bool = AtomicBool::new(true);
694    /// assert_eq!(some_bool.into_inner(), true);
695    /// ```
696    #[inline]
697    #[stable(feature = "atomic_access", since = "1.15.0")]
698    #[rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0")]
699    pub const fn into_inner(self) -> bool {
700        self.v.into_inner() != 0
701    }
702
703    /// Loads a value from the bool.
704    ///
705    /// `load` takes an [`Ordering`] argument which describes the memory ordering
706    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
707    ///
708    /// # Panics
709    ///
710    /// Panics if `order` is [`Release`] or [`AcqRel`].
711    ///
712    /// # Examples
713    ///
714    /// ```
715    /// use std::sync::atomic::{AtomicBool, Ordering};
716    ///
717    /// let some_bool = AtomicBool::new(true);
718    ///
719    /// assert_eq!(some_bool.load(Ordering::Relaxed), true);
720    /// ```
721    #[inline]
722    #[stable(feature = "rust1", since = "1.0.0")]
723    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
724    pub fn load(&self, order: Ordering) -> bool {
725        // SAFETY: any data races are prevented by atomic intrinsics and the raw
726        // pointer passed in is valid because we got it from a reference.
727        unsafe { atomic_load(self.v.get(), order) != 0 }
728    }
729
730    /// Stores a value into the bool.
731    ///
732    /// `store` takes an [`Ordering`] argument which describes the memory ordering
733    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
734    ///
735    /// # Panics
736    ///
737    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
738    ///
739    /// # Examples
740    ///
741    /// ```
742    /// use std::sync::atomic::{AtomicBool, Ordering};
743    ///
744    /// let some_bool = AtomicBool::new(true);
745    ///
746    /// some_bool.store(false, Ordering::Relaxed);
747    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
748    /// ```
749    #[inline]
750    #[stable(feature = "rust1", since = "1.0.0")]
751    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
752    pub fn store(&self, val: bool, order: Ordering) {
753        // SAFETY: any data races are prevented by atomic intrinsics and the raw
754        // pointer passed in is valid because we got it from a reference.
755        unsafe {
756            atomic_store(self.v.get(), val as u8, order);
757        }
758    }
759
760    /// Stores a value into the bool, returning the previous value.
761    ///
762    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
763    /// of this operation. All ordering modes are possible. Note that using
764    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
765    /// using [`Release`] makes the load part [`Relaxed`].
766    ///
767    /// **Note:** This method is only available on platforms that support atomic
768    /// operations on `u8`.
769    ///
770    /// # Examples
771    ///
772    /// ```
773    /// use std::sync::atomic::{AtomicBool, Ordering};
774    ///
775    /// let some_bool = AtomicBool::new(true);
776    ///
777    /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
778    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
779    /// ```
780    #[inline]
781    #[stable(feature = "rust1", since = "1.0.0")]
782    #[cfg(target_has_atomic = "8")]
783    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
784    pub fn swap(&self, val: bool, order: Ordering) -> bool {
785        if EMULATE_ATOMIC_BOOL {
786            if val { self.fetch_or(true, order) } else { self.fetch_and(false, order) }
787        } else {
788            // SAFETY: data races are prevented by atomic intrinsics.
789            unsafe { atomic_swap(self.v.get(), val as u8, order) != 0 }
790        }
791    }
792
793    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
794    ///
795    /// The return value is always the previous value. If it is equal to `current`, then the value
796    /// was updated.
797    ///
798    /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
799    /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
800    /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
801    /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
802    /// happens, and using [`Release`] makes the load part [`Relaxed`].
803    ///
804    /// **Note:** This method is only available on platforms that support atomic
805    /// operations on `u8`.
806    ///
807    /// # Migrating to `compare_exchange` and `compare_exchange_weak`
808    ///
809    /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
810    /// memory orderings:
811    ///
812    /// Original | Success | Failure
813    /// -------- | ------- | -------
814    /// Relaxed  | Relaxed | Relaxed
815    /// Acquire  | Acquire | Acquire
816    /// Release  | Release | Relaxed
817    /// AcqRel   | AcqRel  | Acquire
818    /// SeqCst   | SeqCst  | SeqCst
819    ///
820    /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
821    /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
822    /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
823    /// rather than to infer success vs failure based on the value that was read.
824    ///
825    /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
826    /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
827    /// which allows the compiler to generate better assembly code when the compare and swap
828    /// is used in a loop.
829    ///
830    /// # Examples
831    ///
832    /// ```
833    /// use std::sync::atomic::{AtomicBool, Ordering};
834    ///
835    /// let some_bool = AtomicBool::new(true);
836    ///
837    /// assert_eq!(some_bool.compare_and_swap(true, false, Ordering::Relaxed), true);
838    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
839    ///
840    /// assert_eq!(some_bool.compare_and_swap(true, true, Ordering::Relaxed), false);
841    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
842    /// ```
843    #[inline]
844    #[stable(feature = "rust1", since = "1.0.0")]
845    #[deprecated(
846        since = "1.50.0",
847        note = "Use `compare_exchange` or `compare_exchange_weak` instead"
848    )]
849    #[cfg(target_has_atomic = "8")]
850    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
851    pub fn compare_and_swap(&self, current: bool, new: bool, order: Ordering) -> bool {
852        match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
853            Ok(x) => x,
854            Err(x) => x,
855        }
856    }
857
858    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
859    ///
860    /// The return value is a result indicating whether the new value was written and containing
861    /// the previous value. On success this value is guaranteed to be equal to `current`.
862    ///
863    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
864    /// ordering of this operation. `success` describes the required ordering for the
865    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
866    /// `failure` describes the required ordering for the load operation that takes place when
867    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
868    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
869    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
870    ///
871    /// **Note:** This method is only available on platforms that support atomic
872    /// operations on `u8`.
873    ///
874    /// # Examples
875    ///
876    /// ```
877    /// use std::sync::atomic::{AtomicBool, Ordering};
878    ///
879    /// let some_bool = AtomicBool::new(true);
880    ///
881    /// assert_eq!(some_bool.compare_exchange(true,
882    ///                                       false,
883    ///                                       Ordering::Acquire,
884    ///                                       Ordering::Relaxed),
885    ///            Ok(true));
886    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
887    ///
888    /// assert_eq!(some_bool.compare_exchange(true, true,
889    ///                                       Ordering::SeqCst,
890    ///                                       Ordering::Acquire),
891    ///            Err(false));
892    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
893    /// ```
894    ///
895    /// # Considerations
896    ///
897    /// `compare_exchange` is a [compare-and-swap operation] and thus exhibits the usual downsides
898    /// of CAS operations. In particular, a load of the value followed by a successful
899    /// `compare_exchange` with the previous load *does not ensure* that other threads have not
900    /// changed the value in the interim. This is usually important when the *equality* check in
901    /// the `compare_exchange` is being used to check the *identity* of a value, but equality
902    /// does not necessarily imply identity. In this case, `compare_exchange` can lead to the
903    /// [ABA problem].
904    ///
905    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
906    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
907    #[inline]
908    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
909    #[doc(alias = "compare_and_swap")]
910    #[cfg(target_has_atomic = "8")]
911    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
912    pub fn compare_exchange(
913        &self,
914        current: bool,
915        new: bool,
916        success: Ordering,
917        failure: Ordering,
918    ) -> Result<bool, bool> {
919        if EMULATE_ATOMIC_BOOL {
920            // Pick the strongest ordering from success and failure.
921            let order = match (success, failure) {
922                (SeqCst, _) => SeqCst,
923                (_, SeqCst) => SeqCst,
924                (AcqRel, _) => AcqRel,
925                (_, AcqRel) => {
926                    panic!("there is no such thing as an acquire-release failure ordering")
927                }
928                (Release, Acquire) => AcqRel,
929                (Acquire, _) => Acquire,
930                (_, Acquire) => Acquire,
931                (Release, Relaxed) => Release,
932                (_, Release) => panic!("there is no such thing as a release failure ordering"),
933                (Relaxed, Relaxed) => Relaxed,
934            };
935            let old = if current == new {
936                // This is a no-op, but we still need to perform the operation
937                // for memory ordering reasons.
938                self.fetch_or(false, order)
939            } else {
940                // This sets the value to the new one and returns the old one.
941                self.swap(new, order)
942            };
943            if old == current { Ok(old) } else { Err(old) }
944        } else {
945            // SAFETY: data races are prevented by atomic intrinsics.
946            match unsafe {
947                atomic_compare_exchange(self.v.get(), current as u8, new as u8, success, failure)
948            } {
949                Ok(x) => Ok(x != 0),
950                Err(x) => Err(x != 0),
951            }
952        }
953    }
954
955    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
956    ///
957    /// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the
958    /// comparison succeeds, which can result in more efficient code on some platforms. The
959    /// return value is a result indicating whether the new value was written and containing the
960    /// previous value.
961    ///
962    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
963    /// ordering of this operation. `success` describes the required ordering for the
964    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
965    /// `failure` describes the required ordering for the load operation that takes place when
966    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
967    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
968    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
969    ///
970    /// **Note:** This method is only available on platforms that support atomic
971    /// operations on `u8`.
972    ///
973    /// # Examples
974    ///
975    /// ```
976    /// use std::sync::atomic::{AtomicBool, Ordering};
977    ///
978    /// let val = AtomicBool::new(false);
979    ///
980    /// let new = true;
981    /// let mut old = val.load(Ordering::Relaxed);
982    /// loop {
983    ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
984    ///         Ok(_) => break,
985    ///         Err(x) => old = x,
986    ///     }
987    /// }
988    /// ```
989    ///
990    /// # Considerations
991    ///
992    /// `compare_exchange` is a [compare-and-swap operation] and thus exhibits the usual downsides
993    /// of CAS operations. In particular, a load of the value followed by a successful
994    /// `compare_exchange` with the previous load *does not ensure* that other threads have not
995    /// changed the value in the interim. This is usually important when the *equality* check in
996    /// the `compare_exchange` is being used to check the *identity* of a value, but equality
997    /// does not necessarily imply identity. In this case, `compare_exchange` can lead to the
998    /// [ABA problem].
999    ///
1000    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1001    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1002    #[inline]
1003    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
1004    #[doc(alias = "compare_and_swap")]
1005    #[cfg(target_has_atomic = "8")]
1006    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1007    pub fn compare_exchange_weak(
1008        &self,
1009        current: bool,
1010        new: bool,
1011        success: Ordering,
1012        failure: Ordering,
1013    ) -> Result<bool, bool> {
1014        if EMULATE_ATOMIC_BOOL {
1015            return self.compare_exchange(current, new, success, failure);
1016        }
1017
1018        // SAFETY: data races are prevented by atomic intrinsics.
1019        match unsafe {
1020            atomic_compare_exchange_weak(self.v.get(), current as u8, new as u8, success, failure)
1021        } {
1022            Ok(x) => Ok(x != 0),
1023            Err(x) => Err(x != 0),
1024        }
1025    }
1026
1027    /// Logical "and" with a boolean value.
1028    ///
1029    /// Performs a logical "and" operation on the current value and the argument `val`, and sets
1030    /// the new value to the result.
1031    ///
1032    /// Returns the previous value.
1033    ///
1034    /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
1035    /// of this operation. All ordering modes are possible. Note that using
1036    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1037    /// using [`Release`] makes the load part [`Relaxed`].
1038    ///
1039    /// **Note:** This method is only available on platforms that support atomic
1040    /// operations on `u8`.
1041    ///
1042    /// # Examples
1043    ///
1044    /// ```
1045    /// use std::sync::atomic::{AtomicBool, Ordering};
1046    ///
1047    /// let foo = AtomicBool::new(true);
1048    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
1049    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1050    ///
1051    /// let foo = AtomicBool::new(true);
1052    /// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
1053    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1054    ///
1055    /// let foo = AtomicBool::new(false);
1056    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
1057    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1058    /// ```
1059    #[inline]
1060    #[stable(feature = "rust1", since = "1.0.0")]
1061    #[cfg(target_has_atomic = "8")]
1062    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1063    pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
1064        // SAFETY: data races are prevented by atomic intrinsics.
1065        unsafe { atomic_and(self.v.get(), val as u8, order) != 0 }
1066    }
1067
1068    /// Logical "nand" with a boolean value.
1069    ///
1070    /// Performs a logical "nand" operation on the current value and the argument `val`, and sets
1071    /// the new value to the result.
1072    ///
1073    /// Returns the previous value.
1074    ///
1075    /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
1076    /// of this operation. All ordering modes are possible. Note that using
1077    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1078    /// using [`Release`] makes the load part [`Relaxed`].
1079    ///
1080    /// **Note:** This method is only available on platforms that support atomic
1081    /// operations on `u8`.
1082    ///
1083    /// # Examples
1084    ///
1085    /// ```
1086    /// use std::sync::atomic::{AtomicBool, Ordering};
1087    ///
1088    /// let foo = AtomicBool::new(true);
1089    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
1090    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1091    ///
1092    /// let foo = AtomicBool::new(true);
1093    /// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
1094    /// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
1095    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1096    ///
1097    /// let foo = AtomicBool::new(false);
1098    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
1099    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1100    /// ```
1101    #[inline]
1102    #[stable(feature = "rust1", since = "1.0.0")]
1103    #[cfg(target_has_atomic = "8")]
1104    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1105    pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
1106        // We can't use atomic_nand here because it can result in a bool with
1107        // an invalid value. This happens because the atomic operation is done
1108        // with an 8-bit integer internally, which would set the upper 7 bits.
1109        // So we just use fetch_xor or swap instead.
1110        if val {
1111            // !(x & true) == !x
1112            // We must invert the bool.
1113            self.fetch_xor(true, order)
1114        } else {
1115            // !(x & false) == true
1116            // We must set the bool to true.
1117            self.swap(true, order)
1118        }
1119    }
1120
1121    /// Logical "or" with a boolean value.
1122    ///
1123    /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
1124    /// new value to the result.
1125    ///
1126    /// Returns the previous value.
1127    ///
1128    /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
1129    /// of this operation. All ordering modes are possible. Note that using
1130    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1131    /// using [`Release`] makes the load part [`Relaxed`].
1132    ///
1133    /// **Note:** This method is only available on platforms that support atomic
1134    /// operations on `u8`.
1135    ///
1136    /// # Examples
1137    ///
1138    /// ```
1139    /// use std::sync::atomic::{AtomicBool, Ordering};
1140    ///
1141    /// let foo = AtomicBool::new(true);
1142    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
1143    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1144    ///
1145    /// let foo = AtomicBool::new(true);
1146    /// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), true);
1147    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1148    ///
1149    /// let foo = AtomicBool::new(false);
1150    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
1151    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1152    /// ```
1153    #[inline]
1154    #[stable(feature = "rust1", since = "1.0.0")]
1155    #[cfg(target_has_atomic = "8")]
1156    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1157    pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
1158        // SAFETY: data races are prevented by atomic intrinsics.
1159        unsafe { atomic_or(self.v.get(), val as u8, order) != 0 }
1160    }
1161
1162    /// Logical "xor" with a boolean value.
1163    ///
1164    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
1165    /// the new value to the result.
1166    ///
1167    /// Returns the previous value.
1168    ///
1169    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
1170    /// of this operation. All ordering modes are possible. Note that using
1171    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1172    /// using [`Release`] makes the load part [`Relaxed`].
1173    ///
1174    /// **Note:** This method is only available on platforms that support atomic
1175    /// operations on `u8`.
1176    ///
1177    /// # Examples
1178    ///
1179    /// ```
1180    /// use std::sync::atomic::{AtomicBool, Ordering};
1181    ///
1182    /// let foo = AtomicBool::new(true);
1183    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
1184    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1185    ///
1186    /// let foo = AtomicBool::new(true);
1187    /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
1188    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1189    ///
1190    /// let foo = AtomicBool::new(false);
1191    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
1192    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1193    /// ```
1194    #[inline]
1195    #[stable(feature = "rust1", since = "1.0.0")]
1196    #[cfg(target_has_atomic = "8")]
1197    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1198    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
1199        // SAFETY: data races are prevented by atomic intrinsics.
1200        unsafe { atomic_xor(self.v.get(), val as u8, order) != 0 }
1201    }
1202
1203    /// Logical "not" with a boolean value.
1204    ///
1205    /// Performs a logical "not" operation on the current value, and sets
1206    /// the new value to the result.
1207    ///
1208    /// Returns the previous value.
1209    ///
1210    /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering
1211    /// of this operation. All ordering modes are possible. Note that using
1212    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1213    /// using [`Release`] makes the load part [`Relaxed`].
1214    ///
1215    /// **Note:** This method is only available on platforms that support atomic
1216    /// operations on `u8`.
1217    ///
1218    /// # Examples
1219    ///
1220    /// ```
1221    /// use std::sync::atomic::{AtomicBool, Ordering};
1222    ///
1223    /// let foo = AtomicBool::new(true);
1224    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
1225    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1226    ///
1227    /// let foo = AtomicBool::new(false);
1228    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
1229    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1230    /// ```
1231    #[inline]
1232    #[stable(feature = "atomic_bool_fetch_not", since = "1.81.0")]
1233    #[cfg(target_has_atomic = "8")]
1234    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1235    pub fn fetch_not(&self, order: Ordering) -> bool {
1236        self.fetch_xor(true, order)
1237    }
1238
1239    /// Returns a mutable pointer to the underlying [`bool`].
1240    ///
1241    /// Doing non-atomic reads and writes on the resulting boolean can be a data race.
1242    /// This method is mostly useful for FFI, where the function signature may use
1243    /// `*mut bool` instead of `&AtomicBool`.
1244    ///
1245    /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
1246    /// atomic types work with interior mutability. All modifications of an atomic change the value
1247    /// through a shared reference, and can do so safely as long as they use atomic operations. Any
1248    /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
1249    /// restriction: operations on it must be atomic.
1250    ///
1251    /// # Examples
1252    ///
1253    /// ```ignore (extern-declaration)
1254    /// # fn main() {
1255    /// use std::sync::atomic::AtomicBool;
1256    ///
1257    /// extern "C" {
1258    ///     fn my_atomic_op(arg: *mut bool);
1259    /// }
1260    ///
1261    /// let mut atomic = AtomicBool::new(true);
1262    /// unsafe {
1263    ///     my_atomic_op(atomic.as_ptr());
1264    /// }
1265    /// # }
1266    /// ```
1267    #[inline]
1268    #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
1269    #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
1270    #[rustc_never_returns_null_ptr]
1271    pub const fn as_ptr(&self) -> *mut bool {
1272        self.v.get().cast()
1273    }
1274
1275    /// Fetches the value, and applies a function to it that returns an optional
1276    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1277    /// returned `Some(_)`, else `Err(previous_value)`.
1278    ///
1279    /// Note: This may call the function multiple times if the value has been
1280    /// changed from other threads in the meantime, as long as the function
1281    /// returns `Some(_)`, but the function will have been applied only once to
1282    /// the stored value.
1283    ///
1284    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1285    /// ordering of this operation. The first describes the required ordering for
1286    /// when the operation finally succeeds while the second describes the
1287    /// required ordering for loads. These correspond to the success and failure
1288    /// orderings of [`AtomicBool::compare_exchange`] respectively.
1289    ///
1290    /// Using [`Acquire`] as success ordering makes the store part of this
1291    /// operation [`Relaxed`], and using [`Release`] makes the final successful
1292    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1293    /// [`Acquire`] or [`Relaxed`].
1294    ///
1295    /// **Note:** This method is only available on platforms that support atomic
1296    /// operations on `u8`.
1297    ///
1298    /// # Considerations
1299    ///
1300    /// This method is not magic; it is not provided by the hardware, and does not act like a
1301    /// critical section or mutex.
1302    ///
1303    /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
1304    /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem].
1305    ///
1306    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1307    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1308    ///
1309    /// # Examples
1310    ///
1311    /// ```rust
1312    /// use std::sync::atomic::{AtomicBool, Ordering};
1313    ///
1314    /// let x = AtomicBool::new(false);
1315    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1316    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1317    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1318    /// assert_eq!(x.load(Ordering::SeqCst), false);
1319    /// ```
1320    #[inline]
1321    #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
1322    #[cfg(target_has_atomic = "8")]
1323    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1324    pub fn fetch_update<F>(
1325        &self,
1326        set_order: Ordering,
1327        fetch_order: Ordering,
1328        mut f: F,
1329    ) -> Result<bool, bool>
1330    where
1331        F: FnMut(bool) -> Option<bool>,
1332    {
1333        let mut prev = self.load(fetch_order);
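        // Retry until `f` returns `None` or the CAS succeeds. `compare_exchange_weak` may
        // also fail spuriously, in which case we simply loop again with the re-read value.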
1334        while let Some(next) = f(prev) {
1335            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1336                x @ Ok(_) => return x,
1337                Err(next_prev) => prev = next_prev,
1338            }
1339        }
1340        Err(prev)
1341    }
1342
1343    /// Fetches the value, and applies a function to it that returns an optional
1344    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1345    /// returned `Some(_)`, else `Err(previous_value)`.
1346    ///
1347    /// See also: [`update`](`AtomicBool::update`).
1348    ///
1349    /// Note: This may call the function multiple times if the value has been
1350    /// changed from other threads in the meantime, as long as the function
1351    /// returns `Some(_)`, but the function will have been applied only once to
1352    /// the stored value.
1353    ///
1354    /// `try_update` takes two [`Ordering`] arguments to describe the memory
1355    /// ordering of this operation. The first describes the required ordering for
1356    /// when the operation finally succeeds while the second describes the
1357    /// required ordering for loads. These correspond to the success and failure
1358    /// orderings of [`AtomicBool::compare_exchange`] respectively.
1359    ///
1360    /// Using [`Acquire`] as success ordering makes the store part of this
1361    /// operation [`Relaxed`], and using [`Release`] makes the final successful
1362    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1363    /// [`Acquire`] or [`Relaxed`].
1364    ///
1365    /// **Note:** This method is only available on platforms that support atomic
1366    /// operations on `u8`.
1367    ///
1368    /// # Considerations
1369    ///
1370    /// This method is not magic; it is not provided by the hardware, and does not act like a
1371    /// critical section or mutex.
1372    ///
1373    /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
1374    /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem].
1375    ///
1376    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1377    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1378    ///
1379    /// # Examples
1380    ///
1381    /// ```rust
1382    /// #![feature(atomic_try_update)]
1383    /// use std::sync::atomic::{AtomicBool, Ordering};
1384    ///
1385    /// let x = AtomicBool::new(false);
1386    /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1387    /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1388    /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1389    /// assert_eq!(x.load(Ordering::SeqCst), false);
1390    /// ```
1391    #[inline]
1392    #[unstable(feature = "atomic_try_update", issue = "135894")]
1393    #[cfg(target_has_atomic = "8")]
1394    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1395    pub fn try_update(
1396        &self,
1397        set_order: Ordering,
1398        fetch_order: Ordering,
1399        f: impl FnMut(bool) -> Option<bool>,
1400    ) -> Result<bool, bool> {
1401        // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`;
1402        //      when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`.
1403        self.fetch_update(set_order, fetch_order, f)
1404    }
1405
1406    /// Fetches the value, and applies a function to it that returns a new value.
1407    /// The new value is stored and the old value is returned.
1408    ///
1409    /// See also: [`try_update`](`AtomicBool::try_update`).
1410    ///
1411    /// Note: This may call the function multiple times if the value has been changed from other threads in
1412    /// the meantime, but the function will have been applied only once to the stored value.
1413    ///
1414    /// `update` takes two [`Ordering`] arguments to describe the memory
1415    /// ordering of this operation. The first describes the required ordering for
1416    /// when the operation finally succeeds while the second describes the
1417    /// required ordering for loads. These correspond to the success and failure
1418    /// orderings of [`AtomicBool::compare_exchange`] respectively.
1419    ///
1420    /// Using [`Acquire`] as success ordering makes the store part
1421    /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
1422    /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1423    ///
1424    /// **Note:** This method is only available on platforms that support atomic operations on `u8`.
1425    ///
1426    /// # Considerations
1427    ///
1428    /// This method is not magic; it is not provided by the hardware, and does not act like a
1429    /// critical section or mutex.
1430    ///
1431    /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
1432    /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem].
1433    ///
1434    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1435    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1436    ///
1437    /// # Examples
1438    ///
1439    /// ```rust
1440    /// #![feature(atomic_try_update)]
1441    ///
1442    /// use std::sync::atomic::{AtomicBool, Ordering};
1443    ///
1444    /// let x = AtomicBool::new(false);
1445    /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| !x), false);
1446    /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| !x), true);
1447    /// assert_eq!(x.load(Ordering::SeqCst), false);
1448    /// ```
1449    #[inline]
1450    #[unstable(feature = "atomic_try_update", issue = "135894")]
1451    #[cfg(target_has_atomic = "8")]
1452    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1453    pub fn update(
1454        &self,
1455        set_order: Ordering,
1456        fetch_order: Ordering,
1457        mut f: impl FnMut(bool) -> bool,
1458    ) -> bool {
1459        let mut prev = self.load(fetch_order);
1460        loop {
1461            match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
1462                Ok(x) => break x,
1463                Err(next_prev) => prev = next_prev,
1464            }
1465        }
1466    }
1467}
1468
1469#[cfg(target_has_atomic_load_store = "ptr")]
1470impl<T> AtomicPtr<T> {
1471    /// Creates a new `AtomicPtr`.
1472    ///
1473    /// # Examples
1474    ///
1475    /// ```
1476    /// use std::sync::atomic::AtomicPtr;
1477    ///
1478    /// let ptr = &mut 5;
1479    /// let atomic_ptr = AtomicPtr::new(ptr);
1480    /// ```
1481    #[inline]
1482    #[stable(feature = "rust1", since = "1.0.0")]
1483    #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
1484    pub const fn new(p: *mut T) -> AtomicPtr<T> {
1485        AtomicPtr { p: UnsafeCell::new(p) }
1486    }
1487
1488    /// Creates a new `AtomicPtr` from a pointer.
1489    ///
1490    /// # Examples
1491    ///
1492    /// ```
1493    /// use std::sync::atomic::{self, AtomicPtr};
1494    ///
1495    /// // Get a pointer to an allocated value
1496    /// let ptr: *mut *mut u8 = Box::into_raw(Box::new(std::ptr::null_mut()));
1497    ///
1498    /// assert!(ptr.cast::<AtomicPtr<u8>>().is_aligned());
1499    ///
1500    /// {
1501    ///     // Create an atomic view of the allocated value
1502    ///     let atomic = unsafe { AtomicPtr::from_ptr(ptr) };
1503    ///
1504    ///     // Use `atomic` for atomic operations, possibly share it with other threads
1505    ///     atomic.store(std::ptr::NonNull::dangling().as_ptr(), atomic::Ordering::Relaxed);
1506    /// }
1507    ///
1508    /// // It's ok to non-atomically access the value behind `ptr`,
1509    /// // since the reference to the atomic ended its lifetime in the block above
1510    /// assert!(!unsafe { *ptr }.is_null());
1511    ///
1512    /// // Deallocate the value
1513    /// unsafe { drop(Box::from_raw(ptr)) }
1514    /// ```
1515    ///
1516    /// # Safety
1517    ///
1518    /// * `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this
1519    ///   can be bigger than `align_of::<*mut T>()`).
1520    /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
1521    /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
1522    ///   allowed to mix atomic and non-atomic accesses, or atomic accesses of different sizes,
1523    ///   without synchronization.
1524    ///
1525    /// [valid]: crate::ptr#safety
1526    /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
1527    #[inline]
1528    #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
1529    #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
1530    pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a AtomicPtr<T> {
1531        // SAFETY: guaranteed by the caller
1532        unsafe { &*ptr.cast() }
1533    }
1534
1535    /// Returns a mutable reference to the underlying pointer.
1536    ///
1537    /// This is safe because the mutable reference guarantees that no other threads are
1538    /// concurrently accessing the atomic data.
1539    ///
1540    /// # Examples
1541    ///
1542    /// ```
1543    /// use std::sync::atomic::{AtomicPtr, Ordering};
1544    ///
1545    /// let mut data = 10;
1546    /// let mut atomic_ptr = AtomicPtr::new(&mut data);
1547    /// let mut other_data = 5;
1548    /// *atomic_ptr.get_mut() = &mut other_data;
1549    /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
1550    /// ```
1551    #[inline]
1552    #[stable(feature = "atomic_access", since = "1.15.0")]
1553    pub fn get_mut(&mut self) -> &mut *mut T {
1554        self.p.get_mut()
1555    }
1556
1557    /// Gets atomic access to a pointer.
1558    ///
1559    /// # Examples
1560    ///
1561    /// ```
1562    /// #![feature(atomic_from_mut)]
1563    /// use std::sync::atomic::{AtomicPtr, Ordering};
1564    ///
1565    /// let mut data = 123;
1566    /// let mut some_ptr = &mut data as *mut i32;
1567    /// let a = AtomicPtr::from_mut(&mut some_ptr);
1568    /// let mut other_data = 456;
1569    /// a.store(&mut other_data, Ordering::Relaxed);
1570    /// assert_eq!(unsafe { *some_ptr }, 456);
1571    /// ```
1572    #[inline]
1573    #[cfg(target_has_atomic_equal_alignment = "ptr")]
1574    #[unstable(feature = "atomic_from_mut", issue = "76314")]
1575    pub fn from_mut(v: &mut *mut T) -> &mut Self {
1576        let [] = [(); align_of::<AtomicPtr<()>>() - align_of::<*mut ()>()];
1577        // SAFETY:
1578        //  - the mutable reference guarantees unique ownership.
1579        //  - the alignment of `*mut T` and `Self` is the same on all platforms
1580        //    supported by rust, as verified above.
1581        unsafe { &mut *(v as *mut *mut T as *mut Self) }
1582    }
1583
1584    /// Gets non-atomic access to a `&mut [AtomicPtr]` slice.
1585    ///
1586    /// This is safe because the mutable reference guarantees that no other threads are
1587    /// concurrently accessing the atomic data.
1588    ///
1589    /// # Examples
1590    ///
1591    /// ```ignore-wasm
1592    /// #![feature(atomic_from_mut)]
1593    /// use std::ptr::null_mut;
1594    /// use std::sync::atomic::{AtomicPtr, Ordering};
1595    ///
1596    /// let mut some_ptrs = [const { AtomicPtr::new(null_mut::<String>()) }; 10];
1597    ///
1598    /// let view: &mut [*mut String] = AtomicPtr::get_mut_slice(&mut some_ptrs);
1599    /// assert_eq!(view, [null_mut::<String>(); 10]);
1600    /// view
1601    ///     .iter_mut()
1602    ///     .enumerate()
1603    ///     .for_each(|(i, ptr)| *ptr = Box::into_raw(Box::new(format!("iteration#{i}"))));
1604    ///
1605    /// std::thread::scope(|s| {
1606    ///     for ptr in &some_ptrs {
1607    ///         s.spawn(move || {
1608    ///             let ptr = ptr.load(Ordering::Relaxed);
1609    ///             assert!(!ptr.is_null());
1610    ///
1611    ///             let name = unsafe { Box::from_raw(ptr) };
1612    ///             println!("Hello, {name}!");
1613    ///         });
1614    ///     }
1615    /// });
1616    /// ```
1617    #[inline]
1618    #[unstable(feature = "atomic_from_mut", issue = "76314")]
1619    pub fn get_mut_slice(this: &mut [Self]) -> &mut [*mut T] {
1620        // SAFETY: the mutable reference guarantees unique ownership.
1621        unsafe { &mut *(this as *mut [Self] as *mut [*mut T]) }
1622    }
1623
1624    /// Gets atomic access to a slice of pointers.
1625    ///
1626    /// # Examples
1627    ///
1628    /// ```ignore-wasm
1629    /// #![feature(atomic_from_mut)]
1630    /// use std::ptr::null_mut;
1631    /// use std::sync::atomic::{AtomicPtr, Ordering};
1632    ///
1633    /// let mut some_ptrs = [null_mut::<String>(); 10];
1634    /// let a = &*AtomicPtr::from_mut_slice(&mut some_ptrs);
1635    /// std::thread::scope(|s| {
1636    ///     for i in 0..a.len() {
1637    ///         s.spawn(move || {
1638    ///             let name = Box::new(format!("thread{i}"));
1639    ///             a[i].store(Box::into_raw(name), Ordering::Relaxed);
1640    ///         });
1641    ///     }
1642    /// });
1643    /// for p in some_ptrs {
1644    ///     assert!(!p.is_null());
1645    ///     let name = unsafe { Box::from_raw(p) };
1646    ///     println!("Hello, {name}!");
1647    /// }
1648    /// ```
1649    #[inline]
1650    #[cfg(target_has_atomic_equal_alignment = "ptr")]
1651    #[unstable(feature = "atomic_from_mut", issue = "76314")]
1652    pub fn from_mut_slice(v: &mut [*mut T]) -> &mut [Self] {
1653        // SAFETY:
1654        //  - the mutable reference guarantees unique ownership.
1655        //  - the alignment of `*mut T` and `Self` is the same on all platforms
1656        //    supported by rust, as ensured by the `target_has_atomic_equal_alignment` cfg on this function.
1657        unsafe { &mut *(v as *mut [*mut T] as *mut [Self]) }
1658    }
1659
1660    /// Consumes the atomic and returns the contained value.
1661    ///
1662    /// This is safe because passing `self` by value guarantees that no other threads are
1663    /// concurrently accessing the atomic data.
1664    ///
1665    /// # Examples
1666    ///
1667    /// ```
1668    /// use std::sync::atomic::AtomicPtr;
1669    ///
1670    /// let mut data = 5;
1671    /// let atomic_ptr = AtomicPtr::new(&mut data);
1672    /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
1673    /// ```
1674    #[inline]
1675    #[stable(feature = "atomic_access", since = "1.15.0")]
1676    #[rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0")]
1677    pub const fn into_inner(self) -> *mut T {
1678        self.p.into_inner()
1679    }
1680
1681    /// Loads a value from the pointer.
1682    ///
1683    /// `load` takes an [`Ordering`] argument which describes the memory ordering
1684    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
1685    ///
1686    /// # Panics
1687    ///
1688    /// Panics if `order` is [`Release`] or [`AcqRel`].
1689    ///
1690    /// # Examples
1691    ///
1692    /// ```
1693    /// use std::sync::atomic::{AtomicPtr, Ordering};
1694    ///
1695    /// let ptr = &mut 5;
1696    /// let some_ptr = AtomicPtr::new(ptr);
1697    ///
1698    /// let value = some_ptr.load(Ordering::Relaxed);
1699    /// ```
1700    #[inline]
1701    #[stable(feature = "rust1", since = "1.0.0")]
1702    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1703    pub fn load(&self, order: Ordering) -> *mut T {
1704        // SAFETY: data races are prevented by atomic intrinsics.
1705        unsafe { atomic_load(self.p.get(), order) }
1706    }
1707
1708    /// Stores a value into the pointer.
1709    ///
1710    /// `store` takes an [`Ordering`] argument which describes the memory ordering
1711    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
1712    ///
1713    /// # Panics
1714    ///
1715    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
1716    ///
1717    /// # Examples
1718    ///
1719    /// ```
1720    /// use std::sync::atomic::{AtomicPtr, Ordering};
1721    ///
1722    /// let ptr = &mut 5;
1723    /// let some_ptr = AtomicPtr::new(ptr);
1724    ///
1725    /// let other_ptr = &mut 10;
1726    ///
1727    /// some_ptr.store(other_ptr, Ordering::Relaxed);
1728    /// ```
1729    #[inline]
1730    #[stable(feature = "rust1", since = "1.0.0")]
1731    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1732    pub fn store(&self, ptr: *mut T, order: Ordering) {
1733        // SAFETY: data races are prevented by atomic intrinsics.
1734        unsafe {
1735            atomic_store(self.p.get(), ptr, order);
1736        }
1737    }
1738
1739    /// Stores a value into the pointer, returning the previous value.
1740    ///
1741    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
1742    /// of this operation. All ordering modes are possible. Note that using
1743    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1744    /// using [`Release`] makes the load part [`Relaxed`].
1745    ///
1746    /// **Note:** This method is only available on platforms that support atomic
1747    /// operations on pointers.
1748    ///
1749    /// # Examples
1750    ///
1751    /// ```
1752    /// use std::sync::atomic::{AtomicPtr, Ordering};
1753    ///
1754    /// let ptr = &mut 5;
1755    /// let some_ptr = AtomicPtr::new(ptr);
1756    ///
1757    /// let other_ptr = &mut 10;
1758    ///
1759    /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed);
1760    /// ```
1761    #[inline]
1762    #[stable(feature = "rust1", since = "1.0.0")]
1763    #[cfg(target_has_atomic = "ptr")]
1764    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1765    pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
1766        // SAFETY: data races are prevented by atomic intrinsics.
1767        unsafe { atomic_swap(self.p.get(), ptr, order) }
1768    }
1769
1770    /// Stores a value into the pointer if the current value is the same as the `current` value.
1771    ///
1772    /// The return value is always the previous value. If it is equal to `current`, then the value
1773    /// was updated.
1774    ///
1775    /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
1776    /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
1777    /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
1778    /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
1779    /// happens, and using [`Release`] makes the load part [`Relaxed`].
1780    ///
1781    /// **Note:** This method is only available on platforms that support atomic
1782    /// operations on pointers.
1783    ///
1784    /// # Migrating to `compare_exchange` and `compare_exchange_weak`
1785    ///
1786    /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
1787    /// memory orderings:
1788    ///
1789    /// Original | Success | Failure
1790    /// -------- | ------- | -------
1791    /// Relaxed  | Relaxed | Relaxed
1792    /// Acquire  | Acquire | Acquire
1793    /// Release  | Release | Relaxed
1794    /// AcqRel   | AcqRel  | Acquire
1795    /// SeqCst   | SeqCst  | SeqCst
1796    ///
1797    /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
1798    /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
1799    /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
1800    /// rather than to infer success vs failure based on the value that was read.
1801    ///
1802    /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
1803    /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
1804    /// which allows the compiler to generate better assembly code when the compare and swap
1805    /// is used in a loop.
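    ///
    /// For illustration, a `compare_and_swap` retry loop can typically be migrated to
    /// `compare_exchange_weak` as sketched below (the `AcqRel`/`Acquire` pair is just the
    /// mapping from the table above, not a recommendation for any particular ordering):
    ///
    /// ```
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let some_ptr = AtomicPtr::new(&mut 5);
    /// let new = &mut 10;
    ///
    /// // Before: `some_ptr.compare_and_swap(old, new, Ordering::AcqRel)` in a loop.
    /// // After: retry with `compare_exchange_weak`, mapping `AcqRel` to `(AcqRel, Acquire)`.
    /// let mut old = some_ptr.load(Ordering::Relaxed);
    /// loop {
    ///     match some_ptr.compare_exchange_weak(old, new, Ordering::AcqRel, Ordering::Acquire) {
    ///         Ok(_) => break,
    ///         Err(x) => old = x,
    ///     }
    /// }
    /// ```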
1806    ///
1807    /// # Examples
1808    ///
1809    /// ```
1810    /// use std::sync::atomic::{AtomicPtr, Ordering};
1811    ///
1812    /// let ptr = &mut 5;
1813    /// let some_ptr = AtomicPtr::new(ptr);
1814    ///
1815    /// let other_ptr = &mut 10;
1816    ///
1817    /// let value = some_ptr.compare_and_swap(ptr, other_ptr, Ordering::Relaxed);
1818    /// ```
1819    #[inline]
1820    #[stable(feature = "rust1", since = "1.0.0")]
1821    #[deprecated(
1822        since = "1.50.0",
1823        note = "Use `compare_exchange` or `compare_exchange_weak` instead"
1824    )]
1825    #[cfg(target_has_atomic = "ptr")]
1826    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1827    pub fn compare_and_swap(&self, current: *mut T, new: *mut T, order: Ordering) -> *mut T {
1828        match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
1829            Ok(x) => x,
1830            Err(x) => x,
1831        }
1832    }
1833
1834    /// Stores a value into the pointer if the current value is the same as the `current` value.
1835    ///
1836    /// The return value is a result indicating whether the new value was written and containing
1837    /// the previous value. On success this value is guaranteed to be equal to `current`.
1838    ///
1839    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
1840    /// ordering of this operation. `success` describes the required ordering for the
1841    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1842    /// `failure` describes the required ordering for the load operation that takes place when
1843    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1844    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1845    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1846    ///
1847    /// **Note:** This method is only available on platforms that support atomic
1848    /// operations on pointers.
1849    ///
1850    /// # Examples
1851    ///
1852    /// ```
1853    /// use std::sync::atomic::{AtomicPtr, Ordering};
1854    ///
1855    /// let ptr = &mut 5;
1856    /// let some_ptr = AtomicPtr::new(ptr);
1857    ///
1858    /// let other_ptr = &mut 10;
1859    ///
1860    /// let value = some_ptr.compare_exchange(ptr, other_ptr,
1861    ///                                       Ordering::SeqCst, Ordering::Relaxed);
1862    /// ```
1863    ///
1864    /// # Considerations
1865    ///
1866    /// `compare_exchange` is a [compare-and-swap operation] and thus exhibits the usual downsides
1867    /// of CAS operations. In particular, a load of the value followed by a successful
1868    /// `compare_exchange` with the previous load *does not ensure* that other threads have not
1869    /// changed the value in the interim. This is usually important when the *equality* check in
1870    /// the `compare_exchange` is being used to check the *identity* of a value, but equality
1871    /// does not necessarily imply identity. This is a particularly common case for pointers, as
1872    /// a pointer holding the same address does not imply that the same object exists at that
1873    /// address! In this case, `compare_exchange` can lead to the [ABA problem].
1874    ///
1875    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1876    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1877    #[inline]
1878    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
1879    #[cfg(target_has_atomic = "ptr")]
1880    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1881    pub fn compare_exchange(
1882        &self,
1883        current: *mut T,
1884        new: *mut T,
1885        success: Ordering,
1886        failure: Ordering,
1887    ) -> Result<*mut T, *mut T> {
1888        // SAFETY: data races are prevented by atomic intrinsics.
1889        unsafe { atomic_compare_exchange(self.p.get(), current, new, success, failure) }
1890    }
1891
1892    /// Stores a value into the pointer if the current value is the same as the `current` value.
1893    ///
1894    /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the
1895    /// comparison succeeds, which can result in more efficient code on some platforms. The
1896    /// return value is a result indicating whether the new value was written and containing the
1897    /// previous value.
1898    ///
1899    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
1900    /// ordering of this operation. `success` describes the required ordering for the
1901    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1902    /// `failure` describes the required ordering for the load operation that takes place when
1903    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1904    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1905    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1906    ///
1907    /// **Note:** This method is only available on platforms that support atomic
1908    /// operations on pointers.
1909    ///
1910    /// # Examples
1911    ///
1912    /// ```
1913    /// use std::sync::atomic::{AtomicPtr, Ordering};
1914    ///
1915    /// let some_ptr = AtomicPtr::new(&mut 5);
1916    ///
1917    /// let new = &mut 10;
1918    /// let mut old = some_ptr.load(Ordering::Relaxed);
1919    /// loop {
1920    ///     match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
1921    ///         Ok(_) => break,
1922    ///         Err(x) => old = x,
1923    ///     }
1924    /// }
1925    /// ```
1926    ///
1927    /// # Considerations
1928    ///
1929    /// `compare_exchange` is a [compare-and-swap operation] and thus exhibits the usual downsides
1930    /// of CAS operations. In particular, a load of the value followed by a successful
1931    /// `compare_exchange` with the previous load *does not ensure* that other threads have not
1932    /// changed the value in the interim. This is usually important when the *equality* check in
1933    /// the `compare_exchange` is being used to check the *identity* of a value, but equality
1934    /// does not necessarily imply identity. This is a particularly common case for pointers, as
1935    /// a pointer holding the same address does not imply that the same object exists at that
1936    /// address! In this case, `compare_exchange` can lead to the [ABA problem].
1937    ///
1938    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1939    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1940    #[inline]
1941    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
1942    #[cfg(target_has_atomic = "ptr")]
1943    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1944    pub fn compare_exchange_weak(
1945        &self,
1946        current: *mut T,
1947        new: *mut T,
1948        success: Ordering,
1949        failure: Ordering,
1950    ) -> Result<*mut T, *mut T> {
1951        // SAFETY: This intrinsic is unsafe because it operates on a raw pointer
1952        // but we know for sure that the pointer is valid (we just got it from
1953        // an `UnsafeCell` that we have by reference) and the atomic operation
1954        // itself allows us to safely mutate the `UnsafeCell` contents.
1955        unsafe { atomic_compare_exchange_weak(self.p.get(), current, new, success, failure) }
1956    }
1957
1958    /// Fetches the value, and applies a function to it that returns an optional
1959    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1960    /// returned `Some(_)`, else `Err(previous_value)`.
1961    ///
1962    /// Note: This may call the function multiple times if the value has been
1963    /// changed from other threads in the meantime, as long as the function
1964    /// returns `Some(_)`, but the function will have been applied only once to
1965    /// the stored value.
1966    ///
1967    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1968    /// ordering of this operation. The first describes the required ordering for
1969    /// when the operation finally succeeds while the second describes the
1970    /// required ordering for loads. These correspond to the success and failure
1971    /// orderings of [`AtomicPtr::compare_exchange`] respectively.
1972    ///
1973    /// Using [`Acquire`] as success ordering makes the store part of this
1974    /// operation [`Relaxed`], and using [`Release`] makes the final successful
1975    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1976    /// [`Acquire`] or [`Relaxed`].
1977    ///
1978    /// **Note:** This method is only available on platforms that support atomic
1979    /// operations on pointers.
1980    ///
1981    /// # Considerations
1982    ///
1983    /// This method is not magic; it is not provided by the hardware, and does not act like a
1984    /// critical section or mutex.
1985    ///
1986    /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
1987    /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem],
1988    /// which is a particularly common pitfall for pointers!
1989    ///
1990    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1991    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
1992    ///
1993    /// # Examples
1994    ///
1995    /// ```rust
1996    /// use std::sync::atomic::{AtomicPtr, Ordering};
1997    ///
1998    /// let ptr: *mut _ = &mut 5;
1999    /// let some_ptr = AtomicPtr::new(ptr);
2000    ///
2001    /// let new: *mut _ = &mut 10;
2002    /// assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
2003    /// let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
2004    ///     if x == ptr {
2005    ///         Some(new)
2006    ///     } else {
2007    ///         None
2008    ///     }
2009    /// });
2010    /// assert_eq!(result, Ok(ptr));
2011    /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2012    /// ```
2013    #[inline]
2014    #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
2015    #[cfg(target_has_atomic = "ptr")]
2016    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2017    pub fn fetch_update<F>(
2018        &self,
2019        set_order: Ordering,
2020        fetch_order: Ordering,
2021        mut f: F,
2022    ) -> Result<*mut T, *mut T>
2023    where
2024        F: FnMut(*mut T) -> Option<*mut T>,
2025    {
2026        let mut prev = self.load(fetch_order);
2027        while let Some(next) = f(prev) {
2028            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
2029                x @ Ok(_) => return x,
2030                Err(next_prev) => prev = next_prev,
2031            }
2032        }
2033        Err(prev)
2034    }

2035    /// Fetches the value, and applies a function to it that returns an optional
2036    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
2037    /// returned `Some(_)`, else `Err(previous_value)`.
2038    ///
2039    /// See also: [`update`](`AtomicPtr::update`).
2040    ///
2041    /// Note: This may call the function multiple times if the value has been
2042    /// changed from other threads in the meantime, as long as the function
2043    /// returns `Some(_)`, but the function will have been applied only once to
2044    /// the stored value.
2045    ///
2046    /// `try_update` takes two [`Ordering`] arguments to describe the memory
2047    /// ordering of this operation. The first describes the required ordering for
2048    /// when the operation finally succeeds while the second describes the
2049    /// required ordering for loads. These correspond to the success and failure
2050    /// orderings of [`AtomicPtr::compare_exchange`] respectively.
2051    ///
2052    /// Using [`Acquire`] as success ordering makes the store part of this
2053    /// operation [`Relaxed`], and using [`Release`] makes the final successful
2054    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
2055    /// [`Acquire`] or [`Relaxed`].
2056    ///
2057    /// **Note:** This method is only available on platforms that support atomic
2058    /// operations on pointers.
2059    ///
2060    /// # Considerations
2061    ///
2062    /// This method is not magic; it is not provided by the hardware, and does not act like a
2063    /// critical section or mutex.
2064    ///
2065    /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
2066    /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem],
2067    /// which is a particularly common pitfall for pointers!
2068    ///
2069    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2070    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
2071    ///
2072    /// # Examples
2073    ///
2074    /// ```rust
2075    /// #![feature(atomic_try_update)]
2076    /// use std::sync::atomic::{AtomicPtr, Ordering};
2077    ///
2078    /// let ptr: *mut _ = &mut 5;
2079    /// let some_ptr = AtomicPtr::new(ptr);
2080    ///
2081    /// let new: *mut _ = &mut 10;
2082    /// assert_eq!(some_ptr.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
2083    /// let result = some_ptr.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
2084    ///     if x == ptr {
2085    ///         Some(new)
2086    ///     } else {
2087    ///         None
2088    ///     }
2089    /// });
2090    /// assert_eq!(result, Ok(ptr));
2091    /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2092    /// ```
2093    #[inline]
2094    #[unstable(feature = "atomic_try_update", issue = "135894")]
2095    #[cfg(target_has_atomic = "ptr")]
2096    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2097    pub fn try_update(
2098        &self,
2099        set_order: Ordering,
2100        fetch_order: Ordering,
2101        f: impl FnMut(*mut T) -> Option<*mut T>,
2102    ) -> Result<*mut T, *mut T> {
2103        // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`;
2104        //      when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`.
2105        self.fetch_update(set_order, fetch_order, f)
2106    }
2107
2108    /// Fetches the value, and applies a function to it that returns a new value.
2109    /// The new value is stored and the old value is returned.
2110    ///
2111    /// See also: [`try_update`](`AtomicPtr::try_update`).
2112    ///
2113    /// Note: This may call the function multiple times if the value has been changed from other threads in
2114    /// the meantime, but the function will have been applied only once to the stored value.
2115    ///
2116    /// `update` takes two [`Ordering`] arguments to describe the memory
2117    /// ordering of this operation. The first describes the required ordering for
2118    /// when the operation finally succeeds while the second describes the
2119    /// required ordering for loads. These correspond to the success and failure
2120    /// orderings of [`AtomicPtr::compare_exchange`] respectively.
2121    ///
2122    /// Using [`Acquire`] as success ordering makes the store part
2123    /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
2124    /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2125    ///
2126    /// **Note:** This method is only available on platforms that support atomic
2127    /// operations on pointers.
2128    ///
2129    /// # Considerations
2130    ///
2131    /// This method is not magic; it is not provided by the hardware, and does not act like a
2132    /// critical section or mutex.
2133    ///
2134    /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
2135    /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem],
2136    /// which is a particularly common pitfall for pointers!
2137    ///
2138    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2139    /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
2140    ///
2141    /// # Examples
2142    ///
2143    /// ```rust
2144    /// #![feature(atomic_try_update)]
2145    ///
2146    /// use std::sync::atomic::{AtomicPtr, Ordering};
2147    ///
2148    /// let ptr: *mut _ = &mut 5;
2149    /// let some_ptr = AtomicPtr::new(ptr);
2150    ///
2151    /// let new: *mut _ = &mut 10;
2152    /// let result = some_ptr.update(Ordering::SeqCst, Ordering::SeqCst, |_| new);
2153    /// assert_eq!(result, ptr);
2154    /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2155    /// ```
2156    #[inline]
2157    #[unstable(feature = "atomic_try_update", issue = "135894")]
2158    #[cfg(target_has_atomic = "ptr")]
2159    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2160    pub fn update(
2161        &self,
2162        set_order: Ordering,
2163        fetch_order: Ordering,
2164        mut f: impl FnMut(*mut T) -> *mut T,
2165    ) -> *mut T {
2166        let mut prev = self.load(fetch_order);
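        // `compare_exchange_weak` may fail spuriously or because another thread changed the
        // pointer; in either case, re-apply `f` to the freshly read value and retry.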
2167        loop {
2168            match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
2169                Ok(x) => break x,
2170                Err(next_prev) => prev = next_prev,
2171            }
2172        }
2173    }
2174
2175    /// Offsets the pointer's address by adding `val` (in units of `T`),
2176    /// returning the previous pointer.
2177    ///
2178    /// This is equivalent to using [`wrapping_add`] to atomically perform the
2179    /// equivalent of `ptr = ptr.wrapping_add(val);`.
2180    ///
2181    /// This method operates in units of `T`, which means that it cannot be used
2182    /// to offset the pointer by an amount which is not a multiple of
2183    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2184    /// work with a deliberately misaligned pointer. In such cases, you may use
2185    /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead.
2186    ///
2187    /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the
2188    /// memory ordering of this operation. All ordering modes are possible. Note
2189    /// that using [`Acquire`] makes the store part of this operation
2190    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2191    ///
2192    /// **Note**: This method is only available on platforms that support atomic
2193    /// operations on [`AtomicPtr`].
2194    ///
2195    /// [`wrapping_add`]: pointer::wrapping_add
2196    ///
2197    /// # Examples
2198    ///
2199    /// ```
2200    /// #![feature(strict_provenance_atomic_ptr)]
2201    /// use core::sync::atomic::{AtomicPtr, Ordering};
2202    ///
2203    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2204    /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0);
2205    /// // Note: units of `size_of::<i64>()`.
2206    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8);
2207    /// ```
2208    #[inline]
2209    #[cfg(target_has_atomic = "ptr")]
2210    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2211    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2212    pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T {
2213        self.fetch_byte_add(val.wrapping_mul(size_of::<T>()), order)
2214    }
2215
2216    /// Offsets the pointer's address by subtracting `val` (in units of `T`),
2217    /// returning the previous pointer.
2218    ///
2219    /// This is equivalent to using [`wrapping_sub`] to atomically perform the
2220    /// equivalent of `ptr = ptr.wrapping_sub(val);`.
2221    ///
2222    /// This method operates in units of `T`, which means that it cannot be used
2223    /// to offset the pointer by an amount which is not a multiple of
2224    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2225    /// work with a deliberately misaligned pointer. In such cases, you may use
2226    /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead.
2227    ///
2228    /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory
2229    /// ordering of this operation. All ordering modes are possible. Note that
2230    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2231    /// and using [`Release`] makes the load part [`Relaxed`].
2232    ///
2233    /// **Note**: This method is only available on platforms that support atomic
2234    /// operations on [`AtomicPtr`].
2235    ///
2236    /// [`wrapping_sub`]: pointer::wrapping_sub
2237    ///
2238    /// # Examples
2239    ///
2240    /// ```
2241    /// #![feature(strict_provenance_atomic_ptr)]
2242    /// use core::sync::atomic::{AtomicPtr, Ordering};
2243    ///
2244    /// let array = [1i32, 2i32];
2245    /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _);
2246    ///
2247    /// assert!(core::ptr::eq(
2248    ///     atom.fetch_ptr_sub(1, Ordering::Relaxed),
2249    ///     &array[1],
2250    /// ));
2251    /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0]));
2252    /// ```
2253    #[inline]
2254    #[cfg(target_has_atomic = "ptr")]
2255    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2256    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2257    pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T {
2258        self.fetch_byte_sub(val.wrapping_mul(size_of::<T>()), order)
2259    }
2260
2261    /// Offsets the pointer's address by adding `val` *bytes*, returning the
2262    /// previous pointer.
2263    ///
2264    /// This is equivalent to using [`wrapping_byte_add`] to atomically
2265    /// perform `ptr = ptr.wrapping_byte_add(val)`.
2266    ///
2267    /// `fetch_byte_add` takes an [`Ordering`] argument which describes the
2268    /// memory ordering of this operation. All ordering modes are possible. Note
2269    /// that using [`Acquire`] makes the store part of this operation
2270    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2271    ///
2272    /// **Note**: This method is only available on platforms that support atomic
2273    /// operations on [`AtomicPtr`].
2274    ///
2275    /// [`wrapping_byte_add`]: pointer::wrapping_byte_add
2276    ///
2277    /// # Examples
2278    ///
2279    /// ```
2280    /// #![feature(strict_provenance_atomic_ptr)]
2281    /// use core::sync::atomic::{AtomicPtr, Ordering};
2282    ///
2283    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2284    /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0);
2285    /// // Note: in units of bytes, not `size_of::<i64>()`.
2286    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1);
2287    /// ```
2288    #[inline]
2289    #[cfg(target_has_atomic = "ptr")]
2290    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2291    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2292    pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T {
2293        // SAFETY: data races are prevented by atomic intrinsics.
2294        unsafe { atomic_add(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() }
2295    }
2296
2297    /// Offsets the pointer's address by subtracting `val` *bytes*, returning the
2298    /// previous pointer.
2299    ///
2300    /// This is equivalent to using [`wrapping_byte_sub`] to atomically
2301    /// perform `ptr = ptr.wrapping_byte_sub(val)`.
2302    ///
2303    /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the
2304    /// memory ordering of this operation. All ordering modes are possible. Note
2305    /// that using [`Acquire`] makes the store part of this operation
2306    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2307    ///
2308    /// **Note**: This method is only available on platforms that support atomic
2309    /// operations on [`AtomicPtr`].
2310    ///
2311    /// [`wrapping_byte_sub`]: pointer::wrapping_byte_sub
2312    ///
2313    /// # Examples
2314    ///
2315    /// ```
2316    /// #![feature(strict_provenance_atomic_ptr)]
2317    /// use core::sync::atomic::{AtomicPtr, Ordering};
2318    ///
2319    /// let atom = AtomicPtr::<i64>::new(core::ptr::without_provenance_mut(1));
2320    /// assert_eq!(atom.fetch_byte_sub(1, Ordering::Relaxed).addr(), 1);
2321    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 0);
2322    /// ```
2323    #[inline]
2324    #[cfg(target_has_atomic = "ptr")]
2325    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2326    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2327    pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T {
2328        // SAFETY: data races are prevented by atomic intrinsics.
2329        unsafe { atomic_sub(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() }
2330    }
2331
2332    /// Performs a bitwise "or" operation on the address of the current pointer,
2333    /// and the argument `val`, and stores a pointer with provenance of the
2334    /// current pointer and the resulting address.
2335    ///
2336    /// This is equivalent to using [`map_addr`] to atomically perform
2337    /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged
2338    /// pointer schemes to atomically set tag bits.
2339    ///
2340    /// **Caveat**: This operation returns the previous value. To compute the
2341    /// stored value without losing provenance, you may use [`map_addr`]. For
2342    /// example: `a.fetch_or(val, order).map_addr(|a| a | val)`.
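    ///
    /// A minimal sketch of that pattern (`Relaxed` is chosen purely for illustration):
    ///
    /// ```
    /// #![feature(strict_provenance_atomic_ptr)]
    /// use core::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let a = AtomicPtr::<i64>::new(core::ptr::null_mut());
    /// let val = 1;
    /// // `fetch_or` returns the *previous* pointer; re-apply the "or" via `map_addr`
    /// // to reconstruct the stored address while keeping the previous provenance.
    /// let stored = a.fetch_or(val, Ordering::Relaxed).map_addr(|addr| addr | val);
    /// assert_eq!(stored.addr(), a.load(Ordering::Relaxed).addr());
    /// ```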
2343    ///
2344    /// `fetch_or` takes an [`Ordering`] argument which describes the memory
2345    /// ordering of this operation. All ordering modes are possible. Note that
2346    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2347    /// and using [`Release`] makes the load part [`Relaxed`].
2348    ///
2349    /// **Note**: This method is only available on platforms that support atomic
2350    /// operations on [`AtomicPtr`].
2351    ///
2352    /// This API and its claimed semantics are part of the Strict Provenance
2353    /// experiment; see the [module documentation for `ptr`][crate::ptr] for
2354    /// details.
2355    ///
2356    /// [`map_addr`]: pointer::map_addr
2357    ///
2358    /// # Examples
2359    ///
2360    /// ```
2361    /// #![feature(strict_provenance_atomic_ptr)]
2362    /// use core::sync::atomic::{AtomicPtr, Ordering};
2363    ///
2364    /// let pointer = &mut 3i64 as *mut i64;
2365    ///
2366    /// let atom = AtomicPtr::<i64>::new(pointer);
2367    /// // Tag the bottom bit of the pointer.
2368    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
2369    /// // Extract and untag.
2370    /// let tagged = atom.load(Ordering::Relaxed);
2371    /// assert_eq!(tagged.addr() & 1, 1);
2372    /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2373    /// ```
2374    #[inline]
2375    #[cfg(target_has_atomic = "ptr")]
2376    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2377    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2378    pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T {
2379        // SAFETY: data races are prevented by atomic intrinsics.
2380        unsafe { atomic_or(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() }
2381    }
2382
2383    /// Performs a bitwise "and" operation on the address of the current
2384    /// pointer, and the argument `val`, and stores a pointer with provenance of
2385    /// the current pointer and the resulting address.
2386    ///
2387    /// This is equivalent to using [`map_addr`] to atomically perform
2388    /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged
2389    /// pointer schemes to atomically unset tag bits.
2390    ///
2391    /// **Caveat**: This operation returns the previous value. To compute the
2392    /// stored value without losing provenance, you may use [`map_addr`]. For
2393    /// example: `a.fetch_and(val, order).map_addr(|a| a & val)`.
2394    ///
2395    /// `fetch_and` takes an [`Ordering`] argument which describes the memory
2396    /// ordering of this operation. All ordering modes are possible. Note that
2397    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2398    /// and using [`Release`] makes the load part [`Relaxed`].
2399    ///
2400    /// **Note**: This method is only available on platforms that support atomic
2401    /// operations on [`AtomicPtr`].
2402    ///
2403    /// This API and its claimed semantics are part of the Strict Provenance
2404    /// experiment; see the [module documentation for `ptr`][crate::ptr] for
2405    /// details.
2406    ///
2407    /// [`map_addr`]: pointer::map_addr
2408    ///
2409    /// # Examples
2410    ///
2411    /// ```
2412    /// #![feature(strict_provenance_atomic_ptr)]
2413    /// use core::sync::atomic::{AtomicPtr, Ordering};
2414    ///
2415    /// let pointer = &mut 3i64 as *mut i64;
2416    /// // A tagged pointer
2417    /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2418    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
2419    /// // Untag, and extract the previously tagged pointer.
2420    /// let untagged = atom.fetch_and(!1, Ordering::Relaxed)
2421    ///     .map_addr(|a| a & !1);
2422    /// assert_eq!(untagged, pointer);
2423    /// ```
2424    #[inline]
2425    #[cfg(target_has_atomic = "ptr")]
2426    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2427    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2428    pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T {
2429        // SAFETY: data races are prevented by atomic intrinsics.
2430        unsafe { atomic_and(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() }
2431    }
2432
2433    /// Performs a bitwise "xor" operation on the address of the current
2434    /// pointer, and the argument `val`, and stores a pointer with provenance of
2435    /// the current pointer and the resulting address.
2436    ///
2437    /// This is equivalent to using [`map_addr`] to atomically perform
2438    /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged
2439    /// pointer schemes to atomically toggle tag bits.
2440    ///
2441    /// **Caveat**: This operation returns the previous value. To compute the
2442    /// stored value without losing provenance, you may use [`map_addr`]. For
2443    /// example: `a.fetch_xor(val, order).map_addr(|a| a ^ val)`.
2444    ///
2445    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory
2446    /// ordering of this operation. All ordering modes are possible. Note that
2447    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2448    /// and using [`Release`] makes the load part [`Relaxed`].
2449    ///
2450    /// **Note**: This method is only available on platforms that support atomic
2451    /// operations on [`AtomicPtr`].
2452    ///
2453    /// This API and its claimed semantics are part of the Strict Provenance
2454    /// experiment, see the [module documentation for `ptr`][crate::ptr] for
2455    /// details.
2456    ///
2457    /// [`map_addr`]: pointer::map_addr
2458    ///
2459    /// # Examples
2460    ///
2461    /// ```
2462    /// #![feature(strict_provenance_atomic_ptr)]
2463    /// use core::sync::atomic::{AtomicPtr, Ordering};
2464    ///
2465    /// let pointer = &mut 3i64 as *mut i64;
2466    /// let atom = AtomicPtr::<i64>::new(pointer);
2467    ///
2468    /// // Toggle a tag bit on the pointer.
2469    /// atom.fetch_xor(1, Ordering::Relaxed);
2470    /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
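    ///
    /// // Toggle the bit back, and recover the stored (untagged) pointer from the
    /// // returned previous value, as described in the caveat above (a sketch).
    /// let prev = atom.fetch_xor(1, Ordering::Relaxed);
    /// assert_eq!(prev.map_addr(|a| a ^ 1), pointer);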
2471    /// ```
2472    #[inline]
2473    #[cfg(target_has_atomic = "ptr")]
2474    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2475    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2476    pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T {
2477        // SAFETY: data races are prevented by atomic intrinsics.
2478        unsafe { atomic_xor(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() }
2479    }
2480
2481    /// Returns a mutable pointer to the underlying pointer.
2482    ///
2483    /// Doing non-atomic reads and writes on the resulting pointer can be a data race.
2484    /// This method is mostly useful for FFI, where the function signature may use
2485    /// `*mut *mut T` instead of `&AtomicPtr<T>`.
2486    ///
2487    /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
2488    /// atomic types work with interior mutability. All modifications of an atomic change the value
2489    /// through a shared reference, and can do so safely as long as they use atomic operations. Any
2490    /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
2491    /// restriction: operations on it must be atomic.
2492    ///
2493    /// # Examples
2494    ///
2495    /// ```ignore (extern-declaration)
2496    /// use std::sync::atomic::AtomicPtr;
2497    ///
2498    /// extern "C" {
2499    ///     fn my_atomic_op(arg: *mut *mut u32);
2500    /// }
2501    ///
2502    /// let mut value = 17;
2503    /// let atomic = AtomicPtr::new(&mut value);
2504    ///
2505    /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
2506    /// unsafe {
2507    ///     my_atomic_op(atomic.as_ptr());
2508    /// }
2509    /// ```
2510    #[inline]
2511    #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
2512    #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
2513    #[rustc_never_returns_null_ptr]
2514    pub const fn as_ptr(&self) -> *mut *mut T {
2515        self.p.get()
2516    }
2517}
2518
2519#[cfg(target_has_atomic_load_store = "8")]
2520#[stable(feature = "atomic_bool_from", since = "1.24.0")]
2521#[rustc_const_unstable(feature = "const_try", issue = "74935")]
2522impl const From<bool> for AtomicBool {
2523    /// Converts a `bool` into an `AtomicBool`.
2524    ///
2525    /// # Examples
2526    ///
2527    /// ```
2528    /// use std::sync::atomic::AtomicBool;
2529    /// let atomic_bool = AtomicBool::from(true);
2530    /// assert_eq!(format!("{atomic_bool:?}"), "true")
2531    /// ```
2532    #[inline]
2533    fn from(b: bool) -> Self {
2534        Self::new(b)
2535    }
2536}
2537
2538#[cfg(target_has_atomic_load_store = "ptr")]
2539#[stable(feature = "atomic_from", since = "1.23.0")]
2540impl<T> From<*mut T> for AtomicPtr<T> {
2541    /// Converts a `*mut T` into an `AtomicPtr<T>`.
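    ///
    /// # Examples
    ///
    /// A minimal sketch of the conversion (the `Relaxed` load is only used to read
    /// the value back for the assertion):
    ///
    /// ```
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let mut value = 5;
    /// let ptr = &mut value as *mut i32;
    /// let atomic_ptr = AtomicPtr::from(ptr);
    /// assert_eq!(atomic_ptr.load(Ordering::Relaxed), ptr);
    /// ```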
2542    #[inline]
2543    fn from(p: *mut T) -> Self {
2544        Self::new(p)
2545    }
2546}
2547
2548#[allow(unused_macros)] // This macro ends up being unused on some architectures.
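// Helper for the generated docs below: expands to the `yes` tokens for `u8`/`i8`
// and to the `no` tokens for every other integer type, so the doc text can call
// out properties that only hold for the 8-bit atomics.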
2549macro_rules! if_8_bit {
2550    (u8, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($yes)*)?) };
2551    (i8, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($yes)*)?) };
2552    ($_:ident, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($no)*)?) };
2553}
2554
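// Generates one atomic integer type: the `$atomic_type` struct wrapping an
// `UnsafeCell<$int_type>`, its trait impls (`Default`, `From`, `Debug`, `Sync`),
// and its method set. The `$cfg_*`, `$stable_*`, and `$const_stable_*` meta
// arguments supply the per-type cfg and stability attributes.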
2555#[cfg(target_has_atomic_load_store)]
2556macro_rules! atomic_int {
2557    ($cfg_cas:meta,
2558     $cfg_align:meta,
2559     $stable:meta,
2560     $stable_cxchg:meta,
2561     $stable_debug:meta,
2562     $stable_access:meta,
2563     $stable_from:meta,
2564     $stable_nand:meta,
2565     $const_stable_new:meta,
2566     $const_stable_into_inner:meta,
2567     $diagnostic_item:meta,
2568     $s_int_type:literal,
2569     $extra_feature:expr,
2570     $min_fn:ident, $max_fn:ident,
2571     $align:expr,
2572     $int_type:ident $atomic_type:ident) => {
2573        /// An integer type which can be safely shared between threads.
2574        ///
2575        /// This type has the same
2576        #[doc = if_8_bit!(
2577            $int_type,
2578            yes = ["size, alignment, and bit validity"],
2579            no = ["size and bit validity"],
2580        )]
2581        /// as the underlying integer type, [`
2582        #[doc = $s_int_type]
2583        /// `].
2584        #[doc = if_8_bit! {
2585            $int_type,
2586            no = [
2587                "However, the alignment of this type is always equal to its ",
2588                "size, even on targets where [`", $s_int_type, "`] has a ",
2589                "lesser alignment."
2590            ],
2591        }]
2592        ///
2593        /// For more about the differences between atomic types and
2594        /// non-atomic types as well as information about the portability of
2595        /// this type, please see the [module-level documentation].
2596        ///
2597        /// **Note:** This type is only available on platforms that support
2598        /// atomic loads and stores of [`
2599        #[doc = $s_int_type]
2600        /// `].
2601        ///
2602        /// [module-level documentation]: crate::sync::atomic
2603        #[$stable]
2604        #[$diagnostic_item]
2605        #[repr(C, align($align))]
2606        pub struct $atomic_type {
2607            v: UnsafeCell<$int_type>,
2608        }
2609
2610        #[$stable]
2611        impl Default for $atomic_type {
2612            #[inline]
2613            fn default() -> Self {
2614                Self::new(Default::default())
2615            }
2616        }
2617
2618        #[$stable_from]
2619        #[rustc_const_unstable(feature = "const_try", issue = "74935")]
2620        impl const From<$int_type> for $atomic_type {
2621            #[doc = concat!("Converts an `", stringify!($int_type), "` into an `", stringify!($atomic_type), "`.")]
2622            #[inline]
2623            fn from(v: $int_type) -> Self { Self::new(v) }
2624        }
2625
2626        #[$stable_debug]
2627        impl fmt::Debug for $atomic_type {
2628            fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
2629                fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
2630            }
2631        }
2632
2633        // Send is implicitly implemented.
2634        #[$stable]
2635        unsafe impl Sync for $atomic_type {}
2636
2637        impl $atomic_type {
2638            /// Creates a new atomic integer.
2639            ///
2640            /// # Examples
2641            ///
2642            /// ```
2643            #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2644            ///
2645            #[doc = concat!("let atomic_forty_two = ", stringify!($atomic_type), "::new(42);")]
2646            /// ```
2647            #[inline]
2648            #[$stable]
2649            #[$const_stable_new]
2650            #[must_use]
2651            pub const fn new(v: $int_type) -> Self {
2652                Self {v: UnsafeCell::new(v)}
2653            }
2654
2655            /// Creates a new reference to an atomic integer from a pointer.
2656            ///
2657            /// # Examples
2658            ///
2659            /// ```
2660            #[doc = concat!($extra_feature, "use std::sync::atomic::{self, ", stringify!($atomic_type), "};")]
2661            ///
2662            /// // Get a pointer to an allocated value
2663            #[doc = concat!("let ptr: *mut ", stringify!($int_type), " = Box::into_raw(Box::new(0));")]
2664            ///
2665            #[doc = concat!("assert!(ptr.cast::<", stringify!($atomic_type), ">().is_aligned());")]
2666            ///
2667            /// {
2668            ///     // Create an atomic view of the allocated value
2669            // SAFETY: this is a doc comment, tidy, it can't hurt you (also guaranteed by the construction of `ptr` and the assert above)
2670            #[doc = concat!("    let atomic = unsafe {", stringify!($atomic_type), "::from_ptr(ptr) };")]
2671            ///
2672            ///     // Use `atomic` for atomic operations, possibly share it with other threads
2673            ///     atomic.store(1, atomic::Ordering::Relaxed);
2674            /// }
2675            ///
2676            /// // It's ok to non-atomically access the value behind `ptr`,
2677            /// // since the reference to the atomic ended its lifetime in the block above
2678            /// assert_eq!(unsafe { *ptr }, 1);
2679            ///
2680            /// // Deallocate the value
2681            /// unsafe { drop(Box::from_raw(ptr)) }
2682            /// ```
2683            ///
2684            /// # Safety
2685            ///
2686            /// * `ptr` must be aligned to
2687            #[doc = concat!("  `align_of::<", stringify!($atomic_type), ">()`")]
2688            #[doc = if_8_bit!{
2689                $int_type,
2690                yes = [
2691                    "  (note that this is always true, since `align_of::<",
2692                    stringify!($atomic_type), ">() == 1`)."
2693                ],
2694                no = [
2695                    "  (note that on some platforms this can be bigger than `align_of::<",
2696                    stringify!($int_type), ">()`)."
2697                ],
2698            }]
2699            /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2700            /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
2701            ///   allowed to mix atomic and non-atomic accesses, or atomic accesses of different sizes,
2702            ///   without synchronization.
2703            ///
2704            /// [valid]: crate::ptr#safety
2705            /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
2706            #[inline]
2707            #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
2708            #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
2709            pub const unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a $atomic_type {
2710                // SAFETY: guaranteed by the caller
2711                unsafe { &*ptr.cast() }
2712            }
2713
2715            /// Returns a mutable reference to the underlying integer.
2716            ///
2717            /// This is safe because the mutable reference guarantees that no other threads are
2718            /// concurrently accessing the atomic data.
2719            ///
2720            /// # Examples
2721            ///
2722            /// ```
2723            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2724            ///
2725            #[doc = concat!("let mut some_var = ", stringify!($atomic_type), "::new(10);")]
2726            /// assert_eq!(*some_var.get_mut(), 10);
2727            /// *some_var.get_mut() = 5;
2728            /// assert_eq!(some_var.load(Ordering::SeqCst), 5);
2729            /// ```
2730            #[inline]
2731            #[$stable_access]
2732            pub fn get_mut(&mut self) -> &mut $int_type {
2733                self.v.get_mut()
2734            }
2735
2736            #[doc = concat!("Get atomic access to a `&mut ", stringify!($int_type), "`.")]
2737            ///
2738            #[doc = if_8_bit! {
2739                $int_type,
2740                no = [
2741                    "**Note:** This function is only available on targets where `",
2742                    stringify!($atomic_type), "` has the same alignment as `", stringify!($int_type), "`."
2743                ],
2744            }]
2745            ///
2746            /// # Examples
2747            ///
2748            /// ```
2749            /// #![feature(atomic_from_mut)]
2750            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2751            ///
2752            /// let mut some_int = 123;
2753            #[doc = concat!("let a = ", stringify!($atomic_type), "::from_mut(&mut some_int);")]
2754            /// a.store(100, Ordering::Relaxed);
2755            /// assert_eq!(some_int, 100);
2756            /// ```
2757            ///
2758            #[inline]
2759            #[$cfg_align]
2760            #[unstable(feature = "atomic_from_mut", issue = "76314")]
2761            pub fn from_mut(v: &mut $int_type) -> &mut Self {
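                // Compile-time check that the alignments match: the empty-array pattern
                // only type-checks when `align_of::<Self>() - align_of::<$int_type>()`
                // is zero.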
2762                let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
2763                // SAFETY:
2764                //  - the mutable reference guarantees unique ownership.
2765                //  - the alignment of `$int_type` and `Self` is the
2766                //    same, as promised by $cfg_align and verified above.
2767                unsafe { &mut *(v as *mut $int_type as *mut Self) }
2768            }
2769
2770            #[doc = concat!("Get non-atomic access to a `&mut [", stringify!($atomic_type), "]` slice.")]
2771            ///
2772            /// This is safe because the mutable reference guarantees that no other threads are
2773            /// concurrently accessing the atomic data.
2774            ///
2775            /// # Examples
2776            ///
2777            /// ```ignore-wasm
2778            /// #![feature(atomic_from_mut)]
2779            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2780            ///
2781            #[doc = concat!("let mut some_ints = [const { ", stringify!($atomic_type), "::new(0) }; 10];")]
2782            ///
2783            #[doc = concat!("let view: &mut [", stringify!($int_type), "] = ", stringify!($atomic_type), "::get_mut_slice(&mut some_ints);")]
2784            /// assert_eq!(view, [0; 10]);
2785            /// view
2786            ///     .iter_mut()
2787            ///     .enumerate()
2788            ///     .for_each(|(idx, int)| *int = idx as _);
2789            ///
2790            /// std::thread::scope(|s| {
2791            ///     some_ints
2792            ///         .iter()
2793            ///         .enumerate()
2794            ///         .for_each(|(idx, int)| {
2795            ///             s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
2796            ///         })
2797            /// });
2798            /// ```
2799            #[inline]
2800            #[unstable(feature = "atomic_from_mut", issue = "76314")]
2801            pub fn get_mut_slice(this: &mut [Self]) -> &mut [$int_type] {
2802                // SAFETY: the mutable reference guarantees unique ownership.
2803                unsafe { &mut *(this as *mut [Self] as *mut [$int_type]) }
2804            }
2805
2806            #[doc = concat!("Get atomic access to a `&mut [", stringify!($int_type), "]` slice.")]
2807            ///
2808            /// # Examples
2809            ///
2810            /// ```ignore-wasm
2811            /// #![feature(atomic_from_mut)]
2812            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2813            ///
2814            /// let mut some_ints = [0; 10];
2815            #[doc = concat!("let a = &*", stringify!($atomic_type), "::from_mut_slice(&mut some_ints);")]
2816            /// std::thread::scope(|s| {
2817            ///     for i in 0..a.len() {
2818            ///         s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
2819            ///     }
2820            /// });
2821            /// for (i, n) in some_ints.into_iter().enumerate() {
2822            ///     assert_eq!(i, n as usize);
2823            /// }
2824            /// ```
2825            #[inline]
2826            #[$cfg_align]
2827            #[unstable(feature = "atomic_from_mut", issue = "76314")]
2828            pub fn from_mut_slice(v: &mut [$int_type]) -> &mut [Self] {
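                // Same compile-time alignment check as in `from_mut` above.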
2829                let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
2830                // SAFETY:
2831                //  - the mutable reference guarantees unique ownership.
2832                //  - the alignment of `$int_type` and `Self` is the
2833                //    same, as promised by $cfg_align and verified above.
2834                unsafe { &mut *(v as *mut [$int_type] as *mut [Self]) }
2835            }
2836
2837            /// Consumes the atomic and returns the contained value.
2838            ///
2839            /// This is safe because passing `self` by value guarantees that no other threads are
2840            /// concurrently accessing the atomic data.
2841            ///
2842            /// # Examples
2843            ///
2844            /// ```
2845            #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2846            ///
2847            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2848            /// assert_eq!(some_var.into_inner(), 5);
2849            /// ```
2850            #[inline]
2851            #[$stable_access]
2852            #[$const_stable_into_inner]
2853            pub const fn into_inner(self) -> $int_type {
2854                self.v.into_inner()
2855            }
2856
2857            /// Loads a value from the atomic integer.
2858            ///
2859            /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2860            /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
2861            ///
2862            /// # Panics
2863            ///
2864            /// Panics if `order` is [`Release`] or [`AcqRel`].
2865            ///
2866            /// # Examples
2867            ///
2868            /// ```
2869            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2870            ///
2871            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2872            ///
2873            /// assert_eq!(some_var.load(Ordering::Relaxed), 5);
2874            /// ```
2875            #[inline]
2876            #[$stable]
2877            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2878            pub fn load(&self, order: Ordering) -> $int_type {
2879                // SAFETY: data races are prevented by atomic intrinsics.
2880                unsafe { atomic_load(self.v.get(), order) }
2881            }
2882
2883            /// Stores a value into the atomic integer.
2884            ///
2885            /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2886            /// Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
2887            ///
2888            /// # Panics
2889            ///
2890            /// Panics if `order` is [`Acquire`] or [`AcqRel`].
2891            ///
2892            /// # Examples
2893            ///
2894            /// ```
2895            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2896            ///
2897            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2898            ///
2899            /// some_var.store(10, Ordering::Relaxed);
2900            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2901            /// ```
2902            #[inline]
2903            #[$stable]
2904            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2905            pub fn store(&self, val: $int_type, order: Ordering) {
2906                // SAFETY: data races are prevented by atomic intrinsics.
2907                unsafe { atomic_store(self.v.get(), val, order); }
2908            }
2909
2910            /// Stores a value into the atomic integer, returning the previous value.
2911            ///
2912            /// `swap` takes an [`Ordering`] argument which describes the memory ordering
2913            /// of this operation. All ordering modes are possible. Note that using
2914            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2915            /// using [`Release`] makes the load part [`Relaxed`].
2916            ///
2917            /// **Note**: This method is only available on platforms that support atomic operations on
2918            #[doc = concat!("[`", $s_int_type, "`].")]
2919            ///
2920            /// # Examples
2921            ///
2922            /// ```
2923            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2924            ///
2925            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2926            ///
2927            /// assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
2928            /// ```
2929            #[inline]
2930            #[$stable]
2931            #[$cfg_cas]
2932            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2933            pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
2934                // SAFETY: data races are prevented by atomic intrinsics.
2935                unsafe { atomic_swap(self.v.get(), val, order) }
2936            }
2937
2938            /// Stores a value into the atomic integer if the current value is the same as
2939            /// the `current` value.
2940            ///
2941            /// The return value is always the previous value. If it is equal to `current`, then the
2942            /// value was updated.
2943            ///
2944            /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
2945            /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
2946            /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
2947            /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
2948            /// happens, and using [`Release`] makes the load part [`Relaxed`].
2949            ///
2950            /// **Note**: This method is only available on platforms that support atomic operations on
2951            #[doc = concat!("[`", $s_int_type, "`].")]
2952            ///
2953            /// # Migrating to `compare_exchange` and `compare_exchange_weak`
2954            ///
2955            /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
2956            /// memory orderings:
2957            ///
2958            /// Original | Success | Failure
2959            /// -------- | ------- | -------
2960            /// Relaxed  | Relaxed | Relaxed
2961            /// Acquire  | Acquire | Acquire
2962            /// Release  | Release | Relaxed
2963            /// AcqRel   | AcqRel  | Acquire
2964            /// SeqCst   | SeqCst  | SeqCst
2965            ///
2966            /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
2967            /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
2968            /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
2969            /// rather than to infer success vs failure based on the value that was read.
2970            ///
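            /// For instance, a `Relaxed` `compare_and_swap` call can be migrated as in the
            /// following sketch, using the `Relaxed`/`Relaxed` row of the table above:
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let a = ", stringify!($atomic_type), "::new(5);")]
            /// // Before: `let prev = a.compare_and_swap(5, 10, Ordering::Relaxed);`
            /// let prev = a.compare_exchange(5, 10, Ordering::Relaxed, Ordering::Relaxed)
            ///     .unwrap_or_else(|x| x);
            /// assert_eq!(prev, 5);
            /// ```
            ///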
2971            /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
2972            /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
2973            /// which allows the compiler to generate better assembly code when the compare and swap
2974            /// is used in a loop.
2975            ///
2976            /// # Examples
2977            ///
2978            /// ```
2979            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2980            ///
2981            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2982            ///
2983            /// assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
2984            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2985            ///
2986            /// assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
2987            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2988            /// ```
2989            #[inline]
2990            #[$stable]
2991            #[deprecated(
2992                since = "1.50.0",
2993                note = "Use `compare_exchange` or `compare_exchange_weak` instead")
2994            ]
2995            #[$cfg_cas]
2996            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2997            pub fn compare_and_swap(&self,
2998                                    current: $int_type,
2999                                    new: $int_type,
3000                                    order: Ordering) -> $int_type {
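                // Map the single `order` to the success/failure ordering pair described in
                // the migration table above, then collapse `Ok`/`Err` back to a plain value.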
3001                match self.compare_exchange(current,
3002                                            new,
3003                                            order,
3004                                            strongest_failure_ordering(order)) {
3005                    Ok(x) => x,
3006                    Err(x) => x,
3007                }
3008            }
3009
3010            /// Stores a value into the atomic integer if the current value is the same as
3011            /// the `current` value.
3012            ///
3013            /// The return value is a result indicating whether the new value was written and
3014            /// containing the previous value. On success this value is guaranteed to be equal to
3015            /// `current`.
3016            ///
3017            /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
3018            /// ordering of this operation. `success` describes the required ordering for the
3019            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
3020            /// `failure` describes the required ordering for the load operation that takes place when
3021            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
3022            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
3023            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3024            ///
3025            /// **Note**: This method is only available on platforms that support atomic operations on
3026            #[doc = concat!("[`", $s_int_type, "`].")]
3027            ///
3028            /// # Examples
3029            ///
3030            /// ```
3031            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3032            ///
3033            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
3034            ///
3035            /// assert_eq!(some_var.compare_exchange(5, 10,
3036            ///                                      Ordering::Acquire,
3037            ///                                      Ordering::Relaxed),
3038            ///            Ok(5));
3039            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
3040            ///
3041            /// assert_eq!(some_var.compare_exchange(6, 12,
3042            ///                                      Ordering::SeqCst,
3043            ///                                      Ordering::Acquire),
3044            ///            Err(10));
3045            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
3046            /// ```
3047            ///
3048            /// # Considerations
3049            ///
3050            /// `compare_exchange` is a [compare-and-swap operation] and thus exhibits the usual downsides
3051            /// of CAS operations. In particular, a load of the value followed by a successful
3052            /// `compare_exchange` with the previous load *does not ensure* that other threads have not
3053            /// changed the value in the interim! This is usually important when the *equality* check in
3054            /// the `compare_exchange` is being used to check the *identity* of a value, but equality
3055            /// does not necessarily imply identity. This is a particularly common case for pointers, as
3056            /// a pointer holding the same address does not imply that the same object exists at that
3057            /// address! In this case, `compare_exchange` can lead to the [ABA problem].
3058            ///
3059            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3060            /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3061            #[inline]
3062            #[$stable_cxchg]
3063            #[$cfg_cas]
3064            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3065            pub fn compare_exchange(&self,
3066                                    current: $int_type,
3067                                    new: $int_type,
3068                                    success: Ordering,
3069                                    failure: Ordering) -> Result<$int_type, $int_type> {
3070                // SAFETY: data races are prevented by atomic intrinsics.
3071                unsafe { atomic_compare_exchange(self.v.get(), current, new, success, failure) }
3072            }
3073
3074            /// Stores a value into the atomic integer if the current value is the same as
3075            /// the `current` value.
3076            ///
3077            #[doc = concat!("Unlike [`", stringify!($atomic_type), "::compare_exchange`],")]
3078            /// this function is allowed to spuriously fail even
3079            /// when the comparison succeeds, which can result in more efficient code on some
3080            /// platforms. The return value is a result indicating whether the new value was
3081            /// written and containing the previous value.
3082            ///
3083            /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
3084            /// ordering of this operation. `success` describes the required ordering for the
3085            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
3086            /// `failure` describes the required ordering for the load operation that takes place when
3087            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
3088            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
3089            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3090            ///
3091            /// **Note**: This method is only available on platforms that support atomic operations on
3092            #[doc = concat!("[`", $s_int_type, "`].")]
3093            ///
3094            /// # Examples
3095            ///
3096            /// ```
3097            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3098            ///
3099            #[doc = concat!("let val = ", stringify!($atomic_type), "::new(4);")]
3100            ///
3101            /// let mut old = val.load(Ordering::Relaxed);
3102            /// loop {
3103            ///     let new = old * 2;
3104            ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
3105            ///         Ok(_) => break,
3106            ///         Err(x) => old = x,
3107            ///     }
3108            /// }
3109            /// ```
3110            ///
3111            /// # Considerations
3112            ///
3113            /// `compare_exchange_weak` is a [compare-and-swap operation] and thus exhibits the usual downsides
3114            /// of CAS operations. In particular, a load of the value followed by a successful
3115            /// `compare_exchange` with the previous load *does not ensure* that other threads have not
3116            /// changed the value in the interim. This is usually important when the *equality* check in
3117            /// the `compare_exchange` is being used to check the *identity* of a value, but equality
3118            /// does not necessarily imply identity. This is a particularly common case for pointers, as
3119            /// a pointer holding the same address does not imply that the same object exists at that
3120            /// address! In this case, `compare_exchange` can lead to the [ABA problem].
3121            ///
3122            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3123            /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3124            #[inline]
3125            #[$stable_cxchg]
3126            #[$cfg_cas]
3127            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3128            pub fn compare_exchange_weak(&self,
3129                                         current: $int_type,
3130                                         new: $int_type,
3131                                         success: Ordering,
3132                                         failure: Ordering) -> Result<$int_type, $int_type> {
3133                // SAFETY: data races are prevented by atomic intrinsics.
3134                unsafe {
3135                    atomic_compare_exchange_weak(self.v.get(), current, new, success, failure)
3136                }
3137            }
3138
3139            /// Adds to the current value, returning the previous value.
3140            ///
3141            /// This operation wraps around on overflow.
3142            ///
3143            /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering
3144            /// of this operation. All ordering modes are possible. Note that using
3145            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3146            /// using [`Release`] makes the load part [`Relaxed`].
3147            ///
3148            /// **Note**: This method is only available on platforms that support atomic operations on
3149            #[doc = concat!("[`", $s_int_type, "`].")]
3150            ///
3151            /// # Examples
3152            ///
3153            /// ```
3154            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3155            ///
3156            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0);")]
3157            /// assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
3158            /// assert_eq!(foo.load(Ordering::SeqCst), 10);
3159            /// ```
3160            #[inline]
3161            #[$stable]
3162            #[$cfg_cas]
3163            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3164            pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
3165                // SAFETY: data races are prevented by atomic intrinsics.
3166                unsafe { atomic_add(self.v.get(), val, order) }
3167            }
3168
3169            /// Subtracts from the current value, returning the previous value.
3170            ///
3171            /// This operation wraps around on overflow.
3172            ///
3173            /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
3174            /// of this operation. All ordering modes are possible. Note that using
3175            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3176            /// using [`Release`] makes the load part [`Relaxed`].
3177            ///
3178            /// **Note**: This method is only available on platforms that support atomic operations on
3179            #[doc = concat!("[`", $s_int_type, "`].")]
3180            ///
3181            /// # Examples
3182            ///
3183            /// ```
3184            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3185            ///
3186            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(20);")]
3187            /// assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
3188            /// assert_eq!(foo.load(Ordering::SeqCst), 10);
3189            /// ```
3190            #[inline]
3191            #[$stable]
3192            #[$cfg_cas]
3193            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3194            pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
3195                // SAFETY: data races are prevented by atomic intrinsics.
3196                unsafe { atomic_sub(self.v.get(), val, order) }
3197            }
3198
3199            /// Bitwise "and" with the current value.
3200            ///
3201            /// Performs a bitwise "and" operation on the current value and the argument `val`, and
3202            /// sets the new value to the result.
3203            ///
3204            /// Returns the previous value.
3205            ///
3206            /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
3207            /// of this operation. All ordering modes are possible. Note that using
3208            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3209            /// using [`Release`] makes the load part [`Relaxed`].
3210            ///
3211            /// **Note**: This method is only available on platforms that support atomic operations on
3212            #[doc = concat!("[`", $s_int_type, "`].")]
3213            ///
3214            /// # Examples
3215            ///
3216            /// ```
3217            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3218            ///
3219            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3220            /// assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
3221            /// assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3222            /// ```
3223            #[inline]
3224            #[$stable]
3225            #[$cfg_cas]
3226            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3227            pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
3228                // SAFETY: data races are prevented by atomic intrinsics.
3229                unsafe { atomic_and(self.v.get(), val, order) }
3230            }
3231
3232            /// Bitwise "nand" with the current value.
3233            ///
3234            /// Performs a bitwise "nand" operation on the current value and the argument `val`, and
3235            /// sets the new value to the result.
3236            ///
3237            /// Returns the previous value.
3238            ///
3239            /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
3240            /// of this operation. All ordering modes are possible. Note that using
3241            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3242            /// using [`Release`] makes the load part [`Relaxed`].
3243            ///
3244            /// **Note**: This method is only available on platforms that support atomic operations on
3245            #[doc = concat!("[`", $s_int_type, "`].")]
3246            ///
3247            /// # Examples
3248            ///
3249            /// ```
3250            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3251            ///
3252            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0x13);")]
3253            /// assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
3254            /// assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
3255            /// ```
3256            #[inline]
3257            #[$stable_nand]
3258            #[$cfg_cas]
3259            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3260            pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
3261                // SAFETY: data races are prevented by atomic intrinsics.
3262                unsafe { atomic_nand(self.v.get(), val, order) }
3263            }
3264
3265            /// Bitwise "or" with the current value.
3266            ///
3267            /// Performs a bitwise "or" operation on the current value and the argument `val`, and
3268            /// sets the new value to the result.
3269            ///
3270            /// Returns the previous value.
3271            ///
3272            /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
3273            /// of this operation. All ordering modes are possible. Note that using
3274            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3275            /// using [`Release`] makes the load part [`Relaxed`].
3276            ///
3277            /// **Note**: This method is only available on platforms that support atomic operations on
3278            #[doc = concat!("[`", $s_int_type, "`].")]
3279            ///
3280            /// # Examples
3281            ///
3282            /// ```
3283            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3284            ///
3285            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3286            /// assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
3287            /// assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3288            /// ```
3289            #[inline]
3290            #[$stable]
3291            #[$cfg_cas]
3292            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3293            pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
3294                // SAFETY: data races are prevented by atomic intrinsics.
3295                unsafe { atomic_or(self.v.get(), val, order) }
3296            }
3297
3298            /// Bitwise "xor" with the current value.
3299            ///
3300            /// Performs a bitwise "xor" operation on the current value and the argument `val`, and
3301            /// sets the new value to the result.
3302            ///
3303            /// Returns the previous value.
3304            ///
3305            /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
3306            /// of this operation. All ordering modes are possible. Note that using
3307            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3308            /// using [`Release`] makes the load part [`Relaxed`].
3309            ///
3310            /// **Note**: This method is only available on platforms that support atomic operations on
3311            #[doc = concat!("[`", $s_int_type, "`].")]
3312            ///
3313            /// # Examples
3314            ///
3315            /// ```
3316            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3317            ///
3318            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3319            /// assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
3320            /// assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3321            /// ```
3322            #[inline]
3323            #[$stable]
3324            #[$cfg_cas]
3325            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3326            pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
3327                // SAFETY: data races are prevented by atomic intrinsics.
3328                unsafe { atomic_xor(self.v.get(), val, order) }
3329            }
3330
3331            /// Fetches the value, and applies a function to it that returns an optional
3332            /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3333            /// `Err(previous_value)`.
3334            ///
3335            /// Note: This may call the function multiple times if the value has been changed from other threads in
3336            /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
3337            /// only once to the stored value.
3338            ///
3339            /// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3340            /// The first describes the required ordering for when the operation finally succeeds while the second
3341            /// describes the required ordering for loads. These correspond to the success and failure orderings of
3342            #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
3343            /// respectively.
3344            ///
3345            /// Using [`Acquire`] as success ordering makes the store part
3346            /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3347            /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3348            ///
3349            /// **Note**: This method is only available on platforms that support atomic operations on
3350            #[doc = concat!("[`", $s_int_type, "`].")]
3351            ///
3352            /// # Considerations
3353            ///
3354            /// This method is not magic; it is not provided by the hardware, and does not act like a
3355            /// critical section or mutex.
3356            ///
3357            /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
3358            /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem]
3359            /// if this atomic integer is an index or more generally if knowledge of only the *bitwise value*
3360            /// of the atomic is not in and of itself sufficient to ensure any required preconditions.
3361            ///
3362            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3363            /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3364            ///
3365            /// # Examples
3366            ///
3367            /// ```rust
3368            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3369            ///
3370            #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
3371            /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3372            /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3373            /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3374            /// assert_eq!(x.load(Ordering::SeqCst), 9);
3375            /// ```
3376            #[inline]
3377            #[stable(feature = "no_more_cas", since = "1.45.0")]
3378            #[$cfg_cas]
3379            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3380            pub fn fetch_update<F>(&self,
3381                                   set_order: Ordering,
3382                                   fetch_order: Ordering,
3383                                   mut f: F) -> Result<$int_type, $int_type>
3384            where F: FnMut($int_type) -> Option<$int_type> {
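                // Standard CAS loop: re-run `f` on the freshly observed value after every
                // failed (possibly spurious) `compare_exchange_weak`, until the exchange
                // succeeds or `f` returns `None`.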
3385                let mut prev = self.load(fetch_order);
3386                while let Some(next) = f(prev) {
3387                    match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
3388                        x @ Ok(_) => return x,
3389                        Err(next_prev) => prev = next_prev
3390                    }
3391                }
3392                Err(prev)
3393            }
3394
3395            /// Fetches the value, and applies a function to it that returns an optional
3396            /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3397            /// `Err(previous_value)`.
3398            ///
3399            #[doc = concat!("See also: [`update`](`", stringify!($atomic_type), "::update`).")]
3400            ///
3401            /// Note: This may call the function multiple times if the value has been changed from other threads in
3402            /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
3403            /// only once to the stored value.
3404            ///
3405            /// `try_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3406            /// The first describes the required ordering for when the operation finally succeeds while the second
3407            /// describes the required ordering for loads. These correspond to the success and failure orderings of
3408            #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
3409            /// respectively.
3410            ///
3411            /// Using [`Acquire`] as success ordering makes the store part
3412            /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3413            /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3414            ///
3415            /// **Note**: This method is only available on platforms that support atomic operations on
3416            #[doc = concat!("[`", $s_int_type, "`].")]
3417            ///
3418            /// # Considerations
3419            ///
3420            /// This method is not magic; it is not provided by the hardware, and does not act like a
3421            /// critical section or mutex.
3422            ///
3423            /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
3424            /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem]
3425            /// if this atomic integer is an index or more generally if knowledge of only the *bitwise value*
3426            /// of the atomic is not in and of itself sufficient to ensure any required preconditions.
3427            ///
3428            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3429            /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3430            ///
3431            /// # Examples
3432            ///
3433            /// ```rust
3434            /// #![feature(atomic_try_update)]
3435            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3436            ///
3437            #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
3438            /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3439            /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3440            /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3441            /// assert_eq!(x.load(Ordering::SeqCst), 9);
3442            /// ```
3443            #[inline]
3444            #[unstable(feature = "atomic_try_update", issue = "135894")]
3445            #[$cfg_cas]
3446            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3447            pub fn try_update(
3448                &self,
3449                set_order: Ordering,
3450                fetch_order: Ordering,
3451                f: impl FnMut($int_type) -> Option<$int_type>,
3452            ) -> Result<$int_type, $int_type> {
3453                // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`;
3454                //      when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`.
3455                self.fetch_update(set_order, fetch_order, f)
3456            }
3457
3458            /// Fetches the value, and applies a function to it that returns a new value.
3459            /// The new value is stored and the old value is returned.
3460            ///
3461            #[doc = concat!("See also: [`try_update`](`", stringify!($atomic_type), "::try_update`).")]
3462            ///
3463            /// Note: This may call the function multiple times if the value has been changed from other threads in
3464            /// the meantime, but the function will have been applied only once to the stored value.
3465            ///
3466            /// `update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3467            /// The first describes the required ordering for when the operation finally succeeds while the second
3468            /// describes the required ordering for loads. These correspond to the success and failure orderings of
3469            #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
3470            /// respectively.
3471            ///
3472            /// Using [`Acquire`] as success ordering makes the store part
3473            /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3474            /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3475            ///
3476            /// **Note**: This method is only available on platforms that support atomic operations on
3477            #[doc = concat!("[`", $s_int_type, "`].")]
3478            ///
3479            /// # Considerations
3480            ///
3482            /// This method is not magic; it is not provided by the hardware, and does not act like a
3483            /// critical section or mutex.
3484            ///
3485            /// It is implemented on top of an atomic [compare-and-swap operation], and thus is subject to
3486            /// the usual drawbacks of CAS operations. In particular, be careful of the [ABA problem]
3487            /// if this atomic integer is an index or more generally if knowledge of only the *bitwise value*
3488            /// of the atomic is not in and of itself sufficient to ensure any required preconditions.
3489            ///
3490            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3491            /// [compare-and-swap operation]: https://en.wikipedia.org/wiki/Compare-and-swap
3492            ///
3493            /// # Examples
3494            ///
3495            /// ```rust
3496            /// #![feature(atomic_try_update)]
3497            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3498            ///
3499            #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
3500            /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| x + 1), 7);
3501            /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| x + 1), 8);
3502            /// assert_eq!(x.load(Ordering::SeqCst), 9);
3503            /// ```
3504            #[inline]
3505            #[unstable(feature = "atomic_try_update", issue = "135894")]
3506            #[$cfg_cas]
3507            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3508            pub fn update(
3509                &self,
3510                set_order: Ordering,
3511                fetch_order: Ordering,
3512                mut f: impl FnMut($int_type) -> $int_type,
3513            ) -> $int_type {
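                // CAS loop: retry with the latest observed value until the
                // `compare_exchange_weak` succeeds.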
3514                let mut prev = self.load(fetch_order);
3515                loop {
3516                    match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
3517                        Ok(x) => break x,
3518                        Err(next_prev) => prev = next_prev,
3519                    }
3520                }
3521            }
3522
3523            /// Maximum with the current value.
3524            ///
3525            /// Finds the maximum of the current value and the argument `val`, and
3526            /// sets the new value to the result.
3527            ///
3528            /// Returns the previous value.
3529            ///
3530            /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering
3531            /// of this operation. All ordering modes are possible. Note that using
3532            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3533            /// using [`Release`] makes the load part [`Relaxed`].
3534            ///
3535            /// **Note**: This method is only available on platforms that support atomic operations on
3536            #[doc = concat!("[`", $s_int_type, "`].")]
3537            ///
3538            /// # Examples
3539            ///
3540            /// ```
3541            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3542            ///
3543            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3544            /// assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
3545            /// assert_eq!(foo.load(Ordering::SeqCst), 42);
3546            /// ```
3547            ///
3548            /// If you want to obtain the maximum value in one step, you can use the following:
3549            ///
3550            /// ```
3551            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3552            ///
3553            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3554            /// let bar = 42;
3555            /// let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
3556            /// assert!(max_foo == 42);
3557            /// ```
3558            #[inline]
3559            #[stable(feature = "atomic_min_max", since = "1.45.0")]
3560            #[$cfg_cas]
3561            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3562            pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
3563                // SAFETY: data races are prevented by atomic intrinsics.
3564                unsafe { $max_fn(self.v.get(), val, order) }
3565            }
3566
3567            /// Minimum with the current value.
3568            ///
3569            /// Finds the minimum of the current value and the argument `val`, and
3570            /// sets the new value to the result.
3571            ///
3572            /// Returns the previous value.
3573            ///
3574            /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering
3575            /// of this operation. All ordering modes are possible. Note that using
3576            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3577            /// using [`Release`] makes the load part [`Relaxed`].
3578            ///
3579            /// **Note**: This method is only available on platforms that support atomic operations on
3580            #[doc = concat!("[`", $s_int_type, "`].")]
3581            ///
3582            /// # Examples
3583            ///
3584            /// ```
3585            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3586            ///
3587            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3588            /// assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
3589            /// assert_eq!(foo.load(Ordering::Relaxed), 23);
3590            /// assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
3591            /// assert_eq!(foo.load(Ordering::Relaxed), 22);
3592            /// ```
3593            ///
3594            /// If you want to obtain the minimum value in one step, you can use the following:
3595            ///
3596            /// ```
3597            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3598            ///
3599            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3600            /// let bar = 12;
3601            /// let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
3602            /// assert_eq!(min_foo, 12);
3603            /// ```
3604            #[inline]
3605            #[stable(feature = "atomic_min_max", since = "1.45.0")]
3606            #[$cfg_cas]
3607            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3608            pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
3609                // SAFETY: data races are prevented by atomic intrinsics.
3610                unsafe { $min_fn(self.v.get(), val, order) }
3611            }
3612
3613            /// Returns a mutable pointer to the underlying integer.
3614            ///
3615            /// Doing non-atomic reads and writes on the resulting integer can be a data race.
3616            /// This method is mostly useful for FFI, where the function signature may use
3617            #[doc = concat!("`*mut ", stringify!($int_type), "` instead of `&", stringify!($atomic_type), "`.")]
3618            ///
3619            /// Returning a `*mut` pointer from a shared reference to this atomic is safe because the
3620            /// atomic types work with interior mutability. All modifications of an atomic change the value
3621            /// through a shared reference, and can do so safely as long as they use atomic operations. Any
3622            /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
3623            /// restriction: operations on it must be atomic.
3624            ///
3625            /// # Examples
3626            ///
3627            /// ```ignore (extern-declaration)
3628            /// # fn main() {
3629            #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
3630            ///
3631            /// extern "C" {
3632            #[doc = concat!("    fn my_atomic_op(arg: *mut ", stringify!($int_type), ");")]
3633            /// }
3634            ///
3635            #[doc = concat!("let atomic = ", stringify!($atomic_type), "::new(1);")]
3636            ///
3637            /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
3638            /// unsafe {
3639            ///     my_atomic_op(atomic.as_ptr());
3640            /// }
3641            /// # }
3642            /// ```
3643            #[inline]
3644            #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
3645            #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
3646            #[rustc_never_returns_null_ptr]
3647            pub const fn as_ptr(&self) -> *mut $int_type {
3648                self.v.get()
3649            }
3650        }
3651    }
3652}
3653
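// Each `atomic_int!` invocation below passes, in order: the `cfg`s gating CAS support and
// equal-alignment layout, the stability and const-stability attributes, the rustc diagnostic
// item, the primitive's name as it appears in the docs, an extra `#![feature(...)]` line for
// doc examples (empty for stable types), the signed or unsigned min/max intrinsics, the
// type's alignment in bytes, and finally the primitive type together with its atomic type.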
3654#[cfg(target_has_atomic_load_store = "8")]
3655atomic_int! {
3656    cfg(target_has_atomic = "8"),
3657    cfg(target_has_atomic_equal_alignment = "8"),
3658    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3659    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3660    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3661    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3662    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3663    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3664    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3665    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3666    rustc_diagnostic_item = "AtomicI8",
3667    "i8",
3668    "",
3669    atomic_min, atomic_max,
3670    1,
3671    i8 AtomicI8
3672}
3673#[cfg(target_has_atomic_load_store = "8")]
3674atomic_int! {
3675    cfg(target_has_atomic = "8"),
3676    cfg(target_has_atomic_equal_alignment = "8"),
3677    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3678    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3679    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3680    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3681    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3682    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3683    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3684    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3685    rustc_diagnostic_item = "AtomicU8",
3686    "u8",
3687    "",
3688    atomic_umin, atomic_umax,
3689    1,
3690    u8 AtomicU8
3691}
3692#[cfg(target_has_atomic_load_store = "16")]
3693atomic_int! {
3694    cfg(target_has_atomic = "16"),
3695    cfg(target_has_atomic_equal_alignment = "16"),
3696    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3697    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3698    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3699    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3700    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3701    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3702    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3703    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3704    rustc_diagnostic_item = "AtomicI16",
3705    "i16",
3706    "",
3707    atomic_min, atomic_max,
3708    2,
3709    i16 AtomicI16
3710}
3711#[cfg(target_has_atomic_load_store = "16")]
3712atomic_int! {
3713    cfg(target_has_atomic = "16"),
3714    cfg(target_has_atomic_equal_alignment = "16"),
3715    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3716    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3717    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3718    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3719    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3720    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3721    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3722    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3723    rustc_diagnostic_item = "AtomicU16",
3724    "u16",
3725    "",
3726    atomic_umin, atomic_umax,
3727    2,
3728    u16 AtomicU16
3729}
3730#[cfg(target_has_atomic_load_store = "32")]
3731atomic_int! {
3732    cfg(target_has_atomic = "32"),
3733    cfg(target_has_atomic_equal_alignment = "32"),
3734    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3735    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3736    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3737    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3738    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3739    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3740    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3741    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3742    rustc_diagnostic_item = "AtomicI32",
3743    "i32",
3744    "",
3745    atomic_min, atomic_max,
3746    4,
3747    i32 AtomicI32
3748}
3749#[cfg(target_has_atomic_load_store = "32")]
3750atomic_int! {
3751    cfg(target_has_atomic = "32"),
3752    cfg(target_has_atomic_equal_alignment = "32"),
3753    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3754    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3755    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3756    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3757    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3758    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3759    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3760    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3761    rustc_diagnostic_item = "AtomicU32",
3762    "u32",
3763    "",
3764    atomic_umin, atomic_umax,
3765    4,
3766    u32 AtomicU32
3767}
3768#[cfg(target_has_atomic_load_store = "64")]
3769atomic_int! {
3770    cfg(target_has_atomic = "64"),
3771    cfg(target_has_atomic_equal_alignment = "64"),
3772    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3773    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3774    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3775    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3776    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3777    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3778    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3779    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3780    rustc_diagnostic_item = "AtomicI64",
3781    "i64",
3782    "",
3783    atomic_min, atomic_max,
3784    8,
3785    i64 AtomicI64
3786}
3787#[cfg(target_has_atomic_load_store = "64")]
3788atomic_int! {
3789    cfg(target_has_atomic = "64"),
3790    cfg(target_has_atomic_equal_alignment = "64"),
3791    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3792    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3793    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3794    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3795    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3796    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3797    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3798    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3799    rustc_diagnostic_item = "AtomicU64",
3800    "u64",
3801    "",
3802    atomic_umin, atomic_umax,
3803    8,
3804    u64 AtomicU64
3805}
3806#[cfg(target_has_atomic_load_store = "128")]
3807atomic_int! {
3808    cfg(target_has_atomic = "128"),
3809    cfg(target_has_atomic_equal_alignment = "128"),
3810    unstable(feature = "integer_atomics", issue = "99069"),
3811    unstable(feature = "integer_atomics", issue = "99069"),
3812    unstable(feature = "integer_atomics", issue = "99069"),
3813    unstable(feature = "integer_atomics", issue = "99069"),
3814    unstable(feature = "integer_atomics", issue = "99069"),
3815    unstable(feature = "integer_atomics", issue = "99069"),
3816    rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3817    rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3818    rustc_diagnostic_item = "AtomicI128",
3819    "i128",
3820    "#![feature(integer_atomics)]\n\n",
3821    atomic_min, atomic_max,
3822    16,
3823    i128 AtomicI128
3824}
3825#[cfg(target_has_atomic_load_store = "128")]
3826atomic_int! {
3827    cfg(target_has_atomic = "128"),
3828    cfg(target_has_atomic_equal_alignment = "128"),
3829    unstable(feature = "integer_atomics", issue = "99069"),
3830    unstable(feature = "integer_atomics", issue = "99069"),
3831    unstable(feature = "integer_atomics", issue = "99069"),
3832    unstable(feature = "integer_atomics", issue = "99069"),
3833    unstable(feature = "integer_atomics", issue = "99069"),
3834    unstable(feature = "integer_atomics", issue = "99069"),
3835    rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3836    rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3837    rustc_diagnostic_item = "AtomicU128",
3838    "u128",
3839    "#![feature(integer_atomics)]\n\n",
3840    atomic_umin, atomic_umax,
3841    16,
3842    u128 AtomicU128
3843}
3844
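// Instantiates `AtomicIsize`/`AtomicUsize` (and the deprecated `ATOMIC_ISIZE_INIT`/
// `ATOMIC_USIZE_INIT` constants) with the alignment matching the target's pointer width.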
3845#[cfg(target_has_atomic_load_store = "ptr")]
3846macro_rules! atomic_int_ptr_sized {
3847    ( $($target_pointer_width:literal $align:literal)* ) => { $(
3848        #[cfg(target_pointer_width = $target_pointer_width)]
3849        atomic_int! {
3850            cfg(target_has_atomic = "ptr"),
3851            cfg(target_has_atomic_equal_alignment = "ptr"),
3852            stable(feature = "rust1", since = "1.0.0"),
3853            stable(feature = "extended_compare_and_swap", since = "1.10.0"),
3854            stable(feature = "atomic_debug", since = "1.3.0"),
3855            stable(feature = "atomic_access", since = "1.15.0"),
3856            stable(feature = "atomic_from", since = "1.23.0"),
3857            stable(feature = "atomic_nand", since = "1.27.0"),
3858            rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
3859            rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3860            rustc_diagnostic_item = "AtomicIsize",
3861            "isize",
3862            "",
3863            atomic_min, atomic_max,
3864            $align,
3865            isize AtomicIsize
3866        }
3867        #[cfg(target_pointer_width = $target_pointer_width)]
3868        atomic_int! {
3869            cfg(target_has_atomic = "ptr"),
3870            cfg(target_has_atomic_equal_alignment = "ptr"),
3871            stable(feature = "rust1", since = "1.0.0"),
3872            stable(feature = "extended_compare_and_swap", since = "1.10.0"),
3873            stable(feature = "atomic_debug", since = "1.3.0"),
3874            stable(feature = "atomic_access", since = "1.15.0"),
3875            stable(feature = "atomic_from", since = "1.23.0"),
3876            stable(feature = "atomic_nand", since = "1.27.0"),
3877            rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
3878            rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3879            rustc_diagnostic_item = "AtomicUsize",
3880            "usize",
3881            "",
3882            atomic_umin, atomic_umax,
3883            $align,
3884            usize AtomicUsize
3885        }
3886
3887        /// An [`AtomicIsize`] initialized to `0`.
3888        #[cfg(target_pointer_width = $target_pointer_width)]
3889        #[stable(feature = "rust1", since = "1.0.0")]
3890        #[deprecated(
3891            since = "1.34.0",
3892            note = "the `new` function is now preferred",
3893            suggestion = "AtomicIsize::new(0)",
3894        )]
3895        pub const ATOMIC_ISIZE_INIT: AtomicIsize = AtomicIsize::new(0);
3896
3897        /// An [`AtomicUsize`] initialized to `0`.
3898        #[cfg(target_pointer_width = $target_pointer_width)]
3899        #[stable(feature = "rust1", since = "1.0.0")]
3900        #[deprecated(
3901            since = "1.34.0",
3902            note = "the `new` function is now preferred",
3903            suggestion = "AtomicUsize::new(0)",
3904        )]
3905        pub const ATOMIC_USIZE_INIT: AtomicUsize = AtomicUsize::new(0);
3906    )* };
3907}
3908
3909#[cfg(target_has_atomic_load_store = "ptr")]
3910atomic_int_ptr_sized! {
3911    "16" 2
3912    "32" 4
3913    "64" 8
3914}
3915
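/// Maps an [`Ordering`] to the strongest ordering that is valid for the failure (load) half of
/// a compare-exchange: the release part is dropped, since a failed exchange performs no store.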
3916#[inline]
3917#[cfg(target_has_atomic)]
3918fn strongest_failure_ordering(order: Ordering) -> Ordering {
3919    match order {
3920        Release => Relaxed,
3921        Relaxed => Relaxed,
3922        SeqCst => SeqCst,
3923        Acquire => Acquire,
3924        AcqRel => Acquire,
3925    }
3926}
3927
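/// Stores `val` into `*dst`. `Acquire` and `AcqRel` are not valid orderings for a store.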
3928#[inline]
3929#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3930unsafe fn atomic_store<T: Copy>(dst: *mut T, val: T, order: Ordering) {
3931    // SAFETY: the caller must uphold the safety contract for `atomic_store`.
3932    unsafe {
3933        match order {
3934            Relaxed => intrinsics::atomic_store::<T, { AO::Relaxed }>(dst, val),
3935            Release => intrinsics::atomic_store::<T, { AO::Release }>(dst, val),
3936            SeqCst => intrinsics::atomic_store::<T, { AO::SeqCst }>(dst, val),
3937            Acquire => panic!("there is no such thing as an acquire store"),
3938            AcqRel => panic!("there is no such thing as an acquire-release store"),
3939        }
3940    }
3941}
3942
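/// Loads the value from `*dst`. `Release` and `AcqRel` are not valid orderings for a load.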
3943#[inline]
3944#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3945unsafe fn atomic_load<T: Copy>(dst: *const T, order: Ordering) -> T {
3946    // SAFETY: the caller must uphold the safety contract for `atomic_load`.
3947    unsafe {
3948        match order {
3949            Relaxed => intrinsics::atomic_load::<T, { AO::Relaxed }>(dst),
3950            Acquire => intrinsics::atomic_load::<T, { AO::Acquire }>(dst),
3951            SeqCst => intrinsics::atomic_load::<T, { AO::SeqCst }>(dst),
3952            Release => panic!("there is no such thing as a release load"),
3953            AcqRel => panic!("there is no such thing as an acquire-release load"),
3954        }
3955    }
3956}
3957
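/// Stores `val` into `*dst`, returning the previous value.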
3958#[inline]
3959#[cfg(target_has_atomic)]
3960#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3961unsafe fn atomic_swap<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3962    // SAFETY: the caller must uphold the safety contract for `atomic_swap`.
3963    unsafe {
3964        match order {
3965            Relaxed => intrinsics::atomic_xchg::<T, { AO::Relaxed }>(dst, val),
3966            Acquire => intrinsics::atomic_xchg::<T, { AO::Acquire }>(dst, val),
3967            Release => intrinsics::atomic_xchg::<T, { AO::Release }>(dst, val),
3968            AcqRel => intrinsics::atomic_xchg::<T, { AO::AcqRel }>(dst, val),
3969            SeqCst => intrinsics::atomic_xchg::<T, { AO::SeqCst }>(dst, val),
3970        }
3971    }
3972}
3973
3974/// Returns the previous value (like __sync_fetch_and_add).
3975#[inline]
3976#[cfg(target_has_atomic)]
3977#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3978unsafe fn atomic_add<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3979    // SAFETY: the caller must uphold the safety contract for `atomic_add`.
3980    unsafe {
3981        match order {
3982            Relaxed => intrinsics::atomic_xadd::<T, { AO::Relaxed }>(dst, val),
3983            Acquire => intrinsics::atomic_xadd::<T, { AO::Acquire }>(dst, val),
3984            Release => intrinsics::atomic_xadd::<T, { AO::Release }>(dst, val),
3985            AcqRel => intrinsics::atomic_xadd::<T, { AO::AcqRel }>(dst, val),
3986            SeqCst => intrinsics::atomic_xadd::<T, { AO::SeqCst }>(dst, val),
3987        }
3988    }
3989}
3990
3991/// Returns the previous value (like __sync_fetch_and_sub).
3992#[inline]
3993#[cfg(target_has_atomic)]
3994#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3995unsafe fn atomic_sub<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3996    // SAFETY: the caller must uphold the safety contract for `atomic_sub`.
3997    unsafe {
3998        match order {
3999            Relaxed => intrinsics::atomic_xsub::<T, { AO::Relaxed }>(dst, val),
4000            Acquire => intrinsics::atomic_xsub::<T, { AO::Acquire }>(dst, val),
4001            Release => intrinsics::atomic_xsub::<T, { AO::Release }>(dst, val),
4002            AcqRel => intrinsics::atomic_xsub::<T, { AO::AcqRel }>(dst, val),
4003            SeqCst => intrinsics::atomic_xsub::<T, { AO::SeqCst }>(dst, val),
4004        }
4005    }
4006}
4007
4008/// Publicly exposed for stdarch; nobody else should use this.
4009#[inline]
4010#[cfg(target_has_atomic)]
4011#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4012#[unstable(feature = "core_intrinsics", issue = "none")]
4013#[doc(hidden)]
4014pub unsafe fn atomic_compare_exchange<T: Copy>(
4015    dst: *mut T,
4016    old: T,
4017    new: T,
4018    success: Ordering,
4019    failure: Ordering,
4020) -> Result<T, T> {
4021    // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange`.
4022    let (val, ok) = unsafe {
4023        match (success, failure) {
4024            (Relaxed, Relaxed) => {
4025                intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::Relaxed }>(dst, old, new)
4026            }
4027            (Relaxed, Acquire) => {
4028                intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::Acquire }>(dst, old, new)
4029            }
4030            (Relaxed, SeqCst) => {
4031                intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::SeqCst }>(dst, old, new)
4032            }
4033            (Acquire, Relaxed) => {
4034                intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::Relaxed }>(dst, old, new)
4035            }
4036            (Acquire, Acquire) => {
4037                intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::Acquire }>(dst, old, new)
4038            }
4039            (Acquire, SeqCst) => {
4040                intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::SeqCst }>(dst, old, new)
4041            }
4042            (Release, Relaxed) => {
4043                intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::Relaxed }>(dst, old, new)
4044            }
4045            (Release, Acquire) => {
4046                intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::Acquire }>(dst, old, new)
4047            }
4048            (Release, SeqCst) => {
4049                intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::SeqCst }>(dst, old, new)
4050            }
4051            (AcqRel, Relaxed) => {
4052                intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::Relaxed }>(dst, old, new)
4053            }
4054            (AcqRel, Acquire) => {
4055                intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::Acquire }>(dst, old, new)
4056            }
4057            (AcqRel, SeqCst) => {
4058                intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::SeqCst }>(dst, old, new)
4059            }
4060            (SeqCst, Relaxed) => {
4061                intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::Relaxed }>(dst, old, new)
4062            }
4063            (SeqCst, Acquire) => {
4064                intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::Acquire }>(dst, old, new)
4065            }
4066            (SeqCst, SeqCst) => {
4067                intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::SeqCst }>(dst, old, new)
4068            }
4069            (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
4070            (_, Release) => panic!("there is no such thing as a release failure ordering"),
4071        }
4072    };
4073    if ok { Ok(val) } else { Err(val) }
4074}
4075
4076#[inline]
4077#[cfg(target_has_atomic)]
4078#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4079unsafe fn atomic_compare_exchange_weak<T: Copy>(
4080    dst: *mut T,
4081    old: T,
4082    new: T,
4083    success: Ordering,
4084    failure: Ordering,
4085) -> Result<T, T> {
4086    // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange_weak`.
4087    let (val, ok) = unsafe {
4088        match (success, failure) {
4089            (Relaxed, Relaxed) => {
4090                intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::Relaxed }>(dst, old, new)
4091            }
4092            (Relaxed, Acquire) => {
4093                intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::Acquire }>(dst, old, new)
4094            }
4095            (Relaxed, SeqCst) => {
4096                intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::SeqCst }>(dst, old, new)
4097            }
4098            (Acquire, Relaxed) => {
4099                intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::Relaxed }>(dst, old, new)
4100            }
4101            (Acquire, Acquire) => {
4102                intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::Acquire }>(dst, old, new)
4103            }
4104            (Acquire, SeqCst) => {
4105                intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::SeqCst }>(dst, old, new)
4106            }
4107            (Release, Relaxed) => {
4108                intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::Relaxed }>(dst, old, new)
4109            }
4110            (Release, Acquire) => {
4111                intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::Acquire }>(dst, old, new)
4112            }
4113            (Release, SeqCst) => {
4114                intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::SeqCst }>(dst, old, new)
4115            }
4116            (AcqRel, Relaxed) => {
4117                intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::Relaxed }>(dst, old, new)
4118            }
4119            (AcqRel, Acquire) => {
4120                intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::Acquire }>(dst, old, new)
4121            }
4122            (AcqRel, SeqCst) => {
4123                intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::SeqCst }>(dst, old, new)
4124            }
4125            (SeqCst, Relaxed) => {
4126                intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::Relaxed }>(dst, old, new)
4127            }
4128            (SeqCst, Acquire) => {
4129                intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::Acquire }>(dst, old, new)
4130            }
4131            (SeqCst, SeqCst) => {
4132                intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::SeqCst }>(dst, old, new)
4133            }
4134            (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
4135            (_, Release) => panic!("there is no such thing as a release failure ordering"),
4136        }
4137    };
4138    if ok { Ok(val) } else { Err(val) }
4139}
4140
4141#[inline]
4142#[cfg(target_has_atomic)]
4143#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4144unsafe fn atomic_and<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4145    // SAFETY: the caller must uphold the safety contract for `atomic_and`
4146    unsafe {
4147        match order {
4148            Relaxed => intrinsics::atomic_and::<T, { AO::Relaxed }>(dst, val),
4149            Acquire => intrinsics::atomic_and::<T, { AO::Acquire }>(dst, val),
4150            Release => intrinsics::atomic_and::<T, { AO::Release }>(dst, val),
4151            AcqRel => intrinsics::atomic_and::<T, { AO::AcqRel }>(dst, val),
4152            SeqCst => intrinsics::atomic_and::<T, { AO::SeqCst }>(dst, val),
4153        }
4154    }
4155}
4156
4157#[inline]
4158#[cfg(target_has_atomic)]
4159#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4160unsafe fn atomic_nand<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4161    // SAFETY: the caller must uphold the safety contract for `atomic_nand`
4162    unsafe {
4163        match order {
4164            Relaxed => intrinsics::atomic_nand::<T, { AO::Relaxed }>(dst, val),
4165            Acquire => intrinsics::atomic_nand::<T, { AO::Acquire }>(dst, val),
4166            Release => intrinsics::atomic_nand::<T, { AO::Release }>(dst, val),
4167            AcqRel => intrinsics::atomic_nand::<T, { AO::AcqRel }>(dst, val),
4168            SeqCst => intrinsics::atomic_nand::<T, { AO::SeqCst }>(dst, val),
4169        }
4170    }
4171}
4172
4173#[inline]
4174#[cfg(target_has_atomic)]
4175#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4176unsafe fn atomic_or<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4177    // SAFETY: the caller must uphold the safety contract for `atomic_or`
4178    unsafe {
4179        match order {
4180            SeqCst => intrinsics::atomic_or::<T, { AO::SeqCst }>(dst, val),
4181            Acquire => intrinsics::atomic_or::<T, { AO::Acquire }>(dst, val),
4182            Release => intrinsics::atomic_or::<T, { AO::Release }>(dst, val),
4183            AcqRel => intrinsics::atomic_or::<T, { AO::AcqRel }>(dst, val),
4184            Relaxed => intrinsics::atomic_or::<T, { AO::Relaxed }>(dst, val),
4185        }
4186    }
4187}
4188
4189#[inline]
4190#[cfg(target_has_atomic)]
4191#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4192unsafe fn atomic_xor<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4193    // SAFETY: the caller must uphold the safety contract for `atomic_xor`
4194    unsafe {
4195        match order {
4196            SeqCst => intrinsics::atomic_xor::<T, { AO::SeqCst }>(dst, val),
4197            Acquire => intrinsics::atomic_xor::<T, { AO::Acquire }>(dst, val),
4198            Release => intrinsics::atomic_xor::<T, { AO::Release }>(dst, val),
4199            AcqRel => intrinsics::atomic_xor::<T, { AO::AcqRel }>(dst, val),
4200            Relaxed => intrinsics::atomic_xor::<T, { AO::Relaxed }>(dst, val),
4201        }
4202    }
4203}
4204
4205/// Updates `*dst` to the max value of `val` and the old value (signed comparison)
4206#[inline]
4207#[cfg(target_has_atomic)]
4208#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4209unsafe fn atomic_max<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4210    // SAFETY: the caller must uphold the safety contract for `atomic_max`
4211    unsafe {
4212        match order {
4213            Relaxed => intrinsics::atomic_max::<T, { AO::Relaxed }>(dst, val),
4214            Acquire => intrinsics::atomic_max::<T, { AO::Acquire }>(dst, val),
4215            Release => intrinsics::atomic_max::<T, { AO::Release }>(dst, val),
4216            AcqRel => intrinsics::atomic_max::<T, { AO::AcqRel }>(dst, val),
4217            SeqCst => intrinsics::atomic_max::<T, { AO::SeqCst }>(dst, val),
4218        }
4219    }
4220}
4221
4222/// Updates `*dst` to the min value of `val` and the old value (signed comparison)
4223#[inline]
4224#[cfg(target_has_atomic)]
4225#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4226unsafe fn atomic_min<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4227    // SAFETY: the caller must uphold the safety contract for `atomic_min`
4228    unsafe {
4229        match order {
4230            Relaxed => intrinsics::atomic_min::<T, { AO::Relaxed }>(dst, val),
4231            Acquire => intrinsics::atomic_min::<T, { AO::Acquire }>(dst, val),
4232            Release => intrinsics::atomic_min::<T, { AO::Release }>(dst, val),
4233            AcqRel => intrinsics::atomic_min::<T, { AO::AcqRel }>(dst, val),
4234            SeqCst => intrinsics::atomic_min::<T, { AO::SeqCst }>(dst, val),
4235        }
4236    }
4237}
4238
4239/// Updates `*dst` to the max value of `val` and the old value (unsigned comparison)
4240#[inline]
4241#[cfg(target_has_atomic)]
4242#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4243unsafe fn atomic_umax<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4244    // SAFETY: the caller must uphold the safety contract for `atomic_umax`
4245    unsafe {
4246        match order {
4247            Relaxed => intrinsics::atomic_umax::<T, { AO::Relaxed }>(dst, val),
4248            Acquire => intrinsics::atomic_umax::<T, { AO::Acquire }>(dst, val),
4249            Release => intrinsics::atomic_umax::<T, { AO::Release }>(dst, val),
4250            AcqRel => intrinsics::atomic_umax::<T, { AO::AcqRel }>(dst, val),
4251            SeqCst => intrinsics::atomic_umax::<T, { AO::SeqCst }>(dst, val),
4252        }
4253    }
4254}
4255
4256/// Updates `*dst` to the min value of `val` and the old value (unsigned comparison)
4257#[inline]
4258#[cfg(target_has_atomic)]
4259#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4260unsafe fn atomic_umin<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4261    // SAFETY: the caller must uphold the safety contract for `atomic_umin`
4262    unsafe {
4263        match order {
4264            Relaxed => intrinsics::atomic_umin::<T, { AO::Relaxed }>(dst, val),
4265            Acquire => intrinsics::atomic_umin::<T, { AO::Acquire }>(dst, val),
4266            Release => intrinsics::atomic_umin::<T, { AO::Release }>(dst, val),
4267            AcqRel => intrinsics::atomic_umin::<T, { AO::AcqRel }>(dst, val),
4268            SeqCst => intrinsics::atomic_umin::<T, { AO::SeqCst }>(dst, val),
4269        }
4270    }
4271}
4272
4273/// An atomic fence.
4274///
4275/// Fences create synchronization between themselves and atomic operations or fences in other
4276/// threads. To achieve this, a fence prevents the compiler and CPU from reordering certain types of
4277/// memory operations around it.
4278///
4279/// A fence 'A' with (at least) [`Release`] ordering semantics synchronizes
4280/// with a fence 'B' with (at least) [`Acquire`] semantics if and only if there
4281/// exist operations X and Y, both operating on some atomic object 'm', such
4282/// that A is sequenced before X, Y is sequenced before B, and Y observes
4283/// the change to m. This provides a happens-before dependence between A and B.
4284///
4285/// ```text
4286///     Thread 1                                          Thread 2
4287///
4288/// fence(Release);      A --------------
4289/// m.store(3, Relaxed); X ---------    |
4290///                                |    |
4291///                                |    |
4292///                                -------------> Y  if m.load(Relaxed) == 3 {
4293///                                     |-------> B      fence(Acquire);
4294///                                                      ...
4295///                                                  }
4296/// ```
4297///
4298/// Note that in the example above, it is crucial that the accesses to `m` are atomic. Fences cannot
4299/// be used to establish synchronization among non-atomic accesses in different threads. However,
4300/// thanks to the happens-before relationship between A and B, any non-atomic accesses that
4301/// happen-before A are now also properly synchronized with any non-atomic accesses that
4302/// happen-after B.
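///
/// A minimal sketch of the pattern in the diagram above (both "threads" run sequentially
/// here, purely for illustration):
///
/// ```
/// use std::sync::atomic::{fence, AtomicUsize, Ordering};
///
/// static DATA: AtomicUsize = AtomicUsize::new(0);
/// static M: AtomicUsize = AtomicUsize::new(0);
///
/// // Thread 1:
/// DATA.store(42, Ordering::Relaxed);
/// fence(Ordering::Release);            // A
/// M.store(3, Ordering::Relaxed);       // X
///
/// // Thread 2:
/// if M.load(Ordering::Relaxed) == 3 {  // Y
///     fence(Ordering::Acquire);        // B
///     // A happens-before B, so the write to DATA is visible here.
///     assert_eq!(DATA.load(Ordering::Relaxed), 42);
/// }
/// ```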
4303///
4304/// Atomic operations with [`Release`] or [`Acquire`] semantics can also synchronize
4305/// with a fence.
4306///
4307/// A fence which has [`SeqCst`] ordering, in addition to having both [`Acquire`]
4308/// and [`Release`] semantics, participates in the global program order of the
4309/// other [`SeqCst`] operations and/or fences.
4310///
4311/// Accepts [`Acquire`], [`Release`], [`AcqRel`] and [`SeqCst`] orderings.
4312///
4313/// # Panics
4314///
4315/// Panics if `order` is [`Relaxed`].
4316///
4317/// # Examples
4318///
4319/// ```
4320/// use std::sync::atomic::AtomicBool;
4321/// use std::sync::atomic::fence;
4322/// use std::sync::atomic::Ordering;
4323///
4324/// // A mutual exclusion primitive based on spinlock.
4325/// pub struct Mutex {
4326///     flag: AtomicBool,
4327/// }
4328///
4329/// impl Mutex {
4330///     pub fn new() -> Mutex {
4331///         Mutex {
4332///             flag: AtomicBool::new(false),
4333///         }
4334///     }
4335///
4336///     pub fn lock(&self) {
4337///         // Wait until the old value is `false`.
4338///         while self
4339///             .flag
4340///             .compare_exchange_weak(false, true, Ordering::Relaxed, Ordering::Relaxed)
4341///             .is_err()
4342///         {}
4343///         // This fence synchronizes-with store in `unlock`.
4344///         fence(Ordering::Acquire);
4345///     }
4346///
4347///     pub fn unlock(&self) {
4348///         self.flag.store(false, Ordering::Release);
4349///     }
4350/// }
4351/// ```
4352#[inline]
4353#[stable(feature = "rust1", since = "1.0.0")]
4354#[rustc_diagnostic_item = "fence"]
4355#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4356pub fn fence(order: Ordering) {
4357    // SAFETY: using an atomic fence is safe.
4358    unsafe {
4359        match order {
4360            Acquire => intrinsics::atomic_fence::<{ AO::Acquire }>(),
4361            Release => intrinsics::atomic_fence::<{ AO::Release }>(),
4362            AcqRel => intrinsics::atomic_fence::<{ AO::AcqRel }>(),
4363            SeqCst => intrinsics::atomic_fence::<{ AO::SeqCst }>(),
4364            Relaxed => panic!("there is no such thing as a relaxed fence"),
4365        }
4366    }
4367}
4368
4369/// A "compiler-only" atomic fence.
4370///
4371/// Like [`fence`], this function establishes synchronization with other atomic operations and
4372/// fences. However, unlike [`fence`], `compiler_fence` only establishes synchronization with
4373/// operations *in the same thread*. This may at first sound rather useless, since code within a
4374/// thread is typically already totally ordered and does not need any further synchronization.
4375/// However, there are cases where code can run on the same thread without being ordered:
4376/// - The most common case is that of a *signal handler*: a signal handler runs in the same thread
4377///   as the code it interrupted, but it is not ordered with respect to that code. `compiler_fence`
4378///   can be used to establish synchronization between a thread and its signal handler, the same way
4379///   that `fence` can be used to establish synchronization across threads.
4380/// - Similar situations can arise in embedded programming with interrupt handlers, or in custom
4381///   implementations of preemptive green threads. In general, `compiler_fence` can establish
4382///   synchronization with code that is guaranteed to run on the same hardware CPU.
4383///
4384/// See [`fence`] for how a fence can be used to achieve synchronization. Note that just like
4385/// [`fence`], synchronization still requires atomic operations to be used in both threads -- it is
4386/// not possible to perform synchronization entirely with fences and non-atomic operations.
4387///
4388/// `compiler_fence` does not emit any machine code, but restricts the kinds of memory re-ordering
4389/// the compiler is allowed to do. `compiler_fence` corresponds to [`atomic_signal_fence`] in C and
4390/// C++.
4391///
4392/// [`atomic_signal_fence`]: https://en.cppreference.com/w/cpp/atomic/atomic_signal_fence
4393///
4394/// # Panics
4395///
4396/// Panics if `order` is [`Relaxed`].
4397///
4398/// # Examples
4399///
4400/// Without the two `compiler_fence` calls, the read of `IMPORTANT_VARIABLE` in `signal_handler`
4401/// is *undefined behavior* due to a data race, despite everything happening in a single thread.
4402/// This is because the signal handler is considered to run concurrently with its associated
4403/// thread, and explicit synchronization is required to pass data between a thread and its
4404/// signal handler. The code below uses two `compiler_fence` calls to establish the usual
4405/// release-acquire synchronization pattern (see [`fence`] for an image).
4406///
4407/// ```
4408/// use std::sync::atomic::AtomicBool;
4409/// use std::sync::atomic::Ordering;
4410/// use std::sync::atomic::compiler_fence;
4411///
4412/// static mut IMPORTANT_VARIABLE: usize = 0;
4413/// static IS_READY: AtomicBool = AtomicBool::new(false);
4414///
4415/// fn main() {
4416///     unsafe { IMPORTANT_VARIABLE = 42 };
4417///     // Marks earlier writes as being released with future relaxed stores.
4418///     compiler_fence(Ordering::Release);
4419///     IS_READY.store(true, Ordering::Relaxed);
4420/// }
4421///
4422/// fn signal_handler() {
4423///     if IS_READY.load(Ordering::Relaxed) {
4424///         // Acquires writes that were released with relaxed stores that we read from.
4425///         compiler_fence(Ordering::Acquire);
4426///         assert_eq!(unsafe { IMPORTANT_VARIABLE }, 42);
4427///     }
4428/// }
4429/// ```
4430#[inline]
4431#[stable(feature = "compiler_fences", since = "1.21.0")]
4432#[rustc_diagnostic_item = "compiler_fence"]
4433#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4434pub fn compiler_fence(order: Ordering) {
4435    // SAFETY: using an atomic fence is safe.
4436    unsafe {
4437        match order {
4438            Acquire => intrinsics::atomic_singlethreadfence::<{ AO::Acquire }>(),
4439            Release => intrinsics::atomic_singlethreadfence::<{ AO::Release }>(),
4440            AcqRel => intrinsics::atomic_singlethreadfence::<{ AO::AcqRel }>(),
4441            SeqCst => intrinsics::atomic_singlethreadfence::<{ AO::SeqCst }>(),
4442            Relaxed => panic!("there is no such thing as a relaxed fence"),
4443        }
4444    }
4445}
4446
4447#[cfg(target_has_atomic_load_store = "8")]
4448#[stable(feature = "atomic_debug", since = "1.3.0")]
4449impl fmt::Debug for AtomicBool {
4450    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
4451        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
4452    }
4453}
4454
4455#[cfg(target_has_atomic_load_store = "ptr")]
4456#[stable(feature = "atomic_debug", since = "1.3.0")]
4457impl<T> fmt::Debug for AtomicPtr<T> {
4458    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
4459        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
4460    }
4461}
4462
4463#[cfg(target_has_atomic_load_store = "ptr")]
4464#[stable(feature = "atomic_pointer", since = "1.24.0")]
4465impl<T> fmt::Pointer for AtomicPtr<T> {
4466    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
4467        fmt::Pointer::fmt(&self.load(Ordering::Relaxed), f)
4468    }
4469}
4470
4471/// Signals the processor that it is inside a busy-wait spin-loop ("spin lock").
4472///
4473/// This function is deprecated in favor of [`hint::spin_loop`].
4474///
4475/// [`hint::spin_loop`]: crate::hint::spin_loop
4476#[inline]
4477#[stable(feature = "spin_loop_hint", since = "1.24.0")]
4478#[deprecated(since = "1.51.0", note = "use hint::spin_loop instead")]
4479pub fn spin_loop_hint() {
4480    spin_loop()
4481}