
1//! Atomic types
2//!
3//! Atomic types provide primitive shared-memory communication between
4//! threads, and are the building blocks of other concurrent
5//! types.
6//!
7//! This module defines atomic versions of a select number of primitive
8//! types, including [`AtomicBool`], [`AtomicIsize`], [`AtomicUsize`],
9//! [`AtomicI8`], [`AtomicU16`], etc.
10//! Atomic types present operations that, when used correctly, synchronize
11//! updates between threads.
12//!
13//! Atomic variables are safe to share between threads (they implement [`Sync`])
14//! but they do not themselves provide the mechanism for sharing and follow the
15//! [threading model](../../../std/thread/index.html#the-threading-model) of Rust.
16//! The most common way to share an atomic variable is to put it into an [`Arc`][arc] (an
17//! atomically-reference-counted shared pointer).
18//!
19//! [arc]: ../../../std/sync/struct.Arc.html
20//!
21//! Atomic types may be stored in static variables, initialized using
22//! the constant initializers like [`AtomicBool::new`]. Atomic statics
23//! are often used for lazy global initialization.
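//!
//! For instance, a `static` atomic can serve as a simple one-shot initialization flag
//! (a minimal sketch; the names are illustrative, and real lazy initialization would more
//! commonly go through `std::sync::Once` or `LazyLock`):
//!
//! ```
//! use std::sync::atomic::{AtomicBool, Ordering};
//!
//! static INITIALIZED: AtomicBool = AtomicBool::new(false);
//!
//! fn init_once() {
//!     // `swap` returns the previous value, so only the first caller observes `false`
//!     // and runs the one-time setup.
//!     if !INITIALIZED.swap(true, Ordering::AcqRel) {
//!         // ... perform one-time setup here ...
//!     }
//! }
//! # init_once();
//! ```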
24//!
25//! ## Memory model for atomic accesses
26//!
27//! Rust atomics currently follow the same rules as [C++20 atomics][cpp], specifically the rules
28//! from the [`intro.races`][cpp-intro.races] section, without the "consume" memory ordering. Since
29//! C++ uses an object-based memory model whereas Rust is access-based, a bit of translation work
30//! has to be done to apply the C++ rules to Rust: whenever C++ talks about "the value of an
31//! object", we understand that to mean the resulting bytes obtained when doing a read. When the C++
32//! standard talks about "the value of an atomic object", this refers to the result of doing an
33//! atomic load (via the operations provided in this module). A "modification of an atomic object"
34//! refers to an atomic store.
35//!
36//! The end result is *almost* equivalent to saying that creating a *shared reference* to one of the
37//! Rust atomic types corresponds to creating an `atomic_ref` in C++, with the `atomic_ref` being
38//! destroyed when the lifetime of the shared reference ends. The main difference is that Rust
//! permits concurrent atomic and non-atomic reads to the same memory; those cause no issue in the
//! C++ memory model itself, and are only forbidden in C++ because memory is partitioned into
//! "atomic objects" and "non-atomic objects" (with `atomic_ref` temporarily converting a
//! non-atomic object into an atomic object).
43//!
44//! The most important aspect of this model is that *data races* are undefined behavior. A data race
45//! is defined as conflicting non-synchronized accesses where at least one of the accesses is
46//! non-atomic. Here, accesses are *conflicting* if they affect overlapping regions of memory and at
47//! least one of them is a write. (A `compare_exchange` or `compare_exchange_weak` that does not
48//! succeed is not considered a write.) They are *non-synchronized* if neither of them
49//! *happens-before* the other, according to the happens-before order of the memory model.
50//!
//! The other possible cause of undefined behavior in the memory model is mixed-size accesses: Rust
52//! inherits the C++ limitation that non-synchronized conflicting atomic accesses may not partially
53//! overlap. In other words, every pair of non-synchronized atomic accesses must be either disjoint,
54//! access the exact same memory (including using the same access size), or both be reads.
55//!
56//! Each atomic access takes an [`Ordering`] which defines how the operation interacts with the
57//! happens-before order. These orderings behave the same as the corresponding [C++20 atomic
58//! orderings][cpp_memory_order]. For more information, see the [nomicon].
59//!
60//! [cpp]: https://en.cppreference.com/w/cpp/atomic
61//! [cpp-intro.races]: https://timsong-cpp.github.io/cppwp/n4868/intro.multithread#intro.races
62//! [cpp_memory_order]: https://en.cppreference.com/w/cpp/atomic/memory_order
63//! [nomicon]: ../../../nomicon/atomics.html
64//!
65//! ```rust,no_run undefined_behavior
66//! use std::sync::atomic::{AtomicU16, AtomicU8, Ordering};
67//! use std::mem::transmute;
68//! use std::thread;
69//!
70//! let atomic = AtomicU16::new(0);
71//!
72//! thread::scope(|s| {
73//!     // This is UB: conflicting non-synchronized accesses, at least one of which is non-atomic.
74//!     s.spawn(|| atomic.store(1, Ordering::Relaxed)); // atomic store
75//!     s.spawn(|| unsafe { atomic.as_ptr().write(2) }); // non-atomic write
76//! });
77//!
78//! thread::scope(|s| {
79//!     // This is fine: the accesses do not conflict (as none of them performs any modification).
80//!     // In C++ this would be disallowed since creating an `atomic_ref` precludes
81//!     // further non-atomic accesses, but Rust does not have that limitation.
82//!     s.spawn(|| atomic.load(Ordering::Relaxed)); // atomic load
83//!     s.spawn(|| unsafe { atomic.as_ptr().read() }); // non-atomic read
84//! });
85//!
86//! thread::scope(|s| {
87//!     // This is fine: `join` synchronizes the code in a way such that the atomic
88//!     // store happens-before the non-atomic write.
89//!     let handle = s.spawn(|| atomic.store(1, Ordering::Relaxed)); // atomic store
90//!     handle.join().expect("thread won't panic"); // synchronize
91//!     s.spawn(|| unsafe { atomic.as_ptr().write(2) }); // non-atomic write
92//! });
93//!
94//! thread::scope(|s| {
95//!     // This is UB: non-synchronized conflicting differently-sized atomic accesses.
96//!     s.spawn(|| atomic.store(1, Ordering::Relaxed));
97//!     s.spawn(|| unsafe {
98//!         let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
99//!         differently_sized.store(2, Ordering::Relaxed);
100//!     });
101//! });
102//!
103//! thread::scope(|s| {
104//!     // This is fine: `join` synchronizes the code in a way such that
105//!     // the 1-byte store happens-before the 2-byte store.
106//!     let handle = s.spawn(|| atomic.store(1, Ordering::Relaxed));
107//!     handle.join().expect("thread won't panic");
108//!     s.spawn(|| unsafe {
109//!         let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
110//!         differently_sized.store(2, Ordering::Relaxed);
111//!     });
112//! });
113//! ```
114//!
115//! # Portability
116//!
117//! All atomic types in this module are guaranteed to be [lock-free] if they're
118//! available. This means they don't internally acquire a global mutex. Atomic
119//! types and operations are not guaranteed to be wait-free. This means that
120//! operations like `fetch_or` may be implemented with a compare-and-swap loop.
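//!
//! Conceptually, such a read-modify-write operation behaves like the following
//! compare-and-swap loop (a sketch of the idea, not necessarily how the standard library
//! implements it on any particular target):
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! // Roughly what `fetch_or` may expand to on targets without a native atomic OR.
//! fn fetch_or_via_cas(a: &AtomicUsize, val: usize, order: Ordering) -> usize {
//!     let mut old = a.load(Ordering::Relaxed);
//!     loop {
//!         match a.compare_exchange_weak(old, old | val, order, Ordering::Relaxed) {
//!             Ok(prev) => return prev,
//!             Err(prev) => old = prev,
//!         }
//!     }
//! }
//!
//! let x = AtomicUsize::new(0b0101);
//! assert_eq!(fetch_or_via_cas(&x, 0b0011, Ordering::Relaxed), 0b0101);
//! assert_eq!(x.load(Ordering::Relaxed), 0b0111);
//! ```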
121//!
//! Atomic operations may be implemented at the instruction layer with
//! larger-size atomics. For example, some platforms use 4-byte atomic
//! instructions to implement `AtomicI8`. Note that this emulation should not
//! affect the correctness of code; it's just something to be aware of.
126//!
//! The atomic types in this module might not be available on all platforms. The
//! atomic types here are all widely available, however, and can generally be
//! relied upon to exist. Some notable exceptions are:
130//!
131//! * PowerPC and MIPS platforms with 32-bit pointers do not have `AtomicU64` or
132//!   `AtomicI64` types.
//! * ARM platforms like `armv5te` that aren't targeting Linux only provide `load`
//!   and `store` operations, and do not support Compare and Swap (CAS)
//!   operations, such as `swap`, `fetch_add`, etc. Additionally, on Linux,
//!   these CAS operations are implemented via [operating system support], which
//!   may come with a performance penalty.
138//! * ARM targets with `thumbv6m` only provide `load` and `store` operations,
139//!   and do not support Compare and Swap (CAS) operations, such as `swap`,
140//!   `fetch_add`, etc.
141//!
142//! [operating system support]: https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt
143//!
144//! Note that future platforms may be added that also do not have support for
145//! some atomic operations. Maximally portable code will want to be careful
146//! about which atomic types are used. `AtomicUsize` and `AtomicIsize` are
147//! generally the most portable, but even then they're not available everywhere.
148//! For reference, the `std` library requires `AtomicBool`s and pointer-sized atomics, although
149//! `core` does not.
150//!
151//! The `#[cfg(target_has_atomic)]` attribute can be used to conditionally
152//! compile based on the target's supported bit widths. It is a key-value
153//! option set for each supported size, with values "8", "16", "32", "64",
154//! "128", and "ptr" for pointer-sized atomics.
155//!
156//! [lock-free]: https://en.wikipedia.org/wiki/Non-blocking_algorithm
157//!
158//! # Atomic accesses to read-only memory
159//!
160//! In general, *all* atomic accesses on read-only memory are undefined behavior. For instance, attempting
161//! to do a `compare_exchange` that will definitely fail (making it conceptually a read-only
162//! operation) can still cause a segmentation fault if the underlying memory page is mapped read-only. Since
163//! atomic `load`s might be implemented using compare-exchange operations, even a `load` can fault
164//! on read-only memory.
165//!
166//! For the purpose of this section, "read-only memory" is defined as memory that is read-only in
167//! the underlying target, i.e., the pages are mapped with a read-only flag and any attempt to write
168//! will cause a page fault. In particular, an `&u128` reference that points to memory that is
169//! read-write mapped is *not* considered to point to "read-only memory". In Rust, almost all memory
170//! is read-write; the only exceptions are memory created by `const` items or `static` items without
171//! interior mutability, and memory that was specifically marked as read-only by the operating
172//! system via platform-specific APIs.
173//!
174//! As an exception from the general rule stated above, "sufficiently small" atomic loads with
175//! `Ordering::Relaxed` are implemented in a way that works on read-only memory, and are hence not
176//! undefined behavior. The exact size limit for what makes a load "sufficiently small" varies
177//! depending on the target:
178//!
179//! | `target_arch` | Size limit |
180//! |---------------|---------|
181//! | `x86`, `arm`, `mips`, `mips32r6`, `powerpc`, `riscv32`, `sparc`, `hexagon` | 4 bytes |
182//! | `x86_64`, `aarch64`, `loongarch64`, `mips64`, `mips64r6`, `powerpc64`, `riscv64`, `sparc64`, `s390x` | 8 bytes |
183//!
//! Atomic loads that are larger than this limit, atomic loads with an ordering other than
//! `Relaxed`, and *all* atomic loads on targets not listed in the table might still work on
//! read-only memory under certain conditions, but that is not a stable guarantee and should not
//! be relied upon.
188//!
189//! If you need to do an acquire load on read-only memory, you can do a relaxed load followed by an
190//! acquire fence instead.
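//!
//! A sketch of that pattern (the function name here is purely illustrative):
//!
//! ```
//! use std::sync::atomic::{fence, AtomicU32, Ordering};
//!
//! fn acquire_load_from_readonly(flag: &AtomicU32) -> u32 {
//!     // A relaxed load of a sufficiently small atomic is allowed on read-only memory...
//!     let v = flag.load(Ordering::Relaxed);
//!     // ...and the acquire fence provides the acquire synchronization afterwards.
//!     fence(Ordering::Acquire);
//!     v
//! }
//!
//! let flag = AtomicU32::new(1);
//! assert_eq!(acquire_load_from_readonly(&flag), 1);
//! ```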
191//!
192//! # Examples
193//!
194//! A simple spinlock:
195//!
196//! ```
197//! use std::sync::Arc;
198//! use std::sync::atomic::{AtomicUsize, Ordering};
199//! use std::{hint, thread};
200//!
201//! fn main() {
202//!     let spinlock = Arc::new(AtomicUsize::new(1));
203//!
204//!     let spinlock_clone = Arc::clone(&spinlock);
205//!
206//!     let thread = thread::spawn(move || {
207//!         spinlock_clone.store(0, Ordering::Release);
208//!     });
209//!
210//!     // Wait for the other thread to release the lock
211//!     while spinlock.load(Ordering::Acquire) != 0 {
212//!         hint::spin_loop();
213//!     }
214//!
215//!     if let Err(panic) = thread.join() {
216//!         println!("Thread had an error: {panic:?}");
217//!     }
218//! }
219//! ```
220//!
221//! Keep a global count of live threads:
222//!
223//! ```
224//! use std::sync::atomic::{AtomicUsize, Ordering};
225//!
226//! static GLOBAL_THREAD_COUNT: AtomicUsize = AtomicUsize::new(0);
227//!
228//! // Note that Relaxed ordering doesn't synchronize anything
229//! // except the global thread counter itself.
230//! let old_thread_count = GLOBAL_THREAD_COUNT.fetch_add(1, Ordering::Relaxed);
//! // Note that this number may already be out of date by the time it is printed,
//! // because some other thread may have changed the static value in the meantime.
233//! println!("live threads: {}", old_thread_count + 1);
234//! ```
235
236#![stable(feature = "rust1", since = "1.0.0")]
237#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(dead_code))]
238#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(unused_imports))]
239#![rustc_diagnostic_item = "atomic_mod"]
240// Clippy complains about the pattern of "safe function calling unsafe function taking pointers".
241// This happens with AtomicPtr intrinsics but is fine, as the pointers clippy is concerned about
242// are just normal values that get loaded/stored, but not dereferenced.
243#![allow(clippy::not_unsafe_ptr_arg_deref)]
244
245use self::Ordering::*;
246use crate::cell::UnsafeCell;
247use crate::hint::spin_loop;
248use crate::{fmt, intrinsics};
249
250trait Sealed {}
251
252/// A marker trait for primitive types which can be modified atomically.
253///
254/// This is an implementation detail for <code>[Atomic]\<T></code> which may disappear or be replaced at any time.
255///
256/// # Safety
257///
258/// Types implementing this trait must be primitives that can be modified atomically.
259///
260/// The associated `Self::AtomicInner` type must have the same size and bit validity as `Self`,
261/// but may have a higher alignment requirement, so the following `transmute`s are sound:
262///
263/// - `&mut Self::AtomicInner` as `&mut Self`
264/// - `Self` as `Self::AtomicInner` or the reverse
265#[unstable(
266    feature = "atomic_internals",
267    reason = "implementation detail which may disappear or be replaced at any time",
268    issue = "none"
269)]
270#[expect(private_bounds)]
271pub unsafe trait AtomicPrimitive: Sized + Copy + Sealed {
272    /// Temporary implementation detail.
273    type AtomicInner: Sized;
274}
275
276macro impl_atomic_primitive(
277    $Atom:ident $(<$T:ident>)? ($Primitive:ty),
278    size($size:literal),
279    align($align:literal) $(,)?
280) {
281    impl $(<$T>)? Sealed for $Primitive {}
282
283    #[unstable(
284        feature = "atomic_internals",
285        reason = "implementation detail which may disappear or be replaced at any time",
286        issue = "none"
287    )]
288    #[cfg(target_has_atomic_load_store = $size)]
289    unsafe impl $(<$T>)? AtomicPrimitive for $Primitive {
290        type AtomicInner = $Atom $(<$T>)?;
291    }
292}
293
294impl_atomic_primitive!(AtomicBool(bool), size("8"), align(1));
295impl_atomic_primitive!(AtomicI8(i8), size("8"), align(1));
296impl_atomic_primitive!(AtomicU8(u8), size("8"), align(1));
297impl_atomic_primitive!(AtomicI16(i16), size("16"), align(2));
298impl_atomic_primitive!(AtomicU16(u16), size("16"), align(2));
299impl_atomic_primitive!(AtomicI32(i32), size("32"), align(4));
300impl_atomic_primitive!(AtomicU32(u32), size("32"), align(4));
301impl_atomic_primitive!(AtomicI64(i64), size("64"), align(8));
302impl_atomic_primitive!(AtomicU64(u64), size("64"), align(8));
303impl_atomic_primitive!(AtomicI128(i128), size("128"), align(16));
304impl_atomic_primitive!(AtomicU128(u128), size("128"), align(16));
305
306#[cfg(target_pointer_width = "16")]
307impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(2));
308#[cfg(target_pointer_width = "32")]
309impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(4));
310#[cfg(target_pointer_width = "64")]
311impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(8));
312
313#[cfg(target_pointer_width = "16")]
314impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(2));
315#[cfg(target_pointer_width = "32")]
316impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(4));
317#[cfg(target_pointer_width = "64")]
318impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(8));
319
320#[cfg(target_pointer_width = "16")]
321impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(2));
322#[cfg(target_pointer_width = "32")]
323impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(4));
324#[cfg(target_pointer_width = "64")]
325impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(8));
326
327/// A memory location which can be safely modified from multiple threads.
328///
329/// This has the same size and bit validity as the underlying type `T`. However,
330/// the alignment of this type is always equal to its size, even on targets where
331/// `T` has alignment less than its size.
332///
333/// For more about the differences between atomic types and non-atomic types as
334/// well as information about the portability of this type, please see the
335/// [module-level documentation].
336///
337/// **Note:** This type is only available on platforms that support atomic loads
338/// and stores of `T`.
339///
340/// [module-level documentation]: crate::sync::atomic
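///
/// A brief illustration (requires the unstable `generic_atomic` feature; the
/// values used are arbitrary):
///
/// ```
/// #![feature(generic_atomic)]
/// use std::sync::atomic::{Atomic, Ordering};
///
/// // `Atomic<u32>` is the same type as `AtomicU32`.
/// let x: Atomic<u32> = Atomic::<u32>::new(5);
/// x.fetch_add(1, Ordering::Relaxed);
/// assert_eq!(x.load(Ordering::Relaxed), 6);
/// ```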
341#[unstable(feature = "generic_atomic", issue = "130539")]
342pub type Atomic<T> = <T as AtomicPrimitive>::AtomicInner;
343
344// Some architectures don't have byte-sized atomics, which results in LLVM
345// emulating them using a LL/SC loop. However for AtomicBool we can take
346// advantage of the fact that it only ever contains 0 or 1 and use atomic OR/AND
347// instead, which LLVM can emulate using a larger atomic OR/AND operation.
348//
349// This list should only contain architectures which have word-sized atomic-or/
350// atomic-and instructions but don't natively support byte-sized atomics.
351#[cfg(target_has_atomic = "8")]
352const EMULATE_ATOMIC_BOOL: bool =
353    cfg!(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64"));
354
355/// A boolean type which can be safely shared between threads.
356///
357/// This type has the same size, alignment, and bit validity as a [`bool`].
358///
359/// **Note**: This type is only available on platforms that support atomic
360/// loads and stores of `u8`.
361#[cfg(target_has_atomic_load_store = "8")]
362#[stable(feature = "rust1", since = "1.0.0")]
363#[rustc_diagnostic_item = "AtomicBool"]
364#[repr(C, align(1))]
365pub struct AtomicBool {
366    v: UnsafeCell<u8>,
367}
368
369#[cfg(target_has_atomic_load_store = "8")]
370#[stable(feature = "rust1", since = "1.0.0")]
371impl Default for AtomicBool {
372    /// Creates an `AtomicBool` initialized to `false`.
373    #[inline]
374    fn default() -> Self {
375        Self::new(false)
376    }
377}
378
379// Send is implicitly implemented for AtomicBool.
380#[cfg(target_has_atomic_load_store = "8")]
381#[stable(feature = "rust1", since = "1.0.0")]
382unsafe impl Sync for AtomicBool {}
383
384/// A raw pointer type which can be safely shared between threads.
385///
386/// This type has the same size and bit validity as a `*mut T`.
387///
388/// **Note**: This type is only available on platforms that support atomic
389/// loads and stores of pointers. Its size depends on the target pointer's size.
390#[cfg(target_has_atomic_load_store = "ptr")]
391#[stable(feature = "rust1", since = "1.0.0")]
392#[rustc_diagnostic_item = "AtomicPtr"]
393#[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
394#[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
395#[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
396pub struct AtomicPtr<T> {
397    p: UnsafeCell<*mut T>,
398}
399
400#[cfg(target_has_atomic_load_store = "ptr")]
401#[stable(feature = "rust1", since = "1.0.0")]
402impl<T> Default for AtomicPtr<T> {
403    /// Creates a null `AtomicPtr<T>`.
404    fn default() -> AtomicPtr<T> {
405        AtomicPtr::new(crate::ptr::null_mut())
406    }
407}
408
409#[cfg(target_has_atomic_load_store = "ptr")]
410#[stable(feature = "rust1", since = "1.0.0")]
411unsafe impl<T> Send for AtomicPtr<T> {}
412#[cfg(target_has_atomic_load_store = "ptr")]
413#[stable(feature = "rust1", since = "1.0.0")]
414unsafe impl<T> Sync for AtomicPtr<T> {}
415
416/// Atomic memory orderings
417///
418/// Memory orderings specify the way atomic operations synchronize memory.
419/// In its weakest [`Ordering::Relaxed`], only the memory directly touched by the
420/// operation is synchronized. On the other hand, a store-load pair of [`Ordering::SeqCst`]
421/// operations synchronize other memory while additionally preserving a total order of such
422/// operations across all threads.
423///
424/// Rust's memory orderings are [the same as those of
425/// C++20](https://en.cppreference.com/w/cpp/atomic/memory_order).
426///
427/// For more information see the [nomicon].
428///
429/// [nomicon]: ../../../nomicon/atomics.html
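///
/// A classic release/acquire "message passing" example (illustrative; not taken from
/// any particular API in this module):
///
/// ```
/// use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
/// use std::thread;
///
/// static DATA: AtomicUsize = AtomicUsize::new(0);
/// static READY: AtomicBool = AtomicBool::new(false);
///
/// thread::scope(|s| {
///     s.spawn(|| {
///         DATA.store(42, Ordering::Relaxed);
///         // The `Release` store makes the write to DATA visible to any thread
///         // that performs an `Acquire` load of READY and observes `true`.
///         READY.store(true, Ordering::Release);
///     });
///     s.spawn(|| {
///         if READY.load(Ordering::Acquire) {
///             // Synchronized with the `Release` store above.
///             assert_eq!(DATA.load(Ordering::Relaxed), 42);
///         }
///     });
/// });
/// ```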
430#[stable(feature = "rust1", since = "1.0.0")]
431#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
432#[non_exhaustive]
433#[rustc_diagnostic_item = "Ordering"]
434pub enum Ordering {
435    /// No ordering constraints, only atomic operations.
436    ///
437    /// Corresponds to [`memory_order_relaxed`] in C++20.
438    ///
439    /// [`memory_order_relaxed`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Relaxed_ordering
440    #[stable(feature = "rust1", since = "1.0.0")]
441    Relaxed,
442    /// When coupled with a store, all previous operations become ordered
443    /// before any load of this value with [`Acquire`] (or stronger) ordering.
444    /// In particular, all previous writes become visible to all threads
445    /// that perform an [`Acquire`] (or stronger) load of this value.
446    ///
447    /// Notice that using this ordering for an operation that combines loads
448    /// and stores leads to a [`Relaxed`] load operation!
449    ///
450    /// This ordering is only applicable for operations that can perform a store.
451    ///
452    /// Corresponds to [`memory_order_release`] in C++20.
453    ///
454    /// [`memory_order_release`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
455    #[stable(feature = "rust1", since = "1.0.0")]
456    Release,
457    /// When coupled with a load, if the loaded value was written by a store operation with
458    /// [`Release`] (or stronger) ordering, then all subsequent operations
459    /// become ordered after that store. In particular, all subsequent loads will see data
460    /// written before the store.
461    ///
462    /// Notice that using this ordering for an operation that combines loads
463    /// and stores leads to a [`Relaxed`] store operation!
464    ///
465    /// This ordering is only applicable for operations that can perform a load.
466    ///
467    /// Corresponds to [`memory_order_acquire`] in C++20.
468    ///
469    /// [`memory_order_acquire`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
470    #[stable(feature = "rust1", since = "1.0.0")]
471    Acquire,
472    /// Has the effects of both [`Acquire`] and [`Release`] together:
473    /// For loads it uses [`Acquire`] ordering. For stores it uses the [`Release`] ordering.
474    ///
475    /// Notice that in the case of `compare_and_swap`, it is possible that the operation ends up
476    /// not performing any store and hence it has just [`Acquire`] ordering. However,
477    /// `AcqRel` will never perform [`Relaxed`] accesses.
478    ///
479    /// This ordering is only applicable for operations that combine both loads and stores.
480    ///
481    /// Corresponds to [`memory_order_acq_rel`] in C++20.
482    ///
483    /// [`memory_order_acq_rel`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
484    #[stable(feature = "rust1", since = "1.0.0")]
485    AcqRel,
486    /// Like [`Acquire`]/[`Release`]/[`AcqRel`] (for load, store, and load-with-store
487    /// operations, respectively) with the additional guarantee that all threads see all
488    /// sequentially consistent operations in the same order.
489    ///
490    /// Corresponds to [`memory_order_seq_cst`] in C++20.
491    ///
492    /// [`memory_order_seq_cst`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Sequentially-consistent_ordering
493    #[stable(feature = "rust1", since = "1.0.0")]
494    SeqCst,
495}
496
497/// An [`AtomicBool`] initialized to `false`.
498#[cfg(target_has_atomic_load_store = "8")]
499#[stable(feature = "rust1", since = "1.0.0")]
500#[deprecated(
501    since = "1.34.0",
502    note = "the `new` function is now preferred",
503    suggestion = "AtomicBool::new(false)"
504)]
505pub const ATOMIC_BOOL_INIT: AtomicBool = AtomicBool::new(false);
506
507#[cfg(target_has_atomic_load_store = "8")]
508impl AtomicBool {
509    /// Creates a new `AtomicBool`.
510    ///
511    /// # Examples
512    ///
513    /// ```
514    /// use std::sync::atomic::AtomicBool;
515    ///
516    /// let atomic_true = AtomicBool::new(true);
517    /// let atomic_false = AtomicBool::new(false);
518    /// ```
519    #[inline]
520    #[stable(feature = "rust1", since = "1.0.0")]
521    #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
522    #[must_use]
523    pub const fn new(v: bool) -> AtomicBool {
524        AtomicBool { v: UnsafeCell::new(v as u8) }
525    }
526
527    /// Creates a new `AtomicBool` from a pointer.
528    ///
529    /// # Examples
530    ///
531    /// ```
532    /// use std::sync::atomic::{self, AtomicBool};
533    ///
534    /// // Get a pointer to an allocated value
535    /// let ptr: *mut bool = Box::into_raw(Box::new(false));
536    ///
537    /// assert!(ptr.cast::<AtomicBool>().is_aligned());
538    ///
539    /// {
540    ///     // Create an atomic view of the allocated value
541    ///     let atomic = unsafe { AtomicBool::from_ptr(ptr) };
542    ///
543    ///     // Use `atomic` for atomic operations, possibly share it with other threads
544    ///     atomic.store(true, atomic::Ordering::Relaxed);
545    /// }
546    ///
547    /// // It's ok to non-atomically access the value behind `ptr`,
548    /// // since the reference to the atomic ended its lifetime in the block above
549    /// assert_eq!(unsafe { *ptr }, true);
550    ///
551    /// // Deallocate the value
552    /// unsafe { drop(Box::from_raw(ptr)) }
553    /// ```
554    ///
555    /// # Safety
556    ///
557    /// * `ptr` must be aligned to `align_of::<AtomicBool>()` (note that this is always true, since
558    ///   `align_of::<AtomicBool>() == 1`).
559    /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
560    /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
561    ///   allowed to mix atomic and non-atomic accesses, or atomic accesses of different sizes,
562    ///   without synchronization.
563    ///
564    /// [valid]: crate::ptr#safety
565    /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
566    #[inline]
567    #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
568    #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
569    pub const unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a AtomicBool {
570        // SAFETY: guaranteed by the caller
571        unsafe { &*ptr.cast() }
572    }
573
574    /// Returns a mutable reference to the underlying [`bool`].
575    ///
576    /// This is safe because the mutable reference guarantees that no other threads are
577    /// concurrently accessing the atomic data.
578    ///
579    /// # Examples
580    ///
581    /// ```
582    /// use std::sync::atomic::{AtomicBool, Ordering};
583    ///
584    /// let mut some_bool = AtomicBool::new(true);
585    /// assert_eq!(*some_bool.get_mut(), true);
586    /// *some_bool.get_mut() = false;
587    /// assert_eq!(some_bool.load(Ordering::SeqCst), false);
588    /// ```
589    #[inline]
590    #[stable(feature = "atomic_access", since = "1.15.0")]
591    pub fn get_mut(&mut self) -> &mut bool {
592        // SAFETY: the mutable reference guarantees unique ownership.
593        unsafe { &mut *(self.v.get() as *mut bool) }
594    }
595
596    /// Gets atomic access to a `&mut bool`.
597    ///
598    /// # Examples
599    ///
600    /// ```
601    /// #![feature(atomic_from_mut)]
602    /// use std::sync::atomic::{AtomicBool, Ordering};
603    ///
604    /// let mut some_bool = true;
605    /// let a = AtomicBool::from_mut(&mut some_bool);
606    /// a.store(false, Ordering::Relaxed);
607    /// assert_eq!(some_bool, false);
608    /// ```
609    #[inline]
610    #[cfg(target_has_atomic_equal_alignment = "8")]
611    #[unstable(feature = "atomic_from_mut", issue = "76314")]
612    pub fn from_mut(v: &mut bool) -> &mut Self {
613        // SAFETY: the mutable reference guarantees unique ownership, and
614        // alignment of both `bool` and `Self` is 1.
615        unsafe { &mut *(v as *mut bool as *mut Self) }
616    }
617
618    /// Gets non-atomic access to a `&mut [AtomicBool]` slice.
619    ///
620    /// This is safe because the mutable reference guarantees that no other threads are
621    /// concurrently accessing the atomic data.
622    ///
623    /// # Examples
624    ///
625    /// ```
626    /// #![feature(atomic_from_mut)]
627    /// use std::sync::atomic::{AtomicBool, Ordering};
628    ///
629    /// let mut some_bools = [const { AtomicBool::new(false) }; 10];
630    ///
631    /// let view: &mut [bool] = AtomicBool::get_mut_slice(&mut some_bools);
632    /// assert_eq!(view, [false; 10]);
633    /// view[..5].copy_from_slice(&[true; 5]);
634    ///
635    /// std::thread::scope(|s| {
636    ///     for t in &some_bools[..5] {
637    ///         s.spawn(move || assert_eq!(t.load(Ordering::Relaxed), true));
638    ///     }
639    ///
640    ///     for f in &some_bools[5..] {
641    ///         s.spawn(move || assert_eq!(f.load(Ordering::Relaxed), false));
642    ///     }
643    /// });
644    /// ```
645    #[inline]
646    #[unstable(feature = "atomic_from_mut", issue = "76314")]
647    pub fn get_mut_slice(this: &mut [Self]) -> &mut [bool] {
648        // SAFETY: the mutable reference guarantees unique ownership.
649        unsafe { &mut *(this as *mut [Self] as *mut [bool]) }
650    }
651
652    /// Gets atomic access to a `&mut [bool]` slice.
653    ///
654    /// # Examples
655    ///
656    /// ```
657    /// #![feature(atomic_from_mut)]
658    /// use std::sync::atomic::{AtomicBool, Ordering};
659    ///
660    /// let mut some_bools = [false; 10];
661    /// let a = &*AtomicBool::from_mut_slice(&mut some_bools);
662    /// std::thread::scope(|s| {
663    ///     for i in 0..a.len() {
664    ///         s.spawn(move || a[i].store(true, Ordering::Relaxed));
665    ///     }
666    /// });
667    /// assert_eq!(some_bools, [true; 10]);
668    /// ```
669    #[inline]
670    #[cfg(target_has_atomic_equal_alignment = "8")]
671    #[unstable(feature = "atomic_from_mut", issue = "76314")]
672    pub fn from_mut_slice(v: &mut [bool]) -> &mut [Self] {
673        // SAFETY: the mutable reference guarantees unique ownership, and
674        // alignment of both `bool` and `Self` is 1.
675        unsafe { &mut *(v as *mut [bool] as *mut [Self]) }
676    }
677
678    /// Consumes the atomic and returns the contained value.
679    ///
680    /// This is safe because passing `self` by value guarantees that no other threads are
681    /// concurrently accessing the atomic data.
682    ///
683    /// # Examples
684    ///
685    /// ```
686    /// use std::sync::atomic::AtomicBool;
687    ///
688    /// let some_bool = AtomicBool::new(true);
689    /// assert_eq!(some_bool.into_inner(), true);
690    /// ```
691    #[inline]
692    #[stable(feature = "atomic_access", since = "1.15.0")]
693    #[rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0")]
694    pub const fn into_inner(self) -> bool {
695        self.v.into_inner() != 0
696    }
697
698    /// Loads a value from the bool.
699    ///
700    /// `load` takes an [`Ordering`] argument which describes the memory ordering
701    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
702    ///
703    /// # Panics
704    ///
705    /// Panics if `order` is [`Release`] or [`AcqRel`].
706    ///
707    /// # Examples
708    ///
709    /// ```
710    /// use std::sync::atomic::{AtomicBool, Ordering};
711    ///
712    /// let some_bool = AtomicBool::new(true);
713    ///
714    /// assert_eq!(some_bool.load(Ordering::Relaxed), true);
715    /// ```
716    #[inline]
717    #[stable(feature = "rust1", since = "1.0.0")]
718    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
719    pub fn load(&self, order: Ordering) -> bool {
720        // SAFETY: any data races are prevented by atomic intrinsics and the raw
721        // pointer passed in is valid because we got it from a reference.
722        unsafe { atomic_load(self.v.get(), order) != 0 }
723    }
724
725    /// Stores a value into the bool.
726    ///
727    /// `store` takes an [`Ordering`] argument which describes the memory ordering
728    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
729    ///
730    /// # Panics
731    ///
732    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
733    ///
734    /// # Examples
735    ///
736    /// ```
737    /// use std::sync::atomic::{AtomicBool, Ordering};
738    ///
739    /// let some_bool = AtomicBool::new(true);
740    ///
741    /// some_bool.store(false, Ordering::Relaxed);
742    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
743    /// ```
744    #[inline]
745    #[stable(feature = "rust1", since = "1.0.0")]
746    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
747    pub fn store(&self, val: bool, order: Ordering) {
748        // SAFETY: any data races are prevented by atomic intrinsics and the raw
749        // pointer passed in is valid because we got it from a reference.
750        unsafe {
751            atomic_store(self.v.get(), val as u8, order);
752        }
753    }
754
755    /// Stores a value into the bool, returning the previous value.
756    ///
757    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
758    /// of this operation. All ordering modes are possible. Note that using
759    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
760    /// using [`Release`] makes the load part [`Relaxed`].
761    ///
762    /// **Note:** This method is only available on platforms that support atomic
763    /// operations on `u8`.
764    ///
765    /// # Examples
766    ///
767    /// ```
768    /// use std::sync::atomic::{AtomicBool, Ordering};
769    ///
770    /// let some_bool = AtomicBool::new(true);
771    ///
772    /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
773    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
774    /// ```
775    #[inline]
776    #[stable(feature = "rust1", since = "1.0.0")]
777    #[cfg(target_has_atomic = "8")]
778    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
779    pub fn swap(&self, val: bool, order: Ordering) -> bool {
780        if EMULATE_ATOMIC_BOOL {
781            if val { self.fetch_or(true, order) } else { self.fetch_and(false, order) }
782        } else {
783            // SAFETY: data races are prevented by atomic intrinsics.
784            unsafe { atomic_swap(self.v.get(), val as u8, order) != 0 }
785        }
786    }
787
788    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
789    ///
790    /// The return value is always the previous value. If it is equal to `current`, then the value
791    /// was updated.
792    ///
793    /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
794    /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
795    /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
796    /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
797    /// happens, and using [`Release`] makes the load part [`Relaxed`].
798    ///
799    /// **Note:** This method is only available on platforms that support atomic
800    /// operations on `u8`.
801    ///
802    /// # Migrating to `compare_exchange` and `compare_exchange_weak`
803    ///
804    /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
805    /// memory orderings:
806    ///
807    /// Original | Success | Failure
808    /// -------- | ------- | -------
809    /// Relaxed  | Relaxed | Relaxed
810    /// Acquire  | Acquire | Acquire
811    /// Release  | Release | Relaxed
812    /// AcqRel   | AcqRel  | Acquire
813    /// SeqCst   | SeqCst  | SeqCst
814    ///
815    /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
816    /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
817    /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
818    /// rather than to infer success vs failure based on the value that was read.
819    ///
820    /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
821    /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
822    /// which allows the compiler to generate better assembly code when the compare and swap
823    /// is used in a loop.
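    ///
    /// For instance, a retry loop that used `compare_and_swap` can be migrated as follows
    /// (a sketch using `compare_exchange_weak`, since the result is retried anyway):
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let flag = AtomicBool::new(false);
    ///
    /// // Before: `while flag.compare_and_swap(false, true, Ordering::AcqRel) {}`
    /// // After:
    /// while flag
    ///     .compare_exchange_weak(false, true, Ordering::AcqRel, Ordering::Acquire)
    ///     .is_err()
    /// {}
    /// assert_eq!(flag.load(Ordering::Relaxed), true);
    /// ```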
824    ///
825    /// # Examples
826    ///
827    /// ```
828    /// use std::sync::atomic::{AtomicBool, Ordering};
829    ///
830    /// let some_bool = AtomicBool::new(true);
831    ///
832    /// assert_eq!(some_bool.compare_and_swap(true, false, Ordering::Relaxed), true);
833    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
834    ///
835    /// assert_eq!(some_bool.compare_and_swap(true, true, Ordering::Relaxed), false);
836    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
837    /// ```
838    #[inline]
839    #[stable(feature = "rust1", since = "1.0.0")]
840    #[deprecated(
841        since = "1.50.0",
842        note = "Use `compare_exchange` or `compare_exchange_weak` instead"
843    )]
844    #[cfg(target_has_atomic = "8")]
845    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
846    pub fn compare_and_swap(&self, current: bool, new: bool, order: Ordering) -> bool {
847        match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
848            Ok(x) => x,
849            Err(x) => x,
850        }
851    }
852
853    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
854    ///
855    /// The return value is a result indicating whether the new value was written and containing
856    /// the previous value. On success this value is guaranteed to be equal to `current`.
857    ///
858    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
859    /// ordering of this operation. `success` describes the required ordering for the
860    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
861    /// `failure` describes the required ordering for the load operation that takes place when
862    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
863    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
864    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
865    ///
866    /// **Note:** This method is only available on platforms that support atomic
867    /// operations on `u8`.
868    ///
869    /// # Examples
870    ///
871    /// ```
872    /// use std::sync::atomic::{AtomicBool, Ordering};
873    ///
874    /// let some_bool = AtomicBool::new(true);
875    ///
876    /// assert_eq!(some_bool.compare_exchange(true,
877    ///                                       false,
878    ///                                       Ordering::Acquire,
879    ///                                       Ordering::Relaxed),
880    ///            Ok(true));
881    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
882    ///
883    /// assert_eq!(some_bool.compare_exchange(true, true,
884    ///                                       Ordering::SeqCst,
885    ///                                       Ordering::Acquire),
886    ///            Err(false));
887    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
888    /// ```
889    #[inline]
890    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
891    #[doc(alias = "compare_and_swap")]
892    #[cfg(target_has_atomic = "8")]
893    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
894    pub fn compare_exchange(
895        &self,
896        current: bool,
897        new: bool,
898        success: Ordering,
899        failure: Ordering,
900    ) -> Result<bool, bool> {
901        if EMULATE_ATOMIC_BOOL {
902            // Pick the strongest ordering from success and failure.
903            let order = match (success, failure) {
904                (SeqCst, _) => SeqCst,
905                (_, SeqCst) => SeqCst,
906                (AcqRel, _) => AcqRel,
907                (_, AcqRel) => {
908                    panic!("there is no such thing as an acquire-release failure ordering")
909                }
910                (Release, Acquire) => AcqRel,
911                (Acquire, _) => Acquire,
912                (_, Acquire) => Acquire,
913                (Release, Relaxed) => Release,
914                (_, Release) => panic!("there is no such thing as a release failure ordering"),
915                (Relaxed, Relaxed) => Relaxed,
916            };
917            let old = if current == new {
918                // This is a no-op, but we still need to perform the operation
919                // for memory ordering reasons.
920                self.fetch_or(false, order)
921            } else {
922                // This sets the value to the new one and returns the old one.
923                self.swap(new, order)
924            };
925            if old == current { Ok(old) } else { Err(old) }
926        } else {
927            // SAFETY: data races are prevented by atomic intrinsics.
928            match unsafe {
929                atomic_compare_exchange(self.v.get(), current as u8, new as u8, success, failure)
930            } {
931                Ok(x) => Ok(x != 0),
932                Err(x) => Err(x != 0),
933            }
934        }
935    }
936
937    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
938    ///
939    /// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the
940    /// comparison succeeds, which can result in more efficient code on some platforms. The
941    /// return value is a result indicating whether the new value was written and containing the
942    /// previous value.
943    ///
944    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
945    /// ordering of this operation. `success` describes the required ordering for the
946    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
947    /// `failure` describes the required ordering for the load operation that takes place when
948    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
949    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
950    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
951    ///
952    /// **Note:** This method is only available on platforms that support atomic
953    /// operations on `u8`.
954    ///
955    /// # Examples
956    ///
957    /// ```
958    /// use std::sync::atomic::{AtomicBool, Ordering};
959    ///
960    /// let val = AtomicBool::new(false);
961    ///
962    /// let new = true;
963    /// let mut old = val.load(Ordering::Relaxed);
964    /// loop {
965    ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
966    ///         Ok(_) => break,
967    ///         Err(x) => old = x,
968    ///     }
969    /// }
970    /// ```
971    #[inline]
972    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
973    #[doc(alias = "compare_and_swap")]
974    #[cfg(target_has_atomic = "8")]
975    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
976    pub fn compare_exchange_weak(
977        &self,
978        current: bool,
979        new: bool,
980        success: Ordering,
981        failure: Ordering,
982    ) -> Result<bool, bool> {
983        if EMULATE_ATOMIC_BOOL {
984            return self.compare_exchange(current, new, success, failure);
985        }
986
987        // SAFETY: data races are prevented by atomic intrinsics.
988        match unsafe {
989            atomic_compare_exchange_weak(self.v.get(), current as u8, new as u8, success, failure)
990        } {
991            Ok(x) => Ok(x != 0),
992            Err(x) => Err(x != 0),
993        }
994    }
995
996    /// Logical "and" with a boolean value.
997    ///
998    /// Performs a logical "and" operation on the current value and the argument `val`, and sets
999    /// the new value to the result.
1000    ///
1001    /// Returns the previous value.
1002    ///
1003    /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
1004    /// of this operation. All ordering modes are possible. Note that using
1005    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1006    /// using [`Release`] makes the load part [`Relaxed`].
1007    ///
1008    /// **Note:** This method is only available on platforms that support atomic
1009    /// operations on `u8`.
1010    ///
1011    /// # Examples
1012    ///
1013    /// ```
1014    /// use std::sync::atomic::{AtomicBool, Ordering};
1015    ///
1016    /// let foo = AtomicBool::new(true);
1017    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
1018    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1019    ///
1020    /// let foo = AtomicBool::new(true);
1021    /// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
1022    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1023    ///
1024    /// let foo = AtomicBool::new(false);
1025    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
1026    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1027    /// ```
1028    #[inline]
1029    #[stable(feature = "rust1", since = "1.0.0")]
1030    #[cfg(target_has_atomic = "8")]
1031    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1032    pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
1033        // SAFETY: data races are prevented by atomic intrinsics.
1034        unsafe { atomic_and(self.v.get(), val as u8, order) != 0 }
1035    }
1036
1037    /// Logical "nand" with a boolean value.
1038    ///
1039    /// Performs a logical "nand" operation on the current value and the argument `val`, and sets
1040    /// the new value to the result.
1041    ///
1042    /// Returns the previous value.
1043    ///
1044    /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
1045    /// of this operation. All ordering modes are possible. Note that using
1046    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1047    /// using [`Release`] makes the load part [`Relaxed`].
1048    ///
1049    /// **Note:** This method is only available on platforms that support atomic
1050    /// operations on `u8`.
1051    ///
1052    /// # Examples
1053    ///
1054    /// ```
1055    /// use std::sync::atomic::{AtomicBool, Ordering};
1056    ///
1057    /// let foo = AtomicBool::new(true);
1058    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
1059    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1060    ///
1061    /// let foo = AtomicBool::new(true);
1062    /// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
1063    /// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
1064    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1065    ///
1066    /// let foo = AtomicBool::new(false);
1067    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
1068    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1069    /// ```
1070    #[inline]
1071    #[stable(feature = "rust1", since = "1.0.0")]
1072    #[cfg(target_has_atomic = "8")]
1073    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1074    pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
1075        // We can't use atomic_nand here because it can result in a bool with
1076        // an invalid value. This happens because the atomic operation is done
1077        // with an 8-bit integer internally, which would set the upper 7 bits.
1078        // So we just use fetch_xor or swap instead.
1079        if val {
1080            // !(x & true) == !x
1081            // We must invert the bool.
1082            self.fetch_xor(true, order)
1083        } else {
1084            // !(x & false) == true
1085            // We must set the bool to true.
1086            self.swap(true, order)
1087        }
1088    }
1089
1090    /// Logical "or" with a boolean value.
1091    ///
1092    /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
1093    /// new value to the result.
1094    ///
1095    /// Returns the previous value.
1096    ///
1097    /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
1098    /// of this operation. All ordering modes are possible. Note that using
1099    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1100    /// using [`Release`] makes the load part [`Relaxed`].
1101    ///
1102    /// **Note:** This method is only available on platforms that support atomic
1103    /// operations on `u8`.
1104    ///
1105    /// # Examples
1106    ///
1107    /// ```
1108    /// use std::sync::atomic::{AtomicBool, Ordering};
1109    ///
1110    /// let foo = AtomicBool::new(true);
1111    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
1112    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1113    ///
1114    /// let foo = AtomicBool::new(true);
1115    /// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), true);
1116    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1117    ///
1118    /// let foo = AtomicBool::new(false);
1119    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
1120    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1121    /// ```
1122    #[inline]
1123    #[stable(feature = "rust1", since = "1.0.0")]
1124    #[cfg(target_has_atomic = "8")]
1125    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1126    pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
1127        // SAFETY: data races are prevented by atomic intrinsics.
1128        unsafe { atomic_or(self.v.get(), val as u8, order) != 0 }
1129    }
1130
1131    /// Logical "xor" with a boolean value.
1132    ///
1133    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
1134    /// the new value to the result.
1135    ///
1136    /// Returns the previous value.
1137    ///
1138    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
1139    /// of this operation. All ordering modes are possible. Note that using
1140    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1141    /// using [`Release`] makes the load part [`Relaxed`].
1142    ///
1143    /// **Note:** This method is only available on platforms that support atomic
1144    /// operations on `u8`.
1145    ///
1146    /// # Examples
1147    ///
1148    /// ```
1149    /// use std::sync::atomic::{AtomicBool, Ordering};
1150    ///
1151    /// let foo = AtomicBool::new(true);
1152    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
1153    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1154    ///
1155    /// let foo = AtomicBool::new(true);
1156    /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
1157    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1158    ///
1159    /// let foo = AtomicBool::new(false);
1160    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
1161    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1162    /// ```
1163    #[inline]
1164    #[stable(feature = "rust1", since = "1.0.0")]
1165    #[cfg(target_has_atomic = "8")]
1166    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1167    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
1168        // SAFETY: data races are prevented by atomic intrinsics.
1169        unsafe { atomic_xor(self.v.get(), val as u8, order) != 0 }
1170    }
1171
1172    /// Logical "not" with a boolean value.
1173    ///
1174    /// Performs a logical "not" operation on the current value, and sets
1175    /// the new value to the result.
1176    ///
1177    /// Returns the previous value.
1178    ///
1179    /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering
1180    /// of this operation. All ordering modes are possible. Note that using
1181    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1182    /// using [`Release`] makes the load part [`Relaxed`].
1183    ///
1184    /// **Note:** This method is only available on platforms that support atomic
1185    /// operations on `u8`.
1186    ///
1187    /// # Examples
1188    ///
1189    /// ```
1190    /// use std::sync::atomic::{AtomicBool, Ordering};
1191    ///
1192    /// let foo = AtomicBool::new(true);
1193    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
1194    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1195    ///
1196    /// let foo = AtomicBool::new(false);
1197    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
1198    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1199    /// ```
1200    #[inline]
1201    #[stable(feature = "atomic_bool_fetch_not", since = "1.81.0")]
1202    #[cfg(target_has_atomic = "8")]
1203    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1204    pub fn fetch_not(&self, order: Ordering) -> bool {
1205        self.fetch_xor(true, order)
1206    }
1207
1208    /// Returns a mutable pointer to the underlying [`bool`].
1209    ///
1210    /// Doing non-atomic reads and writes on the resulting boolean can be a data race.
1211    /// This method is mostly useful for FFI, where the function signature may use
1212    /// `*mut bool` instead of `&AtomicBool`.
1213    ///
1214    /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
1215    /// atomic types work with interior mutability. All modifications of an atomic change the value
1216    /// through a shared reference, and can do so safely as long as they use atomic operations. Any
1217    /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
1218    /// restriction: operations on it must be atomic.
1219    ///
1220    /// # Examples
1221    ///
1222    /// ```ignore (extern-declaration)
1223    /// # fn main() {
1224    /// use std::sync::atomic::AtomicBool;
1225    ///
1226    /// extern "C" {
1227    ///     fn my_atomic_op(arg: *mut bool);
1228    /// }
1229    ///
1230    /// let mut atomic = AtomicBool::new(true);
1231    /// unsafe {
1232    ///     my_atomic_op(atomic.as_ptr());
1233    /// }
1234    /// # }
1235    /// ```
1236    #[inline]
1237    #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
1238    #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
1239    #[rustc_never_returns_null_ptr]
1240    pub const fn as_ptr(&self) -> *mut bool {
1241        self.v.get().cast()
1242    }
1243
1244    /// Fetches the value, and applies a function to it that returns an optional
1245    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1246    /// returned `Some(_)`, else `Err(previous_value)`.
1247    ///
1248    /// Note: This may call the function multiple times if the value has been
1249    /// changed from other threads in the meantime, as long as the function
1250    /// returns `Some(_)`, but the function will have been applied only once to
1251    /// the stored value.
1252    ///
1253    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1254    /// ordering of this operation. The first describes the required ordering for
1255    /// when the operation finally succeeds while the second describes the
1256    /// required ordering for loads. These correspond to the success and failure
1257    /// orderings of [`AtomicBool::compare_exchange`] respectively.
1258    ///
1259    /// Using [`Acquire`] as success ordering makes the store part of this
1260    /// operation [`Relaxed`], and using [`Release`] makes the final successful
1261    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1262    /// [`Acquire`] or [`Relaxed`].
1263    ///
1264    /// **Note:** This method is only available on platforms that support atomic
1265    /// operations on `u8`.
1266    ///
1267    /// # Considerations
1268    ///
1269    /// This method is not magic; it is not provided by the hardware.
1270    /// It is implemented in terms of [`AtomicBool::compare_exchange_weak`], and suffers from the same drawbacks.
1271    /// In particular, this method will not circumvent the [ABA Problem].
1272    ///
1273    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1274    ///
1275    /// # Examples
1276    ///
1277    /// ```rust
1278    /// use std::sync::atomic::{AtomicBool, Ordering};
1279    ///
1280    /// let x = AtomicBool::new(false);
1281    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1282    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1283    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1284    /// assert_eq!(x.load(Ordering::SeqCst), false);
1285    /// ```
1286    #[inline]
1287    #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
1288    #[cfg(target_has_atomic = "8")]
1289    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1290    pub fn fetch_update<F>(
1291        &self,
1292        set_order: Ordering,
1293        fetch_order: Ordering,
1294        mut f: F,
1295    ) -> Result<bool, bool>
1296    where
1297        F: FnMut(bool) -> Option<bool>,
1298    {
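        // Re-evaluate `f` on each freshly observed value until it returns `None`
        // or the weak compare-exchange succeeds.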
1299        let mut prev = self.load(fetch_order);
1300        while let Some(next) = f(prev) {
1301            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1302                x @ Ok(_) => return x,
1303                Err(next_prev) => prev = next_prev,
1304            }
1305        }
1306        Err(prev)
1307    }
1308
1309    /// Fetches the value, and applies a function to it that returns an optional
1310    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1311    /// returned `Some(_)`, else `Err(previous_value)`.
1312    ///
1313    /// See also: [`update`](`AtomicBool::update`).
1314    ///
1315    /// Note: This may call the function multiple times if the value has been
1316    /// changed from other threads in the meantime, as long as the function
1317    /// returns `Some(_)`, but the function will have been applied only once to
1318    /// the stored value.
1319    ///
1320    /// `try_update` takes two [`Ordering`] arguments to describe the memory
1321    /// ordering of this operation. The first describes the required ordering for
1322    /// when the operation finally succeeds while the second describes the
1323    /// required ordering for loads. These correspond to the success and failure
1324    /// orderings of [`AtomicBool::compare_exchange`] respectively.
1325    ///
1326    /// Using [`Acquire`] as success ordering makes the store part of this
1327    /// operation [`Relaxed`], and using [`Release`] makes the final successful
1328    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1329    /// [`Acquire`] or [`Relaxed`].
1330    ///
1331    /// **Note:** This method is only available on platforms that support atomic
1332    /// operations on `u8`.
1333    ///
1334    /// # Considerations
1335    ///
1336    /// This method is not magic; it is not provided by the hardware.
1337    /// It is implemented in terms of [`AtomicBool::compare_exchange_weak`], and suffers from the same drawbacks.
1338    /// In particular, this method will not circumvent the [ABA Problem].
1339    ///
1340    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1341    ///
1342    /// # Examples
1343    ///
1344    /// ```rust
1345    /// #![feature(atomic_try_update)]
1346    /// use std::sync::atomic::{AtomicBool, Ordering};
1347    ///
1348    /// let x = AtomicBool::new(false);
1349    /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1350    /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1351    /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1352    /// assert_eq!(x.load(Ordering::SeqCst), false);
1353    /// ```
1354    #[inline]
1355    #[unstable(feature = "atomic_try_update", issue = "135894")]
1356    #[cfg(target_has_atomic = "8")]
1357    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1358    pub fn try_update(
1359        &self,
1360        set_order: Ordering,
1361        fetch_order: Ordering,
1362        f: impl FnMut(bool) -> Option<bool>,
1363    ) -> Result<bool, bool> {
1364        // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`;
1365        //      when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`.
1366        self.fetch_update(set_order, fetch_order, f)
1367    }
1368
1369    /// Fetches the value, and applies a function to it that returns a new value.
1370    /// The new value is stored and the old value is returned.
1371    ///
1372    /// See also: [`try_update`](`AtomicBool::try_update`).
1373    ///
1374    /// Note: This may call the function multiple times if the value has been changed from other threads in
1375    /// the meantime, but the function will have been applied only once to the stored value.
1376    ///
1377    /// `update` takes two [`Ordering`] arguments to describe the memory
1378    /// ordering of this operation. The first describes the required ordering for
1379    /// when the operation finally succeeds while the second describes the
1380    /// required ordering for loads. These correspond to the success and failure
1381    /// orderings of [`AtomicBool::compare_exchange`] respectively.
1382    ///
1383    /// Using [`Acquire`] as success ordering makes the store part
1384    /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
1385    /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1386    ///
1387    /// **Note:** This method is only available on platforms that support atomic operations on `u8`.
1388    ///
1389    /// # Considerations
1390    ///
1391    /// This method is not magic; it is not provided by the hardware.
1392    /// It is implemented in terms of [`AtomicBool::compare_exchange_weak`], and suffers from the same drawbacks.
1393    /// In particular, this method will not circumvent the [ABA Problem].
1394    ///
1395    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1396    ///
1397    /// # Examples
1398    ///
1399    /// ```rust
1400    /// #![feature(atomic_try_update)]
1401    ///
1402    /// use std::sync::atomic::{AtomicBool, Ordering};
1403    ///
1404    /// let x = AtomicBool::new(false);
1405    /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| !x), false);
1406    /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| !x), true);
1407    /// assert_eq!(x.load(Ordering::SeqCst), false);
1408    /// ```
1409    #[inline]
1410    #[unstable(feature = "atomic_try_update", issue = "135894")]
1411    #[cfg(target_has_atomic = "8")]
1412    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1413    pub fn update(
1414        &self,
1415        set_order: Ordering,
1416        fetch_order: Ordering,
1417        mut f: impl FnMut(bool) -> bool,
1418    ) -> bool {
1419        let mut prev = self.load(fetch_order);
1420        loop {
1421            match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
1422                Ok(x) => break x,
1423                Err(next_prev) => prev = next_prev,
1424            }
1425        }
1426    }
1427}
1428
1429#[cfg(target_has_atomic_load_store = "ptr")]
1430impl<T> AtomicPtr<T> {
1431    /// Creates a new `AtomicPtr`.
1432    ///
1433    /// # Examples
1434    ///
1435    /// ```
1436    /// use std::sync::atomic::AtomicPtr;
1437    ///
1438    /// let ptr = &mut 5;
1439    /// let atomic_ptr = AtomicPtr::new(ptr);
1440    /// ```
1441    #[inline]
1442    #[stable(feature = "rust1", since = "1.0.0")]
1443    #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
1444    pub const fn new(p: *mut T) -> AtomicPtr<T> {
1445        AtomicPtr { p: UnsafeCell::new(p) }
1446    }
1447
1448    /// Creates a new `AtomicPtr` from a pointer.
1449    ///
1450    /// # Examples
1451    ///
1452    /// ```
1453    /// use std::sync::atomic::{self, AtomicPtr};
1454    ///
1455    /// // Get a pointer to an allocated value
1456    /// let ptr: *mut *mut u8 = Box::into_raw(Box::new(std::ptr::null_mut()));
1457    ///
1458    /// assert!(ptr.cast::<AtomicPtr<u8>>().is_aligned());
1459    ///
1460    /// {
1461    ///     // Create an atomic view of the allocated value
1462    ///     let atomic = unsafe { AtomicPtr::from_ptr(ptr) };
1463    ///
1464    ///     // Use `atomic` for atomic operations, possibly share it with other threads
1465    ///     atomic.store(std::ptr::NonNull::dangling().as_ptr(), atomic::Ordering::Relaxed);
1466    /// }
1467    ///
1468    /// // It's ok to non-atomically access the value behind `ptr`,
1469    /// // since the reference to the atomic ended its lifetime in the block above
1470    /// assert!(!unsafe { *ptr }.is_null());
1471    ///
1472    /// // Deallocate the value
1473    /// unsafe { drop(Box::from_raw(ptr)) }
1474    /// ```
1475    ///
1476    /// # Safety
1477    ///
1478    /// * `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this
1479    ///   can be bigger than `align_of::<*mut T>()`).
1480    /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
1481    /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
1482    ///   allowed to mix atomic and non-atomic accesses, or atomic accesses of different sizes,
1483    ///   without synchronization.
1484    ///
1485    /// [valid]: crate::ptr#safety
1486    /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
1487    #[inline]
1488    #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
1489    #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
1490    pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a AtomicPtr<T> {
1491        // SAFETY: guaranteed by the caller
1492        unsafe { &*ptr.cast() }
1493    }
1494
1495    /// Returns a mutable reference to the underlying pointer.
1496    ///
1497    /// This is safe because the mutable reference guarantees that no other threads are
1498    /// concurrently accessing the atomic data.
1499    ///
1500    /// # Examples
1501    ///
1502    /// ```
1503    /// use std::sync::atomic::{AtomicPtr, Ordering};
1504    ///
1505    /// let mut data = 10;
1506    /// let mut atomic_ptr = AtomicPtr::new(&mut data);
1507    /// let mut other_data = 5;
1508    /// *atomic_ptr.get_mut() = &mut other_data;
1509    /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
1510    /// ```
1511    #[inline]
1512    #[stable(feature = "atomic_access", since = "1.15.0")]
1513    pub fn get_mut(&mut self) -> &mut *mut T {
1514        self.p.get_mut()
1515    }
1516
1517    /// Gets atomic access to a pointer.
1518    ///
1519    /// # Examples
1520    ///
1521    /// ```
1522    /// #![feature(atomic_from_mut)]
1523    /// use std::sync::atomic::{AtomicPtr, Ordering};
1524    ///
1525    /// let mut data = 123;
1526    /// let mut some_ptr = &mut data as *mut i32;
1527    /// let a = AtomicPtr::from_mut(&mut some_ptr);
1528    /// let mut other_data = 456;
1529    /// a.store(&mut other_data, Ordering::Relaxed);
1530    /// assert_eq!(unsafe { *some_ptr }, 456);
1531    /// ```
1532    #[inline]
1533    #[cfg(target_has_atomic_equal_alignment = "ptr")]
1534    #[unstable(feature = "atomic_from_mut", issue = "76314")]
1535    pub fn from_mut(v: &mut *mut T) -> &mut Self {
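        // Compile-time check: this only builds if `AtomicPtr<()>` and `*mut ()` have
        // the same alignment (the array below must have length zero for the empty
        // pattern to match).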
1536        let [] = [(); align_of::<AtomicPtr<()>>() - align_of::<*mut ()>()];
1537        // SAFETY:
1538        //  - the mutable reference guarantees unique ownership.
1539        //  - the alignment of `*mut T` and `Self` is the same on all platforms
1540        //    supported by rust, as verified above.
1541        unsafe { &mut *(v as *mut *mut T as *mut Self) }
1542    }
1543
1544    /// Gets non-atomic access to a `&mut [AtomicPtr]` slice.
1545    ///
1546    /// This is safe because the mutable reference guarantees that no other threads are
1547    /// concurrently accessing the atomic data.
1548    ///
1549    /// # Examples
1550    ///
1551    /// ```
1552    /// #![feature(atomic_from_mut)]
1553    /// use std::ptr::null_mut;
1554    /// use std::sync::atomic::{AtomicPtr, Ordering};
1555    ///
1556    /// let mut some_ptrs = [const { AtomicPtr::new(null_mut::<String>()) }; 10];
1557    ///
1558    /// let view: &mut [*mut String] = AtomicPtr::get_mut_slice(&mut some_ptrs);
1559    /// assert_eq!(view, [null_mut::<String>(); 10]);
1560    /// view
1561    ///     .iter_mut()
1562    ///     .enumerate()
1563    ///     .for_each(|(i, ptr)| *ptr = Box::into_raw(Box::new(format!("iteration#{i}"))));
1564    ///
1565    /// std::thread::scope(|s| {
1566    ///     for ptr in &some_ptrs {
1567    ///         s.spawn(move || {
1568    ///             let ptr = ptr.load(Ordering::Relaxed);
1569    ///             assert!(!ptr.is_null());
1570    ///
1571    ///             let name = unsafe { Box::from_raw(ptr) };
1572    ///             println!("Hello, {name}!");
1573    ///         });
1574    ///     }
1575    /// });
1576    /// ```
1577    #[inline]
1578    #[unstable(feature = "atomic_from_mut", issue = "76314")]
1579    pub fn get_mut_slice(this: &mut [Self]) -> &mut [*mut T] {
1580        // SAFETY: the mutable reference guarantees unique ownership.
1581        unsafe { &mut *(this as *mut [Self] as *mut [*mut T]) }
1582    }
1583
1584    /// Gets atomic access to a slice of pointers.
1585    ///
1586    /// # Examples
1587    ///
1588    /// ```
1589    /// #![feature(atomic_from_mut)]
1590    /// use std::ptr::null_mut;
1591    /// use std::sync::atomic::{AtomicPtr, Ordering};
1592    ///
1593    /// let mut some_ptrs = [null_mut::<String>(); 10];
1594    /// let a = &*AtomicPtr::from_mut_slice(&mut some_ptrs);
1595    /// std::thread::scope(|s| {
1596    ///     for i in 0..a.len() {
1597    ///         s.spawn(move || {
1598    ///             let name = Box::new(format!("thread{i}"));
1599    ///             a[i].store(Box::into_raw(name), Ordering::Relaxed);
1600    ///         });
1601    ///     }
1602    /// });
1603    /// for p in some_ptrs {
1604    ///     assert!(!p.is_null());
1605    ///     let name = unsafe { Box::from_raw(p) };
1606    ///     println!("Hello, {name}!");
1607    /// }
1608    /// ```
1609    #[inline]
1610    #[cfg(target_has_atomic_equal_alignment = "ptr")]
1611    #[unstable(feature = "atomic_from_mut", issue = "76314")]
1612    pub fn from_mut_slice(v: &mut [*mut T]) -> &mut [Self] {
1613        // SAFETY:
1614        //  - the mutable reference guarantees unique ownership.
1615        //  - the alignment of `*mut T` and `Self` is the same on all platforms
1616        //    supported by rust, as verified by the compile-time check in `from_mut` above.
1617        unsafe { &mut *(v as *mut [*mut T] as *mut [Self]) }
1618    }
1619
1620    /// Consumes the atomic and returns the contained value.
1621    ///
1622    /// This is safe because passing `self` by value guarantees that no other threads are
1623    /// concurrently accessing the atomic data.
1624    ///
1625    /// # Examples
1626    ///
1627    /// ```
1628    /// use std::sync::atomic::AtomicPtr;
1629    ///
1630    /// let mut data = 5;
1631    /// let atomic_ptr = AtomicPtr::new(&mut data);
1632    /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
1633    /// ```
1634    #[inline]
1635    #[stable(feature = "atomic_access", since = "1.15.0")]
1636    #[rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0")]
1637    pub const fn into_inner(self) -> *mut T {
1638        self.p.into_inner()
1639    }
1640
1641    /// Loads a value from the pointer.
1642    ///
1643    /// `load` takes an [`Ordering`] argument which describes the memory ordering
1644    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
1645    ///
1646    /// # Panics
1647    ///
1648    /// Panics if `order` is [`Release`] or [`AcqRel`].
1649    ///
1650    /// # Examples
1651    ///
1652    /// ```
1653    /// use std::sync::atomic::{AtomicPtr, Ordering};
1654    ///
1655    /// let ptr = &mut 5;
1656    /// let some_ptr = AtomicPtr::new(ptr);
1657    ///
1658    /// let value = some_ptr.load(Ordering::Relaxed);
1659    /// ```
1660    #[inline]
1661    #[stable(feature = "rust1", since = "1.0.0")]
1662    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1663    pub fn load(&self, order: Ordering) -> *mut T {
1664        // SAFETY: data races are prevented by atomic intrinsics.
1665        unsafe { atomic_load(self.p.get(), order) }
1666    }
1667
1668    /// Stores a value into the pointer.
1669    ///
1670    /// `store` takes an [`Ordering`] argument which describes the memory ordering
1671    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
1672    ///
1673    /// # Panics
1674    ///
1675    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
1676    ///
1677    /// # Examples
1678    ///
1679    /// ```
1680    /// use std::sync::atomic::{AtomicPtr, Ordering};
1681    ///
1682    /// let ptr = &mut 5;
1683    /// let some_ptr = AtomicPtr::new(ptr);
1684    ///
1685    /// let other_ptr = &mut 10;
1686    ///
1687    /// some_ptr.store(other_ptr, Ordering::Relaxed);
1688    /// ```
1689    #[inline]
1690    #[stable(feature = "rust1", since = "1.0.0")]
1691    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1692    pub fn store(&self, ptr: *mut T, order: Ordering) {
1693        // SAFETY: data races are prevented by atomic intrinsics.
1694        unsafe {
1695            atomic_store(self.p.get(), ptr, order);
1696        }
1697    }
1698
1699    /// Stores a value into the pointer, returning the previous value.
1700    ///
1701    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
1702    /// of this operation. All ordering modes are possible. Note that using
1703    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1704    /// using [`Release`] makes the load part [`Relaxed`].
1705    ///
1706    /// **Note:** This method is only available on platforms that support atomic
1707    /// operations on pointers.
1708    ///
1709    /// # Examples
1710    ///
1711    /// ```
1712    /// use std::sync::atomic::{AtomicPtr, Ordering};
1713    ///
1714    /// let ptr = &mut 5;
1715    /// let some_ptr = AtomicPtr::new(ptr);
1716    ///
1717    /// let other_ptr = &mut 10;
1718    ///
1719    /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed);
1720    /// ```
1721    #[inline]
1722    #[stable(feature = "rust1", since = "1.0.0")]
1723    #[cfg(target_has_atomic = "ptr")]
1724    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1725    pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
1726        // SAFETY: data races are prevented by atomic intrinsics.
1727        unsafe { atomic_swap(self.p.get(), ptr, order) }
1728    }
1729
1730    /// Stores a value into the pointer if the current value is the same as the `current` value.
1731    ///
1732    /// The return value is always the previous value. If it is equal to `current`, then the value
1733    /// was updated.
1734    ///
1735    /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
1736    /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
1737    /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
1738    /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
1739    /// happens, and using [`Release`] makes the load part [`Relaxed`].
1740    ///
1741    /// **Note:** This method is only available on platforms that support atomic
1742    /// operations on pointers.
1743    ///
1744    /// # Migrating to `compare_exchange` and `compare_exchange_weak`
1745    ///
1746    /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
1747    /// memory orderings:
1748    ///
1749    /// Original | Success | Failure
1750    /// -------- | ------- | -------
1751    /// Relaxed  | Relaxed | Relaxed
1752    /// Acquire  | Acquire | Acquire
1753    /// Release  | Release | Relaxed
1754    /// AcqRel   | AcqRel  | Acquire
1755    /// SeqCst   | SeqCst  | SeqCst
1756    ///
1757    /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
1758    /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
1759    /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
1760    /// rather than to infer success vs failure based on the value that was read.
1761    ///
1762    /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
1763    /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
1764    /// which allows the compiler to generate better assembly code when the compare and swap
1765    /// is used in a loop.
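    ///
    /// As a minimal sketch of such a migration (the `old`/`new` names here are
    /// illustrative, not part of the original API), a `SeqCst` `compare_and_swap`
    /// can be expressed via `compare_exchange` as:
    ///
    /// ```
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let some_ptr = AtomicPtr::new(&mut 5);
    /// let new = &mut 10;
    ///
    /// let old = some_ptr.load(Ordering::Relaxed);
    /// // Equivalent to `some_ptr.compare_and_swap(old, new, Ordering::SeqCst)`,
    /// // per the success/failure mapping in the table above.
    /// let prev = some_ptr
    ///     .compare_exchange(old, new, Ordering::SeqCst, Ordering::SeqCst)
    ///     .unwrap_or_else(|x| x);
    /// assert_eq!(prev, old);
    /// ```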
1766    ///
1767    /// # Examples
1768    ///
1769    /// ```
1770    /// use std::sync::atomic::{AtomicPtr, Ordering};
1771    ///
1772    /// let ptr = &mut 5;
1773    /// let some_ptr = AtomicPtr::new(ptr);
1774    ///
1775    /// let other_ptr = &mut 10;
1776    ///
1777    /// let value = some_ptr.compare_and_swap(ptr, other_ptr, Ordering::Relaxed);
1778    /// ```
1779    #[inline]
1780    #[stable(feature = "rust1", since = "1.0.0")]
1781    #[deprecated(
1782        since = "1.50.0",
1783        note = "Use `compare_exchange` or `compare_exchange_weak` instead"
1784    )]
1785    #[cfg(target_has_atomic = "ptr")]
1786    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1787    pub fn compare_and_swap(&self, current: *mut T, new: *mut T, order: Ordering) -> *mut T {
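        // `strongest_failure_ordering` derives the failure ordering from `order`,
        // matching the success/failure mapping in the migration table above.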
1788        match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
1789            Ok(x) => x,
1790            Err(x) => x,
1791        }
1792    }
1793
1794    /// Stores a value into the pointer if the current value is the same as the `current` value.
1795    ///
1796    /// The return value is a result indicating whether the new value was written and containing
1797    /// the previous value. On success this value is guaranteed to be equal to `current`.
1798    ///
1799    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
1800    /// ordering of this operation. `success` describes the required ordering for the
1801    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1802    /// `failure` describes the required ordering for the load operation that takes place when
1803    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1804    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1805    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1806    ///
1807    /// **Note:** This method is only available on platforms that support atomic
1808    /// operations on pointers.
1809    ///
1810    /// # Examples
1811    ///
1812    /// ```
1813    /// use std::sync::atomic::{AtomicPtr, Ordering};
1814    ///
1815    /// let ptr = &mut 5;
1816    /// let some_ptr = AtomicPtr::new(ptr);
1817    ///
1818    /// let other_ptr = &mut 10;
1819    ///
1820    /// let value = some_ptr.compare_exchange(ptr, other_ptr,
1821    ///                                       Ordering::SeqCst, Ordering::Relaxed);
1822    /// ```
1823    #[inline]
1824    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
1825    #[cfg(target_has_atomic = "ptr")]
1826    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1827    pub fn compare_exchange(
1828        &self,
1829        current: *mut T,
1830        new: *mut T,
1831        success: Ordering,
1832        failure: Ordering,
1833    ) -> Result<*mut T, *mut T> {
1834        // SAFETY: data races are prevented by atomic intrinsics.
1835        unsafe { atomic_compare_exchange(self.p.get(), current, new, success, failure) }
1836    }
1837
1838    /// Stores a value into the pointer if the current value is the same as the `current` value.
1839    ///
1840    /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the
1841    /// comparison succeeds, which can result in more efficient code on some platforms. The
1842    /// return value is a result indicating whether the new value was written and containing the
1843    /// previous value.
1844    ///
1845    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
1846    /// ordering of this operation. `success` describes the required ordering for the
1847    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1848    /// `failure` describes the required ordering for the load operation that takes place when
1849    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1850    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1851    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1852    ///
1853    /// **Note:** This method is only available on platforms that support atomic
1854    /// operations on pointers.
1855    ///
1856    /// # Examples
1857    ///
1858    /// ```
1859    /// use std::sync::atomic::{AtomicPtr, Ordering};
1860    ///
1861    /// let some_ptr = AtomicPtr::new(&mut 5);
1862    ///
1863    /// let new = &mut 10;
1864    /// let mut old = some_ptr.load(Ordering::Relaxed);
1865    /// loop {
1866    ///     match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
1867    ///         Ok(_) => break,
1868    ///         Err(x) => old = x,
1869    ///     }
1870    /// }
1871    /// ```
1872    #[inline]
1873    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
1874    #[cfg(target_has_atomic = "ptr")]
1875    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1876    pub fn compare_exchange_weak(
1877        &self,
1878        current: *mut T,
1879        new: *mut T,
1880        success: Ordering,
1881        failure: Ordering,
1882    ) -> Result<*mut T, *mut T> {
1883        // SAFETY: This intrinsic is unsafe because it operates on a raw pointer
1884        // but we know for sure that the pointer is valid (we just got it from
1885        // an `UnsafeCell` that we have by reference) and the atomic operation
1886        // itself allows us to safely mutate the `UnsafeCell` contents.
1887        unsafe { atomic_compare_exchange_weak(self.p.get(), current, new, success, failure) }
1888    }
1889
1890    /// Fetches the value, and applies a function to it that returns an optional
1891    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1892    /// returned `Some(_)`, else `Err(previous_value)`.
1893    ///
1894    /// Note: This may call the function multiple times if the value has been
1895    /// changed from other threads in the meantime, as long as the function
1896    /// returns `Some(_)`, but the function will have been applied only once to
1897    /// the stored value.
1898    ///
1899    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1900    /// ordering of this operation. The first describes the required ordering for
1901    /// when the operation finally succeeds while the second describes the
1902    /// required ordering for loads. These correspond to the success and failure
1903    /// orderings of [`AtomicPtr::compare_exchange`] respectively.
1904    ///
1905    /// Using [`Acquire`] as success ordering makes the store part of this
1906    /// operation [`Relaxed`], and using [`Release`] makes the final successful
1907    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1908    /// [`Acquire`] or [`Relaxed`].
1909    ///
1910    /// **Note:** This method is only available on platforms that support atomic
1911    /// operations on pointers.
1912    ///
1913    /// # Considerations
1914    ///
1915    /// This method is not magic; it is not provided by the hardware.
1916    /// It is implemented in terms of [`AtomicPtr::compare_exchange_weak`], and suffers from the same drawbacks.
1917    /// In particular, this method will not circumvent the [ABA Problem].
1918    ///
1919    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1920    ///
1921    /// # Examples
1922    ///
1923    /// ```rust
1924    /// use std::sync::atomic::{AtomicPtr, Ordering};
1925    ///
1926    /// let ptr: *mut _ = &mut 5;
1927    /// let some_ptr = AtomicPtr::new(ptr);
1928    ///
1929    /// let new: *mut _ = &mut 10;
1930    /// assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
1931    /// let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
1932    ///     if x == ptr {
1933    ///         Some(new)
1934    ///     } else {
1935    ///         None
1936    ///     }
1937    /// });
1938    /// assert_eq!(result, Ok(ptr));
1939    /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
1940    /// ```
1941    #[inline]
1942    #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
1943    #[cfg(target_has_atomic = "ptr")]
1944    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1945    pub fn fetch_update<F>(
1946        &self,
1947        set_order: Ordering,
1948        fetch_order: Ordering,
1949        mut f: F,
1950    ) -> Result<*mut T, *mut T>
1951    where
1952        F: FnMut(*mut T) -> Option<*mut T>,
1953    {
1954        let mut prev = self.load(fetch_order);
1955        while let Some(next) = f(prev) {
1956            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1957                x @ Ok(_) => return x,
1958                Err(next_prev) => prev = next_prev,
1959            }
1960        }
1961        Err(prev)
1962    }

1963    /// Fetches the value, and applies a function to it that returns an optional
1964    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1965    /// returned `Some(_)`, else `Err(previous_value)`.
1966    ///
1967    /// See also: [`update`](`AtomicPtr::update`).
1968    ///
1969    /// Note: This may call the function multiple times if the value has been
1970    /// changed from other threads in the meantime, as long as the function
1971    /// returns `Some(_)`, but the function will have been applied only once to
1972    /// the stored value.
1973    ///
1974    /// `try_update` takes two [`Ordering`] arguments to describe the memory
1975    /// ordering of this operation. The first describes the required ordering for
1976    /// when the operation finally succeeds while the second describes the
1977    /// required ordering for loads. These correspond to the success and failure
1978    /// orderings of [`AtomicPtr::compare_exchange`] respectively.
1979    ///
1980    /// Using [`Acquire`] as success ordering makes the store part of this
1981    /// operation [`Relaxed`], and using [`Release`] makes the final successful
1982    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1983    /// [`Acquire`] or [`Relaxed`].
1984    ///
1985    /// **Note:** This method is only available on platforms that support atomic
1986    /// operations on pointers.
1987    ///
1988    /// # Considerations
1989    ///
1990    /// This method is not magic; it is not provided by the hardware.
1991    /// It is implemented in terms of [`AtomicPtr::compare_exchange_weak`], and suffers from the same drawbacks.
1992    /// In particular, this method will not circumvent the [ABA Problem].
1993    ///
1994    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1995    ///
1996    /// # Examples
1997    ///
1998    /// ```rust
1999    /// #![feature(atomic_try_update)]
2000    /// use std::sync::atomic::{AtomicPtr, Ordering};
2001    ///
2002    /// let ptr: *mut _ = &mut 5;
2003    /// let some_ptr = AtomicPtr::new(ptr);
2004    ///
2005    /// let new: *mut _ = &mut 10;
2006    /// assert_eq!(some_ptr.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
2007    /// let result = some_ptr.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
2008    ///     if x == ptr {
2009    ///         Some(new)
2010    ///     } else {
2011    ///         None
2012    ///     }
2013    /// });
2014    /// assert_eq!(result, Ok(ptr));
2015    /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2016    /// ```
2017    #[inline]
2018    #[unstable(feature = "atomic_try_update", issue = "135894")]
2019    #[cfg(target_has_atomic = "ptr")]
2020    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2021    pub fn try_update(
2022        &self,
2023        set_order: Ordering,
2024        fetch_order: Ordering,
2025        f: impl FnMut(*mut T) -> Option<*mut T>,
2026    ) -> Result<*mut T, *mut T> {
2027        // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`;
2028        //      when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`.
2029        self.fetch_update(set_order, fetch_order, f)
2030    }
2031
2032    /// Fetches the value, and applies a function to it that returns a new value.
2033    /// The new value is stored and the old value is returned.
2034    ///
2035    /// See also: [`try_update`](`AtomicPtr::try_update`).
2036    ///
2037    /// Note: This may call the function multiple times if the value has been changed from other threads in
2038    /// the meantime, but the function will have been applied only once to the stored value.
2039    ///
2040    /// `update` takes two [`Ordering`] arguments to describe the memory
2041    /// ordering of this operation. The first describes the required ordering for
2042    /// when the operation finally succeeds while the second describes the
2043    /// required ordering for loads. These correspond to the success and failure
2044    /// orderings of [`AtomicPtr::compare_exchange`] respectively.
2045    ///
2046    /// Using [`Acquire`] as success ordering makes the store part
2047    /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
2048    /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2049    ///
2050    /// **Note:** This method is only available on platforms that support atomic
2051    /// operations on pointers.
2052    ///
2053    /// # Considerations
2054    ///
2055    /// This method is not magic; it is not provided by the hardware.
2056    /// It is implemented in terms of [`AtomicPtr::compare_exchange_weak`], and suffers from the same drawbacks.
2057    /// In particular, this method will not circumvent the [ABA Problem].
2058    ///
2059    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2060    ///
2061    /// # Examples
2062    ///
2063    /// ```rust
2064    /// #![feature(atomic_try_update)]
2065    ///
2066    /// use std::sync::atomic::{AtomicPtr, Ordering};
2067    ///
2068    /// let ptr: *mut _ = &mut 5;
2069    /// let some_ptr = AtomicPtr::new(ptr);
2070    ///
2071    /// let new: *mut _ = &mut 10;
2072    /// let result = some_ptr.update(Ordering::SeqCst, Ordering::SeqCst, |_| new);
2073    /// assert_eq!(result, ptr);
2074    /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2075    /// ```
2076    #[inline]
2077    #[unstable(feature = "atomic_try_update", issue = "135894")]
2078    #[cfg(target_has_atomic = "ptr")]
2079    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2080    pub fn update(
2081        &self,
2082        set_order: Ordering,
2083        fetch_order: Ordering,
2084        mut f: impl FnMut(*mut T) -> *mut T,
2085    ) -> *mut T {
2086        let mut prev = self.load(fetch_order);
2087        loop {
2088            match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
2089                Ok(x) => break x,
2090                Err(next_prev) => prev = next_prev,
2091            }
2092        }
2093    }
2094
2095    /// Offsets the pointer's address by adding `val` (in units of `T`),
2096    /// returning the previous pointer.
2097    ///
2098    /// This is equivalent to using [`wrapping_add`] to atomically perform the
2099    /// equivalent of `ptr = ptr.wrapping_add(val);`.
2100    ///
2101    /// This method operates in units of `T`, which means that it cannot be used
2102    /// to offset the pointer by an amount which is not a multiple of
2103    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2104    /// work with a deliberately misaligned pointer. In such cases, you may use
2105    /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead.
2106    ///
2107    /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the
2108    /// memory ordering of this operation. All ordering modes are possible. Note
2109    /// that using [`Acquire`] makes the store part of this operation
2110    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2111    ///
2112    /// **Note**: This method is only available on platforms that support atomic
2113    /// operations on [`AtomicPtr`].
2114    ///
2115    /// [`wrapping_add`]: pointer::wrapping_add
2116    ///
2117    /// # Examples
2118    ///
2119    /// ```
2120    /// #![feature(strict_provenance_atomic_ptr)]
2121    /// use core::sync::atomic::{AtomicPtr, Ordering};
2122    ///
2123    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2124    /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0);
2125    /// // Note: units of `size_of::<i64>()`.
2126    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8);
2127    /// ```
2128    #[inline]
2129    #[cfg(target_has_atomic = "ptr")]
2130    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2131    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2132    pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T {
2133        self.fetch_byte_add(val.wrapping_mul(size_of::<T>()), order)
2134    }
2135
2136    /// Offsets the pointer's address by subtracting `val` (in units of `T`),
2137    /// returning the previous pointer.
2138    ///
2139    /// This is equivalent to using [`wrapping_sub`] to atomically perform the
2140    /// equivalent of `ptr = ptr.wrapping_sub(val);`.
2141    ///
2142    /// This method operates in units of `T`, which means that it cannot be used
2143    /// to offset the pointer by an amount which is not a multiple of
2144    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2145    /// work with a deliberately misaligned pointer. In such cases, you may use
2146    /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead.
2147    ///
2148    /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory
2149    /// ordering of this operation. All ordering modes are possible. Note that
2150    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2151    /// and using [`Release`] makes the load part [`Relaxed`].
2152    ///
2153    /// **Note**: This method is only available on platforms that support atomic
2154    /// operations on [`AtomicPtr`].
2155    ///
2156    /// [`wrapping_sub`]: pointer::wrapping_sub
2157    ///
2158    /// # Examples
2159    ///
2160    /// ```
2161    /// #![feature(strict_provenance_atomic_ptr)]
2162    /// use core::sync::atomic::{AtomicPtr, Ordering};
2163    ///
2164    /// let array = [1i32, 2i32];
2165    /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _);
2166    ///
2167    /// assert!(core::ptr::eq(
2168    ///     atom.fetch_ptr_sub(1, Ordering::Relaxed),
2169    ///     &array[1],
2170    /// ));
2171    /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0]));
2172    /// ```
2173    #[inline]
2174    #[cfg(target_has_atomic = "ptr")]
2175    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2176    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2177    pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T {
2178        self.fetch_byte_sub(val.wrapping_mul(size_of::<T>()), order)
2179    }
2180
2181    /// Offsets the pointer's address by adding `val` *bytes*, returning the
2182    /// previous pointer.
2183    ///
2184    /// This is equivalent to using [`wrapping_byte_add`] to atomically
2185    /// perform `ptr = ptr.wrapping_byte_add(val)`.
2186    ///
2187    /// `fetch_byte_add` takes an [`Ordering`] argument which describes the
2188    /// memory ordering of this operation. All ordering modes are possible. Note
2189    /// that using [`Acquire`] makes the store part of this operation
2190    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2191    ///
2192    /// **Note**: This method is only available on platforms that support atomic
2193    /// operations on [`AtomicPtr`].
2194    ///
2195    /// [`wrapping_byte_add`]: pointer::wrapping_byte_add
2196    ///
2197    /// # Examples
2198    ///
2199    /// ```
2200    /// #![feature(strict_provenance_atomic_ptr)]
2201    /// use core::sync::atomic::{AtomicPtr, Ordering};
2202    ///
2203    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2204    /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0);
2205    /// // Note: in units of bytes, not `size_of::<i64>()`.
2206    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1);
2207    /// ```
2208    #[inline]
2209    #[cfg(target_has_atomic = "ptr")]
2210    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2211    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2212    pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T {
2213        // SAFETY: data races are prevented by atomic intrinsics.
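        // The byte count is passed as a provenance-free pointer so the
        // pointer-typed atomic intrinsic can add it as a plain address value.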
2214        unsafe { atomic_add(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() }
2215    }
2216
2217    /// Offsets the pointer's address by subtracting `val` *bytes*, returning the
2218    /// previous pointer.
2219    ///
2220    /// This is equivalent to using [`wrapping_byte_sub`] to atomically
2221    /// perform `ptr = ptr.wrapping_byte_sub(val)`.
2222    ///
2223    /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the
2224    /// memory ordering of this operation. All ordering modes are possible. Note
2225    /// that using [`Acquire`] makes the store part of this operation
2226    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2227    ///
2228    /// **Note**: This method is only available on platforms that support atomic
2229    /// operations on [`AtomicPtr`].
2230    ///
2231    /// [`wrapping_byte_sub`]: pointer::wrapping_byte_sub
2232    ///
2233    /// # Examples
2234    ///
2235    /// ```
2236    /// #![feature(strict_provenance_atomic_ptr)]
2237    /// use core::sync::atomic::{AtomicPtr, Ordering};
2238    ///
2239    /// let atom = AtomicPtr::<i64>::new(core::ptr::without_provenance_mut(1));
2240    /// assert_eq!(atom.fetch_byte_sub(1, Ordering::Relaxed).addr(), 1);
2241    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 0);
2242    /// ```
2243    #[inline]
2244    #[cfg(target_has_atomic = "ptr")]
2245    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2246    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2247    pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T {
2248        // SAFETY: data races are prevented by atomic intrinsics.
2249        unsafe { atomic_sub(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() }
2250    }
2251
2252    /// Performs a bitwise "or" operation on the address of the current pointer,
2253    /// and the argument `val`, and stores a pointer with provenance of the
2254    /// current pointer and the resulting address.
2255    ///
2256    /// This is equivalent to using [`map_addr`] to atomically perform
2257    /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged
2258    /// pointer schemes to atomically set tag bits.
2259    ///
2260    /// **Caveat**: This operation returns the previous value. To compute the
2261    /// stored value without losing provenance, you may use [`map_addr`]. For
2262    /// example: `a.fetch_or(val).map_addr(|a| a | val)`.
2263    ///
2264    /// `fetch_or` takes an [`Ordering`] argument which describes the memory
2265    /// ordering of this operation. All ordering modes are possible. Note that
2266    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2267    /// and using [`Release`] makes the load part [`Relaxed`].
2268    ///
2269    /// **Note**: This method is only available on platforms that support atomic
2270    /// operations on [`AtomicPtr`].
2271    ///
2272    /// This API and its claimed semantics are part of the Strict Provenance
2273    /// experiment, see the [module documentation for `ptr`][crate::ptr] for
2274    /// details.
2275    ///
2276    /// [`map_addr`]: pointer::map_addr
2277    ///
2278    /// # Examples
2279    ///
2280    /// ```
2281    /// #![feature(strict_provenance_atomic_ptr)]
2282    /// use core::sync::atomic::{AtomicPtr, Ordering};
2283    ///
2284    /// let pointer = &mut 3i64 as *mut i64;
2285    ///
2286    /// let atom = AtomicPtr::<i64>::new(pointer);
2287    /// // Tag the bottom bit of the pointer.
2288    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
2289    /// // Extract and untag.
2290    /// let tagged = atom.load(Ordering::Relaxed);
2291    /// assert_eq!(tagged.addr() & 1, 1);
2292    /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2293    /// ```
2294    #[inline]
2295    #[cfg(target_has_atomic = "ptr")]
2296    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2297    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2298    pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T {
2299        // SAFETY: data races are prevented by atomic intrinsics.
2300        unsafe { atomic_or(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() }
2301    }
2302
2303    /// Performs a bitwise "and" operation on the address of the current
2304    /// pointer, and the argument `val`, and stores a pointer with provenance of
2305    /// the current pointer and the resulting address.
2306    ///
2307    /// This is equivalent to using [`map_addr`] to atomically perform
2308    /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged
2309    /// pointer schemes to atomically unset tag bits.
2310    ///
2311    /// **Caveat**: This operation returns the previous value. To compute the
2312    /// stored value without losing provenance, you may use [`map_addr`]. For
2313    /// example: `a.fetch_and(val).map_addr(|a| a & val)`.
2314    ///
2315    /// `fetch_and` takes an [`Ordering`] argument which describes the memory
2316    /// ordering of this operation. All ordering modes are possible. Note that
2317    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2318    /// and using [`Release`] makes the load part [`Relaxed`].
2319    ///
2320    /// **Note**: This method is only available on platforms that support atomic
2321    /// operations on [`AtomicPtr`].
2322    ///
2323    /// This API and its claimed semantics are part of the Strict Provenance
2324    /// experiment, see the [module documentation for `ptr`][crate::ptr] for
2325    /// details.
2326    ///
2327    /// [`map_addr`]: pointer::map_addr
2328    ///
2329    /// # Examples
2330    ///
2331    /// ```
2332    /// #![feature(strict_provenance_atomic_ptr)]
2333    /// use core::sync::atomic::{AtomicPtr, Ordering};
2334    ///
2335    /// let pointer = &mut 3i64 as *mut i64;
2336    /// // A tagged pointer
2337    /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2338    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
2339    /// // Untag, and extract the previously tagged pointer.
2340    /// let untagged = atom.fetch_and(!1, Ordering::Relaxed)
2341    ///     .map_addr(|a| a & !1);
2342    /// assert_eq!(untagged, pointer);
2343    /// ```
2344    #[inline]
2345    #[cfg(target_has_atomic = "ptr")]
2346    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2347    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2348    pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T {
2349        // SAFETY: data races are prevented by atomic intrinsics.
2350        unsafe { atomic_and(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() }
2351    }
2352
2353    /// Performs a bitwise "xor" operation on the address of the current
2354    /// pointer, and the argument `val`, and stores a pointer with provenance of
2355    /// the current pointer and the resulting address.
2356    ///
2357    /// This is equivalent to using [`map_addr`] to atomically perform
2358    /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged
2359    /// pointer schemes to atomically toggle tag bits.
2360    ///
2361    /// **Caveat**: This operation returns the previous value. To compute the
2362    /// stored value without losing provenance, you may use [`map_addr`]. For
2363    /// example: `a.fetch_xor(val).map_addr(|a| a ^ val)`.
2364    ///
2365    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory
2366    /// ordering of this operation. All ordering modes are possible. Note that
2367    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2368    /// and using [`Release`] makes the load part [`Relaxed`].
2369    ///
2370    /// **Note**: This method is only available on platforms that support atomic
2371    /// operations on [`AtomicPtr`].
2372    ///
2373    /// This API and its claimed semantics are part of the Strict Provenance
2374    /// experiment, see the [module documentation for `ptr`][crate::ptr] for
2375    /// details.
2376    ///
2377    /// [`map_addr`]: pointer::map_addr
2378    ///
2379    /// # Examples
2380    ///
2381    /// ```
2382    /// #![feature(strict_provenance_atomic_ptr)]
2383    /// use core::sync::atomic::{AtomicPtr, Ordering};
2384    ///
2385    /// let pointer = &mut 3i64 as *mut i64;
2386    /// let atom = AtomicPtr::<i64>::new(pointer);
2387    ///
2388    /// // Toggle a tag bit on the pointer.
2389    /// atom.fetch_xor(1, Ordering::Relaxed);
2390    /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2391    /// ```
2392    #[inline]
2393    #[cfg(target_has_atomic = "ptr")]
2394    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2395    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2396    pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T {
2397        // SAFETY: data races are prevented by atomic intrinsics.
2398        unsafe { atomic_xor(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() }
2399    }
2400
2401    /// Returns a mutable pointer to the underlying pointer.
2402    ///
2403    /// Doing non-atomic reads and writes on the resulting pointer can be a data race.
2404    /// This method is mostly useful for FFI, where the function signature may use
2405    /// `*mut *mut T` instead of `&AtomicPtr<T>`.
2406    ///
2407    /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
2408    /// atomic types work with interior mutability. All modifications of an atomic change the value
2409    /// through a shared reference, and can do so safely as long as they use atomic operations. Any
2410    /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
2411    /// restriction: operations on it must be atomic.
2412    ///
2413    /// # Examples
2414    ///
2415    /// ```ignore (extern-declaration)
2416    /// use std::sync::atomic::AtomicPtr;
2417    ///
2418    /// extern "C" {
2419    ///     fn my_atomic_op(arg: *mut *mut u32);
2420    /// }
2421    ///
2422    /// let mut value = 17;
2423    /// let atomic = AtomicPtr::new(&mut value);
2424    ///
2425    /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
2426    /// unsafe {
2427    ///     my_atomic_op(atomic.as_ptr());
2428    /// }
2429    /// ```
2430    #[inline]
2431    #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
2432    #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
2433    #[rustc_never_returns_null_ptr]
2434    pub const fn as_ptr(&self) -> *mut *mut T {
2435        self.p.get()
2436    }
2437}
2438
2439#[cfg(target_has_atomic_load_store = "8")]
2440#[stable(feature = "atomic_bool_from", since = "1.24.0")]
2441impl From<bool> for AtomicBool {
2442    /// Converts a `bool` into an `AtomicBool`.
2443    ///
2444    /// # Examples
2445    ///
2446    /// ```
2447    /// use std::sync::atomic::AtomicBool;
2448    /// let atomic_bool = AtomicBool::from(true);
2449    /// assert_eq!(format!("{atomic_bool:?}"), "true")
2450    /// ```
2451    #[inline]
2452    fn from(b: bool) -> Self {
2453        Self::new(b)
2454    }
2455}
2456
2457#[cfg(target_has_atomic_load_store = "ptr")]
2458#[stable(feature = "atomic_from", since = "1.23.0")]
2459impl<T> From<*mut T> for AtomicPtr<T> {
2460    /// Converts a `*mut T` into an `AtomicPtr<T>`.
2461    #[inline]
2462    fn from(p: *mut T) -> Self {
2463        Self::new(p)
2464    }
2465}
2466
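// Expands to the `yes` tokens when the integer type is `u8` or `i8`, and to the
// `no` tokens otherwise; used below to tailor the generated docs for the 8-bit
// atomics.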
2467#[allow(unused_macros)] // This macro ends up being unused on some architectures.
2468macro_rules! if_8_bit {
2469    (u8, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($yes)*)?) };
2470    (i8, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($yes)*)?) };
2471    ($_:ident, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($no)*)?) };
2472}
2473
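// Generates one atomic integer type: `$atomic_type` wraps `$int_type` with alignment
// `$align`; the `$stable*` / `$const_stable*` meta arguments supply stability
// attributes for the generated items, and `$s_int_type` names the primitive in the
// generated docs.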
2474#[cfg(target_has_atomic_load_store)]
2475macro_rules! atomic_int {
2476    ($cfg_cas:meta,
2477     $cfg_align:meta,
2478     $stable:meta,
2479     $stable_cxchg:meta,
2480     $stable_debug:meta,
2481     $stable_access:meta,
2482     $stable_from:meta,
2483     $stable_nand:meta,
2484     $const_stable_new:meta,
2485     $const_stable_into_inner:meta,
2486     $diagnostic_item:meta,
2487     $s_int_type:literal,
2488     $extra_feature:expr,
2489     $min_fn:ident, $max_fn:ident,
2490     $align:expr,
2491     $int_type:ident $atomic_type:ident) => {
2492        /// An integer type which can be safely shared between threads.
2493        ///
2494        /// This type has the same
2495        #[doc = if_8_bit!(
2496            $int_type,
2497            yes = ["size, alignment, and bit validity"],
2498            no = ["size and bit validity"],
2499        )]
2500        /// as the underlying integer type, [`
2501        #[doc = $s_int_type]
2502        /// `].
2503        #[doc = if_8_bit! {
2504            $int_type,
2505            no = [
2506                "However, the alignment of this type is always equal to its ",
2507                "size, even on targets where [`", $s_int_type, "`] has a ",
2508                "lesser alignment."
2509            ],
2510        }]
2511        ///
2512        /// For more about the differences between atomic types and
2513        /// non-atomic types as well as information about the portability of
2514        /// this type, please see the [module-level documentation].
2515        ///
2516        /// **Note:** This type is only available on platforms that support
2517        /// atomic loads and stores of [`
2518        #[doc = $s_int_type]
2519        /// `].
2520        ///
2521        /// [module-level documentation]: crate::sync::atomic
2522        #[$stable]
2523        #[$diagnostic_item]
2524        #[repr(C, align($align))]
2525        pub struct $atomic_type {
2526            v: UnsafeCell<$int_type>,
2527        }
2528
2529        #[$stable]
2530        impl Default for $atomic_type {
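            /// Creates an atomic integer initialized to `0`.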
2531            #[inline]
2532            fn default() -> Self {
2533                Self::new(Default::default())
2534            }
2535        }
2536
2537        #[$stable_from]
2538        impl From<$int_type> for $atomic_type {
2539            #[doc = concat!("Converts an `", stringify!($int_type), "` into an `", stringify!($atomic_type), "`.")]
2540            #[inline]
2541            fn from(v: $int_type) -> Self { Self::new(v) }
2542        }
2543
2544        #[$stable_debug]
2545        impl fmt::Debug for $atomic_type {
2546            fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
2547                fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
2548            }
2549        }
2550
2551        // Send is implicitly implemented.
2552        #[$stable]
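        // SAFETY: shared (`&self`) access only happens through atomic operations, and exclusive
        // access (`get_mut`, `into_inner`) requires `&mut self` or ownership.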
2553        unsafe impl Sync for $atomic_type {}
2554
2555        impl $atomic_type {
2556            /// Creates a new atomic integer.
2557            ///
2558            /// # Examples
2559            ///
2560            /// ```
2561            #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2562            ///
2563            #[doc = concat!("let atomic_forty_two = ", stringify!($atomic_type), "::new(42);")]
2564            /// ```
2565            #[inline]
2566            #[$stable]
2567            #[$const_stable_new]
2568            #[must_use]
2569            pub const fn new(v: $int_type) -> Self {
2570                Self {v: UnsafeCell::new(v)}
2571            }
2572
2573            /// Creates a new reference to an atomic integer from a pointer.
2574            ///
2575            /// # Examples
2576            ///
2577            /// ```
2578            #[doc = concat!($extra_feature, "use std::sync::atomic::{self, ", stringify!($atomic_type), "};")]
2579            ///
2580            /// // Get a pointer to an allocated value
2581            #[doc = concat!("let ptr: *mut ", stringify!($int_type), " = Box::into_raw(Box::new(0));")]
2582            ///
2583            #[doc = concat!("assert!(ptr.cast::<", stringify!($atomic_type), ">().is_aligned());")]
2584            ///
2585            /// {
2586            ///     // Create an atomic view of the allocated value
2587            // SAFETY: this is a doc comment, tidy, it can't hurt you (also guaranteed by the construction of `ptr` and the assert above)
2588            #[doc = concat!("    let atomic = unsafe {", stringify!($atomic_type), "::from_ptr(ptr) };")]
2589            ///
2590            ///     // Use `atomic` for atomic operations, possibly share it with other threads
2591            ///     atomic.store(1, atomic::Ordering::Relaxed);
2592            /// }
2593            ///
2594            /// // It's ok to non-atomically access the value behind `ptr`,
2595            /// // since the reference to the atomic ended its lifetime in the block above
2596            /// assert_eq!(unsafe { *ptr }, 1);
2597            ///
2598            /// // Deallocate the value
2599            /// unsafe { drop(Box::from_raw(ptr)) }
2600            /// ```
2601            ///
2602            /// # Safety
2603            ///
2604            /// * `ptr` must be aligned to
2605            #[doc = concat!("  `align_of::<", stringify!($atomic_type), ">()`")]
2606            #[doc = if_8_bit!{
2607                $int_type,
2608                yes = [
2609                    "  (note that this is always true, since `align_of::<",
2610                    stringify!($atomic_type), ">() == 1`)."
2611                ],
2612                no = [
2613                    "  (note that on some platforms this can be bigger than `align_of::<",
2614                    stringify!($int_type), ">()`)."
2615                ],
2616            }]
2617            /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2618            /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
2619            ///   allowed to mix atomic and non-atomic accesses, or atomic accesses of different sizes,
2620            ///   without synchronization.
2621            ///
2622            /// [valid]: crate::ptr#safety
2623            /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
2624            #[inline]
2625            #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
2626            #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
2627            pub const unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a $atomic_type {
2628                // SAFETY: guaranteed by the caller
2629                unsafe { &*ptr.cast() }
2630            }
2631
2632
2633            /// Returns a mutable reference to the underlying integer.
2634            ///
2635            /// This is safe because the mutable reference guarantees that no other threads are
2636            /// concurrently accessing the atomic data.
2637            ///
2638            /// # Examples
2639            ///
2640            /// ```
2641            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2642            ///
2643            #[doc = concat!("let mut some_var = ", stringify!($atomic_type), "::new(10);")]
2644            /// assert_eq!(*some_var.get_mut(), 10);
2645            /// *some_var.get_mut() = 5;
2646            /// assert_eq!(some_var.load(Ordering::SeqCst), 5);
2647            /// ```
2648            #[inline]
2649            #[$stable_access]
2650            pub fn get_mut(&mut self) -> &mut $int_type {
2651                self.v.get_mut()
2652            }
2653
2654            #[doc = concat!("Get atomic access to a `&mut ", stringify!($int_type), "`.")]
2655            ///
2656            #[doc = if_8_bit! {
2657                $int_type,
2658                no = [
2659                    "**Note:** This function is only available on targets where `",
2660                    stringify!($atomic_type), "` has the same alignment as `", stringify!($int_type), "`."
2661                ],
2662            }]
2663            ///
2664            /// # Examples
2665            ///
2666            /// ```
2667            /// #![feature(atomic_from_mut)]
2668            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2669            ///
2670            /// let mut some_int = 123;
2671            #[doc = concat!("let a = ", stringify!($atomic_type), "::from_mut(&mut some_int);")]
2672            /// a.store(100, Ordering::Relaxed);
2673            /// assert_eq!(some_int, 100);
2674            /// ```
2675            ///
2676            #[inline]
2677            #[$cfg_align]
2678            #[unstable(feature = "atomic_from_mut", issue = "76314")]
2679            pub fn from_mut(v: &mut $int_type) -> &mut Self {
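                // Compile-time alignment check: the pattern `[]` only matches a zero-length
                // array, so this fails to compile unless the two alignments are equal.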
2680                let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
2681                // SAFETY:
2682                //  - the mutable reference guarantees unique ownership.
2683                //  - the alignment of `$int_type` and `Self` is the
2684                //    same, as promised by $cfg_align and verified above.
2685                unsafe { &mut *(v as *mut $int_type as *mut Self) }
2686            }
2687
2688            #[doc = concat!("Get non-atomic access to a `&mut [", stringify!($atomic_type), "]` slice")]
2689            ///
2690            /// This is safe because the mutable reference guarantees that no other threads are
2691            /// concurrently accessing the atomic data.
2692            ///
2693            /// # Examples
2694            ///
2695            /// ```
2696            /// #![feature(atomic_from_mut)]
2697            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2698            ///
2699            #[doc = concat!("let mut some_ints = [const { ", stringify!($atomic_type), "::new(0) }; 10];")]
2700            ///
2701            #[doc = concat!("let view: &mut [", stringify!($int_type), "] = ", stringify!($atomic_type), "::get_mut_slice(&mut some_ints);")]
2702            /// assert_eq!(view, [0; 10]);
2703            /// view
2704            ///     .iter_mut()
2705            ///     .enumerate()
2706            ///     .for_each(|(idx, int)| *int = idx as _);
2707            ///
2708            /// std::thread::scope(|s| {
2709            ///     some_ints
2710            ///         .iter()
2711            ///         .enumerate()
2712            ///         .for_each(|(idx, int)| {
2713            ///             s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
2714            ///         })
2715            /// });
2716            /// ```
2717            #[inline]
2718            #[unstable(feature = "atomic_from_mut", issue = "76314")]
2719            pub fn get_mut_slice(this: &mut [Self]) -> &mut [$int_type] {
2720                // SAFETY: the mutable reference guarantees unique ownership.
2721                unsafe { &mut *(this as *mut [Self] as *mut [$int_type]) }
2722            }
2723
2724            #[doc = concat!("Get atomic access to a `&mut [", stringify!($int_type), "]` slice.")]
2725            ///
2726            /// # Examples
2727            ///
2728            /// ```
2729            /// #![feature(atomic_from_mut)]
2730            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2731            ///
2732            /// let mut some_ints = [0; 10];
2733            #[doc = concat!("let a = &*", stringify!($atomic_type), "::from_mut_slice(&mut some_ints);")]
2734            /// std::thread::scope(|s| {
2735            ///     for i in 0..a.len() {
2736            ///         s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
2737            ///     }
2738            /// });
2739            /// for (i, n) in some_ints.into_iter().enumerate() {
2740            ///     assert_eq!(i, n as usize);
2741            /// }
2742            /// ```
2743            #[inline]
2744            #[$cfg_align]
2745            #[unstable(feature = "atomic_from_mut", issue = "76314")]
2746            pub fn from_mut_slice(v: &mut [$int_type]) -> &mut [Self] {
2747                let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
2748                // SAFETY:
2749                //  - the mutable reference guarantees unique ownership.
2750                //  - the alignment of `$int_type` and `Self` is the
2751                //    same, as promised by $cfg_align and verified above.
2752                unsafe { &mut *(v as *mut [$int_type] as *mut [Self]) }
2753            }
2754
2755            /// Consumes the atomic and returns the contained value.
2756            ///
2757            /// This is safe because passing `self` by value guarantees that no other threads are
2758            /// concurrently accessing the atomic data.
2759            ///
2760            /// # Examples
2761            ///
2762            /// ```
2763            #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2764            ///
2765            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2766            /// assert_eq!(some_var.into_inner(), 5);
2767            /// ```
2768            #[inline]
2769            #[$stable_access]
2770            #[$const_stable_into_inner]
2771            pub const fn into_inner(self) -> $int_type {
2772                self.v.into_inner()
2773            }
2774
2775            /// Loads a value from the atomic integer.
2776            ///
2777            /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2778            /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
2779            ///
2780            /// # Panics
2781            ///
2782            /// Panics if `order` is [`Release`] or [`AcqRel`].
2783            ///
2784            /// # Examples
2785            ///
2786            /// ```
2787            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2788            ///
2789            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2790            ///
2791            /// assert_eq!(some_var.load(Ordering::Relaxed), 5);
2792            /// ```
2793            #[inline]
2794            #[$stable]
2795            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2796            pub fn load(&self, order: Ordering) -> $int_type {
2797                // SAFETY: data races are prevented by atomic intrinsics.
2798                unsafe { atomic_load(self.v.get(), order) }
2799            }
2800
2801            /// Stores a value into the atomic integer.
2802            ///
2803            /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2804            /// Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
2805            ///
2806            /// # Panics
2807            ///
2808            /// Panics if `order` is [`Acquire`] or [`AcqRel`].
2809            ///
2810            /// # Examples
2811            ///
2812            /// ```
2813            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2814            ///
2815            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2816            ///
2817            /// some_var.store(10, Ordering::Relaxed);
2818            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2819            /// ```
2820            #[inline]
2821            #[$stable]
2822            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2823            pub fn store(&self, val: $int_type, order: Ordering) {
2824                // SAFETY: data races are prevented by atomic intrinsics.
2825                unsafe { atomic_store(self.v.get(), val, order); }
2826            }
2827
2828            /// Stores a value into the atomic integer, returning the previous value.
2829            ///
2830            /// `swap` takes an [`Ordering`] argument which describes the memory ordering
2831            /// of this operation. All ordering modes are possible. Note that using
2832            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2833            /// using [`Release`] makes the load part [`Relaxed`].
2834            ///
2835            /// **Note**: This method is only available on platforms that support atomic operations on
2836            #[doc = concat!("[`", $s_int_type, "`].")]
2837            ///
2838            /// # Examples
2839            ///
2840            /// ```
2841            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2842            ///
2843            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2844            ///
2845            /// assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
2846            /// ```
2847            #[inline]
2848            #[$stable]
2849            #[$cfg_cas]
2850            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2851            pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
2852                // SAFETY: data races are prevented by atomic intrinsics.
2853                unsafe { atomic_swap(self.v.get(), val, order) }
2854            }
2855
2856            /// Stores a value into the atomic integer if the current value is the same as
2857            /// the `current` value.
2858            ///
2859            /// The return value is always the previous value. If it is equal to `current`, then the
2860            /// value was updated.
2861            ///
2862            /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
2863            /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
2864            /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
2865            /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
2866            /// happens, and using [`Release`] makes the load part [`Relaxed`].
2867            ///
2868            /// **Note**: This method is only available on platforms that support atomic operations on
2869            #[doc = concat!("[`", $s_int_type, "`].")]
2870            ///
2871            /// # Migrating to `compare_exchange` and `compare_exchange_weak`
2872            ///
2873            /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
2874            /// memory orderings:
2875            ///
2876            /// Original | Success | Failure
2877            /// -------- | ------- | -------
2878            /// Relaxed  | Relaxed | Relaxed
2879            /// Acquire  | Acquire | Acquire
2880            /// Release  | Release | Relaxed
2881            /// AcqRel   | AcqRel  | Acquire
2882            /// SeqCst   | SeqCst  | SeqCst
2883            ///
2884            /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
2885            /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
2886            /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
2887            /// rather than to infer success vs failure based on the value that was read.
2888            ///
2889            /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
2890            /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
2891            /// which allows the compiler to generate better assembly code when the compare and swap
2892            /// is used in a loop.
2893            ///
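            /// For example, under this mapping a call such as
            /// `compare_and_swap(old, new, Ordering::AcqRel)` becomes
            /// `compare_exchange(old, new, Ordering::AcqRel, Ordering::Acquire).unwrap_or_else(|x| x)`.
            ///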
2894            /// # Examples
2895            ///
2896            /// ```
2897            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2898            ///
2899            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2900            ///
2901            /// assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
2902            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2903            ///
2904            /// assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
2905            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2906            /// ```
2907            #[inline]
2908            #[$stable]
2909            #[deprecated(
2910                since = "1.50.0",
2911                note = "Use `compare_exchange` or `compare_exchange_weak` instead")
2912            ]
2913            #[$cfg_cas]
2914            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2915            pub fn compare_and_swap(&self,
2916                                    current: $int_type,
2917                                    new: $int_type,
2918                                    order: Ordering) -> $int_type {
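                // Map `order` to the (success, failure) orderings described in the migration
                // table above.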
2919                match self.compare_exchange(current,
2920                                            new,
2921                                            order,
2922                                            strongest_failure_ordering(order)) {
2923                    Ok(x) => x,
2924                    Err(x) => x,
2925                }
2926            }
2927
2928            /// Stores a value into the atomic integer if the current value is the same as
2929            /// the `current` value.
2930            ///
2931            /// The return value is a result indicating whether the new value was written and
2932            /// containing the previous value. On success this value is guaranteed to be equal to
2933            /// `current`.
2934            ///
2935            /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
2936            /// ordering of this operation. `success` describes the required ordering for the
2937            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
2938            /// `failure` describes the required ordering for the load operation that takes place when
2939            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
2940            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
2941            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2942            ///
2943            /// **Note**: This method is only available on platforms that support atomic operations on
2944            #[doc = concat!("[`", $s_int_type, "`].")]
2945            ///
2946            /// # Examples
2947            ///
2948            /// ```
2949            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2950            ///
2951            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2952            ///
2953            /// assert_eq!(some_var.compare_exchange(5, 10,
2954            ///                                      Ordering::Acquire,
2955            ///                                      Ordering::Relaxed),
2956            ///            Ok(5));
2957            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2958            ///
2959            /// assert_eq!(some_var.compare_exchange(6, 12,
2960            ///                                      Ordering::SeqCst,
2961            ///                                      Ordering::Acquire),
2962            ///            Err(10));
2963            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2964            /// ```
2965            #[inline]
2966            #[$stable_cxchg]
2967            #[$cfg_cas]
2968            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2969            pub fn compare_exchange(&self,
2970                                    current: $int_type,
2971                                    new: $int_type,
2972                                    success: Ordering,
2973                                    failure: Ordering) -> Result<$int_type, $int_type> {
2974                // SAFETY: data races are prevented by atomic intrinsics.
2975                unsafe { atomic_compare_exchange(self.v.get(), current, new, success, failure) }
2976            }
2977
2978            /// Stores a value into the atomic integer if the current value is the same as
2979            /// the `current` value.
2980            ///
2981            #[doc = concat!("Unlike [`", stringify!($atomic_type), "::compare_exchange`],")]
2982            /// this function is allowed to spuriously fail even
2983            /// when the comparison succeeds, which can result in more efficient code on some
2984            /// platforms. The return value is a result indicating whether the new value was
2985            /// written and containing the previous value.
2986            ///
2987            /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
2988            /// ordering of this operation. `success` describes the required ordering for the
2989            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
2990            /// `failure` describes the required ordering for the load operation that takes place when
2991            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
2992            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
2993            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2994            ///
2995            /// **Note**: This method is only available on platforms that support atomic operations on
2996            #[doc = concat!("[`", $s_int_type, "`].")]
2997            ///
2998            /// # Examples
2999            ///
3000            /// ```
3001            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3002            ///
3003            #[doc = concat!("let val = ", stringify!($atomic_type), "::new(4);")]
3004            ///
3005            /// let mut old = val.load(Ordering::Relaxed);
3006            /// loop {
3007            ///     let new = old * 2;
3008            ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
3009            ///         Ok(_) => break,
3010            ///         Err(x) => old = x,
3011            ///     }
3012            /// }
3013            /// ```
3014            #[inline]
3015            #[$stable_cxchg]
3016            #[$cfg_cas]
3017            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3018            pub fn compare_exchange_weak(&self,
3019                                         current: $int_type,
3020                                         new: $int_type,
3021                                         success: Ordering,
3022                                         failure: Ordering) -> Result<$int_type, $int_type> {
3023                // SAFETY: data races are prevented by atomic intrinsics.
3024                unsafe {
3025                    atomic_compare_exchange_weak(self.v.get(), current, new, success, failure)
3026                }
3027            }
3028
3029            /// Adds to the current value, returning the previous value.
3030            ///
3031            /// This operation wraps around on overflow.
3032            ///
3033            /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering
3034            /// of this operation. All ordering modes are possible. Note that using
3035            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3036            /// using [`Release`] makes the load part [`Relaxed`].
3037            ///
3038            /// **Note**: This method is only available on platforms that support atomic operations on
3039            #[doc = concat!("[`", $s_int_type, "`].")]
3040            ///
3041            /// # Examples
3042            ///
3043            /// ```
3044            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3045            ///
3046            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0);")]
3047            /// assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
3048            /// assert_eq!(foo.load(Ordering::SeqCst), 10);
3049            /// ```
3050            #[inline]
3051            #[$stable]
3052            #[$cfg_cas]
3053            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3054            pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
3055                // SAFETY: data races are prevented by atomic intrinsics.
3056                unsafe { atomic_add(self.v.get(), val, order) }
3057            }
3058
3059            /// Subtracts from the current value, returning the previous value.
3060            ///
3061            /// This operation wraps around on overflow.
3062            ///
3063            /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
3064            /// of this operation. All ordering modes are possible. Note that using
3065            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3066            /// using [`Release`] makes the load part [`Relaxed`].
3067            ///
3068            /// **Note**: This method is only available on platforms that support atomic operations on
3069            #[doc = concat!("[`", $s_int_type, "`].")]
3070            ///
3071            /// # Examples
3072            ///
3073            /// ```
3074            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3075            ///
3076            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(20);")]
3077            /// assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
3078            /// assert_eq!(foo.load(Ordering::SeqCst), 10);
3079            /// ```
3080            #[inline]
3081            #[$stable]
3082            #[$cfg_cas]
3083            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3084            pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
3085                // SAFETY: data races are prevented by atomic intrinsics.
3086                unsafe { atomic_sub(self.v.get(), val, order) }
3087            }
3088
3089            /// Bitwise "and" with the current value.
3090            ///
3091            /// Performs a bitwise "and" operation on the current value and the argument `val`, and
3092            /// sets the new value to the result.
3093            ///
3094            /// Returns the previous value.
3095            ///
3096            /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
3097            /// of this operation. All ordering modes are possible. Note that using
3098            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3099            /// using [`Release`] makes the load part [`Relaxed`].
3100            ///
3101            /// **Note**: This method is only available on platforms that support atomic operations on
3102            #[doc = concat!("[`", $s_int_type, "`].")]
3103            ///
3104            /// # Examples
3105            ///
3106            /// ```
3107            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3108            ///
3109            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3110            /// assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
3111            /// assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3112            /// ```
3113            #[inline]
3114            #[$stable]
3115            #[$cfg_cas]
3116            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3117            pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
3118                // SAFETY: data races are prevented by atomic intrinsics.
3119                unsafe { atomic_and(self.v.get(), val, order) }
3120            }
3121
3122            /// Bitwise "nand" with the current value.
3123            ///
3124            /// Performs a bitwise "nand" operation on the current value and the argument `val`, and
3125            /// sets the new value to the result.
3126            ///
3127            /// Returns the previous value.
3128            ///
3129            /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
3130            /// of this operation. All ordering modes are possible. Note that using
3131            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3132            /// using [`Release`] makes the load part [`Relaxed`].
3133            ///
3134            /// **Note**: This method is only available on platforms that support atomic operations on
3135            #[doc = concat!("[`", $s_int_type, "`].")]
3136            ///
3137            /// # Examples
3138            ///
3139            /// ```
3140            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3141            ///
3142            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0x13);")]
3143            /// assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
3144            /// assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
3145            /// ```
3146            #[inline]
3147            #[$stable_nand]
3148            #[$cfg_cas]
3149            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3150            pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
3151                // SAFETY: data races are prevented by atomic intrinsics.
3152                unsafe { atomic_nand(self.v.get(), val, order) }
3153            }
3154
3155            /// Bitwise "or" with the current value.
3156            ///
3157            /// Performs a bitwise "or" operation on the current value and the argument `val`, and
3158            /// sets the new value to the result.
3159            ///
3160            /// Returns the previous value.
3161            ///
3162            /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
3163            /// of this operation. All ordering modes are possible. Note that using
3164            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3165            /// using [`Release`] makes the load part [`Relaxed`].
3166            ///
3167            /// **Note**: This method is only available on platforms that support atomic operations on
3168            #[doc = concat!("[`", $s_int_type, "`].")]
3169            ///
3170            /// # Examples
3171            ///
3172            /// ```
3173            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3174            ///
3175            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3176            /// assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
3177            /// assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3178            /// ```
3179            #[inline]
3180            #[$stable]
3181            #[$cfg_cas]
3182            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3183            pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
3184                // SAFETY: data races are prevented by atomic intrinsics.
3185                unsafe { atomic_or(self.v.get(), val, order) }
3186            }
3187
3188            /// Bitwise "xor" with the current value.
3189            ///
3190            /// Performs a bitwise "xor" operation on the current value and the argument `val`, and
3191            /// sets the new value to the result.
3192            ///
3193            /// Returns the previous value.
3194            ///
3195            /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
3196            /// of this operation. All ordering modes are possible. Note that using
3197            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3198            /// using [`Release`] makes the load part [`Relaxed`].
3199            ///
3200            /// **Note**: This method is only available on platforms that support atomic operations on
3201            #[doc = concat!("[`", $s_int_type, "`].")]
3202            ///
3203            /// # Examples
3204            ///
3205            /// ```
3206            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3207            ///
3208            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3209            /// assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
3210            /// assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3211            /// ```
3212            #[inline]
3213            #[$stable]
3214            #[$cfg_cas]
3215            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3216            pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
3217                // SAFETY: data races are prevented by atomic intrinsics.
3218                unsafe { atomic_xor(self.v.get(), val, order) }
3219            }
3220
3221            /// Fetches the value, and applies a function to it that returns an optional
3222            /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3223            /// `Err(previous_value)`.
3224            ///
3225            /// Note: This may call the function multiple times if the value has been changed from other threads in
3226            /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
3227            /// only once to the stored value.
3228            ///
3229            /// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3230            /// The first describes the required ordering for when the operation finally succeeds while the second
3231            /// describes the required ordering for loads. These correspond to the success and failure orderings of
3232            #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
3233            /// respectively.
3234            ///
3235            /// Using [`Acquire`] as success ordering makes the store part
3236            /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3237            /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3238            ///
3239            /// **Note**: This method is only available on platforms that support atomic operations on
3240            #[doc = concat!("[`", $s_int_type, "`].")]
3241            ///
3242            /// # Considerations
3243            ///
3244            /// This method is not magic; it is not provided by the hardware.
3245            /// It is implemented in terms of
3246            #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange_weak`],")]
3247            /// and suffers from the same drawbacks.
3248            /// In particular, this method will not circumvent the [ABA Problem].
3249            ///
3250            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3251            ///
3252            /// # Examples
3253            ///
3254            /// ```rust
3255            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3256            ///
3257            #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
3258            /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3259            /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3260            /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3261            /// assert_eq!(x.load(Ordering::SeqCst), 9);
3262            /// ```
3263            #[inline]
3264            #[stable(feature = "no_more_cas", since = "1.45.0")]
3265            #[$cfg_cas]
3266            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3267            pub fn fetch_update<F>(&self,
3268                                   set_order: Ordering,
3269                                   fetch_order: Ordering,
3270                                   mut f: F) -> Result<$int_type, $int_type>
3271            where F: FnMut($int_type) -> Option<$int_type> {
3272                let mut prev = self.load(fetch_order);
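                // CAS loop: rerun `f` on the freshly observed value after each failed
                // (possibly spurious) `compare_exchange_weak`; stop once `f` returns `None`.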
3273                while let Some(next) = f(prev) {
3274                    match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
3275                        x @ Ok(_) => return x,
3276                        Err(next_prev) => prev = next_prev
3277                    }
3278                }
3279                Err(prev)
3280            }
3281
3282            /// Fetches the value, and applies a function to it that returns an optional
3283            /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3284            /// `Err(previous_value)`.
3285            ///
3286            #[doc = concat!("See also: [`update`](`", stringify!($atomic_type), "::update`).")]
3287            ///
3288            /// Note: This may call the function multiple times if the value has been changed from other threads in
3289            /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
3290            /// only once to the stored value.
3291            ///
3292            /// `try_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3293            /// The first describes the required ordering for when the operation finally succeeds while the second
3294            /// describes the required ordering for loads. These correspond to the success and failure orderings of
3295            #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
3296            /// respectively.
3297            ///
3298            /// Using [`Acquire`] as success ordering makes the store part
3299            /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3300            /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3301            ///
3302            /// **Note**: This method is only available on platforms that support atomic operations on
3303            #[doc = concat!("[`", $s_int_type, "`].")]
3304            ///
3305            /// # Considerations
3306            ///
3307            /// This method is not magic; it is not provided by the hardware.
3308            /// It is implemented in terms of
3309            #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange_weak`],")]
3310            /// and suffers from the same drawbacks.
3311            /// In particular, this method will not circumvent the [ABA Problem].
3312            ///
3313            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3314            ///
3315            /// # Examples
3316            ///
3317            /// ```rust
3318            /// #![feature(atomic_try_update)]
3319            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3320            ///
3321            #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
3322            /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3323            /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3324            /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3325            /// assert_eq!(x.load(Ordering::SeqCst), 9);
3326            /// ```
3327            #[inline]
3328            #[unstable(feature = "atomic_try_update", issue = "135894")]
3329            #[$cfg_cas]
3330            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3331            pub fn try_update(
3332                &self,
3333                set_order: Ordering,
3334                fetch_order: Ordering,
3335                f: impl FnMut($int_type) -> Option<$int_type>,
3336            ) -> Result<$int_type, $int_type> {
3337                // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`;
3338                //      when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`.
3339                self.fetch_update(set_order, fetch_order, f)
3340            }
3341
3342            /// Fetches the value, and applies a function to it that returns a new value.
3343            /// The new value is stored and the old value is returned.
3344            ///
3345            #[doc = concat!("See also: [`try_update`](`", stringify!($atomic_type), "::try_update`).")]
3346            ///
3347            /// Note: This may call the function multiple times if the value has been changed from other threads in
3348            /// the meantime, but the function will have been applied only once to the stored value.
3349            ///
3350            /// `update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3351            /// The first describes the required ordering for when the operation finally succeeds while the second
3352            /// describes the required ordering for loads. These correspond to the success and failure orderings of
3353            #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
3354            /// respectively.
3355            ///
3356            /// Using [`Acquire`] as success ordering makes the store part
3357            /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3358            /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3359            ///
3360            /// **Note**: This method is only available on platforms that support atomic operations on
3361            #[doc = concat!("[`", $s_int_type, "`].")]
3362            ///
3363            /// # Considerations
3364            ///
3365            /// This method is not magic; it is not provided by the hardware.
3366            /// It is implemented in terms of
3367            #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange_weak`],")]
3368            /// and suffers from the same drawbacks.
3369            /// In particular, this method will not circumvent the [ABA Problem].
3370            ///
3371            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3372            ///
3373            /// # Examples
3374            ///
3375            /// ```rust
3376            /// #![feature(atomic_try_update)]
3377            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3378            ///
3379            #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
3380            /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| x + 1), 7);
3381            /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| x + 1), 8);
3382            /// assert_eq!(x.load(Ordering::SeqCst), 9);
3383            /// ```
3384            #[inline]
3385            #[unstable(feature = "atomic_try_update", issue = "135894")]
3386            #[$cfg_cas]
3387            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3388            pub fn update(
3389                &self,
3390                set_order: Ordering,
3391                fetch_order: Ordering,
3392                mut f: impl FnMut($int_type) -> $int_type,
3393            ) -> $int_type {
3394                let mut prev = self.load(fetch_order);
3395                loop {
3396                    match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
3397                        Ok(x) => break x,
3398                        Err(next_prev) => prev = next_prev,
3399                    }
3400                }
3401            }
3402
3403            /// Maximum with the current value.
3404            ///
3405            /// Finds the maximum of the current value and the argument `val`, and
3406            /// sets the new value to the result.
3407            ///
3408            /// Returns the previous value.
3409            ///
3410            /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering
3411            /// of this operation. All ordering modes are possible. Note that using
3412            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3413            /// using [`Release`] makes the load part [`Relaxed`].
3414            ///
3415            /// **Note**: This method is only available on platforms that support atomic operations on
3416            #[doc = concat!("[`", $s_int_type, "`].")]
3417            ///
3418            /// # Examples
3419            ///
3420            /// ```
3421            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3422            ///
3423            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3424            /// assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
3425            /// assert_eq!(foo.load(Ordering::SeqCst), 42);
3426            /// ```
3427            ///
3428            /// If you want to obtain the maximum value in one step, you can use the following:
3429            ///
3430            /// ```
3431            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3432            ///
3433            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3434            /// let bar = 42;
3435            /// let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
3436            /// assert!(max_foo == 42);
3437            /// ```
3438            #[inline]
3439            #[stable(feature = "atomic_min_max", since = "1.45.0")]
3440            #[$cfg_cas]
3441            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3442            pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
3443                // SAFETY: data races are prevented by atomic intrinsics.
3444                unsafe { $max_fn(self.v.get(), val, order) }
3445            }
3446
3447            /// Minimum with the current value.
3448            ///
3449            /// Finds the minimum of the current value and the argument `val`, and
3450            /// sets the new value to the result.
3451            ///
3452            /// Returns the previous value.
3453            ///
3454            /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering
3455            /// of this operation. All ordering modes are possible. Note that using
3456            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3457            /// using [`Release`] makes the load part [`Relaxed`].
3458            ///
3459            /// **Note**: This method is only available on platforms that support atomic operations on
3460            #[doc = concat!("[`", $s_int_type, "`].")]
3461            ///
3462            /// # Examples
3463            ///
3464            /// ```
3465            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3466            ///
3467            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3468            /// assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
3469            /// assert_eq!(foo.load(Ordering::Relaxed), 23);
3470            /// assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
3471            /// assert_eq!(foo.load(Ordering::Relaxed), 22);
3472            /// ```
3473            ///
3474            /// If you want to obtain the minimum value in one step, you can use the following:
3475            ///
3476            /// ```
3477            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3478            ///
3479            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3480            /// let bar = 12;
3481            /// let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
3482            /// assert_eq!(min_foo, 12);
3483            /// ```
3484            #[inline]
3485            #[stable(feature = "atomic_min_max", since = "1.45.0")]
3486            #[$cfg_cas]
3487            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3488            pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
3489                // SAFETY: data races are prevented by atomic intrinsics.
3490                unsafe { $min_fn(self.v.get(), val, order) }
3491            }
3492
3493            /// Returns a mutable pointer to the underlying integer.
3494            ///
3495            /// Doing non-atomic reads and writes on the resulting integer can be a data race.
3496            /// This method is mostly useful for FFI, where the function signature may use
3497            #[doc = concat!("`*mut ", stringify!($int_type), "` instead of `&", stringify!($atomic_type), "`.")]
3498            ///
3499            /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
3500            /// atomic types work with interior mutability. All modifications of an atomic change the value
3501            /// through a shared reference, and can do so safely as long as they use atomic operations. Any
3502            /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
3503            /// restriction: operations on it must be atomic.
3504            ///
3505            /// # Examples
3506            ///
3507            /// ```ignore (extern-declaration)
3508            /// # fn main() {
3509            #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
3510            ///
3511            /// extern "C" {
3512            #[doc = concat!("    fn my_atomic_op(arg: *mut ", stringify!($int_type), ");")]
3513            /// }
3514            ///
3515            #[doc = concat!("let atomic = ", stringify!($atomic_type), "::new(1);")]
3516            ///
3517            /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
3518            /// unsafe {
3519            ///     my_atomic_op(atomic.as_ptr());
3520            /// }
3521            /// # }
3522            /// ```
3523            #[inline]
3524            #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
3525            #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
3526            #[rustc_never_returns_null_ptr]
3527            pub const fn as_ptr(&self) -> *mut $int_type {
3528                self.v.get()
3529            }
3530        }
3531    }
3532}
3533
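// One `atomic_int!` invocation per supported width and signedness, each gated on the target's
// support for atomic loads and stores of that size.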
3534#[cfg(target_has_atomic_load_store = "8")]
3535atomic_int! {
3536    cfg(target_has_atomic = "8"),
3537    cfg(target_has_atomic_equal_alignment = "8"),
3538    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3539    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3540    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3541    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3542    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3543    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3544    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3545    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3546    rustc_diagnostic_item = "AtomicI8",
3547    "i8",
3548    "",
3549    atomic_min, atomic_max,
3550    1,
3551    i8 AtomicI8
3552}
3553#[cfg(target_has_atomic_load_store = "8")]
3554atomic_int! {
3555    cfg(target_has_atomic = "8"),
3556    cfg(target_has_atomic_equal_alignment = "8"),
3557    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3558    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3559    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3560    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3561    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3562    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3563    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3564    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3565    rustc_diagnostic_item = "AtomicU8",
3566    "u8",
3567    "",
3568    atomic_umin, atomic_umax,
3569    1,
3570    u8 AtomicU8
3571}
3572#[cfg(target_has_atomic_load_store = "16")]
3573atomic_int! {
3574    cfg(target_has_atomic = "16"),
3575    cfg(target_has_atomic_equal_alignment = "16"),
3576    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3577    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3578    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3579    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3580    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3581    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3582    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3583    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3584    rustc_diagnostic_item = "AtomicI16",
3585    "i16",
3586    "",
3587    atomic_min, atomic_max,
3588    2,
3589    i16 AtomicI16
3590}
3591#[cfg(target_has_atomic_load_store = "16")]
3592atomic_int! {
3593    cfg(target_has_atomic = "16"),
3594    cfg(target_has_atomic_equal_alignment = "16"),
3595    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3596    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3597    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3598    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3599    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3600    stable(feature = "integer_atomics_stable", since = "1.34.0"),
3601    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3602    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3603    rustc_diagnostic_item = "AtomicU16",
3604    "u16",
3605    "",
3606    atomic_umin, atomic_umax,
3607    2,
3608    u16 AtomicU16
3609}
#[cfg(target_has_atomic_load_store = "32")]
atomic_int! {
    cfg(target_has_atomic = "32"),
    cfg(target_has_atomic_equal_alignment = "32"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
    rustc_diagnostic_item = "AtomicI32",
    "i32",
    "",
    atomic_min, atomic_max,
    4,
    i32 AtomicI32
}
#[cfg(target_has_atomic_load_store = "32")]
atomic_int! {
    cfg(target_has_atomic = "32"),
    cfg(target_has_atomic_equal_alignment = "32"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
    rustc_diagnostic_item = "AtomicU32",
    "u32",
    "",
    atomic_umin, atomic_umax,
    4,
    u32 AtomicU32
}
#[cfg(target_has_atomic_load_store = "64")]
atomic_int! {
    cfg(target_has_atomic = "64"),
    cfg(target_has_atomic_equal_alignment = "64"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
    rustc_diagnostic_item = "AtomicI64",
    "i64",
    "",
    atomic_min, atomic_max,
    8,
    i64 AtomicI64
}
#[cfg(target_has_atomic_load_store = "64")]
atomic_int! {
    cfg(target_has_atomic = "64"),
    cfg(target_has_atomic_equal_alignment = "64"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
    rustc_diagnostic_item = "AtomicU64",
    "u64",
    "",
    atomic_umin, atomic_umax,
    8,
    u64 AtomicU64
}
#[cfg(target_has_atomic_load_store = "128")]
atomic_int! {
    cfg(target_has_atomic = "128"),
    cfg(target_has_atomic_equal_alignment = "128"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
    rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
    rustc_diagnostic_item = "AtomicI128",
    "i128",
    "#![feature(integer_atomics)]\n\n",
    atomic_min, atomic_max,
    16,
    i128 AtomicI128
}
#[cfg(target_has_atomic_load_store = "128")]
atomic_int! {
    cfg(target_has_atomic = "128"),
    cfg(target_has_atomic_equal_alignment = "128"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
    rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
    rustc_diagnostic_item = "AtomicU128",
    "u128",
    "#![feature(integer_atomics)]\n\n",
    atomic_umin, atomic_umax,
    16,
    u128 AtomicU128
}

#[cfg(target_has_atomic_load_store = "ptr")]
macro_rules! atomic_int_ptr_sized {
    ( $($target_pointer_width:literal $align:literal)* ) => { $(
        #[cfg(target_pointer_width = $target_pointer_width)]
        atomic_int! {
            cfg(target_has_atomic = "ptr"),
            cfg(target_has_atomic_equal_alignment = "ptr"),
            stable(feature = "rust1", since = "1.0.0"),
            stable(feature = "extended_compare_and_swap", since = "1.10.0"),
            stable(feature = "atomic_debug", since = "1.3.0"),
            stable(feature = "atomic_access", since = "1.15.0"),
            stable(feature = "atomic_from", since = "1.23.0"),
            stable(feature = "atomic_nand", since = "1.27.0"),
            rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
            rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
            rustc_diagnostic_item = "AtomicIsize",
            "isize",
            "",
            atomic_min, atomic_max,
            $align,
            isize AtomicIsize
        }
        #[cfg(target_pointer_width = $target_pointer_width)]
        atomic_int! {
            cfg(target_has_atomic = "ptr"),
            cfg(target_has_atomic_equal_alignment = "ptr"),
            stable(feature = "rust1", since = "1.0.0"),
            stable(feature = "extended_compare_and_swap", since = "1.10.0"),
            stable(feature = "atomic_debug", since = "1.3.0"),
            stable(feature = "atomic_access", since = "1.15.0"),
            stable(feature = "atomic_from", since = "1.23.0"),
            stable(feature = "atomic_nand", since = "1.27.0"),
            rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
            rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
            rustc_diagnostic_item = "AtomicUsize",
            "usize",
            "",
            atomic_umin, atomic_umax,
            $align,
            usize AtomicUsize
        }

        /// An [`AtomicIsize`] initialized to `0`.
        #[cfg(target_pointer_width = $target_pointer_width)]
        #[stable(feature = "rust1", since = "1.0.0")]
        #[deprecated(
            since = "1.34.0",
            note = "the `new` function is now preferred",
            suggestion = "AtomicIsize::new(0)",
        )]
        pub const ATOMIC_ISIZE_INIT: AtomicIsize = AtomicIsize::new(0);

        /// An [`AtomicUsize`] initialized to `0`.
        #[cfg(target_pointer_width = $target_pointer_width)]
        #[stable(feature = "rust1", since = "1.0.0")]
        #[deprecated(
            since = "1.34.0",
            note = "the `new` function is now preferred",
            suggestion = "AtomicUsize::new(0)",
        )]
        pub const ATOMIC_USIZE_INIT: AtomicUsize = AtomicUsize::new(0);
    )* };
}

#[cfg(target_has_atomic_load_store = "ptr")]
atomic_int_ptr_sized! {
    "16" 2
    "32" 4
    "64" 8
}

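/// Derives the failure ordering corresponding to a given success ordering by dropping the
/// release component, since the failure case of a compare-exchange only performs a load.
/// (Descriptive comment added for clarity.)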
#[inline]
#[cfg(target_has_atomic)]
fn strongest_failure_ordering(order: Ordering) -> Ordering {
    match order {
        Release => Relaxed,
        Relaxed => Relaxed,
        SeqCst => SeqCst,
        Acquire => Acquire,
        AcqRel => Acquire,
    }
}

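/// Performs an atomic store with the given ordering; `Acquire` and `AcqRel` are not valid
/// orderings for a store and cause a panic. (Descriptive comment added for clarity.)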
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_store<T: Copy>(dst: *mut T, val: T, order: Ordering) {
    // SAFETY: the caller must uphold the safety contract for `atomic_store`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_store_relaxed(dst, val),
            Release => intrinsics::atomic_store_release(dst, val),
            SeqCst => intrinsics::atomic_store_seqcst(dst, val),
            Acquire => panic!("there is no such thing as an acquire store"),
            AcqRel => panic!("there is no such thing as an acquire-release store"),
        }
    }
}

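/// Performs an atomic load with the given ordering; `Release` and `AcqRel` are not valid
/// orderings for a load and cause a panic. (Descriptive comment added for clarity.)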
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_load<T: Copy>(dst: *const T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_load`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_load_relaxed(dst),
            Acquire => intrinsics::atomic_load_acquire(dst),
            SeqCst => intrinsics::atomic_load_seqcst(dst),
            Release => panic!("there is no such thing as a release load"),
            AcqRel => panic!("there is no such thing as an acquire-release load"),
        }
    }
}

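/// Atomically replaces the value at `dst` with `val`, returning the previous value.
/// (Descriptive comment added for clarity.)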
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_swap<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_swap`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_xchg_relaxed(dst, val),
            Acquire => intrinsics::atomic_xchg_acquire(dst, val),
            Release => intrinsics::atomic_xchg_release(dst, val),
            AcqRel => intrinsics::atomic_xchg_acqrel(dst, val),
            SeqCst => intrinsics::atomic_xchg_seqcst(dst, val),
        }
    }
}

/// Returns the previous value (like __sync_fetch_and_add).
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_add<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_add`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_xadd_relaxed(dst, val),
            Acquire => intrinsics::atomic_xadd_acquire(dst, val),
            Release => intrinsics::atomic_xadd_release(dst, val),
            AcqRel => intrinsics::atomic_xadd_acqrel(dst, val),
            SeqCst => intrinsics::atomic_xadd_seqcst(dst, val),
        }
    }
}

/// Returns the previous value (like __sync_fetch_and_sub).
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_sub<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_sub`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_xsub_relaxed(dst, val),
            Acquire => intrinsics::atomic_xsub_acquire(dst, val),
            Release => intrinsics::atomic_xsub_release(dst, val),
            AcqRel => intrinsics::atomic_xsub_acqrel(dst, val),
            SeqCst => intrinsics::atomic_xsub_seqcst(dst, val),
        }
    }
}

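/// Atomically compares the value at `dst` with `old` and, if they are equal, replaces it with
/// `new`. Returns `Ok(previous)` on success and `Err(actual)` on failure; `Release` and `AcqRel`
/// failure orderings panic. (Descriptive comment added for clarity.)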
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_compare_exchange<T: Copy>(
    dst: *mut T,
    old: T,
    new: T,
    success: Ordering,
    failure: Ordering,
) -> Result<T, T> {
    // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange`.
    let (val, ok) = unsafe {
        match (success, failure) {
            (Relaxed, Relaxed) => intrinsics::atomic_cxchg_relaxed_relaxed(dst, old, new),
            (Relaxed, Acquire) => intrinsics::atomic_cxchg_relaxed_acquire(dst, old, new),
            (Relaxed, SeqCst) => intrinsics::atomic_cxchg_relaxed_seqcst(dst, old, new),
            (Acquire, Relaxed) => intrinsics::atomic_cxchg_acquire_relaxed(dst, old, new),
            (Acquire, Acquire) => intrinsics::atomic_cxchg_acquire_acquire(dst, old, new),
            (Acquire, SeqCst) => intrinsics::atomic_cxchg_acquire_seqcst(dst, old, new),
            (Release, Relaxed) => intrinsics::atomic_cxchg_release_relaxed(dst, old, new),
            (Release, Acquire) => intrinsics::atomic_cxchg_release_acquire(dst, old, new),
            (Release, SeqCst) => intrinsics::atomic_cxchg_release_seqcst(dst, old, new),
            (AcqRel, Relaxed) => intrinsics::atomic_cxchg_acqrel_relaxed(dst, old, new),
            (AcqRel, Acquire) => intrinsics::atomic_cxchg_acqrel_acquire(dst, old, new),
            (AcqRel, SeqCst) => intrinsics::atomic_cxchg_acqrel_seqcst(dst, old, new),
            (SeqCst, Relaxed) => intrinsics::atomic_cxchg_seqcst_relaxed(dst, old, new),
            (SeqCst, Acquire) => intrinsics::atomic_cxchg_seqcst_acquire(dst, old, new),
            (SeqCst, SeqCst) => intrinsics::atomic_cxchg_seqcst_seqcst(dst, old, new),
            (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
            (_, Release) => panic!("there is no such thing as a release failure ordering"),
        }
    };
    if ok { Ok(val) } else { Err(val) }
}

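/// Like `atomic_compare_exchange`, but is allowed to fail spuriously even when the comparison
/// succeeds. (Descriptive comment added for clarity.)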
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_compare_exchange_weak<T: Copy>(
    dst: *mut T,
    old: T,
    new: T,
    success: Ordering,
    failure: Ordering,
) -> Result<T, T> {
    // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange_weak`.
    let (val, ok) = unsafe {
        match (success, failure) {
            (Relaxed, Relaxed) => intrinsics::atomic_cxchgweak_relaxed_relaxed(dst, old, new),
            (Relaxed, Acquire) => intrinsics::atomic_cxchgweak_relaxed_acquire(dst, old, new),
            (Relaxed, SeqCst) => intrinsics::atomic_cxchgweak_relaxed_seqcst(dst, old, new),
            (Acquire, Relaxed) => intrinsics::atomic_cxchgweak_acquire_relaxed(dst, old, new),
            (Acquire, Acquire) => intrinsics::atomic_cxchgweak_acquire_acquire(dst, old, new),
            (Acquire, SeqCst) => intrinsics::atomic_cxchgweak_acquire_seqcst(dst, old, new),
            (Release, Relaxed) => intrinsics::atomic_cxchgweak_release_relaxed(dst, old, new),
            (Release, Acquire) => intrinsics::atomic_cxchgweak_release_acquire(dst, old, new),
            (Release, SeqCst) => intrinsics::atomic_cxchgweak_release_seqcst(dst, old, new),
            (AcqRel, Relaxed) => intrinsics::atomic_cxchgweak_acqrel_relaxed(dst, old, new),
            (AcqRel, Acquire) => intrinsics::atomic_cxchgweak_acqrel_acquire(dst, old, new),
            (AcqRel, SeqCst) => intrinsics::atomic_cxchgweak_acqrel_seqcst(dst, old, new),
            (SeqCst, Relaxed) => intrinsics::atomic_cxchgweak_seqcst_relaxed(dst, old, new),
            (SeqCst, Acquire) => intrinsics::atomic_cxchgweak_seqcst_acquire(dst, old, new),
            (SeqCst, SeqCst) => intrinsics::atomic_cxchgweak_seqcst_seqcst(dst, old, new),
            (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
            (_, Release) => panic!("there is no such thing as a release failure ordering"),
        }
    };
    if ok { Ok(val) } else { Err(val) }
}

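/// Returns the previous value (like __sync_fetch_and_and).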
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_and<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_and`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_and_relaxed(dst, val),
            Acquire => intrinsics::atomic_and_acquire(dst, val),
            Release => intrinsics::atomic_and_release(dst, val),
            AcqRel => intrinsics::atomic_and_acqrel(dst, val),
            SeqCst => intrinsics::atomic_and_seqcst(dst, val),
        }
    }
}

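/// Returns the previous value (like __sync_fetch_and_nand).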
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_nand<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_nand`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_nand_relaxed(dst, val),
            Acquire => intrinsics::atomic_nand_acquire(dst, val),
            Release => intrinsics::atomic_nand_release(dst, val),
            AcqRel => intrinsics::atomic_nand_acqrel(dst, val),
            SeqCst => intrinsics::atomic_nand_seqcst(dst, val),
        }
    }
}

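/// Returns the previous value (like __sync_fetch_and_or).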
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_or<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_or`
    unsafe {
        match order {
            SeqCst => intrinsics::atomic_or_seqcst(dst, val),
            Acquire => intrinsics::atomic_or_acquire(dst, val),
            Release => intrinsics::atomic_or_release(dst, val),
            AcqRel => intrinsics::atomic_or_acqrel(dst, val),
            Relaxed => intrinsics::atomic_or_relaxed(dst, val),
        }
    }
}

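/// Returns the previous value (like __sync_fetch_and_xor).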
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_xor<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_xor`
    unsafe {
        match order {
            SeqCst => intrinsics::atomic_xor_seqcst(dst, val),
            Acquire => intrinsics::atomic_xor_acquire(dst, val),
            Release => intrinsics::atomic_xor_release(dst, val),
            AcqRel => intrinsics::atomic_xor_acqrel(dst, val),
            Relaxed => intrinsics::atomic_xor_relaxed(dst, val),
        }
    }
}

/// Stores the maximum of the old and new value, returning the old value (signed comparison).
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_max<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_max`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_max_relaxed(dst, val),
            Acquire => intrinsics::atomic_max_acquire(dst, val),
            Release => intrinsics::atomic_max_release(dst, val),
            AcqRel => intrinsics::atomic_max_acqrel(dst, val),
            SeqCst => intrinsics::atomic_max_seqcst(dst, val),
        }
    }
}

/// Stores the minimum of the old and new value, returning the old value (signed comparison).
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_min<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_min`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_min_relaxed(dst, val),
            Acquire => intrinsics::atomic_min_acquire(dst, val),
            Release => intrinsics::atomic_min_release(dst, val),
            AcqRel => intrinsics::atomic_min_acqrel(dst, val),
            SeqCst => intrinsics::atomic_min_seqcst(dst, val),
        }
    }
}

/// Stores the maximum of the old and new value, returning the old value (unsigned comparison).
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_umax<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_umax`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_umax_relaxed(dst, val),
            Acquire => intrinsics::atomic_umax_acquire(dst, val),
            Release => intrinsics::atomic_umax_release(dst, val),
            AcqRel => intrinsics::atomic_umax_acqrel(dst, val),
            SeqCst => intrinsics::atomic_umax_seqcst(dst, val),
        }
    }
}

/// Stores the minimum of the old and new value, returning the old value (unsigned comparison).
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_umin<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_umin`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_umin_relaxed(dst, val),
            Acquire => intrinsics::atomic_umin_acquire(dst, val),
            Release => intrinsics::atomic_umin_release(dst, val),
            AcqRel => intrinsics::atomic_umin_acqrel(dst, val),
            SeqCst => intrinsics::atomic_umin_seqcst(dst, val),
        }
    }
}

/// An atomic fence.
///
/// Fences create synchronization between themselves and atomic operations or fences in other
/// threads. To achieve this, a fence prevents the compiler and CPU from reordering certain types of
/// memory operations around it.
///
/// A fence 'A' which has (at least) [`Release`] ordering semantics synchronizes
/// with a fence 'B' with (at least) [`Acquire`] semantics, if and only if there
/// exist operations X and Y, both operating on some atomic object 'm' such
/// that A is sequenced before X, Y is sequenced before B and Y observes
/// the change to m. This provides a happens-before dependence between A and B.
///
/// ```text
///     Thread 1                                          Thread 2
///
/// fence(Release);      A --------------
/// m.store(3, Relaxed); X ---------    |
///                                |    |
///                                |    |
///                                -------------> Y  if m.load(Relaxed) == 3 {
///                                     |-------> B      fence(Acquire);
///                                                      ...
///                                                  }
/// ```
///
/// Note that in the example above, it is crucial that the accesses to `m` are atomic. Fences cannot
/// be used to establish synchronization among non-atomic accesses in different threads. However,
/// thanks to the happens-before relationship between A and B, any non-atomic accesses that
/// happen-before A are now also properly synchronized with any non-atomic accesses that
/// happen-after B.
///
/// Atomic operations with [`Release`] or [`Acquire`] semantics can also synchronize
/// with a fence.
///
/// A fence which has [`SeqCst`] ordering, in addition to having both [`Acquire`]
/// and [`Release`] semantics, participates in the global program order of the
/// other [`SeqCst`] operations and/or fences.
///
/// Accepts [`Acquire`], [`Release`], [`AcqRel`] and [`SeqCst`] orderings.
///
/// # Panics
///
/// Panics if `order` is [`Relaxed`].
///
/// # Examples
///
/// ```
/// use std::sync::atomic::AtomicBool;
/// use std::sync::atomic::fence;
/// use std::sync::atomic::Ordering;
///
/// // A mutual exclusion primitive based on a spinlock.
/// pub struct Mutex {
///     flag: AtomicBool,
/// }
///
/// impl Mutex {
///     pub fn new() -> Mutex {
///         Mutex {
///             flag: AtomicBool::new(false),
///         }
///     }
///
///     pub fn lock(&self) {
///         // Wait until the old value is `false`.
///         while self
///             .flag
///             .compare_exchange_weak(false, true, Ordering::Relaxed, Ordering::Relaxed)
///             .is_err()
///         {}
///         // This fence synchronizes with the store in `unlock`.
///         fence(Ordering::Acquire);
///     }
///
///     pub fn unlock(&self) {
///         self.flag.store(false, Ordering::Release);
///     }
/// }
/// ```
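///
/// The next example directly mirrors the diagram above: one thread publishes data and then raises
/// a flag with a relaxed store after a release fence, while the other spins on the flag and issues
/// an acquire fence before reading the data. (This is an illustrative sketch; the same effect is
/// usually achieved by using `Release`/`Acquire` directly on the flag accesses.)
///
/// ```
/// use std::sync::atomic::{fence, AtomicBool, AtomicUsize, Ordering};
/// use std::thread;
///
/// static DATA: AtomicUsize = AtomicUsize::new(0);
/// static READY: AtomicBool = AtomicBool::new(false);
///
/// let t = thread::spawn(|| {
///     DATA.store(42, Ordering::Relaxed);
///     fence(Ordering::Release);             // A
///     READY.store(true, Ordering::Relaxed); // X
/// });
///
/// while !READY.load(Ordering::Relaxed) {}   // Y (loops until it observes `true`)
/// fence(Ordering::Acquire);                 // B
/// assert_eq!(DATA.load(Ordering::Relaxed), 42);
/// t.join().unwrap();
/// ```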
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_diagnostic_item = "fence"]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
pub fn fence(order: Ordering) {
    // SAFETY: using an atomic fence is safe.
    unsafe {
        match order {
            Acquire => intrinsics::atomic_fence_acquire(),
            Release => intrinsics::atomic_fence_release(),
            AcqRel => intrinsics::atomic_fence_acqrel(),
            SeqCst => intrinsics::atomic_fence_seqcst(),
            Relaxed => panic!("there is no such thing as a relaxed fence"),
        }
    }
}

/// A "compiler-only" atomic fence.
///
/// Like [`fence`], this function establishes synchronization with other atomic operations and
/// fences. However, unlike [`fence`], `compiler_fence` only establishes synchronization with
/// operations *in the same thread*. This may at first sound rather useless, since code within a
/// thread is typically already totally ordered and does not need any further synchronization.
/// However, there are cases where code can run on the same thread without being ordered:
/// - The most common case is that of a *signal handler*: a signal handler runs in the same thread
///   as the code it interrupted, but it is not ordered with respect to that code. `compiler_fence`
///   can be used to establish synchronization between a thread and its signal handler, the same way
///   that `fence` can be used to establish synchronization across threads.
/// - Similar situations can arise in embedded programming with interrupt handlers, or in custom
///   implementations of preemptive green threads. In general, `compiler_fence` can establish
///   synchronization with code that is guaranteed to run on the same hardware CPU.
///
/// See [`fence`] for how a fence can be used to achieve synchronization. Note that just like
/// [`fence`], synchronization still requires atomic operations to be used in both threads -- it is
/// not possible to perform synchronization entirely with fences and non-atomic operations.
///
/// `compiler_fence` does not emit any machine code, but restricts the kinds of memory re-ordering
/// the compiler is allowed to do. `compiler_fence` corresponds to [`atomic_signal_fence`] in C and
/// C++.
///
/// [`atomic_signal_fence`]: https://en.cppreference.com/w/cpp/atomic/atomic_signal_fence
///
/// # Panics
///
/// Panics if `order` is [`Relaxed`].
///
/// # Examples
///
/// Without the two `compiler_fence` calls, the read of `IMPORTANT_VARIABLE` in `signal_handler`
/// is *undefined behavior* due to a data race, despite everything happening in a single thread.
/// This is because the signal handler is considered to run concurrently with its associated
/// thread, and explicit synchronization is required to pass data between a thread and its
/// signal handler. The code below uses two `compiler_fence` calls to establish the usual
/// release-acquire synchronization pattern (see [`fence`] for an image).
///
/// ```
/// use std::sync::atomic::AtomicBool;
/// use std::sync::atomic::Ordering;
/// use std::sync::atomic::compiler_fence;
///
/// static mut IMPORTANT_VARIABLE: usize = 0;
/// static IS_READY: AtomicBool = AtomicBool::new(false);
///
/// fn main() {
///     unsafe { IMPORTANT_VARIABLE = 42 };
///     // Marks earlier writes as being released with future relaxed stores.
///     compiler_fence(Ordering::Release);
///     IS_READY.store(true, Ordering::Relaxed);
/// }
///
/// fn signal_handler() {
///     if IS_READY.load(Ordering::Relaxed) {
///         // Acquires writes that were released with relaxed stores that we read from.
///         compiler_fence(Ordering::Acquire);
///         assert_eq!(unsafe { IMPORTANT_VARIABLE }, 42);
///     }
/// }
/// ```
#[inline]
#[stable(feature = "compiler_fences", since = "1.21.0")]
#[rustc_diagnostic_item = "compiler_fence"]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
pub fn compiler_fence(order: Ordering) {
    // SAFETY: using an atomic fence is safe.
    unsafe {
        match order {
            Acquire => intrinsics::atomic_singlethreadfence_acquire(),
            Release => intrinsics::atomic_singlethreadfence_release(),
            AcqRel => intrinsics::atomic_singlethreadfence_acqrel(),
            SeqCst => intrinsics::atomic_singlethreadfence_seqcst(),
            Relaxed => panic!("there is no such thing as a relaxed compiler fence"),
        }
    }
}

#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "atomic_debug", since = "1.3.0")]
impl fmt::Debug for AtomicBool {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "atomic_debug", since = "1.3.0")]
impl<T> fmt::Debug for AtomicPtr<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "atomic_pointer", since = "1.24.0")]
impl<T> fmt::Pointer for AtomicPtr<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Pointer::fmt(&self.load(Ordering::Relaxed), f)
    }
}

/// Signals the processor that it is inside a busy-wait spin-loop ("spin lock").
///
/// This function is deprecated in favor of [`hint::spin_loop`].
///
/// [`hint::spin_loop`]: crate::hint::spin_loop
#[inline]
#[stable(feature = "spin_loop_hint", since = "1.24.0")]
#[deprecated(since = "1.51.0", note = "use hint::spin_loop instead")]
pub fn spin_loop_hint() {
    spin_loop()
}