//! Manually manage memory through raw pointers.
//!
//! *[See also the pointer primitive types](pointer).*
//!
//! # Safety
//!
//! Many functions in this module take raw pointers as arguments and read from or write to them. For
//! this to be safe, these pointers must be *valid* for the given access. Whether a pointer is valid
//! depends on the operation it is used for (read or write), and the extent of the memory that is
//! accessed (i.e., how many bytes are read/written) -- it makes no sense to ask "is this pointer
//! valid"; one has to ask "is this pointer valid for a given access". Most functions use `*mut T`
//! and `*const T` to access only a single value, in which case the documentation omits the size and
//! implicitly assumes it to be `size_of::<T>()` bytes.
//!
//! The precise rules for validity are not determined yet. The guarantees that are
//! provided at this point are very minimal:
//!
//! * For memory accesses of [size zero][zst], *every* pointer is valid, including the [null]
//!   pointer. The following points are only concerned with non-zero-sized accesses.
//! * A [null] pointer is *never* valid.
//! * For a pointer to be valid, it is necessary, but not always sufficient, that the pointer be
//!   *dereferenceable*. The [provenance] of the pointer is used to determine which [allocation]
//!   it is derived from; a pointer is dereferenceable if the memory range of the given size
//!   starting at the pointer is entirely contained within the bounds of that allocation. Note
//!   that in Rust, every (stack-allocated) variable is considered a separate allocation.
//! * All accesses performed by functions in this module are *non-atomic* in the sense
//!   of [atomic operations] used to synchronize between threads. This means it is
//!   undefined behavior to perform two concurrent accesses to the same location from different
//!   threads unless both accesses only read from memory. Notice that this explicitly
//!   includes [`read_volatile`] and [`write_volatile`]: Volatile accesses cannot
//!   be used for inter-thread synchronization, regardless of whether they are acting on
//!   Rust memory or not.
//! * The result of casting a reference to a pointer is valid for as long as the
//!   underlying allocation is live and no reference (just raw pointers) is used to
//!   access the same memory. That is, reference and pointer accesses cannot be
//!   interleaved.
//!
//! These axioms, along with careful use of [`offset`] for pointer arithmetic,
//! are enough to correctly implement many useful things in unsafe code. Stronger guarantees
//! will be provided eventually, as the [aliasing] rules are being determined. For more
//! information, see the [book] as well as the section in the reference devoted
//! to [undefined behavior][ub].
//!
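//! For instance, the last of these rules means that pointer accesses and reference accesses
//! must be kept apart in time. A minimal sketch of the allowed pattern:
//!
//! ```
//! let mut x = 0u32;
//! let p = &mut x as *mut u32;
//! // While `p` is in use, `x` is not accessed through the reference.
//! unsafe { p.write(42) };
//! // Direct access may resume after the last use of `p`.
//! assert_eq!(x, 42);
//! ```
//!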
//! We say that a pointer is "dangling" if it is not valid for any non-zero-sized accesses. This
//! means out-of-bounds pointers, pointers to freed memory, null pointers, and pointers created with
//! [`NonNull::dangling`] are all dangling.
//!
//! ## Alignment
//!
//! Valid raw pointers as defined above are not necessarily properly aligned (where
//! "proper" alignment is defined by the pointee type, i.e., `*const T` must be
//! aligned to `align_of::<T>()`). However, most functions require their
//! arguments to be properly aligned, and will explicitly state
//! this requirement in their documentation. Notable exceptions to this are
//! [`read_unaligned`] and [`write_unaligned`].
//!
//! When a function requires proper alignment, it does so even if the access
//! has size 0, i.e., even if memory is not actually touched. Consider using
//! [`NonNull::dangling`] in such cases.
//!
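//! For illustration, a zero-element copy through dangling but well-aligned pointers is sound,
//! since no memory is actually touched:
//!
//! ```
//! use std::ptr::{self, NonNull};
//!
//! let src = NonNull::<u64>::dangling().as_ptr() as *const u64;
//! let dst = NonNull::<u64>::dangling().as_ptr();
//! // SAFETY: both pointers are non-null and aligned, and the copy is zero-sized.
//! unsafe { ptr::copy_nonoverlapping(src, dst, 0) };
//! ```
//!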
//! ## Pointer to reference conversion
//!
//! When converting a pointer to a reference (e.g. via `&*ptr` or `&mut *ptr`),
//! there are several rules that must be followed:
//!
//! * The pointer must be properly aligned.
//!
//! * It must be non-null.
//!
//! * It must be "dereferenceable" in the sense defined above.
//!
//! * The pointer must point to a [valid value] of type `T`.
//!
//! * You must enforce Rust's aliasing rules. The exact aliasing rules are not decided yet, so we
//!   only give a rough overview here. The rules also depend on whether a mutable or a shared
//!   reference is being created.
//!   * When creating a mutable reference, then while this reference exists, the memory it points to
//!     must not get accessed (read or written) through any other pointer or reference not derived
//!     from this reference.
//!   * When creating a shared reference, then while this reference exists, the memory it points to
//!     must not get mutated (except inside `UnsafeCell`).
//!
//! If a pointer follows all of these rules, it is said to be
//! *convertible to a (mutable or shared) reference*.
// ^ we use this term instead of saying that the produced reference must
// be valid, as the validity of a reference is easily confused for the
// validity of the thing it refers to, and while the two concepts are
// closely related, they are not identical.
//!
//! These rules apply even if the result is unused!
//! (The part about being initialized is not yet fully decided, but until
//! it is, the only safe approach is to ensure that they are indeed initialized.)
//!
//! An example of the implications of the above rules is that an expression such
//! as `unsafe { &*(0 as *const u8) }` is Immediate Undefined Behavior.
//!
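//! Conversely, here is a minimal sketch of a conversion that satisfies all of the rules above:
//! the pointer is derived from a live, initialized local, and nothing else accesses the memory
//! while the reference exists.
//!
//! ```
//! let x = 7u32;
//! let ptr = &x as *const u32;
//! // SAFETY: `ptr` is aligned, non-null, dereferenceable, and points to a valid `u32`.
//! let r: &u32 = unsafe { &*ptr };
//! assert_eq!(*r, 7);
//! ```
//!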
//! [valid value]: ../../reference/behavior-considered-undefined.html#invalid-values
//!
//! ## Allocation
//!
//! <a id="allocated-object"></a> <!-- keep old URLs working -->
//!
//! An *allocation* is a subset of program memory which is addressable
//! from Rust, and within which pointer arithmetic is possible. Examples of
//! allocations include heap allocations, stack-allocated variables,
//! statics, and consts. The safety preconditions of some Rust operations -
//! such as `offset` and field projections (`expr.field`) - are defined in
//! terms of the allocations on which they operate.
//!
//! An allocation has a base address, a size, and a set of memory
//! addresses. It is possible for an allocation to have zero size, but
//! such an allocation will still have a base address. The base address
//! of an allocation is not necessarily unique. While it is currently the
//! case that an allocation always has a set of memory addresses which is
//! fully contiguous (i.e., has no "holes"), there is no guarantee that this
//! will not change in the future.
//!
//! Allocations must behave like "normal" memory: in particular, reads must not have
//! side-effects, and writes must become visible to other threads using the usual synchronization
//! primitives.
//!
//! For any allocation with `base` address, `size`, and a set of
//! `addresses`, the following are guaranteed:
//! - For all addresses `a` in `addresses`, `a` is in the range `base .. (base +
//!   size)` (note that this requires `a < base + size`, not `a <= base + size`)
//! - `base` is not equal to [`null()`] (i.e., the address with the numerical
//!   value 0)
//! - `base + size <= usize::MAX`
//! - `size <= isize::MAX`
//!
//! As a consequence of these guarantees, given any address `a` within the set
//! of addresses of an allocation:
//! - It is guaranteed that `a - base` does not overflow `isize`
//! - It is guaranteed that `a - base` is non-negative
//! - It is guaranteed that, given `o = a - base` (i.e., the offset of `a` within
//!   the allocation), `base + o` will not wrap around the address space (in
//!   other words, will not overflow `usize`)
//!
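//! For example, these guarantees are what make computing the distance between two pointers
//! into the same allocation (as [`offset_from`] does) well-defined; a minimal sketch:
//!
//! ```
//! let arr = [0u8; 8];
//! let base = arr.as_ptr();
//! let a = arr[6..].as_ptr();
//! // SAFETY: both pointers are in bounds of the same allocation, so the
//! // distance between them is guaranteed to fit in an `isize`.
//! let o = unsafe { a.offset_from(base) };
//! assert_eq!(o, 6);
//! ```
//!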
//! [`null()`]: null
//!
//! # Provenance
//!
//! Pointers are not *simply* an "integer" or "address". For instance, it's uncontroversial
//! to say that a Use After Free is clearly Undefined Behavior, even if you "get lucky"
//! and the freed memory gets reallocated before your read/write (in fact this is the
//! worst-case scenario, UAFs would be much less concerning if this didn't happen!).
//! As another example, consider that [`wrapping_offset`] is documented to "remember"
//! the allocation that the original pointer points to, even if it is offset far
//! outside the memory range occupied by that allocation.
//! To rationalize claims like this, pointers need to somehow be *more* than just their addresses:
//! they must have **provenance**.
//!
//! A pointer value in Rust semantically contains the following information:
//!
//! * The **address** it points to, which can be represented by a `usize`.
//! * The **provenance** it has, defining the memory it has permission to access. Provenance can be
//!   absent, in which case the pointer does not have permission to access any memory.
//!
//! The exact structure of provenance is not yet specified, but the permission defined by a
//! pointer's provenance has a *spatial* component, a *temporal* component, and a *mutability*
//! component:
//!
//! * Spatial: The set of memory addresses that the pointer is allowed to access.
//! * Temporal: The timespan during which the pointer is allowed to access those memory addresses.
//! * Mutability: Whether the pointer may only access the memory for reads, or also access it for
//!   writes. Note that this can interact with the other components, e.g. a pointer might permit
//!   mutation only for a subset of addresses, or only for a subset of its maximal timespan.
//!
//! When an [allocation] is created, it has a unique Original Pointer. For alloc
//! APIs this is literally the pointer the call returns, and for local variables and statics,
//! this is the name of the variable/static. (This is mildly overloading the term "pointer"
//! for the sake of brevity/exposition.)
//!
//! The Original Pointer for an allocation has provenance that constrains the *spatial*
//! permissions of this pointer to the memory range of the allocation, and the *temporal*
//! permissions to the lifetime of the allocation. Provenance is implicitly inherited by all
//! pointers transitively derived from the Original Pointer through operations like [`offset`],
//! borrowing, and pointer casts. Some operations may *shrink* the permissions of the derived
//! provenance, limiting how much memory it can access or how long it's valid for (i.e. borrowing a
//! subfield and subslicing can shrink the spatial component of provenance, and all borrowing can
//! shrink the temporal component of provenance). However, no operation can ever *grow* the
//! permissions of the derived provenance: even if you "know" there is a larger allocation, you
//! can't derive a pointer with a larger provenance. Similarly, you cannot "recombine" two
//! contiguous provenances back into one (i.e. with a `fn merge(&[T], &[T]) -> &[T]`).
//!
//! A reference to a place always has provenance over at least the memory that place occupies.
//! A reference to a slice always has provenance over at least the range that slice describes.
//! Whether and when exactly the provenance of a reference gets "shrunk" to *exactly* fit
//! the memory it points to is not yet determined.
//!
//! A *shared* reference only ever has provenance that permits reading from memory,
//! and never permits writes, except inside [`UnsafeCell`].
//!
//! Provenance can affect whether a program has undefined behavior:
//!
//! * It is undefined behavior to access memory through a pointer that does not have provenance over
//!   that memory. Note that a pointer "at the end" of its provenance is not actually outside its
//!   provenance, it just has 0 bytes it can load/store. Zero-sized accesses do not require any
//!   provenance since they access an empty range of memory.
//!
//! * It is undefined behavior to [`offset`] a pointer across a memory range that is not contained
//!   in the allocation it is derived from, or to [`offset_from`] two pointers not derived
//!   from the same allocation. Provenance is used to say what exactly "derived from" even
//!   means: the lineage of a pointer is traced back to the Original Pointer it descends from, and
//!   that identifies the relevant allocation. In particular, it's always UB to offset a
//!   pointer derived from something that is now deallocated, except if the offset is 0.
//!
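//! For example, an [`offset`] that stays within the allocation the pointer is derived from
//! is fine; a minimal sketch:
//!
//! ```
//! let a = [1u8, 2, 3, 4];
//! let p = a.as_ptr();
//! // SAFETY: the offset stays in bounds of the allocation holding `a`.
//! let q = unsafe { p.add(3) };
//! assert_eq!(unsafe { *q }, 4);
//! ```
//!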
//! But it *is* still sound to:
//!
//! * Create a pointer without provenance from just an address (see [`without_provenance`]). Such a
//!   pointer cannot be used for memory accesses (except for zero-sized accesses). This can still be
//!   useful for sentinel values like `null` *or* to represent a tagged pointer that will never be
//!   dereferenceable. In general, it is always sound for an integer to pretend to be a pointer "for
//!   fun" as long as you don't use operations on it which require it to be valid (non-zero-sized
//!   offset, read, write, etc).
//!
//! * Forge an allocation of size zero at any sufficiently aligned non-null address.
//!   i.e. the usual "ZSTs are fake, do what you want" rules apply.
//!
//! * [`wrapping_offset`] a pointer outside its provenance. This includes pointers
//!   which have "no" provenance. In particular, this makes it sound to do pointer tagging tricks.
//!
//! * Compare arbitrary pointers by address. Pointer comparison ignores provenance and addresses
//!   *are* just integers, so there is always a coherent answer, even if the pointers are dangling
//!   or from different provenances. Note that if you get "lucky" and notice that a pointer at the
//!   end of one allocation is the "same" address as the start of another allocation,
//!   anything you do with that fact is *probably* going to be gibberish. The scope of that
//!   gibberish is kept under control by the fact that the two pointers *still* aren't allowed to
//!   access the other's allocation (bytes), because they still have different provenance.
//!
//! Note that the full definition of provenance in Rust is not decided yet, as this interacts
//! with the as-yet undecided [aliasing] rules.
//!
//! ## Pointers Vs Integers
//!
//! From this discussion, it becomes very clear that a `usize` *cannot* accurately represent a pointer,
//! and converting from a pointer to a `usize` is generally an operation which *only* extracts the
//! address. Converting this address back into a pointer requires somehow answering the question:
//! which provenance should the resulting pointer have?
//!
//! Rust provides two ways of dealing with this situation: *Strict Provenance* and *Exposed Provenance*.
//!
//! Note that a pointer *can* represent a `usize` (via [`without_provenance`]), so the right type to
//! use in situations where a value is "sometimes a pointer and sometimes a bare `usize`" is a
//! pointer type.
//!
//! ## Strict Provenance
//!
//! "Strict Provenance" refers to a set of APIs designed to make working with provenance more
//! explicit. They are intended as substitutes for casting a pointer to an integer and back.
//!
//! Entirely avoiding integer-to-pointer casts successfully side-steps the inherent ambiguity of
//! that operation. This benefits compiler optimizations, and it is pretty much a requirement for
//! using tools like [Miri] and architectures like [CHERI] that aim to detect and diagnose pointer
//! misuse.
//!
//! The key insight to making programming without integer-to-pointer casts *at all* viable is the
//! [`with_addr`] method:
//!
//! ```text
//!     /// Creates a new pointer with the given address.
//!     ///
//!     /// This performs the same operation as an `addr as ptr` cast, but copies
//!     /// the *provenance* of `self` to the new pointer.
//!     /// This allows us to dynamically preserve and propagate this important
//!     /// information in a way that is otherwise impossible with a unary cast.
//!     ///
//!     /// This is equivalent to using `wrapping_offset` to offset `self` to the
//!     /// given address, and therefore has all the same capabilities and restrictions.
//!     pub fn with_addr(self, addr: usize) -> Self;
//! ```
//!
//! So you're still able to drop down to the address representation and do whatever
//! clever bit tricks you want *as long as* you're able to keep around a pointer
//! into the allocation you care about that can "reconstitute" the provenance.
//! Usually this is very easy, because you are only taking a pointer, messing with the address,
//! and then immediately converting back to a pointer. To make this use case more ergonomic,
//! we provide the [`map_addr`] method.
//!
//! To help make it clear that code is "following" Strict Provenance semantics, we also provide an
//! [`addr`] method which promises that the returned address is not part of a
//! pointer-integer-pointer roundtrip. In the future we may provide a lint for pointer<->integer
//! casts to help you audit if your code conforms to strict provenance.
//!
//! ### Using Strict Provenance
//!
//! Most code needs no changes to conform to strict provenance, as the only really concerning
//! operation is casts from `usize` to a pointer. For code which *does* cast a `usize` to a pointer,
//! the scope of the change depends on exactly what you're doing.
//!
//! In general, you just need to make sure that if you want to convert a `usize` address to a
//! pointer and then use that pointer to read/write memory, you need to keep around a pointer
//! that has sufficient provenance to perform that read/write itself. In this way all of your
//! casts from an address to a pointer are essentially just applying offsets/indexing.
//!
//! This is generally trivial to do for simple cases like tagged pointers *as long as you
//! represent the tagged pointer as an actual pointer and not a `usize`*. For instance:
//!
//! ```
//! unsafe {
//!     // A flag we want to pack into our pointer
//!     static HAS_DATA: usize = 0x1;
//!     static FLAG_MASK: usize = !HAS_DATA;
//!
//!     // Our value, which must have enough alignment to have spare least-significant-bits.
//!     let my_precious_data: u32 = 17;
//!     assert!(align_of::<u32>() > 1);
//!
//!     // Create a tagged pointer
//!     let ptr = &my_precious_data as *const u32;
//!     let tagged = ptr.map_addr(|addr| addr | HAS_DATA);
//!
//!     // Check the flag:
//!     if tagged.addr() & HAS_DATA != 0 {
//!         // Untag and read the pointer
//!         let data = *tagged.map_addr(|addr| addr & FLAG_MASK);
//!         assert_eq!(data, 17);
//!     } else {
//!         unreachable!()
//!     }
//! }
//! ```
//!
//! (Yes, if you've been using [`AtomicUsize`] for pointers in concurrent datastructures, you should
//! be using [`AtomicPtr`] instead. If that messes up the way you atomically manipulate pointers,
//! we would like to know why, and what needs to be done to fix it.)
//!
//! Situations where a valid pointer *must* be created from just an address, such as baremetal code
//! accessing a memory-mapped interface at a fixed address, cannot currently be handled with strict
//! provenance APIs and should use [exposed provenance](#exposed-provenance).
//!
//! ## Exposed Provenance
//!
//! As discussed above, integer-to-pointer casts are not possible with Strict Provenance APIs.
//! This is by design: the goal of Strict Provenance is to provide a clear specification that we are
//! confident can be formalized unambiguously and can be subject to precise formal reasoning.
//! Integer-to-pointer casts do not (currently) have such a clear specification.
//!
//! However, there exist situations where integer-to-pointer casts cannot be avoided, or
//! where avoiding them would require major refactoring. Legacy platform APIs also regularly assume
//! that `usize` can capture all the information that makes up a pointer.
//! Bare-metal platforms can also require the synthesis of a pointer "out of thin air" without
//! anywhere to obtain proper provenance from.
//!
//! Rust's model for dealing with integer-to-pointer casts is called *Exposed Provenance*. However,
//! the semantics of Exposed Provenance are on much less solid footing than Strict Provenance, and
//! at this point it is not yet clear whether a satisfying unambiguous semantics can be defined for
//! Exposed Provenance. (If that sounds bad, be reassured that other popular languages that provide
//! integer-to-pointer casts are not faring any better.) Furthermore, Exposed Provenance will not
//! work (well) with tools like [Miri] and [CHERI].
//!
//! Exposed Provenance is provided by the [`expose_provenance`] and [`with_exposed_provenance`] methods,
//! which are equivalent to `as` casts between pointers and integers.
//! - [`expose_provenance`] is a lot like [`addr`], but additionally adds the provenance of the
//!   pointer to a global list of 'exposed' provenances. (This list is purely conceptual, it exists
//!   for the purpose of specifying Rust but is not materialized in actual executions, except in
//!   tools like [Miri].)
//!   Memory which is outside the control of the Rust abstract machine (MMIO registers, for example)
//!   is always considered to be exposed, so long as this memory is disjoint from memory that will
//!   be used by the abstract machine such as the stack, heap, and statics.
//! - [`with_exposed_provenance`] can be used to construct a pointer with one of these previously
//!   'exposed' provenances. [`with_exposed_provenance`] takes only `addr: usize` as its argument, so
//!   unlike in [`with_addr`] there is no indication of what the correct provenance for the returned
//!   pointer is -- and that is exactly what makes integer-to-pointer casts so tricky to rigorously
//!   specify! The compiler will do its best to pick the right provenance for you, but currently we
//!   cannot provide any guarantees about which provenance the resulting pointer will have. Only one
//!   thing is clear: if there is *no* previously 'exposed' provenance that justifies the way the
//!   returned pointer will be used, the program has undefined behavior.
//!
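//! As a sketch, a round trip through an exposed address looks like this:
//!
//! ```
//! use std::ptr;
//!
//! let x = 3u8;
//! let addr: usize = (&x as *const u8).expose_provenance();
//! // ... the address may be stored, masked, passed through FFI, etc. ...
//! let p = ptr::with_exposed_provenance::<u8>(addr);
//! // SAFETY: `addr` picks up the provenance exposed above, which is valid for reading `x`.
//! assert_eq!(unsafe { *p }, 3);
//! ```
//!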
//! If at all possible, we encourage code to be ported to [Strict Provenance] APIs, thus avoiding
//! the need for Exposed Provenance. Maximizing the amount of such code is a major win for avoiding
//! specification complexity and to facilitate adoption of tools like [CHERI] and [Miri] that can be
//! a big help in increasing the confidence in (unsafe) Rust code. However, we acknowledge that this
//! is not always possible, and offer Exposed Provenance as a way to explicitly "opt out" of the
//! well-defined semantics of Strict Provenance, and "opt in" to the unclear semantics of
//! integer-to-pointer casts.
//!
//! [aliasing]: ../../nomicon/aliasing.html
//! [allocation]: #allocation
//! [provenance]: #provenance
//! [book]: ../../book/ch19-01-unsafe-rust.html#dereferencing-a-raw-pointer
//! [ub]: ../../reference/behavior-considered-undefined.html
//! [zst]: ../../nomicon/exotic-sizes.html#zero-sized-types-zsts
//! [atomic operations]: crate::sync::atomic
//! [`offset`]: pointer::offset
//! [`offset_from`]: pointer::offset_from
//! [`wrapping_offset`]: pointer::wrapping_offset
//! [`with_addr`]: pointer::with_addr
//! [`map_addr`]: pointer::map_addr
//! [`addr`]: pointer::addr
//! [`AtomicUsize`]: crate::sync::atomic::AtomicUsize
//! [`AtomicPtr`]: crate::sync::atomic::AtomicPtr
//! [`expose_provenance`]: pointer::expose_provenance
//! [`with_exposed_provenance`]: with_exposed_provenance
//! [Miri]: https://github.com/rust-lang/miri
//! [CHERI]: https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
//! [Strict Provenance]: #strict-provenance
//! [`UnsafeCell`]: core::cell::UnsafeCell

#![stable(feature = "rust1", since = "1.0.0")]
// There are many unsafe functions taking pointers that don't dereference them.
#![allow(clippy::not_unsafe_ptr_arg_deref)]

use crate::cmp::Ordering;
use crate::intrinsics::const_eval_select;
use crate::marker::{FnPtr, PointeeSized};
use crate::mem::{self, MaybeUninit, SizedTypeProperties};
use crate::num::NonZero;
use crate::{fmt, hash, intrinsics, ub_checks};

mod alignment;
#[unstable(feature = "ptr_alignment_type", issue = "102070")]
pub use alignment::Alignment;

mod metadata;
#[unstable(feature = "ptr_metadata", issue = "81513")]
pub use metadata::{DynMetadata, Pointee, Thin, from_raw_parts, from_raw_parts_mut, metadata};

mod non_null;
#[stable(feature = "nonnull", since = "1.25.0")]
pub use non_null::NonNull;

mod unique;
#[unstable(feature = "ptr_internals", issue = "none")]
pub use unique::Unique;

mod const_ptr;
mod mut_ptr;

// Some functions are defined here because they accidentally got made
// available in this module on stable. See <https://github.com/rust-lang/rust/issues/15702>.
// (`transmute` also falls into this category, but it cannot be wrapped due to the
// check that `T` and `U` have the same size.)

/// Copies `count * size_of::<T>()` bytes from `src` to `dst`. The source
/// and destination must *not* overlap.
///
/// For regions of memory which might overlap, use [`copy`] instead.
///
/// `copy_nonoverlapping` is semantically equivalent to C's [`memcpy`], but
/// with the source and destination arguments swapped,
/// and `count` counting the number of `T`s instead of bytes.
///
/// The copy is "untyped" in the sense that data may be uninitialized or otherwise violate the
/// requirements of `T`. The initialization state is preserved exactly.
///
/// [`memcpy`]: https://en.cppreference.com/w/c/string/byte/memcpy
///
/// # Safety
///
/// Behavior is undefined if any of the following conditions are violated:
///
/// * `src` must be [valid] for reads of `count * size_of::<T>()` bytes.
///
/// * `dst` must be [valid] for writes of `count * size_of::<T>()` bytes.
///
/// * Both `src` and `dst` must be properly aligned.
///
/// * The region of memory beginning at `src` with a size of `count *
///   size_of::<T>()` bytes must *not* overlap with the region of memory
///   beginning at `dst` with the same size.
///
/// Like [`read`], `copy_nonoverlapping` creates a bitwise copy of `T`, regardless of
/// whether `T` is [`Copy`]. If `T` is not [`Copy`], using *both* the values
/// in the region beginning at `*src` and the region beginning at `*dst` can
/// [violate memory safety][read-ownership].
///
/// Note that even if the effectively copied size (`count * size_of::<T>()`) is
/// `0`, the pointers must be properly aligned.
///
/// [`read`]: crate::ptr::read
/// [read-ownership]: crate::ptr::read#ownership-of-the-returned-value
/// [valid]: crate::ptr#safety
///
/// # Examples
///
/// Manually implement [`Vec::append`]:
///
/// ```
/// use std::ptr;
///
/// /// Moves all the elements of `src` into `dst`, leaving `src` empty.
/// fn append<T>(dst: &mut Vec<T>, src: &mut Vec<T>) {
///     let src_len = src.len();
///     let dst_len = dst.len();
///
///     // Ensure that `dst` has enough capacity to hold all of `src`.
///     dst.reserve(src_len);
///
///     unsafe {
///         // The call to add is always safe because `Vec` will never
///         // allocate more than `isize::MAX` bytes.
///         let dst_ptr = dst.as_mut_ptr().add(dst_len);
///         let src_ptr = src.as_ptr();
///
///         // Truncate `src` without dropping its contents. We do this first,
///         // to avoid problems in case something further down panics.
///         src.set_len(0);
///
///         // The two regions cannot overlap because mutable references do
///         // not alias, and two different vectors cannot own the same
///         // memory.
///         ptr::copy_nonoverlapping(src_ptr, dst_ptr, src_len);
///
///         // Notify `dst` that it now holds the contents of `src`.
///         dst.set_len(dst_len + src_len);
///     }
/// }
///
/// let mut a = vec!['r'];
/// let mut b = vec!['u', 's', 't'];
///
/// append(&mut a, &mut b);
///
/// assert_eq!(a, &['r', 'u', 's', 't']);
/// assert!(b.is_empty());
/// ```
///
/// [`Vec::append`]: ../../std/vec/struct.Vec.html#method.append
#[doc(alias = "memcpy")]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_const_stable(feature = "const_intrinsic_copy", since = "1.83.0")]
#[inline(always)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
#[rustc_diagnostic_item = "ptr_copy_nonoverlapping"]
pub const unsafe fn copy_nonoverlapping<T>(src: *const T, dst: *mut T, count: usize) {
    ub_checks::assert_unsafe_precondition!(
        check_language_ub,
        "ptr::copy_nonoverlapping requires that both pointer arguments are aligned and non-null \
        and the specified memory ranges do not overlap",
        (
            src: *const () = src as *const (),
            dst: *mut () = dst as *mut (),
            size: usize = size_of::<T>(),
            align: usize = align_of::<T>(),
            count: usize = count,
        ) => {
            let zero_size = count == 0 || size == 0;
            ub_checks::maybe_is_aligned_and_not_null(src, align, zero_size)
                && ub_checks::maybe_is_aligned_and_not_null(dst, align, zero_size)
                && ub_checks::maybe_is_nonoverlapping(src, dst, size, count)
        }
    );

    // SAFETY: the safety contract for `copy_nonoverlapping` must be
    // upheld by the caller.
    unsafe { crate::intrinsics::copy_nonoverlapping(src, dst, count) }
}

/// Copies `count * size_of::<T>()` bytes from `src` to `dst`. The source
/// and destination may overlap.
///
/// If the source and destination will *never* overlap,
/// [`copy_nonoverlapping`] can be used instead.
///
/// `copy` is semantically equivalent to C's [`memmove`], but
/// with the source and destination arguments swapped,
/// and `count` counting the number of `T`s instead of bytes.
/// Copying takes place as if the bytes were copied from `src`
/// to a temporary array and then copied from the array to `dst`.
///
/// The copy is "untyped" in the sense that data may be uninitialized or otherwise violate the
/// requirements of `T`. The initialization state is preserved exactly.
///
/// [`memmove`]: https://en.cppreference.com/w/c/string/byte/memmove
///
/// # Safety
///
/// Behavior is undefined if any of the following conditions are violated:
///
/// * `src` must be [valid] for reads of `count * size_of::<T>()` bytes.
///
/// * `dst` must be [valid] for writes of `count * size_of::<T>()` bytes, and must remain valid even
///   when `src` is read for `count * size_of::<T>()` bytes. (This means if the memory ranges
///   overlap, the `dst` pointer must not be invalidated by `src` reads.)
///
/// * Both `src` and `dst` must be properly aligned.
///
/// Like [`read`], `copy` creates a bitwise copy of `T`, regardless of
/// whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the values
/// in the region beginning at `*src` and the region beginning at `*dst` can
/// [violate memory safety][read-ownership].
///
/// Note that even if the effectively copied size (`count * size_of::<T>()`) is
/// `0`, the pointers must be properly aligned.
///
/// [`read`]: crate::ptr::read
/// [read-ownership]: crate::ptr::read#ownership-of-the-returned-value
/// [valid]: crate::ptr#safety
///
/// # Examples
///
/// Efficiently create a Rust vector from an unsafe buffer:
///
/// ```
/// use std::ptr;
///
/// /// # Safety
/// ///
/// /// * `ptr` must be correctly aligned for its type and non-zero.
/// /// * `ptr` must be valid for reads of `elts` contiguous elements of type `T`.
/// /// * Those elements must not be used after calling this function unless `T: Copy`.
/// # #[allow(dead_code)]
/// unsafe fn from_buf_raw<T>(ptr: *const T, elts: usize) -> Vec<T> {
///     let mut dst = Vec::with_capacity(elts);
///
///     // SAFETY: Our precondition ensures the source is aligned and valid,
///     // and `Vec::with_capacity` ensures that we have usable space to write them.
///     unsafe { ptr::copy(ptr, dst.as_mut_ptr(), elts); }
///
///     // SAFETY: We created it with this much capacity earlier,
///     // and the previous `copy` has initialized these elements.
///     unsafe { dst.set_len(elts); }
///     dst
/// }
/// ```
#[doc(alias = "memmove")]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_const_stable(feature = "const_intrinsic_copy", since = "1.83.0")]
#[inline(always)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
#[rustc_diagnostic_item = "ptr_copy"]
pub const unsafe fn copy<T>(src: *const T, dst: *mut T, count: usize) {
    // SAFETY: the safety contract for `copy` must be upheld by the caller.
    unsafe {
        ub_checks::assert_unsafe_precondition!(
            check_language_ub,
            "ptr::copy requires that both pointer arguments are aligned and non-null",
            (
                src: *const () = src as *const (),
                dst: *mut () = dst as *mut (),
                align: usize = align_of::<T>(),
                zero_size: bool = T::IS_ZST || count == 0,
            ) =>
            ub_checks::maybe_is_aligned_and_not_null(src, align, zero_size)
                && ub_checks::maybe_is_aligned_and_not_null(dst, align, zero_size)
        );
        crate::intrinsics::copy(src, dst, count)
    }
}

/// Sets `count * size_of::<T>()` bytes of memory starting at `dst` to
/// `val`.
///
/// `write_bytes` is similar to C's [`memset`], but sets `count *
/// size_of::<T>()` bytes to `val`.
///
/// [`memset`]: https://en.cppreference.com/w/c/string/byte/memset
///
/// # Safety
///
/// Behavior is undefined if any of the following conditions are violated:
///
/// * `dst` must be [valid] for writes of `count * size_of::<T>()` bytes.
///
/// * `dst` must be properly aligned.
///
/// Note that even if the effectively copied size (`count * size_of::<T>()`) is
/// `0`, the pointer must be properly aligned.
///
/// Additionally, note that changing `*dst` in this way can easily lead to undefined behavior (UB)
/// later if the written bytes are not a valid representation of some `T`. For instance, the
/// following is an **incorrect** use of this function:
///
/// ```rust,no_run
/// unsafe {
///     let mut value: u8 = 0;
///     let ptr: *mut bool = &mut value as *mut u8 as *mut bool;
///     let _bool = ptr.read(); // This is fine, `ptr` points to a valid `bool`.
///     ptr.write_bytes(42u8, 1); // This function itself does not cause UB...
///     let _bool = ptr.read(); // ...but it makes this operation UB! ⚠️
/// }
/// ```
///
/// [valid]: crate::ptr#safety
///
/// # Examples
///
/// Basic usage:
///
/// ```
/// use std::ptr;
///
/// let mut vec = vec![0u32; 4];
/// unsafe {
///     let vec_ptr = vec.as_mut_ptr();
///     ptr::write_bytes(vec_ptr, 0xfe, 2);
/// }
/// assert_eq!(vec, [0xfefefefe, 0xfefefefe, 0, 0]);
/// ```
#[doc(alias = "memset")]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_const_stable(feature = "const_ptr_write", since = "1.83.0")]
#[inline(always)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
#[rustc_diagnostic_item = "ptr_write_bytes"]
pub const unsafe fn write_bytes<T>(dst: *mut T, val: u8, count: usize) {
    // SAFETY: the safety contract for `write_bytes` must be upheld by the caller.
    unsafe {
        ub_checks::assert_unsafe_precondition!(
            check_language_ub,
            "ptr::write_bytes requires that the destination pointer is aligned and non-null",
            (
                addr: *const () = dst as *const (),
                align: usize = align_of::<T>(),
                zero_size: bool = T::IS_ZST || count == 0,
            ) => ub_checks::maybe_is_aligned_and_not_null(addr, align, zero_size)
        );
        crate::intrinsics::write_bytes(dst, val, count)
    }
}

/// Executes the destructor (if any) of the pointed-to value.
///
/// This is almost the same as calling [`ptr::read`] and discarding
/// the result, but has the following advantages:
// FIXME: say something more useful than "almost the same"?
// There are open questions here: `read` requires the value to be fully valid, e.g. if `T` is a
// `bool` it must be 0 or 1, if it is a reference then it must be dereferenceable. `drop_in_place`
// only requires that `*to_drop` be "valid for dropping" and we have not defined what that means. In
// Miri it currently (May 2024) requires nothing at all for types without drop glue.
///
/// * It is *required* to use `drop_in_place` to drop unsized types like
///   trait objects, because they can't be read out onto the stack and
///   dropped normally.
///
/// * It is friendlier to the optimizer to do this over [`ptr::read`] when
///   dropping manually allocated memory (e.g., in the implementations of
///   `Box`/`Rc`/`Vec`), as the compiler doesn't need to prove that it's
///   sound to elide the copy.
///
/// * It can be used to drop [pinned] data when `T` is not `repr(packed)`
///   (pinned data must not be moved before it is dropped).
///
/// Unaligned values cannot be dropped in place; they must be copied to an aligned
/// location first using [`ptr::read_unaligned`]. For packed structs, this move is
/// done automatically by the compiler. This means the fields of packed structs
/// are not dropped in-place.
///
/// [`ptr::read`]: self::read
/// [`ptr::read_unaligned`]: self::read_unaligned
/// [pinned]: crate::pin
///
/// # Safety
///
/// Behavior is undefined if any of the following conditions are violated:
///
/// * `to_drop` must be [valid] for both reads and writes.
///
/// * `to_drop` must be properly aligned, even if `T` has size 0.
///
/// * `to_drop` must be nonnull, even if `T` has size 0.
///
/// * The value `to_drop` points to must be valid for dropping, which may mean
///   it must uphold additional invariants. These invariants depend on the type
///   of the value being dropped. For instance, when dropping a Box, the box's
///   pointer to the heap must be valid.
///
/// * While `drop_in_place` is executing, the only way to access parts of
///   `to_drop` is through the `&mut self` references supplied to the
///   `Drop::drop` methods that `drop_in_place` invokes.
///
/// Additionally, if `T` is not [`Copy`], using the pointed-to value after
/// calling `drop_in_place` can cause undefined behavior. Note that `*to_drop =
/// foo` counts as a use because it will cause the value to be dropped
/// again. [`write()`] can be used to overwrite data without causing it to be
/// dropped.
///
/// [valid]: self#safety
///
/// # Examples
///
/// Manually remove the last item from a vector:
///
/// ```
/// use std::ptr;
/// use std::rc::Rc;
///
/// let last = Rc::new(1);
/// let weak = Rc::downgrade(&last);
///
/// let mut v = vec![Rc::new(0), last];
///
/// unsafe {
///     // Get a raw pointer to the last element in `v`.
///     let ptr = &mut v[1] as *mut _;
///     // Shorten `v` to prevent the last item from being dropped. We do that first,
///     // to prevent issues if the `drop_in_place` below panics.
///     v.set_len(1);
///     // Without a call to `drop_in_place`, the last item would never be dropped,
///     // and the memory it manages would be leaked.
///     ptr::drop_in_place(ptr);
/// }
///
/// assert_eq!(v, &[0.into()]);
///
/// // Ensure that the last item was dropped.
/// assert!(weak.upgrade().is_none());
/// ```
#[stable(feature = "drop_in_place", since = "1.8.0")]
#[lang = "drop_in_place"]
#[allow(unconditional_recursion)]
#[rustc_diagnostic_item = "ptr_drop_in_place"]
pub unsafe fn drop_in_place<T: PointeeSized>(to_drop: *mut T) {
    // Code here does not matter - this is replaced by the
    // real drop glue by the compiler.

    // SAFETY: see comment above
    unsafe { drop_in_place(to_drop) }
}

/// Creates a null raw pointer.
///
/// This function is equivalent to zero-initializing the pointer:
/// `MaybeUninit::<*const T>::zeroed().assume_init()`.
/// The resulting pointer has the address 0.
///
/// # Examples
///
/// ```
/// use std::ptr;
///
/// let p: *const i32 = ptr::null();
/// assert!(p.is_null());
/// assert_eq!(p as usize, 0); // this pointer has the address 0
/// ```
#[inline(always)]
#[must_use]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_promotable]
#[rustc_const_stable(feature = "const_ptr_null", since = "1.24.0")]
#[rustc_diagnostic_item = "ptr_null"]
pub const fn null<T: PointeeSized + Thin>() -> *const T {
    from_raw_parts(without_provenance::<()>(0), ())
}

/// Creates a null mutable raw pointer.
///
/// This function is equivalent to zero-initializing the pointer:
/// `MaybeUninit::<*mut T>::zeroed().assume_init()`.
/// The resulting pointer has the address 0.
///
/// # Examples
///
/// ```
/// use std::ptr;
///
/// let p: *mut i32 = ptr::null_mut();
/// assert!(p.is_null());
/// assert_eq!(p as usize, 0); // this pointer has the address 0
/// ```
#[inline(always)]
#[must_use]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_promotable]
#[rustc_const_stable(feature = "const_ptr_null", since = "1.24.0")]
#[rustc_diagnostic_item = "ptr_null_mut"]
pub const fn null_mut<T: PointeeSized + Thin>() -> *mut T {
    from_raw_parts_mut(without_provenance_mut::<()>(0), ())
}

/// Creates a pointer with the given address and no [provenance][crate::ptr#provenance].
///
/// This is equivalent to `ptr::null().with_addr(addr)`.
///
/// Without provenance, this pointer is not associated with any actual allocation. Such a
/// no-provenance pointer may be used for zero-sized memory accesses (if suitably aligned), but
/// non-zero-sized memory accesses with a no-provenance pointer are UB. No-provenance pointers are
/// little more than a `usize` address in disguise.
///
/// This is different from `addr as *const T`, which creates a pointer that picks up a previously
/// exposed provenance. See [`with_exposed_provenance`] for more details on that operation.
///
/// This is a [Strict Provenance][crate::ptr#strict-provenance] API.
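///
/// # Examples
///
/// A minimal sketch: the address is preserved, but the resulting pointer must never be
/// dereferenced.
///
/// ```
/// use std::ptr;
///
/// let p = ptr::without_provenance::<u8>(0x1);
/// assert_eq!(p.addr(), 0x1);
/// assert!(!p.is_null());
/// ```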
#[inline(always)]
#[must_use]
#[stable(feature = "strict_provenance", since = "1.84.0")]
#[rustc_const_stable(feature = "strict_provenance", since = "1.84.0")]
pub const fn without_provenance<T>(addr: usize) -> *const T {
    without_provenance_mut(addr)
}

/// Creates a new pointer that is dangling, but non-null and well-aligned.
///
/// This is useful for initializing types which lazily allocate, like
/// `Vec::new` does.
///
/// Note that the pointer value may potentially represent a valid pointer to
/// a `T`, which means this must not be used as a "not yet initialized"
/// sentinel value. Types that lazily allocate must track initialization by
/// some other means.
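///
/// # Examples
///
/// A minimal sketch of the guarantees: the pointer is non-null and well-aligned, but it may
/// not be dereferenced.
///
/// ```
/// use std::ptr;
///
/// let p = ptr::dangling::<u64>();
/// assert!(!p.is_null());
/// assert!(p.is_aligned());
/// ```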
#[inline(always)]
#[must_use]
#[stable(feature = "strict_provenance", since = "1.84.0")]
#[rustc_const_stable(feature = "strict_provenance", since = "1.84.0")]
pub const fn dangling<T>() -> *const T {
    dangling_mut()
}

/// Creates a pointer with the given address and no [provenance][crate::ptr#provenance].
///
/// This is equivalent to `ptr::null_mut().with_addr(addr)`.
///
/// Without provenance, this pointer is not associated with any actual allocation. Such a
/// no-provenance pointer may be used for zero-sized memory accesses (if suitably aligned), but
/// non-zero-sized memory accesses with a no-provenance pointer are UB. No-provenance pointers are
/// little more than a `usize` address in disguise.
///
/// This is different from `addr as *mut T`, which creates a pointer that picks up a previously
/// exposed provenance. See [`with_exposed_provenance_mut`] for more details on that operation.
///
/// This is a [Strict Provenance][crate::ptr#strict-provenance] API.
#[inline(always)]
#[must_use]
#[stable(feature = "strict_provenance", since = "1.84.0")]
#[rustc_const_stable(feature = "strict_provenance", since = "1.84.0")]
pub const fn without_provenance_mut<T>(addr: usize) -> *mut T {
    // An int-to-pointer transmute currently has exactly the intended semantics: it creates a
    // pointer without provenance. Note that this is *not* a stable guarantee about transmute
    // semantics, it relies on sysroot crates having special status.
    // SAFETY: every valid integer is also a valid pointer (as long as you don't dereference that
    // pointer).
    unsafe { mem::transmute(addr) }
}

/// Creates a new pointer that is dangling, but non-null and well-aligned.
///
/// This is useful for initializing types which lazily allocate, like
/// `Vec::new` does.
///
/// Note that the pointer value may potentially represent a valid pointer to
/// a `T`, which means this must not be used as a "not yet initialized"
/// sentinel value. Types that lazily allocate must track initialization by
/// some other means.
#[inline(always)]
#[must_use]
#[stable(feature = "strict_provenance", since = "1.84.0")]
#[rustc_const_stable(feature = "strict_provenance", since = "1.84.0")]
pub const fn dangling_mut<T>() -> *mut T {
    NonNull::dangling().as_ptr()
}

/// Converts an address back to a pointer, picking up some previously 'exposed'
/// [provenance][crate::ptr#provenance].
///
/// This is fully equivalent to `addr as *const T`. The provenance of the returned pointer is that
/// of *some* pointer that was previously exposed by passing it to
/// [`expose_provenance`][pointer::expose_provenance], or a `ptr as usize` cast. In addition, memory
/// which is outside the control of the Rust abstract machine (MMIO registers, for example) is
/// always considered to be accessible with an exposed provenance, so long as this memory is disjoint
/// from memory that will be used by the abstract machine such as the stack, heap, and statics.
///
/// The exact provenance that gets picked is not specified. The compiler will do its best to pick
/// the "right" provenance for you (whatever that may be), but currently we cannot provide any
/// guarantees about which provenance the resulting pointer will have -- and therefore there
/// is no definite specification for which memory the resulting pointer may access.
///
/// If there is *no* previously 'exposed' provenance that justifies the way the returned pointer
/// will be used, the program has undefined behavior. In particular, the aliasing rules still apply:
/// pointers and references that have been invalidated due to aliasing accesses cannot be used
/// anymore, even if they have been exposed!
///
/// Due to its inherent ambiguity, this operation may not be supported by tools that help you to
/// stay conformant with the Rust memory model. It is recommended to use [Strict
/// Provenance][self#strict-provenance] APIs such as [`with_addr`][pointer::with_addr] wherever
/// possible.
///
/// On most platforms this will produce a value with the same bytes as the address. Platforms
/// which need to store additional information in a pointer may not support this operation,
/// since it is generally not possible to actually *compute* which provenance the returned
/// pointer has to pick up.
///
/// This is an [Exposed Provenance][crate::ptr#exposed-provenance] API.
#[must_use]
#[inline(always)]
#[stable(feature = "exposed_provenance", since = "1.84.0")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
#[allow(fuzzy_provenance_casts)] // this *is* the explicit provenance API one should use instead
pub fn with_exposed_provenance<T>(addr: usize) -> *const T {
    addr as *const T
}

/// Converts an address back to a mutable pointer, picking up some previously 'exposed'
/// [provenance][crate::ptr#provenance].
///
/// This is fully equivalent to `addr as *mut T`. The provenance of the returned pointer is that
/// of *some* pointer that was previously exposed by passing it to
/// [`expose_provenance`][pointer::expose_provenance], or a `ptr as usize` cast. In addition, memory
/// which is outside the control of the Rust abstract machine (MMIO registers, for example) is
/// always considered to be accessible with an exposed provenance, so long as this memory is disjoint
/// from memory that will be used by the abstract machine such as the stack, heap, and statics.
///
/// The exact provenance that gets picked is not specified. The compiler will do its best to pick
/// the "right" provenance for you (whatever that may be), but currently we cannot provide any
/// guarantees about which provenance the resulting pointer will have -- and therefore there
/// is no definite specification for which memory the resulting pointer may access.
///
/// If there is *no* previously 'exposed' provenance that justifies the way the returned pointer
/// will be used, the program has undefined behavior. In particular, the aliasing rules still apply:
/// pointers and references that have been invalidated due to aliasing accesses cannot be used
/// anymore, even if they have been exposed!
///
/// Due to its inherent ambiguity, this operation may not be supported by tools that help you to
/// stay conformant with the Rust memory model. It is recommended to use [Strict
/// Provenance][self#strict-provenance] APIs such as [`with_addr`][pointer::with_addr] wherever
/// possible.
///
/// On most platforms this will produce a value with the same bytes as the address. Platforms
/// which need to store additional information in a pointer may not support this operation,
/// since it is generally not possible to actually *compute* which provenance the returned
/// pointer has to pick up.
///
/// This is an [Exposed Provenance][crate::ptr#exposed-provenance] API.
#[must_use]
#[inline(always)]
#[stable(feature = "exposed_provenance", since = "1.84.0")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
#[allow(fuzzy_provenance_casts)] // this *is* the explicit provenance API one should use instead
pub fn with_exposed_provenance_mut<T>(addr: usize) -> *mut T {
    addr as *mut T
}

/// Converts a reference to a raw pointer.
///
/// For `r: &T`, `from_ref(r)` is equivalent to `r as *const T` (except for the caveat noted below),
/// but is a bit safer since it will never silently change type or mutability, in particular if the
/// code is refactored.
///
/// The caller must ensure that the pointee outlives the pointer this function returns, or else it
/// will end up dangling.
///
/// The caller must also ensure that the memory the pointer (non-transitively) points to is never
/// written to (except inside an `UnsafeCell`) using this pointer or any pointer derived from it. If
/// you need to mutate the pointee, use [`from_mut`]. Specifically, to turn a mutable reference `m:
/// &mut T` into `*const T`, prefer `from_mut(m).cast_const()` to obtain a pointer that can later be
/// used for mutation.
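///
/// For example (a sketch of the pattern described above):
///
/// ```
/// use std::ptr;
///
/// let mut v = 5i32;
/// let p: *const i32 = ptr::from_mut(&mut v).cast_const();
/// assert_eq!(unsafe { *p }, 5);
/// // Because `p` was derived from a mutable reference, it may later be used
/// // for mutation as well.
/// unsafe { p.cast_mut().write(6) };
/// assert_eq!(v, 6);
/// ```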
///
/// ## Interaction with lifetime extension
///
/// Note that this has subtle interactions with the rules for lifetime extension of temporaries in
/// tail expressions. This code is valid, albeit in a non-obvious way:
/// ```rust
/// # type T = i32;
/// # fn foo() -> T { 42 }
/// // The temporary holding the return value of `foo` has its lifetime extended,
/// // because the surrounding expression involves no function call.
/// let p = &foo() as *const T;
/// unsafe { p.read() };
/// ```
/// Naively replacing the cast with `from_ref` is not valid:
/// ```rust,no_run
/// # use std::ptr;
/// # type T = i32;
/// # fn foo() -> T { 42 }
/// // The temporary holding the return value of `foo` does *not* have its lifetime extended,
/// // because the surrounding expression involves a function call.
/// let p = ptr::from_ref(&foo());
/// unsafe { p.read() }; // UB! Reading from a dangling pointer ⚠️
/// ```
/// The recommended way to write this code is to avoid relying on lifetime extension
/// when raw pointers are involved:
/// ```rust
/// # use std::ptr;
/// # type T = i32;
/// # fn foo() -> T { 42 }
/// let x = foo();
/// let p = ptr::from_ref(&x);
/// unsafe { p.read() };
/// ```
#[inline(always)]
#[must_use]
#[stable(feature = "ptr_from_ref", since = "1.76.0")]
#[rustc_const_stable(feature = "ptr_from_ref", since = "1.76.0")]
#[rustc_never_returns_null_ptr]
#[rustc_diagnostic_item = "ptr_from_ref"]
pub const fn from_ref<T: PointeeSized>(r: &T) -> *const T {
    r
}

/// Converts a mutable reference to a raw pointer.
///
/// For `r: &mut T`, `from_mut(r)` is equivalent to `r as *mut T` (except for the caveat noted
/// below), but is a bit safer since it will never silently change type or mutability, in particular
/// if the code is refactored.
///
/// The caller must ensure that the pointee outlives the pointer this function returns, or else it
/// will end up dangling.
///
/// ## Interaction with lifetime extension
///
/// Note that this has subtle interactions with the rules for lifetime extension of temporaries in
/// tail expressions. This code is valid, albeit in a non-obvious way:
/// ```rust
/// # type T = i32;
/// # fn foo() -> T { 42 }
/// // The temporary holding the return value of `foo` has its lifetime extended,
/// // because the surrounding expression involves no function call.
/// let p = &mut foo() as *mut T;
/// unsafe { p.write(T::default()) };
/// ```
/// Naively replacing the cast with `from_mut` is not valid:
/// ```rust,no_run
/// # use std::ptr;
/// # type T = i32;
/// # fn foo() -> T { 42 }
/// // The temporary holding the return value of `foo` does *not* have its lifetime extended,
/// // because the surrounding expression involves a function call.
/// let p = ptr::from_mut(&mut foo());
/// unsafe { p.write(T::default()) }; // UB! Writing to a dangling pointer ⚠️
/// ```
/// The recommended way to write this code is to avoid relying on lifetime extension
/// when raw pointers are involved:
/// ```rust
/// # use std::ptr;
/// # type T = i32;
/// # fn foo() -> T { 42 }
/// let mut x = foo();
/// let p = ptr::from_mut(&mut x);
/// unsafe { p.write(T::default()) };
/// ```
#[inline(always)]
#[must_use]
#[stable(feature = "ptr_from_ref", since = "1.76.0")]
#[rustc_const_stable(feature = "ptr_from_ref", since = "1.76.0")]
#[rustc_never_returns_null_ptr]
pub const fn from_mut<T: PointeeSized>(r: &mut T) -> *mut T {
    r
}

/// Forms a raw slice from a pointer and a length.
///
/// The `len` argument is the number of **elements**, not the number of bytes.
///
/// This function is safe, but actually using the return value is unsafe.
/// See the documentation of [`slice::from_raw_parts`] for slice safety requirements.
///
/// [`slice::from_raw_parts`]: crate::slice::from_raw_parts
///
/// # Examples
///
/// ```rust
/// use std::ptr;
///
/// // create a slice pointer when starting out with a pointer to the first element
/// let x = [5, 6, 7];
/// let raw_pointer = x.as_ptr();
/// let slice = ptr::slice_from_raw_parts(raw_pointer, 3);
/// assert_eq!(unsafe { &*slice }[2], 7);
/// ```
///
/// You must ensure that the pointer is valid and not null before dereferencing
/// the raw slice. A slice reference must never have a null pointer, even if it's empty.
///
/// ```rust,should_panic
/// use std::ptr;
/// let danger: *const [u8] = ptr::slice_from_raw_parts(ptr::null(), 0);
/// unsafe {
///     danger.as_ref().expect("references must not be null");
/// }
/// ```
#[inline]
#[stable(feature = "slice_from_raw_parts", since = "1.42.0")]
#[rustc_const_stable(feature = "const_slice_from_raw_parts", since = "1.64.0")]
#[rustc_diagnostic_item = "ptr_slice_from_raw_parts"]
pub const fn slice_from_raw_parts<T>(data: *const T, len: usize) -> *const [T] {
    from_raw_parts(data, len)
}

/// Forms a raw mutable slice from a pointer and a length.
///
/// The `len` argument is the number of **elements**, not the number of bytes.
///
/// Performs the same functionality as [`slice_from_raw_parts`], except that a
/// raw mutable slice is returned, as opposed to a raw immutable slice.
///
/// This function is safe, but actually using the return value is unsafe.
/// See the documentation of [`slice::from_raw_parts_mut`] for slice safety requirements.
///
/// [`slice::from_raw_parts_mut`]: crate::slice::from_raw_parts_mut
///
/// # Examples
///
/// ```rust
/// use std::ptr;
///
/// let x = &mut [5, 6, 7];
/// let raw_pointer = x.as_mut_ptr();
/// let slice = ptr::slice_from_raw_parts_mut(raw_pointer, 3);
///
/// unsafe {
///     (*slice)[2] = 99; // assign a value at an index in the slice
/// };
///
/// assert_eq!(unsafe { &*slice }[2], 99);
/// ```
///
/// You must ensure that the pointer is valid and not null before dereferencing
/// the raw slice. A slice reference must never have a null pointer, even if it's empty.
///
/// ```rust,should_panic
/// use std::ptr;
/// let danger: *mut [u8] = ptr::slice_from_raw_parts_mut(ptr::null_mut(), 0);
/// unsafe {
///     danger.as_mut().expect("references must not be null");
/// }
/// ```
#[inline]
#[stable(feature = "slice_from_raw_parts", since = "1.42.0")]
#[rustc_const_stable(feature = "const_slice_from_raw_parts_mut", since = "1.83.0")]
#[rustc_diagnostic_item = "ptr_slice_from_raw_parts_mut"]
pub const fn slice_from_raw_parts_mut<T>(data: *mut T, len: usize) -> *mut [T] {
    from_raw_parts_mut(data, len)
}

/// Swaps the values at two mutable locations of the same type, without
1216/// deinitializing either.
1217///
1218/// But for the following exceptions, this function is semantically
1219/// equivalent to [`mem::swap`]:
1220///
1221/// * It operates on raw pointers instead of references. When references are
1222///   available, [`mem::swap`] should be preferred.
1223///
1224/// * The two pointed-to values may overlap. If the values do overlap, then the
1225///   overlapping region of memory from `x` will be used. This is demonstrated
1226///   in the second example below.
1227///
1228/// * The operation is "untyped" in the sense that data may be uninitialized or otherwise violate
1229///   the requirements of `T`. The initialization state is preserved exactly.
1230///
1231/// # Safety
1232///
1233/// Behavior is undefined if any of the following conditions are violated:
1234///
1235/// * Both `x` and `y` must be [valid] for both reads and writes. They must remain valid even when the
1236///   other pointer is written. (This means if the memory ranges overlap, the two pointers must not
1237///   be subject to aliasing restrictions relative to each other.)
1238///
1239/// * Both `x` and `y` must be properly aligned.
1240///
1241/// Note that even if `T` has size `0`, the pointers must be properly aligned.
1242///
1243/// [valid]: self#safety
1244///
1245/// # Examples
1246///
1247/// Swapping two non-overlapping regions:
1248///
1249/// ```
1250/// use std::ptr;
1251///
1252/// let mut array = [0, 1, 2, 3];
1253///
1254/// let (x, y) = array.split_at_mut(2);
1255/// let x = x.as_mut_ptr().cast::<[u32; 2]>(); // this is `array[0..2]`
1256/// let y = y.as_mut_ptr().cast::<[u32; 2]>(); // this is `array[2..4]`
1257///
1258/// unsafe {
1259///     ptr::swap(x, y);
1260///     assert_eq!([2, 3, 0, 1], array);
1261/// }
1262/// ```
1263///
1264/// Swapping two overlapping regions:
1265///
1266/// ```
1267/// use std::ptr;
1268///
1269/// let mut array: [i32; 4] = [0, 1, 2, 3];
1270///
1271/// let array_ptr: *mut i32 = array.as_mut_ptr();
1272///
1273/// let x = array_ptr as *mut [i32; 3]; // this is `array[0..3]`
1274/// let y = unsafe { array_ptr.add(1) } as *mut [i32; 3]; // this is `array[1..4]`
1275///
1276/// unsafe {
1277///     ptr::swap(x, y);
1278///     // The indices `1..3` of the slice overlap between `x` and `y`.
///     // Reasonable results would be for them to be `[2, 3]`, so that indices `0..3` are
///     // `[1, 2, 3]` (matching `y` before the `swap`); or for them to be `[0, 1]`
///     // so that indices `1..4` are `[0, 1, 2]` (matching `x` before the `swap`).
///     // This implementation is defined to make the latter choice.
///     assert_eq!([1, 0, 1, 2], array);
/// }
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_const_stable(feature = "const_swap", since = "1.85.0")]
#[rustc_diagnostic_item = "ptr_swap"]
pub const unsafe fn swap<T>(x: *mut T, y: *mut T) {
    // Give ourselves some scratch space to work with.
    // We do not have to worry about drops: `MaybeUninit` does nothing when dropped.
    let mut tmp = MaybeUninit::<T>::uninit();

    // Perform the swap
    // SAFETY: the caller must guarantee that `x` and `y` are
    // valid for writes and properly aligned. `tmp` cannot be
    // overlapping either `x` or `y` because `tmp` was just allocated
    // on the stack as a separate allocation.
    unsafe {
        copy_nonoverlapping(x, tmp.as_mut_ptr(), 1);
        copy(y, x, 1); // `x` and `y` may overlap
        copy_nonoverlapping(tmp.as_ptr(), y, 1);
    }
}

/// Swaps `count * size_of::<T>()` bytes between the two regions of memory
/// beginning at `x` and `y`. The two regions must *not* overlap.
///
/// The operation is "untyped" in the sense that data may be uninitialized or otherwise violate the
/// requirements of `T`. The initialization state is preserved exactly.
///
/// # Safety
///
/// Behavior is undefined if any of the following conditions are violated:
///
/// * Both `x` and `y` must be [valid] for both reads and writes of `count *
///   size_of::<T>()` bytes.
///
/// * Both `x` and `y` must be properly aligned.
///
/// * The region of memory beginning at `x` with a size of `count *
///   size_of::<T>()` bytes must *not* overlap with the region of memory
///   beginning at `y` with the same size.
///
/// Note that even if the effectively copied size (`count * size_of::<T>()`) is `0`,
/// the pointers must be properly aligned.
///
/// [valid]: self#safety
///
/// # Examples
///
/// Basic usage:
///
/// ```
/// use std::ptr;
///
/// let mut x = [1, 2, 3, 4];
/// let mut y = [7, 8, 9];
///
/// unsafe {
///     ptr::swap_nonoverlapping(x.as_mut_ptr(), y.as_mut_ptr(), 2);
/// }
///
/// assert_eq!(x, [7, 8, 3, 4]);
/// assert_eq!(y, [1, 2, 9]);
/// ```
///
/// # Const evaluation limitations
///
/// If this function is invoked during const-evaluation, the current implementation has a small (and
/// rarely relevant) limitation: if `count` is at least 2 and the data pointed to by `x` or `y`
/// contains a pointer that crosses the boundary of two `T`-sized chunks of memory, the function may
/// fail to evaluate (similar to a panic during const-evaluation). This behavior may change in the
/// future.
///
/// The limitation is illustrated by the following example:
///
/// ```
/// use std::mem::size_of;
/// use std::ptr;
///
/// const { unsafe {
///     const PTR_SIZE: usize = size_of::<*const i32>();
///     let mut data1 = [0u8; PTR_SIZE];
///     let mut data2 = [0u8; PTR_SIZE];
///     // Store a pointer in `data1`.
///     data1.as_mut_ptr().cast::<*const i32>().write_unaligned(&42);
///     // Swap the contents of `data1` and `data2` by swapping `PTR_SIZE` many `u8`-sized chunks.
///     // This call will fail, because the pointer in `data1` crosses the boundary
///     // between several of the 1-byte chunks that are being swapped here.
///     //ptr::swap_nonoverlapping(data1.as_mut_ptr(), data2.as_mut_ptr(), PTR_SIZE);
///     // Swap the contents of `data1` and `data2` by swapping a single chunk of size
///     // `[u8; PTR_SIZE]`. That works, as there is no pointer crossing the boundary between
///     // two chunks.
///     ptr::swap_nonoverlapping(&mut data1, &mut data2, 1);
///     // Read the pointer from `data2` and dereference it.
///     let ptr = data2.as_ptr().cast::<*const i32>().read_unaligned();
///     assert!(*ptr == 42);
/// } }
/// ```
#[inline]
#[stable(feature = "swap_nonoverlapping", since = "1.27.0")]
#[rustc_const_stable(feature = "const_swap_nonoverlapping", since = "1.88.0")]
#[rustc_diagnostic_item = "ptr_swap_nonoverlapping"]
#[rustc_allow_const_fn_unstable(const_eval_select)] // both implementations behave the same
#[track_caller]
pub const unsafe fn swap_nonoverlapping<T>(x: *mut T, y: *mut T, count: usize) {
    ub_checks::assert_unsafe_precondition!(
        check_library_ub,
        "ptr::swap_nonoverlapping requires that both pointer arguments are aligned and non-null \
        and the specified memory ranges do not overlap",
        (
            x: *mut () = x as *mut (),
            y: *mut () = y as *mut (),
            size: usize = size_of::<T>(),
            align: usize = align_of::<T>(),
            count: usize = count,
        ) => {
            let zero_size = size == 0 || count == 0;
            ub_checks::maybe_is_aligned_and_not_null(x, align, zero_size)
                && ub_checks::maybe_is_aligned_and_not_null(y, align, zero_size)
                && ub_checks::maybe_is_nonoverlapping(x, y, size, count)
        }
    );

    const_eval_select!(
        @capture[T] { x: *mut T, y: *mut T, count: usize }:
        if const {
            // At compile-time we want to always copy this in chunks of `T`, to ensure that if there
            // are pointers inside `T` we will copy them in one go rather than trying to copy a part
            // of a pointer (which would not work).
            // SAFETY: Same preconditions as this function
            unsafe { swap_nonoverlapping_const(x, y, count) }
        } else {
            // Going through a slice here helps codegen know the size fits in `isize`
            let slice = slice_from_raw_parts_mut(x, count);
            // SAFETY: This is all readable from the pointer, meaning it's one
            // allocation, and thus cannot be more than isize::MAX bytes.
            let bytes = unsafe { mem::size_of_val_raw::<[T]>(slice) };
            if let Some(bytes) = NonZero::new(bytes) {
                // SAFETY: These are the same ranges, just expressed in a different
                // type, so they're still non-overlapping.
                unsafe { swap_nonoverlapping_bytes(x.cast(), y.cast(), bytes) };
            }
        }
    )
}

/// Same behavior and safety conditions as [`swap_nonoverlapping`]
#[inline]
const unsafe fn swap_nonoverlapping_const<T>(x: *mut T, y: *mut T, count: usize) {
    let mut i = 0;
    while i < count {
        // SAFETY: By precondition, `i` is in-bounds because it's below `count`
        let x = unsafe { x.add(i) };
        // SAFETY: By precondition, `i` is in-bounds because it's below `count`
        // and it's distinct from `x` since the ranges are non-overlapping
        let y = unsafe { y.add(i) };

        // SAFETY: we're only ever given pointers that are valid to read/write,
        // including being aligned, and nothing here panics so it's drop-safe.
        unsafe {
            // Note that it's critical that these use `copy_nonoverlapping`,
            // rather than `read`/`write`, to avoid #134713 if T has padding.
            let mut temp = MaybeUninit::<T>::uninit();
            copy_nonoverlapping(x, temp.as_mut_ptr(), 1);
            copy_nonoverlapping(y, x, 1);
            copy_nonoverlapping(temp.as_ptr(), y, 1);
        }

        i += 1;
    }
}

// Don't let MIR inline this, because we really want it to keep its noalias metadata
#[rustc_no_mir_inline]
#[inline]
fn swap_chunk<const N: usize>(x: &mut MaybeUninit<[u8; N]>, y: &mut MaybeUninit<[u8; N]>) {
    let a = *x;
    let b = *y;
    *x = b;
    *y = a;
}

#[inline]
unsafe fn swap_nonoverlapping_bytes(x: *mut u8, y: *mut u8, bytes: NonZero<usize>) {
    // Same as `swap_nonoverlapping::<[u8; N]>`.
    unsafe fn swap_nonoverlapping_chunks<const N: usize>(
        x: *mut MaybeUninit<[u8; N]>,
        y: *mut MaybeUninit<[u8; N]>,
        chunks: NonZero<usize>,
    ) {
        let chunks = chunks.get();
        for i in 0..chunks {
            // SAFETY: i is in [0, chunks) so the adds and dereferences are in-bounds.
            unsafe { swap_chunk(&mut *x.add(i), &mut *y.add(i)) };
        }
    }

    // Same as `swap_nonoverlapping_bytes`, but accepts at most 1+2+4=7 bytes
    #[inline]
    unsafe fn swap_nonoverlapping_short(x: *mut u8, y: *mut u8, bytes: NonZero<usize>) {
        // Tail handling for auto-vectorized code sometimes has element-at-a-time behaviour,
        // see <https://github.com/rust-lang/rust/issues/134946>.
        // By swapping as different sizes, rather than as a loop over bytes,
        // we make sure not to end up with, say, seven byte-at-a-time copies.
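        //
        // For example, `bytes == 7` (binary `0b111`) performs one 4-byte swap at
        // offset 0, one 2-byte swap at offset 4, and one 1-byte swap at offset 6,
        // rather than seven single-byte swaps.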

        let bytes = bytes.get();
        let mut i = 0;
        macro_rules! swap_prefix {
            ($($n:literal)+) => {$(
                if (bytes & $n) != 0 {
                    // SAFETY: `i` can only have the same bits set as those in bytes,
                    // so these `add`s are in-bounds of `bytes`.  But the bit for
                    // `$n` hasn't been set yet, so the `$n` bytes that `swap_chunk`
                    // will read and write are within the usable range.
                    unsafe { swap_chunk::<$n>(&mut*x.add(i).cast(), &mut*y.add(i).cast()) };
                    i |= $n;
                }
            )+};
        }
        swap_prefix!(4 2 1);
        debug_assert_eq!(i, bytes);
    }

    const CHUNK_SIZE: usize = size_of::<*const ()>();
    let bytes = bytes.get();

    let chunks = bytes / CHUNK_SIZE;
    let tail = bytes % CHUNK_SIZE;
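    // For example, on a 64-bit target (`CHUNK_SIZE == 8`), `bytes == 23` gives two
    // 8-byte chunks and a 7-byte tail handled by `swap_nonoverlapping_short`.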
    if let Some(chunks) = NonZero::new(chunks) {
        // SAFETY: this is bytes/CHUNK_SIZE*CHUNK_SIZE bytes, which is <= bytes,
        // so it's within the range of our non-overlapping bytes.
        unsafe { swap_nonoverlapping_chunks::<CHUNK_SIZE>(x.cast(), y.cast(), chunks) };
    }
    if let Some(tail) = NonZero::new(tail) {
        const { assert!(CHUNK_SIZE <= 8) };
        let delta = chunks * CHUNK_SIZE;
        // SAFETY: the tail length is below `CHUNK_SIZE` because of the remainder,
        // and `CHUNK_SIZE` is at most 8 by the const assert, so tail <= 7
        unsafe { swap_nonoverlapping_short(x.add(delta), y.add(delta), tail) };
    }
}

/// Moves `src` into the pointed `dst`, returning the previous `dst` value.
///
/// Neither value is dropped.
///
/// This function is semantically equivalent to [`mem::replace`] except that it
/// operates on raw pointers instead of references. When references are
/// available, [`mem::replace`] should be preferred.
///
/// # Safety
///
/// Behavior is undefined if any of the following conditions are violated:
///
/// * `dst` must be [valid] for both reads and writes.
///
/// * `dst` must be properly aligned.
///
/// * `dst` must point to a properly initialized value of type `T`.
///
/// Note that even if `T` has size `0`, the pointer must be properly aligned.
///
/// [valid]: self#safety
///
/// # Examples
///
/// ```
/// use std::ptr;
///
/// let mut rust = vec!['b', 'u', 's', 't'];
///
/// // `mem::replace` would have the same effect without requiring the unsafe
/// // block.
/// let b = unsafe {
///     ptr::replace(&mut rust[0], 'r')
/// };
///
/// assert_eq!(b, 'b');
/// assert_eq!(rust, &['r', 'u', 's', 't']);
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_const_stable(feature = "const_replace", since = "1.83.0")]
#[rustc_diagnostic_item = "ptr_replace"]
#[track_caller]
pub const unsafe fn replace<T>(dst: *mut T, src: T) -> T {
    // SAFETY: the caller must guarantee that `dst` is valid to be
    // cast to a mutable reference (valid for writes, aligned, initialized),
    // and cannot overlap `src` since `dst` must point to a distinct
    // allocation.
    unsafe {
        ub_checks::assert_unsafe_precondition!(
            check_language_ub,
            "ptr::replace requires that the pointer argument is aligned and non-null",
            (
                addr: *const () = dst as *const (),
                align: usize = align_of::<T>(),
                is_zst: bool = T::IS_ZST,
            ) => ub_checks::maybe_is_aligned_and_not_null(addr, align, is_zst)
        );
        mem::replace(&mut *dst, src)
    }
}

/// Reads the value from `src` without moving it. This leaves the
/// memory in `src` unchanged.
///
/// # Safety
///
/// Behavior is undefined if any of the following conditions are violated:
///
/// * `src` must be [valid] for reads.
///
/// * `src` must be properly aligned. Use [`read_unaligned`] if this is not the
///   case.
///
/// * `src` must point to a properly initialized value of type `T`.
///
/// Note that even if `T` has size `0`, the pointer must be properly aligned.
///
/// # Examples
///
/// Basic usage:
///
/// ```
/// let x = 12;
/// let y = &x as *const i32;
///
/// unsafe {
///     assert_eq!(std::ptr::read(y), 12);
/// }
/// ```
///
/// Manually implement [`mem::swap`]:
///
/// ```
/// use std::ptr;
///
/// fn swap<T>(a: &mut T, b: &mut T) {
///     unsafe {
///         // Create a bitwise copy of the value at `a` in `tmp`.
///         let tmp = ptr::read(a);
///
///         // Exiting at this point (either by explicitly returning or by
///         // calling a function which panics) would cause the value in `tmp` to
///         // be dropped while the same value is still referenced by `a`. This
///         // could trigger undefined behavior if `T` is not `Copy`.
///
///         // Create a bitwise copy of the value at `b` in `a`.
///         // This is safe because mutable references cannot alias.
///         ptr::copy_nonoverlapping(b, a, 1);
///
///         // As above, exiting here could trigger undefined behavior because
///         // the same value is referenced by `a` and `b`.
///
///         // Move `tmp` into `b`.
///         ptr::write(b, tmp);
///
///         // `tmp` has been moved (`write` takes ownership of its second argument),
///         // so nothing is dropped implicitly here.
///     }
/// }
///
/// let mut foo = "foo".to_owned();
/// let mut bar = "bar".to_owned();
///
/// swap(&mut foo, &mut bar);
///
/// assert_eq!(foo, "bar");
/// assert_eq!(bar, "foo");
/// ```
///
/// ## Ownership of the Returned Value
///
/// `read` creates a bitwise copy of `T`, regardless of whether `T` is [`Copy`].
/// If `T` is not [`Copy`], using both the returned value and the value at
/// `*src` can violate memory safety. Note that assigning to `*src` counts as a
/// use because it will attempt to drop the value at `*src`.
///
/// [`write()`] can be used to overwrite data without causing it to be dropped.
///
/// ```
/// use std::ptr;
///
/// let mut s = String::from("foo");
/// unsafe {
///     // `s2` now points to the same underlying memory as `s`.
///     let mut s2: String = ptr::read(&s);
///
///     assert_eq!(s2, "foo");
///
///     // Assigning to `s2` causes its original value to be dropped. Beyond
///     // this point, `s` must no longer be used, as the underlying memory has
///     // been freed.
///     s2 = String::default();
///     assert_eq!(s2, "");
///
///     // Assigning to `s` would cause the old value to be dropped again,
///     // resulting in undefined behavior.
///     // s = String::from("bar"); // ERROR
///
///     // `ptr::write` can be used to overwrite a value without dropping it.
///     ptr::write(&mut s, String::from("bar"));
/// }
///
/// assert_eq!(s, "bar");
/// ```
///
/// [valid]: self#safety
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_const_stable(feature = "const_ptr_read", since = "1.71.0")]
#[track_caller]
#[rustc_diagnostic_item = "ptr_read"]
pub const unsafe fn read<T>(src: *const T) -> T {
    // It would be semantically correct to implement this via `copy_nonoverlapping`
    // and `MaybeUninit`, as was done before PR #109035. Calling `assume_init`
    // provides enough information to know that this is a typed operation.

    // However, as of March 2023 the compiler was not capable of taking advantage
    // of that information. Thus, the implementation here switched to an intrinsic,
    // which lowers to `_0 = *src` in MIR, to address a few issues:
    //
    // - Using `MaybeUninit::assume_init` after a `copy_nonoverlapping` was not
    //   turning the untyped copy into a typed load. As such, the generated
    //   `load` in LLVM didn't get various metadata, such as `!range` (#73258),
    //   `!nonnull`, and `!noundef`, resulting in poorer optimization.
    // - Going through the extra local resulted in multiple extra copies, even
    //   in optimized MIR.  (Ignoring StorageLive/Dead, the intrinsic is one
    //   MIR statement, while the previous implementation was eight.)  LLVM
    //   could sometimes optimize them away, but because `read` is at the core
    //   of so many things, not having them in the first place improves what we
    //   hand off to the backend.  For example, `mem::replace::<Big>` previously
    //   emitted 4 `alloca` and 6 `memcpy`s, but is now 1 `alloca` and 3 `memcpy`s.
    // - In general, this approach keeps us from getting any more bugs (like
    //   #106369) that boil down to "`read(p)` is worse than `*p`", as this
    //   makes them look identical to the backend (or other MIR consumers).
    //
    // Future enhancements to MIR optimizations might well allow this to return
    // to the previous implementation, rather than using an intrinsic.

    // SAFETY: the caller must guarantee that `src` is valid for reads.
    unsafe {
        #[cfg(debug_assertions)] // Too expensive to always enable (for now?)
        ub_checks::assert_unsafe_precondition!(
            check_language_ub,
            "ptr::read requires that the pointer argument is aligned and non-null",
            (
                addr: *const () = src as *const (),
                align: usize = align_of::<T>(),
                is_zst: bool = T::IS_ZST,
            ) => ub_checks::maybe_is_aligned_and_not_null(addr, align, is_zst)
        );
        crate::intrinsics::read_via_copy(src)
    }
}

/// Reads the value from `src` without moving it. This leaves the
/// memory in `src` unchanged.
///
/// Unlike [`read`], `read_unaligned` works with unaligned pointers.
///
/// # Safety
///
/// Behavior is undefined if any of the following conditions are violated:
///
/// * `src` must be [valid] for reads.
///
/// * `src` must point to a properly initialized value of type `T`.
///
/// Like [`read`], `read_unaligned` creates a bitwise copy of `T`, regardless of
/// whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the returned
/// value and the value at `*src` can [violate memory safety][read-ownership].
///
/// [read-ownership]: read#ownership-of-the-returned-value
/// [valid]: self#safety
///
/// ## On `packed` structs
///
/// Attempting to create a raw pointer to an `unaligned` struct field with
/// an expression such as `&packed.unaligned as *const FieldType` creates an
/// intermediate unaligned reference before converting that to a raw pointer.
/// That this reference is temporary and immediately cast is inconsequential
/// as the compiler always expects references to be properly aligned.
/// As a result, using `&packed.unaligned as *const FieldType` causes immediate
/// *undefined behavior* in your program.
///
/// Instead you must use the `&raw const` syntax to create the pointer.
/// You may use that constructed pointer together with this function.
///
/// An example of what not to do and how this relates to `read_unaligned` is:
///
/// ```
/// #[repr(packed, C)]
/// struct Packed {
///     _padding: u8,
///     unaligned: u32,
/// }
///
/// let packed = Packed {
///     _padding: 0x00,
///     unaligned: 0x01020304,
/// };
///
/// // Take the address of a 32-bit integer which is not aligned.
/// // In contrast to `&packed.unaligned as *const _`, this has no undefined behavior.
/// let unaligned = &raw const packed.unaligned;
///
/// let v = unsafe { std::ptr::read_unaligned(unaligned) };
/// assert_eq!(v, 0x01020304);
/// ```
///
/// Accessing unaligned fields directly with e.g. `packed.unaligned` is safe however.
///
/// # Examples
///
/// Read a `usize` value from a byte buffer:
///
/// ```
/// fn read_usize(x: &[u8]) -> usize {
///     assert!(x.len() >= size_of::<usize>());
///
///     let ptr = x.as_ptr() as *const usize;
///
///     unsafe { ptr.read_unaligned() }
/// }
/// ```
#[inline]
#[stable(feature = "ptr_unaligned", since = "1.17.0")]
#[rustc_const_stable(feature = "const_ptr_read", since = "1.71.0")]
#[track_caller]
#[rustc_diagnostic_item = "ptr_read_unaligned"]
pub const unsafe fn read_unaligned<T>(src: *const T) -> T {
    let mut tmp = MaybeUninit::<T>::uninit();
    // SAFETY: the caller must guarantee that `src` is valid for reads.
    // `src` cannot overlap `tmp` because `tmp` was just allocated on
    // the stack as a separate allocation.
    //
    // Also, since we just wrote a valid value into `tmp`, it is guaranteed
    // to be properly initialized.
    unsafe {
        copy_nonoverlapping(src as *const u8, tmp.as_mut_ptr() as *mut u8, size_of::<T>());
        tmp.assume_init()
    }
}

/// Overwrites a memory location with the given value without reading or
/// dropping the old value.
///
/// `write` does not drop the contents of `dst`. This is safe, but it could leak
/// allocations or resources, so care should be taken not to overwrite an object
/// that should be dropped.
///
/// Additionally, it does not drop `src`. Semantically, `src` is moved into the
/// location pointed to by `dst`.
///
/// This is appropriate for initializing uninitialized memory, or overwriting
/// memory that has previously been [`read`] from.
///
/// # Safety
///
/// Behavior is undefined if any of the following conditions are violated:
///
/// * `dst` must be [valid] for writes.
///
/// * `dst` must be properly aligned. Use [`write_unaligned`] if this is not the
///   case.
///
/// Note that even if `T` has size `0`, the pointer must be properly aligned.
///
/// [valid]: self#safety
///
/// # Examples
///
/// Basic usage:
///
/// ```
/// let mut x = 0;
/// let y = &mut x as *mut i32;
/// let z = 12;
///
/// unsafe {
///     std::ptr::write(y, z);
///     assert_eq!(std::ptr::read(y), 12);
/// }
/// ```
///
/// Manually implement [`mem::swap`]:
///
/// ```
/// use std::ptr;
///
/// fn swap<T>(a: &mut T, b: &mut T) {
///     unsafe {
///         // Create a bitwise copy of the value at `a` in `tmp`.
///         let tmp = ptr::read(a);
///
///         // Exiting at this point (either by explicitly returning or by
///         // calling a function which panics) would cause the value in `tmp` to
///         // be dropped while the same value is still referenced by `a`. This
///         // could trigger undefined behavior if `T` is not `Copy`.
///
///         // Create a bitwise copy of the value at `b` in `a`.
///         // This is safe because mutable references cannot alias.
///         ptr::copy_nonoverlapping(b, a, 1);
///
///         // As above, exiting here could trigger undefined behavior because
///         // the same value is referenced by `a` and `b`.
///
///         // Move `tmp` into `b`.
///         ptr::write(b, tmp);
///
///         // `tmp` has been moved (`write` takes ownership of its second argument),
///         // so nothing is dropped implicitly here.
///     }
/// }
///
/// let mut foo = "foo".to_owned();
/// let mut bar = "bar".to_owned();
///
/// swap(&mut foo, &mut bar);
///
/// assert_eq!(foo, "bar");
/// assert_eq!(bar, "foo");
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_const_stable(feature = "const_ptr_write", since = "1.83.0")]
#[rustc_diagnostic_item = "ptr_write"]
#[track_caller]
pub const unsafe fn write<T>(dst: *mut T, src: T) {
    // Semantically, it would be fine for this to be implemented as a
    // `copy_nonoverlapping` and appropriate drop suppression of `src`.

    // However, implementing via that currently produces more MIR than is ideal.
    // Using an intrinsic keeps it down to just the simple `*dst = move src` in
    // MIR (11 statements shorter, at the time of writing), and also allows
    // `src` to stay an SSA value in codegen_ssa, rather than a memory one.

    // SAFETY: the caller must guarantee that `dst` is valid for writes.
    // `dst` cannot overlap `src` because the caller has mutable access
    // to `dst` while `src` is owned by this function.
    unsafe {
        #[cfg(debug_assertions)] // Too expensive to always enable (for now?)
        ub_checks::assert_unsafe_precondition!(
            check_language_ub,
            "ptr::write requires that the pointer argument is aligned and non-null",
            (
                addr: *mut () = dst as *mut (),
                align: usize = align_of::<T>(),
                is_zst: bool = T::IS_ZST,
            ) => ub_checks::maybe_is_aligned_and_not_null(addr, align, is_zst)
        );
        intrinsics::write_via_move(dst, src)
    }
}

/// Overwrites a memory location with the given value without reading or
/// dropping the old value.
///
/// Unlike [`write()`], the pointer may be unaligned.
///
/// `write_unaligned` does not drop the contents of `dst`. This is safe, but it
/// could leak allocations or resources, so care should be taken not to overwrite
/// an object that should be dropped.
///
/// Additionally, it does not drop `src`. Semantically, `src` is moved into the
/// location pointed to by `dst`.
///
/// This is appropriate for initializing uninitialized memory, or overwriting
/// memory that has previously been read with [`read_unaligned`].
///
/// # Safety
///
/// Behavior is undefined if any of the following conditions are violated:
///
/// * `dst` must be [valid] for writes.
///
/// [valid]: self#safety
///
/// ## On `packed` structs
///
/// Attempting to create a raw pointer to an `unaligned` struct field with
/// an expression such as `&packed.unaligned as *const FieldType` creates an
/// intermediate unaligned reference before converting that to a raw pointer.
/// That this reference is temporary and immediately cast is inconsequential
/// as the compiler always expects references to be properly aligned.
/// As a result, using `&packed.unaligned as *const FieldType` causes immediate
/// *undefined behavior* in your program.
///
/// Instead, you must use the `&raw mut` syntax to create the pointer.
/// You may use that constructed pointer together with this function.
///
/// An example of how to do it and how this relates to `write_unaligned` is:
///
/// ```
/// #[repr(packed, C)]
/// struct Packed {
///     _padding: u8,
///     unaligned: u32,
/// }
///
/// let mut packed: Packed = unsafe { std::mem::zeroed() };
///
/// // Take the address of a 32-bit integer which is not aligned.
/// // In contrast to `&packed.unaligned as *mut _`, this has no undefined behavior.
/// let unaligned = &raw mut packed.unaligned;
///
/// unsafe { std::ptr::write_unaligned(unaligned, 42) };
///
/// assert_eq!({packed.unaligned}, 42); // `{...}` forces copying the field instead of creating a reference.
/// ```
///
/// Accessing unaligned fields directly with e.g. `packed.unaligned` is safe however
/// (as can be seen in the `assert_eq!` above).
///
/// # Examples
///
/// Write a `usize` value to a byte buffer:
///
/// ```
/// fn write_usize(x: &mut [u8], val: usize) {
///     assert!(x.len() >= size_of::<usize>());
///
///     let ptr = x.as_mut_ptr() as *mut usize;
///
///     unsafe { ptr.write_unaligned(val) }
/// }
/// ```
#[inline]
#[stable(feature = "ptr_unaligned", since = "1.17.0")]
#[rustc_const_stable(feature = "const_ptr_write", since = "1.83.0")]
#[rustc_diagnostic_item = "ptr_write_unaligned"]
#[track_caller]
pub const unsafe fn write_unaligned<T>(dst: *mut T, src: T) {
    // SAFETY: the caller must guarantee that `dst` is valid for writes.
    // `dst` cannot overlap `src` because the caller has mutable access
    // to `dst` while `src` is owned by this function.
    unsafe {
        copy_nonoverlapping((&raw const src) as *const u8, dst as *mut u8, size_of::<T>());
        // We are calling the intrinsic directly to avoid function calls in the generated code.
        intrinsics::forget(src);
    }
}

/// Performs a volatile read of the value from `src` without moving it.
///
/// Volatile operations are intended to act on I/O memory. As such, they are considered externally
/// observable events (just like syscalls, but less opaque), and are guaranteed to not be elided or
/// reordered by the compiler across other externally observable events. With this in mind, there
/// are two cases of usage that need to be distinguished:
///
/// - When a volatile operation is used for memory inside an [allocation], it behaves exactly like
///   [`read`], except for the additional guarantee that it won't be elided or reordered (see
///   above). This implies that the operation will actually access memory and not e.g. be lowered to
///   reusing data from a previous read. Other than that, all the usual rules for memory accesses
///   apply (including provenance). In particular, just like in C, whether an operation is volatile
///   has no bearing whatsoever on questions involving concurrent accesses from multiple threads.
///   Volatile accesses behave exactly like non-atomic accesses in that regard.
///
/// - Volatile operations, however, may also be used to access memory that is _outside_ of any Rust
///   allocation. In this use-case, the pointer does *not* have to be [valid] for reads. This is
///   typically used for CPU and peripheral registers that must be accessed via an I/O memory
///   mapping, most commonly at fixed addresses reserved by the hardware. These often have special
///   semantics associated to their manipulation, and cannot be used as general purpose memory.
///   Here, any address value is possible, including 0 and [`usize::MAX`], so long as the semantics
///   of such a read are well-defined by the target hardware. The provenance of the pointer is
///   irrelevant, and it can be created with [`without_provenance`]. The access must not trap. It
///   can cause side-effects, but those must not affect Rust-allocated memory in any way. This
///   access is still not considered [atomic], and as such it cannot be used for inter-thread
///   synchronization.
///
/// Note that volatile memory operations where T is a zero-sized type are noops and may be ignored.
///
/// [allocation]: crate::ptr#allocated-object
/// [atomic]: crate::sync::atomic#memory-model-for-atomic-accesses
///
/// # Safety
///
/// Like [`read`], `read_volatile` creates a bitwise copy of `T`, regardless of whether `T` is
/// [`Copy`]. If `T` is not [`Copy`], using both the returned value and the value at `*src` can
/// [violate memory safety][read-ownership]. However, storing non-[`Copy`] types in volatile memory
/// is almost certainly incorrect.
///
/// Behavior is undefined if any of the following conditions are violated:
///
/// * `src` must be either [valid] for reads, or it must point to memory outside of all Rust
///   allocations and reading from that memory must:
///   - not trap, and
///   - not cause any memory inside a Rust allocation to be modified.
///
/// * `src` must be properly aligned.
///
/// * Reading from `src` must produce a properly initialized value of type `T`.
///
/// Note that even if `T` has size `0`, the pointer must be properly aligned.
///
/// [valid]: self#safety
/// [read-ownership]: read#ownership-of-the-returned-value
///
/// # Examples
///
/// Basic usage:
///
/// ```
/// let x = 12;
/// let y = &x as *const i32;
///
/// unsafe {
///     assert_eq!(std::ptr::read_volatile(y), 12);
/// }
/// ```
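///
/// A sketch of the MMIO use-case described above (illustrative only: `0x4000_0000` is a
/// made-up placeholder for whatever register address the target hardware actually defines):
///
/// ```rust,no_run
/// use std::ptr;
///
/// // Hypothetical memory-mapped status register.
/// const STATUS_REG: usize = 0x4000_0000;
///
/// // No Rust allocation backs this address, so the pointer carries no provenance.
/// let status = ptr::without_provenance::<u32>(STATUS_REG);
/// let _value = unsafe { ptr::read_volatile(status) };
/// ```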
#[inline]
#[stable(feature = "volatile", since = "1.9.0")]
#[track_caller]
#[rustc_diagnostic_item = "ptr_read_volatile"]
pub unsafe fn read_volatile<T>(src: *const T) -> T {
    // SAFETY: the caller must uphold the safety contract for `volatile_load`.
    unsafe {
        ub_checks::assert_unsafe_precondition!(
            check_language_ub,
            "ptr::read_volatile requires that the pointer argument is aligned",
            (
                addr: *const () = src as *const (),
                align: usize = align_of::<T>(),
            ) => ub_checks::maybe_is_aligned(addr, align)
        );
        intrinsics::volatile_load(src)
    }
}

/// Performs a volatile write of a memory location with the given value without reading or dropping
/// the old value.
///
/// Volatile operations are intended to act on I/O memory. As such, they are considered externally
/// observable events (just like syscalls), and are guaranteed to not be elided or reordered by the
/// compiler across other externally observable events. With this in mind, there are two cases of
/// usage that need to be distinguished:
///
/// - When a volatile operation is used for memory inside an [allocation], it behaves exactly like
///   [`write`][write()], except for the additional guarantee that it won't be elided or reordered
///   (see above). This implies that the operation will actually access memory and not e.g. be
///   lowered to a register access. Other than that, all the usual rules for memory accesses apply
///   (including provenance). In particular, just like in C, whether an operation is volatile has no
///   bearing whatsoever on questions involving concurrent access from multiple threads. Volatile
///   accesses behave exactly like non-atomic accesses in that regard.
///
/// - Volatile operations, however, may also be used to access memory that is _outside_ of any Rust
///   allocation. In this use-case, the pointer does *not* have to be [valid] for writes. This is
///   typically used for CPU and peripheral registers that must be accessed via an I/O memory
///   mapping, most commonly at fixed addresses reserved by the hardware. These often have special
///   semantics associated to their manipulation, and cannot be used as general purpose memory.
///   Here, any address value is possible, including 0 and [`usize::MAX`], so long as the semantics
///   of such a write are well-defined by the target hardware. The provenance of the pointer is
///   irrelevant, and it can be created with [`without_provenance`]. The access must not trap. It
///   can cause side-effects, but those must not affect Rust-allocated memory in any way. This
///   access is still not considered [atomic], and as such it cannot be used for inter-thread
///   synchronization.
///
/// Note that volatile memory operations on zero-sized types (e.g., if a zero-sized type is passed
/// to `write_volatile`) are noops and may be ignored.
///
/// `write_volatile` does not drop the contents of `dst`. This is safe, but it could leak
/// allocations or resources, so care should be taken not to overwrite an object that should be
/// dropped when operating on Rust memory. Additionally, it does not drop `src`. Semantically, `src`
/// is moved into the location pointed to by `dst`.
///
/// [allocation]: crate::ptr#allocated-object
/// [atomic]: crate::sync::atomic#memory-model-for-atomic-accesses
///
/// # Safety
///
/// Behavior is undefined if any of the following conditions are violated:
///
/// * `dst` must be either [valid] for writes, or it must point to memory outside of all Rust
///   allocations and writing to that memory must:
///   - not trap, and
///   - not cause any memory inside a Rust allocation to be modified.
///
/// * `dst` must be properly aligned.
///
/// Note that even if `T` has size `0`, the pointer must be properly aligned.
///
/// [valid]: self#safety
///
/// # Examples
///
/// Basic usage:
///
/// ```
/// let mut x = 0;
/// let y = &mut x as *mut i32;
/// let z = 12;
///
/// unsafe {
///     std::ptr::write_volatile(y, z);
///     assert_eq!(std::ptr::read_volatile(y), 12);
/// }
/// ```
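///
/// A sketch of the MMIO use-case described above (illustrative only: `0x4000_0004` is a
/// made-up placeholder for whatever register address the target hardware actually defines):
///
/// ```rust,no_run
/// use std::ptr;
///
/// // Hypothetical memory-mapped control register.
/// const CONTROL_REG: usize = 0x4000_0004;
///
/// // No Rust allocation backs this address, so the pointer carries no provenance.
/// let control = ptr::without_provenance_mut::<u32>(CONTROL_REG);
/// unsafe { ptr::write_volatile(control, 1) };
/// ```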
#[inline]
#[stable(feature = "volatile", since = "1.9.0")]
#[rustc_diagnostic_item = "ptr_write_volatile"]
#[track_caller]
pub unsafe fn write_volatile<T>(dst: *mut T, src: T) {
    // SAFETY: the caller must uphold the safety contract for `volatile_store`.
    unsafe {
        ub_checks::assert_unsafe_precondition!(
            check_language_ub,
            "ptr::write_volatile requires that the pointer argument is aligned",
            (
                addr: *mut () = dst as *mut (),
                align: usize = align_of::<T>(),
            ) => ub_checks::maybe_is_aligned(addr, align)
        );
        intrinsics::volatile_store(dst, src);
    }
}

/// Align pointer `p`.
///
/// Calculate offset (in terms of elements of `size_of::<T>()` stride) that has to be applied
/// to pointer `p` so that pointer `p` would get aligned to `a`.
///
/// # Safety
/// `a` must be a power of two.
///
/// # Notes
/// This implementation has been carefully tailored to not panic. It is UB for this to panic.
/// The only real change that can be made here is change of `INV_TABLE_MOD_16` and associated
/// constants.
///
/// If we ever decide to make it possible to call the intrinsic with `a` that is not a
/// power-of-two, it will probably be more prudent to just change to a naive implementation rather
/// than trying to adapt this to accommodate that change.
///
/// Any questions go to @nagisa.
#[allow(ptr_to_integer_transmute_in_consts)]
pub(crate) unsafe fn align_offset<T: Sized>(p: *const T, a: usize) -> usize {
    // FIXME(#75598): Direct use of these intrinsics improves codegen significantly at opt-level <=
    // 1, where the method versions of these operations are not inlined.
    use intrinsics::{
        assume, cttz_nonzero, exact_div, mul_with_overflow, unchecked_rem, unchecked_shl,
        unchecked_shr, unchecked_sub, wrapping_add, wrapping_mul, wrapping_sub,
    };

    /// Calculate multiplicative modular inverse of `x` modulo `m`.
    ///
    /// This implementation is tailored for `align_offset` and has following preconditions:
    ///
    /// * `m` is a power-of-two;
    /// * `x < m`; (if `x ≥ m`, pass in `x % m` instead)
    ///
    /// Implementation of this function shall not panic. Ever.
    #[inline]
    const unsafe fn mod_inv(x: usize, m: usize) -> usize {
        /// Multiplicative modular inverse table modulo 2⁴ = 16.
        ///
        /// Note, that this table does not contain values where inverse does not exist (i.e., for
        /// `0⁻¹ mod 16`, `2⁻¹ mod 16`, etc.)
        const INV_TABLE_MOD_16: [u8; 8] = [1, 11, 13, 7, 9, 3, 5, 15];
        /// Modulo for which the `INV_TABLE_MOD_16` is intended.
        const INV_TABLE_MOD: usize = 16;

        // SAFETY: `m` is required to be a power-of-two, hence non-zero.
        let m_minus_one = unsafe { unchecked_sub(m, 1) };
        let mut inverse = INV_TABLE_MOD_16[(x & (INV_TABLE_MOD - 1)) >> 1] as usize;
        let mut mod_gate = INV_TABLE_MOD;
        // We iterate "up" using the following formula:
        //
        // $$ xy ≡ 1 (mod 2ⁿ) → xy (2 - xy) ≡ 1 (mod 2²ⁿ) $$
        //
        // This application needs to be applied at least until `2²ⁿ ≥ m`, at which point we can
        // finally reduce the computation to our desired `m` by taking `inverse mod m`.
        //
        // This computation is `O(log log m)`, which is to say, that on 64-bit machines this loop
        // will always finish in at most 4 iterations.
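        //
        // For example, for `x = 3` the table gives `INV_TABLE_MOD_16[(3 & 15) >> 1] = 11`,
        // and indeed `3 * 11 = 33 ≡ 1 (mod 16)`. One lifting step then yields
        // `11 * (2 - 3 * 11) ≡ 171 (mod 256)`, and indeed `3 * 171 = 513 ≡ 1 (mod 256)`.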
        loop {
            // y = y * (2 - xy) mod n
            //
            // Note, that we use wrapping operations here intentionally – the original formula
            // uses e.g., subtraction `mod n`. It is entirely fine to do them `mod
            // usize::MAX` instead, because we take the result `mod n` at the end
            // anyway.
            if mod_gate >= m {
                break;
            }
            inverse = wrapping_mul(inverse, wrapping_sub(2usize, wrapping_mul(x, inverse)));
            let (new_gate, overflow) = mul_with_overflow(mod_gate, mod_gate);
            if overflow {
                break;
            }
            mod_gate = new_gate;
        }
        inverse & m_minus_one
    }

    let stride = size_of::<T>();

    let addr: usize = p.addr();

    // SAFETY: `a` is a power-of-two, therefore non-zero.
    let a_minus_one = unsafe { unchecked_sub(a, 1) };

    if stride == 0 {
        // SPECIAL_CASE: handle 0-sized types. No matter how many times we step, the address will
        // stay the same, so no offset will be able to align the pointer unless it is already
        // aligned. This branch _will_ be optimized out as `stride` is known at compile-time.
        let p_mod_a = addr & a_minus_one;
        return if p_mod_a == 0 { 0 } else { usize::MAX };
    }

    // SAFETY: `stride == 0` case has been handled by the special case above.
    let a_mod_stride = unsafe { unchecked_rem(a, stride) };
    if a_mod_stride == 0 {
        // SPECIAL_CASE: In cases where the `a` is divisible by `stride`, byte offset to align a
        // pointer can be computed more simply through `-p (mod a)`. In the off-chance the byte
        // offset is not a multiple of `stride`, the input pointer was misaligned and no pointer
        // offset will be able to produce a `p` aligned to the specified `a`.
        //
        // The naive `-p (mod a)` equation inhibits LLVM's ability to select instructions
        // like `lea`. We compute `(round_up_to_next_alignment(p, a) - p)` instead. This
        // redistributes operations around the load-bearing, but pessimizing `and` instruction
        // sufficiently for LLVM to be able to utilize the various optimizations it knows about.
        //
        // LLVM handles the branch here particularly nicely. If this branch needs to be evaluated
        // at runtime, it will produce a mask `if addr_mod_stride == 0 { 0 } else { usize::MAX }`
        // in a branch-free way and then bitwise-OR it with whatever result the `-p mod a`
        // computation produces.
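        //
        // For example, with `addr = 13` and `a = 8`: `aligned_address = (13 + 7) & !7 = 16`,
        // so `byte_offset = 3`, which is exactly `-13 mod 8`.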

        let aligned_address = wrapping_add(addr, a_minus_one) & wrapping_sub(0, a);
        let byte_offset = wrapping_sub(aligned_address, addr);
        // FIXME: Remove the assume after <https://github.com/llvm/llvm-project/issues/62502>
        // SAFETY: Masking by `-a` can only affect the low bits, and thus cannot have reduced
        // the value by more than `a-1`, so even though the intermediate values might have
        // wrapped, the byte_offset is always in `[0, a)`.
        unsafe { assume(byte_offset < a) };

        // SAFETY: `stride == 0` case has been handled by the special case above.
        let addr_mod_stride = unsafe { unchecked_rem(addr, stride) };

        return if addr_mod_stride == 0 {
            // SAFETY: `stride` is non-zero. This is guaranteed to divide exactly as well, because
            // addr has been verified to be aligned to the original type’s alignment requirements.
            unsafe { exact_div(byte_offset, stride) }
        } else {
            usize::MAX
        };
    }

    // GENERAL_CASE: From here on we’re handling the very general case where `addr` may be
    // misaligned, there isn’t an obvious relationship between `stride` and `a` that we can take an
    // advantage of, etc. This case produces machine code that isn’t particularly high quality,
    // compared to the special cases above. The code produced here is still within the realm of
    // miracles, given the situations this case has to deal with.

    // SAFETY: a is power-of-two hence non-zero. stride == 0 case is handled above.
    // FIXME(const-hack) replace with min
    let gcdpow = unsafe {
        let x = cttz_nonzero(stride);
        let y = cttz_nonzero(a);
        if x < y { x } else { y }
    };
    // SAFETY: gcdpow has an upper-bound that’s at most the number of bits in a `usize`.
    let gcd = unsafe { unchecked_shl(1usize, gcdpow) };
    // SAFETY: gcd is always greater or equal to 1.
    if addr & unsafe { unchecked_sub(gcd, 1) } == 0 {
        // This branch solves for the following linear congruence equation:
        //
        // ` p + so = 0 mod a `
        //
        // `p` here is the pointer value, `s` - stride of `T`, `o` offset in `T`s, and `a` - the
        // requested alignment.
        //
        // With `g = gcd(a, s)`, and the above condition asserting that `p` is also divisible by
        // `g`, we can denote `a' = a/g`, `s' = s/g`, `p' = p/g`, then this becomes equivalent to:
        //
        // ` p' + s'o = 0 mod a' `
        // ` o = (a' - (p' mod a')) * (s'^-1 mod a') `
        //
        // The first term is "the relative alignment of `p` to `a`" (divided by the `g`), the
        // second term is "how does incrementing `p` by `s` bytes change the relative alignment of
        // `p`" (again divided by `g`). Division by `g` is necessary to make the inverse well
        // formed if `a` and `s` are not co-prime.
        //
        // Furthermore, the result produced by this solution is not "minimal", so it is necessary
        // to take the result `o mod lcm(s, a)`. This `lcm(s, a)` is the same as `a'`.
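        //
        // For example, with `p = 2`, `s = 6`, `a = 8`: `g = 2`, so `a' = 4`, `s' = 3`,
        // `p' = 1`, and `o = (4 - 1) * (3⁻¹ mod 4) = 3 * 3 ≡ 1 (mod 4)`. Indeed,
        // `p + 1 * s = 8` is aligned to `8`.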

        // SAFETY: `gcdpow` has an upper-bound not greater than the number of trailing 0-bits in
        // `a`.
        let a2 = unsafe { unchecked_shr(a, gcdpow) };
        // SAFETY: `a2` is non-zero. Shifting `a` by `gcdpow` cannot shift out any of the set bits
        // in `a` (of which it has exactly one).
        let a2minus1 = unsafe { unchecked_sub(a2, 1) };
        // SAFETY: `gcdpow` has an upper-bound not greater than the number of trailing 0-bits in
        // `a`.
        let s2 = unsafe { unchecked_shr(stride & a_minus_one, gcdpow) };
        // SAFETY: `gcdpow` has an upper-bound not greater than the number of trailing 0-bits in
        // `a`. Furthermore, the subtraction cannot overflow, because `a2 = a >> gcdpow` will
        // always be strictly greater than `(p % a) >> gcdpow`.
        let minusp2 = unsafe { unchecked_sub(a2, unchecked_shr(addr & a_minus_one, gcdpow)) };
        // SAFETY: `a2` is a power-of-two, as proven above. `s2` is strictly less than `a2`
        // because `(s % a) >> gcdpow` is strictly less than `a >> gcdpow`.
        return wrapping_mul(minusp2, unsafe { mod_inv(s2, a2) }) & a2minus1;
    }

    // Cannot be aligned at all.
    usize::MAX
}

/// Compares raw pointers for equality.
///
/// This is the same as using the `==` operator, but less generic:
/// the arguments have to be `*const T` raw pointers,
/// not anything that implements `PartialEq`.
///
/// This can be used to compare `&T` references (which coerce to `*const T` implicitly)
/// by their address rather than comparing the values they point to
/// (which is what the `PartialEq for &T` implementation does).
///
/// When comparing wide pointers, both the address and the metadata are tested for equality.
/// However, note that comparing trait object pointers (`*const dyn Trait`) is unreliable: pointers
2405/// to values of the same underlying type can compare unequal (because vtables are duplicated in
2406/// multiple codegen units), and pointers to values of *different* underlying type can compare equal
2407/// (since identical vtables can be deduplicated within a codegen unit).
2408///
2409/// # Examples
2410///
2411/// ```
2412/// use std::ptr;
2413///
2414/// let five = 5;
2415/// let other_five = 5;
2416/// let five_ref = &five;
2417/// let same_five_ref = &five;
2418/// let other_five_ref = &other_five;
2419///
2420/// assert!(five_ref == same_five_ref);
2421/// assert!(ptr::eq(five_ref, same_five_ref));
2422///
2423/// assert!(five_ref == other_five_ref);
2424/// assert!(!ptr::eq(five_ref, other_five_ref));
2425/// ```
2426///
2427/// Slices are also compared by their length (fat pointers):
2428///
2429/// ```
2430/// let a = [1, 2, 3];
2431/// assert!(std::ptr::eq(&a[..3], &a[..3]));
2432/// assert!(!std::ptr::eq(&a[..2], &a[..3]));
2433/// assert!(!std::ptr::eq(&a[0..2], &a[1..3]));
2434/// ```
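///
/// Trait object pointers compare by address *and* vtable pointer, which, as noted
/// above, is unreliable. A sketch of what that looks like (neither outcome of the
/// comparison is guaranteed):
///
/// ```
/// use std::fmt::Debug;
/// use std::ptr;
///
/// let x = 5_i32;
/// let p1: *const dyn Debug = &x;
/// let p2: *const dyn Debug = &x;
/// // Same address; whether the two vtable pointers are equal is
/// // implementation-dependent.
/// let _unreliable = ptr::eq(p1, p2);
/// ```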
2435#[stable(feature = "ptr_eq", since = "1.17.0")]
2436#[inline(always)]
2437#[must_use = "pointer comparison produces a value"]
2438#[rustc_diagnostic_item = "ptr_eq"]
2439#[allow(ambiguous_wide_pointer_comparisons)] // it's actually clear here
2440pub fn eq<T: PointeeSized>(a: *const T, b: *const T) -> bool {
2441    a == b
2442}
2443
2444/// Compares the *addresses* of the two pointers for equality,
2445/// ignoring any metadata in fat pointers.
2446///
2447/// If the arguments are thin pointers of the same type,
2448/// then this is the same as [`eq`].
2449///
2450/// # Examples
2451///
2452/// ```
2453/// use std::ptr;
2454///
2455/// let whole: &[i32; 3] = &[1, 2, 3];
2456/// let first: &i32 = &whole[0];
2457///
2458/// assert!(ptr::addr_eq(whole, first));
2459/// assert!(!ptr::eq::<dyn std::fmt::Debug>(whole, first));
2460/// ```
2461#[stable(feature = "ptr_addr_eq", since = "1.76.0")]
2462#[inline(always)]
2463#[must_use = "pointer comparison produces a value"]
2464pub fn addr_eq<T: PointeeSized, U: PointeeSized>(p: *const T, q: *const U) -> bool {
2465    (p as *const ()) == (q as *const ())
2466}
2467
2468/// Compares the *addresses* of the two function pointers for equality.
2469///
2470/// This is the same as `f == g`, but using this function makes clear that the potentially
2471/// surprising semantics of function pointer comparison are involved.
2472///
2473/// There are **very few guarantees** about how functions are compiled and they have no intrinsic
2474/// “identity”; in particular, this comparison:
2475///
2476/// * May return `true` unexpectedly, in cases where distinct functions compile to identical machine code and are merged.
2477///
2478///   For example, the following program is likely (but not guaranteed) to print `(true, true)`
2479///   when compiled with optimization:
2480///
2481///   ```
2482///   let f: fn(i32) -> i32 = |x| x;
2483///   let g: fn(i32) -> i32 = |x| x + 0;  // different closure, different body
2484///   let h: fn(u32) -> u32 = |x| x + 0;  // different signature too
2485///   dbg!(std::ptr::fn_addr_eq(f, g), std::ptr::fn_addr_eq(f, h)); // not guaranteed to be equal
2486///   ```
2487///
2488/// * May return `false` in any case.
2489///
2490///   This is particularly likely with generic functions but may happen with any function.
2491///   (From an implementation perspective, this is possible because functions may sometimes be
2492///   processed more than once by the compiler, resulting in duplicate machine code.)
2493///
2494/// Despite these false positives and false negatives, this comparison can still be useful.
2495/// Specifically, if
2496///
2497/// * `T` is the same type as `U`, `T` is a [subtype] of `U`, or `U` is a [subtype] of `T`, and
2498/// * `ptr::fn_addr_eq(f, g)` returns true,
2499///
2500/// then calling `f` and calling `g` will be equivalent.
2501///
2503/// # Examples
2504///
2505/// ```
2506/// use std::ptr;
2507///
2508/// fn a() { println!("a"); }
2509/// fn b() { println!("b"); }
2510/// assert!(!ptr::fn_addr_eq(a as fn(), b as fn()));
2511/// ```
2512///
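/// A sketch of the subtype condition above (`take` is a hypothetical function): a
/// higher-ranked `for<'a> fn(&'a i32)` is a [subtype] of `fn(&'static i32)`, so a
/// `true` result is meaningful across those two pointer types:
///
/// ```
/// use std::ptr;
///
/// fn take(_x: &i32) {}
///
/// let f: fn(&'static i32) = take; // coerced via subtyping
/// let g: for<'a> fn(&'a i32) = take;
/// // If this is `true`, calling `f` and calling `g` are equivalent.
/// let _maybe_equal = ptr::fn_addr_eq(f, g);
/// ```
///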
2513/// [subtype]: https://doc.rust-lang.org/reference/subtyping.html
2514#[stable(feature = "ptr_fn_addr_eq", since = "1.85.0")]
2515#[inline(always)]
2516#[must_use = "function pointer comparison produces a value"]
2517pub fn fn_addr_eq<T: FnPtr, U: FnPtr>(f: T, g: U) -> bool {
2518    f.addr() == g.addr()
2519}
2520
2521/// Hash a raw pointer.
2522///
2523/// This can be used to hash a `&T` reference (which coerces to `*const T` implicitly)
2524/// by its address rather than the value it points to
2525/// (which is what the `Hash for &T` implementation does).
2526///
2527/// # Examples
2528///
2529/// ```
2530/// use std::hash::{DefaultHasher, Hash, Hasher};
2531/// use std::ptr;
2532///
2533/// let five = 5;
2534/// let five_ref = &five;
2535///
2536/// let mut hasher = DefaultHasher::new();
2537/// ptr::hash(five_ref, &mut hasher);
2538/// let actual = hasher.finish();
2539///
2540/// let mut hasher = DefaultHasher::new();
2541/// (five_ref as *const i32).hash(&mut hasher);
2542/// let expected = hasher.finish();
2543///
2544/// assert_eq!(actual, expected);
2545/// ```
2546#[stable(feature = "ptr_hash", since = "1.35.0")]
2547pub fn hash<T: PointeeSized, S: hash::Hasher>(hashee: *const T, into: &mut S) {
2548    use crate::hash::Hash;
2549    hashee.hash(into);
2550}
2551
2552#[stable(feature = "fnptr_impls", since = "1.4.0")]
2553impl<F: FnPtr> PartialEq for F {
2554    #[inline]
2555    fn eq(&self, other: &Self) -> bool {
2556        self.addr() == other.addr()
2557    }
2558}
2559#[stable(feature = "fnptr_impls", since = "1.4.0")]
2560impl<F: FnPtr> Eq for F {}
2561
2562#[stable(feature = "fnptr_impls", since = "1.4.0")]
2563impl<F: FnPtr> PartialOrd for F {
2564    #[inline]
2565    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
2566        self.addr().partial_cmp(&other.addr())
2567    }
2568}
2569#[stable(feature = "fnptr_impls", since = "1.4.0")]
2570impl<F: FnPtr> Ord for F {
2571    #[inline]
2572    fn cmp(&self, other: &Self) -> Ordering {
2573        self.addr().cmp(&other.addr())
2574    }
2575}
2576
2577#[stable(feature = "fnptr_impls", since = "1.4.0")]
2578impl<F: FnPtr> hash::Hash for F {
2579    fn hash<HH: hash::Hasher>(&self, state: &mut HH) {
2580        state.write_usize(self.addr() as _)
2581    }
2582}
2583
2584#[stable(feature = "fnptr_impls", since = "1.4.0")]
2585impl<F: FnPtr> fmt::Pointer for F {
2586    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
2587        fmt::pointer_fmt_inner(self.addr() as _, f)
2588    }
2589}
2590
2591#[stable(feature = "fnptr_impls", since = "1.4.0")]
2592impl<F: FnPtr> fmt::Debug for F {
2593    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
2594        fmt::pointer_fmt_inner(self.addr() as _, f)
2595    }
2596}
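
// Taken together, the `FnPtr` impls above let bare function pointers be compared,
// ordered, hashed, and formatted by address. A minimal usage sketch (illustrative
// only; see `fn_addr_eq` above for why address identity of functions is weak):
//
//     fn first() {}
//     fn second() {}
//
//     let mut fns: Vec<fn()> = vec![first as fn(), second as fn()];
//     fns.sort();                 // Ord: ordered by address
//     println!("{:p}", fns[0]);   // fmt::Pointer: prints the address
//     assert!(fns[0] == fns[0]);  // PartialEq: compares addresses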
2597
2598/// Creates a `const` raw pointer to a place, without creating an intermediate reference.
2599///
2600/// `addr_of!(expr)` is equivalent to `&raw const expr`. The macro is *soft-deprecated*;
2601/// use `&raw const` instead.
2602///
2603/// It is still an open question under which conditions writing through an `addr_of!`-created
2604/// pointer is permitted. If the place `expr` evaluates to is based on a raw pointer, then the
2605/// result of `addr_of!` inherits all permissions from that raw pointer. However, if the place is
2606/// based on a reference, local variable, or `static`, then until all details are decided, the same
2607/// rules as for shared references apply: it is UB to write through a pointer created with this
2608/// operation, except for bytes located inside an `UnsafeCell`. Use `&raw mut` (or [`addr_of_mut`])
2609/// to create a raw pointer that definitely permits mutation.
2610///
2611/// Creating a reference with `&`/`&mut` is only allowed if the pointer is properly aligned
2612/// and points to initialized data. For cases where those requirements do not hold,
2613/// raw pointers should be used instead. However, `&expr as *const _` creates a reference
2614/// before casting it to a raw pointer, and that reference is subject to the same rules
2615/// as all other references. This macro can create a raw pointer *without* creating
2616/// a reference first.
2617///
2618/// See [`addr_of_mut`] for how to create a pointer to uninitialized data.
2619/// Doing that with `addr_of` would not make much sense since one could only
2620/// read the data, and that would be Undefined Behavior.
2621///
2622/// # Safety
2623///
2624/// The `expr` in `addr_of!(expr)` is evaluated as a place expression, but never loads from the
2625/// place or requires the place to be dereferenceable. This means that `addr_of!((*ptr).field)`
2626/// still requires the projection to `field` to be in-bounds, using the same rules as [`offset`].
2627/// However, `addr_of!(*ptr)` is defined behavior even if `ptr` is null, dangling, or misaligned.
2628///
2629/// Note that `Deref`/`Index` coercions (and their mutable counterparts) are applied inside
2630/// `addr_of!` like everywhere else, in which case a reference is created to call `Deref::deref` or
2631/// `Index::index`, respectively. The statements above only apply when no such coercions are
2632/// applied.
2633///
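/// For instance (a hedged illustration), indexing a `Vec` inside `addr_of!` goes
/// through `Index::index`, which takes `&self` and returns a reference, so a
/// reference to the element is still created even though the macro itself adds none:
///
/// ```
/// use std::ptr;
///
/// let v = vec![1, 2, 3];
/// // `v[0]` desugars to `*Index::index(&v, 0)`, which creates a `&i32` internally.
/// let p: *const i32 = ptr::addr_of!(v[0]);
/// assert_eq!(unsafe { *p }, 1);
/// ```
///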
2634/// [`offset`]: pointer::offset
2635///
2636/// # Example
2637///
2638/// **Correct usage: Creating a pointer to unaligned data**
2639///
2640/// ```
2641/// use std::ptr;
2642///
2643/// #[repr(packed)]
2644/// struct Packed {
2645///     f1: u8,
2646///     f2: u16,
2647/// }
2648///
2649/// let packed = Packed { f1: 1, f2: 2 };
2650/// // `&packed.f2` would create an unaligned reference, and thus be Undefined Behavior!
2651/// let raw_f2 = ptr::addr_of!(packed.f2);
2652/// assert_eq!(unsafe { raw_f2.read_unaligned() }, 2);
2653/// ```
2654///
2655/// **Incorrect usage: Out-of-bounds fields projection**
2656///
2657/// ```rust,no_run
2658/// use std::ptr;
2659///
2660/// #[repr(C)]
2661/// struct MyStruct {
2662///     field1: i32,
2663///     field2: i32,
2664/// }
2665///
2666/// let ptr: *const MyStruct = ptr::null();
2667/// let fieldptr = unsafe { ptr::addr_of!((*ptr).field2) }; // Undefined Behavior ⚠️
2668/// ```
2669///
2670/// The field projection `.field2` would offset the pointer by 4 bytes,
2671/// but the pointer is not in-bounds of an allocation for 4 bytes,
2672/// so this offset is Undefined Behavior.
2673/// See the [`offset`] docs for a full list of requirements for inbounds pointer arithmetic; the
2674/// same requirements apply to field projections, even inside `addr_of!`. (In particular, it makes
2675/// no difference whether the pointer is null or dangling.)
2676#[stable(feature = "raw_ref_macros", since = "1.51.0")]
2677#[rustc_macro_transparency = "semitransparent"]
2678pub macro addr_of($place:expr) {
2679    &raw const $place
2680}
2681
2682/// Creates a `mut` raw pointer to a place, without creating an intermediate reference.
2683///
2684/// `addr_of_mut!(expr)` is equivalent to `&raw mut expr`. The macro is *soft-deprecated*;
2685/// use `&raw mut` instead.
2686///
2687/// Creating a reference with `&`/`&mut` is only allowed if the pointer is properly aligned
2688/// and points to initialized data. For cases where those requirements do not hold,
2689/// raw pointers should be used instead. However, `&mut expr as *mut _` creates a reference
2690/// before casting it to a raw pointer, and that reference is subject to the same rules
2691/// as all other references. This macro can create a raw pointer *without* creating
2692/// a reference first.
2693///
2694/// # Safety
2695///
2696/// The `expr` in `addr_of_mut!(expr)` is evaluated as a place expression, but never loads from the
2697/// place or requires the place to be dereferenceable. This means that `addr_of_mut!((*ptr).field)`
2698/// still requires the projection to `field` to be in-bounds, using the same rules as [`offset`].
2699/// However, `addr_of_mut!(*ptr)` is defined behavior even if `ptr` is null, dangling, or misaligned.
2700///
2701/// Note that `Deref`/`Index` coercions (and their mutable counterparts) are applied inside
2702/// `addr_of_mut!` like everywhere else, in which case a reference is created to call `Deref::deref`
2703/// or `Index::index`, respectively. The statements above only apply when no such coercions are
2704/// applied.
2705///
2706/// [`offset`]: pointer::offset
2707///
2708/// # Examples
2709///
2710/// **Correct usage: Creating a pointer to unaligned data**
2711///
2712/// ```
2713/// use std::ptr;
2714///
2715/// #[repr(packed)]
2716/// struct Packed {
2717///     f1: u8,
2718///     f2: u16,
2719/// }
2720///
2721/// let mut packed = Packed { f1: 1, f2: 2 };
2722/// // `&mut packed.f2` would create an unaligned reference, and thus be Undefined Behavior!
2723/// let raw_f2 = ptr::addr_of_mut!(packed.f2);
2724/// unsafe { raw_f2.write_unaligned(42); }
2725/// assert_eq!({packed.f2}, 42); // `{...}` forces copying the field instead of creating a reference.
2726/// ```
2727///
2728/// **Correct usage: Creating a pointer to uninitialized data**
2729///
2730/// ```rust
2731/// use std::{ptr, mem::MaybeUninit};
2732///
2733/// struct Demo {
2734///     field: bool,
2735/// }
2736///
2737/// let mut uninit = MaybeUninit::<Demo>::uninit();
2738/// // `&uninit.as_mut().field` would create a reference to an uninitialized `bool`,
2739/// // and thus be Undefined Behavior!
2740/// let f1_ptr = unsafe { ptr::addr_of_mut!((*uninit.as_mut_ptr()).field) };
2741/// unsafe { f1_ptr.write(true); }
2742/// let init = unsafe { uninit.assume_init() };
2743/// ```
2744///
2745/// **Incorrect usage: Out-of-bounds fields projection**
2746///
2747/// ```rust,no_run
2748/// use std::ptr;
2749///
2750/// #[repr(C)]
2751/// struct MyStruct {
2752///     field1: i32,
2753///     field2: i32,
2754/// }
2755///
2756/// let ptr: *mut MyStruct = ptr::null_mut();
2757/// let fieldptr = unsafe { ptr::addr_of_mut!((*ptr).field2) }; // Undefined Behavior ⚠️
2758/// ```
2759///
2760/// The field projection `.field2` would offset the pointer by 4 bytes,
2761/// but the pointer is not in-bounds of an allocation for 4 bytes,
2762/// so this offset is Undefined Behavior.
2763/// See the [`offset`] docs for a full list of requirements for inbounds pointer arithmetic; the
2764/// same requirements apply to field projections, even inside `addr_of_mut!`. (In particular, it
2765/// makes no difference whether the pointer is null or dangling.)
2766#[stable(feature = "raw_ref_macros", since = "1.51.0")]
2767#[rustc_macro_transparency = "semitransparent"]
2768pub macro addr_of_mut($place:expr) {
2769    &raw mut $place
2770}