core/ptr/mod.rs

//! Manually manage memory through raw pointers.
//!
//! *[See also the pointer primitive types](pointer).*
//!
//! # Safety
//!
//! Many functions in this module take raw pointers as arguments and read from or write to them. For
//! this to be safe, these pointers must be *valid* for the given access. Whether a pointer is valid
//! depends on the operation it is used for (read or write), and the extent of the memory that is
//! accessed (i.e., how many bytes are read/written) -- it makes no sense to ask "is this pointer
//! valid"; one has to ask "is this pointer valid for a given access". Most functions use `*mut T`
//! and `*const T` to access only a single value, in which case the documentation omits the size and
//! implicitly assumes it to be `size_of::<T>()` bytes.
//!
//! The precise rules for validity are not determined yet. The guarantees that are
//! provided at this point are very minimal:
//!
//! * For memory accesses of [size zero][zst], *every* pointer is valid, including the [null]
//!   pointer. The following points are only concerned with non-zero-sized accesses.
//! * A [null] pointer is *never* valid.
//! * For a pointer to be valid, it is necessary, but not always sufficient, that the pointer be
//!   *dereferenceable*. The [provenance] of the pointer is used to determine which [allocation]
//!   it is derived from; a pointer is dereferenceable if the memory range of the given size
//!   starting at the pointer is entirely contained within the bounds of that allocation. Note
//!   that in Rust, every (stack-allocated) variable is considered a separate allocation.
//! * All accesses performed by functions in this module are *non-atomic* in the sense
//!   of [atomic operations] used to synchronize between threads. This means it is
//!   undefined behavior to perform two concurrent accesses to the same location from different
//!   threads unless both accesses only read from memory. Notice that this explicitly
//!   includes [`read_volatile`] and [`write_volatile`]: Volatile accesses cannot
//!   be used for inter-thread synchronization, regardless of whether they are acting on
//!   Rust memory or not.
//! * The result of casting a reference to a pointer is valid for as long as the
//!   underlying allocation is live and no reference (just raw pointers) is used to
//!   access the same memory. That is, reference and pointer accesses cannot be
//!   interleaved.
//!
//! These axioms, along with careful use of [`offset`] for pointer arithmetic,
//! are enough to correctly implement many useful things in unsafe code. Stronger guarantees
//! will be provided eventually, as the [aliasing] rules are being determined. For more
//! information, see the [book] as well as the section in the reference devoted
//! to [undefined behavior][ub].
//!
//! We say that a pointer is "dangling" if it is not valid for any non-zero-sized accesses. This
//! means out-of-bounds pointers, pointers to freed memory, null pointers, and pointers created with
//! [`NonNull::dangling`] are all dangling.
//!
//! ## Alignment
//!
//! Valid raw pointers as defined above are not necessarily properly aligned (where
//! "proper" alignment is defined by the pointee type, i.e., `*const T` must be
//! aligned to `align_of::<T>()`). However, most functions require their
//! arguments to be properly aligned, and will explicitly state
//! this requirement in their documentation. Notable exceptions to this are
//! [`read_unaligned`] and [`write_unaligned`].
//!
//! When a function requires proper alignment, it does so even if the access
//! has size 0, i.e., even if memory is not actually touched. Consider using
//! [`NonNull::dangling`] in such cases.
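//!
//! For instance, a zero-sized `copy_nonoverlapping` still demands aligned, non-null pointers;
//! a minimal sketch of satisfying that with [`NonNull::dangling`], relying only on the
//! documented guarantees above:
//!
//! ```
//! use std::ptr::{self, NonNull};
//!
//! let src = NonNull::<u64>::dangling().as_ptr();
//! let dst = NonNull::<u64>::dangling().as_ptr();
//! // No memory is touched, but the pointers must still be well-aligned and
//! // non-null; dangling pointers satisfy both requirements.
//! unsafe { ptr::copy_nonoverlapping(src, dst, 0) };
//! ```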
//!
//! ## Pointer to reference conversion
//!
//! When converting a pointer to a reference (e.g. via `&*ptr` or `&mut *ptr`),
//! there are several rules that must be followed:
//!
//! * The pointer must be properly aligned.
//!
//! * It must be non-null.
//!
//! * It must be "dereferenceable" in the sense defined above.
//!
//! * The pointer must point to a [valid value] of type `T`.
//!
//! * You must enforce Rust's aliasing rules. The exact aliasing rules are not decided yet, so we
//!   only give a rough overview here. The rules also depend on whether a mutable or a shared
//!   reference is being created.
//!   * When creating a mutable reference, then while this reference exists, the memory it points to
//!     must not get accessed (read or written) through any other pointer or reference not derived
//!     from this reference.
//!   * When creating a shared reference, then while this reference exists, the memory it points to
//!     must not get mutated (except inside `UnsafeCell`).
//!
//! If a pointer follows all of these rules, it is said to be
//! *convertible to a (mutable or shared) reference*.
// ^ we use this term instead of saying that the produced reference must
// be valid, as the validity of a reference is easily confused for the
// validity of the thing it refers to, and while the two concepts are
// closely related, they are not identical.
//!
//! These rules apply even if the result is unused!
//! (The part about being initialized is not yet fully decided, but until
//! it is, the only safe approach is to ensure that they are indeed initialized.)
//!
//! An example of the implications of the above rules is that an expression such
//! as `unsafe { &*(0 as *const u8) }` is Immediate Undefined Behavior.
//!
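//! Conversely, when every rule above is satisfied, the conversion is sound. A minimal sketch,
//! using a local variable so that alignment, non-nullness, dereferenceability, and value
//! validity all hold by construction:
//!
//! ```
//! let x: u8 = 5;
//! let ptr = &x as *const u8;
//! // SAFETY: `ptr` comes from a live shared reference: it is aligned, non-null,
//! // dereferenceable, points to a valid `u8`, and the memory is not mutated
//! // while the new reference exists.
//! let r: &u8 = unsafe { &*ptr };
//! assert_eq!(*r, 5);
//! ```
//!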
//! [valid value]: ../../reference/behavior-considered-undefined.html#invalid-values
//!
//! ## Allocation
//!
//! <a id="allocated-object"></a> <!-- keep old URLs working -->
//!
//! An *allocation* is a subset of program memory which is addressable
//! from Rust, and within which pointer arithmetic is possible. Examples of
//! allocations include heap allocations, stack-allocated variables,
//! statics, and consts. The safety preconditions of some Rust operations -
//! such as `offset` and field projections (`expr.field`) - are defined in
//! terms of the allocations on which they operate.
//!
//! An allocation has a base address, a size, and a set of memory
//! addresses. It is possible for an allocation to have zero size, but
//! such an allocation will still have a base address. The base address
//! of an allocation is not necessarily unique. While it is currently the
//! case that an allocation always has a set of memory addresses which is
//! fully contiguous (i.e., has no "holes"), there is no guarantee that this
//! will not change in the future.
//!
//! Allocations must behave like "normal" memory: in particular, reads must not have
//! side-effects, and writes must become visible to other threads using the usual synchronization
//! primitives.
//!
//! For any allocation with `base` address, `size`, and a set of
//! `addresses`, the following are guaranteed:
//! - For all addresses `a` in `addresses`, `a` is in the range `base .. (base +
//!   size)` (note that this requires `a < base + size`, not `a <= base + size`)
//! - `base` is not equal to [`null()`] (i.e., the address with the numerical
//!   value 0)
//! - `base + size <= usize::MAX`
//! - `size <= isize::MAX`
//!
//! As a consequence of these guarantees, given any address `a` within the set
//! of addresses of an allocation:
//! - It is guaranteed that `a - base` does not overflow `isize`
//! - It is guaranteed that `a - base` is non-negative
//! - It is guaranteed that, given `o = a - base` (i.e., the offset of `a` within
//!   the allocation), `base + o` will not wrap around the address space (in
//!   other words, will not overflow `usize`)
//!
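//! These consequences can be seen directly in pointer arithmetic; a small sketch relying only
//! on the guarantees above:
//!
//! ```
//! let a = [0u8; 8];
//! let base = a.as_ptr();
//! // One-past-the-end is still an in-bounds address of the allocation for arithmetic.
//! let end = unsafe { base.add(a.len()) };
//! // `end - base` cannot overflow `isize`, because `size <= isize::MAX`.
//! assert_eq!(unsafe { end.offset_from(base) }, 8);
//! ```
//!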
//! [`null()`]: null
//!
//! # Provenance
//!
//! Pointers are not *simply* an "integer" or "address". For instance, it's uncontroversial
//! to say that a Use After Free is clearly Undefined Behavior, even if you "get lucky"
//! and the freed memory gets reallocated before your read/write (in fact this is the
//! worst-case scenario, UAFs would be much less concerning if this didn't happen!).
//! As another example, consider that [`wrapping_offset`] is documented to "remember"
//! the allocation that the original pointer points to, even if it is offset far
//! outside the memory range occupied by that allocation.
//! To rationalize claims like this, pointers need to somehow be *more* than just their addresses:
//! they must have **provenance**.
//!
//! A pointer value in Rust semantically contains the following information:
//!
//! * The **address** it points to, which can be represented by a `usize`.
//! * The **provenance** it has, defining the memory it has permission to access. Provenance can be
//!   absent, in which case the pointer does not have permission to access any memory.
//!
//! The exact structure of provenance is not yet specified, but the permissions defined by a
//! pointer's provenance have a *spatial* component, a *temporal* component, and a *mutability*
//! component:
//!
//! * Spatial: The set of memory addresses that the pointer is allowed to access.
//! * Temporal: The timespan during which the pointer is allowed to access those memory addresses.
//! * Mutability: Whether the pointer may only access the memory for reads, or also access it for
//!   writes. Note that this can interact with the other components, e.g. a pointer might permit
//!   mutation only for a subset of addresses, or only for a subset of its maximal timespan.
//!
//! When an [allocation] is created, it has a unique Original Pointer. For alloc
//! APIs this is literally the pointer the call returns, and for local variables and statics,
//! this is the name of the variable/static. (This is mildly overloading the term "pointer"
//! for the sake of brevity/exposition.)
//!
//! The Original Pointer for an allocation has provenance that constrains the *spatial*
//! permissions of this pointer to the memory range of the allocation, and the *temporal*
//! permissions to the lifetime of the allocation. Provenance is implicitly inherited by all
//! pointers transitively derived from the Original Pointer through operations like [`offset`],
//! borrowing, and pointer casts. Some operations may *shrink* the permissions of the derived
//! provenance, limiting how much memory it can access or how long it's valid for (i.e. borrowing a
//! subfield and subslicing can shrink the spatial component of provenance, and all borrowing can
//! shrink the temporal component of provenance). However, no operation can ever *grow* the
//! permissions of the derived provenance: even if you "know" there is a larger allocation, you
//! can't derive a pointer with a larger provenance. Similarly, you cannot "recombine" two
//! contiguous provenances back into one (i.e. with a `fn merge(&[T], &[T]) -> &[T]`).
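//!
//! For example, splitting a slice shrinks the spatial permission of each derived pointer; a
//! hedged sketch (the exact aliasing rules are still undecided, but this is the expected model):
//!
//! ```rust,no_run
//! let mut arr = [1u8, 2, 3, 4];
//! let (left, _right) = arr.split_at_mut(2);
//! // `p` is derived from `left`, so its provenance covers only `arr[0..2]`.
//! let p = left.as_mut_ptr();
//! // Computing the address of `arr[2]` is fine...
//! let q = p.wrapping_add(2);
//! // ...but accessing memory through `q` is UB: its provenance does not cover `arr[2]`. ⚠️
//! let _oops = unsafe { *q };
//! ```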
//!
//! A reference to a place always has provenance over at least the memory that place occupies.
//! A reference to a slice always has provenance over at least the range that slice describes.
//! Whether and when exactly the provenance of a reference gets "shrunk" to *exactly* fit
//! the memory it points to is not yet determined.
//!
//! A *shared* reference only ever has provenance that permits reading from memory,
//! and never permits writes, except inside [`UnsafeCell`].
//!
//! Provenance can affect whether a program has undefined behavior:
//!
//! * It is undefined behavior to access memory through a pointer that does not have provenance over
//!   that memory. Note that a pointer "at the end" of its provenance is not actually outside its
//!   provenance, it just has 0 bytes it can load/store. Zero-sized accesses do not require any
//!   provenance since they access an empty range of memory.
//!
//! * It is undefined behavior to [`offset`] a pointer across a memory range that is not contained
//!   in the allocation it is derived from, or to [`offset_from`] two pointers not derived
//!   from the same allocation. Provenance is used to say what exactly "derived from" even
//!   means: the lineage of a pointer is traced back to the Original Pointer it descends from, and
//!   that identifies the relevant allocation. In particular, it's always UB to offset a
//!   pointer derived from something that is now deallocated, except if the offset is 0.
//!
//! But it *is* still sound to:
//!
//! * Create a pointer without provenance from just an address (see [`without_provenance`]). Such a
//!   pointer cannot be used for memory accesses (except for zero-sized accesses). This can still be
//!   useful for sentinel values like `null` *or* to represent a tagged pointer that will never be
//!   dereferenceable. In general, it is always sound for an integer to pretend to be a pointer "for
//!   fun" as long as you don't use operations on it which require it to be valid (non-zero-sized
//!   offset, read, write, etc).
//!
//! * Forge an allocation of size zero at any sufficiently aligned non-null address.
//!   i.e. the usual "ZSTs are fake, do what you want" rules apply.
//!
//! * [`wrapping_offset`] a pointer outside its provenance. This includes pointers
//!   which have "no" provenance. In particular, this makes it sound to do pointer tagging tricks.
//!
//! * Compare arbitrary pointers by address. Pointer comparison ignores provenance and addresses
//!   *are* just integers, so there is always a coherent answer, even if the pointers are dangling
//!   or from different provenances. Note that if you get "lucky" and notice that a pointer at the
//!   end of one allocation is the "same" address as the start of another allocation,
//!   anything you do with that fact is *probably* going to be gibberish. The scope of that
//!   gibberish is kept under control by the fact that the two pointers *still* aren't allowed to
//!   access the other's allocation (bytes), because they still have different provenance.
//!
//! Note that the full definition of provenance in Rust is not decided yet, as this interacts
//! with the as-yet undecided [aliasing] rules.
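//!
//! As an illustration of the [`wrapping_offset`] point, a small sketch (a local variable stands
//! in for an arbitrary allocation):
//!
//! ```
//! let x = 42u8;
//! let p = &x as *const u8;
//! // Wandering far outside the allocation with `wrapping_offset` is sound...
//! let far = p.wrapping_offset(1000);
//! // ...as long as we come back in-bounds before actually accessing memory.
//! let back = far.wrapping_offset(-1000);
//! assert_eq!(unsafe { *back }, 42);
//! ```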
//!
//! ## Pointers Vs Integers
//!
//! From this discussion, it becomes very clear that a `usize` *cannot* accurately represent a pointer,
//! and converting from a pointer to a `usize` is generally an operation which *only* extracts the
//! address. Converting this address back into a pointer requires somehow answering the question:
//! which provenance should the resulting pointer have?
//!
//! Rust provides two ways of dealing with this situation: *Strict Provenance* and *Exposed Provenance*.
//!
//! Note that a pointer *can* represent a `usize` (via [`without_provenance`]), so the right type to
//! use in situations where a value is "sometimes a pointer and sometimes a bare `usize`" is a
//! pointer type.
//!
//! ## Strict Provenance
//!
//! "Strict Provenance" refers to a set of APIs designed to make working with provenance more
//! explicit. They are intended as substitutes for casting a pointer to an integer and back.
//!
//! Entirely avoiding integer-to-pointer casts successfully side-steps the inherent ambiguity of
//! that operation. This benefits compiler optimizations, and it is pretty much a requirement for
//! using tools like [Miri] and architectures like [CHERI] that aim to detect and diagnose pointer
//! misuse.
//!
//! The key insight to making programming without integer-to-pointer casts *at all* viable is the
//! [`with_addr`] method:
//!
//! ```text
//!     /// Creates a new pointer with the given address.
//!     ///
//!     /// This performs the same operation as an `addr as ptr` cast, but copies
//!     /// the *provenance* of `self` to the new pointer.
//!     /// This allows us to dynamically preserve and propagate this important
//!     /// information in a way that is otherwise impossible with a unary cast.
//!     ///
//!     /// This is equivalent to using `wrapping_offset` to offset `self` to the
//!     /// given address, and therefore has all the same capabilities and restrictions.
//!     pub fn with_addr(self, addr: usize) -> Self;
//! ```
//!
//! So you're still able to drop down to the address representation and do whatever
//! clever bit tricks you want *as long as* you're able to keep around a pointer
//! into the allocation you care about that can "reconstitute" the provenance.
//! Usually this is very easy, because you only are taking a pointer, messing with the address,
//! and then immediately converting back to a pointer. To make this use case more ergonomic,
//! we provide the [`map_addr`] method.
//!
//! To help make it clear that code is "following" Strict Provenance semantics, we also provide an
//! [`addr`] method which promises that the returned address is not part of a
//! pointer-integer-pointer roundtrip. In the future we may provide a lint for pointer<->integer
//! casts to help you audit if your code conforms to strict provenance.
//!
//! ### Using Strict Provenance
//!
//! Most code needs no changes to conform to strict provenance, as the only really concerning
//! operation is casts from `usize` to a pointer. For code which *does* cast a `usize` to a pointer,
//! the scope of the change depends on exactly what you're doing.
//!
//! In general, you just need to make sure that if you want to convert a `usize` address to a
//! pointer and then use that pointer to read/write memory, you need to keep around a pointer
//! that has sufficient provenance to perform that read/write itself. In this way all of your
//! casts from an address to a pointer are essentially just applying offsets/indexing.
//!
//! This is generally trivial to do for simple cases like tagged pointers *as long as you
//! represent the tagged pointer as an actual pointer and not a `usize`*. For instance:
//!
//! ```
//! unsafe {
//!     // A flag we want to pack into our pointer
//!     static HAS_DATA: usize = 0x1;
//!     static FLAG_MASK: usize = !HAS_DATA;
//!
//!     // Our value, which must have enough alignment to have spare least-significant-bits.
//!     let my_precious_data: u32 = 17;
//!     assert!(align_of::<u32>() > 1);
//!
//!     // Create a tagged pointer
//!     let ptr = &my_precious_data as *const u32;
//!     let tagged = ptr.map_addr(|addr| addr | HAS_DATA);
//!
//!     // Check the flag:
//!     if tagged.addr() & HAS_DATA != 0 {
//!         // Untag and read the pointer
//!         let data = *tagged.map_addr(|addr| addr & FLAG_MASK);
//!         assert_eq!(data, 17);
//!     } else {
//!         unreachable!()
//!     }
//! }
//! ```
//!
//! (Yes, if you've been using [`AtomicUsize`] for pointers in concurrent datastructures, you should
//! be using [`AtomicPtr`] instead. If that messes up the way you atomically manipulate pointers,
//! we would like to know why, and what needs to be done to fix it.)
//!
//! Situations where a valid pointer *must* be created from just an address, such as baremetal code
//! accessing a memory-mapped interface at a fixed address, cannot currently be handled with strict
//! provenance APIs and should use [exposed provenance](#exposed-provenance).
//!
//! ## Exposed Provenance
//!
//! As discussed above, integer-to-pointer casts are not possible with Strict Provenance APIs.
//! This is by design: the goal of Strict Provenance is to provide a clear specification that we are
//! confident can be formalized unambiguously and can be subject to precise formal reasoning.
//! Integer-to-pointer casts do not (currently) have such a clear specification.
//!
//! However, there exist situations where integer-to-pointer casts cannot be avoided, or
//! where avoiding them would require major refactoring. Legacy platform APIs also regularly assume
//! that `usize` can capture all the information that makes up a pointer.
//! Bare-metal platforms can also require the synthesis of a pointer "out of thin air" without
//! anywhere to obtain proper provenance from.
//!
//! Rust's model for dealing with integer-to-pointer casts is called *Exposed Provenance*. However,
//! the semantics of Exposed Provenance are on much less solid footing than Strict Provenance, and
//! at this point it is not yet clear whether a satisfying unambiguous semantics can be defined for
//! Exposed Provenance. (If that sounds bad, be reassured that other popular languages that provide
//! integer-to-pointer casts are not faring any better.) Furthermore, Exposed Provenance will not
//! work (well) with tools like [Miri] and [CHERI].
//!
//! Exposed Provenance is provided by the [`expose_provenance`] and [`with_exposed_provenance`] methods,
//! which are equivalent to `as` casts between pointers and integers.
//! - [`expose_provenance`] is a lot like [`addr`], but additionally adds the provenance of the
//!   pointer to a global list of 'exposed' provenances. (This list is purely conceptual, it exists
//!   for the purpose of specifying Rust but is not materialized in actual executions, except in
//!   tools like [Miri].)
//!   Memory which is outside the control of the Rust abstract machine (MMIO registers, for example)
//!   is always considered to be exposed, so long as this memory is disjoint from memory that will
//!   be used by the abstract machine such as the stack, heap, and statics.
//! - [`with_exposed_provenance`] can be used to construct a pointer with one of these previously
//!   'exposed' provenances. [`with_exposed_provenance`] takes only `addr: usize` as an argument, so
//!   unlike in [`with_addr`] there is no indication of what the correct provenance for the returned
//!   pointer is -- and that is exactly what makes integer-to-pointer casts so tricky to rigorously
//!   specify! The compiler will do its best to pick the right provenance for you, but currently we
//!   cannot provide any guarantees about which provenance the resulting pointer will have. Only one
//!   thing is clear: if there is *no* previously 'exposed' provenance that justifies the way the
//!   returned pointer will be used, the program has undefined behavior.
//!
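//! A minimal sketch of such a roundtrip (the provenance of `p` is exposed and then picked back
//! up, so the final read is justified):
//!
//! ```
//! let x = 7u32;
//! let p: *const u32 = &x;
//! // Cast to an integer, exposing the provenance of `p`...
//! let addr: usize = p.expose_provenance();
//! // ...then later turn the bare address back into a usable pointer.
//! let q: *const u32 = std::ptr::with_exposed_provenance(addr);
//! assert_eq!(unsafe { *q }, 7);
//! ```
//!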
//! If at all possible, we encourage code to be ported to [Strict Provenance] APIs, thus avoiding
//! the need for Exposed Provenance. Maximizing the amount of such code is a major win for avoiding
//! specification complexity and for facilitating adoption of tools like [CHERI] and [Miri] that can
//! be a big help in increasing the confidence in (unsafe) Rust code. However, we acknowledge that
//! this is not always possible, and offer Exposed Provenance as a way to explicitly "opt out" of
//! the well-defined semantics of Strict Provenance, and "opt in" to the unclear semantics of
//! integer-to-pointer casts.
//!
//! [aliasing]: ../../nomicon/aliasing.html
//! [allocation]: #allocation
//! [provenance]: #provenance
//! [book]: ../../book/ch19-01-unsafe-rust.html#dereferencing-a-raw-pointer
//! [ub]: ../../reference/behavior-considered-undefined.html
//! [zst]: ../../nomicon/exotic-sizes.html#zero-sized-types-zsts
//! [atomic operations]: crate::sync::atomic
//! [`offset`]: pointer::offset
//! [`offset_from`]: pointer::offset_from
//! [`wrapping_offset`]: pointer::wrapping_offset
//! [`with_addr`]: pointer::with_addr
//! [`map_addr`]: pointer::map_addr
//! [`addr`]: pointer::addr
//! [`AtomicUsize`]: crate::sync::atomic::AtomicUsize
//! [`AtomicPtr`]: crate::sync::atomic::AtomicPtr
//! [`expose_provenance`]: pointer::expose_provenance
//! [`with_exposed_provenance`]: with_exposed_provenance
//! [Miri]: https://github.com/rust-lang/miri
//! [CHERI]: https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
//! [Strict Provenance]: #strict-provenance
//! [`UnsafeCell`]: core::cell::UnsafeCell

#![stable(feature = "rust1", since = "1.0.0")]
// There are many unsafe functions taking pointers that don't dereference them.
#![allow(clippy::not_unsafe_ptr_arg_deref)]

use crate::cmp::Ordering;
use crate::intrinsics::const_eval_select;
use crate::marker::{FnPtr, PointeeSized};
use crate::mem::{self, MaybeUninit, SizedTypeProperties};
use crate::num::NonZero;
use crate::{fmt, hash, intrinsics, ub_checks};

mod alignment;
#[unstable(feature = "ptr_alignment_type", issue = "102070")]
pub use alignment::Alignment;

mod metadata;
#[unstable(feature = "ptr_metadata", issue = "81513")]
pub use metadata::{DynMetadata, Pointee, Thin, from_raw_parts, from_raw_parts_mut, metadata};

mod non_null;
#[stable(feature = "nonnull", since = "1.25.0")]
pub use non_null::NonNull;

mod unique;
#[unstable(feature = "ptr_internals", issue = "none")]
pub use unique::Unique;

mod const_ptr;
mod mut_ptr;

// Some functions are defined here because they accidentally got made
// available in this module on stable. See <https://github.com/rust-lang/rust/issues/15702>.
// (`transmute` also falls into this category, but it cannot be wrapped due to the
// check that `T` and `U` have the same size.)

/// Copies `count * size_of::<T>()` bytes from `src` to `dst`. The source
/// and destination must *not* overlap.
///
/// For regions of memory which might overlap, use [`copy`] instead.
///
/// `copy_nonoverlapping` is semantically equivalent to C's [`memcpy`], but
/// with the source and destination arguments swapped,
/// and `count` counting the number of `T`s instead of bytes.
///
/// The copy is "untyped" in the sense that data may be uninitialized or otherwise violate the
/// requirements of `T`. The initialization state is preserved exactly.
///
/// [`memcpy`]: https://en.cppreference.com/w/c/string/byte/memcpy
///
/// # Safety
///
/// Behavior is undefined if any of the following conditions are violated:
///
/// * `src` must be [valid] for reads of `count * size_of::<T>()` bytes.
///
/// * `dst` must be [valid] for writes of `count * size_of::<T>()` bytes.
///
/// * Both `src` and `dst` must be properly aligned.
///
/// * The region of memory beginning at `src` with a size of `count *
///   size_of::<T>()` bytes must *not* overlap with the region of memory
///   beginning at `dst` with the same size.
///
/// Like [`read`], `copy_nonoverlapping` creates a bitwise copy of `T`, regardless of
/// whether `T` is [`Copy`]. If `T` is not [`Copy`], using *both* the values
/// in the region beginning at `*src` and the region beginning at `*dst` can
/// [violate memory safety][read-ownership].
///
/// Note that even if the effectively copied size (`count * size_of::<T>()`) is
/// `0`, the pointers must be properly aligned.
///
/// [`read`]: crate::ptr::read
/// [read-ownership]: crate::ptr::read#ownership-of-the-returned-value
/// [valid]: crate::ptr#safety
///
/// # Examples
///
/// Manually implement [`Vec::append`]:
///
/// ```
/// use std::ptr;
///
/// /// Moves all the elements of `src` into `dst`, leaving `src` empty.
/// fn append<T>(dst: &mut Vec<T>, src: &mut Vec<T>) {
///     let src_len = src.len();
///     let dst_len = dst.len();
///
///     // Ensure that `dst` has enough capacity to hold all of `src`.
///     dst.reserve(src_len);
///
///     unsafe {
///         // The call to add is always safe because `Vec` will never
///         // allocate more than `isize::MAX` bytes.
///         let dst_ptr = dst.as_mut_ptr().add(dst_len);
///         let src_ptr = src.as_ptr();
///
///         // Truncate `src` without dropping its contents. We do this first,
///         // to avoid problems in case something further down panics.
///         src.set_len(0);
///
///         // The two regions cannot overlap because mutable references do
///         // not alias, and two different vectors cannot own the same
///         // memory.
///         ptr::copy_nonoverlapping(src_ptr, dst_ptr, src_len);
///
///         // Notify `dst` that it now holds the contents of `src`.
///         dst.set_len(dst_len + src_len);
///     }
/// }
///
/// let mut a = vec!['r'];
/// let mut b = vec!['u', 's', 't'];
///
/// append(&mut a, &mut b);
///
/// assert_eq!(a, &['r', 'u', 's', 't']);
/// assert!(b.is_empty());
/// ```
///
/// [`Vec::append`]: ../../std/vec/struct.Vec.html#method.append
#[doc(alias = "memcpy")]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_const_stable(feature = "const_intrinsic_copy", since = "1.83.0")]
#[inline(always)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
#[rustc_diagnostic_item = "ptr_copy_nonoverlapping"]
pub const unsafe fn copy_nonoverlapping<T>(src: *const T, dst: *mut T, count: usize) {
    ub_checks::assert_unsafe_precondition!(
        check_language_ub,
        "ptr::copy_nonoverlapping requires that both pointer arguments are aligned and non-null \
        and the specified memory ranges do not overlap",
        (
            src: *const () = src as *const (),
            dst: *mut () = dst as *mut (),
            size: usize = size_of::<T>(),
            align: usize = align_of::<T>(),
            count: usize = count,
        ) => {
            let zero_size = count == 0 || size == 0;
            ub_checks::maybe_is_aligned_and_not_null(src, align, zero_size)
                && ub_checks::maybe_is_aligned_and_not_null(dst, align, zero_size)
                && ub_checks::maybe_is_nonoverlapping(src, dst, size, count)
        }
    );

    // SAFETY: the safety contract for `copy_nonoverlapping` must be
    // upheld by the caller.
    unsafe { crate::intrinsics::copy_nonoverlapping(src, dst, count) }
}

/// Copies `count * size_of::<T>()` bytes from `src` to `dst`. The source
/// and destination may overlap.
///
/// If the source and destination will *never* overlap,
/// [`copy_nonoverlapping`] can be used instead.
///
/// `copy` is semantically equivalent to C's [`memmove`], but
/// with the source and destination arguments swapped,
/// and `count` counting the number of `T`s instead of bytes.
/// Copying takes place as if the bytes were copied from `src`
/// to a temporary array and then copied from the array to `dst`.
///
/// The copy is "untyped" in the sense that data may be uninitialized or otherwise violate the
/// requirements of `T`. The initialization state is preserved exactly.
///
/// [`memmove`]: https://en.cppreference.com/w/c/string/byte/memmove
///
/// # Safety
///
/// Behavior is undefined if any of the following conditions are violated:
///
/// * `src` must be [valid] for reads of `count * size_of::<T>()` bytes.
///
/// * `dst` must be [valid] for writes of `count * size_of::<T>()` bytes, and must remain valid even
///   when `src` is read for `count * size_of::<T>()` bytes. (This means if the memory ranges
///   overlap, the `dst` pointer must not be invalidated by `src` reads.)
///
/// * Both `src` and `dst` must be properly aligned.
///
/// Like [`read`], `copy` creates a bitwise copy of `T`, regardless of
/// whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the values
/// in the region beginning at `*src` and the region beginning at `*dst` can
/// [violate memory safety][read-ownership].
///
/// Note that even if the effectively copied size (`count * size_of::<T>()`) is
/// `0`, the pointers must be properly aligned.
///
/// [`read`]: crate::ptr::read
/// [read-ownership]: crate::ptr::read#ownership-of-the-returned-value
/// [valid]: crate::ptr#safety
///
/// # Examples
///
/// Efficiently create a Rust vector from an unsafe buffer:
///
/// ```
/// use std::ptr;
///
/// /// # Safety
/// ///
/// /// * `ptr` must be correctly aligned for its type and non-null.
/// /// * `ptr` must be valid for reads of `elts` contiguous elements of type `T`.
/// /// * Those elements must not be used after calling this function unless `T: Copy`.
/// # #[allow(dead_code)]
/// unsafe fn from_buf_raw<T>(ptr: *const T, elts: usize) -> Vec<T> {
///     let mut dst = Vec::with_capacity(elts);
///
///     // SAFETY: Our precondition ensures the source is aligned and valid,
///     // and `Vec::with_capacity` ensures that we have usable space to write them.
///     unsafe { ptr::copy(ptr, dst.as_mut_ptr(), elts); }
///
///     // SAFETY: We created it with this much capacity earlier,
///     // and the previous `copy` has initialized these elements.
///     unsafe { dst.set_len(elts); }
///     dst
/// }
/// ```
#[doc(alias = "memmove")]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_const_stable(feature = "const_intrinsic_copy", since = "1.83.0")]
#[inline(always)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
#[rustc_diagnostic_item = "ptr_copy"]
pub const unsafe fn copy<T>(src: *const T, dst: *mut T, count: usize) {
    // SAFETY: the safety contract for `copy` must be upheld by the caller.
    unsafe {
        ub_checks::assert_unsafe_precondition!(
            check_language_ub,
            "ptr::copy requires that both pointer arguments are aligned and non-null",
            (
                src: *const () = src as *const (),
                dst: *mut () = dst as *mut (),
                align: usize = align_of::<T>(),
                zero_size: bool = T::IS_ZST || count == 0,
            ) =>
            ub_checks::maybe_is_aligned_and_not_null(src, align, zero_size)
                && ub_checks::maybe_is_aligned_and_not_null(dst, align, zero_size)
        );
        crate::intrinsics::copy(src, dst, count)
    }
}

/// Sets `count * size_of::<T>()` bytes of memory starting at `dst` to
/// `val`.
///
/// `write_bytes` is similar to C's [`memset`], but sets `count *
/// size_of::<T>()` bytes to `val`.
///
/// [`memset`]: https://en.cppreference.com/w/c/string/byte/memset
///
/// # Safety
///
/// Behavior is undefined if any of the following conditions are violated:
///
/// * `dst` must be [valid] for writes of `count * size_of::<T>()` bytes.
///
/// * `dst` must be properly aligned.
///
/// Note that even if the effectively copied size (`count * size_of::<T>()`) is
/// `0`, the pointer must be properly aligned.
///
/// Additionally, note that changing `*dst` in this way can easily lead to undefined behavior (UB)
/// later if the written bytes are not a valid representation of some `T`. For instance, the
/// following is an **incorrect** use of this function:
///
/// ```rust,no_run
/// unsafe {
///     let mut value: u8 = 0;
///     let ptr: *mut bool = &mut value as *mut u8 as *mut bool;
///     let _bool = ptr.read(); // This is fine, `ptr` points to a valid `bool`.
///     ptr.write_bytes(42u8, 1); // This function itself does not cause UB...
///     let _bool = ptr.read(); // ...but it makes this operation UB! ⚠️
/// }
/// ```
///
/// [valid]: crate::ptr#safety
///
/// # Examples
///
/// Basic usage:
///
/// ```
/// use std::ptr;
///
/// let mut vec = vec![0u32; 4];
/// unsafe {
///     let vec_ptr = vec.as_mut_ptr();
///     ptr::write_bytes(vec_ptr, 0xfe, 2);
/// }
/// assert_eq!(vec, [0xfefefefe, 0xfefefefe, 0, 0]);
/// ```
#[doc(alias = "memset")]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_const_stable(feature = "const_ptr_write", since = "1.83.0")]
#[inline(always)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
#[rustc_diagnostic_item = "ptr_write_bytes"]
pub const unsafe fn write_bytes<T>(dst: *mut T, val: u8, count: usize) {
    // SAFETY: the safety contract for `write_bytes` must be upheld by the caller.
    unsafe {
        ub_checks::assert_unsafe_precondition!(
            check_language_ub,
            "ptr::write_bytes requires that the destination pointer is aligned and non-null",
            (
                addr: *const () = dst as *const (),
                align: usize = align_of::<T>(),
                zero_size: bool = T::IS_ZST || count == 0,
            ) => ub_checks::maybe_is_aligned_and_not_null(addr, align, zero_size)
        );
        crate::intrinsics::write_bytes(dst, val, count)
    }
}

/// Executes the destructor (if any) of the pointed-to value.
///
/// This is almost the same as calling [`ptr::read`] and discarding
/// the result, but has the following advantages:
// FIXME: say something more useful than "almost the same"?
// There are open questions here: `read` requires the value to be fully valid, e.g. if `T` is a
// `bool` it must be 0 or 1, if it is a reference then it must be dereferenceable. `drop_in_place`
// only requires that `*to_drop` be "valid for dropping" and we have not defined what that means. In
// Miri it currently (May 2024) requires nothing at all for types without drop glue.
///
/// * It is *required* to use `drop_in_place` to drop unsized types like
///   trait objects, because they can't be read out onto the stack and
///   dropped normally.
///
/// * It is friendlier to the optimizer to do this over [`ptr::read`] when
///   dropping manually allocated memory (e.g., in the implementations of
///   `Box`/`Rc`/`Vec`), as the compiler doesn't need to prove that it's
///   sound to elide the copy.
///
/// * It can be used to drop [pinned] data when `T` is not `repr(packed)`
///   (pinned data must not be moved before it is dropped).
///
/// Unaligned values cannot be dropped in place; they must be copied to an aligned
/// location first using [`ptr::read_unaligned`]. For packed structs, this move is
/// done automatically by the compiler. This means the fields of packed structs
/// are not dropped in-place.
///
/// [`ptr::read`]: self::read
/// [`ptr::read_unaligned`]: self::read_unaligned
/// [pinned]: crate::pin
///
/// # Safety
///
/// Behavior is undefined if any of the following conditions are violated:
///
/// * `to_drop` must be [valid] for both reads and writes.
///
/// * `to_drop` must be properly aligned, even if `T` has size 0.
///
/// * `to_drop` must be nonnull, even if `T` has size 0.
///
/// * The value `to_drop` points to must be valid for dropping, which may mean
///   it must uphold additional invariants. These invariants depend on the type
///   of the value being dropped. For instance, when dropping a Box, the box's
///   pointer to the heap must be valid.
///
/// * While `drop_in_place` is executing, the only way to access parts of
///   `to_drop` is through the `&mut self` references supplied to the
///   `Drop::drop` methods that `drop_in_place` invokes.
///
/// Additionally, if `T` is not [`Copy`], using the pointed-to value after
/// calling `drop_in_place` can cause undefined behavior. Note that `*to_drop =
/// foo` counts as a use because it will cause the value to be dropped
/// again. [`write()`] can be used to overwrite data without causing it to be
/// dropped.
///
/// [valid]: self#safety
///
/// # Examples
///
/// Manually remove the last item from a vector:
///
/// ```
/// use std::ptr;
/// use std::rc::Rc;
///
/// let last = Rc::new(1);
/// let weak = Rc::downgrade(&last);
///
/// let mut v = vec![Rc::new(0), last];
///
/// unsafe {
///     // Get a raw pointer to the last element in `v`.
///     let ptr = &mut v[1] as *mut _;
///     // Shorten `v` to prevent the last item from being dropped. We do that first,
///     // to prevent issues if the `drop_in_place` below panics.
///     v.set_len(1);
///     // Without a call to `drop_in_place`, the last item would never be dropped,
///     // and the memory it manages would be leaked.
///     ptr::drop_in_place(ptr);
/// }
///
/// assert_eq!(v, &[0.into()]);
///
/// // Ensure that the last item was dropped.
/// assert!(weak.upgrade().is_none());
/// ```
#[stable(feature = "drop_in_place", since = "1.8.0")]
#[lang = "drop_in_place"]
#[allow(unconditional_recursion)]
#[rustc_diagnostic_item = "ptr_drop_in_place"]
pub unsafe fn drop_in_place<T: PointeeSized>(to_drop: *mut T) {
    // Code here does not matter - this is replaced by the
    // real drop glue by the compiler.

    // SAFETY: see comment above
    unsafe { drop_in_place(to_drop) }
}

/// Creates a null raw pointer.
///
/// This function is equivalent to zero-initializing the pointer:
/// `MaybeUninit::<*const T>::zeroed().assume_init()`.
/// The resulting pointer has the address 0.
///
/// # Examples
///
/// ```
/// use std::ptr;
///
/// let p: *const i32 = ptr::null();
/// assert!(p.is_null());
/// assert_eq!(p as usize, 0); // this pointer has the address 0
/// ```
#[inline(always)]
#[must_use]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_promotable]
#[rustc_const_stable(feature = "const_ptr_null", since = "1.24.0")]
#[rustc_diagnostic_item = "ptr_null"]
pub const fn null<T: PointeeSized + Thin>() -> *const T {
    from_raw_parts(without_provenance::<()>(0), ())
}

/// Creates a null mutable raw pointer.
///
/// This function is equivalent to zero-initializing the pointer:
/// `MaybeUninit::<*mut T>::zeroed().assume_init()`.
/// The resulting pointer has the address 0.
///
/// # Examples
///
/// ```
/// use std::ptr;
///
/// let p: *mut i32 = ptr::null_mut();
/// assert!(p.is_null());
/// assert_eq!(p as usize, 0); // this pointer has the address 0
/// ```
#[inline(always)]
#[must_use]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_promotable]
#[rustc_const_stable(feature = "const_ptr_null", since = "1.24.0")]
#[rustc_diagnostic_item = "ptr_null_mut"]
pub const fn null_mut<T: PointeeSized + Thin>() -> *mut T {
    from_raw_parts_mut(without_provenance_mut::<()>(0), ())
}

/// Creates a pointer with the given address and no [provenance][crate::ptr#provenance].
///
/// This is equivalent to `ptr::null().with_addr(addr)`.
///
/// Without provenance, this pointer is not associated with any actual allocation. Such a
/// no-provenance pointer may be used for zero-sized memory accesses (if suitably aligned), but
/// non-zero-sized memory accesses with a no-provenance pointer are UB. No-provenance pointers are
/// little more than a `usize` address in disguise.
///
/// This is different from `addr as *const T`, which creates a pointer that picks up a previously
/// exposed provenance. See [`with_exposed_provenance`] for more details on that operation.
///
/// This is a [Strict Provenance][crate::ptr#strict-provenance] API.
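///
/// # Examples
///
/// A minimal sketch: a no-provenance pointer used as a tag/sentinel address.
///
/// ```
/// use std::ptr;
///
/// let tag: *const u8 = ptr::without_provenance(0b1);
/// assert_eq!(tag.addr(), 0b1);
/// assert!(!tag.is_null());
/// ```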
#[inline(always)]
#[must_use]
#[stable(feature = "strict_provenance", since = "1.84.0")]
#[rustc_const_stable(feature = "strict_provenance", since = "1.84.0")]
pub const fn without_provenance<T>(addr: usize) -> *const T {
    without_provenance_mut(addr)
}

/// Creates a new pointer that is dangling, but non-null and well-aligned.
///
/// This is useful for initializing types which lazily allocate, like
/// `Vec::new` does.
///
/// Note that the pointer value may potentially represent a valid pointer to
/// a `T`, which means this must not be used as a "not yet initialized"
/// sentinel value. Types that lazily allocate must track initialization by
/// some other means.
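///
/// # Examples
///
/// A small sketch of the documented guarantees (non-null and well-aligned):
///
/// ```
/// use std::ptr;
///
/// let p = ptr::dangling::<u64>();
/// assert!(!p.is_null());
/// assert_eq!(p.addr() % align_of::<u64>(), 0);
/// ```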
#[inline(always)]
#[must_use]
#[stable(feature = "strict_provenance", since = "1.84.0")]
#[rustc_const_stable(feature = "strict_provenance", since = "1.84.0")]
pub const fn dangling<T>() -> *const T {
    dangling_mut()
}

/// Creates a pointer with the given address and no [provenance][crate::ptr#provenance].
///
/// This is equivalent to `ptr::null_mut().with_addr(addr)`.
///
/// Without provenance, this pointer is not associated with any actual allocation. Such a
/// no-provenance pointer may be used for zero-sized memory accesses (if suitably aligned), but
/// non-zero-sized memory accesses with a no-provenance pointer are UB. No-provenance pointers are
/// little more than a `usize` address in disguise.
///
/// This is different from `addr as *mut T`, which creates a pointer that picks up a previously
/// exposed provenance. See [`with_exposed_provenance_mut`] for more details on that operation.
///
/// This is a [Strict Provenance][crate::ptr#strict-provenance] API.
#[inline(always)]
#[must_use]
#[stable(feature = "strict_provenance", since = "1.84.0")]
#[rustc_const_stable(feature = "strict_provenance", since = "1.84.0")]
pub const fn without_provenance_mut<T>(addr: usize) -> *mut T {
    // An int-to-pointer transmute currently has exactly the intended semantics: it creates a
    // pointer without provenance. Note that this is *not* a stable guarantee about transmute
    // semantics, it relies on sysroot crates having special status.
    // SAFETY: every valid integer is also a valid pointer (as long as you don't dereference that
    // pointer).
    unsafe { mem::transmute(addr) }
}

/// Creates a new pointer that is dangling, but non-null and well-aligned.
///
/// This is useful for initializing types which lazily allocate, like
/// `Vec::new` does.
///
/// Note that the pointer value may potentially represent a valid pointer to
/// a `T`, which means this must not be used as a "not yet initialized"
/// sentinel value. Types that lazily allocate must track initialization by
/// some other means.
#[inline(always)]
#[must_use]
#[stable(feature = "strict_provenance", since = "1.84.0")]
#[rustc_const_stable(feature = "strict_provenance", since = "1.84.0")]
pub const fn dangling_mut<T>() -> *mut T {
    NonNull::dangling().as_ptr()
}

/// Converts an address back to a pointer, picking up some previously 'exposed'
/// [provenance][crate::ptr#provenance].
///
/// This is fully equivalent to `addr as *const T`. The provenance of the returned pointer is that
/// of *some* pointer that was previously exposed by passing it to
/// [`expose_provenance`][pointer::expose_provenance], or a `ptr as usize` cast. In addition, memory
/// which is outside the control of the Rust abstract machine (MMIO registers, for example) is
/// always considered to be accessible with an exposed provenance, so long as this memory is disjoint
/// from memory that will be used by the abstract machine such as the stack, heap, and statics.
///
/// The exact provenance that gets picked is not specified. The compiler will do its best to pick
/// the "right" provenance for you (whatever that may be), but currently we cannot provide any
/// guarantees about which provenance the resulting pointer will have -- and therefore there
/// is no definite specification for which memory the resulting pointer may access.
///
/// If there is *no* previously 'exposed' provenance that justifies the way the returned pointer
/// will be used, the program has undefined behavior. In particular, the aliasing rules still apply:
/// pointers and references that have been invalidated due to aliasing accesses cannot be used
/// anymore, even if they have been exposed!
///
/// Due to its inherent ambiguity, this operation may not be supported by tools that help you to
/// stay conformant with the Rust memory model. It is recommended to use [Strict
/// Provenance][self#strict-provenance] APIs such as [`with_addr`][pointer::with_addr] wherever
/// possible.
///
/// On most platforms this will produce a value with the same bytes as the address. Platforms
/// which need to store additional information in a pointer may not support this operation,
/// since it is generally not possible to actually *compute* which provenance the returned
/// pointer has to pick up.
///
/// This is an [Exposed Provenance][crate::ptr#exposed-provenance] API.
#[must_use]
#[inline(always)]
#[stable(feature = "exposed_provenance", since = "1.84.0")]
#[rustc_const_unstable(feature = "const_exposed_provenance", issue = "144538")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
#[allow(fuzzy_provenance_casts)] // this *is* the explicit provenance API one should use instead
pub const fn with_exposed_provenance<T>(addr: usize) -> *const T {
    addr as *const T
}

/// Converts an address back to a mutable pointer, picking up some previously 'exposed'
/// [provenance][crate::ptr#provenance].
///
/// This is fully equivalent to `addr as *mut T`. The provenance of the returned pointer is that
/// of *some* pointer that was previously exposed by passing it to
/// [`expose_provenance`][pointer::expose_provenance], or a `ptr as usize` cast. In addition, memory
/// which is outside the control of the Rust abstract machine (MMIO registers, for example) is
/// always considered to be accessible with an exposed provenance, so long as this memory is disjoint
/// from memory that will be used by the abstract machine such as the stack, heap, and statics.
///
/// The exact provenance that gets picked is not specified. The compiler will do its best to pick
/// the "right" provenance for you (whatever that may be), but currently we cannot provide any
/// guarantees about which provenance the resulting pointer will have -- and therefore there
/// is no definite specification for which memory the resulting pointer may access.
///
/// If there is *no* previously 'exposed' provenance that justifies the way the returned pointer
/// will be used, the program has undefined behavior. In particular, the aliasing rules still apply:
/// pointers and references that have been invalidated due to aliasing accesses cannot be used
/// anymore, even if they have been exposed!
///
/// Due to its inherent ambiguity, this operation may not be supported by tools that help you to
/// stay conformant with the Rust memory model. It is recommended to use [Strict
/// Provenance][self#strict-provenance] APIs such as [`with_addr`][pointer::with_addr] wherever
/// possible.
///
/// On most platforms this will produce a value with the same bytes as the address. Platforms
/// which need to store additional information in a pointer may not support this operation,
/// since it is generally not possible to actually *compute* which provenance the returned
/// pointer has to pick up.
///
/// This is an [Exposed Provenance][crate::ptr#exposed-provenance] API.
#[must_use]
#[inline(always)]
#[stable(feature = "exposed_provenance", since = "1.84.0")]
#[rustc_const_unstable(feature = "const_exposed_provenance", issue = "144538")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
#[allow(fuzzy_provenance_casts)] // this *is* the explicit provenance API one should use instead
pub const fn with_exposed_provenance_mut<T>(addr: usize) -> *mut T {
    addr as *mut T
}

/// Converts a reference to a raw pointer.
///
/// For `r: &T`, `from_ref(r)` is equivalent to `r as *const T` (except for the caveat noted below),
/// but is a bit safer since it will never silently change type or mutability, in particular if the
/// code is refactored.
///
/// The caller must ensure that the pointee outlives the pointer this function returns, or else it
/// will end up dangling.
///
/// The caller must also ensure that the memory the pointer (non-transitively) points to is never
/// written to (except inside an `UnsafeCell`) using this pointer or any pointer derived from it. If
/// you need to mutate the pointee, use [`from_mut`]. Specifically, to turn a mutable reference `m:
/// &mut T` into `*const T`, prefer `from_mut(m).cast_const()` to obtain a pointer that can later be
/// used for mutation.
///
/// ## Interaction with lifetime extension
///
/// Note that this has subtle interactions with the rules for lifetime extension of temporaries in
/// tail expressions. This code is valid, albeit in a non-obvious way:
/// ```rust
/// # type T = i32;
/// # fn foo() -> T { 42 }
/// // The temporary holding the return value of `foo` has its lifetime extended,
/// // because the surrounding expression involves no function call.
/// let p = &foo() as *const T;
/// unsafe { p.read() };
/// ```
/// Naively replacing the cast with `from_ref` is not valid:
/// ```rust,no_run
/// # use std::ptr;
/// # type T = i32;
/// # fn foo() -> T { 42 }
/// // The temporary holding the return value of `foo` does *not* have its lifetime extended,
/// // because the surrounding expression involves a function call.
/// let p = ptr::from_ref(&foo());
/// unsafe { p.read() }; // UB! Reading from a dangling pointer ⚠️
/// ```
/// The recommended way to write this code is to avoid relying on lifetime extension
/// when raw pointers are involved:
/// ```rust
/// # use std::ptr;
/// # type T = i32;
/// # fn foo() -> T { 42 }
/// let x = foo();
/// let p = ptr::from_ref(&x);
/// unsafe { p.read() };
/// ```
#[inline(always)]
#[must_use]
#[stable(feature = "ptr_from_ref", since = "1.76.0")]
#[rustc_const_stable(feature = "ptr_from_ref", since = "1.76.0")]
#[rustc_never_returns_null_ptr]
#[rustc_diagnostic_item = "ptr_from_ref"]
pub const fn from_ref<T: PointeeSized>(r: &T) -> *const T {
    r
}

/// Converts a mutable reference to a raw pointer.
///
/// For `r: &mut T`, `from_mut(r)` is equivalent to `r as *mut T` (except for the caveat noted
/// below), but is a bit safer since it will never silently change type or mutability, in particular
/// if the code is refactored.
///
/// The caller must ensure that the pointee outlives the pointer this function returns, or else it
/// will end up dangling.
///
/// ## Interaction with lifetime extension
///
/// Note that this has subtle interactions with the rules for lifetime extension of temporaries in
/// tail expressions. This code is valid, albeit in a non-obvious way:
/// ```rust
/// # type T = i32;
/// # fn foo() -> T { 42 }
/// // The temporary holding the return value of `foo` has its lifetime extended,
/// // because the surrounding expression involves no function call.
/// let p = &mut foo() as *mut T;
/// unsafe { p.write(T::default()) };
/// ```
/// Naively replacing the cast with `from_mut` is not valid:
/// ```rust,no_run
/// # use std::ptr;
/// # type T = i32;
/// # fn foo() -> T { 42 }
/// // The temporary holding the return value of `foo` does *not* have its lifetime extended,
/// // because the surrounding expression involves a function call.
/// let p = ptr::from_mut(&mut foo());
/// unsafe { p.write(T::default()) }; // UB! Writing to a dangling pointer ⚠️
/// ```
/// The recommended way to write this code is to avoid relying on lifetime extension
/// when raw pointers are involved:
/// ```rust
/// # use std::ptr;
/// # type T = i32;
/// # fn foo() -> T { 42 }
/// let mut x = foo();
/// let p = ptr::from_mut(&mut x);
/// unsafe { p.write(T::default()) };
/// ```
#[inline(always)]
#[must_use]
#[stable(feature = "ptr_from_ref", since = "1.76.0")]
#[rustc_const_stable(feature = "ptr_from_ref", since = "1.76.0")]
#[rustc_never_returns_null_ptr]
pub const fn from_mut<T: PointeeSized>(r: &mut T) -> *mut T {
    r
}

/// Forms a raw slice from a pointer and a length.
///
/// The `len` argument is the number of **elements**, not the number of bytes.
///
/// This function is safe, but actually using the return value is unsafe.
/// See the documentation of [`slice::from_raw_parts`] for slice safety requirements.
///
/// [`slice::from_raw_parts`]: crate::slice::from_raw_parts
///
/// # Examples
///
/// ```rust
/// use std::ptr;
///
/// // create a slice pointer when starting out with a pointer to the first element
/// let x = [5, 6, 7];
/// let raw_pointer = x.as_ptr();
/// let slice = ptr::slice_from_raw_parts(raw_pointer, 3);
/// assert_eq!(unsafe { &*slice }[2], 7);
/// ```
///
/// You must ensure that the pointer is valid and not null before dereferencing
/// the raw slice. A slice reference must never have a null pointer, even if it's empty.
///
/// ```rust,should_panic
/// use std::ptr;
/// let danger: *const [u8] = ptr::slice_from_raw_parts(ptr::null(), 0);
/// unsafe {
///     danger.as_ref().expect("references must not be null");
/// }
/// ```
#[inline]
#[stable(feature = "slice_from_raw_parts", since = "1.42.0")]
#[rustc_const_stable(feature = "const_slice_from_raw_parts", since = "1.64.0")]
#[rustc_diagnostic_item = "ptr_slice_from_raw_parts"]
pub const fn slice_from_raw_parts<T>(data: *const T, len: usize) -> *const [T] {
    from_raw_parts(data, len)
}

/// Forms a raw mutable slice from a pointer and a length.
///
/// The `len` argument is the number of **elements**, not the number of bytes.
///
/// Performs the same functionality as [`slice_from_raw_parts`], except that a
/// raw mutable slice is returned, as opposed to a raw immutable slice.
///
/// This function is safe, but actually using the return value is unsafe.
/// See the documentation of [`slice::from_raw_parts_mut`] for slice safety requirements.
///
/// [`slice::from_raw_parts_mut`]: crate::slice::from_raw_parts_mut
///
/// # Examples
///
/// ```rust
/// use std::ptr;
///
/// let x = &mut [5, 6, 7];
/// let raw_pointer = x.as_mut_ptr();
/// let slice = ptr::slice_from_raw_parts_mut(raw_pointer, 3);
///
/// unsafe {
///     (*slice)[2] = 99; // assign a value at an index in the slice
/// };
///
/// assert_eq!(unsafe { &*slice }[2], 99);
/// ```
///
/// You must ensure that the pointer is valid and not null before dereferencing
/// the raw slice. A slice reference must never have a null pointer, even if it's empty.
///
/// ```rust,should_panic
/// use std::ptr;
/// let danger: *mut [u8] = ptr::slice_from_raw_parts_mut(ptr::null_mut(), 0);
/// unsafe {
///     danger.as_mut().expect("references must not be null");
/// }
/// ```
#[inline]
#[stable(feature = "slice_from_raw_parts", since = "1.42.0")]
#[rustc_const_stable(feature = "const_slice_from_raw_parts_mut", since = "1.83.0")]
#[rustc_diagnostic_item = "ptr_slice_from_raw_parts_mut"]
pub const fn slice_from_raw_parts_mut<T>(data: *mut T, len: usize) -> *mut [T] {
1214    from_raw_parts_mut(data, len)
1215}
1216
1217/// Swaps the values at two mutable locations of the same type, without
1218/// deinitializing either.
1219///
1220/// But for the following exceptions, this function is semantically
1221/// equivalent to [`mem::swap`]:
1222///
1223/// * It operates on raw pointers instead of references. When references are
1224///   available, [`mem::swap`] should be preferred.
1225///
1226/// * The two pointed-to values may overlap. If the values do overlap, then the
1227///   overlapping region of memory from `x` will be used. This is demonstrated
1228///   in the second example below.
1229///
1230/// * The operation is "untyped" in the sense that data may be uninitialized or otherwise violate
1231///   the requirements of `T`. The initialization state is preserved exactly.
1232///
1233/// # Safety
1234///
1235/// Behavior is undefined if any of the following conditions are violated:
1236///
1237/// * Both `x` and `y` must be [valid] for both reads and writes. They must remain valid even when the
1238///   other pointer is written. (This means if the memory ranges overlap, the two pointers must not
1239///   be subject to aliasing restrictions relative to each other.)
1240///
1241/// * Both `x` and `y` must be properly aligned.
1242///
1243/// Note that even if `T` has size `0`, the pointers must be properly aligned.
1244///
1245/// [valid]: self#safety
1246///
1247/// # Examples
1248///
1249/// Swapping two non-overlapping regions:
1250///
1251/// ```
1252/// use std::ptr;
1253///
1254/// let mut array = [0, 1, 2, 3];
1255///
1256/// let (x, y) = array.split_at_mut(2);
1257/// let x = x.as_mut_ptr().cast::<[u32; 2]>(); // this is `array[0..2]`
1258/// let y = y.as_mut_ptr().cast::<[u32; 2]>(); // this is `array[2..4]`
1259///
1260/// unsafe {
1261///     ptr::swap(x, y);
1262///     assert_eq!([2, 3, 0, 1], array);
1263/// }
1264/// ```
1265///
1266/// Swapping two overlapping regions:
1267///
1268/// ```
1269/// use std::ptr;
1270///
1271/// let mut array: [i32; 4] = [0, 1, 2, 3];
1272///
1273/// let array_ptr: *mut i32 = array.as_mut_ptr();
1274///
1275/// let x = array_ptr as *mut [i32; 3]; // this is `array[0..3]`
1276/// let y = unsafe { array_ptr.add(1) } as *mut [i32; 3]; // this is `array[1..4]`
1277///
1278/// unsafe {
1279///     ptr::swap(x, y);
1280///     // The indices `1..3` of the slice overlap between `x` and `y`.
1281///     // Reasonable results would be for them to be `[2, 3]`, so that indices `0..3` are
1282///     // `[1, 2, 3]` (matching `y` before the `swap`); or for them to be `[0, 1]`
1283///     // so that indices `1..4` are `[0, 1, 2]` (matching `x` before the `swap`).
1284///     // This implementation is defined to make the latter choice.
1285///     assert_eq!([1, 0, 1, 2], array);
1286/// }
1287/// ```
1288#[inline]
1289#[stable(feature = "rust1", since = "1.0.0")]
1290#[rustc_const_stable(feature = "const_swap", since = "1.85.0")]
1291#[rustc_diagnostic_item = "ptr_swap"]
1292pub const unsafe fn swap<T>(x: *mut T, y: *mut T) {
1293    // Give ourselves some scratch space to work with.
1294    // We do not have to worry about drops: `MaybeUninit` does nothing when dropped.
1295    let mut tmp = MaybeUninit::<T>::uninit();
1296
1297    // Perform the swap
1298    // SAFETY: the caller must guarantee that `x` and `y` are
1299    // valid for writes and properly aligned. `tmp` cannot be
1300    // overlapping either `x` or `y` because `tmp` was just allocated
1301    // on the stack as a separate allocation.
1302    unsafe {
1303        copy_nonoverlapping(x, tmp.as_mut_ptr(), 1);
1304        copy(y, x, 1); // `x` and `y` may overlap
1305        copy_nonoverlapping(tmp.as_ptr(), y, 1);
1306    }
1307}
1308
1309/// Swaps `count * size_of::<T>()` bytes between the two regions of memory
1310/// beginning at `x` and `y`. The two regions must *not* overlap.
1311///
1312/// The operation is "untyped" in the sense that data may be uninitialized or otherwise violate the
1313/// requirements of `T`. The initialization state is preserved exactly.
1314///
1315/// # Safety
1316///
1317/// Behavior is undefined if any of the following conditions are violated:
1318///
1319/// * Both `x` and `y` must be [valid] for both reads and writes of `count *
1320///   size_of::<T>()` bytes.
1321///
1322/// * Both `x` and `y` must be properly aligned.
1323///
1324/// * The region of memory beginning at `x` with a size of `count *
1325///   size_of::<T>()` bytes must *not* overlap with the region of memory
1326///   beginning at `y` with the same size.
1327///
1328/// Note that even if the effectively copied size (`count * size_of::<T>()`) is `0`,
1329/// the pointers must be properly aligned.
1330///
1331/// [valid]: self#safety
1332///
1333/// # Examples
1334///
1335/// Basic usage:
1336///
1337/// ```
1338/// use std::ptr;
1339///
1340/// let mut x = [1, 2, 3, 4];
1341/// let mut y = [7, 8, 9];
1342///
1343/// unsafe {
1344///     ptr::swap_nonoverlapping(x.as_mut_ptr(), y.as_mut_ptr(), 2);
1345/// }
1346///
1347/// assert_eq!(x, [7, 8, 3, 4]);
1348/// assert_eq!(y, [1, 2, 9]);
1349/// ```
1350///
1351/// # Const evaluation limitations
1352///
1353/// If this function is invoked during const-evaluation, the current implementation has a small (and
1354/// rarely relevant) limitation: if `count` is at least 2 and the data pointed to by `x` or `y`
1355/// contains a pointer that crosses the boundary of two `T`-sized chunks of memory, the function may
1356/// fail to evaluate (similar to a panic during const-evaluation). This behavior may change in the
1357/// future.
1358///
1359/// The limitation is illustrated by the following example:
1360///
1361/// ```
1362/// use std::mem::size_of;
1363/// use std::ptr;
1364///
1365/// const { unsafe {
1366///     const PTR_SIZE: usize = size_of::<*const i32>();
1367///     let mut data1 = [0u8; PTR_SIZE];
1368///     let mut data2 = [0u8; PTR_SIZE];
1369///     // Store a pointer in `data1`.
1370///     data1.as_mut_ptr().cast::<*const i32>().write_unaligned(&42);
1371///     // Swap the contents of `data1` and `data2` by swapping `PTR_SIZE` many `u8`-sized chunks.
1372///     // This call will fail, because the pointer in `data1` crosses the boundary
1373///     // between several of the 1-byte chunks that are being swapped here.
1374///     //ptr::swap_nonoverlapping(data1.as_mut_ptr(), data2.as_mut_ptr(), PTR_SIZE);
1375///     // Swap the contents of `data1` and `data2` by swapping a single chunk of size
1376///     // `[u8; PTR_SIZE]`. That works, as there is no pointer crossing the boundary between
1377///     // two chunks.
1378///     ptr::swap_nonoverlapping(&mut data1, &mut data2, 1);
1379///     // Read the pointer from `data2` and dereference it.
1380///     let ptr = data2.as_ptr().cast::<*const i32>().read_unaligned();
1381///     assert!(*ptr == 42);
1382/// } }
1383/// ```
1384#[inline]
1385#[stable(feature = "swap_nonoverlapping", since = "1.27.0")]
1386#[rustc_const_stable(feature = "const_swap_nonoverlapping", since = "1.88.0")]
1387#[rustc_diagnostic_item = "ptr_swap_nonoverlapping"]
1388#[rustc_allow_const_fn_unstable(const_eval_select)] // both implementations behave the same
1389#[track_caller]
1390pub const unsafe fn swap_nonoverlapping<T>(x: *mut T, y: *mut T, count: usize) {
1391    ub_checks::assert_unsafe_precondition!(
1392        check_library_ub,
1393        "ptr::swap_nonoverlapping requires that both pointer arguments are aligned and non-null \
1394        and the specified memory ranges do not overlap",
1395        (
1396            x: *mut () = x as *mut (),
1397            y: *mut () = y as *mut (),
1398            size: usize = size_of::<T>(),
1399            align: usize = align_of::<T>(),
1400            count: usize = count,
1401        ) => {
1402            let zero_size = size == 0 || count == 0;
1403            ub_checks::maybe_is_aligned_and_not_null(x, align, zero_size)
1404                && ub_checks::maybe_is_aligned_and_not_null(y, align, zero_size)
1405                && ub_checks::maybe_is_nonoverlapping(x, y, size, count)
1406        }
1407    );
1408
1409    const_eval_select!(
1410        @capture[T] { x: *mut T, y: *mut T, count: usize }:
1411        if const {
1412            // At compile-time we want to always copy this in chunks of `T`, to ensure that if there
1413            // are pointers inside `T` we will copy them in one go rather than trying to copy a part
1414            // of a pointer (which would not work).
1415            // SAFETY: Same preconditions as this function
1416            unsafe { swap_nonoverlapping_const(x, y, count) }
1417        } else {
1418            // Going through a slice here helps codegen know the size fits in `isize`
1419            let slice = slice_from_raw_parts_mut(x, count);
1420            // SAFETY: This is all readable from the pointer, meaning it's one
1421            // allocation, and thus cannot be more than isize::MAX bytes.
1422            let bytes = unsafe { mem::size_of_val_raw::<[T]>(slice) };
1423            if let Some(bytes) = NonZero::new(bytes) {
1424                // SAFETY: These are the same ranges, just expressed in a different
1425                // type, so they're still non-overlapping.
1426                unsafe { swap_nonoverlapping_bytes(x.cast(), y.cast(), bytes) };
1427            }
1428        }
1429    )
1430}
1431
1432/// Same behavior and safety conditions as [`swap_nonoverlapping`]
1433#[inline]
1434const unsafe fn swap_nonoverlapping_const<T>(x: *mut T, y: *mut T, count: usize) {
1435    let mut i = 0;
1436    while i < count {
1437        // SAFETY: By precondition, `i` is in-bounds because it's below `count`
1438        let x = unsafe { x.add(i) };
1439        // SAFETY: By precondition, `i` is in-bounds because it's below `count`
1440        // and it's distinct from `x` since the ranges are non-overlapping
1441        let y = unsafe { y.add(i) };
1442
1443        // SAFETY: we're only ever given pointers that are valid to read/write,
1444        // including being aligned, and nothing here panics so it's drop-safe.
1445        unsafe {
1446            // Note that it's critical that these use `copy_nonoverlapping`,
1447            // rather than `read`/`write`, to avoid #134713 if T has padding.
1448            let mut temp = MaybeUninit::<T>::uninit();
1449            copy_nonoverlapping(x, temp.as_mut_ptr(), 1);
1450            copy_nonoverlapping(y, x, 1);
1451            copy_nonoverlapping(temp.as_ptr(), y, 1);
1452        }
1453
1454        i += 1;
1455    }
1456}
1457
1458// Don't let MIR inline this, because we really want it to keep its noalias metadata
1459#[rustc_no_mir_inline]
1460#[inline]
1461fn swap_chunk<const N: usize>(x: &mut MaybeUninit<[u8; N]>, y: &mut MaybeUninit<[u8; N]>) {
1462    let a = *x;
1463    let b = *y;
1464    *x = b;
1465    *y = a;
1466}
1467
1468#[inline]
1469unsafe fn swap_nonoverlapping_bytes(x: *mut u8, y: *mut u8, bytes: NonZero<usize>) {
1470    // Same as `swap_nonoverlapping::<[u8; N]>`.
1471    unsafe fn swap_nonoverlapping_chunks<const N: usize>(
1472        x: *mut MaybeUninit<[u8; N]>,
1473        y: *mut MaybeUninit<[u8; N]>,
1474        chunks: NonZero<usize>,
1475    ) {
1476        let chunks = chunks.get();
1477        for i in 0..chunks {
1478            // SAFETY: i is in [0, chunks) so the adds and dereferences are in-bounds.
1479            unsafe { swap_chunk(&mut *x.add(i), &mut *y.add(i)) };
1480        }
1481    }
1482
1483    // Same as `swap_nonoverlapping_bytes`, but accepts at most 1+2+4=7 bytes
1484    #[inline]
1485    unsafe fn swap_nonoverlapping_short(x: *mut u8, y: *mut u8, bytes: NonZero<usize>) {
1486        // Tail handling for auto-vectorized code sometimes has element-at-a-time behaviour,
1487        // see <https://github.com/rust-lang/rust/issues/134946>.
1488        // By swapping as different sizes, rather than as a loop over bytes,
1489        // we make sure not to end up with, say, seven byte-at-a-time copies.
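        // For example, `bytes == 7` performs one 4-byte, one 2-byte, and one
        // 1-byte swap (at offsets 0, 4, and 6).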
1490
1491        let bytes = bytes.get();
1492        let mut i = 0;
1493        macro_rules! swap_prefix {
1494            ($($n:literal)+) => {$(
1495                if (bytes & $n) != 0 {
1496                    // SAFETY: `i` can only have the same bits set as those in bytes,
1497                    // so these `add`s are in-bounds of `bytes`.  But the bit for
1498                    // `$n` hasn't been set yet, so the `$n` bytes that `swap_chunk`
1499                    // will read and write are within the usable range.
1500                    unsafe { swap_chunk::<$n>(&mut*x.add(i).cast(), &mut*y.add(i).cast()) };
1501                    i |= $n;
1502                }
1503            )+};
1504        }
1505        swap_prefix!(4 2 1);
1506        debug_assert_eq!(i, bytes);
1507    }
1508
1509    const CHUNK_SIZE: usize = size_of::<*const ()>();
1510    let bytes = bytes.get();
1511
1512    let chunks = bytes / CHUNK_SIZE;
1513    let tail = bytes % CHUNK_SIZE;
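    // For example, with 8-byte pointers, `bytes == 20` gives two 8-byte chunks
    // plus a 4-byte tail.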
1514    if let Some(chunks) = NonZero::new(chunks) {
1515        // SAFETY: this is bytes/CHUNK_SIZE*CHUNK_SIZE bytes, which is <= bytes,
1516        // so it's within the range of our non-overlapping bytes.
1517        unsafe { swap_nonoverlapping_chunks::<CHUNK_SIZE>(x.cast(), y.cast(), chunks) };
1518    }
1519    if let Some(tail) = NonZero::new(tail) {
1520        const { assert!(CHUNK_SIZE <= 8) };
1521        let delta = chunks * CHUNK_SIZE;
1522        // SAFETY: the tail length is below CHUNK_SIZE because of the remainder,
1523        // and CHUNK_SIZE is at most 8 by the const assert, so tail <= 7
1524        unsafe { swap_nonoverlapping_short(x.add(delta), y.add(delta), tail) };
1525    }
1526}
1527
1528/// Moves `src` into the pointed `dst`, returning the previous `dst` value.
1529///
1530/// Neither value is dropped.
1531///
1532/// This function is semantically equivalent to [`mem::replace`] except that it
1533/// operates on raw pointers instead of references. When references are
1534/// available, [`mem::replace`] should be preferred.
1535///
1536/// # Safety
1537///
1538/// Behavior is undefined if any of the following conditions are violated:
1539///
1540/// * `dst` must be [valid] for both reads and writes.
1541///
1542/// * `dst` must be properly aligned.
1543///
1544/// * `dst` must point to a properly initialized value of type `T`.
1545///
1546/// Note that even if `T` has size `0`, the pointer must be properly aligned.
1547///
1548/// [valid]: self#safety
1549///
1550/// # Examples
1551///
1552/// ```
1553/// use std::ptr;
1554///
1555/// let mut rust = vec!['b', 'u', 's', 't'];
1556///
1557/// // `mem::replace` would have the same effect without requiring the unsafe
1558/// // block.
1559/// let b = unsafe {
1560///     ptr::replace(&mut rust[0], 'r')
1561/// };
1562///
1563/// assert_eq!(b, 'b');
1564/// assert_eq!(rust, &['r', 'u', 's', 't']);
1565/// ```
1566#[inline]
1567#[stable(feature = "rust1", since = "1.0.0")]
1568#[rustc_const_stable(feature = "const_replace", since = "1.83.0")]
1569#[rustc_diagnostic_item = "ptr_replace"]
1570#[track_caller]
1571pub const unsafe fn replace<T>(dst: *mut T, src: T) -> T {
1572    // SAFETY: the caller must guarantee that `dst` is valid to be
1573    // cast to a mutable reference (valid for writes, aligned, initialized),
1574    // and cannot overlap `src` since `dst` must point to a distinct
1575    // allocation.
1576    unsafe {
1577        ub_checks::assert_unsafe_precondition!(
1578            check_language_ub,
1579            "ptr::replace requires that the pointer argument is aligned and non-null",
1580            (
1581                addr: *const () = dst as *const (),
1582                align: usize = align_of::<T>(),
1583                is_zst: bool = T::IS_ZST,
1584            ) => ub_checks::maybe_is_aligned_and_not_null(addr, align, is_zst)
1585        );
1586        mem::replace(&mut *dst, src)
1587    }
1588}
1589
1590/// Reads the value from `src` without moving it. This leaves the
1591/// memory in `src` unchanged.
1592///
1593/// # Safety
1594///
1595/// Behavior is undefined if any of the following conditions are violated:
1596///
1597/// * `src` must be [valid] for reads.
1598///
1599/// * `src` must be properly aligned. Use [`read_unaligned`] if this is not the
1600///   case.
1601///
1602/// * `src` must point to a properly initialized value of type `T`.
1603///
1604/// Note that even if `T` has size `0`, the pointer must be properly aligned.
1605///
1606/// # Examples
1607///
1608/// Basic usage:
1609///
1610/// ```
1611/// let x = 12;
1612/// let y = &x as *const i32;
1613///
1614/// unsafe {
1615///     assert_eq!(std::ptr::read(y), 12);
1616/// }
1617/// ```
1618///
1619/// Manually implement [`mem::swap`]:
1620///
1621/// ```
1622/// use std::ptr;
1623///
1624/// fn swap<T>(a: &mut T, b: &mut T) {
1625///     unsafe {
1626///         // Create a bitwise copy of the value at `a` in `tmp`.
1627///         let tmp = ptr::read(a);
1628///
1629///         // Exiting at this point (either by explicitly returning or by
1630///         // calling a function which panics) would cause the value in `tmp` to
1631///         // be dropped while the same value is still referenced by `a`. This
1632///         // could trigger undefined behavior if `T` is not `Copy`.
1633///
1634///         // Create a bitwise copy of the value at `b` in `a`.
1635///         // This is safe because mutable references cannot alias.
1636///         ptr::copy_nonoverlapping(b, a, 1);
1637///
1638///         // As above, exiting here could trigger undefined behavior because
1639///         // the same value is referenced by `a` and `b`.
1640///
1641///         // Move `tmp` into `b`.
1642///         ptr::write(b, tmp);
1643///
1644///         // `tmp` has been moved (`write` takes ownership of its second argument),
1645///         // so nothing is dropped implicitly here.
1646///     }
1647/// }
1648///
1649/// let mut foo = "foo".to_owned();
1650/// let mut bar = "bar".to_owned();
1651///
1652/// swap(&mut foo, &mut bar);
1653///
1654/// assert_eq!(foo, "bar");
1655/// assert_eq!(bar, "foo");
1656/// ```
1657///
1658/// ## Ownership of the Returned Value
1659///
1660/// `read` creates a bitwise copy of `T`, regardless of whether `T` is [`Copy`].
1661/// If `T` is not [`Copy`], using both the returned value and the value at
1662/// `*src` can violate memory safety. Note that assigning to `*src` counts as a
1663/// use because it will attempt to drop the value at `*src`.
1664///
1665/// [`write()`] can be used to overwrite data without causing it to be dropped.
1666///
1667/// ```
1668/// use std::ptr;
1669///
1670/// let mut s = String::from("foo");
1671/// unsafe {
1672///     // `s2` now points to the same underlying memory as `s`.
1673///     let mut s2: String = ptr::read(&s);
1674///
1675///     assert_eq!(s2, "foo");
1676///
1677///     // Assigning to `s2` causes its original value to be dropped. Beyond
1678///     // this point, `s` must no longer be used, as the underlying memory has
1679///     // been freed.
1680///     s2 = String::default();
1681///     assert_eq!(s2, "");
1682///
1683///     // Assigning to `s` would cause the old value to be dropped again,
1684///     // resulting in undefined behavior.
1685///     // s = String::from("bar"); // ERROR
1686///
1687///     // `ptr::write` can be used to overwrite a value without dropping it.
1688///     ptr::write(&mut s, String::from("bar"));
1689/// }
1690///
1691/// assert_eq!(s, "bar");
1692/// ```
1693///
1694/// [valid]: self#safety
1695#[inline]
1696#[stable(feature = "rust1", since = "1.0.0")]
1697#[rustc_const_stable(feature = "const_ptr_read", since = "1.71.0")]
1698#[track_caller]
1699#[rustc_diagnostic_item = "ptr_read"]
1700pub const unsafe fn read<T>(src: *const T) -> T {
1701    // It would be semantically correct to implement this via `copy_nonoverlapping`
1702    // and `MaybeUninit`, as was done before PR #109035. Calling `assume_init`
1703    // provides enough information to know that this is a typed operation.
1704
1705    // However, as of March 2023 the compiler was not capable of taking advantage
1706    // of that information. Thus, the implementation here switched to an intrinsic,
1707    // which lowers to `_0 = *src` in MIR, to address a few issues:
1708    //
1709    // - Using `MaybeUninit::assume_init` after a `copy_nonoverlapping` was not
1710    //   turning the untyped copy into a typed load. As such, the generated
1711    //   `load` in LLVM didn't get various metadata, such as `!range` (#73258),
1712    //   `!nonnull`, and `!noundef`, resulting in poorer optimization.
1713    // - Going through the extra local resulted in multiple extra copies, even
1714    //   in optimized MIR.  (Ignoring StorageLive/Dead, the intrinsic is one
1715    //   MIR statement, while the previous implementation was eight.)  LLVM
1716    //   could sometimes optimize them away, but because `read` is at the core
1717    //   of so many things, not having them in the first place improves what we
1718    //   hand off to the backend.  For example, `mem::replace::<Big>` previously
1719    //   emitted 4 `alloca` and 6 `memcpy`s, but is now 1 `alloca` and 3 `memcpy`s.
1720    // - In general, this approach keeps us from getting any more bugs (like
1721    //   #106369) that boil down to "`read(p)` is worse than `*p`", as this
1722    //   makes them look identical to the backend (or other MIR consumers).
1723    //
1724    // Future enhancements to MIR optimizations might well allow this to return
1725    // to the previous implementation, rather than using an intrinsic.
1726
1727    // SAFETY: the caller must guarantee that `src` is valid for reads.
1728    unsafe {
1729        #[cfg(debug_assertions)] // Too expensive to always enable (for now?)
1730        ub_checks::assert_unsafe_precondition!(
1731            check_language_ub,
1732            "ptr::read requires that the pointer argument is aligned and non-null",
1733            (
1734                addr: *const () = src as *const (),
1735                align: usize = align_of::<T>(),
1736                is_zst: bool = T::IS_ZST,
1737            ) => ub_checks::maybe_is_aligned_and_not_null(addr, align, is_zst)
1738        );
1739        crate::intrinsics::read_via_copy(src)
1740    }
1741}
1742
1743/// Reads the value from `src` without moving it. This leaves the
1744/// memory in `src` unchanged.
1745///
1746/// Unlike [`read`], `read_unaligned` works with unaligned pointers.
1747///
1748/// # Safety
1749///
1750/// Behavior is undefined if any of the following conditions are violated:
1751///
1752/// * `src` must be [valid] for reads.
1753///
1754/// * `src` must point to a properly initialized value of type `T`.
1755///
1756/// Like [`read`], `read_unaligned` creates a bitwise copy of `T`, regardless of
1757/// whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the returned
1758/// value and the value at `*src` can [violate memory safety][read-ownership].
1759///
1760/// [read-ownership]: read#ownership-of-the-returned-value
1761/// [valid]: self#safety
1762///
1763/// ## On `packed` structs
1764///
1765/// Attempting to create a raw pointer to an `unaligned` struct field with
1766/// an expression such as `&packed.unaligned as *const FieldType` creates an
1767/// intermediate unaligned reference before converting that to a raw pointer.
1768/// That this reference is temporary and immediately cast is inconsequential
1769/// as the compiler always expects references to be properly aligned.
1770/// As a result, using `&packed.unaligned as *const FieldType` causes immediate
1771/// *undefined behavior* in your program.
1772///
1773/// Instead you must use the `&raw const` syntax to create the pointer.
1774/// You may use that constructed pointer together with this function.
1775///
1776/// An example of what not to do and how this relates to `read_unaligned` is:
1777///
1778/// ```
1779/// #[repr(packed, C)]
1780/// struct Packed {
1781///     _padding: u8,
1782///     unaligned: u32,
1783/// }
1784///
1785/// let packed = Packed {
1786///     _padding: 0x00,
1787///     unaligned: 0x01020304,
1788/// };
1789///
1790/// // Take the address of a 32-bit integer which is not aligned.
1791/// // In contrast to `&packed.unaligned as *const _`, this has no undefined behavior.
1792/// let unaligned = &raw const packed.unaligned;
1793///
1794/// let v = unsafe { std::ptr::read_unaligned(unaligned) };
1795/// assert_eq!(v, 0x01020304);
1796/// ```
1797///
1798/// Accessing unaligned fields directly with e.g. `packed.unaligned` is safe however.
1799///
1800/// # Examples
1801///
1802/// Read a `usize` value from a byte buffer:
1803///
1804/// ```
1805/// fn read_usize(x: &[u8]) -> usize {
1806///     assert!(x.len() >= size_of::<usize>());
1807///
1808///     let ptr = x.as_ptr() as *const usize;
1809///
1810///     unsafe { ptr.read_unaligned() }
1811/// }
1812/// ```
1813#[inline]
1814#[stable(feature = "ptr_unaligned", since = "1.17.0")]
1815#[rustc_const_stable(feature = "const_ptr_read", since = "1.71.0")]
1816#[track_caller]
1817#[rustc_diagnostic_item = "ptr_read_unaligned"]
1818pub const unsafe fn read_unaligned<T>(src: *const T) -> T {
1819    let mut tmp = MaybeUninit::<T>::uninit();
1820    // SAFETY: the caller must guarantee that `src` is valid for reads.
1821    // `src` cannot overlap `tmp` because `tmp` was just allocated on
1822    // the stack as a separate allocation.
1823    //
1824    // Also, since we just wrote a valid value into `tmp`, it is guaranteed
1825    // to be properly initialized.
1826    unsafe {
1827        copy_nonoverlapping(src as *const u8, tmp.as_mut_ptr() as *mut u8, size_of::<T>());
1828        tmp.assume_init()
1829    }
1830}
1831
1832/// Overwrites a memory location with the given value without reading or
1833/// dropping the old value.
1834///
1835/// `write` does not drop the contents of `dst`. This is safe, but it could leak
1836/// allocations or resources, so care should be taken not to overwrite an object
1837/// that should be dropped.
1838///
1839/// Additionally, it does not drop `src`. Semantically, `src` is moved into the
1840/// location pointed to by `dst`.
1841///
1842/// This is appropriate for initializing uninitialized memory, or overwriting
1843/// memory that has previously been [`read`] from.
1844///
1845/// # Safety
1846///
1847/// Behavior is undefined if any of the following conditions are violated:
1848///
1849/// * `dst` must be [valid] for writes.
1850///
1851/// * `dst` must be properly aligned. Use [`write_unaligned`] if this is not the
1852///   case.
1853///
1854/// Note that even if `T` has size `0`, the pointer must be properly aligned.
1855///
1856/// [valid]: self#safety
1857///
1858/// # Examples
1859///
1860/// Basic usage:
1861///
1862/// ```
1863/// let mut x = 0;
1864/// let y = &mut x as *mut i32;
1865/// let z = 12;
1866///
1867/// unsafe {
1868///     std::ptr::write(y, z);
1869///     assert_eq!(std::ptr::read(y), 12);
1870/// }
1871/// ```
1872///
1873/// Manually implement [`mem::swap`]:
1874///
1875/// ```
1876/// use std::ptr;
1877///
1878/// fn swap<T>(a: &mut T, b: &mut T) {
1879///     unsafe {
1880///         // Create a bitwise copy of the value at `a` in `tmp`.
1881///         let tmp = ptr::read(a);
1882///
1883///         // Exiting at this point (either by explicitly returning or by
1884///         // calling a function which panics) would cause the value in `tmp` to
1885///         // be dropped while the same value is still referenced by `a`. This
1886///         // could trigger undefined behavior if `T` is not `Copy`.
1887///
1888///         // Create a bitwise copy of the value at `b` in `a`.
1889///         // This is safe because mutable references cannot alias.
1890///         ptr::copy_nonoverlapping(b, a, 1);
1891///
1892///         // As above, exiting here could trigger undefined behavior because
1893///         // the same value is referenced by `a` and `b`.
1894///
1895///         // Move `tmp` into `b`.
1896///         ptr::write(b, tmp);
1897///
1898///         // `tmp` has been moved (`write` takes ownership of its second argument),
1899///         // so nothing is dropped implicitly here.
1900///     }
1901/// }
1902///
1903/// let mut foo = "foo".to_owned();
1904/// let mut bar = "bar".to_owned();
1905///
1906/// swap(&mut foo, &mut bar);
1907///
1908/// assert_eq!(foo, "bar");
1909/// assert_eq!(bar, "foo");
1910/// ```
1911#[inline]
1912#[stable(feature = "rust1", since = "1.0.0")]
1913#[rustc_const_stable(feature = "const_ptr_write", since = "1.83.0")]
1914#[rustc_diagnostic_item = "ptr_write"]
1915#[track_caller]
1916pub const unsafe fn write<T>(dst: *mut T, src: T) {
1917    // Semantically, it would be fine for this to be implemented as a
1918    // `copy_nonoverlapping` and appropriate drop suppression of `src`.
1919
1920    // However, implementing via that currently produces more MIR than is ideal.
1921    // Using an intrinsic keeps it down to just the simple `*dst = move src` in
1922    // MIR (11 statements shorter, at the time of writing), and also allows
1923    // `src` to stay an SSA value in codegen_ssa, rather than a memory one.
1924
1925    // SAFETY: the caller must guarantee that `dst` is valid for writes.
1926    // `dst` cannot overlap `src` because the caller has mutable access
1927    // to `dst` while `src` is owned by this function.
1928    unsafe {
1929        #[cfg(debug_assertions)] // Too expensive to always enable (for now?)
1930        ub_checks::assert_unsafe_precondition!(
1931            check_language_ub,
1932            "ptr::write requires that the pointer argument is aligned and non-null",
1933            (
1934                addr: *mut () = dst as *mut (),
1935                align: usize = align_of::<T>(),
1936                is_zst: bool = T::IS_ZST,
1937            ) => ub_checks::maybe_is_aligned_and_not_null(addr, align, is_zst)
1938        );
1939        intrinsics::write_via_move(dst, src)
1940    }
1941}
1942
1943/// Overwrites a memory location with the given value without reading or
1944/// dropping the old value.
1945///
1946/// Unlike [`write()`], the pointer may be unaligned.
1947///
1948/// `write_unaligned` does not drop the contents of `dst`. This is safe, but it
1949/// could leak allocations or resources, so care should be taken not to overwrite
1950/// an object that should be dropped.
1951///
1952/// Additionally, it does not drop `src`. Semantically, `src` is moved into the
1953/// location pointed to by `dst`.
1954///
1955/// This is appropriate for initializing uninitialized memory, or overwriting
1956/// memory that has previously been read with [`read_unaligned`].
1957///
1958/// # Safety
1959///
1960/// Behavior is undefined if any of the following conditions are violated:
1961///
1962/// * `dst` must be [valid] for writes.
1963///
1964/// [valid]: self#safety
1965///
1966/// ## On `packed` structs
1967///
1968/// Attempting to create a raw pointer to an `unaligned` struct field with
1969/// an expression such as `&packed.unaligned as *const FieldType` creates an
1970/// intermediate unaligned reference before converting that to a raw pointer.
1971/// That this reference is temporary and immediately cast is inconsequential
1972/// as the compiler always expects references to be properly aligned.
1973/// As a result, using `&packed.unaligned as *const FieldType` causes immediate
1974/// *undefined behavior* in your program.
1975///
1976/// Instead, you must use the `&raw mut` syntax to create the pointer.
1977/// You may use that constructed pointer together with this function.
1978///
1979/// An example of how to do it and how this relates to `write_unaligned` is:
1980///
1981/// ```
1982/// #[repr(packed, C)]
1983/// struct Packed {
1984///     _padding: u8,
1985///     unaligned: u32,
1986/// }
1987///
1988/// let mut packed: Packed = unsafe { std::mem::zeroed() };
1989///
1990/// // Take the address of a 32-bit integer which is not aligned.
1991/// // In contrast to `&packed.unaligned as *mut _`, this has no undefined behavior.
1992/// let unaligned = &raw mut packed.unaligned;
1993///
1994/// unsafe { std::ptr::write_unaligned(unaligned, 42) };
1995///
1996/// assert_eq!({packed.unaligned}, 42); // `{...}` forces copying the field instead of creating a reference.
1997/// ```
1998///
1999/// Accessing unaligned fields directly with e.g. `packed.unaligned` is safe however
2000/// (as can be seen in the `assert_eq!` above).
2001///
2002/// # Examples
2003///
2004/// Write a `usize` value to a byte buffer:
2005///
2006/// ```
2007/// fn write_usize(x: &mut [u8], val: usize) {
2008///     assert!(x.len() >= size_of::<usize>());
2009///
2010///     let ptr = x.as_mut_ptr() as *mut usize;
2011///
2012///     unsafe { ptr.write_unaligned(val) }
2013/// }
2014/// ```
2015#[inline]
2016#[stable(feature = "ptr_unaligned", since = "1.17.0")]
2017#[rustc_const_stable(feature = "const_ptr_write", since = "1.83.0")]
2018#[rustc_diagnostic_item = "ptr_write_unaligned"]
2019#[track_caller]
2020pub const unsafe fn write_unaligned<T>(dst: *mut T, src: T) {
2021    // SAFETY: the caller must guarantee that `dst` is valid for writes.
2022    // `dst` cannot overlap `src` because the caller has mutable access
2023    // to `dst` while `src` is owned by this function.
2024    unsafe {
2025        copy_nonoverlapping((&raw const src) as *const u8, dst as *mut u8, size_of::<T>());
2026        // We are calling the intrinsic directly to avoid function calls in the generated code.
2027        intrinsics::forget(src);
2028    }
2029}
2030
2031/// Performs a volatile read of the value from `src` without moving it.
2032///
2033/// Volatile operations are intended to act on I/O memory. As such, they are considered externally
2034/// observable events (just like syscalls, but less opaque), and are guaranteed to not be elided or
2035/// reordered by the compiler across other externally observable events. With this in mind, there
2036/// are two cases of usage that need to be distinguished:
2037///
2038/// - When a volatile operation is used for memory inside an [allocation], it behaves exactly like
2039///   [`read`], except for the additional guarantee that it won't be elided or reordered (see
2040///   above). This implies that the operation will actually access memory and not e.g. be lowered to
2041///   reusing data from a previous read. Other than that, all the usual rules for memory accesses
2042///   apply (including provenance).  In particular, just like in C, whether an operation is volatile
2043///   has no bearing whatsoever on questions involving concurrent accesses from multiple threads.
2044///   Volatile accesses behave exactly like non-atomic accesses in that regard.
2045///
2046/// - Volatile operations, however, may also be used to access memory that is _outside_ of any Rust
2047///   allocation. In this use-case, the pointer does *not* have to be [valid] for reads. This is
2048///   typically used for CPU and peripheral registers that must be accessed via an I/O memory
2049///   mapping, most commonly at fixed addresses reserved by the hardware. These often have special
2050///   semantics associated to their manipulation, and cannot be used as general purpose memory.
2051///   Here, any address value is possible, including 0 and [`usize::MAX`], so long as the semantics
2052///   of such a read are well-defined by the target hardware. The provenance of the pointer is
2053///   irrelevant, and it can be created with [`without_provenance`]. The access must not trap. It
2054///   can cause side-effects, but those must not affect Rust-allocated memory in any way. This
2055///   access is still not considered [atomic], and as such it cannot be used for inter-thread
2056///   synchronization.
2057///
2058/// Note that volatile memory operations where `T` is a zero-sized type are noops and may be ignored.
2059///
2060/// [allocation]: crate::ptr#allocated-object
2061/// [atomic]: crate::sync::atomic#memory-model-for-atomic-accesses
2062///
2063/// # Safety
2064///
2065/// Like [`read`], `read_volatile` creates a bitwise copy of `T`, regardless of whether `T` is
2066/// [`Copy`]. If `T` is not [`Copy`], using both the returned value and the value at `*src` can
2067/// [violate memory safety][read-ownership]. However, storing non-[`Copy`] types in volatile memory
2068/// is almost certainly incorrect.
2069///
2070/// Behavior is undefined if any of the following conditions are violated:
2071///
2072/// * `src` must be either [valid] for reads, or it must point to memory outside of all Rust
2073///   allocations and reading from that memory must:
2074///   - not trap, and
2075///   - not cause any memory inside a Rust allocation to be modified.
2076///
2077/// * `src` must be properly aligned.
2078///
2079/// * Reading from `src` must produce a properly initialized value of type `T`.
2080///
2081/// Note that even if `T` has size `0`, the pointer must be properly aligned.
2082///
2083/// [valid]: self#safety
2084/// [read-ownership]: read#ownership-of-the-returned-value
2085///
2086/// # Examples
2087///
2088/// Basic usage:
2089///
2090/// ```
2091/// let x = 12;
2092/// let y = &x as *const i32;
2093///
2094/// unsafe {
2095///     assert_eq!(std::ptr::read_volatile(y), 12);
2096/// }
2097/// ```
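///
/// A sketch of the I/O-memory use-case; the address here is made up for
/// illustration, and what (if anything) lives there is entirely target-specific:
///
/// ```rust,no_run
/// use std::ptr;
///
/// // Hypothetical status register of a memory-mapped peripheral.
/// const STATUS: *const u32 = ptr::without_provenance(0xF000_0000);
///
/// let _status = unsafe { ptr::read_volatile(STATUS) };
/// ```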
2098#[inline]
2099#[stable(feature = "volatile", since = "1.9.0")]
2100#[track_caller]
2101#[rustc_diagnostic_item = "ptr_read_volatile"]
2102pub unsafe fn read_volatile<T>(src: *const T) -> T {
2103    // SAFETY: the caller must uphold the safety contract for `volatile_load`.
2104    unsafe {
2105        ub_checks::assert_unsafe_precondition!(
2106            check_language_ub,
2107            "ptr::read_volatile requires that the pointer argument is aligned",
2108            (
2109                addr: *const () = src as *const (),
2110                align: usize = align_of::<T>(),
2111            ) => ub_checks::maybe_is_aligned(addr, align)
2112        );
2113        intrinsics::volatile_load(src)
2114    }
2115}
2116
2117/// Performs a volatile write of a memory location with the given value without reading or dropping
2118/// the old value.
2119///
2120/// Volatile operations are intended to act on I/O memory. As such, they are considered externally
2121/// observable events (just like syscalls), and are guaranteed to not be elided or reordered by the
2122/// compiler across other externally observable events. With this in mind, there are two cases of
2123/// usage that need to be distinguished:
2124///
2125/// - When a volatile operation is used for memory inside an [allocation], it behaves exactly like
2126///   [`write`][write()], except for the additional guarantee that it won't be elided or reordered
2127///   (see above). This implies that the operation will actually access memory and not e.g. be
2128///   lowered to a register access. Other than that, all the usual rules for memory accesses apply
2129///   (including provenance). In particular, just like in C, whether an operation is volatile has no
2130///   bearing whatsoever on questions involving concurrent access from multiple threads. Volatile
2131///   accesses behave exactly like non-atomic accesses in that regard.
2132///
2133/// - Volatile operations, however, may also be used to access memory that is _outside_ of any Rust
2134///   allocation. In this use-case, the pointer does *not* have to be [valid] for writes. This is
2135///   typically used for CPU and peripheral registers that must be accessed via an I/O memory
2136///   mapping, most commonly at fixed addresses reserved by the hardware. These often have special
2137///   semantics associated to their manipulation, and cannot be used as general purpose memory.
2138///   Here, any address value is possible, including 0 and [`usize::MAX`], so long as the semantics
2139///   of such a write are well-defined by the target hardware. The provenance of the pointer is
2140///   irrelevant, and it can be created with [`without_provenance`]. The access must not trap. It
2141///   can cause side-effects, but those must not affect Rust-allocated memory in any way. This
2142///   access is still not considered [atomic], and as such it cannot be used for inter-thread
2143///   synchronization.
2144///
2145/// Note that volatile memory operations on zero-sized types (e.g., if a zero-sized type is passed
2146/// to `write_volatile`) are noops and may be ignored.
2147///
2148/// `write_volatile` does not drop the contents of `dst`. This is safe, but it could leak
2149/// allocations or resources, so care should be taken not to overwrite an object that should be
2150/// dropped when operating on Rust memory. Additionally, it does not drop `src`. Semantically, `src`
2151/// is moved into the location pointed to by `dst`.
2152///
2153/// [allocation]: crate::ptr#allocated-object
2154/// [atomic]: crate::sync::atomic#memory-model-for-atomic-accesses
2155///
2156/// # Safety
2157///
2158/// Behavior is undefined if any of the following conditions are violated:
2159///
2160/// * `dst` must be either [valid] for writes, or it must point to memory outside of all Rust
2161///   allocations and writing to that memory must:
2162///   - not trap, and
2163///   - not cause any memory inside a Rust allocation to be modified.
2164///
2165/// * `dst` must be properly aligned.
2166///
2167/// Note that even if `T` has size `0`, the pointer must be properly aligned.
2168///
2169/// [valid]: self#safety
2170///
2171/// # Examples
2172///
2173/// Basic usage:
2174///
2175/// ```
2176/// let mut x = 0;
2177/// let y = &mut x as *mut i32;
2178/// let z = 12;
2179///
2180/// unsafe {
2181///     std::ptr::write_volatile(y, z);
2182///     assert_eq!(std::ptr::read_volatile(y), 12);
2183/// }
2184/// ```
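///
/// A sketch of the I/O-memory use-case; the address and the register's meaning
/// are made up for illustration:
///
/// ```rust,no_run
/// use std::ptr;
///
/// // Hypothetical control register of a memory-mapped peripheral.
/// const CONTROL: *mut u32 = ptr::without_provenance_mut(0xF000_0004);
///
/// unsafe { ptr::write_volatile(CONTROL, 1) };
/// ```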
2185#[inline]
2186#[stable(feature = "volatile", since = "1.9.0")]
2187#[rustc_diagnostic_item = "ptr_write_volatile"]
2188#[track_caller]
2189pub unsafe fn write_volatile<T>(dst: *mut T, src: T) {
2190    // SAFETY: the caller must uphold the safety contract for `volatile_store`.
2191    unsafe {
2192        ub_checks::assert_unsafe_precondition!(
2193            check_language_ub,
2194            "ptr::write_volatile requires that the pointer argument is aligned",
2195            (
2196                addr: *mut () = dst as *mut (),
2197                align: usize = align_of::<T>(),
2198            ) => ub_checks::maybe_is_aligned(addr, align)
2199        );
2200        intrinsics::volatile_store(dst, src);
2201    }
2202}
2203
2204/// Align pointer `p`.
2205///
2206/// Calculate offset (in terms of elements of `size_of::<T>()` stride) that has to be applied
2207/// to pointer `p` so that pointer `p` would get aligned to `a`.
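///
/// For example (illustrative): with `T = u16` (stride 2) and `a = 4`, a pointer at
/// address 2 needs an offset of 1 element (2 bytes), while a pointer at address 1
/// can never be aligned by whole-element steps, so the result is `usize::MAX`.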
2208///
2209/// # Safety
2210/// `a` must be a power of two.
2211///
2212/// # Notes
2213/// This implementation has been carefully tailored to not panic. It is UB for this to panic.
2214/// The only real change that can be made here is change of `INV_TABLE_MOD_16` and associated
2215/// constants.
2216///
2217/// If we ever decide to make it possible to call the intrinsic with `a` that is not a
2218/// power-of-two, it will probably be more prudent to just change to a naive implementation rather
2219/// than trying to adapt this to accommodate that change.
2220///
2221/// Any questions go to @nagisa.
2222#[allow(ptr_to_integer_transmute_in_consts)]
2223pub(crate) unsafe fn align_offset<T: Sized>(p: *const T, a: usize) -> usize {
2224    // FIXME(#75598): Direct use of these intrinsics improves codegen significantly at opt-level <=
2225    // 1, where the method versions of these operations are not inlined.
2226    use intrinsics::{
2227        assume, cttz_nonzero, exact_div, mul_with_overflow, unchecked_rem, unchecked_shl,
2228        unchecked_shr, unchecked_sub, wrapping_add, wrapping_mul, wrapping_sub,
2229    };
2230
2231    /// Calculate multiplicative modular inverse of `x` modulo `m`.
2232    ///
2233    /// This implementation is tailored for `align_offset` and has following preconditions:
2234    ///
2235    /// * `m` is a power-of-two;
2236    /// * `x < m`; (if `x ≥ m`, pass in `x % m` instead)
2237    ///
2238    /// Implementation of this function shall not panic. Ever.
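    ///
    /// For example (illustrative): for `x = 3`, `m = 256`, the table seeds
    /// `3⁻¹ ≡ 11 (mod 16)`; one lifting step gives `11·(2 − 3·11) ≡ 171 (mod 256)`,
    /// and indeed `3·171 = 513 ≡ 1 (mod 256)`.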
2239    #[inline]
2240    const unsafe fn mod_inv(x: usize, m: usize) -> usize {
2241        /// Multiplicative modular inverse table modulo 2⁴ = 16.
2242        ///
2243        /// Note that this table does not contain values for which the inverse does not exist (i.e., for
2244        /// `0⁻¹ mod 16`, `2⁻¹ mod 16`, etc.)
2245        const INV_TABLE_MOD_16: [u8; 8] = [1, 11, 13, 7, 9, 3, 5, 15];
2246        /// Modulo for which the `INV_TABLE_MOD_16` is intended.
2247        const INV_TABLE_MOD: usize = 16;
2248
2249        // SAFETY: `m` is required to be a power-of-two, hence non-zero.
2250        let m_minus_one = unsafe { unchecked_sub(m, 1) };
2251        let mut inverse = INV_TABLE_MOD_16[(x & (INV_TABLE_MOD - 1)) >> 1] as usize;
2252        let mut mod_gate = INV_TABLE_MOD;
2253        // We iterate "up" using the following formula:
2254        //
2255        // $$ xy ≡ 1 (mod 2ⁿ) → xy (2 - xy) ≡ 1 (mod 2²ⁿ) $$
2256        //
2257        // This doubling step must be applied at least until `2²ⁿ ≥ m`, at which point we can
2258        // finally reduce the computation to our desired `m` by taking `inverse mod m`.
2259        //
2260        // This computation is `O(log log m)`, which is to say, that on 64-bit machines this loop
2261        // will always finish in at most 4 iterations.
2262        loop {
2263            // y = y * (2 - xy) mod n
2264            //
2265            // Note that we use wrapping operations here intentionally – the original formula
2266            // uses e.g., subtraction `mod n`. It is entirely fine to do them `mod
2267            // usize::MAX` instead, because we take the result `mod n` at the end
2268            // anyway.
2269            if mod_gate >= m {
2270                break;
2271            }
2272            inverse = wrapping_mul(inverse, wrapping_sub(2usize, wrapping_mul(x, inverse)));
2273            let (new_gate, overflow) = mul_with_overflow(mod_gate, mod_gate);
2274            if overflow {
2275                break;
2276            }
2277            mod_gate = new_gate;
2278        }
2279        inverse & m_minus_one
2280    }
2281
2282    let stride = size_of::<T>();
2283
2284    let addr: usize = p.addr();
2285
2286    // SAFETY: `a` is a power-of-two, therefore non-zero.
2287    let a_minus_one = unsafe { unchecked_sub(a, 1) };
2288
2289    if stride == 0 {
2290        // SPECIAL_CASE: handle 0-sized types. No matter how many times we step, the address will
2291        // stay the same, so no offset will be able to align the pointer unless it is already
2292        // aligned. This branch _will_ be optimized out as `stride` is known at compile-time.
2293        let p_mod_a = addr & a_minus_one;
2294        return if p_mod_a == 0 { 0 } else { usize::MAX };
2295    }
2296
2297    // SAFETY: `stride == 0` case has been handled by the special case above.
2298    let a_mod_stride = unsafe { unchecked_rem(a, stride) };
2299    if a_mod_stride == 0 {
2300        // SPECIAL_CASE: In cases where `a` is divisible by `stride`, the byte offset to align a
2301        // pointer can be computed more simply through `-p (mod a)`. In the off-chance the byte
2302        // offset is not a multiple of `stride`, the input pointer was misaligned and no pointer
2303        // offset will be able to produce a `p` aligned to the specified `a`.
2304        //
2305        // The naive `-p (mod a)` equation inhibits LLVM's ability to select instructions
2306        // like `lea`. We compute `(round_up_to_next_alignment(p, a) - p)` instead. This
2307        // redistributes operations around the load-bearing, but pessimizing `and` instruction
2308        // sufficiently for LLVM to be able to utilize the various optimizations it knows about.
2309        //
2310        // LLVM handles the branch here particularly nicely. If this branch needs to be evaluated
2311        // at runtime, it will produce a mask `if addr_mod_stride == 0 { 0 } else { usize::MAX }`
2312        // in a branch-free way and then bitwise-OR it with whatever result the `-p mod a`
2313        // computation produces.
2314
2315        let aligned_address = wrapping_add(addr, a_minus_one) & wrapping_sub(0, a);
2316        let byte_offset = wrapping_sub(aligned_address, addr);
2317        // FIXME: Remove the assume after <https://github.com/llvm/llvm-project/issues/62502>
2318        // SAFETY: Masking by `-a` can only affect the low bits, and thus cannot have reduced
2319        // the value by more than `a-1`, so even though the intermediate values might have
2320        // wrapped, the byte_offset is always in `[0, a)`.
2321        unsafe { assume(byte_offset < a) };
2322
2323        // SAFETY: `stride == 0` case has been handled by the special case above.
2324        let addr_mod_stride = unsafe { unchecked_rem(addr, stride) };
2325
2326        return if addr_mod_stride == 0 {
2327            // SAFETY: `stride` is non-zero. This is guaranteed to divide exactly as well, because
2328            // addr has been verified to be aligned to the original type’s alignment requirements.
2329            unsafe { exact_div(byte_offset, stride) }
2330        } else {
2331            usize::MAX
2332        };
2333    }
2334
2335    // GENERAL_CASE: From here on we’re handling the very general case where `addr` may be
2336    // misaligned, there isn’t an obvious relationship between `stride` and `a` that we can take an
2337    // advantage of, etc. This case produces machine code that isn’t particularly high quality,
2338    // compared to the special cases above. The code produced here is still within the realm of
2339    // miracles, given the situations this case has to deal with.
2340
2341    // SAFETY: a is power-of-two hence non-zero. stride == 0 case is handled above.
2342    // FIXME(const-hack) replace with min
2343    let gcdpow = unsafe {
2344        let x = cttz_nonzero(stride);
2345        let y = cttz_nonzero(a);
2346        if x < y { x } else { y }
2347    };
2348    // SAFETY: gcdpow has an upper-bound that’s at most the number of bits in a `usize`.
2349    let gcd = unsafe { unchecked_shl(1usize, gcdpow) };
2350    // SAFETY: gcd is always greater or equal to 1.
2351    if addr & unsafe { unchecked_sub(gcd, 1) } == 0 {
2352        // This branch solves for the following linear congruence equation:
2353        //
2354        // ` p + so = 0 mod a `
2355        //
2356        // `p` here is the pointer value, `s` - stride of `T`, `o` offset in `T`s, and `a` - the
2357        // requested alignment.
2358        //
2359        // With `g = gcd(a, s)`, and the above condition asserting that `p` is also divisible by
2360        // `g`, we can denote `a' = a/g`, `s' = s/g`, `p' = p/g`, then this becomes equivalent to:
2361        //
2362        // ` p' + s'o = 0 mod a' `
2363        // ` o = (a' - (p' mod a')) * (s'^-1 mod a') `
2364        //
2365        // The first term is "the relative alignment of `p` to `a`" (divided by the `g`), the
2366        // second term is "how does incrementing `p` by `s` bytes change the relative alignment of
2367        // `p`" (again divided by `g`). Division by `g` is necessary to make the inverse well
2368        // formed if `a` and `s` are not co-prime.
2369        //
2370        // Furthermore, the result produced by this solution is not "minimal", so it is necessary
2371        // to take the result `o mod lcm(s, a)`. This `lcm(s, a)` is the same as `a'`.
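        //
        // For example (illustrative): with `s = 6`, `a = 8`, and `p ≡ 2 (mod 8)`, we get
        // `g = 2`, `a' = 4`, `s' = 3`, and `p' mod a' = 1`; then
        // `o = (4 - 1)·(3⁻¹ mod 4) = 3·3 ≡ 1 (mod 4)`, and indeed `p + 1·6` is 8-aligned.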
2372
2373        // SAFETY: `gcdpow` has an upper-bound not greater than the number of trailing 0-bits in
2374        // `a`.
2375        let a2 = unsafe { unchecked_shr(a, gcdpow) };
2376        // SAFETY: `a2` is non-zero. Shifting `a` by `gcdpow` cannot shift out any of the set bits
2377        // in `a` (of which it has exactly one).
2378        let a2minus1 = unsafe { unchecked_sub(a2, 1) };
2379        // SAFETY: `gcdpow` has an upper-bound not greater than the number of trailing 0-bits in
2380        // `a`.
2381        let s2 = unsafe { unchecked_shr(stride & a_minus_one, gcdpow) };
2382        // SAFETY: `gcdpow` has an upper-bound not greater than the number of trailing 0-bits in
2383        // `a`. Furthermore, the subtraction cannot overflow, because `a2 = a >> gcdpow` will
2384        // always be strictly greater than `(p % a) >> gcdpow`.
2385        let minusp2 = unsafe { unchecked_sub(a2, unchecked_shr(addr & a_minus_one, gcdpow)) };
2386        // SAFETY: `a2` is a power-of-two, as proven above. `s2` is strictly less than `a2`
2387        // because `(s % a) >> gcdpow` is strictly less than `a >> gcdpow`.
2388        return wrapping_mul(minusp2, unsafe { mod_inv(s2, a2) }) & a2minus1;
2389    }
2390
2391    // Cannot be aligned at all.
2392    usize::MAX
2393}
2394
/// Compares raw pointers for equality.
///
/// This is the same as using the `==` operator, but less generic:
/// the arguments have to be `*const T` raw pointers,
/// not anything that implements `PartialEq`.
///
/// This can be used to compare `&T` references (which coerce to `*const T` implicitly)
/// by their address rather than comparing the values they point to
/// (which is what the `PartialEq for &T` implementation does).
///
/// When comparing wide pointers, both the address and the metadata are tested for equality.
/// However, note that comparing trait object pointers (`*const dyn Trait`) is unreliable: pointers
/// to values of the same underlying type can compare unequal (because vtables are duplicated in
/// multiple codegen units), and pointers to values of *different* underlying type can compare equal
/// (since identical vtables can be deduplicated within a codegen unit).
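///
/// As an illustrative sketch (the outcome is not guaranteed and may vary between compilations),
/// two wide pointers to the same value usually, but not necessarily, compare equal:
///
/// ```
/// use std::fmt::Debug;
/// use std::ptr;
///
/// let x = 5i32;
/// let a: *const dyn Debug = &x;
/// let b: *const dyn Debug = &x;
/// // Same address; in practice this usually prints `true`, but vtable equality
/// // is not guaranteed either way.
/// println!("{}", ptr::eq(a, b));
/// ```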
///
/// # Examples
///
/// ```
/// use std::ptr;
///
/// let five = 5;
/// let other_five = 5;
/// let five_ref = &five;
/// let same_five_ref = &five;
/// let other_five_ref = &other_five;
///
/// assert!(five_ref == same_five_ref);
/// assert!(ptr::eq(five_ref, same_five_ref));
///
/// assert!(five_ref == other_five_ref);
/// assert!(!ptr::eq(five_ref, other_five_ref));
/// ```
///
/// Slices are also compared by their length (fat pointers):
///
/// ```
/// let a = [1, 2, 3];
/// assert!(std::ptr::eq(&a[..3], &a[..3]));
/// assert!(!std::ptr::eq(&a[..2], &a[..3]));
/// assert!(!std::ptr::eq(&a[0..2], &a[1..3]));
/// ```
#[stable(feature = "ptr_eq", since = "1.17.0")]
#[inline(always)]
#[must_use = "pointer comparison produces a value"]
#[rustc_diagnostic_item = "ptr_eq"]
#[allow(ambiguous_wide_pointer_comparisons)] // it's actually clear here
pub fn eq<T: PointeeSized>(a: *const T, b: *const T) -> bool {
    a == b
}

/// Compares the *addresses* of the two pointers for equality,
/// ignoring any metadata in fat pointers.
///
/// If the arguments are thin pointers of the same type,
/// then this is the same as [`eq`].
///
/// # Examples
///
/// ```
/// use std::ptr;
///
/// let whole: &[i32; 3] = &[1, 2, 3];
/// let first: &i32 = &whole[0];
///
/// assert!(ptr::addr_eq(whole, first));
/// assert!(!ptr::eq::<dyn std::fmt::Debug>(whole, first));
/// ```
#[stable(feature = "ptr_addr_eq", since = "1.76.0")]
#[inline(always)]
#[must_use = "pointer comparison produces a value"]
pub fn addr_eq<T: PointeeSized, U: PointeeSized>(p: *const T, q: *const U) -> bool {
    (p as *const ()) == (q as *const ())
}

/// Compares the *addresses* of the two function pointers for equality.
///
/// This is the same as `f == g`, but using this function makes clear that the potentially
/// surprising semantics of function pointer comparison are involved.
///
/// There are **very few guarantees** about how functions are compiled and they have no intrinsic
/// “identity”; in particular, this comparison:
///
/// * May return `true` unexpectedly, in cases where functions are equivalent.
///
///   For example, the following program is likely (but not guaranteed) to print `(true, true)`
///   when compiled with optimization:
///
///   ```
///   let f: fn(i32) -> i32 = |x| x;
///   let g: fn(i32) -> i32 = |x| x + 0;  // different closure, different body
///   let h: fn(u32) -> u32 = |x| x + 0;  // different signature too
///   dbg!(std::ptr::fn_addr_eq(f, g), std::ptr::fn_addr_eq(f, h)); // not guaranteed to be equal
///   ```
///
/// * May return `false` in any case.
///
///   This is particularly likely with generic functions but may happen with any function.
///   (From an implementation perspective, this is possible because functions may sometimes be
///   processed more than once by the compiler, resulting in duplicate machine code.)
///
/// Despite these false positives and false negatives, this comparison can still be useful.
/// Specifically, if
///
/// * `T` is the same type as `U`, `T` is a [subtype] of `U`, or `U` is a [subtype] of `T`, and
/// * `ptr::fn_addr_eq(f, g)` returns true,
///
/// then calling `f` and calling `g` will be equivalent.
///
/// # Examples
///
/// ```
/// use std::ptr;
///
/// fn a() { println!("a"); }
/// fn b() { println!("b"); }
/// assert!(!ptr::fn_addr_eq(a as fn(), b as fn()));
/// ```
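///
/// One hedged use (a sketch; the `Logger` type here is hypothetical, not part of the standard
/// library) is checking whether a stored callback is a particular function, treating `true` as
/// meaningful and `false` as inconclusive:
///
/// ```
/// use std::ptr;
///
/// fn default_handler() { println!("default"); }
///
/// struct Logger {
///     handler: fn(),
/// }
///
/// let logger = Logger { handler: default_handler };
/// // `true` means calling `logger.handler` is equivalent to calling `default_handler`;
/// // `false` would prove nothing, since the comparison may spuriously fail.
/// if ptr::fn_addr_eq(logger.handler, default_handler as fn()) {
///     println!("using the default handler");
/// }
/// ```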
///
/// [subtype]: https://doc.rust-lang.org/reference/subtyping.html
#[stable(feature = "ptr_fn_addr_eq", since = "1.85.0")]
#[inline(always)]
#[must_use = "function pointer comparison produces a value"]
pub fn fn_addr_eq<T: FnPtr, U: FnPtr>(f: T, g: U) -> bool {
    f.addr() == g.addr()
}

/// Hash a raw pointer.
///
/// This can be used to hash a `&T` reference (which coerces to `*const T` implicitly)
/// by its address rather than the value it points to
/// (which is what the `Hash for &T` implementation does).
///
/// # Examples
///
/// ```
/// use std::hash::{DefaultHasher, Hash, Hasher};
/// use std::ptr;
///
/// let five = 5;
/// let five_ref = &five;
///
/// let mut hasher = DefaultHasher::new();
/// ptr::hash(five_ref, &mut hasher);
/// let actual = hasher.finish();
///
/// let mut hasher = DefaultHasher::new();
/// (five_ref as *const i32).hash(&mut hasher);
/// let expected = hasher.finish();
///
/// assert_eq!(actual, expected);
/// ```
#[stable(feature = "ptr_hash", since = "1.35.0")]
pub fn hash<T: PointeeSized, S: hash::Hasher>(hashee: *const T, into: &mut S) {
    use crate::hash::Hash;
    hashee.hash(into);
}

#[stable(feature = "fnptr_impls", since = "1.4.0")]
impl<F: FnPtr> PartialEq for F {
    #[inline]
    fn eq(&self, other: &Self) -> bool {
        self.addr() == other.addr()
    }
}
#[stable(feature = "fnptr_impls", since = "1.4.0")]
impl<F: FnPtr> Eq for F {}

#[stable(feature = "fnptr_impls", since = "1.4.0")]
impl<F: FnPtr> PartialOrd for F {
    #[inline]
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        self.addr().partial_cmp(&other.addr())
    }
}
#[stable(feature = "fnptr_impls", since = "1.4.0")]
impl<F: FnPtr> Ord for F {
    #[inline]
    fn cmp(&self, other: &Self) -> Ordering {
        self.addr().cmp(&other.addr())
    }
}

#[stable(feature = "fnptr_impls", since = "1.4.0")]
impl<F: FnPtr> hash::Hash for F {
    fn hash<HH: hash::Hasher>(&self, state: &mut HH) {
        state.write_usize(self.addr() as _)
    }
}

#[stable(feature = "fnptr_impls", since = "1.4.0")]
impl<F: FnPtr> fmt::Pointer for F {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::pointer_fmt_inner(self.addr() as _, f)
    }
}

#[stable(feature = "fnptr_impls", since = "1.4.0")]
impl<F: FnPtr> fmt::Debug for F {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::pointer_fmt_inner(self.addr() as _, f)
    }
}

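// Taken together, the impls above let function pointers be compared, ordered, hashed, and
// formatted. A hedged usage sketch (illustrative only, not part of this module):
//
//     use std::collections::HashMap;
//
//     fn inc(x: i32) -> i32 { x + 1 }
//
//     let mut table: HashMap<fn(i32) -> i32, &str> = HashMap::new();
//     table.insert(inc, "increment");
//     assert_eq!(table.get(&(inc as fn(i32) -> i32)), Some(&"increment"));
//
// The usual caveat applies: function pointer identity is not guaranteed, so such lookups can
// spuriously miss across codegen units.
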
/// Creates a `const` raw pointer to a place, without creating an intermediate reference.
///
/// `addr_of!(expr)` is equivalent to `&raw const expr`. The macro is *soft-deprecated*;
/// use `&raw const` instead.
///
/// It is still an open question under which conditions writing through an `addr_of!`-created
/// pointer is permitted. If the place `expr` evaluates to is based on a raw pointer, then the
/// result of `addr_of!` inherits all permissions from that raw pointer. However, if the place is
/// based on a reference, local variable, or `static`, then until all details are decided, the same
/// rules as for shared references apply: it is UB to write through a pointer created with this
/// operation, except for bytes located inside an `UnsafeCell`. Use `&raw mut` (or [`addr_of_mut`])
/// to create a raw pointer that definitely permits mutation.
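///
/// A minimal sketch of that advice, using `&raw mut` when the pointer will be written through:
///
/// ```
/// let mut x = 0i32;
/// let p = &raw mut x; // a `*mut i32` that is definitely allowed to write
/// unsafe { *p = 1; }
/// assert_eq!(x, 1);
/// ```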
///
/// Creating a reference with `&`/`&mut` is only allowed if the pointer is properly aligned
/// and points to initialized data. For cases where those requirements do not hold,
/// raw pointers should be used instead. However, `&expr as *const _` creates a reference
/// before casting it to a raw pointer, and that reference is subject to the same rules
/// as all other references. This macro can create a raw pointer *without* creating
/// a reference first.
///
/// See [`addr_of_mut`] for how to create a pointer to uninitialized data.
/// Doing that with `addr_of` would not make much sense since one could only
/// read the data, and that would be Undefined Behavior.
///
/// # Safety
///
/// The `expr` in `addr_of!(expr)` is evaluated as a place expression, but never loads from the
/// place or requires the place to be dereferenceable. This means that `addr_of!((*ptr).field)`
/// still requires the projection to `field` to be in-bounds, using the same rules as [`offset`].
/// However, `addr_of!(*ptr)` is defined behavior even if `ptr` is null, dangling, or misaligned.
///
/// Note that `Deref`/`Index` coercions (and their mutable counterparts) are applied inside
/// `addr_of!` like everywhere else, in which case a reference is created to call `Deref::deref` or
/// `Index::index`, respectively. The statements above only apply when no such coercions are
/// applied.
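///
/// For instance, in the following sketch the indexing goes through `Index::index`, which takes
/// `&self`, so a reference to the vector is created even though the outer operation is
/// `addr_of!`:
///
/// ```
/// use std::ptr;
///
/// let v = vec![1, 2, 3];
/// // `v[0]` desugars to `*Index::index(&v, 0)`, so `&v` is created here.
/// let p = ptr::addr_of!(v[0]);
/// assert_eq!(unsafe { *p }, 1);
/// ```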
///
/// [`offset`]: pointer::offset
///
/// # Example
///
/// **Correct usage: Creating a pointer to unaligned data**
///
/// ```
/// use std::ptr;
///
/// #[repr(packed)]
/// struct Packed {
///     f1: u8,
///     f2: u16,
/// }
///
/// let packed = Packed { f1: 1, f2: 2 };
/// // `&packed.f2` would create an unaligned reference, and thus be Undefined Behavior!
/// let raw_f2 = ptr::addr_of!(packed.f2);
/// assert_eq!(unsafe { raw_f2.read_unaligned() }, 2);
/// ```
///
/// **Incorrect usage: Out-of-bounds field projection**
///
/// ```rust,no_run
/// use std::ptr;
///
/// #[repr(C)]
/// struct MyStruct {
///     field1: i32,
///     field2: i32,
/// }
///
/// let ptr: *const MyStruct = ptr::null();
/// let fieldptr = unsafe { ptr::addr_of!((*ptr).field2) }; // Undefined Behavior ⚠️
/// ```
///
/// The field projection `.field2` would offset the pointer by 4 bytes,
/// but the pointer is not in-bounds of an allocation for 4 bytes,
/// so this offset is Undefined Behavior.
/// See the [`offset`] docs for a full list of requirements for inbounds pointer arithmetic; the
/// same requirements apply to field projections, even inside `addr_of!`. (In particular, it makes
/// no difference whether the pointer is null or dangling.)
#[stable(feature = "raw_ref_macros", since = "1.51.0")]
#[rustc_macro_transparency = "semitransparent"]
pub macro addr_of($place:expr) {
    &raw const $place
}

/// Creates a `mut` raw pointer to a place, without creating an intermediate reference.
///
/// `addr_of_mut!(expr)` is equivalent to `&raw mut expr`. The macro is *soft-deprecated*;
/// use `&raw mut` instead.
///
/// Creating a reference with `&`/`&mut` is only allowed if the pointer is properly aligned
/// and points to initialized data. For cases where those requirements do not hold,
/// raw pointers should be used instead. However, `&mut expr as *mut _` creates a reference
/// before casting it to a raw pointer, and that reference is subject to the same rules
/// as all other references. This macro can create a raw pointer *without* creating
/// a reference first.
///
/// # Safety
///
/// The `expr` in `addr_of_mut!(expr)` is evaluated as a place expression, but never loads from the
/// place or requires the place to be dereferenceable. This means that `addr_of_mut!((*ptr).field)`
/// still requires the projection to `field` to be in-bounds, using the same rules as [`offset`].
/// However, `addr_of_mut!(*ptr)` is defined behavior even if `ptr` is null, dangling, or misaligned.
///
/// Note that `Deref`/`Index` coercions (and their mutable counterparts) are applied inside
/// `addr_of_mut!` like everywhere else, in which case a reference is created to call `Deref::deref`
/// or `Index::index`, respectively. The statements above only apply when no such coercions are
/// applied.
///
/// [`offset`]: pointer::offset
///
/// # Examples
///
/// **Correct usage: Creating a pointer to unaligned data**
///
/// ```
/// use std::ptr;
///
/// #[repr(packed)]
/// struct Packed {
///     f1: u8,
///     f2: u16,
/// }
///
/// let mut packed = Packed { f1: 1, f2: 2 };
/// // `&mut packed.f2` would create an unaligned reference, and thus be Undefined Behavior!
/// let raw_f2 = ptr::addr_of_mut!(packed.f2);
/// unsafe { raw_f2.write_unaligned(42); }
/// assert_eq!({packed.f2}, 42); // `{...}` forces copying the field instead of creating a reference.
/// ```
///
/// **Correct usage: Creating a pointer to uninitialized data**
///
/// ```rust
/// use std::{ptr, mem::MaybeUninit};
///
/// struct Demo {
///     field: bool,
/// }
///
/// let mut uninit = MaybeUninit::<Demo>::uninit();
/// // `&uninit.as_mut().field` would create a reference to an uninitialized `bool`,
/// // and thus be Undefined Behavior!
/// let f1_ptr = unsafe { ptr::addr_of_mut!((*uninit.as_mut_ptr()).field) };
/// unsafe { f1_ptr.write(true); }
/// let init = unsafe { uninit.assume_init() };
/// ```
///
/// **Incorrect usage: Out-of-bounds field projection**
///
/// ```rust,no_run
/// use std::ptr;
///
/// #[repr(C)]
/// struct MyStruct {
///     field1: i32,
///     field2: i32,
/// }
///
/// let ptr: *mut MyStruct = ptr::null_mut();
/// let fieldptr = unsafe { ptr::addr_of_mut!((*ptr).field2) }; // Undefined Behavior ⚠️
/// ```
///
/// The field projection `.field2` would offset the pointer by 4 bytes,
/// but the pointer is not in-bounds of an allocation for 4 bytes,
/// so this offset is Undefined Behavior.
/// See the [`offset`] docs for a full list of requirements for inbounds pointer arithmetic; the
/// same requirements apply to field projections, even inside `addr_of_mut!`. (In particular, it
/// makes no difference whether the pointer is null or dangling.)
#[stable(feature = "raw_ref_macros", since = "1.51.0")]
#[rustc_macro_transparency = "semitransparent"]
pub macro addr_of_mut($place:expr) {
    &raw mut $place
}