// core/ptr/mod.rs

//! Manually manage memory through raw pointers.
//!
//! *[See also the pointer primitive types](pointer).*
//!
//! # Safety
//!
//! Many functions in this module take raw pointers as arguments and read from or write to them. For
//! this to be safe, these pointers must be *valid* for the given access. Whether a pointer is valid
//! depends on the operation it is used for (read or write), and the extent of the memory that is
//! accessed (i.e., how many bytes are read/written) -- it makes no sense to ask "is this pointer
//! valid"; one has to ask "is this pointer valid for a given access". Most functions use `*mut T`
//! and `*const T` to access only a single value, in which case the documentation omits the size and
//! implicitly assumes it to be `size_of::<T>()` bytes.
//!
//! The precise rules for validity are not determined yet. The guarantees that are
//! provided at this point are very minimal:
//!
//! * For memory accesses of [size zero][zst], *every* pointer is valid, including the [null]
//!   pointer. The following points are only concerned with non-zero-sized accesses.
//! * A [null] pointer is *never* valid.
//! * For a pointer to be valid, it is necessary, but not always sufficient, that the pointer be
//!   *dereferenceable*. The [provenance] of the pointer is used to determine which [allocated
//!   object] it is derived from; a pointer is dereferenceable if the memory range of the given size
//!   starting at the pointer is entirely contained within the bounds of that allocated object. Note
//!   that in Rust, every (stack-allocated) variable is considered a separate allocated object.
//! * All accesses performed by functions in this module are *non-atomic* in the sense
//!   of [atomic operations] used to synchronize between threads. This means it is
//!   undefined behavior to perform two concurrent accesses to the same location from different
//!   threads unless both accesses only read from memory. Notice that this explicitly
//!   includes [`read_volatile`] and [`write_volatile`]: Volatile accesses cannot
//!   be used for inter-thread synchronization.
//! * The result of casting a reference to a pointer is valid for as long as the
//!   underlying object is live and no reference (just raw pointers) is used to
//!   access the same memory. That is, reference and pointer accesses cannot be
//!   interleaved.
//!
//! These axioms, along with careful use of [`offset`] for pointer arithmetic,
//! are enough to correctly implement many useful things in unsafe code. Stronger guarantees
//! will be provided eventually, as the [aliasing] rules are being determined. For more
//! information, see the [book] as well as the section in the reference devoted
//! to [undefined behavior][ub].
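//!
//! For illustration, a minimal sketch of an access that satisfies these rules: the pointer is
//! derived from a live local, and no reference is used to access `x` while the pointer is in use.
//!
//! ```
//! use std::ptr;
//!
//! let x = 42u32;
//! let p: *const u32 = &x;
//! // `p` is valid for reads of `size_of::<u32>()` bytes for as long as `x` is live.
//! let v = unsafe { ptr::read(p) };
//! assert_eq!(v, 42);
//! ```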
//!
//! We say that a pointer is "dangling" if it is not valid for any non-zero-sized accesses. This
//! means out-of-bounds pointers, pointers to freed memory, null pointers, and pointers created with
//! [`NonNull::dangling`] are all dangling.
//!
//! ## Alignment
//!
//! Valid raw pointers as defined above are not necessarily properly aligned (where
//! "proper" alignment is defined by the pointee type, i.e., `*const T` must be
//! aligned to `align_of::<T>()`). However, most functions require their
//! arguments to be properly aligned, and will explicitly state
//! this requirement in their documentation. Notable exceptions to this are
//! [`read_unaligned`] and [`write_unaligned`].
//!
//! When a function requires proper alignment, it does so even if the access
//! has size 0, i.e., even if memory is not actually touched. Consider using
//! [`NonNull::dangling`] in such cases.
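//!
//! For example, reading a potentially misaligned field of a `#[repr(packed)]` struct requires
//! [`read_unaligned`]; a minimal sketch:
//!
//! ```
//! use std::ptr;
//!
//! #[repr(packed)]
//! struct Packed {
//!     a: u8,
//!     b: u32, // may be stored at an odd address
//! }
//!
//! let p = Packed { a: 1, b: 2 };
//! // `addr_of!` creates the raw pointer without going through a (possibly misaligned) reference.
//! let b = unsafe { ptr::read_unaligned(ptr::addr_of!(p.b)) };
//! assert_eq!(b, 2);
//! ```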
//!
//! ## Pointer to reference conversion
//!
//! When converting a pointer to a reference (e.g. via `&*ptr` or `&mut *ptr`),
//! there are several rules that must be followed:
//!
//! * The pointer must be properly aligned.
//!
//! * It must be non-null.
//!
//! * It must be "dereferenceable" in the sense defined above.
//!
//! * The pointer must point to a [valid value] of type `T`.
//!
//! * You must enforce Rust's aliasing rules. The exact aliasing rules are not decided yet, so we
//!   only give a rough overview here. The rules also depend on whether a mutable or a shared
//!   reference is being created.
//!   * When creating a mutable reference, then while this reference exists, the memory it points to
//!     must not get accessed (read or written) through any other pointer or reference not derived
//!     from this reference.
//!   * When creating a shared reference, then while this reference exists, the memory it points to
//!     must not get mutated (except inside `UnsafeCell`).
//!
//! If a pointer follows all of these rules, it is said to be
//! *convertible to a (mutable or shared) reference*.
// ^ we use this term instead of saying that the produced reference must
// be valid, as the validity of a reference is easily confused for the
// validity of the thing it refers to, and while the two concepts are
// closely related, they are not identical.
//!
//! These rules apply even if the result is unused!
//! (The part about being initialized is not yet fully decided, but until
//! it is, the only safe approach is to ensure that they are indeed initialized.)
//!
//! An example of the implications of the above rules is that an expression such
//! as `unsafe { &*(0 as *const u8) }` is Immediate Undefined Behavior.
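//!
//! Conversely, a minimal sketch of a conversion that satisfies all of the rules:
//!
//! ```
//! let mut x = 5i32;
//! let p: *mut i32 = &mut x;
//! // `p` is aligned, non-null, dereferenceable, and points to a valid `i32`;
//! // nothing else accesses `x` while `r` exists.
//! let r: &mut i32 = unsafe { &mut *p };
//! *r += 1;
//! assert_eq!(x, 6);
//! ```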
//!
//! [valid value]: ../../reference/behavior-considered-undefined.html#invalid-values
//!
//! ## Allocated object
//!
//! An *allocated object* is a subset of program memory which is addressable
//! from Rust, and within which pointer arithmetic is possible. Examples of
//! allocated objects include heap allocations, stack-allocated variables,
//! statics, and consts. The safety preconditions of some Rust operations -
//! such as `offset` and field projections (`expr.field`) - are defined in
//! terms of the allocated objects on which they operate.
//!
//! An allocated object has a base address, a size, and a set of memory
//! addresses. It is possible for an allocated object to have zero size, but
//! such an allocated object will still have a base address. The base address
//! of an allocated object is not necessarily unique. While it is currently the
//! case that an allocated object always has a set of memory addresses which is
//! fully contiguous (i.e., has no "holes"), there is no guarantee that this
//! will not change in the future.
//!
//! For any allocated object with `base` address, `size`, and a set of
//! `addresses`, the following are guaranteed:
//! - For all addresses `a` in `addresses`, `a` is in the range `base .. (base +
//!   size)` (note that this requires `a < base + size`, not `a <= base + size`)
//! - `base` is not equal to [`null()`] (i.e., the address with the numerical
//!   value 0)
//! - `base + size <= usize::MAX`
//! - `size <= isize::MAX`
//!
//! As a consequence of these guarantees, given any address `a` within the set
//! of addresses of an allocated object:
//! - It is guaranteed that `a - base` does not overflow `isize`
//! - It is guaranteed that `a - base` is non-negative
//! - It is guaranteed that, given `o = a - base` (i.e., the offset of `a` within
//!   the allocated object), `base + o` will not wrap around the address space (in
//!   other words, will not overflow `usize`)
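//!
//! A minimal sketch of these consequences in action, using an array as the allocated object:
//!
//! ```
//! let arr = [0u8; 8];
//! let base = arr.as_ptr();
//! let a = unsafe { base.add(5) }; // stays within the same allocated object
//! // `a - base` is non-negative and fits in `isize`.
//! assert_eq!(unsafe { a.offset_from(base) }, 5);
//! ```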
//!
//! [`null()`]: null
//!
//! # Provenance
//!
//! Pointers are not *simply* an "integer" or "address". For instance, it's uncontroversial
//! to say that a Use After Free is clearly Undefined Behavior, even if you "get lucky"
//! and the freed memory gets reallocated before your read/write (in fact this is the
//! worst-case scenario, UAFs would be much less concerning if this didn't happen!).
//! As another example, consider that [`wrapping_offset`] is documented to "remember"
//! the allocated object that the original pointer points to, even if it is offset far
//! outside the memory range occupied by that allocated object.
//! To rationalize claims like this, pointers need to somehow be *more* than just their addresses:
//! they must have **provenance**.
//!
//! A pointer value in Rust semantically contains the following information:
//!
//! * The **address** it points to, which can be represented by a `usize`.
//! * The **provenance** it has, defining the memory it has permission to access. Provenance can be
//!   absent, in which case the pointer does not have permission to access any memory.
//! The exact structure of provenance is not yet specified, but the permissions defined by a
//! pointer's provenance have a *spatial* component, a *temporal* component, and a *mutability*
//! component:
//!
//! * Spatial: The set of memory addresses that the pointer is allowed to access.
//! * Temporal: The timespan during which the pointer is allowed to access those memory addresses.
//! * Mutability: Whether the pointer may only access the memory for reads, or also access it for
//!   writes. Note that this can interact with the other components, e.g. a pointer might permit
//!   mutation only for a subset of addresses, or only for a subset of its maximal timespan.
//!
//! When an [allocated object] is created, it has a unique Original Pointer. For alloc
//! APIs this is literally the pointer the call returns, and for local variables and statics,
//! this is the name of the variable/static. (This is mildly overloading the term "pointer"
//! for the sake of brevity/exposition.)
//!
//! The Original Pointer for an allocated object has provenance that constrains the *spatial*
//! permissions of this pointer to the memory range of the allocation, and the *temporal*
//! permissions to the lifetime of the allocation. Provenance is implicitly inherited by all
//! pointers transitively derived from the Original Pointer through operations like [`offset`],
//! borrowing, and pointer casts. Some operations may *shrink* the permissions of the derived
//! provenance, limiting how much memory it can access or how long it's valid for (i.e. borrowing a
//! subfield and subslicing can shrink the spatial component of provenance, and all borrowing can
//! shrink the temporal component of provenance). However, no operation can ever *grow* the
//! permissions of the derived provenance: even if you "know" there is a larger allocation, you
//! can't derive a pointer with a larger provenance. Similarly, you cannot "recombine" two
//! contiguous provenances back into one (i.e. with a `fn merge(&[T], &[T]) -> &[T]`).
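//!
//! For illustration, a minimal sketch of provenance shrinking via a subslice borrow. The exact
//! rules are not yet fixed; this is what current experimental aliasing models, such as those
//! checked by [Miri], enforce:
//!
//! ```
//! let mut arr = [1u8, 2, 3, 4];
//! let first_two: &mut [u8] = &mut arr[..2];
//! let p = first_two.as_mut_ptr();
//! // `p` inherits provenance from `first_two`, covering only two bytes. Reading
//! // `*p.add(2)` would be UB under those models, even though `arr` is four bytes long.
//! unsafe { *p.add(1) = 9 };
//! assert_eq!(arr, [1, 9, 3, 4]);
//! ```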
//!
//! A reference to a place always has provenance over at least the memory that place occupies.
//! A reference to a slice always has provenance over at least the range that slice describes.
//! Whether and when exactly the provenance of a reference gets "shrunk" to *exactly* fit
//! the memory it points to is not yet determined.
//!
//! A *shared* reference only ever has provenance that permits reading from memory,
//! and never permits writes, except inside [`UnsafeCell`].
//!
//! Provenance can affect whether a program has undefined behavior:
//!
//! * It is undefined behavior to access memory through a pointer that does not have provenance over
//!   that memory. Note that a pointer "at the end" of its provenance is not actually outside its
//!   provenance, it just has 0 bytes it can load/store. Zero-sized accesses do not require any
//!   provenance since they access an empty range of memory.
//!
//! * It is undefined behavior to [`offset`] a pointer across a memory range that is not contained
//!   in the allocated object it is derived from, or to [`offset_from`] two pointers not derived
//!   from the same allocated object. Provenance is used to say what exactly "derived from" even
//!   means: the lineage of a pointer is traced back to the Original Pointer it descends from, and
//!   that identifies the relevant allocated object. In particular, it's always UB to offset a
//!   pointer derived from something that is now deallocated, except if the offset is 0.
//!
//! But it *is* still sound to:
//!
//! * Create a pointer without provenance from just an address (see [`without_provenance`]). Such a
//!   pointer cannot be used for memory accesses (except for zero-sized accesses). This can still be
//!   useful for sentinel values like `null` *or* to represent a tagged pointer that will never be
//!   dereferenceable. In general, it is always sound for an integer to pretend to be a pointer "for
//!   fun" as long as you don't use operations on it which require it to be valid (non-zero-sized
//!   offset, read, write, etc).
//!
//! * Forge an allocation of size zero at any sufficiently aligned non-null address.
//!   i.e. the usual "ZSTs are fake, do what you want" rules apply.
//!
//! * [`wrapping_offset`] a pointer outside its provenance. This includes pointers
//!   which have "no" provenance. In particular, this makes it sound to do pointer tagging tricks.
//!
//! * Compare arbitrary pointers by address. Pointer comparison ignores provenance and addresses
//!   *are* just integers, so there is always a coherent answer, even if the pointers are dangling
//!   or from different provenances. Note that if you get "lucky" and notice that a pointer at the
//!   end of one allocated object is the "same" address as the start of another allocated object,
//!   anything you do with that fact is *probably* going to be gibberish. The scope of that
//!   gibberish is kept under control by the fact that the two pointers *still* aren't allowed to
//!   access the other's allocation (bytes), because they still have different provenance.
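//!
//! As a minimal sketch of the third point: a pointer can wander out of bounds and come back,
//! because [`wrapping_offset`] preserves its provenance:
//!
//! ```
//! let x = 7u32;
//! let p = &x as *const u32;
//! let out = p.wrapping_add(10); // far out of bounds: sound, but must not be read
//! let back = out.wrapping_sub(10); // back in bounds, provenance intact
//! assert_eq!(unsafe { *back }, 7);
//! ```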
//!
//! Note that the full definition of provenance in Rust is not decided yet, as this interacts
//! with the as-yet undecided [aliasing] rules.
//!
//! ## Pointers Vs Integers
//!
//! From this discussion, it becomes very clear that a `usize` *cannot* accurately represent a pointer,
//! and converting from a pointer to a `usize` is generally an operation which *only* extracts the
//! address. Converting this address back into a pointer requires somehow answering the question:
//! which provenance should the resulting pointer have?
//!
//! Rust provides two ways of dealing with this situation: *Strict Provenance* and *Exposed Provenance*.
//!
//! Note that a pointer *can* represent a `usize` (via [`without_provenance`]), so the right type to
//! use in situations where a value is "sometimes a pointer and sometimes a bare `usize`" is a
//! pointer type.
//!
//! ## Strict Provenance
//!
//! "Strict Provenance" refers to a set of APIs designed to make working with provenance more
//! explicit. They are intended as substitutes for casting a pointer to an integer and back.
//!
//! Entirely avoiding integer-to-pointer casts successfully side-steps the inherent ambiguity of
//! that operation. This benefits compiler optimizations, and it is pretty much a requirement for
//! using tools like [Miri] and architectures like [CHERI] that aim to detect and diagnose pointer
//! misuse.
//!
//! The key insight to making programming without integer-to-pointer casts *at all* viable is the
//! [`with_addr`] method:
//!
//! ```text
//!     /// Creates a new pointer with the given address.
//!     ///
//!     /// This performs the same operation as an `addr as ptr` cast, but copies
//!     /// the *provenance* of `self` to the new pointer.
//!     /// This allows us to dynamically preserve and propagate this important
//!     /// information in a way that is otherwise impossible with a unary cast.
//!     ///
//!     /// This is equivalent to using `wrapping_offset` to offset `self` to the
//!     /// given address, and therefore has all the same capabilities and restrictions.
//!     pub fn with_addr(self, addr: usize) -> Self;
//! ```
//!
//! So you're still able to drop down to the address representation and do whatever
//! clever bit tricks you want *as long as* you're able to keep around a pointer
//! into the allocation you care about that can "reconstitute" the provenance.
//! Usually this is very easy, because you are only taking a pointer, messing with the address,
//! and then immediately converting back to a pointer. To make this use case more ergonomic,
//! we provide the [`map_addr`] method.
//!
//! To help make it clear that code is "following" Strict Provenance semantics, we also provide an
//! [`addr`] method which promises that the returned address is not part of a
//! pointer-integer-pointer roundtrip. In the future we may provide a lint for pointer<->integer
//! casts to help you audit if your code conforms to strict provenance.
//!
//! ### Using Strict Provenance
//!
//! Most code needs no changes to conform to strict provenance, as the only really concerning
//! operation is casts from `usize` to a pointer. For code which *does* cast a `usize` to a pointer,
//! the scope of the change depends on exactly what you're doing.
//!
//! In general, you just need to make sure that if you want to convert a `usize` address to a
//! pointer and then use that pointer to read/write memory, you need to keep around a pointer
//! that has sufficient provenance to perform that read/write itself. In this way all of your
//! casts from an address to a pointer are essentially just applying offsets/indexing.
//!
//! This is generally trivial to do for simple cases like tagged pointers *as long as you
//! represent the tagged pointer as an actual pointer and not a `usize`*. For instance:
//!
//! ```
//! unsafe {
//!     // A flag we want to pack into our pointer
//!     static HAS_DATA: usize = 0x1;
//!     static FLAG_MASK: usize = !HAS_DATA;
//!
//!     // Our value, which must have enough alignment to have spare least-significant-bits.
//!     let my_precious_data: u32 = 17;
//!     assert!(align_of::<u32>() > 1);
//!
//!     // Create a tagged pointer
//!     let ptr = &my_precious_data as *const u32;
//!     let tagged = ptr.map_addr(|addr| addr | HAS_DATA);
//!
//!     // Check the flag:
//!     if tagged.addr() & HAS_DATA != 0 {
//!         // Untag and read the pointer
//!         let data = *tagged.map_addr(|addr| addr & FLAG_MASK);
//!         assert_eq!(data, 17);
//!     } else {
//!         unreachable!()
//!     }
//! }
//! ```
//!
//! (Yes, if you've been using [`AtomicUsize`] for pointers in concurrent datastructures, you should
//! be using [`AtomicPtr`] instead. If that messes up the way you atomically manipulate pointers,
//! we would like to know why, and what needs to be done to fix it.)
//!
//! Situations where a valid pointer *must* be created from just an address, such as baremetal code
//! accessing a memory-mapped interface at a fixed address, cannot currently be handled with strict
//! provenance APIs and should use [exposed provenance](#exposed-provenance).
//!
//! ## Exposed Provenance
//!
//! As discussed above, integer-to-pointer casts are not possible with Strict Provenance APIs.
//! This is by design: the goal of Strict Provenance is to provide a clear specification that we are
//! confident can be formalized unambiguously and can be subject to precise formal reasoning.
//! Integer-to-pointer casts do not (currently) have such a clear specification.
//!
//! However, there exist situations where integer-to-pointer casts cannot be avoided, or
//! where avoiding them would require major refactoring. Legacy platform APIs also regularly assume
//! that `usize` can capture all the information that makes up a pointer.
//! Bare-metal platforms can also require the synthesis of a pointer "out of thin air" without
//! anywhere to obtain proper provenance from.
//!
//! Rust's model for dealing with integer-to-pointer casts is called *Exposed Provenance*. However,
//! the semantics of Exposed Provenance are on much less solid footing than Strict Provenance, and
//! at this point it is not yet clear whether a satisfying unambiguous semantics can be defined for
//! Exposed Provenance. (If that sounds bad, be reassured that other popular languages that provide
//! integer-to-pointer casts are not faring any better.) Furthermore, Exposed Provenance will not
//! work (well) with tools like [Miri] and [CHERI].
//!
//! Exposed Provenance is provided by the [`expose_provenance`] and [`with_exposed_provenance`] methods,
//! which are equivalent to `as` casts between pointers and integers.
//! - [`expose_provenance`] is a lot like [`addr`], but additionally adds the provenance of the
//!   pointer to a global list of 'exposed' provenances. (This list is purely conceptual, it exists
//!   for the purpose of specifying Rust but is not materialized in actual executions, except in
//!   tools like [Miri].)
//!   Memory which is outside the control of the Rust abstract machine (MMIO registers, for example)
//!   is always considered to be exposed, so long as this memory is disjoint from memory that will
//!   be used by the abstract machine such as the stack, heap, and statics.
//! - [`with_exposed_provenance`] can be used to construct a pointer with one of these previously
//!   'exposed' provenances. [`with_exposed_provenance`] takes only `addr: usize` as its argument, so
//!   unlike in [`with_addr`] there is no indication of what the correct provenance for the returned
//!   pointer is -- and that is exactly what makes integer-to-pointer casts so tricky to rigorously
//!   specify! The compiler will do its best to pick the right provenance for you, but currently we
//!   cannot provide any guarantees about which provenance the resulting pointer will have. Only one
//!   thing is clear: if there is *no* previously 'exposed' provenance that justifies the way the
//!   returned pointer will be used, the program has undefined behavior.
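//!
//! For illustration, a minimal sketch of a round-trip through Exposed Provenance:
//!
//! ```
//! use std::ptr;
//!
//! let x = 3u8;
//! let addr = (&x as *const u8).expose_provenance();
//! // ... `addr` may now be stored, masked, passed across an FFI boundary, ...
//! let p: *const u8 = ptr::with_exposed_provenance(addr);
//! // `p` may pick up the previously exposed provenance of `&x`, justifying this read.
//! assert_eq!(unsafe { *p }, 3);
//! ```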
//!
//! If at all possible, we encourage code to be ported to [Strict Provenance] APIs, thus avoiding
//! the need for Exposed Provenance. Maximizing the amount of such code is a major win for avoiding
//! specification complexity and for facilitating adoption of tools like [CHERI] and [Miri] that can
//! be a big help in increasing the confidence in (unsafe) Rust code. However, we acknowledge that
//! this is not always possible, and offer Exposed Provenance as a way to explicitly "opt out" of
//! the well-defined semantics of Strict Provenance, and "opt in" to the unclear semantics of
//! integer-to-pointer casts.
//!
//! [aliasing]: ../../nomicon/aliasing.html
//! [allocated object]: #allocated-object
//! [provenance]: #provenance
//! [book]: ../../book/ch19-01-unsafe-rust.html#dereferencing-a-raw-pointer
//! [ub]: ../../reference/behavior-considered-undefined.html
//! [zst]: ../../nomicon/exotic-sizes.html#zero-sized-types-zsts
//! [atomic operations]: crate::sync::atomic
//! [`offset`]: pointer::offset
//! [`offset_from`]: pointer::offset_from
//! [`wrapping_offset`]: pointer::wrapping_offset
//! [`with_addr`]: pointer::with_addr
//! [`map_addr`]: pointer::map_addr
//! [`addr`]: pointer::addr
//! [`AtomicUsize`]: crate::sync::atomic::AtomicUsize
//! [`AtomicPtr`]: crate::sync::atomic::AtomicPtr
//! [`expose_provenance`]: pointer::expose_provenance
//! [`with_exposed_provenance`]: with_exposed_provenance
//! [Miri]: https://github.com/rust-lang/miri
//! [CHERI]: https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
//! [Strict Provenance]: #strict-provenance
//! [`UnsafeCell`]: core::cell::UnsafeCell

#![stable(feature = "rust1", since = "1.0.0")]
// There are many unsafe functions taking pointers that don't dereference them.
#![allow(clippy::not_unsafe_ptr_arg_deref)]

use crate::cmp::Ordering;
use crate::intrinsics::const_eval_select;
use crate::marker::FnPtr;
use crate::mem::{self, MaybeUninit, SizedTypeProperties};
use crate::num::NonZero;
use crate::{fmt, hash, intrinsics, ub_checks};

mod alignment;
#[unstable(feature = "ptr_alignment_type", issue = "102070")]
pub use alignment::Alignment;

#[stable(feature = "rust1", since = "1.0.0")]
#[doc(inline)]
pub use crate::intrinsics::copy;
#[stable(feature = "rust1", since = "1.0.0")]
#[doc(inline)]
pub use crate::intrinsics::copy_nonoverlapping;
#[stable(feature = "rust1", since = "1.0.0")]
#[doc(inline)]
pub use crate::intrinsics::write_bytes;

mod metadata;
#[unstable(feature = "ptr_metadata", issue = "81513")]
pub use metadata::{DynMetadata, Pointee, Thin, from_raw_parts, from_raw_parts_mut, metadata};

mod non_null;
#[stable(feature = "nonnull", since = "1.25.0")]
pub use non_null::NonNull;

mod unique;
#[unstable(feature = "ptr_internals", issue = "none")]
pub use unique::Unique;

mod const_ptr;
mod mut_ptr;

/// Executes the destructor (if any) of the pointed-to value.
///
/// This is almost the same as calling [`ptr::read`] and discarding
/// the result, but has the following advantages:
// FIXME: say something more useful than "almost the same"?
// There are open questions here: `read` requires the value to be fully valid, e.g. if `T` is a
// `bool` it must be 0 or 1, if it is a reference then it must be dereferenceable. `drop_in_place`
// only requires that `*to_drop` be "valid for dropping" and we have not defined what that means. In
// Miri it currently (May 2024) requires nothing at all for types without drop glue.
///
/// * It is *required* to use `drop_in_place` to drop unsized types like
///   trait objects, because they can't be read out onto the stack and
///   dropped normally.
///
/// * It is friendlier to the optimizer to do this over [`ptr::read`] when
///   dropping manually allocated memory (e.g., in the implementations of
///   `Box`/`Rc`/`Vec`), as the compiler doesn't need to prove that it's
///   sound to elide the copy.
///
/// * It can be used to drop [pinned] data when `T` is not `repr(packed)`
///   (pinned data must not be moved before it is dropped).
///
/// Unaligned values cannot be dropped in place, they must be copied to an aligned
/// location first using [`ptr::read_unaligned`]. For packed structs, this move is
/// done automatically by the compiler. This means the fields of packed structs
/// are not dropped in-place.
///
/// [`ptr::read`]: self::read
/// [`ptr::read_unaligned`]: self::read_unaligned
/// [pinned]: crate::pin
///
/// # Safety
///
/// Behavior is undefined if any of the following conditions are violated:
///
/// * `to_drop` must be [valid] for both reads and writes.
///
/// * `to_drop` must be properly aligned, even if `T` has size 0.
///
/// * `to_drop` must be nonnull, even if `T` has size 0.
///
/// * The value `to_drop` points to must be valid for dropping, which may mean
///   it must uphold additional invariants. These invariants depend on the type
///   of the value being dropped. For instance, when dropping a Box, the box's
///   pointer to the heap must be valid.
///
/// * While `drop_in_place` is executing, the only way to access parts of
///   `to_drop` is through the `&mut self` references supplied to the
///   `Drop::drop` methods that `drop_in_place` invokes.
///
/// Additionally, if `T` is not [`Copy`], using the pointed-to value after
/// calling `drop_in_place` can cause undefined behavior. Note that `*to_drop =
/// foo` counts as a use because it will cause the value to be dropped
/// again. [`write()`] can be used to overwrite data without causing it to be
/// dropped.
///
/// [valid]: self#safety
///
/// # Examples
///
/// Manually remove the last item from a vector:
///
/// ```
/// use std::ptr;
/// use std::rc::Rc;
///
/// let last = Rc::new(1);
/// let weak = Rc::downgrade(&last);
///
/// let mut v = vec![Rc::new(0), last];
///
/// unsafe {
///     // Get a raw pointer to the last element in `v`.
///     let ptr = &mut v[1] as *mut _;
///     // Shorten `v` to prevent the last item from being dropped. We do that first,
///     // to prevent issues if the `drop_in_place` below panics.
///     v.set_len(1);
///     // Without a call to `drop_in_place`, the last item would never be dropped,
///     // and the memory it manages would be leaked.
///     ptr::drop_in_place(ptr);
/// }
///
/// assert_eq!(v, &[0.into()]);
///
/// // Ensure that the last item was dropped.
/// assert!(weak.upgrade().is_none());
/// ```
#[stable(feature = "drop_in_place", since = "1.8.0")]
#[lang = "drop_in_place"]
#[allow(unconditional_recursion)]
#[rustc_diagnostic_item = "ptr_drop_in_place"]
pub unsafe fn drop_in_place<T: ?Sized>(to_drop: *mut T) {
    // Code here does not matter - this is replaced by the
    // real drop glue by the compiler.

    // SAFETY: see comment above
    unsafe { drop_in_place(to_drop) }
}

/// Creates a null raw pointer.
///
/// This function is equivalent to zero-initializing the pointer:
/// `MaybeUninit::<*const T>::zeroed().assume_init()`.
/// The resulting pointer has the address 0.
///
/// # Examples
///
/// ```
/// use std::ptr;
///
/// let p: *const i32 = ptr::null();
/// assert!(p.is_null());
/// assert_eq!(p as usize, 0); // this pointer has the address 0
/// ```
#[inline(always)]
#[must_use]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_promotable]
#[rustc_const_stable(feature = "const_ptr_null", since = "1.24.0")]
#[rustc_diagnostic_item = "ptr_null"]
pub const fn null<T: ?Sized + Thin>() -> *const T {
    from_raw_parts(without_provenance::<()>(0), ())
}

/// Creates a null mutable raw pointer.
///
/// This function is equivalent to zero-initializing the pointer:
/// `MaybeUninit::<*mut T>::zeroed().assume_init()`.
/// The resulting pointer has the address 0.
///
/// # Examples
///
/// ```
/// use std::ptr;
///
/// let p: *mut i32 = ptr::null_mut();
/// assert!(p.is_null());
/// assert_eq!(p as usize, 0); // this pointer has the address 0
/// ```
#[inline(always)]
#[must_use]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_promotable]
#[rustc_const_stable(feature = "const_ptr_null", since = "1.24.0")]
#[rustc_diagnostic_item = "ptr_null_mut"]
pub const fn null_mut<T: ?Sized + Thin>() -> *mut T {
    from_raw_parts_mut(without_provenance_mut::<()>(0), ())
}

/// Creates a pointer with the given address and no [provenance][crate::ptr#provenance].
///
/// This is equivalent to `ptr::null().with_addr(addr)`.
///
/// Without provenance, this pointer is not associated with any actual allocation. Such a
/// no-provenance pointer may be used for zero-sized memory accesses (if suitably aligned), but
/// non-zero-sized memory accesses with a no-provenance pointer are UB. No-provenance pointers are
/// little more than a `usize` address in disguise.
///
/// This is different from `addr as *const T`, which creates a pointer that picks up a previously
/// exposed provenance. See [`with_exposed_provenance`] for more details on that operation.
///
/// This is a [Strict Provenance][crate::ptr#strict-provenance] API.
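///
/// # Examples
///
/// A minimal sketch of creating and inspecting a no-provenance pointer:
///
/// ```
/// use std::ptr;
///
/// let p: *const u8 = ptr::without_provenance(0x1000);
/// assert_eq!(p.addr(), 0x1000);
/// // `p` must not be used for non-zero-sized reads or writes: it has no provenance.
/// ```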
#[inline(always)]
#[must_use]
#[stable(feature = "strict_provenance", since = "1.84.0")]
#[rustc_const_stable(feature = "strict_provenance", since = "1.84.0")]
pub const fn without_provenance<T>(addr: usize) -> *const T {
    without_provenance_mut(addr)
}

/// Creates a new pointer that is dangling, but non-null and well-aligned.
///
/// This is useful for initializing types which lazily allocate, like
/// `Vec::new` does.
///
/// Note that the pointer value may potentially represent a valid pointer to
/// a `T`, which means this must not be used as a "not yet initialized"
/// sentinel value. Types that lazily allocate must track initialization by
/// some other means.
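///
/// # Examples
///
/// A minimal sketch:
///
/// ```
/// use std::ptr;
///
/// let p: *const u64 = ptr::dangling();
/// assert!(!p.is_null());
/// assert!(p.is_aligned());
/// // `p` is dangling: it must not be used for non-zero-sized reads or writes.
/// ```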
#[inline(always)]
#[must_use]
#[stable(feature = "strict_provenance", since = "1.84.0")]
#[rustc_const_stable(feature = "strict_provenance", since = "1.84.0")]
pub const fn dangling<T>() -> *const T {
    dangling_mut()
}

/// Creates a pointer with the given address and no [provenance][crate::ptr#provenance].
///
/// This is equivalent to `ptr::null_mut().with_addr(addr)`.
///
/// Without provenance, this pointer is not associated with any actual allocation. Such a
/// no-provenance pointer may be used for zero-sized memory accesses (if suitably aligned), but
/// non-zero-sized memory accesses with a no-provenance pointer are UB. No-provenance pointers are
/// little more than a `usize` address in disguise.
///
/// This is different from `addr as *mut T`, which creates a pointer that picks up a previously
/// exposed provenance. See [`with_exposed_provenance_mut`] for more details on that operation.
///
/// This is a [Strict Provenance][crate::ptr#strict-provenance] API.
#[inline(always)]
#[must_use]
#[stable(feature = "strict_provenance", since = "1.84.0")]
#[rustc_const_stable(feature = "strict_provenance", since = "1.84.0")]
pub const fn without_provenance_mut<T>(addr: usize) -> *mut T {
    // An int-to-pointer transmute currently has exactly the intended semantics: it creates a
    // pointer without provenance. Note that this is *not* a stable guarantee about transmute
    // semantics, it relies on sysroot crates having special status.
    // SAFETY: every valid integer is also a valid pointer (as long as you don't dereference that
    // pointer).
    unsafe { mem::transmute(addr) }
}

/// Creates a new pointer that is dangling, but non-null and well-aligned.
///
/// This is useful for initializing types which lazily allocate, like
/// `Vec::new` does.
///
/// Note that the pointer value may potentially represent a valid pointer to
/// a `T`, which means this must not be used as a "not yet initialized"
/// sentinel value. Types that lazily allocate must track initialization by
/// some other means.
#[inline(always)]
#[must_use]
#[stable(feature = "strict_provenance", since = "1.84.0")]
#[rustc_const_stable(feature = "strict_provenance", since = "1.84.0")]
pub const fn dangling_mut<T>() -> *mut T {
    NonNull::dangling().as_ptr()
}

/// Converts an address back to a pointer, picking up some previously 'exposed'
/// [provenance][crate::ptr#provenance].
///
/// This is fully equivalent to `addr as *const T`. The provenance of the returned pointer is that
/// of *some* pointer that was previously exposed by passing it to
/// [`expose_provenance`][pointer::expose_provenance], or a `ptr as usize` cast. In addition, memory
/// which is outside the control of the Rust abstract machine (MMIO registers, for example) is
/// always considered to be accessible with an exposed provenance, so long as this memory is disjoint
/// from memory that will be used by the abstract machine such as the stack, heap, and statics.
///
/// The exact provenance that gets picked is not specified. The compiler will do its best to pick
/// the "right" provenance for you (whatever that may be), but currently we cannot provide any
/// guarantees about which provenance the resulting pointer will have -- and therefore there
/// is no definite specification for which memory the resulting pointer may access.
///
/// If there is *no* previously 'exposed' provenance that justifies the way the returned pointer
/// will be used, the program has undefined behavior. In particular, the aliasing rules still apply:
/// pointers and references that have been invalidated due to aliasing accesses cannot be used
/// anymore, even if they have been exposed!
///
/// Due to its inherent ambiguity, this operation may not be supported by tools that help you to
/// stay conformant with the Rust memory model. It is recommended to use [Strict
/// Provenance][self#strict-provenance] APIs such as [`with_addr`][pointer::with_addr] wherever
/// possible.
///
/// On most platforms this will produce a value with the same bytes as the address. Platforms
/// which need to store additional information in a pointer may not support this operation,
/// since it is generally not possible to actually *compute* which provenance the returned
/// pointer has to pick up.
///
/// This is an [Exposed Provenance][crate::ptr#exposed-provenance] API.
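///
/// # Examples
///
/// A minimal round-trip sketch:
///
/// ```
/// use std::ptr;
///
/// let x = 3u8;
/// let addr = (&x as *const u8).expose_provenance();
/// let p: *const u8 = ptr::with_exposed_provenance(addr);
/// // `p` may pick up the exposed provenance of `&x`, justifying this read.
/// assert_eq!(unsafe { *p }, 3);
/// ```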
#[must_use]
#[inline(always)]
#[stable(feature = "exposed_provenance", since = "1.84.0")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
#[allow(fuzzy_provenance_casts)] // this *is* the explicit provenance API one should use instead
pub fn with_exposed_provenance<T>(addr: usize) -> *const T {
    addr as *const T
}

/// Converts an address back to a mutable pointer, picking up some previously 'exposed'
/// [provenance][crate::ptr#provenance].
///
/// This is fully equivalent to `addr as *mut T`. The provenance of the returned pointer is that
/// of *some* pointer that was previously exposed by passing it to
/// [`expose_provenance`][pointer::expose_provenance], or a `ptr as usize` cast. In addition, memory
/// which is outside the control of the Rust abstract machine (MMIO registers, for example) is
/// always considered to be accessible with an exposed provenance, so long as this memory is disjoint
/// from memory that will be used by the abstract machine such as the stack, heap, and statics.
///
/// The exact provenance that gets picked is not specified. The compiler will do its best to pick
/// the "right" provenance for you (whatever that may be), but currently we cannot provide any
/// guarantees about which provenance the resulting pointer will have -- and therefore there
/// is no definite specification for which memory the resulting pointer may access.
///
/// If there is *no* previously 'exposed' provenance that justifies the way the returned pointer
/// will be used, the program has undefined behavior. In particular, the aliasing rules still apply:
/// pointers and references that have been invalidated due to aliasing accesses cannot be used
/// anymore, even if they have been exposed!
///
/// Due to its inherent ambiguity, this operation may not be supported by tools that help you to
/// stay conformant with the Rust memory model. It is recommended to use [Strict
/// Provenance][self#strict-provenance] APIs such as [`with_addr`][pointer::with_addr] wherever
/// possible.
///
/// On most platforms this will produce a value with the same bytes as the address. Platforms
/// which need to store additional information in a pointer may not support this operation,
/// since it is generally not possible to actually *compute* which provenance the returned
/// pointer has to pick up.
///
/// This is an [Exposed Provenance][crate::ptr#exposed-provenance] API.
#[must_use]
#[inline(always)]
#[stable(feature = "exposed_provenance", since = "1.84.0")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
#[allow(fuzzy_provenance_casts)] // this *is* the explicit provenance API one should use instead
pub fn with_exposed_provenance_mut<T>(addr: usize) -> *mut T {
    addr as *mut T
}

/// Converts a reference to a raw pointer.
///
/// For `r: &T`, `from_ref(r)` is equivalent to `r as *const T` (except for the caveat noted below),
/// but is a bit safer since it will never silently change type or mutability, in particular if the
/// code is refactored.
///
/// The caller must ensure that the pointee outlives the pointer this function returns, or else it
/// will end up dangling.
///
/// The caller must also ensure that the memory the pointer (non-transitively) points to is never
/// written to (except inside an `UnsafeCell`) using this pointer or any pointer derived from it. If
/// you need to mutate the pointee, use [`from_mut`]. Specifically, to turn a mutable reference `m:
/// &mut T` into `*const T`, prefer `from_mut(m).cast_const()` to obtain a pointer that can later be
/// used for mutation.
///
/// ## Interaction with lifetime extension
///
/// Note that this has subtle interactions with the rules for lifetime extension of temporaries in
/// tail expressions. This code is valid, albeit in a non-obvious way:
/// ```rust
/// # type T = i32;
/// # fn foo() -> T { 42 }
/// // The temporary holding the return value of `foo` has its lifetime extended,
/// // because the surrounding expression involves no function call.
/// let p = &foo() as *const T;
/// unsafe { p.read() };
/// ```
/// Naively replacing the cast with `from_ref` is not valid:
/// ```rust,no_run
/// # use std::ptr;
/// # type T = i32;
/// # fn foo() -> T { 42 }
/// // The temporary holding the return value of `foo` does *not* have its lifetime extended,
/// // because the surrounding expression involves a function call.
/// let p = ptr::from_ref(&foo());
/// unsafe { p.read() }; // UB! Reading from a dangling pointer ⚠️
/// ```
/// The recommended way to write this code is to avoid relying on lifetime extension
/// when raw pointers are involved:
/// ```rust
/// # use std::ptr;
/// # type T = i32;
/// # fn foo() -> T { 42 }
/// let x = foo();
/// let p = ptr::from_ref(&x);
/// unsafe { p.read() };
/// ```
#[inline(always)]
#[must_use]
#[stable(feature = "ptr_from_ref", since = "1.76.0")]
#[rustc_const_stable(feature = "ptr_from_ref", since = "1.76.0")]
#[rustc_never_returns_null_ptr]
#[rustc_diagnostic_item = "ptr_from_ref"]
pub const fn from_ref<T: ?Sized>(r: &T) -> *const T {
    r
}

/// Converts a mutable reference to a raw pointer.
///
/// For `r: &mut T`, `from_mut(r)` is equivalent to `r as *mut T` (except for the caveat noted
/// below), but is a bit safer since it will never silently change type or mutability, in particular
/// if the code is refactored.
///
/// The caller must ensure that the pointee outlives the pointer this function returns, or else it
/// will end up dangling.
///
/// ## Interaction with lifetime extension
///
/// Note that this has subtle interactions with the rules for lifetime extension of temporaries in
/// tail expressions. This code is valid, albeit in a non-obvious way:
/// ```rust
/// # type T = i32;
/// # fn foo() -> T { 42 }
/// // The temporary holding the return value of `foo` has its lifetime extended,
/// // because the surrounding expression involves no function call.
/// let p = &mut foo() as *mut T;
/// unsafe { p.write(T::default()) };
/// ```
/// Naively replacing the cast with `from_mut` is not valid:
/// ```rust,no_run
/// # use std::ptr;
/// # type T = i32;
/// # fn foo() -> T { 42 }
/// // The temporary holding the return value of `foo` does *not* have its lifetime extended,
/// // because the surrounding expression involves a function call.
/// let p = ptr::from_mut(&mut foo());
/// unsafe { p.write(T::default()) }; // UB! Writing to a dangling pointer ⚠️
/// ```
/// The recommended way to write this code is to avoid relying on lifetime extension
/// when raw pointers are involved:
/// ```rust
/// # use std::ptr;
/// # type T = i32;
/// # fn foo() -> T { 42 }
/// let mut x = foo();
/// let p = ptr::from_mut(&mut x);
/// unsafe { p.write(T::default()) };
/// ```
#[inline(always)]
#[must_use]
#[stable(feature = "ptr_from_ref", since = "1.76.0")]
#[rustc_const_stable(feature = "ptr_from_ref", since = "1.76.0")]
#[rustc_never_returns_null_ptr]
pub const fn from_mut<T: ?Sized>(r: &mut T) -> *mut T {
    r
}

/// Forms a raw slice from a pointer and a length.
///
/// The `len` argument is the number of **elements**, not the number of bytes.
///
/// This function is safe, but actually using the return value is unsafe.
/// See the documentation of [`slice::from_raw_parts`] for slice safety requirements.
///
/// [`slice::from_raw_parts`]: crate::slice::from_raw_parts
///
/// # Examples
///
/// ```rust
/// use std::ptr;
///
/// // create a slice pointer when starting out with a pointer to the first element
/// let x = [5, 6, 7];
/// let raw_pointer = x.as_ptr();
/// let slice = ptr::slice_from_raw_parts(raw_pointer, 3);
/// assert_eq!(unsafe { &*slice }[2], 7);
/// ```
///
/// You must ensure that the pointer is valid and not null before dereferencing
/// the raw slice. A slice reference must never have a null pointer, even if it's empty.
///
/// ```rust,should_panic
/// use std::ptr;
/// let danger: *const [u8] = ptr::slice_from_raw_parts(ptr::null(), 0);
/// unsafe {
///     danger.as_ref().expect("references must not be null");
/// }
/// ```
#[inline]
#[stable(feature = "slice_from_raw_parts", since = "1.42.0")]
#[rustc_const_stable(feature = "const_slice_from_raw_parts", since = "1.64.0")]
#[rustc_diagnostic_item = "ptr_slice_from_raw_parts"]
pub const fn slice_from_raw_parts<T>(data: *const T, len: usize) -> *const [T] {
    from_raw_parts(data, len)
}

/// Forms a raw mutable slice from a pointer and a length.
///
/// The `len` argument is the number of **elements**, not the number of bytes.
///
/// Performs the same functionality as [`slice_from_raw_parts`], except that a
/// raw mutable slice is returned, as opposed to a raw immutable slice.
///
/// This function is safe, but actually using the return value is unsafe.
/// See the documentation of [`slice::from_raw_parts_mut`] for slice safety requirements.
///
/// [`slice::from_raw_parts_mut`]: crate::slice::from_raw_parts_mut
///
/// # Examples
///
/// ```rust
/// use std::ptr;
///
/// let x = &mut [5, 6, 7];
/// let raw_pointer = x.as_mut_ptr();
/// let slice = ptr::slice_from_raw_parts_mut(raw_pointer, 3);
///
/// unsafe {
///     (*slice)[2] = 99; // assign a value at an index in the slice
/// };
///
/// assert_eq!(unsafe { &*slice }[2], 99);
/// ```
///
/// You must ensure that the pointer is valid and not null before dereferencing
/// the raw slice. A slice reference must never have a null pointer, even if it's empty.
///
/// ```rust,should_panic
/// use std::ptr;
/// let danger: *mut [u8] = ptr::slice_from_raw_parts_mut(ptr::null_mut(), 0);
/// unsafe {
///     danger.as_mut().expect("references must not be null");
/// }
/// ```
#[inline]
#[stable(feature = "slice_from_raw_parts", since = "1.42.0")]
#[rustc_const_stable(feature = "const_slice_from_raw_parts_mut", since = "1.83.0")]
#[rustc_diagnostic_item = "ptr_slice_from_raw_parts_mut"]
pub const fn slice_from_raw_parts_mut<T>(data: *mut T, len: usize) -> *mut [T] {
    from_raw_parts_mut(data, len)
}

/// Swaps the values at two mutable locations of the same type, without
/// deinitializing either.
///
/// But for the following exceptions, this function is semantically
/// equivalent to [`mem::swap`]:
///
/// * It operates on raw pointers instead of references. When references are
///   available, [`mem::swap`] should be preferred.
///
/// * The two pointed-to values may overlap. If the values do overlap, then the
///   overlapping region of memory from `x` will be used. This is demonstrated
///   in the second example below.
///
/// * The operation is "untyped" in the sense that data may be uninitialized or otherwise violate
///   the requirements of `T`. The initialization state is preserved exactly.
///
/// # Safety
///
/// Behavior is undefined if any of the following conditions are violated:
///
/// * Both `x` and `y` must be [valid] for both reads and writes. They must remain valid even when the
///   other pointer is written. (This means if the memory ranges overlap, the two pointers must not
///   be subject to aliasing restrictions relative to each other.)
///
/// * Both `x` and `y` must be properly aligned.
///
/// Note that even if `T` has size `0`, the pointers must be properly aligned.
///
/// [valid]: self#safety
///
/// # Examples
///
/// Swapping two non-overlapping regions:
///
/// ```
/// use std::ptr;
///
/// let mut array = [0, 1, 2, 3];
///
/// let (x, y) = array.split_at_mut(2);
/// let x = x.as_mut_ptr().cast::<[u32; 2]>(); // this is `array[0..2]`
/// let y = y.as_mut_ptr().cast::<[u32; 2]>(); // this is `array[2..4]`
///
/// unsafe {
///     ptr::swap(x, y);
///     assert_eq!([2, 3, 0, 1], array);
/// }
/// ```
///
/// Swapping two overlapping regions:
///
/// ```
/// use std::ptr;
///
/// let mut array: [i32; 4] = [0, 1, 2, 3];
///
/// let array_ptr: *mut i32 = array.as_mut_ptr();
///
/// let x = array_ptr as *mut [i32; 3]; // this is `array[0..3]`
/// let y = unsafe { array_ptr.add(1) } as *mut [i32; 3]; // this is `array[1..4]`
///
/// unsafe {
///     ptr::swap(x, y);
///     // The indices `1..3` of the slice overlap between `x` and `y`.
///     // Reasonable results would be for them to be `[2, 3]`, so that indices `0..3` are
1000///     // `[1, 2, 3]` (matching `y` before the `swap`); or for them to be `[0, 1]`
1001///     // so that indices `1..4` are `[0, 1, 2]` (matching `x` before the `swap`).
1002///     // This implementation is defined to make the latter choice.
1003///     assert_eq!([1, 0, 1, 2], array);
1004/// }
1005/// ```
1006#[inline]
1007#[stable(feature = "rust1", since = "1.0.0")]
1008#[rustc_const_stable(feature = "const_swap", since = "1.85.0")]
1009#[rustc_diagnostic_item = "ptr_swap"]
1010pub const unsafe fn swap<T>(x: *mut T, y: *mut T) {
1011    // Give ourselves some scratch space to work with.
1012    // We do not have to worry about drops: `MaybeUninit` does nothing when dropped.
1013    let mut tmp = MaybeUninit::<T>::uninit();
1014
1015    // Perform the swap
1016    // SAFETY: the caller must guarantee that `x` and `y` are
1017    // valid for writes and properly aligned. `tmp` cannot be
1018    // overlapping either `x` or `y` because `tmp` was just allocated
1019    // on the stack as a separate allocated object.
1020    unsafe {
1021        copy_nonoverlapping(x, tmp.as_mut_ptr(), 1);
1022        copy(y, x, 1); // `x` and `y` may overlap
1023        copy_nonoverlapping(tmp.as_ptr(), y, 1);
1024    }
1025}
1026
1027/// Swaps `count * size_of::<T>()` bytes between the two regions of memory
1028/// beginning at `x` and `y`. The two regions must *not* overlap.
1029///
1030/// The operation is "untyped" in the sense that data may be uninitialized or otherwise violate the
1031/// requirements of `T`. The initialization state is preserved exactly.
1032///
1033/// # Safety
1034///
1035/// Behavior is undefined if any of the following conditions are violated:
1036///
1037/// * Both `x` and `y` must be [valid] for both reads and writes of `count *
1038///   size_of::<T>()` bytes.
1039///
1040/// * Both `x` and `y` must be properly aligned.
1041///
1042/// * The region of memory beginning at `x` with a size of `count *
1043///   size_of::<T>()` bytes must *not* overlap with the region of memory
1044///   beginning at `y` with the same size.
1045///
1046/// Note that even if the effectively copied size (`count * size_of::<T>()`) is `0`,
1047/// the pointers must be properly aligned.
1048///
1049/// [valid]: self#safety
1050///
1051/// # Examples
1052///
1053/// Basic usage:
1054///
1055/// ```
1056/// use std::ptr;
1057///
1058/// let mut x = [1, 2, 3, 4];
1059/// let mut y = [7, 8, 9];
1060///
1061/// unsafe {
1062///     ptr::swap_nonoverlapping(x.as_mut_ptr(), y.as_mut_ptr(), 2);
1063/// }
1064///
1065/// assert_eq!(x, [7, 8, 3, 4]);
1066/// assert_eq!(y, [1, 2, 9]);
1067/// ```
1068///
1069/// # Const evaluation limitations
1070///
1071/// If this function is invoked during const-evaluation, the current implementation has a small (and
1072/// rarely relevant) limitation: if `count` is at least 2 and the data pointed to by `x` or `y`
1073/// contains a pointer that crosses the boundary of two `T`-sized chunks of memory, the function may
1074/// fail to evaluate (similar to a panic during const-evaluation). This behavior may change in the
1075/// future.
1076///
1077/// The limitation is illustrated by the following example:
1078///
1079/// ```
1080/// use std::mem::size_of;
1081/// use std::ptr;
1082///
1083/// const { unsafe {
1084///     const PTR_SIZE: usize = size_of::<*const i32>();
1085///     let mut data1 = [0u8; PTR_SIZE];
1086///     let mut data2 = [0u8; PTR_SIZE];
1087///     // Store a pointer in `data1`.
1088///     data1.as_mut_ptr().cast::<*const i32>().write_unaligned(&42);
1089///     // Swap the contents of `data1` and `data2` by swapping `PTR_SIZE` many `u8`-sized chunks.
1090///     // This call will fail, because the pointer in `data1` crosses the boundary
1091///     // between several of the 1-byte chunks that are being swapped here.
1092///     //ptr::swap_nonoverlapping(data1.as_mut_ptr(), data2.as_mut_ptr(), PTR_SIZE);
1093///     // Swap the contents of `data1` and `data2` by swapping a single chunk of size
1094///     // `[u8; PTR_SIZE]`. That works, as there is no pointer crossing the boundary between
1095///     // two chunks.
1096///     ptr::swap_nonoverlapping(&mut data1, &mut data2, 1);
1097///     // Read the pointer from `data2` and dereference it.
1098///     let ptr = data2.as_ptr().cast::<*const i32>().read_unaligned();
1099///     assert!(*ptr == 42);
1100/// } }
1101/// ```
1102#[inline]
1103#[stable(feature = "swap_nonoverlapping", since = "1.27.0")]
1104#[rustc_const_stable(feature = "const_swap_nonoverlapping", since = "1.88.0")]
1105#[rustc_diagnostic_item = "ptr_swap_nonoverlapping"]
1106#[rustc_allow_const_fn_unstable(const_eval_select)] // both implementations behave the same
1107pub const unsafe fn swap_nonoverlapping<T>(x: *mut T, y: *mut T, count: usize) {
1108    ub_checks::assert_unsafe_precondition!(
1109        check_library_ub,
1110        "ptr::swap_nonoverlapping requires that both pointer arguments are aligned and non-null \
1111        and the specified memory ranges do not overlap",
1112        (
1113            x: *mut () = x as *mut (),
1114            y: *mut () = y as *mut (),
1115            size: usize = size_of::<T>(),
1116            align: usize = align_of::<T>(),
1117            count: usize = count,
1118        ) => {
1119            let zero_size = size == 0 || count == 0;
1120            ub_checks::maybe_is_aligned_and_not_null(x, align, zero_size)
1121                && ub_checks::maybe_is_aligned_and_not_null(y, align, zero_size)
1122                && ub_checks::maybe_is_nonoverlapping(x, y, size, count)
1123        }
1124    );
1125
1126    const_eval_select!(
1127        @capture[T] { x: *mut T, y: *mut T, count: usize }:
1128        if const {
1129            // At compile-time we want to always copy this in chunks of `T`, to ensure that if there
1130            // are pointers inside `T` we will copy them in one go rather than trying to copy a part
1131            // of a pointer (which would not work).
1132            // SAFETY: Same preconditions as this function
1133            unsafe { swap_nonoverlapping_const(x, y, count) }
1134        } else {
            // Going through a slice here helps codegen know the size fits in `isize`
1136            let slice = slice_from_raw_parts_mut(x, count);
1137            // SAFETY: This is all readable from the pointer, meaning it's one
1138            // allocated object, and thus cannot be more than isize::MAX bytes.
1139            let bytes = unsafe { mem::size_of_val_raw::<[T]>(slice) };
1140            if let Some(bytes) = NonZero::new(bytes) {
1141                // SAFETY: These are the same ranges, just expressed in a different
1142                // type, so they're still non-overlapping.
1143                unsafe { swap_nonoverlapping_bytes(x.cast(), y.cast(), bytes) };
1144            }
1145        }
1146    )
1147}
1148
1149/// Same behavior and safety conditions as [`swap_nonoverlapping`]
1150#[inline]
1151const unsafe fn swap_nonoverlapping_const<T>(x: *mut T, y: *mut T, count: usize) {
1152    let mut i = 0;
1153    while i < count {
1154        // SAFETY: By precondition, `i` is in-bounds because it's below `count`
1155        let x = unsafe { x.add(i) };
1156        // SAFETY: By precondition, `i` is in-bounds because it's below `count`
1157        // and it's distinct from `x` since the ranges are non-overlapping
1158        let y = unsafe { y.add(i) };
1159
1160        // SAFETY: we're only ever given pointers that are valid to read/write,
1161        // including being aligned, and nothing here panics so it's drop-safe.
1162        unsafe {
1163            // Note that it's critical that these use `copy_nonoverlapping`,
1164            // rather than `read`/`write`, to avoid #134713 if T has padding.
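            // (`copy_nonoverlapping` is an untyped, byte-wise copy: it also preserves
            // the values of any padding bytes, whereas a typed `read`/`write` of `T`
            // would reset padding to uninitialized.)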
1165            let mut temp = MaybeUninit::<T>::uninit();
1166            copy_nonoverlapping(x, temp.as_mut_ptr(), 1);
1167            copy_nonoverlapping(y, x, 1);
1168            copy_nonoverlapping(temp.as_ptr(), y, 1);
1169        }
1170
1171        i += 1;
1172    }
1173}
1174
1175// Don't let MIR inline this, because we really want it to keep its noalias metadata
1176#[rustc_no_mir_inline]
1177#[inline]
1178fn swap_chunk<const N: usize>(x: &mut MaybeUninit<[u8; N]>, y: &mut MaybeUninit<[u8; N]>) {
1179    let a = *x;
1180    let b = *y;
1181    *x = b;
1182    *y = a;
1183}
1184
1185#[inline]
1186unsafe fn swap_nonoverlapping_bytes(x: *mut u8, y: *mut u8, bytes: NonZero<usize>) {
1187    // Same as `swap_nonoverlapping::<[u8; N]>`.
1188    unsafe fn swap_nonoverlapping_chunks<const N: usize>(
1189        x: *mut MaybeUninit<[u8; N]>,
1190        y: *mut MaybeUninit<[u8; N]>,
1191        chunks: NonZero<usize>,
1192    ) {
1193        let chunks = chunks.get();
1194        for i in 0..chunks {
1195            // SAFETY: i is in [0, chunks) so the adds and dereferences are in-bounds.
1196            unsafe { swap_chunk(&mut *x.add(i), &mut *y.add(i)) };
1197        }
1198    }
1199
1200    // Same as `swap_nonoverlapping_bytes`, but accepts at most 1+2+4=7 bytes
1201    #[inline]
1202    unsafe fn swap_nonoverlapping_short(x: *mut u8, y: *mut u8, bytes: NonZero<usize>) {
1203        // Tail handling for auto-vectorized code sometimes has element-at-a-time behavior,
1204        // see <https://github.com/rust-lang/rust/issues/134946>.
1205        // By swapping as different sizes, rather than as a loop over bytes,
1206        // we make sure not to end up with, say, seven byte-at-a-time copies.
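        //
        // For example, `bytes == 7` is handled as one 4-byte swap at offset 0,
        // then a 2-byte swap at offset 4, then a 1-byte swap at offset 6.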
1207
1208        let bytes = bytes.get();
1209        let mut i = 0;
1210        macro_rules! swap_prefix {
1211            ($($n:literal)+) => {$(
1212                if (bytes & $n) != 0 {
1213                    // SAFETY: `i` can only have the same bits set as those in bytes,
1214                    // so these `add`s are in-bounds of `bytes`.  But the bit for
1215                    // `$n` hasn't been set yet, so the `$n` bytes that `swap_chunk`
1216                    // will read and write are within the usable range.
1217                    unsafe { swap_chunk::<$n>(&mut*x.add(i).cast(), &mut*y.add(i).cast()) };
1218                    i |= $n;
1219                }
1220            )+};
1221        }
1222        swap_prefix!(4 2 1);
1223        debug_assert_eq!(i, bytes);
1224    }
1225
1226    const CHUNK_SIZE: usize = size_of::<*const ()>();
1227    let bytes = bytes.get();
1228
1229    let chunks = bytes / CHUNK_SIZE;
1230    let tail = bytes % CHUNK_SIZE;
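    // E.g. with 8-byte pointers, swapping 30 bytes performs 3 pointer-sized chunk
    // swaps and then hands the remaining 6-byte tail to `swap_nonoverlapping_short`
    // (which swaps it as 4 + 2 bytes).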
1231    if let Some(chunks) = NonZero::new(chunks) {
1232        // SAFETY: this is bytes/CHUNK_SIZE*CHUNK_SIZE bytes, which is <= bytes,
1233        // so it's within the range of our non-overlapping bytes.
1234        unsafe { swap_nonoverlapping_chunks::<CHUNK_SIZE>(x.cast(), y.cast(), chunks) };
1235    }
1236    if let Some(tail) = NonZero::new(tail) {
1237        const { assert!(CHUNK_SIZE <= 8) };
1238        let delta = chunks * CHUNK_SIZE;
1239        // SAFETY: the tail length is below `CHUNK_SIZE` because of the remainder,
1240        // and CHUNK_SIZE is at most 8 by the const assert, so tail <= 7
1241        unsafe { swap_nonoverlapping_short(x.add(delta), y.add(delta), tail) };
1242    }
1243}
1244
1245/// Moves `src` into the location pointed to by `dst`, returning the previous `dst` value.
1246///
1247/// Neither value is dropped.
1248///
1249/// This function is semantically equivalent to [`mem::replace`] except that it
1250/// operates on raw pointers instead of references. When references are
1251/// available, [`mem::replace`] should be preferred.
1252///
1253/// # Safety
1254///
1255/// Behavior is undefined if any of the following conditions are violated:
1256///
1257/// * `dst` must be [valid] for both reads and writes.
1258///
1259/// * `dst` must be properly aligned.
1260///
1261/// * `dst` must point to a properly initialized value of type `T`.
1262///
1263/// Note that even if `T` has size `0`, the pointer must be properly aligned.
1264///
1265/// [valid]: self#safety
1266///
1267/// # Examples
1268///
1269/// ```
1270/// use std::ptr;
1271///
1272/// let mut rust = vec!['b', 'u', 's', 't'];
1273///
1274/// // `mem::replace` would have the same effect without requiring the unsafe
1275/// // block.
1276/// let b = unsafe {
1277///     ptr::replace(&mut rust[0], 'r')
1278/// };
1279///
1280/// assert_eq!(b, 'b');
1281/// assert_eq!(rust, &['r', 'u', 's', 't']);
1282/// ```
1283#[inline]
1284#[stable(feature = "rust1", since = "1.0.0")]
1285#[rustc_const_stable(feature = "const_replace", since = "1.83.0")]
1286#[rustc_diagnostic_item = "ptr_replace"]
1287pub const unsafe fn replace<T>(dst: *mut T, src: T) -> T {
1288    // SAFETY: the caller must guarantee that `dst` is valid to be
1289    // cast to a mutable reference (valid for writes, aligned, initialized),
1290    // and cannot overlap `src` since `dst` must point to a distinct
1291    // allocated object.
1292    unsafe {
1293        ub_checks::assert_unsafe_precondition!(
1294            check_language_ub,
1295            "ptr::replace requires that the pointer argument is aligned and non-null",
1296            (
1297                addr: *const () = dst as *const (),
1298                align: usize = align_of::<T>(),
1299                is_zst: bool = T::IS_ZST,
1300            ) => ub_checks::maybe_is_aligned_and_not_null(addr, align, is_zst)
1301        );
1302        mem::replace(&mut *dst, src)
1303    }
1304}
1305
1306/// Reads the value from `src` without moving it. This leaves the
1307/// memory in `src` unchanged.
1308///
1309/// # Safety
1310///
1311/// Behavior is undefined if any of the following conditions are violated:
1312///
1313/// * `src` must be [valid] for reads.
1314///
1315/// * `src` must be properly aligned. Use [`read_unaligned`] if this is not the
1316///   case.
1317///
1318/// * `src` must point to a properly initialized value of type `T`.
1319///
1320/// Note that even if `T` has size `0`, the pointer must be properly aligned.
1321///
1322/// # Examples
1323///
1324/// Basic usage:
1325///
1326/// ```
1327/// let x = 12;
1328/// let y = &x as *const i32;
1329///
1330/// unsafe {
1331///     assert_eq!(std::ptr::read(y), 12);
1332/// }
1333/// ```
1334///
1335/// Manually implement [`mem::swap`]:
1336///
1337/// ```
1338/// use std::ptr;
1339///
1340/// fn swap<T>(a: &mut T, b: &mut T) {
1341///     unsafe {
1342///         // Create a bitwise copy of the value at `a` in `tmp`.
1343///         let tmp = ptr::read(a);
1344///
1345///         // Exiting at this point (either by explicitly returning or by
1346///         // calling a function which panics) would cause the value in `tmp` to
1347///         // be dropped while the same value is still referenced by `a`. This
1348///         // could trigger undefined behavior if `T` is not `Copy`.
1349///
1350///         // Create a bitwise copy of the value at `b` in `a`.
1351///         // This is safe because mutable references cannot alias.
1352///         ptr::copy_nonoverlapping(b, a, 1);
1353///
1354///         // As above, exiting here could trigger undefined behavior because
1355///         // the same value is referenced by `a` and `b`.
1356///
1357///         // Move `tmp` into `b`.
1358///         ptr::write(b, tmp);
1359///
1360///         // `tmp` has been moved (`write` takes ownership of its second argument),
1361///         // so nothing is dropped implicitly here.
1362///     }
1363/// }
1364///
1365/// let mut foo = "foo".to_owned();
1366/// let mut bar = "bar".to_owned();
1367///
1368/// swap(&mut foo, &mut bar);
1369///
1370/// assert_eq!(foo, "bar");
1371/// assert_eq!(bar, "foo");
1372/// ```
1373///
1374/// ## Ownership of the Returned Value
1375///
1376/// `read` creates a bitwise copy of `T`, regardless of whether `T` is [`Copy`].
1377/// If `T` is not [`Copy`], using both the returned value and the value at
1378/// `*src` can violate memory safety. Note that assigning to `*src` counts as a
1379/// use because it will attempt to drop the value at `*src`.
1380///
1381/// [`write()`] can be used to overwrite data without causing it to be dropped.
1382///
1383/// ```
1384/// use std::ptr;
1385///
1386/// let mut s = String::from("foo");
1387/// unsafe {
1388///     // `s2` now points to the same underlying memory as `s`.
1389///     let mut s2: String = ptr::read(&s);
1390///
1391///     assert_eq!(s2, "foo");
1392///
1393///     // Assigning to `s2` causes its original value to be dropped. Beyond
1394///     // this point, `s` must no longer be used, as the underlying memory has
1395///     // been freed.
1396///     s2 = String::default();
1397///     assert_eq!(s2, "");
1398///
1399///     // Assigning to `s` would cause the old value to be dropped again,
1400///     // resulting in undefined behavior.
1401///     // s = String::from("bar"); // ERROR
1402///
1403///     // `ptr::write` can be used to overwrite a value without dropping it.
1404///     ptr::write(&mut s, String::from("bar"));
1405/// }
1406///
1407/// assert_eq!(s, "bar");
1408/// ```
1409///
1410/// [valid]: self#safety
1411#[inline]
1412#[stable(feature = "rust1", since = "1.0.0")]
1413#[rustc_const_stable(feature = "const_ptr_read", since = "1.71.0")]
1414#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1415#[rustc_diagnostic_item = "ptr_read"]
1416pub const unsafe fn read<T>(src: *const T) -> T {
1417    // It would be semantically correct to implement this via `copy_nonoverlapping`
1418    // and `MaybeUninit`, as was done before PR #109035. Calling `assume_init`
1419    // provides enough information to know that this is a typed operation.
1420
1421    // However, as of March 2023 the compiler was not capable of taking advantage
1422    // of that information. Thus, the implementation here switched to an intrinsic,
1423    // which lowers to `_0 = *src` in MIR, to address a few issues:
1424    //
1425    // - Using `MaybeUninit::assume_init` after a `copy_nonoverlapping` was not
1426    //   turning the untyped copy into a typed load. As such, the generated
1427    //   `load` in LLVM didn't get various metadata, such as `!range` (#73258),
1428    //   `!nonnull`, and `!noundef`, resulting in poorer optimization.
1429    // - Going through the extra local resulted in multiple extra copies, even
1430    //   in optimized MIR.  (Ignoring StorageLive/Dead, the intrinsic is one
1431    //   MIR statement, while the previous implementation was eight.)  LLVM
1432    //   could sometimes optimize them away, but because `read` is at the core
1433    //   of so many things, not having them in the first place improves what we
1434    //   hand off to the backend.  For example, `mem::replace::<Big>` previously
1435    //   emitted 4 `alloca` and 6 `memcpy`s, but is now 1 `alloca` and 3 `memcpy`s.
1436    // - In general, this approach keeps us from getting any more bugs (like
1437    //   #106369) that boil down to "`read(p)` is worse than `*p`", as this
1438    //   makes them look identical to the backend (or other MIR consumers).
1439    //
1440    // Future enhancements to MIR optimizations might well allow this to return
1441    // to the previous implementation, rather than using an intrinsic.
1442
1443    // SAFETY: the caller must guarantee that `src` is valid for reads.
1444    unsafe {
1445        #[cfg(debug_assertions)] // Too expensive to always enable (for now?)
1446        ub_checks::assert_unsafe_precondition!(
1447            check_language_ub,
1448            "ptr::read requires that the pointer argument is aligned and non-null",
1449            (
1450                addr: *const () = src as *const (),
1451                align: usize = align_of::<T>(),
1452                is_zst: bool = T::IS_ZST,
1453            ) => ub_checks::maybe_is_aligned_and_not_null(addr, align, is_zst)
1454        );
1455        crate::intrinsics::read_via_copy(src)
1456    }
1457}
1458
1459/// Reads the value from `src` without moving it. This leaves the
1460/// memory in `src` unchanged.
1461///
1462/// Unlike [`read`], `read_unaligned` works with unaligned pointers.
1463///
1464/// # Safety
1465///
1466/// Behavior is undefined if any of the following conditions are violated:
1467///
1468/// * `src` must be [valid] for reads.
1469///
1470/// * `src` must point to a properly initialized value of type `T`.
1471///
1472/// Like [`read`], `read_unaligned` creates a bitwise copy of `T`, regardless of
1473/// whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the returned
1474/// value and the value at `*src` can [violate memory safety][read-ownership].
1475///
1476/// [read-ownership]: read#ownership-of-the-returned-value
1477/// [valid]: self#safety
1478///
1479/// ## On `packed` structs
1480///
1481/// Attempting to create a raw pointer to an `unaligned` struct field with
1482/// an expression such as `&packed.unaligned as *const FieldType` creates an
1483/// intermediate unaligned reference before converting that to a raw pointer.
1484/// It does not matter that this reference is temporary and immediately cast:
1485/// the compiler always expects references to be properly aligned.
1486/// As a result, using `&packed.unaligned as *const FieldType` causes immediate
1487/// *undefined behavior* in your program.
1488///
1489/// Instead you must use the `&raw const` syntax to create the pointer.
1490/// You may use that constructed pointer together with this function.
1491///
1492/// An example of what not to do and how this relates to `read_unaligned` is:
1493///
1494/// ```
1495/// #[repr(packed, C)]
1496/// struct Packed {
1497///     _padding: u8,
1498///     unaligned: u32,
1499/// }
1500///
1501/// let packed = Packed {
1502///     _padding: 0x00,
1503///     unaligned: 0x01020304,
1504/// };
1505///
1506/// // Take the address of a 32-bit integer which is not aligned.
1507/// // In contrast to `&packed.unaligned as *const _`, this has no undefined behavior.
1508/// let unaligned = &raw const packed.unaligned;
1509///
1510/// let v = unsafe { std::ptr::read_unaligned(unaligned) };
1511/// assert_eq!(v, 0x01020304);
1512/// ```
1513///
1514/// Accessing unaligned fields directly with e.g. `packed.unaligned` is safe, however.
1515///
1516/// # Examples
1517///
1518/// Read a `usize` value from a byte buffer:
1519///
1520/// ```
1521/// fn read_usize(x: &[u8]) -> usize {
1522///     assert!(x.len() >= size_of::<usize>());
1523///
1524///     let ptr = x.as_ptr() as *const usize;
1525///
1526///     unsafe { ptr.read_unaligned() }
1527/// }
1528/// ```
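///
/// A quick sanity check of the helper above (a sketch; `read_usize` is the
/// function just defined, and the odd offset shows that no particular
/// alignment of the input is required):
///
/// ```
/// # fn read_usize(x: &[u8]) -> usize {
/// #     assert!(x.len() >= size_of::<usize>());
/// #     let ptr = x.as_ptr() as *const usize;
/// #     unsafe { ptr.read_unaligned() }
/// # }
/// let buf = [0u8; size_of::<usize>() + 1];
/// // `&buf[1..]` typically starts at an odd address, which is fine here.
/// assert_eq!(read_usize(&buf[1..]), 0);
/// ```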
1529#[inline]
1530#[stable(feature = "ptr_unaligned", since = "1.17.0")]
1531#[rustc_const_stable(feature = "const_ptr_read", since = "1.71.0")]
1532#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1533#[rustc_diagnostic_item = "ptr_read_unaligned"]
1534pub const unsafe fn read_unaligned<T>(src: *const T) -> T {
1535    let mut tmp = MaybeUninit::<T>::uninit();
1536    // SAFETY: the caller must guarantee that `src` is valid for reads.
1537    // `src` cannot overlap `tmp` because `tmp` was just allocated on
1538    // the stack as a separate allocated object.
1539    //
1540    // Also, since we just wrote a valid value into `tmp`, it is guaranteed
1541    // to be properly initialized.
1542    unsafe {
1543        copy_nonoverlapping(src as *const u8, tmp.as_mut_ptr() as *mut u8, size_of::<T>());
1544        tmp.assume_init()
1545    }
1546}
1547
1548/// Overwrites a memory location with the given value without reading or
1549/// dropping the old value.
1550///
1551/// `write` does not drop the contents of `dst`. This is safe, but it could leak
1552/// allocations or resources, so care should be taken not to overwrite an object
1553/// that should be dropped.
1554///
1555/// Additionally, it does not drop `src`. Semantically, `src` is moved into the
1556/// location pointed to by `dst`.
1557///
1558/// This is appropriate for initializing uninitialized memory, or overwriting
1559/// memory that has previously been [`read`] from.
1560///
1561/// # Safety
1562///
1563/// Behavior is undefined if any of the following conditions are violated:
1564///
1565/// * `dst` must be [valid] for writes.
1566///
1567/// * `dst` must be properly aligned. Use [`write_unaligned`] if this is not the
1568///   case.
1569///
1570/// Note that even if `T` has size `0`, the pointer must be properly aligned.
1571///
1572/// [valid]: self#safety
1573///
1574/// # Examples
1575///
1576/// Basic usage:
1577///
1578/// ```
1579/// let mut x = 0;
1580/// let y = &mut x as *mut i32;
1581/// let z = 12;
1582///
1583/// unsafe {
1584///     std::ptr::write(y, z);
1585///     assert_eq!(std::ptr::read(y), 12);
1586/// }
1587/// ```
1588///
1589/// Manually implement [`mem::swap`]:
1590///
1591/// ```
1592/// use std::ptr;
1593///
1594/// fn swap<T>(a: &mut T, b: &mut T) {
1595///     unsafe {
1596///         // Create a bitwise copy of the value at `a` in `tmp`.
1597///         let tmp = ptr::read(a);
1598///
1599///         // Exiting at this point (either by explicitly returning or by
1600///         // calling a function which panics) would cause the value in `tmp` to
1601///         // be dropped while the same value is still referenced by `a`. This
1602///         // could trigger undefined behavior if `T` is not `Copy`.
1603///
1604///         // Create a bitwise copy of the value at `b` in `a`.
1605///         // This is safe because mutable references cannot alias.
1606///         ptr::copy_nonoverlapping(b, a, 1);
1607///
1608///         // As above, exiting here could trigger undefined behavior because
1609///         // the same value is referenced by `a` and `b`.
1610///
1611///         // Move `tmp` into `b`.
1612///         ptr::write(b, tmp);
1613///
1614///         // `tmp` has been moved (`write` takes ownership of its second argument),
1615///         // so nothing is dropped implicitly here.
1616///     }
1617/// }
1618///
1619/// let mut foo = "foo".to_owned();
1620/// let mut bar = "bar".to_owned();
1621///
1622/// swap(&mut foo, &mut bar);
1623///
1624/// assert_eq!(foo, "bar");
1625/// assert_eq!(bar, "foo");
1626/// ```
1627#[inline]
1628#[stable(feature = "rust1", since = "1.0.0")]
1629#[rustc_const_stable(feature = "const_ptr_write", since = "1.83.0")]
1630#[rustc_diagnostic_item = "ptr_write"]
1631#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1632pub const unsafe fn write<T>(dst: *mut T, src: T) {
1633    // Semantically, it would be fine for this to be implemented as a
1634    // `copy_nonoverlapping` and appropriate drop suppression of `src`.
1635
1636    // However, implementing via that currently produces more MIR than is ideal.
1637    // Using an intrinsic keeps it down to just the simple `*dst = move src` in
1638    // MIR (11 statements shorter, at the time of writing), and also allows
1639    // `src` to stay an SSA value in codegen_ssa, rather than a memory one.
1640
1641    // SAFETY: the caller must guarantee that `dst` is valid for writes.
1642    // `dst` cannot overlap `src` because the caller has mutable access
1643    // to `dst` while `src` is owned by this function.
1644    unsafe {
1645        #[cfg(debug_assertions)] // Too expensive to always enable (for now?)
1646        ub_checks::assert_unsafe_precondition!(
1647            check_language_ub,
1648            "ptr::write requires that the pointer argument is aligned and non-null",
1649            (
1650                addr: *mut () = dst as *mut (),
1651                align: usize = align_of::<T>(),
1652                is_zst: bool = T::IS_ZST,
1653            ) => ub_checks::maybe_is_aligned_and_not_null(addr, align, is_zst)
1654        );
1655        intrinsics::write_via_move(dst, src)
1656    }
1657}
1658
1659/// Overwrites a memory location with the given value without reading or
1660/// dropping the old value.
1661///
1662/// Unlike [`write()`], the pointer may be unaligned.
1663///
1664/// `write_unaligned` does not drop the contents of `dst`. This is safe, but it
1665/// could leak allocations or resources, so care should be taken not to overwrite
1666/// an object that should be dropped.
1667///
1668/// Additionally, it does not drop `src`. Semantically, `src` is moved into the
1669/// location pointed to by `dst`.
1670///
1671/// This is appropriate for initializing uninitialized memory, or overwriting
1672/// memory that has previously been read with [`read_unaligned`].
1673///
1674/// # Safety
1675///
1676/// Behavior is undefined if any of the following conditions are violated:
1677///
1678/// * `dst` must be [valid] for writes.
1679///
1680/// [valid]: self#safety
1681///
1682/// ## On `packed` structs
1683///
1684/// Attempting to create a raw pointer to an `unaligned` struct field with
1685/// an expression such as `&packed.unaligned as *const FieldType` creates an
1686/// intermediate unaligned reference before converting that to a raw pointer.
1687/// It does not matter that this reference is temporary and immediately cast:
1688/// the compiler always expects references to be properly aligned.
1689/// As a result, using `&packed.unaligned as *const FieldType` causes immediate
1690/// *undefined behavior* in your program.
1691///
1692/// Instead, you must use the `&raw mut` syntax to create the pointer.
1693/// You may use that constructed pointer together with this function.
1694///
1695/// An example of how to do it and how this relates to `write_unaligned` is:
1696///
1697/// ```
1698/// #[repr(packed, C)]
1699/// struct Packed {
1700///     _padding: u8,
1701///     unaligned: u32,
1702/// }
1703///
1704/// let mut packed: Packed = unsafe { std::mem::zeroed() };
1705///
1706/// // Take the address of a 32-bit integer which is not aligned.
1707/// // In contrast to `&packed.unaligned as *mut _`, this has no undefined behavior.
1708/// let unaligned = &raw mut packed.unaligned;
1709///
1710/// unsafe { std::ptr::write_unaligned(unaligned, 42) };
1711///
1712/// assert_eq!({packed.unaligned}, 42); // `{...}` forces copying the field instead of creating a reference.
1713/// ```
1714///
1715/// Accessing unaligned fields directly with e.g. `packed.unaligned` is safe, however
1716/// (as can be seen in the `assert_eq!` above).
1717///
1718/// # Examples
1719///
1720/// Write a `usize` value to a byte buffer:
1721///
1722/// ```
1723/// fn write_usize(x: &mut [u8], val: usize) {
1724///     assert!(x.len() >= size_of::<usize>());
1725///
1726///     let ptr = x.as_mut_ptr() as *mut usize;
1727///
1728///     unsafe { ptr.write_unaligned(val) }
1729/// }
1730/// ```
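///
/// And a quick check of that helper (a sketch; `write_usize` is the function
/// just defined, again writing at an odd offset):
///
/// ```
/// # fn write_usize(x: &mut [u8], val: usize) {
/// #     assert!(x.len() >= size_of::<usize>());
/// #     let ptr = x.as_mut_ptr() as *mut usize;
/// #     unsafe { ptr.write_unaligned(val) }
/// # }
/// let mut buf = [0u8; size_of::<usize>() + 1];
/// write_usize(&mut buf[1..], usize::MAX);
/// // Every byte of `usize::MAX` is `0xFF`, regardless of endianness.
/// assert!(buf[1..].iter().all(|&b| b == 0xFF));
/// assert_eq!(buf[0], 0);
/// ```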
1731#[inline]
1732#[stable(feature = "ptr_unaligned", since = "1.17.0")]
1733#[rustc_const_stable(feature = "const_ptr_write", since = "1.83.0")]
1734#[rustc_diagnostic_item = "ptr_write_unaligned"]
1735#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1736pub const unsafe fn write_unaligned<T>(dst: *mut T, src: T) {
1737    // SAFETY: the caller must guarantee that `dst` is valid for writes.
1738    // `dst` cannot overlap `src` because the caller has mutable access
1739    // to `dst` while `src` is owned by this function.
1740    unsafe {
1741        copy_nonoverlapping((&raw const src) as *const u8, dst as *mut u8, size_of::<T>());
1742        // We are calling the intrinsic directly to avoid function calls in the generated code.
1743        intrinsics::forget(src);
1744    }
1745}
1746
1747/// Performs a volatile read of the value from `src` without moving it. This
1748/// leaves the memory in `src` unchanged.
1749///
1750/// Volatile operations are intended to act on I/O memory, and are guaranteed
1751/// to not be elided or reordered by the compiler across other volatile
1752/// operations.
1753///
1754/// # Notes
1755///
1756/// Rust does not currently have a rigorously and formally defined memory model,
1757/// so the precise semantics of what "volatile" means here is subject to change
1758/// over time. That being said, the semantics will almost always end up pretty
1759/// similar to [C11's definition of volatile][c11].
1760///
1761/// The compiler shouldn't change the relative order or number of volatile
1762/// memory operations. However, volatile memory operations on zero-sized types
1763/// (e.g., if a zero-sized type is passed to `read_volatile`) are noops
1764/// and may be ignored.
1765///
1766/// [c11]: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf
1767///
1768/// # Safety
1769///
1770/// Behavior is undefined if any of the following conditions are violated:
1771///
1772/// * `src` must be [valid] for reads.
1773///
1774/// * `src` must be properly aligned.
1775///
1776/// * `src` must point to a properly initialized value of type `T`.
1777///
1778/// Like [`read`], `read_volatile` creates a bitwise copy of `T`, regardless of
1779/// whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the returned
1780/// value and the value at `*src` can [violate memory safety][read-ownership].
1781/// However, storing non-[`Copy`] types in volatile memory is almost certainly
1782/// incorrect.
1783///
1784/// Note that even if `T` has size `0`, the pointer must be properly aligned.
1785///
1786/// [valid]: self#safety
1787/// [read-ownership]: read#ownership-of-the-returned-value
1788///
1789/// Just like in C, whether an operation is volatile has no bearing whatsoever
1790/// on questions involving concurrent access from multiple threads. Volatile
1791/// accesses behave exactly like non-atomic accesses in that regard. In particular,
1792/// a race between a `read_volatile` and any write operation to the same location
1793/// is undefined behavior.
1794///
1795/// # Examples
1796///
1797/// Basic usage:
1798///
1799/// ```
1800/// let x = 12;
1801/// let y = &x as *const i32;
1802///
1803/// unsafe {
1804///     assert_eq!(std::ptr::read_volatile(y), 12);
1805/// }
1806/// ```
1807#[inline]
1808#[stable(feature = "volatile", since = "1.9.0")]
1809#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1810#[rustc_diagnostic_item = "ptr_read_volatile"]
1811pub unsafe fn read_volatile<T>(src: *const T) -> T {
1812    // SAFETY: the caller must uphold the safety contract for `volatile_load`.
1813    unsafe {
1814        ub_checks::assert_unsafe_precondition!(
1815            check_language_ub,
1816            "ptr::read_volatile requires that the pointer argument is aligned and non-null",
1817            (
1818                addr: *const () = src as *const (),
1819                align: usize = align_of::<T>(),
1820                is_zst: bool = T::IS_ZST,
1821            ) => ub_checks::maybe_is_aligned_and_not_null(addr, align, is_zst)
1822        );
1823        intrinsics::volatile_load(src)
1824    }
1825}
1826
1827/// Performs a volatile write of a memory location with the given value without
1828/// reading or dropping the old value.
1829///
1830/// Volatile operations are intended to act on I/O memory, and are guaranteed
1831/// to not be elided or reordered by the compiler across other volatile
1832/// operations.
1833///
1834/// `write_volatile` does not drop the contents of `dst`. This is safe, but it
1835/// could leak allocations or resources, so care should be taken not to overwrite
1836/// an object that should be dropped.
1837///
1838/// Additionally, it does not drop `src`. Semantically, `src` is moved into the
1839/// location pointed to by `dst`.
1840///
1841/// # Notes
1842///
1843/// Rust does not currently have a rigorously and formally defined memory model,
1844/// so the precise semantics of what "volatile" means here is subject to change
1845/// over time. That being said, the semantics will almost always end up pretty
1846/// similar to [C11's definition of volatile][c11].
1847///
1848/// The compiler shouldn't change the relative order or number of volatile
1849/// memory operations. However, volatile memory operations on zero-sized types
1850/// (e.g., if a zero-sized type is passed to `write_volatile`) are noops
1851/// and may be ignored.
1852///
1853/// [c11]: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf
1854///
1855/// # Safety
1856///
1857/// Behavior is undefined if any of the following conditions are violated:
1858///
1859/// * `dst` must be [valid] for writes.
1860///
1861/// * `dst` must be properly aligned.
1862///
1863/// Note that even if `T` has size `0`, the pointer must be properly aligned.
1864///
1865/// [valid]: self#safety
1866///
1867/// Just like in C, whether an operation is volatile has no bearing whatsoever
1868/// on questions involving concurrent access from multiple threads. Volatile
1869/// accesses behave exactly like non-atomic accesses in that regard. In particular,
1870/// a race between a `write_volatile` and any other operation (reading or writing)
1871/// on the same location is undefined behavior.
1872///
1873/// # Examples
1874///
1875/// Basic usage:
1876///
1877/// ```
1878/// let mut x = 0;
1879/// let y = &mut x as *mut i32;
1880/// let z = 12;
1881///
1882/// unsafe {
1883///     std::ptr::write_volatile(y, z);
1884///     assert_eq!(std::ptr::read_volatile(y), 12);
1885/// }
1886/// ```
1887#[inline]
1888#[stable(feature = "volatile", since = "1.9.0")]
1889#[rustc_diagnostic_item = "ptr_write_volatile"]
1890#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1891pub unsafe fn write_volatile<T>(dst: *mut T, src: T) {
1892    // SAFETY: the caller must uphold the safety contract for `volatile_store`.
1893    unsafe {
1894        ub_checks::assert_unsafe_precondition!(
1895            check_language_ub,
1896            "ptr::write_volatile requires that the pointer argument is aligned and non-null",
1897            (
1898                addr: *mut () = dst as *mut (),
1899                align: usize = align_of::<T>(),
1900                is_zst: bool = T::IS_ZST,
1901            ) => ub_checks::maybe_is_aligned_and_not_null(addr, align, is_zst)
1902        );
1903        intrinsics::volatile_store(dst, src);
1904    }
1905}
1906
1907/// Align pointer `p`.
1908///
1909/// Calculates the offset (in elements of size `size_of::<T>()`) that has to be applied
1910/// to pointer `p` in order to align it to `a`.
1911///
1912/// # Safety
1913/// `a` must be a power of two.
1914///
1915/// # Notes
1916/// This implementation has been carefully tailored to not panic. It is UB for this to panic.
1917/// The only real change that can be made here is a change of `INV_TABLE_MOD_16` and its
1918/// associated constants.
1919///
1920/// If we ever decide to make it possible to call the intrinsic with `a` that is not a
1921/// power-of-two, it will probably be more prudent to just change to a naive implementation rather
1922/// than trying to adapt this to accommodate that change.
1923///
1924/// Any questions go to @nagisa.
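///
/// A minimal sketch of the contract, using the public [`pointer::align_offset`]
/// method (which is implemented on top of this function):
///
/// ```
/// let x = [0u8; 16];
/// let p = x.as_ptr();
/// let off = p.align_offset(align_of::<u64>());
/// // For `u8` the stride is 1, so `off` is a plain byte offset and an aligned
/// // address always exists within the next `align_of::<u64>()` bytes.
/// assert!(off < align_of::<u64>());
/// assert_eq!(p.wrapping_add(off).addr() % align_of::<u64>(), 0);
/// ```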
1925#[allow(ptr_to_integer_transmute_in_consts)]
1926pub(crate) unsafe fn align_offset<T: Sized>(p: *const T, a: usize) -> usize {
1927    // FIXME(#75598): Direct use of these intrinsics improves codegen significantly at opt-level <=
1928    // 1, where the method versions of these operations are not inlined.
1929    use intrinsics::{
1930        assume, cttz_nonzero, exact_div, mul_with_overflow, unchecked_rem, unchecked_shl,
1931        unchecked_shr, unchecked_sub, wrapping_add, wrapping_mul, wrapping_sub,
1932    };
1933
1934    /// Calculate multiplicative modular inverse of `x` modulo `m`.
1935    ///
1936    /// This implementation is tailored for `align_offset` and has following preconditions:
1937    ///
1938    /// * `m` is a power-of-two;
1939    /// * `x < m`; (if `x ≥ m`, pass in `x % m` instead)
1940    ///
1941    /// Implementation of this function shall not panic. Ever.
1942    #[inline]
1943    const unsafe fn mod_inv(x: usize, m: usize) -> usize {
1944        /// Multiplicative modular inverse table modulo 2⁴ = 16.
1945        ///
1946/// Note that this table does not contain values for which an inverse does not exist (i.e.,
1947/// `0⁻¹ mod 16`, `2⁻¹ mod 16`, etc.)
1948        const INV_TABLE_MOD_16: [u8; 8] = [1, 11, 13, 7, 9, 3, 5, 15];
1949        /// Modulo for which the `INV_TABLE_MOD_16` is intended.
1950        const INV_TABLE_MOD: usize = 16;
1951
1952        // SAFETY: `m` is required to be a power-of-two, hence non-zero.
1953        let m_minus_one = unsafe { unchecked_sub(m, 1) };
1954        let mut inverse = INV_TABLE_MOD_16[(x & (INV_TABLE_MOD - 1)) >> 1] as usize;
1955        let mut mod_gate = INV_TABLE_MOD;
1956        // We iterate "up" using the following formula:
1957        //
1958        // $$ xy ≡ 1 (mod 2ⁿ) → xy (2 - xy) ≡ 1 (mod 2²ⁿ) $$
1959        //
1960        // This doubling step needs to be applied at least until `2²ⁿ ≥ m`, at which point we can
1961        // finally reduce the computation to our desired `m` by taking `inverse mod m`.
1962        //
1963        // This computation is `O(log log m)`, which is to say, that on 64-bit machines this loop
1964        // will always finish in at most 4 iterations.
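        //
        // For example, with `x = 3`: the table gives `3⁻¹ ≡ 11 (mod 2⁴)`, and one
        // iteration lifts this to `11 * (2 - 3 * 11) = -341 ≡ 171 (mod 2⁸)`;
        // indeed, `3 * 171 = 513 ≡ 1 (mod 2⁸)`.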
1965        loop {
1966            // y = y * (2 - xy) mod n
1967            //
1968        // Note that we use wrapping operations here intentionally – the original formula
1969            // uses e.g., subtraction `mod n`. It is entirely fine to do them `mod
1970            // usize::MAX` instead, because we take the result `mod n` at the end
1971            // anyway.
1972            if mod_gate >= m {
1973                break;
1974            }
1975            inverse = wrapping_mul(inverse, wrapping_sub(2usize, wrapping_mul(x, inverse)));
1976            let (new_gate, overflow) = mul_with_overflow(mod_gate, mod_gate);
1977            if overflow {
1978                break;
1979            }
1980            mod_gate = new_gate;
1981        }
1982        inverse & m_minus_one
1983    }
1984
1985    let stride = size_of::<T>();
1986
1987    let addr: usize = p.addr();
1988
1989    // SAFETY: `a` is a power-of-two, therefore non-zero.
1990    let a_minus_one = unsafe { unchecked_sub(a, 1) };
1991
1992    if stride == 0 {
1993        // SPECIAL_CASE: handle 0-sized types. No matter how many times we step, the address will
1994        // stay the same, so no offset will be able to align the pointer unless it is already
1995        // aligned. This branch _will_ be optimized out as `stride` is known at compile-time.
1996        let p_mod_a = addr & a_minus_one;
1997        return if p_mod_a == 0 { 0 } else { usize::MAX };
1998    }
1999
2000    // SAFETY: `stride == 0` case has been handled by the special case above.
2001    let a_mod_stride = unsafe { unchecked_rem(a, stride) };
2002    if a_mod_stride == 0 {
2003        // SPECIAL_CASE: In cases where `a` is divisible by `stride`, the byte offset to align a
2004        // pointer can be computed more simply through `-p (mod a)`. In the off-chance the byte
2005        // offset is not a multiple of `stride`, the input pointer was misaligned and no pointer
2006        // offset will be able to produce a `p` aligned to the specified `a`.
2007        //
2008        // The naive `-p (mod a)` equation inhibits LLVM's ability to select instructions
2009        // like `lea`. We compute `(round_up_to_next_alignment(p, a) - p)` instead. This
2010        // redistributes operations around the load-bearing, but pessimizing `and` instruction
2011        // sufficiently for LLVM to be able to utilize the various optimizations it knows about.
2012        //
2013        // LLVM handles the branch here particularly nicely. If this branch needs to be evaluated
2014        // at runtime, it will produce a mask `if addr_mod_stride == 0 { 0 } else { usize::MAX }`
2015        // in a branch-free way and then bitwise-OR it with whatever result the `-p mod a`
2016        // computation produces.
2017
2018        let aligned_address = wrapping_add(addr, a_minus_one) & wrapping_sub(0, a);
2019        let byte_offset = wrapping_sub(aligned_address, addr);
2020        // FIXME: Remove the assume after <https://github.com/llvm/llvm-project/issues/62502>
2021        // SAFETY: Masking by `-a` can only affect the low bits, and thus cannot have reduced
2022        // the value by more than `a-1`, so even though the intermediate values might have
2023        // wrapped, the byte_offset is always in `[0, a)`.
2024        unsafe { assume(byte_offset < a) };
2025
2026        // SAFETY: `stride == 0` case has been handled by the special case above.
2027        let addr_mod_stride = unsafe { unchecked_rem(addr, stride) };
2028
2029        return if addr_mod_stride == 0 {
2030            // SAFETY: `stride` is non-zero. This is guaranteed to divide exactly as well, because
2031            // addr has been verified to be aligned to the original type’s alignment requirements.
2032            unsafe { exact_div(byte_offset, stride) }
2033        } else {
2034            usize::MAX
2035        };
2036    }
2037
2038    // GENERAL_CASE: From here on we’re handling the very general case where `addr` may be
2039    // misaligned, there isn’t an obvious relationship between `stride` and `a` that we can take an
2040    // advantage of, etc. This case produces machine code that isn’t particularly high quality,
2041    // compared to the special cases above. The code produced here is still within the realm of
2042    // miracles, given the situations this case has to deal with.
2043
2044    // SAFETY: a is power-of-two hence non-zero. stride == 0 case is handled above.
2045    // FIXME(const-hack) replace with min
2046    let gcdpow = unsafe {
2047        let x = cttz_nonzero(stride);
2048        let y = cttz_nonzero(a);
2049        if x < y { x } else { y }
2050    };
2051    // SAFETY: gcdpow has an upper-bound that’s at most the number of bits in a `usize`.
2052    let gcd = unsafe { unchecked_shl(1usize, gcdpow) };
2053    // SAFETY: gcd is always greater or equal to 1.
2054    if addr & unsafe { unchecked_sub(gcd, 1) } == 0 {
2055        // This branch solves for the following linear congruence equation:
2056        //
2057        // ` p + so = 0 mod a `
2058        //
2059        // `p` here is the pointer value, `s` - stride of `T`, `o` offset in `T`s, and `a` - the
2060        // requested alignment.
2061        //
2062        // With `g = gcd(a, s)`, and the above condition asserting that `p` is also divisible by
2063        // `g`, we can denote `a' = a/g`, `s' = s/g`, `p' = p/g`, then this becomes equivalent to:
2064        //
2065        // ` p' + s'o = 0 mod a' `
2066        // ` o = (a' - (p' mod a')) * (s'^-1 mod a') `
2067        //
2068        // The first term is "the relative alignment of `p` to `a`" (divided by the `g`), the
2069        // second term is "how does incrementing `p` by `s` bytes change the relative alignment of
2070        // `p`" (again divided by `g`). Division by `g` is necessary to make the inverse well
2071        // formed if `a` and `s` are not co-prime.
2072        //
2073        // Furthermore, the result produced by this solution is not "minimal", so it is necessary
2074        // to take the result `o mod lcm(s, a)`. This `lcm(s, a)` is the same as `a'`.
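        //
        // As a worked example: for `addr = 2`, `stride = 6`, `a = 8` we have
        // `g = 2`, `a' = 4`, `s' = 3`, `p' = 1`, so the solution is
        // `o = (4 - 1) * (3⁻¹ mod 4) mod 4 = (3 * 3) mod 4 = 1`;
        // indeed, `2 + 6 * 1 = 8` is aligned to 8.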
2075
2076        // SAFETY: `gcdpow` has an upper-bound not greater than the number of trailing 0-bits in
2077        // `a`.
2078        let a2 = unsafe { unchecked_shr(a, gcdpow) };
2079        // SAFETY: `a2` is non-zero. Shifting `a` by `gcdpow` cannot shift out any of the set bits
2080        // in `a` (of which it has exactly one).
2081        let a2minus1 = unsafe { unchecked_sub(a2, 1) };
2082        // SAFETY: `gcdpow` has an upper-bound not greater than the number of trailing 0-bits in
2083        // `a`.
2084        let s2 = unsafe { unchecked_shr(stride & a_minus_one, gcdpow) };
2085        // SAFETY: `gcdpow` has an upper-bound not greater than the number of trailing 0-bits in
2086        // `a`. Furthermore, the subtraction cannot overflow, because `a2 = a >> gcdpow` will
2087        // always be strictly greater than `(p % a) >> gcdpow`.
2088        let minusp2 = unsafe { unchecked_sub(a2, unchecked_shr(addr & a_minus_one, gcdpow)) };
2089        // SAFETY: `a2` is a power-of-two, as proven above. `s2` is strictly less than `a2`
2090        // because `(s % a) >> gcdpow` is strictly less than `a >> gcdpow`.
2091        return wrapping_mul(minusp2, unsafe { mod_inv(s2, a2) }) & a2minus1;
2092    }
2093
2094    // Cannot be aligned at all.
2095    usize::MAX
2096}
2097
2098/// Compares raw pointers for equality.
2099///
2100/// This is the same as using the `==` operator, but less generic:
2101/// the arguments have to be `*const T` raw pointers,
2102/// not anything that implements `PartialEq`.
2103///
2104/// This can be used to compare `&T` references (which coerce to `*const T` implicitly)
2105/// by their address rather than comparing the values they point to
2106/// (which is what the `PartialEq for &T` implementation does).
2107///
2108/// When comparing wide pointers, both the address and the metadata are tested for equality.
2109/// However, note that comparing trait object pointers (`*const dyn Trait`) is unreliable: pointers
2110/// to values of the same underlying type can compare inequal (because vtables are duplicated in
2111/// multiple codegen units), and pointers to values of *different* underlying type can compare equal
2112/// (since identical vtables can be deduplicated within a codegen unit).
2113///
2114/// # Examples
2115///
2116/// ```
2117/// use std::ptr;
2118///
2119/// let five = 5;
2120/// let other_five = 5;
2121/// let five_ref = &five;
2122/// let same_five_ref = &five;
2123/// let other_five_ref = &other_five;
2124///
2125/// assert!(five_ref == same_five_ref);
2126/// assert!(ptr::eq(five_ref, same_five_ref));
2127///
2128/// assert!(five_ref == other_five_ref);
2129/// assert!(!ptr::eq(five_ref, other_five_ref));
2130/// ```
2131///
2132/// Slices are also compared by their length (fat pointers):
2133///
2134/// ```
2135/// let a = [1, 2, 3];
2136/// assert!(std::ptr::eq(&a[..3], &a[..3]));
2137/// assert!(!std::ptr::eq(&a[..2], &a[..3]));
2138/// assert!(!std::ptr::eq(&a[0..2], &a[1..3]));
2139/// ```
2140#[stable(feature = "ptr_eq", since = "1.17.0")]
2141#[inline(always)]
2142#[must_use = "pointer comparison produces a value"]
2143#[rustc_diagnostic_item = "ptr_eq"]
2144#[allow(ambiguous_wide_pointer_comparisons)] // it's actually clear here
2145pub fn eq<T: ?Sized>(a: *const T, b: *const T) -> bool {
2146    a == b
2147}
2148
2149/// Compares the *addresses* of the two pointers for equality,
2150/// ignoring any metadata in fat pointers.
2151///
2152/// If the arguments are thin pointers of the same type,
2153/// then this is the same as [`eq`].
2154///
2155/// # Examples
2156///
2157/// ```
2158/// use std::ptr;
2159///
2160/// let whole: &[i32; 3] = &[1, 2, 3];
2161/// let first: &i32 = &whole[0];
2162///
2163/// assert!(ptr::addr_eq(whole, first));
2164/// assert!(!ptr::eq::<dyn std::fmt::Debug>(whole, first));
2165/// ```
2166#[stable(feature = "ptr_addr_eq", since = "1.76.0")]
2167#[inline(always)]
2168#[must_use = "pointer comparison produces a value"]
2169pub fn addr_eq<T: ?Sized, U: ?Sized>(p: *const T, q: *const U) -> bool {
2170    (p as *const ()) == (q as *const ())
2171}
2172
2173/// Compares the *addresses* of the two function pointers for equality.
2174///
2175/// This is the same as `f == g`, but using this function makes clear that the potentially
2176/// surprising semantics of function pointer comparison are involved.
2177///
2178/// There are **very few guarantees** about how functions are compiled and they have no intrinsic
2179/// “identity”; in particular, this comparison:
2180///
2181/// * May return `true` unexpectedly, in cases where functions are equivalent.
2182///
2183///   For example, the following program is likely (but not guaranteed) to print `(true, true)`
2184///   when compiled with optimization:
2185///
2186///   ```
2187///   let f: fn(i32) -> i32 = |x| x;
2188///   let g: fn(i32) -> i32 = |x| x + 0;  // different closure, different body
2189///   let h: fn(u32) -> u32 = |x| x + 0;  // different signature too
2190///   dbg!(std::ptr::fn_addr_eq(f, g), std::ptr::fn_addr_eq(f, h)); // not guaranteed to be equal
2191///   ```
2192///
2193/// * May return `false` in any case.
2194///
2195///   This is particularly likely with generic functions but may happen with any function.
2196///   (From an implementation perspective, this is possible because functions may sometimes be
2197///   processed more than once by the compiler, resulting in duplicate machine code.)
2198///
2199/// Despite these false positives and false negatives, this comparison can still be useful.
2200/// Specifically, if
2201///
2202/// * `T` is the same type as `U`, `T` is a [subtype] of `U`, or `U` is a [subtype] of `T`, and
2203/// * `ptr::fn_addr_eq(f, g)` returns true,
2204///
2205/// then calling `f` and calling `g` will be equivalent.
2206///
2208/// # Examples
2209///
2210/// ```
2211/// use std::ptr;
2212///
2213/// fn a() { println!("a"); }
2214/// fn b() { println!("b"); }
2215/// assert!(!ptr::fn_addr_eq(a as fn(), b as fn()));
2216/// ```
2217///
2218/// [subtype]: https://doc.rust-lang.org/reference/subtyping.html
2219#[stable(feature = "ptr_fn_addr_eq", since = "1.85.0")]
2220#[inline(always)]
2221#[must_use = "function pointer comparison produces a value"]
2222pub fn fn_addr_eq<T: FnPtr, U: FnPtr>(f: T, g: U) -> bool {
2223    f.addr() == g.addr()
2224}
2225
2226/// Hash a raw pointer.
2227///
2228/// This can be used to hash a `&T` reference (which coerces to `*const T` implicitly)
2229/// by its address rather than the value it points to
2230/// (which is what the `Hash for &T` implementation does).
2231///
2232/// # Examples
2233///
2234/// ```
2235/// use std::hash::{DefaultHasher, Hash, Hasher};
2236/// use std::ptr;
2237///
2238/// let five = 5;
2239/// let five_ref = &five;
2240///
2241/// let mut hasher = DefaultHasher::new();
2242/// ptr::hash(five_ref, &mut hasher);
2243/// let actual = hasher.finish();
2244///
2245/// let mut hasher = DefaultHasher::new();
2246/// (five_ref as *const i32).hash(&mut hasher);
2247/// let expected = hasher.finish();
2248///
2249/// assert_eq!(actual, expected);
2250/// ```
2251#[stable(feature = "ptr_hash", since = "1.35.0")]
2252pub fn hash<T: ?Sized, S: hash::Hasher>(hashee: *const T, into: &mut S) {
2253    use crate::hash::Hash;
2254    hashee.hash(into);
2255}
2256
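// The `FnPtr` impls below compare, order, hash, and format function pointers by
// their address, consistent with `fn_addr_eq`; the same caveats about function
// "identity" described there apply to these impls as well.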
2257#[stable(feature = "fnptr_impls", since = "1.4.0")]
2258impl<F: FnPtr> PartialEq for F {
2259    #[inline]
2260    fn eq(&self, other: &Self) -> bool {
2261        self.addr() == other.addr()
2262    }
2263}
2264#[stable(feature = "fnptr_impls", since = "1.4.0")]
2265impl<F: FnPtr> Eq for F {}
2266
2267#[stable(feature = "fnptr_impls", since = "1.4.0")]
2268impl<F: FnPtr> PartialOrd for F {
2269    #[inline]
2270    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
2271        self.addr().partial_cmp(&other.addr())
2272    }
2273}
2274#[stable(feature = "fnptr_impls", since = "1.4.0")]
2275impl<F: FnPtr> Ord for F {
2276    #[inline]
2277    fn cmp(&self, other: &Self) -> Ordering {
2278        self.addr().cmp(&other.addr())
2279    }
2280}
2281
2282#[stable(feature = "fnptr_impls", since = "1.4.0")]
2283impl<F: FnPtr> hash::Hash for F {
2284    fn hash<HH: hash::Hasher>(&self, state: &mut HH) {
2285        state.write_usize(self.addr() as _)
2286    }
2287}
2288
2289#[stable(feature = "fnptr_impls", since = "1.4.0")]
2290impl<F: FnPtr> fmt::Pointer for F {
2291    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
2292        fmt::pointer_fmt_inner(self.addr() as _, f)
2293    }
2294}
2295
2296#[stable(feature = "fnptr_impls", since = "1.4.0")]
2297impl<F: FnPtr> fmt::Debug for F {
2298    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
2299        fmt::pointer_fmt_inner(self.addr() as _, f)
2300    }
2301}
2302
2303/// Creates a `const` raw pointer to a place, without creating an intermediate reference.
2304///
2305/// `addr_of!(expr)` is equivalent to `&raw const expr`. The macro is *soft-deprecated*;
2306/// use `&raw const` instead.
2307///
2308/// It is still an open question under which conditions writing through an `addr_of!`-created
2309/// pointer is permitted. If the place `expr` evaluates to is based on a raw pointer, then the
2310/// result of `addr_of!` inherits all permissions from that raw pointer. However, if the place is
2311/// based on a reference, local variable, or `static`, then until all details are decided, the same
2312/// rules as for shared references apply: it is UB to write through a pointer created with this
2313/// operation, except for bytes located inside an `UnsafeCell`. Use `&raw mut` (or [`addr_of_mut`])
2314/// to create a raw pointer that definitely permits mutation.
2315///
2316/// Creating a reference with `&`/`&mut` is only allowed if the pointer is properly aligned
2317/// and points to initialized data. For cases where those requirements do not hold,
2318/// raw pointers should be used instead. However, `&expr as *const _` creates a reference
2319/// before casting it to a raw pointer, and that reference is subject to the same rules
2320/// as all other references. This macro can create a raw pointer *without* creating
2321/// a reference first.
2322///
2323/// See [`addr_of_mut`] for how to create a pointer to uninitialized data.
2324/// Doing that with `addr_of` would not make much sense since one could only
2325/// read the data, and that would be Undefined Behavior.
2326///
2327/// # Safety
2328///
2329/// The `expr` in `addr_of!(expr)` is evaluated as a place expression, but never loads from the
2330/// place or requires the place to be dereferenceable. This means that `addr_of!((*ptr).field)`
2331/// still requires the projection to `field` to be in-bounds, using the same rules as [`offset`].
2332/// However, `addr_of!(*ptr)` is defined behavior even if `ptr` is null, dangling, or misaligned.
2333///
2334/// Note that `Deref`/`Index` coercions (and their mutable counterparts) are applied inside
2335/// `addr_of!` like everywhere else, in which case a reference is created to call `Deref::deref` or
2336/// `Index::index`, respectively. The statements above only apply when no such coercions are
2337/// applied.
2338///
2339/// [`offset`]: pointer::offset
2340///
2341/// # Example
2342///
2343/// **Correct usage: Creating a pointer to unaligned data**
2344///
2345/// ```
2346/// use std::ptr;
2347///
2348/// #[repr(packed)]
2349/// struct Packed {
2350///     f1: u8,
2351///     f2: u16,
2352/// }
2353///
2354/// let packed = Packed { f1: 1, f2: 2 };
2355/// // `&packed.f2` would create an unaligned reference, and thus be Undefined Behavior!
2356/// let raw_f2 = ptr::addr_of!(packed.f2);
2357/// assert_eq!(unsafe { raw_f2.read_unaligned() }, 2);
2358/// ```
2359///
2360/// **Incorrect usage: Out-of-bounds fields projection**
2361///
2362/// ```rust,no_run
2363/// use std::ptr;
2364///
2365/// #[repr(C)]
2366/// struct MyStruct {
2367///     field1: i32,
2368///     field2: i32,
2369/// }
2370///
2371/// let ptr: *const MyStruct = ptr::null();
2372/// let fieldptr = unsafe { ptr::addr_of!((*ptr).field2) }; // Undefined Behavior ⚠️
2373/// ```
2374///
2375/// The field projection `.field2` would offset the pointer by 4 bytes,
2376/// but the pointer is not in-bounds of an allocation for 4 bytes,
2377/// so this offset is Undefined Behavior.
2378/// See the [`offset`] docs for a full list of requirements for inbounds pointer arithmetic; the
2379/// same requirements apply to field projections, even inside `addr_of!`. (In particular, it makes
2380/// no difference whether the pointer is null or dangling.)
2381#[stable(feature = "raw_ref_macros", since = "1.51.0")]
2382#[rustc_macro_transparency = "semitransparent"]
2383pub macro addr_of($place:expr) {
2384    &raw const $place
2385}
2386
2387/// Creates a `mut` raw pointer to a place, without creating an intermediate reference.
2388///
2389/// `addr_of_mut!(expr)` is equivalent to `&raw mut expr`. The macro is *soft-deprecated*;
2390/// use `&raw mut` instead.
2391///
2392/// Creating a reference with `&`/`&mut` is only allowed if the pointer is properly aligned
2393/// and points to initialized data. For cases where those requirements do not hold,
2394/// raw pointers should be used instead. However, `&mut expr as *mut _` creates a reference
2395/// before casting it to a raw pointer, and that reference is subject to the same rules
2396/// as all other references. This macro can create a raw pointer *without* creating
2397/// a reference first.
2398///
2399/// # Safety
2400///
2401/// The `expr` in `addr_of_mut!(expr)` is evaluated as a place expression, but never loads from the
2402/// place or requires the place to be dereferenceable. This means that `addr_of_mut!((*ptr).field)`
2403/// still requires the projection to `field` to be in-bounds, using the same rules as [`offset`].
2404/// However, `addr_of_mut!(*ptr)` is defined behavior even if `ptr` is null, dangling, or misaligned.
2405///
2406/// Note that `Deref`/`Index` coercions (and their mutable counterparts) are applied inside
2407/// `addr_of_mut!` like everywhere else, in which case a reference is created to call `Deref::deref`
2408/// or `Index::index`, respectively. The statements above only apply when no such coercions are
2409/// applied.
2410///
2411/// [`offset`]: pointer::offset
2412///
2413/// # Examples
2414///
2415/// **Correct usage: Creating a pointer to unaligned data**
2416///
2417/// ```
2418/// use std::ptr;
2419///
2420/// #[repr(packed)]
2421/// struct Packed {
2422///     f1: u8,
2423///     f2: u16,
2424/// }
2425///
2426/// let mut packed = Packed { f1: 1, f2: 2 };
2427/// // `&mut packed.f2` would create an unaligned reference, and thus be Undefined Behavior!
2428/// let raw_f2 = ptr::addr_of_mut!(packed.f2);
2429/// unsafe { raw_f2.write_unaligned(42); }
2430/// assert_eq!({packed.f2}, 42); // `{...}` forces copying the field instead of creating a reference.
2431/// ```
2432///
2433/// **Correct usage: Creating a pointer to uninitialized data**
2434///
2435/// ```rust
2436/// use std::{ptr, mem::MaybeUninit};
2437///
2438/// struct Demo {
2439///     field: bool,
2440/// }
2441///
2442/// let mut uninit = MaybeUninit::<Demo>::uninit();
2443/// // `&uninit.as_mut().field` would create a reference to an uninitialized `bool`,
2444/// // and thus be Undefined Behavior!
2445/// let f1_ptr = unsafe { ptr::addr_of_mut!((*uninit.as_mut_ptr()).field) };
2446/// unsafe { f1_ptr.write(true); }
2447/// let init = unsafe { uninit.assume_init() };
2448/// ```
2449///
2450/// **Incorrect usage: Out-of-bounds fields projection**
2451///
2452/// ```rust,no_run
2453/// use std::ptr;
2454///
2455/// #[repr(C)]
2456/// struct MyStruct {
2457///     field1: i32,
2458///     field2: i32,
2459/// }
2460///
2461/// let ptr: *mut MyStruct = ptr::null_mut();
2462/// let fieldptr = unsafe { ptr::addr_of_mut!((*ptr).field2) }; // Undefined Behavior ⚠️
2463/// ```
2464///
2465/// The field projection `.field2` would offset the pointer by 4 bytes,
2466/// but the pointer is not in-bounds of an allocation for 4 bytes,
2467/// so this offset is Undefined Behavior.
2468/// See the [`offset`] docs for a full list of requirements for inbounds pointer arithmetic; the
2469/// same requirements apply to field projections, even inside `addr_of_mut!`. (In particular, it
2470/// makes no difference whether the pointer is null or dangling.)
2471#[stable(feature = "raw_ref_macros", since = "1.51.0")]
2472#[rustc_macro_transparency = "semitransparent"]
2473pub macro addr_of_mut($place:expr) {
2474    &raw mut $place
2475}