Main

A major thrust of quantum chemistry and material science involves the quantitative prediction of electronic structure properties of molecules and materials. Although powerful computational techniques have been developed over the past decades, especially for weakly correlated systems1,2,3,4, the development of tools for understanding and predicting the properties of materials that feature strongly correlated electrons remains a challenge5,6,7. Quantum computing is a promising route to efficiently capturing such quantum correlations8,9,10, and algorithms for Hamiltonian simulation and energy estimation11 with good asymptotic scaling have been developed. However, existing methods for simulating large-scale electronic structure problems are prohibitively expensive to run on near-term quantum hardware12, highlighting the need for more efficient approaches.

One approach to capturing the complexity of strongly correlated systems utilizes model Hamiltonians13, such as the generalized Ising, Heisenberg and Hubbard models, which describe the interactions between the active degrees of freedom at low temperatures. Like other coarse-graining or effective Hamiltonian approaches14, model parameters can be computed from an ab initio electronic structure problem using a number of classical techniques15,16,17,18,19,20. Furthermore, model Hamiltonians exhibit features such as low-degree connectivity that simplify implementation, making them particularly promising candidates for quantum simulation21,22,23,24,25,26. Although approximate, simplified model Hamiltonians have proved valuable in analysing strongly correlated problems27,28,29 for small system sizes, where accurate but costly classical methods can be applied. However, as the system size increases, classical numerical methods struggle to reliably solve strongly correlated model systems, as the relevant low-energy states often exhibit a large degree of entanglement. In this Article, we focus on the programmable quantum simulations of spin models. These correspond to a class of Hamiltonians that describe compounds where unpaired electrons become localized at low temperatures and can therefore be represented as effective local spins with S ≥ 1/2. These include many polynuclear transition metal compounds and materials containing d- and f-block elements, which play a central role in chemical catalysis and magnetism20,28,29.

Recent advances in quantum simulation30,31 have enabled the study of paradigmatic model Hamiltonians with local connectivity. In particular, experiments have probed non-equilibrium quantum dynamics32,33,34, exotic forms of emergent magnetism35,36,37,38 and long-range entangled topological matter39,40,41 in regimes that push the limits of state-of-the-art classical simulations42. The model Hamiltonians describing realistic molecules and materials, however, often contain more complex features, including anisotropy, non-locality and higher-order interactions29, demanding a higher degree of programmability18. Although universal quantum computers can, in principle, simulate such systems, standard implementations based on local two-qubit gates require large circuit depths23 to realize complex interactions and long-range connectivity. Thus, for optimal performance in devices with limited coherence, it is essential to utilize hardware-efficient capabilities to simulate such systems.

Here, we introduce a framework to simulate model spin Hamiltonians (Fig. 1) on reconfigurable quantum devices. The approach combines two elements. First, we describe a hybrid digital–analogue simulation toolbox for realizing complex spin interactions, which combines the programmability of digital simulation with the efficiency of hardware-optimized multi-qubit analogue operations. Then, we introduce an algorithm, dubbed many-body spectroscopy, which leverages time dynamics and snapshot measurements to extract detailed spectral information of the model Hamiltonian in a resource-efficient way43. We describe in detail how these methods can be implemented using Rydberg atom arrays32,41 and discuss their applicability to other emerging platforms that can support multi-qubit control and dynamic programmable connectivity44, such as the ion quantum charge-coupled device (QCCD) architecture45,46. Finally, we illustrate potential applications of the framework on model Hamiltonians describing a prototypical biochemical catalyst and two-dimensional (2D) materials.

Fig. 1: Model Hamiltonian approach to quantum simulation of strongly correlated matter.
figure 1

a, The procedure starts with a description of the target molecule or material structure, whose electronic structure problem is reduced using classical computational chemistry techniques to a simpler effective Hamiltonian that captures the relevant low-energy behaviours. b, Here, we study problems that are modelled by spin Hamiltonians with potentially non-local connectivity and generic on-site spin S ≥ 1/2, where each spin is composed of localized, unpaired electrons in the original molecule. The key simplification comes from capturing charge fluctuations perturbatively, which is a good approximation in certain contexts. c, Programmable quantum simulation is then used to calculate properties of the model Hamiltonian. We develop a simulation framework, based on encoding spins into clusters of qubits, that can be readily implemented on existing hardware. The toolbox enables efficient generation of complex spin interactions by leveraging dynamical reconfigurability and hardware-optimized multi-qubit gates (Figs. 2 and 3). The quantum simulator performs time evolution under the spin Hamiltonian for various simulation times t, and each qubit is projectively measured to produce a set of snapshots. Subsequent classical processing extracts properties such as the low-lying excitation spectrum and magnetic susceptibilities, all from the same dataset (Figs. 4–6).

Engineering spin Hamiltonians

The general Hamiltonian we consider is

$$\begin{array}{l}H=\mathop{\sum}\limits_{i,\alpha }{B}_{i}^{\alpha }{\hat{S}}_{i}^{\alpha }\,+\,\mathop{\sum}\limits _{ij,\alpha \beta }{J}_{ij}^{\,\alpha \beta }{\hat{S}}_{i}^{\alpha }{\hat{S}}_{j}^{\,\beta }\,\\\qquad+\mathop{\sum}\limits_{ijk,\alpha \beta \gamma }{K}_{ijk}^{\,\alpha \beta \gamma }{\hat{S}}_{i}^{\alpha }{\hat{S}}_{j}^{\,\beta }{\hat{S}}_{k}^{\gamma }+{{\rm{higher}}\,{\rm{order}}},\end{array}$$
(1)

where \({\hat{S}}_{i}^{\alpha },\alpha =x,y,z\) are spin-Si operators of the ith spin (Si ≥ 1/2 can vary between sites), and the interaction coefficients (\({J}_{ij}^{\,\alpha \beta }\), \({K}_{ijk}^{\,\alpha \beta \gamma }\) and so on) are potentially long range (\({\hat{S}}\) represents a quantum mechanical operator. We distinguish between the spin operator \({\hat{S}}\) and the spin number S (no hat) throughout the paper). Hamiltonians of this form can capture the effects of many processes arising in physical compounds, including super-exchange, spin–orbit coupling, ring exchange and more29. In our approach, spin-S variables are encoded into the collective spin of 2S qubits, such that the ith spin in equation (1) is rewritten as

$$\begin{array}{r}{\hat{S}}_{i}^{\alpha }=\mathop{\sum }\limits_{a=1}^{2{S}_{i}}{\hat{s}}_{i,a}^{\alpha },\end{array}$$
(2)

where \({\hat{s}}_{i,a}^{\alpha }\) are the spin-1/2 operators of the ath qubit in the ith spin. Valid spin-Si states live in the symmetric subspace with maximum total spin per site \(\langle {({\hat{{\bf{S}}}}_{i})}^{2}\rangle ={S}_{i}({S}_{i}+1)\). While several alternate approaches to encoding spins with hardware-native qudits have been proposed recently47,48,49,50, the cluster approach introduced here uses the same controls as qubit-based computations, ensuring compatibility with existing setups51, and naturally supports simulation of models with mixed on-site spin.
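To make the encoding concrete, the following minimal numerical sketch (assuming Python with NumPy; it is not part of the paper's implementation) constructs the collective spin operators of equation (2) for a small cluster and verifies that the symmetric sector carries the maximal total spin.

```python
# A minimal numerical sketch (assuming NumPy; not the paper's implementation) of
# the cluster encoding of equation (2).
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
identity = np.eye(2)

def cluster_spin_ops(n_qubits):
    """Collective operators S^alpha = sum_a s_{a}^alpha for a cluster of qubits."""
    ops = []
    for s in (sx, sy, sz):
        total = np.zeros((2 ** n_qubits, 2 ** n_qubits), dtype=complex)
        for a in range(n_qubits):
            term = np.array([[1.0]])
            for b in range(n_qubits):
                term = np.kron(term, s if b == a else identity)
            total += term
        ops.append(total)
    return ops

# A spin-3/2 site is encoded in 2S = 3 qubits; valid (symmetric) states have the
# maximal total spin, <S^2> = S(S+1) = 3.75, while 0.75 labels the unwanted sectors.
Sx, Sy, Sz = cluster_spin_ops(3)
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz
print(np.unique(np.round(np.linalg.eigvalsh(S2), 6)))   # [0.75 3.75]
```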

The core of our protocol involves applying a K-step sequential evolution under simpler interaction Hamiltonians \({H}_{i}\ =\ {\sum }_{g\in {G}_{i}}{h}_{i,g}\), i = 1, …, K, acting on disconnected groups Gi of a few qubits each. The combined sequence realizes an effective Floquet Hamiltonian HF that approximates equation (1). To controllably generate effective Hamiltonians, we use the average Hamiltonian approach. In the limit of small step sizes τ, such that the cycle period Kτ is much smaller than the inverse local energy scale of the Hi, the evolution is well approximated by \({H}_{{\mathrm{F}}}^{\,(0)}=\frac{1}{K}\mathop{\sum }\nolimits_{i = 1}^{K}{H}_{i}\) to leading order, and contributions from higher-order terms are bounded52.

In general, the performance of this approach will be limited by simulation errors, characterized by the difference between HF and the target Hamiltonian H, and gate errors, determined by the hardware overhead required to implement individual evolutions \({{\mathrm{e}}}^{-i{h}_{i,g}\tau }\). To mitigate the leading sources of error, we next develop Hamiltonian engineering protocols that leverage multi-qubit spin operations to realize equation (1) with short periodic sequences.

Dynamical projection with digital Floquet engineering

Our Hamiltonian engineering approach is based on the cluster encoding of equation (2). The key idea is to first generate interactions in the full Hilbert space using a spin-1/2 (qubit) version HI of equation (1) and then dynamically project back onto the encoding space. We select HI such that projection recovers the target Hamiltonian, by mapping each n-body large spin interaction in equation (1) onto an equivalent n-body qubit interaction acting on representatives from the n clusters (Methods and Fig. 2a). We further decompose HI into D non-overlapping groups HI,j, such that each term can be applied sequentially.

Fig. 2: Hardware-efficient implementation with neutral-atom tweezer arrays.
figure 2

a, Our protocol for programmable simulation of generic spin Hamiltonians is based on applying sequences of interactions between non-overlapping few-qubit groups. Here, we illustrate an implementation of a complex spin model using the dynamical projection approach. Spin-Si variables are encoded in the collective spin of a cluster of 2Si qubits within a blockade radius RB. Then, interactions between spins are generated by evolving pairs of qubits from each cluster under an interaction Hamiltonian HI. Second-order interactions Jij act on two qubits, while higher-order interactions act on multiple qubits, such as the fourth-order coefficient Liijk of \({\hat{S}}_{i}^{\alpha }{\hat{S}}_{i}^{\,\beta }{\hat{S}}_{j}^{\gamma }{\hat{S}}_{k}^{\delta }\). Then, interactions are dynamically projected into the symmetric encoding space within each cluster by evolving under Hamiltonian HP. The target large spin Hamiltonian H is simulated by alternately evolving under HI and HP. This protocol can be realized in any reconfigurable quantum processor. Here, we present an implementation for Rydberg atom arrays, which have two long-lived qubit states \(\left\vert 0\right\rangle ,\left\vert 1\right\rangle\) and an excited Rydberg state \(\left\vert r\right\rangle\) with strong interactions. The interaction connectivity is dynamically changed by moving optical tweezers41, and interactions are generated using all-to-all Rydberg blockade interactions within each cluster and simultaneous global driving of the qubit Ωq(t) and Rydberg Ωr(t) transitions. b, Fast and efficient multi-qubit spin operations US and UP are identified using optimal control to optimize pulse sequences. Gate times are measured in units of the inverse Rydberg driving frequency and quoted as ΩT, where the two-qubit CZ gate from ref. 51 takes ΩT/2π ≈ 1.2. The alternating ansatz (solid lines) decomposes the target operations into symmetric diagonal gates and single-qubit rotations, which can be individually optimized (Methods). Simultaneous (dual) driving of both transitions (dotted lines) enables even faster realization of approximate US and UP gates with noise-free error rates below ~10−3. c, We compare against decomposition of US and UP into a two-qubit gate set composed of CPhase gates and single-qubit rotations. Such a decomposition rapidly becomes very costly as the cluster size grows, in contrast to the optimized hardware-native operation.

While this generates the target interactions between spin clusters, it also moves encoded states out of the symmetric subspace. Therefore, to prevent evolution under HI from destroying the encoding, we alternately apply evolution under

$$\begin{array}{r}{H}_{{\mathrm{P}}}=\lambda\mathop{\sum}\limits_{i}\left(1-P[{\hat{{\bf{S}}}}_{i}^{2}]\right),\end{array}$$
(3)

composed of projectors \(P[{\hat{{\bf{S}}}}_{i}^{2}]\) onto the symmetric spin states, by applying multi-qubit gates within spin clusters. The symmetric states are zero-energy ground states of HP, and non-symmetric states are separated by an energy gap λ. Alternating HI and HP enables Trotter simulation of the static Hamiltonian HI + HP. For λ ≫ Jij, the system is effectively projected into the ground space of HP at low energies, realizing H. However, accurate Trotter simulation in this regime, by alternating HI and HP, requires two separations of timescales between the step size and the target interactions, \(\tau \ll {\lambda }^{-1}\ll {J}_{ij}^{-1}\), leading to a large number of gates.

Instead, we develop an approach that enables projection of HI onto the ground state of HP with substantially fewer gates, using ideas inspired by dynamical decoupling53,54. This is achieved by using large-angle rotations \({{\mathrm{e}}}^{-i\tau {H}_{{\mathrm{P}}}}\), where τλ ≈ 1, to generate a time-dependent phase on the parts of HI that couple encoded states to non-symmetric states. These phases cancel out on average, leaving the symmetric part which commutes with HP (see the ‘High-spin Hamiltonian engineering with dynamical Floquet projection’ section in the Methods for details).

This approach provides important benefits in simulating complex spin models, where the number of overlapping terms in H grows rapidly with parameters such as the spin size S, number of interactions per spin d and interaction weight \({n}_{\max }\). In this regime, a Trotter decomposition of H into non-overlapping terms hi,g would require a sequence of length \(K\ge d{(2S)}^{{n}_{\max }-1},\) where d measures the number of interactions each spin Si is involved in. Instead, the projection approach produces decompositions of H into sequences of length \(K=({n}_{\max }+1)\lceil \frac{d}{2S}\rceil\), leading to performance improvements of several orders of magnitude for models with large spins and higher-order interactions.
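These two scalings can be compared directly; the short sketch below is illustrative arithmetic only, with example parameters that are assumed for illustration rather than taken from the paper.

```python
# Illustrative arithmetic only (not the authors' code): sequence lengths quoted
# above for a Trotter decomposition versus the dynamical-projection approach.
from math import ceil

def k_trotter(d, S, n_max):
    # K >= d * (2S)^(n_max - 1)
    return d * int(2 * S) ** (n_max - 1)

def k_projection(d, S, n_max):
    # K = (n_max + 1) * ceil(d / 2S)
    return (n_max + 1) * ceil(d / (2 * S))

# Example parameters (assumed): spin-2 sites, four interactions per spin and
# weight-4 (bi-quadratic-like) interaction terms.
d, S, n_max = 4, 2, 4
print(k_trotter(d, S, n_max), k_projection(d, S, n_max))   # 256 5
```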

Similar tools can also be used to realize a large class of spin circuits, by generating different evolutions HF,i during each cycle. This effectively implements the discrete time-dependent evolution

$$\begin{array}{r}{U}_{{\rm{circ}}}=\mathop{\prod }\limits_{i=1}^{T}{{\mathrm{e}}}^{-i{\tau }_{i}{H}_{{\mathrm{F}},i}}.\end{array}$$
(4)

Variational optimization can be further used to engineer higher-order terms in HF, enabling the generation of more complex spin gates at no additional cost (Extended Data Fig. 1). Although the classical variational optimization procedure is limited to small circuits, Hamiltonian learning protocols could be used to perform larger-scale optimization on a quantum device directly55. Such circuits can be used for operations besides Hamiltonian simulation, including state preparation21.

Hardware-efficient implementation

The digital Hamiltonian engineering sequence can, in principle, be realized on any universal quantum processor, but it is especially well suited for reconfigurable processors with native multi-qubit interactions41,44,46,56. Neutral atom arrays are a particularly promising candidate for realizing these techniques, for which we develop a detailed implementation proposal. In this platform, two long-lived atomic states encode the qubit degree of freedom \(\{\left\vert 0\right\rangle ,\left\vert 1\right\rangle \}\), which can be individually manipulated with fidelity above 99.99% (ref. 57). Strong interactions between qubits are realized by coupling to a Rydberg state \(\left\vert r\right\rangle\) (ref. 58), which enables parallel multi-qubit operations57, with state-of-the-art two-qubit gate fidelities exceeding 99.5% (ref. 51). Further, qubits can be transported with high fidelity by moving optical tweezers41, to realize arbitrary groupings Gi. By placing atoms sufficiently close together, atoms within a group can undergo strong all-to-all interactions, while interactions between groups can be made negligible by placing them far apart.

The key ingredients required for efficiently implementing HI and HP are hardware-efficient multi-qubit spin operations. We show how these can be realized by using pulse engineering to transform the native Rydberg-blockade interaction into the desired form. We illustrate this on two families of representative spin operations

$$\begin{array}{r}{U}_{{\mathrm{S}}}(\theta )={{\mathrm{e}}}^{-i\theta\left({\hat{{{\bf{S}}}_{i}}}^{2}/2S\right)},\quad {U}_{{\mathrm{P}}}(\theta )={{\mathrm{e}}}^{-i\theta P\left[{\hat{{{\bf{S}}}_{i}}}^{2}\right]}\end{array},$$
(5)

where \({\hat{{{\bf{S}}}_{i}}}^{2}\) is the total-spin operator for a cluster of 2Si atoms, and \(P[{\hat{{{\bf{S}}}_{i}}}^{2}]\) are the projectors appearing in equation (3).

One approach to engineering these operations is based on an ansatz that naturally extends ref. 51, where US and UP are found by optimizing an alternating sequence of diagonal phase gates and single-qubit rotations. As in prior works51,59, the pulse profiles generating symmetric diagonal operations can be obtained with numerical optimization via gradient ascent pulse engineering (GrAPE)60. For this alternating ansatz, we find a roughly linear scaling of the total gate time Tgate with the size of the cluster (Fig. 2b). Similar gates can also be implemented in ion-trap architectures, where coupling to collective motional modes can be used to implement diagonal phase gates61,62.

However, Rydberg atom arrays offer additional control, which allows us to go beyond the alternating ansatz. Specifically, we consider simultaneously driving the qubit transition Ωq(t) in addition to the usual Rydberg transition Ωr(t). We find that this dual driving enables substantially faster realizations of US and UP. After optimizing with GrAPE to identify approximate gates with noise-free simulated fidelities above 99.9% (that is, assuming no experimental error), we find total gate times below ΩTgate/2π = 6.0 up to cluster sizes n = 8 and nearly constant scaling with n (Fig. 2b and Methods). The fact that the evolution is so short implies that these gates generate complex interactions in a very hardware-efficient way, making them ideal for accelerating spin-Hamiltonian simulations. Finally, we develop optimized decompositions of target spin operations into two-qubit gates and find they are still orders of magnitude more costly than the hardware-efficient implementation (Fig. 2c).

In Fig. 3, we illustrate the performance of this method on four representative examples that lie within the family of Hamiltonians (equation (1)). To quantify the performance of the simulation, we present heuristic estimates of the accessible coherent simulation time, measured in units of the target Hamiltonian’s local energy scale. We leverage access to multi-qubit spin operations of the form \({{\mathrm{e}}}^{-i{\sum }_{n}{\theta }_{n}{{\hat{\bf{S}}}}^{n}}\) and estimate gate errors (arising from experimental imperfections) based on the physical evolution time necessary to realize the target operation. The step size τ is chosen to maximize coherent simulation time, balancing simulation and gate errors (Methods). In the representative examples, we find the combination of dynamical projection and optimized multi-qubit gates outperforms a similarly constructed implementation based on Trotterized interactions and two-qubit gate decomposition. Our approach substantially extends the available simulation time (up to two orders of magnitude) and enables much more efficient generation of complex spin Hamiltonians.

Fig. 3: Efficiency of Hamiltonian simulation framework.
figure 3

Estimates of the quantum simulation’s coherence time Tsc, in units of the target Hamiltonian’s local energy scale, for various models. We consider Hamiltonian simulation implemented using the dual driving gates from Fig. 2b and assume a depolarizing error probability proportional to the gate time, such that ΩT/2π = 1 incurs an error of 0.1%, which is projected to be achievable with neutral atoms51,87,88. Analogous estimates can be performed straightforwardly for different hardware-dependent error rates using equation (32), which rescales Tsc but does not change the trend. In all cases, we compare against an implementation using two-qubit CPhase gates with fidelity 99.9% and perfect single-qubit rotations (see Methods for detailed descriptions of the heuristic estimation procedure). a, The first two models are (i) the spin-1/2 Kagome Heisenberg model and (ii) two interacting spin-5/2's with Heisenberg and Dzyaloshinskii–Moriya (DM) terms, both of which are composed of only two-qubit interactions. In (i), a speed-up is achieved by utilizing three-qubit gates \({e}^{-i\tau {{\hat{\bf{S}}}}^{2}}\), which more efficiently generate Heisenberg interactions and reduce the period of the Floquet cycle from K = 4 to K = 2. In (ii), improvement is achieved using dynamical projection, which reduces K from 2S to 2 but at the cost of additional multi-qubit gates. b, Two complex spin models which include spin interactions up to (iii) bi-quadratic interactions \({J}_{1}({{\hat{\bf{S}}}}_{i}\cdot {{\hat{\bf{S}}}}_{j})+{J}_{2}{({{\hat{\bf{S}}}}_{i}\cdot {{\hat{\bf{S}}}}_{j})}^{2}\) and (iv) bi-quartic interactions \({({{\hat{\bf{S}}}}_{i}\cdot {{\hat{\bf{S}}}}_{j})}^{4}\). These correspond to four-body and eight-body qubit interactions, respectively. In (iii), the dramatic speed-up originates from using dynamical projection to reduce the Floquet period, as well as the hardware efficiency of a native four-qubit gate. The individual contribution to the speed-up from both sources is also analysed in Methods. In (iv), the speed-up arises fully from the hardware efficiency of native eight-qubit operations.

Spectral information from dynamics

Having described a toolbox for implementing a large class of spin circuits, enabling time evolution and state preparation, we next present a general-purpose algorithm for calculating chemically relevant information. The approach, dubbed many-body spectroscopy, leverages dynamical snapshot measurements and classical co-processing43 to compute a wide variety of spectral quantities including low-lying states and finite-temperature properties. The procedure, combining insights from statistical phase estimation63,64,65,66 and shadow tomography67, is noise resilient and sample efficient, making it especially promising for near-term experiments.

Specifically, the quantity we extract is an operator-resolved density of states

$$\begin{array}{r}{D}^{\,A}(\omega )=\mathop{\sum}\limits_{n}\left\langle n\right\vert A\left\vert n\right\rangle \delta (\omega -{\epsilon }_{n}),\end{array}$$
(6)

where ϵn and \(\left\vert n\right\rangle\) are the energies and eigenstates of the evolution Hamiltonian H, and A denotes an arbitrary operator. Spectral functions like equation (6) can be used to access detailed information about the properties of H: the location of peaks provides information about energies63,65,66, and properties of eigenstates can be computed by choosing A appropriately64. For example, we can compute the total spin of an eigenstate using \(A={({\sum }_{i}{\hat{{\bf{S}}}}_{i})}^{2}\) or the local spin polarization with \(A={\hat{{\bf{S}}}}_{i}\).

The outputs of the quantum computation are projective measurements (snapshots), produced by the circuit depicted in Fig. 4a. First, we initialize a system of N qubits and apply a state preparation procedure to prepare a reference state \(\left\vert S\right\rangle \,=\,S\left\vert {0}^{N}\right\rangle\). Next, we use a single ancilla qubit in the \(\left\vert +\right\rangle\) state to apply a controlled perturbation, preparing an entangled superposition of \(\left\vert 0\right\rangle \left\vert S\right\rangle\) and a probe state \(\left\vert 1\right\rangle \left\vert R\right\rangle\) where \(\left\vert R\right\rangle \,=\,R\left\vert S\right\rangle\). The system is then evolved under H for time t (see also equation (34)). Finally, each qubit is projectively measured, producing a sequence of N + 1 bits—a snapshot. By measuring the ancilla in the X or Y basis, this circuit effectively performs an interferometry experiment between the reference and probe states. The resulting snapshot measurements enable parallel estimation of two-time correlation functions of the form \({C}_{O,R}(t)=\left\langle S\right\vert {{\mathrm{e}}}^{iHt}O{{\mathrm{e}}}^{-iHt}R\left\vert S\right\rangle\) for all 2N operators O that are diagonal in the measurement basis. Crucially, DA(ω) can be determined from the same set of snapshots for various A and ω, by changing the classical processing.

Fig. 4: Many-body spectroscopy of model Hamiltonians.
figure 4

a, A schematic quantum circuit diagram for the algorithm. The first step is to apply a state-preparation circuit S to prepare a reference state \(\left\vert S\right\rangle =S\left\vert 0\right\rangle\), followed by an ancilla-controlled perturbation R preparing a superposition of \(\left\vert S\right\rangle\) and the probe state \(\left\vert R\right\rangle =R\left\vert S\right\rangle\). This superposition is time-evolved by the target Hamiltonian, and then each qubit is projectively measured to produce a snapshot. By repeating the procedure for various evolution times t, different perturbations R and potentially different measurement bases, this setup provides access to detailed information about the spectrum of H. b, Consider a spin Hamiltonian H2 with two anti-ferromagnetically coupled spin-3/2 particles. We simulate 20,000 snapshot measurements and classical processing to calculate the density of states \({D}^{{\mathbb{1}}}(\omega )\) (black line) and total-spin resolved versions \({D}^{{P}_{S}}(\omega )\) (coloured lines). The vertical dashed lines correspond to exact energies, and the coloured regions represent 95% confidence intervals. The peaks are broadened due to finite (coherent) simulation time \(J{T}_{{\rm{sim}}}=0.26\), which sets the spectral resolution. Hardware-efficient simulation schemes, which extend the simulation time (for example, Fig. 3), are favourable because they improve spectral resolution. Many-body spectroscopy further improves the effective spectral resolution, by leveraging multiple observables to distinguish overlapping peaks. Here, we see that spin resolution sparsifies the signal, enabling accurate peak detection and energy estimation, while the bare spectrum \({D}^{{\mathbb{1}}}(\omega )\) is too broad to resolve all states. c, The magnetic susceptibility χ can also be computed from snapshot measurements using the Sz-resolved density of states (here, \(J{T}_{{\rm{sim}}}=1.04\)). For these calculations, it is important to prevent exponential amplification of shot noise. We therefore use a simple empirical truncation procedure that introduces a small amount of bias (Methods) but enables rapid convergence with the number of snapshots to the ideal value (black line).

To estimate the spectral function DA(ω), we use a hybrid quantum-classical computation based on the expression

$$\begin{array}{rcl} D^{\,A}(\omega)&=\underbrace{{\tilde{{\mathbb{E}}}}_{R\sim{{\mathcal{R}}}}\displaystyle{\int} \,{{\mathrm{d}}t}}_{{{\mathrm{circuit}}\,{\mathrm{average}}}}\underbrace{{\mathrm{e}}^{i\omega t}\sum\nolimits_{s}\left\langle S\right\vert R^{\dagger}A \, O_{s}{\mathrm{e}}^{-iHt}\left\vert S\right\rangle}_{{{\mathrm{classical}}\,{\mathrm{processing}}}} \\ &\times\underbrace{\left\langle S\right\vert {\mathrm{e}}^{iHt}O_{s}{\mathrm{e}}^{-iHt}R\left\vert S\right\rangle}_{{\mathrm{quantum}}\,{\mathrm{evolution}}\,({\rm{Fig}}. \,4)}. \end{array}.$$
(7)

For this to be formally equivalent to equation (6), the distribution over perturbations \({\mathcal{R}}\) and ensemble of observables {Os} must couple uniformly to all eigenstates, to ensure unbiased estimation (Methods). For example, given a polarized reference state \({\left\vert 0\right\rangle }^{\otimes N}\), X-basis measurements are sufficient.

To realize the distributional average and time integral, the quantum circuit has to be executed (Fig. 4a) for randomly sampled perturbations R and evolution times t. To efficiently evaluate the classical part of equation (7), we require efficient classical representations of \(\left\vert R\right\rangle\) and \({{\mathrm{e}}}^{-iHt}\left\vert S\right\rangle\). A good choice is to select \(\left\vert S\right\rangle\) to be a known eigenstate of H, such that the time evolution is trivial, and preferentially sample \(\left\vert R\right\rangle\) to maximize the overlap with relevant target states. By contrast, the quantum part of equation (7) includes the time evolution of \(\left\vert R\right\rangle\), which has overlap with unknown eigenstates, and often includes large amounts of entanglement. Therefore, this is estimated from snapshot measurements produced by quantum simulation (Methods).

As an example, consider two interacting spin-3/2 particles, described by \({H}_{2}=J\,{\hat{{\bf{S}}}}_{1}\cdot {\hat{{\bf{S}}}}_{2}\). We prepare a polarized reference eigenstate \(\left\vert S\right\rangle ={\left\vert 0\right\rangle }^{\otimes 6}\), and sample perturbations R from an ensemble of random single-spin rotations. This corresponds to a trivial state-preparation circuit and a simple controlled perturbation composed of two-qubit gates in Fig. 4. Next, we measure the system in the X basis, which provides access to the full spectrum for this choice of \(\left\vert S\right\rangle\) (Methods). Finally, during classical processing, the bare density of states is obtained by choosing \(A={\mathbb{1}}\) and evaluating equation (7). The result contains peaks at frequencies ω associated with eigenstates of H2 (Fig. 4b). Further, we can isolate individual contributions of total spin sectors by instead choosing A = PS, the projectors onto S = 0, 1, 2 and 3. This not only allows us to identify the total spin of the eigenstates but also increases the effective spectral resolution in the presence of noise as it sparsifies the signal. Finite-temperature response functions68, such as the z component of the zero-field magnetic susceptibility \(\chi (T)=\frac{1}{Z}{\rm{Tr}}[\frac{1}{T}{({S}_{z})}^{2}{{\mathrm{e}}}^{-H/T}]\), can also be computed from the same dataset, by integrating the Sz-projected density-of-states \({D}^{{S}_{z}}(\omega )\) (Methods). To illustrate this, the magnetic susceptibility is extracted from the same dataset and shown in Fig. 4c. The algorithm is especially promising for near-term devices, having favourable resource requirements quantified by the number of snapshots (sample complexity) and maximum evolution time (coherence) required for accurate spectral computation (see Methods for further discussion).
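As a classical cross-check of this example (not part of the quantum algorithm), the exact Heisenberg spectrum of two spin-3/2 particles and its total-spin labels can be obtained by direct diagonalization; a minimal sketch assuming NumPy is shown below.

```python
# Classical cross-check (not part of the quantum algorithm): exact spectrum of
# H2 = J S1.S2 for two spin-3/2 particles, labelled by total spin.
import numpy as np

def spin_matrices(S):
    """Standard spin-S matrices (sx, sy, sz) in the |S, m> basis, m descending."""
    dim = int(2 * S + 1)
    m = np.arange(S, -S - 1, -1)
    sp = np.zeros((dim, dim))
    for i in range(dim - 1):
        sp[i, i + 1] = np.sqrt(S * (S + 1) - m[i + 1] * (m[i + 1] + 1))
    return (sp + sp.T) / 2, (sp - sp.T) / 2j, np.diag(m)

J, S = 1.0, 1.5
ops = spin_matrices(S)
eye = np.eye(int(2 * S + 1))
S1 = [np.kron(o, eye) for o in ops]
S2 = [np.kron(eye, o) for o in ops]
H2 = J * sum(a @ b for a, b in zip(S1, S2))
Stot2 = sum((a + b) @ (a + b) for a, b in zip(S1, S2))

energies, states = np.linalg.eigh(H2)
s2 = np.real(np.einsum('in,ij,jn->n', states.conj(), Stot2, states))
stot = (-1 + np.sqrt(1 + 4 * s2)) / 2        # solve S(S+1) = <Stot^2>
# Heisenberg ladder: E = (J/2)[Stot(Stot+1) - 2 S(S+1)] = -3.75, -2.75, -0.75, 2.25
for e, s in sorted(set(zip(np.round(energies, 3), np.round(stot, 2)))):
    print(e, s)
```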

Application to transition metal clusters and magnetic solids

As an illustration of a relevant computation in chemical catalysis, we consider the Mn4O5Ca core of the oxygen-evolving complex (OEC), a transition metal catalyst central to photosynthesis that is still not fully understood69,70. Classical chemistry calculations have been used to fit model Heisenberg Hamiltonians, containing three spin-3/2 sites and one spin-2 site27,28 (Fig. 5a). While this spin representation cannot directly capture chemical reactions, it can capture the ground and low-lying spin states, that is, the spin ladder, which are important in catalysis because reaction pathways depend critically on the spin multiplicity71. We simulate our framework applied to the S2H-1b structural model from ref. 28, by first computing the bare density of states \({D}^{{\mathbb{1}}}(\omega )\) (Fig. 5b). Then, we identify a low-lying cluster of eigenstates and compute spin-projected densities \({D}^{{P}_{s}}(\omega )\) to resolve the spin ladder (Fig. 5c). The spin ladder can also be measured experimentally, providing a way to evaluate candidate models of reaction intermediates. To highlight this, we simulate the Heisenberg model for an alternate pathway (S2H-2b)28 and observe that the modification reverses the ordering of the spin ladder, indicating that S2H-1b is more consistent with measurements (Fig. 5d).

Fig. 5: Application to the OEC.
figure 5

Our programmable quantum simulation framework can be used to compute detailed model spin Hamiltonian properties. a, Here, we illustrate the procedure on the OEC, an organometallic catalyst with strong spin correlations. In particular, we simulate model spin Hamiltonians for two structures S2H-1b and S2H-2b, which have three spin-3/2 and one spin-2 Mn (purple) active sites (reproduced from ref. 28 with permission from the Royal Society of Chemistry). Model Heisenberg coefficients for both hypothetical structures have been computed from broken-symmetry DFT28. b, A density of states \({D}^{{\mathbb{1}}}(\omega )\) calculation is simulated for the S2H-1b model spin Hamiltonian. Here, we use a polarized reference state \(\left\vert S\right\rangle ={\left\vert 0\right\rangle }^{\otimes 13}\), probe states \(\left\vert R\right\rangle\) generated by random single-site rotations and an evolution time t. We select 50,000 circuits with independently chosen \(\left\vert R\right\rangle ,t\) pairs and draw ten snapshots from each circuit. c, Focusing on the lowest-lying states, we see three distinct peaks in \({D}^{{\mathbb{1}}}(\omega )\). However, by evaluating spin-resolved quantities \({D}^{{P}_{s}}(\omega )\) on the same set of measurements, we identify three additional peaks, whose energies and total spin match exact diagonalization results (vertical dotted lines). d, This information is known as the spin ladder and can be computed using many-body spectroscopy for both the 1b and 2b states. Importantly, the spin ladder can also be measured experimentally (Exp.) and, therefore, can be used to help determine which structure appears in nature. In this example, experimental measurements indicate a spin-5/2 ground state and spin-7/2 first excited state. However, the ordering of low-energy states is flipped in the 2b configuration, indicating that the S2H-1b hypothesis is more likely28. We note that quantities beyond total spin can also be readily evaluated in low-lying eigenstates by inserting different operators A (Methods).

The framework can also be applied to study low-energy properties of extended systems, including strongly correlated materials. We illustrate this on the ferromagnetic, square lattice Heisenberg model (Fig. 6). For such large systems, we envision utilizing an approximate ground-state preparation method for \(\left\vert S\right\rangle\), so that low-energy properties can be accessed in a noise resilient manner via local controlled perturbations R. Then, local Green’s functions—two-point operators at different positions and times—can be measured to access properties of low-lying quasi-particle excitations, such as the dispersion relation of single-particle excitations (see ‘Two-dimensional Heisenberg calculations’ section in Methods for details).

Fig. 6: Application to 2D magnetic materials.
figure 6

a, Properties of extended materials can also be investigated using our quantum simulation framework. As an example, we study the square-lattice ferromagnetic (J > 0) Heisenberg model \({H}_{{\rm{2D}}}=-J{\sum }_{\langle ij\rangle }{\hat{{\bf{s}}}}_{i}\cdot {\hat{{\bf{s}}}}_{j}\). By preparing the polarized ground state \(\left\vert S\right\rangle ={\left\vert 0\right\rangle }^{\otimes N}\), applying a single-site perturbation R = X0 on the central site and measuring the system in the X basis after time evolution, we can estimate the single-particle Green’s function \(G({\bf{r}},t)=\left\langle S\right\vert {X}_{{\bf{r}}}(t){X}_{{\bf{0}}}\left\vert S\right\rangle\). Therefore, we select O = Xr for various positions r as the observables in equation (7), all of which are diagonal in the measurement basis. We visualize the real part of G(r, t), where the plotted intensity and colour denote the magnitude and sign, respectively, at Jt = 0, 0.5 and 1.0. b, The structure of excited states is extracted by classical processing of these measurements. Even though the spectrum is continuous, additional structure can be identified by computing the momentum-resolved density of states \({D}^{{P}_{{\bf{k}}}}(\omega )\), where Pk is a projector onto plane-wave states (Methods). Restricting to ky = 0 and evolving to maximum time \(J{T}_{\max }=8.0\), we see that \({D}^{{P}_{{\bf{k}}}}(\omega )\) forms a band-like structure, from which a peak ω can be estimated for each k (black line). c, This peak extraction allows us to directly estimate the single-particle dispersion ω(k) across the 2D Brillouin zone.
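For this ferromagnetic example, the polarized reference state is an exact eigenstate and the perturbed state X0|S⟩ lies in the single-spin-flip sector, so G(r, t) can be cross-checked classically by propagating in that N-dimensional subspace. The sketch below (assuming NumPy/SciPy, with a lattice size chosen only for illustration) builds the corresponding single-magnon Hamiltonian with energies measured from the ground state; it is a classical verification aid, not the quantum protocol itself.

```python
# Classical cross-check of G(r, t): the flip created by X_0 propagates within
# the N-dimensional single-magnon sector of the ferromagnetic Heisenberg model.
import numpy as np
from scipy.linalg import expm

J, L = 1.0, 11                         # lattice size assumed for illustration
N = L * L
idx = lambda x, y: (x % L) * L + (y % L)

# Single-magnon Hamiltonian with energies measured from the ground state:
# diagonal +2J from the four bonds touching the flipped spin, hopping -J/2.
H = 2 * J * np.eye(N)
for x in range(L):
    for y in range(L):
        for dx, dy in ((1, 0), (0, 1)):
            i, j = idx(x, y), idx(x + dx, y + dy)
            H[i, j] = H[j, i] = -J / 2

centre = idx(L // 2, L // 2)
for t in (0.0, 0.5, 1.0):
    G = expm(-1j * H * t)[:, centre].reshape(L, L)   # G(r, t) = <r|e^{-iHt}|0>
    print(f"Jt = {t}: |G| at the perturbed site = {abs(G[L // 2, L // 2]):.3f}")
```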

Outlook

These considerations indicate that reconfigurable quantum processors enable a powerful, hardware-efficient framework for quantum simulation of problems from chemistry and materials science, illustrating potential directions for the search for useful quantum advantage. Specifically, in addition to the OEC, other organometallic catalysts could be studied with this approach, including iron–sulfur clusters23,72, for which bi-quadratic terms appear in the model Hamiltonian to capture higher-order perturbative charge fluctuation effects73. Another promising direction involves quantum simulation of low-energy properties of 2D and three-dimensional frustrated spin systems, including model Hamiltonians for Kitaev materials74,75,76 and molecular magnets77,78. The ability to realize non-local interactions further opens the door to simulation of spin Hamiltonians defined on non-Euclidean interaction geometries79,80.

The efficiency of the Hamiltonian engineering approach originates from co-designing Floquet engineering and hardware-specific multi-qubit gates. Extending this approach to larger classes of strongly correlated model Hamiltonians is an outstanding and exciting frontier. In particular, it would be especially interesting to further develop the toolbox to incorporate charge transport and electron–phonon interactions. This could enable simulation of more complex model Hamiltonians, such as the tJ, Hubbard and Hubbard–Holstein models, and expand the class of accessible chemistry problems9,81. Application to other settings, including lattice gauge theories50 and quantum optimization problems82, is also of interest. Incorporation of error mitigation and correction into the Hamiltonian simulation should be considered; specifically, the present method can potentially be generalized to control logical, encoded degrees of freedom in a hardware-efficient way83.

Finally, characterization and development of the model Hamiltonian approach itself is an interesting and challenging problem. Key challenges include development of efficient schemes to compute parameters for higher-order interactions73, estimation of corrections arising from coupling to states outside the model space84 and validation of the model Hamiltonian approximation19. Feedback between the classical and quantum parts of the computation is an important part of these developments25,85,86. For these reasons, the large-scale simulation of model Hamiltonians on quantum processors will be invaluable for testing approximations by comparing simulation outputs with experimental measurements. Hence, the approach proposed in this work facilitates exciting directions in computational chemistry and quantum simulation, aiming towards constructing a novel ab initio simulation pipeline that utilizes hybrid quantum-classical resources.

Methods

Hamiltonian engineering

The Hamiltonian engineering toolbox introduced here is based on the average Hamiltonian approach. This approach uses the fact that, in the high-frequency limit, the effective Floquet cycle period Kτ is much smaller than the inverse local energy scales of HI,j(p), and the Floquet Hamiltonian HF can be well approximated by expanding in the small parameter \(K\tau \,| | H(p)| {| }_{{\rm{local}}}\) (ref. 52). The leading contribution is the average Hamiltonian

$${H}_{{\mathrm{F}}}^{\,(0)}=\frac{1}{K}\mathop{\sum }\limits_{k=0}^{K-1}H(k).$$
(8)

The second order term also takes a simple form

$${H}_{{\mathrm{F}}}^{\,(1)}=\frac{i\tau }{2K}\sum _{k < {k}^{{\prime} }}\left[H(k),H({k}^{{\prime} })\right].$$
(9)

The results presented in the main text involve engineering the average Hamiltonian \({H}_{{\mathrm{F}}}^{\,(0)}\) to reproduce the target (equation (1)). In the setting where θp is constant, one can apply the results of ref. 54 to construct an alternative asymptotic expansion, where each term in the Floquet Hamiltonian commutes with HP, implying the encoding is preserved to exponentially long times \({\mathrm{e}}^{O(| | H(p)| {| }_{{\rm{local}}}/K)}\). However, in this setting, the second order term is non-zero and generates simulation errors at order O(τ). It can be cancelled by selecting time-reversal symmetric sequences of length 2K, where the second half of the pulse is defined by Θk = ΘK−1−k. This reduces simulation errors to order O(τ2) but might potentially alter the prethermal properties of the Floquet Hamiltonian54, which is an interesting problem for further research.
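Equations (8) and (9) can be checked numerically on a small example; the sketch below (assuming NumPy/SciPy, and adopting one particular ordering convention in which H(0) is applied first in the cycle) compares the exact Floquet Hamiltonian of a two-step sequence with the average Hamiltonian and its leading correction.

```python
# Numerical check of equations (8)-(9) (a sketch, not the paper's code), for a
# two-step cycle in which H(0) acts first: U_F = e^{-i tau H(1)} e^{-i tau H(0)}.
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)
def rand_herm(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

d, tau = 4, 0.05
H = [rand_herm(d), rand_herm(d)]                        # H(0), H(1)
K = len(H)

UF = expm(-1j * tau * H[1]) @ expm(-1j * tau * H[0])    # one Floquet cycle
HF_exact = 1j * logm(UF) / (K * tau)

HF0 = sum(H) / K                                        # equation (8)
HF1 = 1j * tau / (2 * K) * (H[0] @ H[1] - H[1] @ H[0])  # equation (9), k < k'

print(np.linalg.norm(HF_exact - HF0))         # O(tau)
print(np.linalg.norm(HF_exact - HF0 - HF1))   # O(tau^2)
```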

Going beyond average Hamiltonian engineering is also possible by optimizing the Floquet sequence89,90,91. We demonstrate engineering of higher-order terms for a small system composed of two interacting spin-3/2 particles, to controllably engineer up to bi-cubic terms \({({\hat{{\bf{S}}}}_{i}\cdot {\hat{{\bf{S}}}}_{j})}^{3}\) using only two- and three-qubit operations in Extended Data Fig. 1 and Supplementary Information section 1.

High-spin Hamiltonian engineering with dynamical Floquet projection

Our goal is to Floquet engineer the Hamiltonian H = HI + HP, in the limit λ ≫ Jij, where the system is effectively projected into the symmetric ground space of HP. Our approach relies on implementing the combined evolution

$$\begin{array}{rc}{U}_{{\mathrm{F}}}&=\mathop{\prod }\limits_{p=1}^{{N}_{p}}\left({{\mathrm{e}}}^{-i{\theta }_{p}{\lambda }^{-1}{H}_{{\mathrm{P}}}}\mathop{\prod }\limits_{j=1}^{D}{{\mathrm{e}}}^{-i\tau {H}_{{\mathrm{I}},\,j}}\right)\\ &=\mathop{\prod }\limits_{p=1}^{{N}_{p}}\mathop{\prod }\limits_{j=1}^{D}{{\mathrm{e}}}^{-i\tau {H}_{{\mathrm{I}},\,j}(p)}={{\mathrm{e}}}^{-iK\tau {H}_{{\mathrm{F}}}},\end{array}$$
(10)

where K = NpD is the full length of the sequence and θpλ−1 and τ parameterize the evolution times. To analyse this sequence, we transform into an interaction picture such that intermediate terms are evolved by HP with a cumulative phase \({\Theta }_{p}\,=\,{\sum }_{{p}^{{\prime} } < p}{\theta }_{{p}^{{\prime} }}\); for the rotating frame to be periodic, we require \({\Theta }_{{N}_{p}+1}\,(\mathrm{mod}\,\,2\uppi )=0\).

Then, the transformed interactions can be written as

$$\begin{array}{l} H_{{\mathrm{I}},\,j}(p)= {\mathrm{e}}^{i {{{{\varTheta}}}}_p \lambda^{-1} H_{\mathrm{P}}} H_{{\mathrm{I}},\,j} {\mathrm{e}}^{-i {{{{\varTheta}}}}_p \lambda^{-1} H_{\mathrm{P}}}\\\qquad\quad\,\,= \underbrace{H_{{\mathrm{I}},\,j}^{\,(0)}}_{{{\mathrm{symmetric}}}} + \underbrace{ \left(\sum\limits_{n=1}^{n_{{{\max}}}} {\mathrm{e}}^{i n {{{{\varTheta}}}}_p} H_{{\mathrm{I}},\,j}^{\,(n)} +{\mathrm{h.c.}}\right)}_{{{\mathrm{symmetry}}\,{\mathrm{violating}}}}. \end{array}$$
(11)

Here, h.c. denotes the Hermitian conjugate, and \({H}_{{\mathrm{I}},\,j}^{\,(n)}\) captures unwanted terms that change the total spin on n sites, while the symmetric terms comprise the target Hamiltonian \({\sum }_{j}{H}_{{\mathrm{I}},\,j}^{\,(0)}\,=\,{H}_{{\mathrm{T}}}\).

To compute the form of the interaction Hamiltonian in the rotated frame HI(p) = ∑jHI,j(p) as used in equation (11), let us consider a single spin-1/2 particle belonging to a spin-Si cluster and define projectors \({P}_{i}=P[{\hat{{\bf{S}}}}_{i}^{2}]\) onto the symmetric space and \({Q}_{i}={\mathbb{1}}-{P}_{i}\) onto its complement. The spin-1/2 term can be split into four parts

$$\hat{{{\bf{s}}}}_{i,1} = \underbrace{P_i \hat{{{\bf{s}}}}_{i,1} P_i + Q_i \hat{{{\bf{s}}}}_{i,1} Q_i}_{{{\rm{symmetric}}}} + \underbrace{P_i \hat{{{\bf{s}}}}_{i,1} Q_i + Q_i \hat{{{\bf{s}}}}_{i,1} P_i}_{{{\text{non-symmetric}}}},$$
(12)

where the first two terms preserve the on-site total spin and the second two change the on-site total spin. Since HP acts as 1 − Pi on the ith spin, we label terms by how they change the expectation value of (1 − Pi)

$${\hat{{\bf{s}}}}_{i,1}^{(0)}={P}_{i}{\hat{{\bf{s}}}}_{i,1}{P}_{i}+{Q}_{i}{\hat{{\bf{s}}}}_{i,1}{Q}_{i}$$
(13)
$${\hat{{\bf{s}}}}_{i,1}^{(+1)}={Q}_{i}{\hat{{\bf{s}}}}_{i,1}{P}_{i}$$
(14)
$${\hat{{\bf{s}}}}_{i,1}^{(-1)}={P}_{i}{\hat{{\bf{s}}}}_{i,1}{Q}_{i}.$$
(15)

This decomposition ensures that each term transforms under conjugation by HP, as in equation (11), by picking up a global phase. Specifically, using the following

$${{\mathrm{e}}}^{-i\theta {P}_{i}}{P}_{i}={P}_{i}{{\mathrm{e}}}^{-i\theta {P}_{i}}={{\mathrm{e}}}^{-i\theta }{P}_{i}$$
(16)
$${{\mathrm{e}}}^{-i\theta {Q}_{i}}{P}_{i}={P}_{i}{{\mathrm{e}}}^{-i\theta {Q}_{i}}={P}_{i}$$
(17)
$${{\mathrm{e}}}^{-i\theta {P}_{i}}{Q}_{i}={Q}_{i}{{\mathrm{e}}}^{-i\theta {P}_{i}}={Q}_{i}$$
(18)
$${{\mathrm{e}}}^{-i\theta {Q}_{i}}{Q}_{i}={Q}_{i}{{\mathrm{e}}}^{-i\theta {Q}_{i}}={{\mathrm{e}}}^{-i\theta }{Q}_{i}$$
(19)

one can show that

$${{\mathrm{e}}}^{i\theta {\lambda }^{-1}{H}_{{\mathrm{P}}}}{\hat{{\bf{s}}}}_{i,1}^{(n)}{{\mathrm{e}}}^{-i\theta {\lambda }^{-1}{H}_{{\mathrm{P}}}}={{\mathrm{e}}}^{i\theta n}{\hat{{\bf{s}}}}_{i,1}^{(n)}.$$
(20)
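The phase rule of equation (20) can be verified directly for the smallest cluster; the sketch below (assuming NumPy/SciPy) encodes a spin-1 in two qubits, builds the triplet projector P, and checks how the three components of equations (13)–(15) transform.

```python
# Numerical check of equation (20) for the smallest cluster (a sketch): two
# qubits encoding a spin-1, with P the projector onto the triplet subspace.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.diag([0.5, -0.5])
I2 = np.eye(2)

Stot = [np.kron(s, I2) + np.kron(I2, s) for s in (sx, sy, sz)]
S2 = sum(o @ o for o in Stot)

evals, evecs = np.linalg.eigh(S2)
V = evecs[:, np.isclose(evals, 2.0)]          # triplet states, S(S+1) = 2
P = V @ V.conj().T
Q = np.eye(4) - P

s1x = np.kron(sx, I2)                         # spin operator of the first qubit
parts = {0: P @ s1x @ P + Q @ s1x @ Q,        # equations (13)-(15)
         +1: Q @ s1x @ P,
         -1: P @ s1x @ Q}

theta = 0.7
U = expm(1j * theta * Q)                      # e^{i theta lambda^{-1} H_P}, one cluster
for n, s in parts.items():
    print(n, np.allclose(U @ s @ U.conj().T, np.exp(1j * theta * n) * s))
```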

This rule can be extended to higher-weight operators. For example, two-spin interactions between clusters \({h}_{ij}={\hat{{\bf{s}}}}_{i,1}\cdot {\hat{{\bf{s}}}}_{j,1}\) decompose into five parts \({h}_{ij}^{(n)},n=-2,-1,0,1,2\),

$$h_{ij} = \underbrace{\hat{{{\bf{s}}}}_{i,1}^{(0)} \cdot \hat{{{\bf{s}}}}_{j,1}^{(0)} + \hat{{{\bf{s}}}}_{i,1}^{(+1)} \cdot \hat{{{\bf{s}}}}_{j,1}^{(-1)} + \hat{{{\bf{s}}}}_{i,1}^{(-1)} \cdot \hat{{{\bf{s}}}}_{j,1}^{(+1)}}_{h_{ij}^{(0)}}$$
(21)
$$+ \underbrace{\left(\hat{{{\bf{s}}}}_{i,1}^{(+1)} \cdot \hat{{{\bf{s}}}}_{j,1}^{(0)} + \hat{{{\bf{s}}}}_{i,1}^{(0)} \cdot \hat{{{\bf{s}}}}_{j,1}^{(+1)} \right)}_{h_{ij}^{(+1)}} + {\mathrm{h.c.}}$$
(22)
$$+ \underbrace{\left(\hat{{{\bf{s}}}}_{i,1}^{(+1)} \cdot \hat{{{\bf{s}}}}_{j,1}^{(+1)} \right)}_{h_{ij}^{(+2)}} + {\mathrm{h.c.}},$$
(23)

corresponding to the different ways to change the expectation value of Qi + Qj. The n-spin interactions will have terms running from h(−n) to h(+n). Therefore, in the rotating frame, the Hamiltonian terms transform as

$${h}_{ij}^{(n)}(p)={{\mathrm{e}}}^{i{\varTheta }_{p}{\lambda }^{-1}{H}_{{\mathrm{P}}}}{h}_{ij}^{(n)}{{\mathrm{e}}}^{-i{\varTheta }_{p}{\lambda }^{-1}{H}_{{\mathrm{P}}}}$$
(24)
$$={{\mathrm{e}}}^{i{\varTheta }_{p}n}{h}_{ij}^{(n)}$$
(25)
$${h}_{ij}(p)=\mathop{\sum }\limits_{n=-{n}_{\max }}^{{n}_{\max }}{{\mathrm{e}}}^{i{\varTheta }_{p}n}{h}_{ij}^{(n)}.$$
(26)

We further choose a sequence of Θp such that only the h(0) contribution is non-zero on average. The simplest sequence that satisfies these conditions is a family of cyclic pulses of order P

$${\varTheta }_{p}=\frac{2\uppi }{P}p\qquad\qquad{p}=0,\ldots ,P-1,$$
(27)

which satisfy the cancellation condition as long as \({n}_{\max } < P\). In the two-body case, of the terms that contribute to \({h}_{ij}^{(0)}\), only \({\hat{{\bf{s}}}}_{i,1}^{(0)}\cdot {\hat{{\bf{s}}}}_{j,1}^{(0)}\) acts non-trivially in the symmetric subspace, so we focus on this term. An explicit form can be computed by decomposing \({\hat{{\bf{s}}}}_{i,1}\) into its permutation-symmetric and orthogonal components

$$\hat{{{\bf{s}}}}_{i,1} = \underbrace{\frac{1}{2S_i}\mathop{\sum}\nolimits_{a=1}^{2S_i} \hat{{{\bf{s}}}}_{i,a}}_{{{\rm{symmetric}}}} + \underbrace{\left(\frac{2S_i-1}{2S_i}\hat{{{\bf{s}}}}_{i,1} - \frac{1}{2S_i}\mathop{\sum}\nolimits_{a{'}=2}^{2S_i} \hat{{{\bf{s}}}}_{i,a{'}} \right)}_{{{\text{non-symmetric}}}}.$$
(28)

Therefore, the symmetric part \({\hat{{\bf{s}}}}_{i,1}^{(0)}=\frac{1}{2{S}_{i}}{\hat{{\bf{S}}}}_{i}\) is proportional to the collective spin.

With this understanding, we can construct a spin-1/2 interaction Hamiltonian HI that recovers equation (1) under projection, by replacing each n-site high-spin interaction with an analogous spin-1/2 one. For example, for n = 2, the replacement proceeds as

$$\begin{array}{rc}{J}_{ij}^{\,\alpha \beta }{\hat{S}}_{i}^{\alpha }{\hat{S}}_{j}^{\,\beta }&\to {\overline{{J}_{ij}}}^{\,\alpha \beta }{\hat{s}}_{i,a}^{\,\alpha }{\hat{s}}_{j,b}^{\,\beta }\\ {\overline{{J}_{ij}}}^{\,\alpha \beta }&=4{S}_{i}{S}_{j}{J}_{ij}^{\,\alpha \beta },\end{array}$$
(29)

where intra-cluster indexes a and b encode which representative from spins i and j is used to generate the interaction. The interaction strength is further boosted to \(\overline{{J}_{ij}}\) to account for the \(\frac{1}{2S}\) factor in equation (28). A straightforward calculation shows that, for higher-weight interactions (for example, \({n}_{\max }\)), the interaction should also be boosted (for example, \(\overline{{K}_{ijk}}=8{S}_{i}{S}_{j}{S}_{k}{K}_{ijk}\)) to recover the target large-spin operator under projection.

In general, it may not be feasible to uniquely assign each spin-Si interaction in H to qubits in HI, especially when a spin-Si is involved in more than 2S interactions. In this case, HI is implemented by splitting it into a sequence of non-overlapping groups HI,1, …, HI,D that approximate HI on average. Each sequence can handle up to D(2S) interactions per spin, so if d is the interaction degree, then we require a sequence of length \(D=\lceil \frac{d}{2S}\rceil\). Although manual decompositions sufficed for the models studied here, automated methods to determine efficient decompositions will have to be developed for more complex systems92,93.
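One possible way to automate such a decomposition is a greedy round assignment; the sketch below is a hypothetical illustration (not the authors' procedure) for two-body interactions, under the assumption that each spin-Si cluster can supply at most 2Si representative qubits per round.

```python
# A hypothetical greedy sketch (not the authors' procedure) for splitting
# two-body interactions into rounds H_{I,1}, ..., H_{I,D}: each spin-S_i
# cluster can supply at most 2*S_i representative qubits per round.
from collections import defaultdict

def greedy_rounds(interactions, spin):
    """interactions: list of (i, j) spin pairs; spin: dict site -> S_i."""
    rounds, load = [], []                 # load[r][site] = representatives used
    for (i, j) in interactions:
        r = 0
        while True:
            if r == len(rounds):
                rounds.append([])
                load.append(defaultdict(int))
            if load[r][i] < 2 * spin[i] and load[r][j] < 2 * spin[j]:
                rounds[r].append((i, j))
                load[r][i] += 1
                load[r][j] += 1
                break
            r += 1
    return rounds

# A spin-1/2 site coupled to three neighbours needs D = ceil(3 / 1) = 3 rounds.
spin = {0: 0.5, 1: 0.5, 2: 0.5, 3: 0.5}
print(len(greedy_rounds([(0, 1), (0, 2), (0, 3)], spin)))   # 3
```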

Multi-qubit gates with Rydberg blockade

Rydberg atom arrays are a natural platform to realize the Floquet engineering scheme described above94,95,96,97,98. The Rydberg Hamiltonian governing a cluster of N atoms is

$$\begin{array}{rc}{H}_{{\rm{cluster}}}&=\frac{{\varOmega }_{q}(t)}{2}\mathop{\sum}\limits_{i}{\left\vert 1\right\rangle }_{i}\left\langle 0\right\vert +\frac{{\varOmega }_{r}(t)}{2}\mathop{\sum}\limits_{i}{\left\vert r\right\rangle }_{i}\left\langle 1\right\vert +{\mathrm{h.c.}}\\ &+\mathop{\sum}\limits_{i < j}{V}_{ij}{\left\vert r\right\rangle }_{i}\left\langle r\right\vert \otimes {\left\vert r\right\rangle }_{j}\left\langle r\right\vert \end{array},$$
(30)

where Ωq(t), Ωr(t) are complex-valued driving fields99,100. In the blockade approximation, which is valid when Vij ≫ Ωr, there is at most one atom in state \(\left\vert r\right\rangle\) (refs. 58,101). Therefore, at leading order in Ωr/Vij, Hcluster is approximated by projecting into the manifold of blockade-consistent states. This produces an interacting model with an emergent permutation symmetry (see equation (S5) in Supplementary Information). This symmetry allows us to write Hcluster in a low-dimensional representation of the Hilbert space scaling as O(NNS) for a representation102 including the NS largest total-spin sectors (see ‘Low-dimensional construction’ in Supplementary Information).
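For concreteness, the sketch below (assuming NumPy, global complex drives and, for simplicity, a uniform interaction strength Vij = V, which is an assumption not stated in equation (30)) constructs the cluster Hamiltonian for a few atoms and identifies the blockade-consistent subspace used in the low-dimensional representation.

```python
# A sketch of equation (30) for a small cluster (uniform V_ij = V assumed),
# with three levels |0>, |1>, |r> per atom.
import numpy as np
from itertools import product

def cluster_hamiltonian(n, omega_q, omega_r, V):
    states = list(product(range(3), repeat=n))       # 0 -> |0>, 1 -> |1>, 2 -> |r>
    index = {s: k for k, s in enumerate(states)}
    H = np.zeros((3 ** n, 3 ** n), dtype=complex)
    for k, s in enumerate(states):
        nr = sum(1 for level in s if level == 2)
        H[k, k] += V * nr * (nr - 1) / 2              # van der Waals shift per Rydberg pair
        for i, level in enumerate(s):
            if level == 0:                            # (Omega_q/2) |1><0|_i + h.c.
                s2 = s[:i] + (1,) + s[i + 1:]
                H[index[s2], k] += omega_q / 2
                H[k, index[s2]] += np.conj(omega_q) / 2
            if level == 1:                            # (Omega_r/2) |r><1|_i + h.c.
                s2 = s[:i] + (2,) + s[i + 1:]
                H[index[s2], k] += omega_r / 2
                H[k, index[s2]] += np.conj(omega_r) / 2
    blockade = [k for k, s in enumerate(states) if sum(l == 2 for l in s) <= 1]
    return H, blockade

H, blockade = cluster_hamiltonian(n=3, omega_q=0.2, omega_r=1.0, V=100.0)
H_blockade = H[np.ix_(blockade, blockade)]            # blockade-approximated Hamiltonian
print(H.shape, H_blockade.shape)                      # (27, 27) (20, 20)
```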

Figure 2b shows optimization results for an alternating ansatz with separate Ωr(t) and Ωq(t) applications (solid lines) versus a dual driving scheme with simultaneous field control (dotted lines). For the alternating ansatz, the optimization process begins with finding short sequences of symmetric diagonal gates D(ϕ) and global single-qubit rotations Q(θ) that, combined, realize US and UP (see equation (S11) in Supplementary Information). While Q(θ) uses global Ωq(t) control, D(ϕ) involves multi-qubit interactions, and the pulse sequences to realize D(ϕ) can be optimized through GrAPE51,59,60,103,104. Numerical optimization is feasible due to the manageable size of the low-dimensional basis. We determine the maximum gate time T*(n) required to realize D(ϕ) with generic phases (Supplementary Fig. 1). Gate times in Fig. 2 are computed by multiplying T*(n) by the shortest sequence length determined in the first step. The operations Q(θ) and D(ϕ) can also be promoted to controlled operations (see ‘Alternating ansatz’ in Supplementary Information), as required for the controlled perturbation in Fig. 4a.

Dual driving gate profiles are directly optimized using GrAPE, with an added smoothness regularization to ensure driving profiles can be implemented with available classical controls (see equation (S8) in Supplementary Information and Extended Data Fig. 1). Interestingly, the optimized fidelity remains roughly independent of the system size, when tolerating noise-free error rates around 10−3. We select this fidelity threshold and plot the resulting gate times in Fig. 2b (dotted lines). However, for more stringent noise-free thresholds, such as 10−6, we observe an approximately linear dependence in n, producing gate times comparable to the alternating decomposition. Theoretical characterization of the asymptotic scaling of gate time with cluster size and error threshold remains an intriguing open question.

For comparison, we estimate gate counts for a decomposition of US and UP into single- and two-qubit gates (Fig. 2c). Using the Qiskit transpiler105, we find two-qubit decompositions into single-qubit rotations and CPhase gates for UP. For US, the diagonal gates D(ϕ) found numerically in the alternating ansatz only require two-qubit interactions, and can be decomposed into \(\left(\begin{array}{l}n\\ 2\end{array}\right)\) two-qubit CPhase gates, outperforming the Qiskit result for n ≥ 3. Finally, we show in Supplementary Information that symmetric operations like US and UP can be implemented with poly(n) two-qubit gates using ancilla qubits, by constructing an efficient matrix product operator (MPO) representation of arbitrary n-spin operations.

The fidelity of the multi-qubit gates is subject to errors such as spontaneous emission and dephasing due to a finite \({T}_{2}^{\,* }\), as seen in current Rydberg gates51. In general, errors accumulate with gate time; hence, shorter gates tend to produce higher-fidelity implementations. Certain errors can also be mitigated by improving the excitation schemes. Specifically, single-photon schemes as used in refs. 87,88, avoiding intermediate-state scattering, may enhance performance for bigger clusters. This is because the rate of decay from the Rydberg state does not depend on cluster size, since the number of Rydberg excitations is never larger than 1. By contrast, the rate of scattering from the intermediate state grows linearly with cluster size.

Estimating simulation time

The simulation time is defined as the maximum evolution time, before which the typical error per qubit is below some target threshold ϵ. We account for both coherent Hamiltonian simulation errors and incoherent gate errors. For a symmetrized sequence, we estimate the scaling of both contributions to be

$${\varepsilon }_{{\rm{sim}}}={({c}_{2}{\tau }^{2})}^{2}{T}^{2},{\varepsilon }_{{\rm{gate}}}=\frac{gT}{\tau },$$
(31)

with target evolution time T and step size τ. Here, the coefficient c2 depends on the details of the Hamiltonian simulation protocol and can be estimated from numerics or the third-order term in the Magnus expansion (see equation (S18) in Supplementary Information). The coefficient g measures the gate error probability per cycle and is determined by assuming that each multi-qubit gate has a fixed probability of failure Tgate g0 that scales linearly with the gate time. Such a scaling is valid when errors are dominated by spontaneous emission from the Rydberg state. For simplicity, we utilize the estimated Tgate for the large-angle unitary UP with θ = π. Both c2 and g are estimated to grow extensively in system size. Thus, we work instead with the intensive version of these quantities, \({\tilde{c}}_{2}={c}_{2}/L\) and \(\tilde{g}=g/L\).

Optimizing the step size τ to minimize error (see ‘Heuristic simulation time estimates’ in Supplementary Information), the maximum evolution time scales as

$${T}_{{\rm{opt}}}=\frac{{2}^{2/3}}{{5}^{5/6}}\frac{{\varepsilon }^{5/6}/L}{{\left({\tilde{c}}_{2}{\tilde{g}}^{2}\right)}^{1/3}}$$
(32)

for a target error rate ε.

In Fig. 3, we use this formula, along with numerical estimates for c2 in models (i) and (ii), and heuristic estimates for models (iii) and (iv). We further estimate g using the simultaneous driving gate times of Fig. 2b and select an error per Rabi cycle of g0 = 10−3. The target error we select is \({\varepsilon }^{5/6}/L=0.1\), which grows approximately extensively with system size. We illustrate the benefits of our approach on four example Hamiltonians (see Fig. 3 and ‘Heuristic simulation time estimates’ in Supplementary Information).
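As a usage illustration, equation (32) can be evaluated directly once the intensive error coefficients are known; the coefficient values in the sketch below are placeholders rather than the fitted numbers behind Fig. 3.

```python
# Direct evaluation of equation (32) (a sketch; the coefficient values are
# placeholders, not the fitted numbers used in Fig. 3).
def t_opt(c2_tilde, g_tilde, eps56_over_L=0.1):
    """Maximum simulation time; eps56_over_L stands for epsilon**(5/6) / L."""
    return (2 ** (2 / 3) / 5 ** (5 / 6)) * eps56_over_L / (c2_tilde * g_tilde ** 2) ** (1 / 3)

print(t_opt(c2_tilde=1.0, g_tilde=1e-3))
```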

Many-body spectroscopy

To complete our simulation framework, we also develop tools for resource-efficient readout of Hamiltonian properties. First, we illustrate how to compute two-time correlation functions of the form

$${C}_{O,R}(t)=\left\langle S\right\vert O(t)R(0)\left\vert S\right\rangle ,$$
(33)

using the circuit in Fig. 4a. The real and imaginary parts of CO,R(t) are independently accessed by measuring the ancilla in the X and Y bases, respectively63,64,66,106,107. More concretely, consider the joint state of the ancilla qubit and the system right before measurement,

$$\left\vert {\psi }_{{\rm{f}}}\right\rangle =\frac{1}{\sqrt{2}}\left(\left\vert 0\right\rangle \otimes U(t)\left\vert S\right\rangle +\left\vert 1\right\rangle \otimes U(t)R\left\vert S\right\rangle \right),$$
(34)

where U is the time-evolution operator. Measuring \(X\otimes O\) or \(Y\otimes O\) results in

$${\left\langle X\otimes O\right\rangle }_{{\psi }_{{\rm{f}}}}=\frac{1}{2}\left(\left\langle S\right\vert {R}^{\dagger }O(t)\left\vert S\right\rangle +\left\langle S\right\vert O(t)R\left\vert S\right\rangle \right),$$
(35)
$${\left\langle Y\otimes O\right\rangle }_{{\psi }_{{\rm{f}}}}=\frac{i}{2}\left(\left\langle S\right\vert {R}^{\dagger }O(t)\left\vert S\right\rangle -\left\langle S\right\vert O(t)R\left\vert S\right\rangle \right),$$
(36)

which together give the full complex-valued CO,R(t) as the linear combination

$${C}_{O,R}={\left\langle (X+iY\,)\otimes O\right\rangle }_{{\psi }_{{\rm{f}}}}.$$
(37)

For observables O diagonal in the measurement basis, CO,R can be efficiently estimated in parallel from snapshots. During the ith run, let μ(i) ∈ {x, y} be the randomly sampled ancilla measurement basis, and let a(i) ∈ {0, 1} and \(\left\vert {b}^{(i)}\right\rangle\) be the ancilla and system measurement outcomes, respectively. Then, the estimator can be written as

$$\begin{array}{r}\overline{{C}_{O,R}(t)}=\frac{1}{M}\mathop{\sum }\limits_{i=1}^{M}2\sigma\left({\mu }^{(i)},{a}^{(i)}\right)\left\langle {b}^{(i)}\right\vert O\left\vert {b}^{(i)}\right\rangle \end{array},$$
(38)

where σ is a function taking on the values

$$\begin{array}{l}\sigma (x,0)=+1,\qquad\qquad\sigma (x,1)=-1,\\\sigma (\,y,0)=+i,\qquad\qquad\sigma (\,y,1)=-i,\end{array}$$
(39)

and \(\left\vert {b}^{(i)}\right\rangle\) is the measured projected state.
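
The following self-contained numpy sketch illustrates the estimator of equation (38) on a toy two-qubit Heisenberg coupling (the Hamiltonian, evolution time and shot count are illustrative placeholders): it prepares the ancilla-entangled state of equation (34), samples random ancilla bases and computational-basis snapshots, and compares the estimate with the exact \({C}_{O,R}(t)\).

```python
import numpy as np
from scipy.linalg import expm

# Toy demonstration of the snapshot estimator (38). The two-qubit Heisenberg
# coupling, evolution time t = 0.7 and shot count M are illustrative
# placeholders; the circuit structure follows eq. (34).

rng = np.random.default_rng(0)

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
Had = np.array([[1, 1], [1, -1]], complex) / np.sqrt(2)   # Hadamard
Sdg = np.diag([1.0, -1j])                                 # S^dagger

Hsys = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)      # placeholder Hamiltonian
U = expm(-1j * Hsys * 0.7)

S0 = np.zeros(4, complex); S0[0] = 1.0                    # reference |S> = |00>
R = np.kron(X, I2)                                        # perturbation on site 0
O = np.kron(Z, I2)                                        # diagonal observable
Odiag = np.real(np.diag(O))

# Ancilla-entangled state of eq. (34): (|0> U|S> + |1> U R|S>) / sqrt(2)
psi_f = np.concatenate([U @ S0, U @ (R @ S0)]) / np.sqrt(2)

C_exact = S0.conj() @ U.conj().T @ O @ U @ R @ S0         # <S| O(t) R |S>

M, est = 20000, 0.0
for _ in range(M):
    mu = "x" if rng.random() < 0.5 else "y"               # random ancilla basis
    V = Had if mu == "x" else Had @ Sdg                   # X- or Y-basis rotation
    psi = np.kron(V, np.eye(4)) @ psi_f
    a, b = divmod(int(rng.choice(8, p=np.abs(psi) ** 2)), 4)   # ancilla, system outcome
    sigma = {("x", 0): 1, ("x", 1): -1, ("y", 0): 1j, ("y", 1): -1j}[(mu, a)]
    est += 2 * sigma * Odiag[b]

print(C_exact, est / M)   # agree within sampling fluctuations
```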

These measurements can be used to compute the operator-resolved density of states (6), which can be rewritten as

$$\begin{array}{l}{D}^{A}(\omega )=\displaystyle\int\,{\mathrm{d}}t\,{{\mathrm{e}}}^{i\omega t}\mathop{\sum}\limits_{n}{\rm{Tr}}[A\left\vert n\right\rangle {{\mathrm{e}}}^{-i{\epsilon }_{n}t}\left\langle n\right\vert ]\\\qquad\quad\,=\displaystyle\int\,{\mathrm{d}}t\,{{\mathrm{e}}}^{i\omega t}{\rm{Tr}}[AU(t)],\end{array}$$
(40)

where we have replaced \(\delta (\omega -{\epsilon }_{n})\to \int\,{\mathrm{d}}t\,{{\mathrm{e}}}^{i(\omega -{\epsilon }_{n})t}\). In practice, we sample evolution times t from a probability distribution p(t), chosen such that the integral is normalized to one when ω = ϵn, that is, \(\int\,{\mathrm{d}}t=\mathop{\int}\nolimits_{-\infty }^{\infty }p(t)\,{\mathrm{d}}t=1\). To arrive at equation (7), we can replace the trace with an average over probe states108

$${\rm{Tr}}[AU(t)]={\tilde{{\mathbb{E}}}}_{R \sim {\mathcal{R}}}\left\langle R\right\vert AU(t)\left\vert R\right\rangle ,$$
(41)

where \({\tilde{{\mathbb{E}}}}_{R \sim {\mathcal{R}}}={\rm{Tr}}[{\mathbb{1}}]\,{{\mathbb{E}}}_{R \sim {\mathcal{R}}}\) is a normalized expectation value and \({\rm{Tr}}[{\mathbb{1}}]\) is the dimension of the Hilbert space. This is valid as long as the first moment of the ensemble reproduces the maximally mixed state (a one-design),

$${{\mathbb{E}}}_{R \sim {\mathcal{R}}}\left\vert R\right\rangle \left\langle R\right\vert =\frac{{\mathbb{1}}}{{\rm{Tr}}[{\mathbb{1}}]}.$$
(42)

For example, we can sample perturbations R from the ensemble of random single-qubit rotations. Such perturbations can be implemented using the techniques from Fig. 2, by applying a sequence of two-qubit gates between an ancilla qubit and the system qubits. In general, convergence properties of the estimator depend on higher moments of \({\mathcal{R}}\) (see equation (S27) in Supplementary Information).
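
A minimal numerical sketch of equations (41) and (42), with a placeholder two-qubit Hamiltonian and observable: product states built from Haar-random single-qubit states satisfy the first-moment condition, so their normalized average reproduces Tr[AU(t)] up to sampling fluctuations.

```python
import numpy as np
from scipy.linalg import expm

# Sketch of eqs. (41)-(42): Tr[A U(t)] estimated from Haar-random product probe
# states. The two-qubit Hamiltonian, observable A and sample number are
# illustrative placeholders.

rng = np.random.default_rng(1)

def haar_qubit():
    v = rng.normal(size=2) + 1j * rng.normal(size=2)      # Haar-random single-qubit state
    return v / np.linalg.norm(v)

X = np.array([[0, 1], [1, 0]], complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
H = (np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)) / 4   # placeholder Hamiltonian
A = np.kron(Z, np.eye(2))
U = expm(-1j * H * 0.9)

d, M, acc = 4, 5000, 0.0                                  # Tr[1] = d = 4
for _ in range(M):
    R = np.kron(haar_qubit(), haar_qubit())               # product probe state |R>
    acc += R.conj() @ A @ U @ R                           # <R| A U(t) |R>

print(d * acc / M, np.trace(A @ U))                       # agree within sampling error
```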

Observables such as equation (41) can, in principle, be computed via a modified Hadamard test by applying controlled-time evolution (see ref. 64). Since time evolution is generally the most costly step, we avoid the overhead associated with controlled evolution and instead utilize a reference state \(\left\vert S\right\rangle\) with simple time evolution. In particular, we select an ensemble of observables Os such that

$$\frac{1}{{\mathcal{N}}({O}_{s})}\sum _{s}{O}_{s}U(t)\left\vert S\right\rangle \left\langle S\right\vert {U}^{\,\dagger }(t){O}_{s}={\mathbb{1}}.$$
(43)

The normalization factor \({\mathcal{N}}({O}_{s})\) depends on the choice of ensemble. Then, we can insert this resolution of the identity into equation (41) to get equation (7). In Figs. 4 and 5, we consider the polarized reference state \(\left\vert S\right\rangle ={\left\vert 0\right\rangle }^{\otimes N}\), which is an exact eigenstate of the Heisenberg Hamiltonian, and the ensemble of Pauli-X operators \({O}_{s}={X}_{s}{ = \bigotimes }_{i = 1}^{N}{({X}_{i})}^{{s}_{i}}\), where s is an N-bit string. This ensemble satisfies the condition and has \({\mathcal{N}}({X}_{s})=1\), since the states \({X}_{s}{\left\vert 0\right\rangle }^{\otimes N}\) form an orthonormal basis. For a generic reference state \(\left\vert S\right\rangle\) prepared by applying a circuit S to \({\left\vert 0\right\rangle }^{\otimes N}\), the ensemble \({O}_{s}=S{X}_{s}{S}^{\dagger }\) satisfies the condition and can be measured by applying the inverse preparation circuit \({S}^{\dagger }\) before measuring in the X basis. Lastly, an ensemble that is independent of the reference state is the set of Pauli strings \({P}_{s}{ = \bigotimes }_{i = 1}^{N}{\sigma }_{i}^{{s}_{i}}\), where s is a base-four string and \({\sigma }_{i}^{{s}_{i}}\) denotes one of the four Pauli operators I, X, Y and Z acting on site i; this ensemble can be accessed with randomized measurements (see equations (S56)–(S57) in Supplementary Information) and has a normalization factor \({\mathcal{N}}({P}_{s})={2}^{N}\).
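
A quick numerical check of the resolution of identity (43) for the Pauli-X ensemble: for a small Heisenberg chain (N = 3 and t = 0.8 are placeholder choices), the polarized reference is an exact eigenstate, so the sum over all \({X}_{s}U(t)\left\vert S\right\rangle \left\langle S\right\vert {U}^{\dagger }(t){X}_{s}\) reproduces the identity with \({\mathcal{N}}({X}_{s})=1\).

```python
import numpy as np
from itertools import product
from scipy.linalg import expm

# Check of the resolution of identity (43) for the Pauli-X ensemble.
# N = 3 and t = 0.8 are placeholder choices; the polarized state |000> is an
# exact eigenstate of the Heisenberg chain, so the sum over all bit strings s of
# X_s U(t)|S><S|U^dag(t) X_s equals the identity with N(X_s) = 1.

N, t = 3, 0.8
X = np.array([[0, 1], [1, 0]], complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2)

def op(P, i):
    out = np.array([[1.0 + 0j]])
    for j in range(N):
        out = np.kron(out, P if j == i else I2)
    return out

H = -sum(op(X, i) @ op(X, i + 1) + op(Y, i) @ op(Y, i + 1) + op(Z, i) @ op(Z, i + 1)
         for i in range(N - 1)) / 4                        # spin-1/2 Heisenberg chain
U = expm(-1j * H * t)
S = np.zeros(2 ** N, complex); S[0] = 1.0                  # polarized reference |000>

acc = np.zeros((2 ** N, 2 ** N), complex)
for s in product([0, 1], repeat=N):
    Xs = np.eye(2 ** N, dtype=complex)
    for i, si in enumerate(s):
        if si:
            Xs = Xs @ op(X, i)
    v = Xs @ (U @ S)
    acc += np.outer(v, v.conj())

print(np.allclose(acc, np.eye(2 ** N)))                    # True
```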

Thermal expectation values

The operator-resolved density of states can be used to compute thermal expectation values via64

$${\langle A\rangle }_{\beta }=\frac{\int\,{{\mathrm{e}}}^{-\beta \omega }{D}^{A}(\omega ){\mathrm{d}}\omega }{\int\,{{\mathrm{e}}}^{-\beta \omega }{D}^{{\mathbb{1}}}(\omega ){\mathrm{d}}\omega }.$$
(44)

For example, to compute the magnetic susceptibility, we simply select the operator \(A=\beta {({S}^{z})}^{2}\), where β = 1/T is the inverse temperature. Interestingly, this method of estimating thermal expectation values is insensitive to a uniform spectral broadening of each peak, owing to a cancellation between the numerator and denominator (see the discussion leading to equation (S69) in Supplementary Information). However, it is highly sensitive to noise at low ω, which is exponentially amplified by the Boltzmann factor \({{\mathrm{e}}}^{-\beta \omega }\). To address this, we estimate the SNR for each DA(ω) independently and zero out all points with an SNR below three times the average SNR. This potentially introduces some bias by eliminating peaks with low signal, but it ensures that the effects of shot noise are well controlled.
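
The cancellation of uniform broadening can be verified directly. In the sketch below (a three-spin Heisenberg toy model with placeholder couplings, not a model from this work), equation (44) is evaluated on Gaussian-broadened spectra and agrees with the exact Gibbs average up to grid and truncation error.

```python
import numpy as np

# Check of eq. (44) and of the broadening cancellation noted above, on a
# three-spin Heisenberg toy model (placeholder couplings): the Gibbs average
# computed from Gaussian-broadened spectra agrees with the exact one, since the
# uniform broadening kernel cancels between numerator and denominator.

N = 3
X = np.array([[0, 1], [1, 0]], complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2)

def op(P, i):
    out = np.array([[1.0 + 0j]])
    for j in range(N):
        out = np.kron(out, P if j == i else I2)
    return out

pairs, Js = [(0, 1), (1, 2), (0, 2)], (1.0, 0.6, 0.3)
H = -sum(J * (op(X, i) @ op(X, j) + op(Y, i) @ op(Y, j) + op(Z, i) @ op(Z, j)) / 4
         for (i, j), J in zip(pairs, Js))
Sz = sum(op(Z, i) for i in range(N)) / 2
A = Sz @ Sz                                            # observable, e.g. (S^z_tot)^2

evals, evecs = np.linalg.eigh(H)
An = np.real(np.einsum("in,ij,jn->n", evecs.conj(), A, evecs))   # <n|A|n>

beta, sigma = 2.0, 0.05
omega = np.linspace(evals.min() - 1, evals.max() + 1, 4000)
kernel = np.exp(-(omega[:, None] - evals) ** 2 / (2 * sigma ** 2))  # broadened peaks
DA = (kernel * An).sum(axis=1)                         # broadened D^A(omega)
D1 = kernel.sum(axis=1)                                # broadened D^1(omega)

w = np.exp(-beta * omega)
A_spectral = np.sum(w * DA) / np.sum(w * D1)           # eq. (44) on broadened data
A_exact = np.sum(np.exp(-beta * evals) * An) / np.sum(np.exp(-beta * evals))
print(A_spectral, A_exact)                             # agree up to grid error
```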

Noise modelling

To quantify the effect of noise on the engineered time dynamics, we simulate a microscopic error model by applying a local depolarizing channel with error probability p at each gate. This results in a decay of the obtained signals for the correlator \({D}_{R}^{A}(t)\). The rate of the exponential decay grows roughly linearly with the weight of the measured operators (Extended Data Fig. 2). This scaling with operator weight can be captured by instead applying a single depolarizing channel at the end of the time evolution, with a per-site error probability γt set by an effective noise rate γ. This effective γ also scales roughly linearly with the single-qubit error rate per gate p (Extended Data Fig. 2).
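
For intuition, a toy calculation of such an effective model (assuming the standard Kraus form of the depolarizing channel; γ and the operator weights are placeholders): the attenuation of a weight-w Pauli is \({(1-4\gamma t/3)}^{w}\approx {{\mathrm{e}}}^{-4\gamma wt/3}\), so the decay rate grows linearly with operator weight.

```python
import numpy as np

# Toy illustration of the effective noise model above. Assuming a per-site
# depolarizing channel of Kraus form (1-p) rho + (p/3)(X rho X + Y rho Y + Z rho Z)
# with p = gamma*t, the expectation of a Pauli acting on w sites is attenuated by
# (1 - 4*gamma*t/3)**w ~ exp(-4*gamma*w*t/3), i.e. the decay rate grows linearly
# with operator weight. (With the convention rho -> (1-p) rho + p*I/2 the per-site
# factor is (1-p) instead.) gamma and the weights are illustrative placeholders.

gamma, t = 0.01, np.linspace(0.0, 10.0, 6)
for w in (1, 2, 4, 8):
    exact = (1 - 4 * gamma * t / 3) ** w
    approx = np.exp(-4 * gamma * w * t / 3)
    print(f"weight {w}:", np.round(exact, 3), "~", np.round(approx, 3))
```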

Scaling the approach

Quantum simulations are constrained by the required number of samples and the simulation time needed to reach a certain target accuracy. These factors are crucial for determining the size of Hamiltonians that can be accessed for particular quantum hardware.

Focusing on a single gapped eigenstate, we determine the number of snapshots M needed to distinguish a spectral peak from noise (Extended Data Fig. 4). The signal arises from the overlap of the probe states with the target eigenstate. The noise is given by the variance of the estimator in equation (7) and decays as \(\epsilon \approx {M}^{-1/2}\). For certain ensembles of probe states, the variance can be made independent of system size (see equation (S52) in Supplementary Information). However, a random probe state will have exponentially vanishing overlap with any specific eigenstate. One approach to mitigating this is to initialize probe states with higher overlap. In Extended Data Fig. 3b, we show that, for a spin-1 anti-ferromagnetic (AFM) chain, a simple bond-dimension-2 matrix product state (MPS) can outperform product states by orders of magnitude in ground-state estimation. While bond-dimension-2 states can be efficiently prepared with simple circuits of two-qubit gates, more general ansatze can also be efficiently realized using the simulation techniques described here109,110. Optimized ansatze could be further combined with importance sampling111 to improve the sample efficiency of computing finite-temperature or excited-state properties (Supplementary Fig. 3).

The simulation time \({T}_{\max }\) will depend on the required spectral resolution, which does not scale with system size for a gapped eigenstate. However, the rate of spectral broadening depends sensitively on the weight of measured observables (Extended Data Fig. 2). When the reference state is high in energy, such as the polarized state for an AFM chain, the relevant observables typically have extensive weight, requiring \({T}_{\max } \approx N\) to maintain constant spectral resolution. By contrast, preparing a low-energy reference state, such as the ground state \(\left\vert S\right\rangle =\left\vert GS\right\rangle\), allows coupling to other low-energy states using low-weight operators. This results in a noise-resilient and system-size-independent procedure (Extended Data Fig. 3). We further note that ground-state preparation can be approximate, which would result in additional spectral broadening in the computation of DA(ω). While the spectral resolution requirements should also grow as the gap shrinks, we have illustrated that operator resolution can mitigate this in certain settings (for example, Figs. 5 and 6). As such, understanding the general capabilities of this approach is an interesting direction for continued research.

OEC Hamiltonians

The OEC is a paradigmatic example of a transition-metal complex112,113,114,115,116. The two candidates for the closed S2 state of the OEC are parameterized with Heisenberg models \(H=-{\sum }_{ij}{J}_{ij}{\hat{{\bf{S}}}}_{i}\cdot {\hat{{\bf{S}}}}_{j}\) (refs. 27,117,118,119,120), and the parameters used are summarized in Supplementary Information (OEC parameters). In addition to their energies, further information about the eigenstates can be obtained by choosing the operator A in the operator-resolved density of states appropriately and multiplying by a narrow band-pass filter in Fourier space to isolate a small set of frequencies64,121. For example, we investigate the total spin of the cubane subunit, that is, the three magnetic sites supported on opposite vertices of the cube, using \(A={({\hat{{\bf{S}}}}_{1}+{\hat{{\bf{S}}}}_{2}+{\hat{{\bf{S}}}}_{3})}^{2}\), and compute

$$\begin{array}{r}A(\omega )=\frac{{D}^{\,A}(\omega )}{{D}^{{\mathbb{1}}}(\omega )}\end{array}$$
(45)

evaluated at the energies ωn of the eigenstates, which can be extracted from peaks in the spectral functions (Extended Data Fig. 5). We further use spin projection to improve the estimate in the presence of broadening. For example, if the peak at ωn occurs in spin sector S, we insert the projector PS into both the numerator and the denominator, \(A({\omega }_{n})=\frac{{D}^{\,A{P}_{S}}({\omega }_{n})}{{D}^{{P}_{S}}({\omega }_{n})}\).
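
For illustration, the following sketch evaluates what equation (45) extracts at an isolated, non-degenerate peak, namely \(\left\langle n\right\vert A\left\vert n\right\rangle\), for a small four-site spin-1/2 Heisenberg model with placeholder couplings (not the OEC parameters of the Supplementary Information); an effective cubane spin is then read off from \(\langle A\rangle =S(S+1)\), which is exact only when the eigenstate is also an eigenstate of the cubane total spin.

```python
import numpy as np

# Illustrative evaluation of eq. (45) at isolated peaks: for a 4-site spin-1/2
# Heisenberg model with placeholder couplings (NOT the OEC parameters), compute
# <n|A|n> with A = (S_1 + S_2 + S_3)^2 and the effective cubane spin from
# <A> = S(S+1) (exact only if |n> is also an eigenstate of the cubane total spin).

N = 4
X = np.array([[0, 1], [1, 0]], complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2)

def op(P, i):
    out = np.array([[1.0 + 0j]])
    for j in range(N):
        out = np.kron(out, P if j == i else I2)
    return out

S = [[op(P, i) / 2 for P in (X, Y, Z)] for i in range(N)]     # spin-1/2 operators
J = {(0, 1): -1.0, (0, 2): -0.9, (1, 2): -1.1,                # placeholder couplings
     (0, 3): 0.4, (1, 3): 0.3, (2, 3): 0.5}
H = -sum(Jij * sum(S[i][a] @ S[j][a] for a in range(3)) for (i, j), Jij in J.items())

Scub = [S[0][a] + S[1][a] + S[2][a] for a in range(3)]        # "cubane" sites 1-3
A = sum(sa @ sa for sa in Scub)

evals, evecs = np.linalg.eigh(H)
An = np.real(np.einsum("in,ij,jn->n", evecs.conj(), A, evecs))    # <n|A|n>
Scub_eff = (-1 + np.sqrt(1 + 4 * An)) / 2                          # solve S(S+1) = <A>
for e, s in zip(evals[:4], Scub_eff[:4]):
    print(f"energy {e:+.3f}   effective cubane spin {s:.2f}")
```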

Two-dimensional Heisenberg calculations

The square lattice Heisenberg calculation was performed on a large (L × L) system, with Hamiltonian

$${H}_{2{\mathrm{D}}}=-J\sum _{{\bf{r}}}\left({\hat{{\bf{S}}}}_{{\bf{r}}}\cdot {\hat{{\bf{S}}}}_{{\bf{r}}+\hat{x}}+{\hat{{\bf{S}}}}_{{\bf{r}}}\cdot {\hat{{\bf{S}}}}_{{\bf{r}}+\hat{y}}\right).$$
(46)

We measure the Green’s function \(G({\bf{r}},t)=\left\langle S\right\vert {X}_{{\bf{r}}}(t){X}_{{\bf{0}}}\left\vert S\right\rangle\) from the polarized reference state \(\left\vert S\right\rangle ={\left\vert 0\right\rangle }^{\otimes {L}^{2}}\) (ref. 122). Since Sz is conserved under the dynamics, G(r, t) can be classically simulated by restricting the evolution to the subspace spanned by \(\left\vert S\right\rangle\) and the single spin-flip states \({X}_{{\bf{r}}}\left\vert S\right\rangle\), which has dimension \({L}^{2}+1\). We evolve for equally spaced times up to \(J{T}_{\max }=8\) and select L = 23, which is large enough that G(r, t) vanishes far from the boundaries. Therefore, setting G(r, t) = 0 outside the simulated region provides a good approximation to the L → ∞ limit. We then define projectors onto (unnormalized) plane-wave states \({P}_{{\bf{k}}}=\left\vert {\bf{k}}\right\rangle \left\langle {\bf{k}}\right\vert\), where \(\left\langle {\bf{k}}\right\vert {X}_{{\bf{r}}}\left\vert S\right\rangle ={{\mathrm{e}}}^{-i{\bf{k}}\cdot {\bf{r}}}\). Then, \({D}^{{P}_{{\bf{k}}}}(\omega )\) can be written as

$$D^{P_{{\mathbf{k}}}}(\omega) = \int \,{\mathrm{d}}t\, {\mathrm{e}}^{i (\omega - \omega_0) t}\sum\limits_{{\mathbf{r}}} \underbrace{\left\langle S \right\vert X_{{\mathbf{0}}}\, P_{{\mathbf{k}}}\, X_{{\mathbf{r}}} \left\vert S \right\rangle}_{{\mathrm{e}}^{-i {{\mathbf{k}}} \cdot ({{\mathbf{r}}} - {{\mathbf{0}}})}} \underbrace{\left\langle S \right\vert X_{{\mathbf{r}}}(t) X_{{\mathbf{0}}} \left\vert S \right\rangle}_{G({\mathbf{r}},t)},$$
(47)

which reduces to the Fourier transform G(k, ω) when the energy of the polarized state ω0 is set to zero. Plotting this for a continuous set of k and ω produces the spectral weight depicted in Fig. 6. This further shows that finite-size systems are sufficient to simulate extended systems at finite evolution times.
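
The single spin-flip restriction described above can be reproduced with a short numpy sketch (the momentum, frequency grid and J = 1 are illustrative choices): projecting equation (46) onto the one-magnon subspace of the polarized reference gives a sparse hopping problem, from which G(r, t) and the spectral weight \({D}^{{P}_{{\bf{k}}}}(\omega )\) of equation (47) follow.

```python
import numpy as np

# Minimal sketch of the single spin-flip calculation above. The momentum,
# frequency grid and J = 1 are illustrative choices. For spin-1/2, the one-magnon
# block of eq. (46), measured from the polarized-state energy (omega_0 = 0), has
# diagonal elements J*z_r/2 (z_r = number of neighbours of site r) and hopping -J/2.

L, J = 23, 1.0
N = L * L

H1 = np.zeros((N, N))
for x in range(L):
    for y in range(L):
        r = x * L + y
        for dx, dy in ((1, 0), (0, 1)):                  # open-boundary bonds
            if x + dx < L and y + dy < L:
                rp = (x + dx) * L + (y + dy)
                H1[r, r] += J / 2
                H1[rp, rp] += J / 2
                H1[r, rp] = H1[rp, r] = -J / 2

evals, evecs = np.linalg.eigh(H1)
r0 = (L // 2) * L + L // 2                               # spin flip at the centre

times = np.linspace(-8.0, 8.0, 161)                      # J*T_max = 8 as in the text
# G(r, t) = <S| X_r(t) X_0 |S> restricted to the one-magnon subspace
G = np.array([evecs @ (np.exp(-1j * evals * t) * evecs[r0]) for t in times])

coords = np.arange(N)
xs, ys = coords // L - L // 2, coords % L - L // 2       # positions relative to r0
kx, ky = np.pi / 2, np.pi / 2                            # example momentum
Gk = G @ np.exp(-1j * (kx * xs + ky * ys))               # spatial Fourier transform

dt = times[1] - times[0]
omegas = np.linspace(0.0, 4.0 * J, 200)
D = np.array([np.sum(np.exp(1j * w * times) * Gk) * dt for w in omegas])
print("peak near", omegas[np.argmax(np.abs(D))],
      "; one-magnon dispersion of H1:", J * (2 - np.cos(kx) - np.cos(ky)))
```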

By extracting the peak position in ω for each k in the vicinity of the single-particle excitations, we estimate the dispersion relation ω(k) associated with a single spin-flip excitation. Computations of the many-particle Green’s function could be performed similarly by applying multisite perturbations to extract finite-temperature properties and characterize the interactions between the quasi-particles.