A Survey of Quantum Computing For Finance
Dylan A. Herman*¹, Cody Googin*², Xiaoyuan Liu*³, Alexey Galda⁴, Ilya Safro³, Yue Sun¹, Marco Pistoia¹, and Yuri Alexeev⁵

¹JPMorgan Chase Bank, N.A., New York, NY, USA — {dylan.a.herman,yue.sun,marco.pistoia}@jpmorgan.com
²University of Chicago, Chicago, IL, USA — codygoogin@uchicago.edu
³University of Delaware, Newark, DE, USA — {joeyxliu,isafro}@udel.edu
⁴Menten AI, San Francisco, CA, USA — alexey.galda@menten.ai
⁵Argonne National Laboratory, Lemont, IL, USA — yuri@anl.gov

*These authors contributed equally to this work.

arXiv:2201.02773v4 [quant-ph] 27 Jun 2022
Contents
1 Introduction 1
5 Stochastic Modeling 12
5.1 Monte Carlo Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
5.2 Numerical Solutions of Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
5.2.1 Quantum-linear-system-based algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
5.2.2 Variational algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
5.3 Financial Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
5.3.1 Derivative Pricing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Collateralized Debt Obligations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
5.3.2 Risk Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Value at Risk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Economic Capital Requirement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
The Greeks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Credit Value Adjustments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
6 Optimization 20
6.1 Combinatorial Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
6.1.1 Quantum Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
6.1.2 Quantum Approximate Optimization Algorithm . . . . . . . . . . . . . . . . . . . . . . . 22
6.1.3 Variational Quantum Eigensolver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
6.1.4 Variational Quantum Imaginary Time Evolution (VarQITE) . . . . . . . . . . . . . . . . 23
6.1.5 Optimization by Quantum Unstructured Search . . . . . . . . . . . . . . . . . . . . . . . . 24
6.2 Convex Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
6.3 Large-Scale Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
6.4 Financial Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
6.4.1 Portfolio Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Combinatorial Formulations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Convex Formulations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
6.4.2 Swap Netting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
6.4.3 Optimal Arbitrage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
6.4.4 Identifying Creditworthiness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
6.4.5 Financial Crashes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
7 Machine Learning 28
7.1 Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
7.2 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
7.2.1 Quantum Support Vector Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
7.2.2 Quantum Nearest-Neighbors Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
7.3 Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
7.3.1 Quantum k-Means Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
7.3.2 Quantum Spectral Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
7.4 Dimensionality Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
7.5 Generative Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
7.5.1 Quantum Circuit Born Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
7.5.2 Quantum Bayesian Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
7.5.3 Quantum Boltzmann Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
7.5.4 Quantum Generative Adversarial Networks . . . . . . . . . . . . . . . . . . . . . . . . . . 32
7.6 Quantum Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
7.6.1 Quantum Feedforward Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
7.6.2 Quantum Convolutional Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
7.6.3 Quantum Graph Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
7.7 Quantum Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
7.8 Natural Language Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
7.9 Financial Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
7.9.1 Anomaly Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
7.9.2 Asset Pricing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
7.9.3 Implied Volatility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1 Introduction
Quantum computation relies on a fundamentally different means of processing and storing information than
today’s classical computers use. The reason is that the information does not obey the laws of classical mechanics
but those of quantum mechanics. Usually, quantum-mechanical effects become apparent only at very small scales,
when quantum systems are properly isolated from the surrounding environments. These conditions, however,
make the realization of a quantum computer a challenging task. Even so, according to a McKinsey & Co.
report [1], finance is estimated to be the first industry sector to benefit from quantum computing (Section 2),
largely because of the potential for many financial use cases to be formulated as problems that can be solved
by quantum algorithms suitable for near-term quantum computers. This is important because current quantum
computers are small-scale and noisy, yet the hope is that we can still find use cases for them. In addition, a variety
of quantum algorithms will be more applicable when large-scale, robust quantum computers become available,
which will significantly speed up computations used in finance.
The diversity in the potential hardware platforms or physical realizations of quantum computation is com-
pletely unlike any of the classical computational devices available today. There exist proposals based on vastly
different physical systems: superconductors, trapped ions, neutral atoms, photons, and others (Section 3.3), yet
no clear winner has emerged. A large number of companies are competing to be the first to develop a quan-
tum computer capable of running applications useful in a production environment. In theory, any computation
that runs on a quantum computer may also be executed on a classical computer—the benefit that quantum
computing may bring is the potential reduction in time and memory space with which the computational tasks
are performed [2], which in turn may lead to unprecedented scalability and accuracy of the computations. In
addition, any classical algorithm can be modified (in a potentially nontrivial way) such that it can be executed
on a universal quantum computer, but without any speedup. Obtaining quantum speedup requires developing
new algorithms that specifically take advantage of quantum-mechanical properties. Thus, classical and quantum
computers will need to work together. Moreover, in order to solve a real-world problem, a classical computer
should be able to efficiently insert and obtain the necessary data into and from a quantum computer.
This promised efficiency of quantum computers enables certain computations that are otherwise infeasible
for current classical computers to complete in any reasonable amount of time (Section 3). In general, however,
the speedup for each task can vary greatly or may even be currently unknown (Section 4). While these speedups,
if found, can have a tremendous impact in practice, they are typically difficult to obtain. And even if they are
discovered, the underlying quantum hardware must be powerful enough to minimize errors without introducing
an overhead that counteracts the algorithmic speedup. We do not know exactly when such robust hardware will
exist. Thus, the goal of quantum computing research is to develop quantum algorithms (Section 4) that solve
useful problems faster and to build robust hardware platforms to run them on. The industry needs to understand
the problems that can best benefit from quantum computing and the extent of these benefits, in order to make
full use of its revolutionary power when production-grade quantum devices are available.
To this end, we offer a comprehensive overview of the applicability of quantum computing to finance. Ad-
ditionally, we provide insight into the nature of quantum computation, focusing particularly on specific financial
problems for which quantum computing can provide potential speedups compared with classical computing. The
sections in this article can be grouped into two parts. The first part contains introductory material: Section
2 introduces computationally intensive financial problems that potentially benefit from quantum computing,
whereas Sections 3 and 4 introduce the core concepts of quantum computation and quantum algorithms. The
second part—the main focus of the article—reviews research performed by the scientific community to develop
quantum-enhanced versions of three major problem-solving techniques used in finance: stochastic modeling (Sec-
tion 5), optimization (Section 6), and machine learning (Section 7). The connections between financial problems
and these problem-solving techniques are outlined in Table 1. Section 8 covers experiments on current quantum
hardware that have been performed by researchers to solve financial use cases.
The article by Bouland et al. [4] focuses on works done by the QC Ware team with a central emphasis on
quantum Monte Carlo integration. Some more recent work has been done in that area and is included in our
survey. Bouland et al. also focus on portfolio optimization. While this is an important financial problem that
we also cover, we have tried to include other financial applications that use optimization as well. Moreover, we
tried to delve deeper into the various quantum machine learning approaches for generative modeling and neural
networks.
The survey by Orus et al. [5] does an excellent job at highlighting financial applications that make use of
quantum annealers. However, we believe the quantitative finance and quantum computing communities would
also benefit from hearing about other quantum optimization approaches and devices. We also tried to delve a
little bit deeper into how quantum annealing works and discuss universal adiabatic computation. Given that the
field of quantum computation is so dynamic, we believe the community can benefit from seeing more recent work.
A recent survey by Pistoia et al. [6] covers a variety of quantum machine learning algorithms applicable to
finance. In our review, we discuss a broader array of applications outside the realm of machine learning, such as
financial applications that make use of stochastic modeling and optimization.
There are also several problem-specific surveys, such as those on derivative pricing [7], supply chain finance [8], and high-frequency trading [9], that tackle goals different from those of our survey.
significant time to train algorithms, whereas quantum algorithms may be able to speed up these computationally
intensive and time-consuming components. As financial institutions continue to generate data, they must be
able to employ that data in a functional way to improve their business strategy. Additionally, data organization
can allow financial institutions to engage with customers’ finances more specifically and effectively, supporting
their customer service and keeping customers engaged despite other options such as FinTech. Much of this data
analysis is done by dealing with large matrices, which can be computationally demanding for classical computers.
implied volatility calculation (Section 7.9.3). Applications of quantum machine learning in these use cases will
be discussed in the respective sections.
In Table 1 we present a summary of the quantum techniques and their applicable financial use cases covered
in this survey.
Table 1: Financial Use Cases with Corresponding Classical and Quantum Solutions
of quantum devices to solve more useful problems faster and/or more accurately than classical computers, such
as large-scale financial problems (Section 2.2), is called quantum advantage. Quantum advantage is more elusive
and, arguably, has not been demonstrated yet. The main challenge is that existing quantum hardware does not
yet seem capable of running algorithms on large enough problem instances. Current quantum hardware (Section
3.3) is in the noisy intermediate-scale quantum (NISQ) technology era [32], meaning that the current quantum
devices are underpowered and suffer from multiple issues. The fault-tolerant era refers to a yet unknown period
in the future in which large-scale quantum devices that are robust against errors will be present. In the next two
subsections we briefly discuss quantum information (Section 3.1) and models for quantum computation (Section
3.2). In the last subsection (Section 3.3) we give an overview of the current state of quantum hardware and the
challenges that must be overcome.
¹Generally, quantum states live in a projective Hilbert space, which can be infinite-dimensional [33].
vector outer product.² For n qubits, the set {|k⟩ : k ∈ {0,1}^n} forms the computational basis. Measuring in the computational basis probabilistically projects the quantum state onto a computational basis state. A measurement in the context of quantum computation, without further clarification, usually refers to a measurement in the computational basis.
A positive semi-definite operator ρ on the quantum state space with Tr(ρ) = 1 is called a density operator, where Tr(·) is the trace operator. The classical probabilistic mixture of pure states |ψ_i⟩ (in other words, state vectors) with probabilities p_i has density operator ρ := Σ_i p_i |ψ_i⟩⟨ψ_i| and is called a mixed state. In general, a density operator ρ represents a mixed state if Tr(ρ²) < 1 and a pure state if Tr(ρ²) = 1. Fidelity is one of the distance metrics for quantum states ρ and σ, defined as F(ρ, σ) = Tr(√(ρ^{1/2} σ ρ^{1/2})) [2]. The fidelity of an operation
refers to the computed fidelity between the quantum state that resulted after the operation was performed and
the expected state. Decoherence is the loss of information about a quantum state to its environment, due to
interactions with it. Decoherence constitutes an important source of error for near-term quantum computers.
For further details, the works of Nielsen and Chuang [2] and Kitaev et al. [37] are the standard references on
the subject of quantum computation. The former covers a variety of topics from quantum information theory,
while the latter focuses on quantum computational complexity theory.
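As an illustration of the fidelity definition above, the following numerical sketch (ours, not from the survey; it assumes NumPy and SciPy are available) evaluates F(ρ, σ) for two single-qubit pure states:

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, sigma):
    # F(rho, sigma) = Tr( sqrt( sqrt(rho) @ sigma @ sqrt(rho) ) )
    root = sqrtm(rho)
    return float(np.real(np.trace(sqrtm(root @ sigma @ root))))

ket0 = np.array([[1.0], [0.0]])                 # |0>
ketp = np.array([[1.0], [1.0]]) / np.sqrt(2.0)  # |+>
rho = ket0 @ ket0.conj().T                      # pure-state density operators
sigma = ketp @ ketp.conj().T

print(round(fidelity(rho, rho), 4))             # -> 1.0 (identical states)
print(round(fidelity(rho, sigma), 4))           # -> 0.7071, i.e. |<0|+>|
```

For pure states the fidelity reduces to the absolute overlap of the state vectors, which is why the second value equals |⟨0|+⟩| = 1/√2.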
²The Pauli operators are also sometimes denoted as σx, σy, and σz, respectively.
³There are other models of quantum computation that are not based on qubits, such as n-level quantum systems, for n > 2, called qudits [38], and continuous-variable, infinite-dimensional systems [39]. This article discusses only quantum models and algorithms based on qubits.
under the operation of composition, the Clifford group, which is the normalizer of the group of Pauli gates [2].
The time complexity of an algorithm is measured by the depth of the circuit implementing it; when the circuit runs on hardware, depth is counted in terms of native gates. The majority of commercially available quantum
computers implement the quantum circuit model.
We note that quantum circuits generated for gate-based quantum computers can also be simulated on classical
hardware. It is generally a computationally inefficient process, however, with enormous memory requirements
scaling exponentially with the number of qubits [45–48]. But for certain types of computations such as computing
the expectation values of observables, these requirements can be dramatically reduced by using tensor network
simulators [49–51]. Classical simulation is commonly used for the development of algorithms, verification, and
benchmarking, to name a few applications.
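As a minimal illustration of classical statevector simulation and its exponential memory cost, here is a hedged NumPy sketch (ours, not from the survey) that prepares a two-qubit Bell state:

```python
import numpy as np

# Gate-based simulation stores the full statevector: 2**n amplitudes for n qubits.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])               # control = first qubit

state = np.zeros(4)
state[0] = 1.0                                # |00>
state = np.kron(H, np.eye(2)) @ state         # Hadamard on qubit 0
state = CNOT @ state                          # entangle into a Bell state
print(np.round(state, 4))                     # -> [0.7071 0.     0.     0.7071]

# The exponential cost: 30 qubits already require 2**30 complex amplitudes,
# roughly 16 GB at 16 bytes each.
```

Tensor network simulators avoid materializing this full vector when the circuit's entanglement structure permits, which is the reduction alluded to above.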
3.3.1 Noise
Qubits lose their desired quantum state or decohere (Section 3.1) over time. The decoherence times of each
of the qubits are important attributes of a quantum device. Various mechanisms of qubit decoherence exist,
such as quantum amplitude and phase relaxation processes and depolarizing quantum noise [2]. One potentially
serious challenge for superconducting systems is cosmic rays [57]. While single-qubit decoherence is relatively well
understood, the multiqubit decoherence processes, generally called crosstalk, pose more serious challenges [58].
Even two-qubit operations have an order of magnitude higher error rate than do single-qubit operations. This
makes it difficult to entangle a large number of qubits without errors. Various error-mitigation techniques [59, 60]
have been developed for the near term. In the long term, quantum error correction using logical operations, briefly
mentioned in Section 3.2, will be necessary [43, 61]. However, this requires multiple orders of magnitude more
physical qubits than available on today’s NISQ systems, and native operations must have sufficiently low error
rates. Thus, algorithms executed on current hardware must contend with the presence of noise and are typically
limited to low-depth circuits. In addition, current quantum error correction techniques theoretically combat local
errors [43, 58, 61]. The robustness of these techniques to non-local errors is still being investigated [62].
3.3.2 Connectivity
Quantum circuits need to be optimally mapped to the topology of a quantum device in order to minimize errors
and total run time. With current quantum hardware, qubits can interact only with neighboring subsets of other
qubits. Existing superconducting and trapped-ion processors have a fixed topology that defines the allowed
connectivity. However, trapped-ion devices can reorder the ions in the trap to allow originally non-neighboring qubits to interact [63]. Since this approach utilizes a different degree of freedom from that used
to store the quantum information for computation, technically the device can be viewed as having an all-to-all
connected topology. For existing superconducting processors, the positioning of the qubits cannot change. As a
result, two-qubit quantum operations between remote qubits have to be mediated by a chain of additional two-
qubit SWAP gates via the qubits connecting them. Moreover, the two-qubit gate error of current NISQ devices
is high. Therefore, quantum circuit optimizers and quantum hardware must be developed with these limitations
in mind. Connectivity is also a problem with current quantum annealers, which is discussed in Section 6.1.1.
if and only if h ∈ O(f(n)) and h ∈ Ω(f(n)).⁴ Computational problems, which are represented as functions, are
divided into complexity classes. The most common classes are defined based on decision problems, which can
be represented as Boolean functions. P denotes the complexity class of decision problems that can be solved in
polynomial time (i.e., the time required is upper bounded, in “Big-O” terms, by a polynomial in the input size) by a
deterministic Turing machine (TM), in other words, a traditional classical computer [2, 37]. An amount of time or
space required for computation that is upper bounded by a polynomial is called asymptotically efficient, whereas,
for example, an amount upper bounded by an exponential function is not. However, asymptotic efficiency does
not necessarily imply efficiency in practice; the coefficients or order of the polynomial can be large. NP denotes
the class of decision problems for which proofs of correctness exist, called witnesses, that can be executed in
polynomial time on a deterministic TM [37].
A problem is hard for a complexity class if any problem in that class can be reduced in polynomial time to
it, and it is complete if it is both hard and contained in the class [65]. Problems in P can be solved efficiently on
a classical or quantum device. While NP contains P, it also contains many problems not known to be efficiently
solvable on either a quantum or classical device. #P is a set of counting problems. More specifically, it is the
class of functions that count the number of witnesses that can be computed in polynomial time on a deterministic
TM for NP problems [65]. BQP, which contains P, denotes the class of decision problems solvable in polynomial
time on a quantum computer with bounded probability of error [2]. A common way to represent the complexity
of quantum algorithms is by using query complexity [66]. Roughly speaking, the problem setup provides the
algorithm access to functions that the algorithm considers as “black boxes.” These are represented as unitary
operators called quantum oracles. The asymptotic complexity is computed based on the number of calls, or
queries, to the oracles.
The goal of quantum algorithms research is to develop quantum algorithms that provide computational
speedups, a reduction in computational complexity. In practice, however, one commonly finds algorithms with
improved efficiency without any proven reduction in complexity. These algorithms typically utilize heuristics. As
mentioned in Section 3.3, a majority of algorithms for NISQ fall into this category. When one can theoretically
show a reduction in asymptotic complexity, the algorithm has a provable speedup for the problem. In the
discussions that follow, we emphasize the category into which each algorithm currently falls.
⁴Let ∗ signify O, Ω, or Θ. It is common practice to abuse notation and use the symbol = instead of ∈ to signify g(n) is in ∗(f(n)) [64]. Similarly, it is common to interchange the phrases “the complexity is ∗(f(n))” and “the complexity is in ∗(f(n)).”
addition, if |Φ⟩ has nonzero overlap with the marked state |φ⟩, namely, ⟨Φ|φ⟩ ≠ 0, then without loss of generality |Φ⟩ = sin(θ_a)|φ⟩ + cos(θ_a)|φ^⊥⟩, where ⟨φ|φ^⊥⟩ = 0 and sin(θ_a) = ⟨Φ|φ⟩. For example, in the case of Grover's algorithm, |Φ⟩ is a uniform superposition over states corresponding to the elements of X, and |φ⟩ is the marked state. QAA will amplify the amplitude sin(θ_a), and as a consequence the probability sin²(θ_a) of measuring |φ⟩ (using an observable with |φ⟩ as an eigenstate), to Ω(1)⁵ using O(1/sin(θ_a)) queries to U and O_φ. This is done utilizing a series of interleaved reflection operators involving U, |ψ⟩, and O_φ [73].
This algorithm can be understood as a quantum analogue of classical probabilistic boosting techniques and achieves a quadratic speedup [73]. Classically, O(1/sin²(θ_a)) samples would be required, in expectation, to measure |φ⟩ with Ω(1) probability. QAA is an indispensable technique for quantum computation because of the inherent randomness of quantum mechanics. Improvements to the algorithm have been made to handle issues that occur when one cannot perform the required reflection about |ψ⟩ [74], when the number of queries to make (which requires knowing sin(θ_a)) is not known [75, 76], and when O_φ is imperfect with bounded error [77]. The last two improvements are also useful for Grover's algorithm (Section 4.1). A version also exists for quantum algorithms (i.e., a unitary U) that can be decomposed into multiple steps with different stopping times [78].
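The quadratic speedup can be illustrated numerically. The sketch below (ours, not from the survey) uses the standard amplification identity that k rounds rotate the success probability to sin²((2k+1)θ_a), assuming a single marked item in a space of size N:

```python
import math

# After k rounds of amplitude amplification, the success probability is
# sin((2k + 1) * theta_a)**2, where sin(theta_a) is the initial amplitude.
N = 1024                                     # assumed search-space size, one marked item
theta_a = math.asin(1 / math.sqrt(N))
k_opt = math.floor(math.pi / (4 * theta_a))  # O(sqrt(N)) queries suffice
p_quantum = math.sin((2 * k_opt + 1) * theta_a) ** 2
print(k_opt, round(p_quantum, 4))            # -> 25 0.9995
# Classical sampling would need on the order of N = 1024 draws for the same
# constant success probability, versus ~25 quantum queries.
```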
⁵Ω(1) probability means the probability that the event occurs is at least some constant. This is used to signify that the desired output of the algorithm occurs with a probability that is independent of the variables in the algorithm's time complexity or query complexity. In addition, this implies that an asymptotically efficient number of repetitions (classical probabilistic boosting) can be used to increase the probability of success close to 1 (e.g., see Section 4.3).
for desired error ε. This complexity was computed under the assumption that a procedure based on higher-order Suzuki–Trotter methods [90, 91] was used for sparse Hamiltonian simulation. However, a variety of more efficient techniques have been developed since the paper's inception [92, 93]. These potentially reduce the original HHL's quadratic dependence on sparsity. HHL's query complexity, which is independent of the complexity of Hamiltonian simulation, is O(κ²/ε) [94].
Alternative quantum linear systems solvers also exist that have better dependence on some of the parameters, such as almost linear dependence on the condition number from Ambainis [78] and polylogarithmic dependence on 1/ε for precision from Childs, Kothari, and Somma (CKS) [87]. In addition, Wossnig et al. [95] utilized the quantum singular value estimation (QSVE) algorithm of Kerenidis and Prakash [69] to implement a QLS solver for dense matrices. This algorithm has O(√N polylog(N)) dependence on N and hence offers no exponential speedup. However, it still obtains a quadratic speedup over HHL for dense matrices. Following this, Kerenidis and Prakash generalized the QSVE-based linear systems solver to handle both sparse and dense matrices and introduced
a technique for spectral norm estimation [96]. In addition, the quantum singular value transform (QSVT)
framework provides methods for QLS that have the improved dependencies on κ and 1/ε mentioned above
[97, 98]. The QSVT framework also provides algorithms for a variety of other linear algebraic routines such as
implementing singular-value-threshold projectors [69, 99] and matrix-vector multiplication [97]. As an alternative
to QLS based on the QSVT, Costa et al. [94] devised a discrete-time adiabatic approach [100] to the QLSP that
has optimal [88] query complexity: linear in κ and logarithmic in 1/ε. Since QLS algorithms invert Hermitian
matrices, they can also be used to compute the Moore–Penrose pseudoinverse of an arbitrary matrix. This
requires computing the Hermitian dilation of the matrix and filtering out singular values that are near zero
[88, 99].
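The pseudoinverse construction just described can be sketched classically with NumPy; the dilation and the singular-value threshold `tau` below are illustrative choices, with the quantum algorithms performing the analogous filtering via the QSVT:

```python
import numpy as np

# Hermitian dilation: embed an arbitrary (even non-square) A into a Hermitian
# matrix so that Hermitian-matrix machinery (like QLS solvers) applies.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
m, n = A.shape
dilation = np.block([[np.zeros((m, m)), A],
                     [A.conj().T, np.zeros((n, n))]])
assert np.allclose(dilation, dilation.conj().T)  # Hermitian by construction

# Pseudoinverse by filtering singular values near zero; the threshold `tau`
# stands in for the near-zero cutoff mentioned in the text.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
tau = 1e-10
s_inv = np.where(s > tau, 1.0 / np.maximum(s, tau), 0.0)
A_pinv = Vt.T @ np.diag(s_inv) @ U.T
print(np.allclose(A_pinv, np.linalg.pinv(A)))    # -> True
```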
While solving the QLSP does not provide classical access to the solution vector, this can still be done
by utilizing methods for vector-state tomography [101]. Applying tomography would no longer allow for an
exponential speedup in the system size N . However, polynomial speedups using tomography for some linear-
algebra-based algorithms are still possible (Section 6.2). In addition, without classical access to the solution,
certain statistics, such as the expectation of an observable with respect to the solution state, can still be obtained
without losing the exponential speedup in N . Providing quantum access to A and ~b is a difficult problem that can,
in theory, be dealt with by using quantum models for data access. Two main models of data access for quantum
linear algebra routines exist [97]: the sparse-data access model and the quantum-accessible data structure. The
sparse-data access model provides efficient quantum access to the nonzero elements of a sparse matrix. The
quantum-accessible data structure, suitable for fixed-sized, non-sparse inputs, is a classical data structure stored
in a quantum memory (e.g., qRAM) and provides efficient quantum queries to its elements [97].
For problems that involve only low-rank matrices, classical Monte Carlo methods for numerical linear algebra
(e.g., the FKV algorithm [102]) can be used. These techniques have been used to produce classical algorithms
that have an asymptotic exponential speedup in dimension for various problems involving low-rank linear algebra
(e.g., some machine learning problems) [103, 104]. As of this writing, these classical dequantized algorithms have
impractical polynomial dependence on other parameters, making the quantum algorithms still useful. In addition,
since these results do not apply to sparse linear systems, there is still the potential for provable speedups for
problems involving sparse, high-rank matrices.
4.7 Variational Quantum Algorithms
Variational quantum algorithms (VQAs) [130], also known as classical-quantum hybrid algorithms, are a class
of algorithms, typically for gate-based devices, that modify the parameters of a quantum (unitary) procedure
based on classical information obtained from running the procedure on a quantum device. This information is
typically in the form of a cost function dependent on the expectations of observables with respect to the state
that the quantum procedure produces. Generally, the training of variational quantum algorithms is NP-hard [131], which can prevent one from efficiently finding arbitrarily good approximate local minima. In addition, these methods
are heuristic. The primordial quantum-circuit-based variational algorithm is called the variational quantum
eigensolver (VQE) [132] and is used to compute the minimum eigenvalue of an observable (e.g., the ground-state
energy of a physically realizable Hamiltonian). However, it is also applicable to optimization problems (Section
6.1.3) and both linear (e.g., quantum linear systems [133, 134]) and nonlinear problems (Section 5.2.2). VQE
utilizes one of the quantum variational principles based on the Rayleigh quotient of a Hermitian operator (Section
6.1.3). The quantum procedure that provides the state, called the ansatz, is typically a parameterized quantum
circuit (PQC) built from Pauli rotations and two-qubit gates (Section 3.2.1). For VQAs in general, finding the
optimal set of parameters is a non-convex optimization problem [135]. A variation of this algorithm, specific
to combinatorial optimization problems, known as the quantum approximate optimization algorithm (QAOA)
[136], utilizes the alternating operator ansatz [137] and is typically used for observables that are diagonal in the
computational basis. There are also variational algorithms for approximating certain dynamics, such as real-time
and imaginary-time quantum evolution that utilize variational principles for dynamics [138] (e.g., McLachlan’s
principle [139]). Furthermore, variational algorithms utilizing PQCs and various cost functions have been applied
to machine learning tasks (Section 7.6) [140]. VQAs are seen as one of the leading potential applications of near-term quantum computing.
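As a toy illustration of the VQA loop, the following NumPy sketch (ours, not from the survey) minimizes the Rayleigh quotient of a single-qubit observable with a one-parameter ansatz; on hardware, the expectation would be estimated from repeated measurements rather than computed exactly:

```python
import numpy as np

H = np.array([[1.0, 0.0], [0.0, -1.0]])  # observable: Pauli-Z, minimum eigenvalue -1

def ansatz(theta):
    # One-parameter ansatz RY(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def cost(theta):
    psi = ansatz(theta)
    return psi @ H @ psi  # Rayleigh quotient <psi(theta)|H|psi(theta)>

# Classical outer loop: finite-difference gradient descent on the parameter.
theta, lr, eps = 0.1, 0.2, 1e-4
for _ in range(500):
    grad = (cost(theta + eps) - cost(theta - eps)) / (2 * eps)
    theta -= lr * grad
print(round(float(cost(theta)), 4))  # -> -1.0, the minimum eigenvalue of H
```

The optimizer converges to θ = π, where the ansatz prepares |1⟩, the ground state of Z; real VQE instances face the same loop but with non-convex, high-dimensional landscapes.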
5 Stochastic Modeling
Stochastic processes are commonly used to model phenomena in physical sciences, biology, epidemiology, and
finance. In finance, stochastic modeling is often used to help make investment decisions, usually with a goal of
maximizing return and minimizing risks. Quantities that are descriptive of the market condition, including stock
prices, interest rates, and their volatilities, are often modeled by stochastic processes and represented by random
variables. The evolution of such stochastic processes is governed by stochastic differential equations (SDEs), and
stochastic modeling aims to solve the SDEs for the expectation value of a certain random variable of interest,
such as the expected payoff of a financial derivative at a future time, which determines the price of the derivative.
Although analytical solutions for SDEs are available for a few simple cases, such as the well-known Black–
Scholes equation for European options [106], the vast majority of financial models involve SDEs of more complex
forms that require numerical approaches.
In the following subsections, we review two commonly used numerical methods for solving SDEs, namely,
Monte Carlo integration (MCI, Section 5.1) and numerical solutions of differential equations (ODEs/PDEs,
Section 5.2), and we discuss respective quantum solutions and potential advantages. Then, in Section 5.3, we
show example financial applications for these quantum solutions.
6 Somma et al. [120] produced a quantum algorithm with provable speedup, in terms of the spectral gap, for simulated annealing using quantum walks (Section 4.6).
5.1 Monte Carlo Integration
Monte Carlo methods have been used for inference [147], integration [148], and optimization [149]. Monte Carlo integration
(MCI), the focus of this subsection, is critical to finance for pricing and risk predictions [12]. These tasks often
require large amounts of computational power and time to achieve the desired precision in the solution. Therefore,
MCI is another key technique, heavily used in finance, for which a quantum approach is appealing. Interestingly,
there exists a quantum algorithm with a proven advantage over classical Monte Carlo methods for numerical
integration.
In stochastic modeling, MCI typically is used to estimate the expectation of a quantity that is a function of
other random variables. The methodology usually starts with a stochastic model (e.g. SDEs) for the underlying
random variables from which samples can be taken and the corresponding values of the target quantity subse-
quently evaluated given the samples drawn. For example, consider the following sequence of random variables,
X0 , . . . , XT . These random variables are often assumed to follow a diffusion process [107], where each Xt with
t ∈ {0, 1, . . . , T } represents the value of the quantity of interest at time t, and the collective values of the ran-
dom variables from the outcome of a single drawing are known as a sample path. Suppose the quantity whose
expectation we want to compute is a function of this process at various time points: g(X0 , . . . , XT ). In order to
estimate the expectation, many sample paths are drawn, and sample averages of g are computed. The estimation
error decays as O(1/√N_s), independent of the problem dimension, where N_s is the number of paths taken. This is in accordance with the law of large numbers and Chebyshev's inequality [148].
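The O(1/√N_s) decay can be observed numerically. The sketch below, with arbitrary model parameters, draws GBM sample paths and estimates E[X_T], for which a closed form (s0·e^{μT}) is available for comparison:

```python
import numpy as np

rng = np.random.default_rng(7)

def gbm_paths(n_paths, n_steps, s0=100.0, mu=0.05, sigma=0.2, T=1.0):
    """Sample paths of geometric Brownian motion (exact log-normal increments)."""
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return s0 * np.exp(np.cumsum(log_increments, axis=1))

# g(X_0, ..., X_T) here is simply the terminal value X_T, whose expectation
# is known in closed form for GBM: E[X_T] = s0 * exp(mu * T).
for n in (1_000, 100_000):
    est = gbm_paths(n, 50)[:, -1].mean()
    print(n, est)  # the error shrinks roughly like 1/sqrt(n)
```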
Quantum MCI (QMCI) [150–152] provides a quadratic speedup for Monte Carlo integration by making use
of the QAE algorithm (Section 4.4).⁷ With QAE, using the example from the previous paragraph, the error in
the expectation computation decays as O(1/N_q), where N_q is the number of queries made to a quantum oracle that
computes g(X0 , . . . , XT ). This is in contrast to the complexity in terms of the number of samples mentioned
earlier for MCI. Thus, if samples are considered as classical queries, QAE requires quadratically fewer queries
than classical MCI requires in order to achieve the same desired error.
The general procedure for Monte Carlo integration utilizing QAE is outlined below [154]. Let Ω be the set of potential sample paths ω of a stochastic process that is distributed according to p(ω), and let f : Ω → A be a real-valued function on Ω, where A ⊂ R is bounded. The task is to compute E[f(ω)] [152], which can be achieved with QAE in three steps as follows.
1. Construct a unitary operator P_l to load a discretized and truncated version of p(ω). The probability value p(ω) translates to the amplitude of the quantum state |ω⟩ representing the discrete sample path ω. In mathematical form,
P_l |0⟩ = Σ_{ω∈Ω} √(p(ω)) |ω⟩.    (1)
2. Convert f into a normalized function f̃ : Ω → [0, 1], and construct a unitary P_f that computes f̃(ω) and loads the value onto the amplitude of |ω⟩.⁸ The resultant state after applying P_l and P_f is
P_f P_l |0⟩ = Σ_{ω∈Ω} [ √((1 − f̃(ω)) p(ω)) |ω⟩|0⟩ + √(f̃(ω) p(ω)) |ω⟩|1⟩ ].    (2)
3. Using the notation from Section 4.4, perform quantum amplitude estimation with U = P_f P_l and an oracle, O_φ, that marks states whose last qubit is |1⟩. The result of QAE will be an approximation to Σ_{ω∈Ω} f̃(ω) p(ω) = E[f̃(ω)]. This value can be estimated to a desired error ε utilizing O(1/ε) evaluations of U and its inverse [152]. Then scale E[f̃(ω)] to the original bounded range, A, of f to obtain E[f(ω)].
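The quantity QAE estimates in step 3 can be checked on a toy example with a classical statevector calculation: the squared amplitude of the |1⟩-flagged component of the state in (2) equals E[f̃(ω)]. The four-path distribution and normalized payoffs below are arbitrary illustrations.

```python
import numpy as np

# Toy sample space of 4 "paths" with probabilities p and payoffs scaled to [0, 1].
p = np.array([0.1, 0.2, 0.3, 0.4])
f_tilde = np.array([0.0, 0.25, 0.5, 1.0])

# After P_l: amplitudes sqrt(p(w)) on |w>|0>.
# After P_f: sqrt((1 - f(w)) p(w)) |w>|0> + sqrt(f(w) p(w)) |w>|1>.
amp0 = np.sqrt((1 - f_tilde) * p)
amp1 = np.sqrt(f_tilde * p)

# QAE estimates the probability of measuring |1> on the last qubit,
# which is exactly the expectation E[f_tilde(w)].
a = np.sum(amp1**2)
print(a)
```

QAE would recover this amplitude to error ε with O(1/ε) applications of U and its inverse, rather than the O(1/ε²) samples classical averaging needs.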
Variants [152] of QMCI also exist that can be applied when f (ω) has bounded L2 norm and bounded variance
(both absolutely and relatively). The problem of estimating a multidimensional random variable is called multi-
variate Monte Carlo estimation. Quantum algorithms that obtain similar quadratic speedups in terms of error
have been proposed by Cornelissen and Jerbi [156]. They solved this problem in the framework of Markov reward
processes, and thus it has applicability to reinforcement learning (Section 7.7).
As mentioned earlier (Section 4.1), preparing an arbitrary quantum state is a difficult problem. In the case
of QMCI, we need to prepare the state in (2). The Grover–Rudolph algorithm [157] is a promising technique
for loading efficiently integrable distributions (e.g., log-concave distributions). It requires the ability to compute
the probability mass over subregions of the sample space. However, it has been shown that when numerical
7 To disambiguate, there are classical computational techniques called quantum Monte Carlo methods used in physics to classically simulate quantum systems [153]. These are not the topic of this discussion.
8 In general, quantum arithmetic methods [155] can be used to load arcsin √(f̃(ω)) into a register and perform controlled rotations to load √(f̃(ω)) onto the amplitudes [154].
methods such as classical MCI are used to integrate the distribution, QMCI does not provide a quadratic speedup
when using this state preparation technique [158]. Additionally, variational methods (Section 4.7) have been
developed to load distributions [159], such as normal distributions [154]. Geometric Brownian motion (GBM) is
an example of a diffusion process whose solution is a series of log-normal random variables. These variables can be
implemented by utilizing procedures for loading standard normal distributions, followed by quantum arithmetic
[154, 155].
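A classical sketch of what such a loading procedure targets: the amplitudes √(p(ω)) for a discretized, truncated standard normal, together with the log-normal price values that quantum arithmetic would associate with each basis state. The grid size, truncation range, and model constants are illustrative choices.

```python
import numpy as np

# Discretize a standard normal onto 2^n grid points, then map through exp()
# to obtain the log-normal values GBM produces at a fixed time.
n_qubits = 5
grid = np.linspace(-4.0, 4.0, 2**n_qubits)
width = grid[1] - grid[0]

# Probability mass per grid cell (truncated tails, then renormalized).
pmf = np.exp(-grid**2 / 2.0) * width
pmf /= pmf.sum()

amplitudes = np.sqrt(pmf)             # target amplitudes of the loaded state
prices = 100.0 * np.exp(0.2 * grid)   # the mapping quantum arithmetic would compute
```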
While GBM is common for modeling stock prices, it is often considered unrealistic [160] because it assumes
a constant volatility. In addition, unlike realistic models, the SDE of GBM can be solved in closed form; that
is, the distribution at every point in time is known, which avoids the need to use approximations involving the
SDE to simulate the process. The local volatility (LV) model is one approach for accounting for a changing
volatility by making it a function of the current price and time step. However, the corresponding SDE does
not have a closed-form solution, and hence numerical methods such as Euler–Maruyama schemes
[161] need to be used to simulate paths of the SDE with discretized time steps. Kaneko et al. [162] proposed
quantum circuits implementing the required arithmetic for simulating the LV model, namely, the operation Pl ,
based on improved techniques for simulating stochastic dynamics on a quantum device [163]. However, the
discretized simulation of the SDE introduces additional error and complexities when computing expectations of
functions of the stochastic process the SDE defines. Popular classical approaches to this problem are known as
Multilevel Monte Carlo (MLMC) methods [164], which can be used to approximately recover the error scaling of
classical single-level MCI. An et al. [165] proposed a quantum-enhanced version of MLMC that utilizes QMCI as
a subroutine to compute all expectations involved. This allowed them to approximately recover the error scaling
of QMCI, and the approach benefits from being applicable to a wider variety of stochastic dynamics beyond
GBM and LV. With regard to implementing payoff functions, Herbert [166] developed a near-term technique for
functions that can be approximated well by truncated Fourier series. Furthermore, if fault tolerance (Section 3)
is to be taken into account, advancements in the field of quantum error correction are required in order to realize
quantum advantage [167].
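For concreteness, an Euler–Maruyama discretization of a local-volatility-style SDE dX = μX dt + σ(X, t) X dW might look as follows; the volatility surface here is a made-up example rather than one calibrated as in the cited works:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_vol(x, t):
    """Hypothetical local-volatility surface sigma(x, t)."""
    return 0.2 + 0.1 * np.exp(-x / 100.0) + 0.05 * t

def euler_maruyama(x0, mu, T, n_steps, n_paths):
    """Simulate dX = mu*X dt + sigma(X, t)*X dW with discretized time steps."""
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for k in range(n_steps):
        t = k * dt
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        x = x + mu * x * dt + local_vol(x, t) * x * dw
    return x

terminal = euler_maruyama(100.0, 0.03, 1.0, 250, 50_000)
print(terminal.mean())  # near 100 * exp(0.03), up to discretization and MC error
```

Because the drift is linear, E[X_T] = 100·e^{0.03} exactly for this SDE, which gives a simple sanity check on the discretization.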
Also worth noting are non-unitary methods for efficient state preparation [44, 168, 169]. These methods
are usually incompatible with QAE, however, because of the non-invertibility of the operations involved. The
methods would be more useful in a broader context where the inverse of the state preparation operation is not
required; this topic is outside the scope of this survey.
9 For details on numerical methods for PDEs, see, for example, [172].
solution. Specifically, these algorithms have complexities that scale at least linearly with the number of points N
on which the solution is to be evaluated and exponentially in the number of dimensions d of the spatial variable.
On the other hand, quantum algorithms often address the same problem by representing a vector of size N
using O(log N ) space, and operating on it in potentially O(poly(log N )) time.
L(u(x)) = f(x),
where x ∈ C^d is a d-dimensional vector, f(x) ∈ C is a scalar function that accounts for the inhomogeneity in the PDE, u(x) ∈ C is the scalar function for which we are solving, and L is a linear differential operator. The differential equations are transformed into a set of linear equations by using the same approaches as the classical algorithms mentioned above for computing numerical derivatives given a discretization. The resultant linear system can then be solved by QLSAs. This approach potentially gives the PDE solver algorithm a complexity of O(poly(d, log(1/ε))), where ε is the error tolerance, which is an exponential improvement in d compared with that of the best-known classical algorithms.
In particular, FDM-based quantum algorithms [174–177, 179] use quantum states |a⟩ = Σ_x u(x) |x⟩ and |b⟩ = Σ_x f(x) |x⟩ to encode the functions u(x) and f(x) (up to a normalization factor), with the N computational basis states representing the N discretized points in x and amplitudes proportional to the scalar values of the functions at the respective points. A finite-difference scheme that defines how the derivatives are approximated using values from neighboring grid points is then used to convert L into an N × N matrix operator A acting on |a⟩, and hence the solution of the differential equations becomes a QLSP:
A |a⟩ = |b⟩.
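Classically, the analogous system can be assembled and solved directly. The sketch below builds the tridiagonal finite-difference matrix A for a 1D model problem u''(x) = f(x) with Dirichlet boundary conditions, that is, the same N × N linear system a QLSA would be handed; the grid size and the manufactured solution sin(πx) are illustrative.

```python
import numpy as np

# 1D model problem: L(u) = u''(x) = f(x) on (0, 1), u(0) = u(1) = 0.
# Central differences convert L into a tridiagonal N x N matrix A.
N = 64
h = 1.0 / (N + 1)
x = np.linspace(h, 1 - h, N)

A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2

f = -np.pi**2 * np.sin(np.pi * x)   # chosen so that u(x) = sin(pi x) exactly
u = np.linalg.solve(A, f)           # the solve step a QLSA would accelerate
```

The finite-difference error here is O(h²); a quantum solver would return |u⟩ as amplitudes rather than the explicit vector u.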
(d + 1)-dimensional (d spatial dimensions and time) nonlinear PDE to a (2d + 1)- and a (d + 2)-dimensional PDE, respectively.
For a general nonlinear PDE with d + 1 dimensions, one may discretize the nonlinear PDE and convert it to
a system of nonlinear ODEs [187], and then use the linearization techniques mentioned above to further convert
the nonlinear ODEs to linear PDEs or ODEs. Therefore, as with the approach for general nonlinear ODEs, this algorithm will have a quantum advantage for general nonlinear PDEs only when M is large.
We note that in all of the QLSA-based algorithms discussed above, the output will be in the form of a
quantum state, which is generally difficult to access classically without losing information or voiding the
overall quantum speedup. Nevertheless, as discussed in Section 4.5, one can still efficiently obtain certain statistics
of the solution without classical access to the quantum output state. This is usually the assumption of these
QLSA-based algorithms.
least one probability measure, equivalent to the real-world one P, under which the discounted (stochastic) value process, {V_t}_{t=0}^T, of a portfolio of financial instruments is a martingale. Such a measure is called an equivalent martingale measure or risk-neutral measure Q. The value process being a Q-martingale, also called the fair value, implies that the current value of the portfolio is equal to the expected value at maturity:
V_t = E_Q[V_T | F_t] = E_Q[C(t, T) | F_t].    (3)
The second equality follows since the definition of the fair value at maturity V_T, as seen from time t, is the discounted accumulated cash flows realized from time t until T, which from now on we denote C(t, T). If the stronger assumption of a complete market is included, namely, that all participants have access to all information that could be obtained about the market, then the risk-neutral measure is unique.
An important type of pricing problem in finance is the pricing of derivatives. A derivative is a contract
that derives its value from another source (e.g., a collection of assets or financial benchmarks) called the underlying,
which is modeled by {Xt (ω)}Tt=0 , where ω ∈ Ω. The value of a derivative is typically calculated by simulating the
dynamics of the underlying and computing the payoff accordingly. The payoff is a sequence of Borel-measurable
functions zt , which map the evolution of the underlying to the (discounted) payoff process Zt := zt (X0 , . . . , Xt ).
The functions zt are based on the contract. Thus, for derivatives, C(t, T ) is the accumulation of all Zt occurring
from t until maturity T. We can apply Equation (3) under a risk-neutral measure. Thus, the value of the derivative at t, V_t, is equal to E_Q[C(t, T) | F_t]. This justifies the usage of MCI for derivative pricing [12, 200], assuming sample paths follow the risk-neutral measure.
We next discuss the application of QMCI and quantum differential equation solvers to the pricing of financial
derivatives. Specifically, we consider two types of derivatives: options and collateralized debt obligations (CDOs).
Options. An option gives the holder the right, but not the obligation, to purchase (call option) or sell (put
option) an asset at a specified price (strike price) and within a given time frame (exercise window) in the future.
One of the most well-known models for options is the Black–Scholes model [106]. This model assumes that the price of the underlying (risky) asset, {X_t}_{t=0}^T, evolves like a GBM with a constant volatility. The Black–Scholes PDE, which governs the evolution of the price of an option on the underlying asset, has a closed-form solution, called the Black–Scholes formula, for the simple European option. A European call option has a payoff that satisfies z_t = 0 for all t < T, and z_T(X_T) = e^{−rT}(X_T − K)^+.¹⁰ This payoff depends only on the state of the underlying at maturity; in other words, it is path independent. The interest rate r is the return of a risk-free asset, which is included in the Black–Scholes model. According to Equation (3), the fair value of the European option at time t is V_t = e^{rt} E_Q[z_T(X_T) | X_t], and the Black–Scholes formula provides V_t in closed form for all
t. However, many pricing tasks involve more complicated options that are often path dependent, and hence
information about the asset at multiple time steps is required; that is, zT depends on previous Xt . This is one of
the reasons MCI-based approaches [201] are more widely used than those based on directly solving PDEs, since
the path-dependent PDE can be complicated.
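A minimal classical baseline for the European case, assuming GBM dynamics: MCI over terminal prices compared against the closed-form Black–Scholes formula. All parameters are arbitrary illustrations.

```python
import numpy as np
from math import erf, exp, log, sqrt

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(s0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (log(s0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return s0 * Phi(d1) - K * exp(-r * T) * Phi(d2)

def mc_call(s0, K, r, sigma, T, n_paths, seed=1):
    """Risk-neutral MCI: average the discounted payoff e^{-rT} (X_T - K)^+."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    xT = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return exp(-r * T) * np.maximum(xT - K, 0.0).mean()

exact = black_scholes_call(100, 105, 0.05, 0.2, 1.0)
estimate = mc_call(100, 105, 0.05, 0.2, 1.0, 200_000)
```

The MCI estimate converges to the closed form at the O(1/√N_s) rate; QMCI targets exactly this estimation step with O(1/ε) oracle queries.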
Stamatopoulos et al. [202] constructed quantum procedures for the uncertainty distribution loading, Pl , and
the payoff function, Pf , for European options. The operator Pl loads a distribution over the computational basis
states representing the price at maturity. The operator Pf approximately computes the rectified linear function,
characteristic of the payoff for European options. QMCI has also been applied to path-dependent options such
as barrier and Asian options [202, 203], and to multi-asset options [202]. For these options, prices at different
points on a sample path or for different assets are represented by separate quantum registers, and the payoff
operator Pf is applied to all of these registers. Pricing of more complicated derivatives with QAE can be found
in the literature. For example, Chakrabarti et al. performed an in-depth resource estimation and error analysis
for QAE applied to the pricing of autocallable options and target accrual redemption forwards (TARFs) [154].
Miyamoto and Kubo [204] utilized the FDM-based approach to solve the multi-asset Black–Scholes PDE
accelerated by using QLSA, as mentioned in Section 5.2.1. However, since applying the quantum FDM method
to the Black–Scholes PDE produces a quantum state encoding the value of the option at exponentially many
different initial prices, the quantum speedup is lost if we need to read out a single amplitude. The authors avoided
this issue by reading out the expected value of the option at a future time point under the probability distribution
of the price of the underlying at the same time point, which is the current price of the option as a consequence
of the value process being a martingale under the risk-neutral measure. This approach results in an exponential
speedup in terms of the dimension of the PDE over the classical FDM-based approach. Linden et al. [205]
compared various quantum and classical approaches for solving the heat equation, which the Black–Scholes
PDE reduces to. The quantum approaches discussed make use of the QLSA-based PDE solvers and quantum
walks (Section 4.6). Fontanela et al. [189] transformed the Black–Scholes PDE into the Schrödinger equation with a non-Hermitian Hamiltonian. They utilized VarQITE to simulate the Wick-rotated Schrödinger equation. Similarly, Alghassi et al. [190] applied imaginary time evolution to the more general Feynman–Kac formula
10 (A)^+ := max{A, 0}
for linear PDEs. Alternatively, Gonzalez–Conde et al. [206] used unitary dilation, that is, unitary evolution in
an expanded space, and postselection to simulate the nonunitary evolution associated with the non-Hermitian
Hamiltonian that results from mapping the Black–Scholes PDE to the Schrödinger equation. They also mentioned the
potential applicability beyond constant-volatility models.
Another prominent option is the American-style option [199]. While European, barrier, and Asian options
allow the holder to exercise their right only at maturity, American options allow the holder to buy/sell the
underlying asset at any point in time up to maturity, T . Thus, the buyer of the option needs to determine the
optimal exercise time that maximizes the expected payoff. More generally, finding the optimal execution time
is an optimal stopping problem [207]. In modern probability theory, a stopping time is an F-adapted random variable τ : Ω → [0, ∞). The adaptedness of τ implies that making the decision to stop at time t only requires
information obtained at time steps up to and including t. The corresponding stopped process at time t, using
stopping time τ , is defined as (Zτ )t := Zmin(t,τ (ω)) . This definition is used because the option payoff cannot
change once the buyer has decided to exercise the option.
The American option pricing problem involves a sequence of stopping times. An optimal stopping time τt
starting at time t consists of choosing to exercise at the current time when the payoff is greater than or equal to the expected payoff that could be obtained from continuing, that is, E[Z_{τ_{t+1}} | X_t], which is called the continuation
value. This recursive definition implies that finding the optimal stopping time can be formulated as solving a
dynamic programming problem. The value of the American option is the maximum expected payoff; and since
an optimal stopping time maximizes the expected payoff, this price is equal to E[Zτ0 |X0 ], where the current time
is 0. This can be computed by using Monte Carlo integration with sampled payoffs, at discretized time steps, that have been optimally stopped.¹¹ However, this also requires computing the continuation values. A popular
approach to estimate continuation values is through least-squares regression using a fixed set of basis functions.
The combined technique of regressing the conditional expectation values and sampling optimally-stopped payoffs
to solve the dynamic programming problem is called the least-squares Monte Carlo method (LSM) [208].
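A compact classical LSM sketch for an American put under GBM, using a cubic polynomial regression basis; the model parameters, basis choice, and path counts are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def lsm_american_put(s0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     n_steps=50, n_paths=50_000):
    """Least-squares Monte Carlo (Longstaff-Schwartz) price of an American put."""
    dt = T / n_steps
    disc = np.exp(-r * dt)
    z = rng.standard_normal((n_paths, n_steps))
    paths = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                                  + sigma * np.sqrt(dt) * z, axis=1))
    cash = np.maximum(K - paths[:, -1], 0.0)   # exercise at maturity by default
    # Backward induction: regress continuation values on a polynomial basis.
    for t in range(n_steps - 2, -1, -1):
        cash *= disc                           # discount one step back to time t
        s = paths[:, t]
        itm = K > s                            # regress only on in-the-money paths
        basis = np.vander(s[itm] / K, 4)       # basis functions 1, x, x^2, x^3
        coef, *_ = np.linalg.lstsq(basis, cash[itm], rcond=None)
        continuation = basis @ coef
        exercise = K - s[itm]
        stop = exercise > continuation         # approximate optimal stopping rule
        idx = np.where(itm)[0]
        cash[idx[stop]] = exercise[stop]
    return disc * cash.mean()                  # discount the first step back to 0

price = lsm_american_put()
```

The quantum LSM of [209] replaces the sample averages in this backward induction with QMCI, at the cost of recomputing stopping times from maturity at each regression step.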
Doriguello et al. [209] proposed a quantum version of LSM that makes use of QMCI. The authors presented
unitary operators for computing the stopping time at time step t, τt , which was done by backtracking from
time step T . The stopping time τt−1 can be computed from the stopping time τt by using the definition of
the dynamic program for the optimal stopping time. Note that although in the classical case, we can store all
previously computed τt as we compute them backward in time, in the quantum case, we need to compute τt
for each linear regression step by starting at the maturity date T . As a result, while the quantum LSM has an
improved error dependence, it now depends quadratically on the maturity time T . Miyamoto [210] performed
a similar quantization of LSM for Bermudan options, which are similar to American options except that the exercise times are a discrete subset of time steps until maturity. Alghassi et al. [190] also mentioned the potential applicability
of their VarQITE approach, mentioned in Section 5.2.2, to the pricing of American options and options with a
stochastic volatility.
11 Technically, the discretization of time converts the American option into a Bermudan option.
5.3.2 Risk Modeling
Risk modeling is another crucial problem for financial institutions, since risk metrics can determine the probability
and amount of loss under various financial scenarios. Risk modeling is also important in terms of compliance with
regulations. Because of the Basel III regulations mentioned earlier, financial institutions need to calculate and
meet requirements for their VaR, CVaR, and other metrics. However, the current classical approach of Monte
Carlo methods [12] is computationally intensive. Similar to the pricing problem mentioned above, a quantum
approach could also benefit this financial problem. In this subsection we explore the application of the presented
quantum algorithms to computing risk metrics such as VaR and CVaR. We also review quantum approaches for
sensitivity analysis and credit valuation adjustments.
Value at Risk. As first mentioned in Section 2.1.1, VaR and CVaR are common metrics used for risk
analysis. Mathematically, VaR at confidence α ∈ [0, 1] is defined as VaR_α[X] = inf_{x≥0} {x | P[X ≤ x] ≥ α}, and it is usually computed by using Monte Carlo integration [214]. CVaR is the tail expectation CVaR_α[X] = E[X | X ≥ VaR_α[X]]
[215]; α is generally around 99.9% for the finance industry. QAE can be used to evaluate VaRα and CVaRα
faster than with classical MCI. The procedure to do so, by Woerner and Egger [215, 216], is as follows.
Similar to the CDO case above (Section 5.3.1), the risk can be modeled by using a Gaussian conditional
independence model. Quantum algorithms for computing VaRα and CVaRα via QAE can utilize the same
realization of Pl as for the CDO case [215]. Note that P[X ≤ x] = E[h(X)], where h(X) = 1 for X ≤ x and
0 otherwise. Thus Pf implements a quantum comparator similar to both of the previous applications discussed
for derivatives. A bisection search can be used to identify the value of x such that P[X ≤ x] ≥ α using multiple
iterations of QAE. CVaRα uses a comparator to check against the obtained value for VaRα when computing the
conditional expectation of X.
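The classical counterpart that QAE aims to accelerate can be sketched directly from loss samples. The loss model below is a made-up Gaussian stand-in rather than the Gaussian conditional independence model of [215, 216].

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated portfolio losses from a hypothetical model (clipped at zero).
losses = np.maximum(rng.normal(10.0, 5.0, 1_000_000), 0.0)

def var_cvar(samples, alpha=0.999):
    """Empirical VaR and CVaR at confidence alpha from loss samples."""
    sorted_l = np.sort(samples)
    # inf{x : P[L <= x] >= alpha}, estimated by the empirical quantile.
    var = sorted_l[int(np.ceil(alpha * len(sorted_l))) - 1]
    cvar = sorted_l[sorted_l >= var].mean()   # expected loss in the tail
    return var, cvar

var, cvar = var_cvar(losses)
```

The quantum version replaces the empirical CDF evaluations P[L ≤ x] with QAE estimates inside the bisection search, and the tail average with a conditional QAE expectation.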
Economic Capital Requirement. Economic capital requirement (ECR) summarizes the amount of capital required to remain solvent at a given confidence level and time horizon [216]. Egger et al. [216] presented a quantum method using QMCI to compute ECR. Consider a portfolio of K assets with L_1, . . . , L_K denoting the losses associated with each asset. The expected value of the total loss, L, is E[L] = Σ_{m=1}^K E[L_m]. The losses can be modeled by using the same Gaussian conditional independence model discussed above. ECR at confidence level α is defined as the value at risk of the total loss beyond the expected loss, ECR_α[L] := VaR_α[L] − E[L].
The Greeks. Sensitivity analysis is an important component of financial risk modeling. The stochastic model of the underlying assets of a financial derivative, which is the process X_t, typically contains multiple variables representing the market state. Some common examples are the spot prices of the underlying assets S_t^{(i)}, current volatilities σ_t^{(i)}, and the maturity times T^{(i)}. In mathematical finance, the partial derivatives, or sensitivities, of V_t with respect to these quantities are called Greeks. For example, ∂V_t/∂S_t^{(i)} are called Deltas, ∂V_t/∂σ_t^{(i)} are called Vegas, ∂²V_t/(∂S_t^{(i)})² are called Gammas, and ∂V_t/∂T^{(i)} are called Thetas. Classically, these can be computed utilizing finite-difference methods and MCI. The optimal classical method, which uses variance reduction techniques, requires O(k/ε²) samples to compute k Greeks. Stamatopoulos et al. [217] proposed using quantum finite-difference
methods, which were first proposed by Jordan [218], to accelerate the computation of Greeks.
The quantum setting uses what is called a probability oracle. In the context of financial sensitivity, this
makes use of the operation in (2) from the QMCI algorithm, namely, operators Pl and Pf . For computing the
k-dimensional numerical gradient, Jordan’s original algorithm utilized a single call to a quantum binary oracle,
which computes a fixed-point approximation to Vt into a register. The derivative of Vt is then encoded in the
relative phases of a quantum state utilizing phase kickback. In the case described above, however, we are provided an analog quantity encoded in the amplitude of a quantum state, and thus we would require an analog-to-digital conversion (e.g., via QAE) followed by a digital-to-analog conversion. The query complexity of QAE scales exponentially in the number of bits in
the digital representation. Gilyén, Arunachalam, and Wiebe (GAW) [219] proposed a conversion technique that
avoids the digitization step. Their approach takes as input a probability oracle and uses block-encoding-based
Hamiltonian simulation [97, 220] to perform the phase encoding. In other words, it converts the probability oracle
to a phase oracle. The GAW algorithm makes use of higher-order central difference methods [221]; and under certain smoothness conditions of the function whose derivative we are taking [222], the algorithm uses O(√k/ε)
queries to a probability oracle to compute the k-dimensional gradient. Stamatopoulos et al. showed that this
can be used to produce a quadratic speedup for computing k Greeks over the classical MCI and finite-difference-
based methods. They also proposed an alternative technique that does not attain the full quadratic speedup
but avoids the Hamiltonian simulation required for the probability to phase oracle conversion. As mentioned
by Stamatopoulos et al., the number of Greeks, k, can be large in practice, which makes the quadratic speedup
significant.
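A classical sketch of one Greek: a central finite difference of an MCI price in the spot, using common random numbers (a standard variance-reduction trick), checked against the closed-form Black–Scholes Delta. All parameters are illustrative.

```python
import numpy as np
from math import erf, exp, log, sqrt

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def mc_call(s0, K=100.0, r=0.05, sigma=0.2, T=1.0, n_paths=500_000, seed=11):
    """Risk-neutral MC price of a European call; a fixed seed gives common random numbers."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    xT = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return exp(-r * T) * np.maximum(xT - K, 0.0).mean()

# Central finite difference for Delta = dV/dS0 at S0 = 100.
h = 0.5
delta_fd = (mc_call(100.0 + h) - mc_call(100.0 - h)) / (2 * h)

# Closed-form Black-Scholes Delta of a European call for comparison.
d1 = (log(100.0 / 100.0) + (0.05 + 0.5 * 0.2**2) * 1.0) / (0.2 * sqrt(1.0))
delta_exact = Phi(d1)
```

Computing k such sensitivities classically repeats this for each variable, hence the O(k/ε²) sample cost that the quantum gradient methods improve upon.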
Credit Value Adjustments. Value adjustments [223], or XVAs, are a set of corrections to the value
of a financial instrument due to factors that are not included in the pricing model. Specifically, credit value
adjustments (CVAs) modify the price of a financial contract to take into account the risk of loss associated with
a counterparty defaulting or failing to meet their obligation, namely, the counterparty-credit risk (CCR). The
usual risk-neutral valuation does not take CCR into account. For example, from the perspective of a holder of
an option, if the seller of the contract defaults at time t, the loss to the holder is the value of the option at t.
For certain contracts, a proportion of the lost value can be recovered. This fraction is called the recovery rate,
R. The value that is lost after recovery is called the loss given default (LGD). The LGD is computed as 1 − R.
We consider a finite set of time points: t0 , t1 , . . . , tN , where tN is the largest maturity date of all contracts
in the portfolio. If we are performing a valuation of the CCR at time t0 , the expected loss if a default occurs at
ti > t0 requires determining the fair value of the instrument at time ti . Since ti is in the future, however, the
expected value lost if default occurs at time ti , called the expected exposure profile, is EQ [(Vti )+ |Ft0 ], where Vti
is the fair price of a portfolio of contracts at ti . Since default can occur before or at maturity probabilistically,
the CVA at t0 is defined to be
CVA_{t_0} := E[(1 − R)(V_τ)^+ | F_{t_0}] = (1 − R) Σ_{i=0}^N P(t_i < τ < t_{i+1}) E_Q[(V_{t_i})^+ | F_{t_0}],    (4)
where the measure P gives the probability of defaulting in a given time period. The positive part (·)^+ ensures that we consider only obligations of the counterparty, and τ is a random stopping time representing the time of default.
The adjusted price is
V′_{t_0} := V_{t_0} − CVA_{t_0}.    (5)
As mentioned by Han and Rebentrost [224], if the joint distribution can be represented as a simple product, namely,
q_j^{(t_i)} := P(t_i < τ < t_{i+1}) Q(j),    (6)
where j indexes a series of economic events, for example a path of the underlying assets, leading to a price of V_{t_i}^{(j)} at time t_i occurring with risk-neutral probability Q, then the CVA computation can be viewed as the following inner product:
CVA_{t_0} = Σ_{i,j} q_j^{(t_i)} V_{t_i}^{(j)} := q⃗ · V⃗.    (7)
Han and Rebentrost presented a variety of quantum inner product estimation algorithms that make use of QMCI to obtain a quadratic speedup over the classical approach of sampling the entries of V⃗ according to q⃗ and taking sample averages. Their approach also applies to the problem of pricing a portfolio of derivatives. Along similar
lines, Alcazar et al. [225] proposed a more near-term approach to solving this problem that makes use of a
Bayesian approach to QAE [226, 227]. This Bayesian QAE approach has a complexity that varies according to
the expected hardware noise and allows for interpolating between the complexities of MCI and QMCI. They also
performed an end-to-end resource analysis of their approach.
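Under the product form in (6), the CVA reduces to the inner product in (7). A classical sketch with made-up default probabilities, recovery rate, and exposure scenarios:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical setup: N default-time buckets, J economic scenarios per bucket.
N, J = 4, 1000
recovery = 0.4

# q[i, j] = P(t_i < tau < t_{i+1}) * Q(j): default-time probabilities times
# risk-neutral scenario probabilities (product-form joint distribution).
p_default = np.array([0.01, 0.02, 0.02, 0.015])
Q = np.full(J, 1.0 / J)
q = np.outer(p_default, Q)

# V[i, j]: positive part of the portfolio value in scenario j at time t_i.
V = np.maximum(rng.normal(50.0, 20.0, size=(N, J)), 0.0)

cva = (1 - recovery) * np.sum(q * V)   # CVA as the inner product q . V
print(cva)
```

The quantum algorithms of [224] estimate exactly this inner product with QMCI instead of summing all i, j terms explicitly.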
6 Optimization
Solving optimization problems (e.g., portfolio optimization, arbitrage) presents the most promising commercially
relevant applications on NISQ hardware (Section 3.3). In this section we discuss the potential of using quantum
algorithms to help solve various optimization problems. We start by discussing quantum algorithms for the
NP-hard¹² combinatorial optimization problems [26, 228] in Section 6.1, with particular focus on those with
quadratic cost functions. Neither classical nor quantum algorithms are believed to be able to solve these NP-hard
problems asymptotically efficiently (Section 4). However, the hope is that quantum computers can solve these
problems faster than classical computers on realistic problems such that it has an impact in practice. Section 6.2
focuses on convex optimization. In its most general form, convex optimization is NP-hard. However, in a variety of situations such problems can be solved efficiently [229] and enjoy many useful properties and a rich theory
[27]. There exist quantum algorithms that utilize QLS (Section 4.5), and other techniques to provide polynomial
speedups for certain convex, specifically conic, optimization problems. These can have a significant impact
12 By definition (Section 4), optimization problems are not in NP, but they can be NP-hard. However, there is an analogous class for optimization problems called NPO [228].
in practice. Section 6.3 discusses large-scale optimization, which utilizes hybrid approaches that decompose
large problems into smaller subtasks solvable on a near-term quantum device. The more general mixed-integer
programming problems, in which there are both discrete and continuous variables, can potentially be handled by
using quantum algorithms for optimization (both those mentioned in Section 6.1 and, in certain scenarios, Section
6.2) as subroutines in a classical-quantum hybrid approach (i.e., decomposing the problem in a similar manner
to the ways mentioned in Section 6.3) [146, 230]. Many quantum algorithms for combinatorial optimization are
NISQ-friendly and heuristic (i.e., they have no asymptotically proven benefits), whereas the convex optimization
algorithms are usually intended for the fault-tolerant era of quantum computation.
where $\vec{b} \in \mathbb{R}^N$, $Q \in \mathbb{R}^{N \times N}$, and $\mathbb{B} = \{0, 1\}$. Unconstrained integer quadratic programming (IQP) problems can
be converted to QUBO in the same way that IP can be reduced to general binary integer programming [232].
Constraints can be accounted for by optimizing the Lagrangian function [27] of the constrained problem, where
each dual variable is a hyperparameter that signifies a penalty for violating the constraint [233].
QUBO has a connection with finding the ground state of the generalized Ising Hamiltonian from statistical
mechanics [233]. A simple example of an Ising model is a two-dimensional lattice under an external longitudinal
magnetic field that is potentially site-dependent; in other words, the strength of the field at a site on the lattice
is a function of location. At each site j, the direction of the magnetic moment of spin zj may take on values in
the set {−1, +1}. The value of zj is influenced by both the moments of its neighboring sites and the strength of
the external field. This model can be generalized to an arbitrary graph where the vertices on the graph represent
the spin sites and weights on the edges signify the interaction strength between the sites, hence allowing for
long-range interactions. The classical Hamiltonian for this generalized model is
$$H = -\sum_{ij} J_{ij} z_i z_j - \sum_j h_j z_j,$$
where the magnitude of Jij represents the amount of interaction between sites i and j and its sign is the desired
relative orientation; hj represents the external field strength and direction at site j [234]. Using a change of
variables $z_j = 2x_j - 1$, a QUBO cost function as defined in Equation (8) can be transformed into a classical Ising
Hamiltonian, with $x_j$ being the jth element of $\vec{x}$. The classical Hamiltonian can then be “quantized” by replacing
the classical variable $z_j$ for the jth site with the Pauli-Z operator $\sigma_j^z$ (Section 3.1). Therefore, finding the optimal
solution for the QUBO is equivalent to finding the ground state of the corresponding Ising Hamiltonian.
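The change of variables above can be checked mechanically. Below is a minimal sketch (not taken from any of the surveyed papers) that rewrites a QUBO cost $\vec{x}^T Q \vec{x}$ over $\vec{x} \in \{0,1\}^N$ as an Ising energy via $z_j = 2x_j - 1$; the function name is ours, and the overall minus sign of H is absorbed into the returned couplings and fields.

```python
import numpy as np

def qubo_to_ising(Q):
    """Rewrite E(x) = x^T Q x over x in {0,1}^N as an Ising energy
    E(z) = z^T J z + h^T z + c over z in {-1,+1}^N via x = (z + 1)/2.
    Illustrative sketch; sign conventions differ across the literature."""
    N = Q.shape[0]
    J = np.zeros((N, N))
    h = np.zeros(N)
    c = 0.0
    for i in range(N):
        for j in range(N):
            q = Q[i, j]
            if i == j:
                # x_i^2 = x_i = (z_i + 1)/2
                h[i] += q / 2
                c += q / 2
            else:
                # x_i x_j = (z_i z_j + z_i + z_j + 1)/4
                J[i, j] += q / 4
                h[i] += q / 4
                h[j] += q / 4
                c += q / 4
    return J, h, c
```

Brute-force evaluation of both energies over all $2^N$ assignments confirms that they agree.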
Higher-order combinatorial problems can be modeled by including terms for multispin interactions. Some of
the quantum algorithms presented in this section can handle such problems (e.g., QAOA in Section 6.1.2). From
the quantum point of view, these higher-order versions are observables that are diagonal in the computational
basis. The following subsections will more fully show why the quantum Ising or more generally the diagonal
observable formulation results in quantum algorithms for solving combinatorial problems. In Section 6.4 we
discuss a variety of financial problems that can be formulated as combinatorial optimization and solved by using
the quantum methods described in this section.
This formulation natively allows for solving QUBO. As discussed earlier, however, with proper encoding and
additional variables, unconstrained IQP problems can be converted to QUBO. Techniques also exist for converting
higher-order combinatorial problems into QUBO problems, at the cost of more variables [235, 236]. QUBO, as
mentioned earlier, can, through a transformation of variables, be encoded in the site-dependent longitudinal
magnetic field strengths $\{h_i\}$ and coupling terms $\{J_{ij}\}$ of an Ising Hamiltonian. The QA process initializes
the system in a uniform superposition of all states, which is the ground state of $-\sum_i \sigma_i^x$, where $\sigma_i^x$ is the Pauli-X
operator (Section 3.2) applied to the ith qubit. This initialization implies the presence of a large transverse-field
component, A(t) in Equation (9). The annealing schedule, controlled by tuning the strength of the transverse
magnetic field, is defined by the functions A(t) and B(t) in Equation (9). If the evolution is slow enough,
according to the adiabatic theorem (Section 3.2.2), the system will remain in the instantaneous ground state
for the entire duration and end in the ground state of the Ising Hamiltonian encoding the problem. This process
is called forward annealing. There also exists reverse annealing [237], which starts in a user-provided classical
state,13 increases the transverse-field strength, pauses and then, assuming an adiabatic path, ends in a classical
state. The pause is useful when reverse quantum annealing is used in combination with other classical optimizers.
The D-Wave devices are examples of commercially available quantum annealers [238]. These devices have
limited connectivity, resulting in extra qubits being needed to embed arbitrary QUBO problems (i.e., Ising
models with interactions at arbitrary distances). Finding such an embedding, however, is in general a hard
problem [239]. Thus heuristic algorithms [240] or precomputed templates [241] are typically used. Assuming the
hardware topology is fixed, however, an embedding can potentially be efficiently found for QUBOs with certain
structure [242]. Alternatively, the embedding problem associated with restricted connectivity can be avoided
if the hardware supports three- and four-qubit interactions, by using the LHZ encoding [243] or its extension
by Ender et al. [244]. The extension by Ender et al. even supports higher-order binary optimization problems
[245, 246]. Moreover, the current devices for QA provide no guarantees on the adiabaticity of the trajectory that
the system follows. Despite the mentioned limitations of today’s annealers, a number of financial problems still
remain that can be solved by using these devices (Section 6.4).
where γ = (γ1 , . . . , γp ), β = (β1 , . . . , βp ). The unitary U (C, γ) is called the phase operator or phase separator
and defined as U (C, γ) = e−iγC , where C is a diagonal observable that encodes the objective function f (z). In
addition, U (B, β) is the mixing operator defined as U (B, β) = e−iβB , where, in the initial QAOA formulation,
$B = \sum_{i=1}^{N} \sigma_i^x$. The initial state, $|s\rangle$, is a uniform superposition, which is an excited state of B with maximum
energy. However, other choices for B exist that allow QAOA to incorporate constraints without penalty terms
[137]. Equation (10), where the choice of B can vary, is called the alternating operator ansatz [137]. Preparation
of the state is then followed by a measurement in the computational basis (Section 3.1), indicating the assignments
of the N binary variables. The parameters are updated such that the expectation of C, that is, $\langle\gamma, \beta| C |\gamma, \beta\rangle$,
is maximized. Note that we can multiply the cost function by −1 to convert it to a minimization problem. The
structure of the QAOA ansatz allows for finding good parameters purely classically for certain problem instances
[136]. In general, however, finding such parameters is a challenging task. As mentioned earlier (Section 4.7), the
training of quantum variational algorithms is NP-hard. This does not, however, prevent us from reaching arbitrarily
good approximate or local minima; it serves merely as an indication of the complexity of finding the exact global
¹³ The term “classical state” refers to any computational basis state. The quantum Ising Hamiltonian has an eigenbasis consisting of computational basis states.
¹⁴ In the case of standard QAOA, however, the adiabatic trajectory would move between instantaneous eigenstates with the highest energy [136]. This still follows from the adiabatic theorem (Section 3.2.2).
minimum. Much recent research, however, has proposed practical approaches for effective parameter and
required circuit-depth estimation [250–255], as well as for reducing the complexity of the phase operator [256, 257]
and of the mixer [258]. Recent demonstrations of QAOA on gate-based quantum hardware have revealed the
capability of state-of-the-art quantum hardware technologies in handling unconstrained [259, 260] and constrained
optimization problems [261].
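As a concrete illustration of the alternating structure in Equation (10), the following minimal statevector simulation applies one QAOA layer (p = 1) to an Ising cost. It is a pedagogical NumPy sketch, not any of the implementations cited above; the function name and the bit-to-spin convention are our own.

```python
import numpy as np

def qaoa_expectation(J, h, gamma, beta):
    """<gamma, beta| C |gamma, beta> for one QAOA layer on the Ising cost
    C(z) = -z^T J z - h^T z, simulated with a dense statevector."""
    n = len(h)
    dim = 2 ** n
    # Diagonal cost C(z) for every computational basis state
    costs = np.zeros(dim)
    for idx in range(dim):
        z = np.array([1 - 2 * ((idx >> i) & 1) for i in range(n)])
        costs[idx] = -z @ J @ z - h @ z
    # |s> = uniform superposition over all bit strings
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)
    # Phase operator U(C, gamma) = exp(-i gamma C) is diagonal
    state = state * np.exp(-1j * gamma * costs)
    # Mixing operator U(B, beta) = exp(-i beta sum_i X_i): one RX per qubit
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])
    state = state.reshape([2] * n)
    for q in range(n):
        state = np.tensordot(rx, state, axes=([1], [q]))
        state = np.moveaxis(state, 0, q)
    state = state.reshape(dim)
    return float(np.real(np.conj(state) @ (costs * state)))
```

With $\gamma = \beta = 0$ the state stays uniform and the expectation reduces to the mean cost over all bit strings, which is a convenient sanity check.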
where H is a quantum Hamiltonian and $E_\tau = \langle\psi(\tau)|H|\psi(\tau)\rangle$. The time-evolved state (solution to Equation
(11)) is $|\psi(\tau)\rangle = C(\tau)e^{-H\tau}|\psi(0)\rangle$, where $C(\tau)$ is a time-dependent normalization constant [269]. As $\tau \to \infty$, the
component corresponding to the smallest eigenvalue of H dominates under the non-unitary operator $e^{-H\tau}$, and the state approaches a ground state of H.
This is assuming the initial state has nonzero overlap with a ground state. Thus, ITE presents another method
for finding the minimum eigenvalue of an observable [269, 270].
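The mechanism can be seen in a few lines of dense linear algebra: a first-order Euler step of $d|\psi\rangle/d\tau = -H|\psi\rangle$ with renormalization converges to a ground state, exactly as described above. This is a classical toy illustration, not a quantum algorithm, and the names are ours.

```python
import numpy as np

def ite_ground_state(H, psi0, tau=10.0, steps=1000):
    """Normalized imaginary-time evolution: psi(tau) ~ C(tau) exp(-H tau) psi(0)."""
    dt = tau / steps
    psi = psi0.astype(complex)
    for _ in range(steps):
        psi = psi - dt * (H @ psi)   # Euler step of d|psi>/dtau = -H|psi>
        psi /= np.linalg.norm(psi)   # role of the normalization C(tau)
    return psi
```

Starting from any vector with nonzero overlap with the ground state, the iterate aligns with the eigenvector of the smallest eigenvalue of H.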
For variational quantum ITE (VarQITE) [138], the time-dependent state |ψ(τ )i is represented by a parame-
terized ansatz (i.e., PQC), |φ(θ(τ ))i, with a set of time-varying parameters, θ(τ ). The evolution is projected onto
the circuit parameters, and the convergence is also dependent on the expressibility of the ansatz [138]. However,
quantum ITE using McLachlan’s principle is typically expensive to implement because of the required metric
tensor computations and matrix inversion [138, 271]. Benedetti et al. developed an alternative method for Var-
QITE that avoids these computations and is gradient free [272]. Their variational time evolution approach is also
applicable to real-time evolution. VarQITE can be used to solve a combinatorial optimization problem by letting
H encode the combinatorial cost function, in a similar manner to QAOA and VQE, and is also not restricted to
QUBO. In addition, VarQITE has been used to prepare states that do not result from unitary evolution in real
time, such as the quantum Gibbs state [273].
We note that the variational principles for real-time evolution, mentioned in Section 4.7, can be used to
approximate adiabatic trajectories [274]. This can potentially be used for combinatorial optimization as well.
6.1.5 Optimization by Quantum Unstructured Search
Dürr and Høyer [275] presented an algorithm that applies quantum unstructured search (Section 4.1) to the
problem of finding the global [276] minimum of a black-box function. This approach makes use of the work of
Boyer et al. [75] to apply Grover’s search when the number of marked states is unknown. Unlike the metaheuristics
of quantum annealing and gate-based variational approaches, the Dürr–Høyer algorithm has a provable quadratic
speedup in query complexity. That is, given a search space X of size N, the algorithm requires $O(\sqrt{N})$ queries
to an oracle that evaluates the function $f : X \to \mathbb{R}$ to be minimized. A classical program would require O(N)
evaluations.
The requirements to achieve quadratic speedup are the same as for Grover’s algorithm. The quantum oracle
in this case is $O_{g_y}|x\rangle = (-1)^{g_y(x)}|x\rangle$, where $g_y : X \to \{0, 1\}$ maps x to 1 if $f(x) \leq y$ and to 0 otherwise, and y is
the current smallest value of f evaluated so far [276]. The Grover adaptive search framework
of Bulger et al. [277] generalizes and extends the Dürr–Høyer algorithm. Gilliam et al. [278] proposed a method
for efficiently implementing the oracle Ogy by converting to Fourier space. This applies particularly to binary
polynomials, and thus this method can be used to solve arbitrary constrained combinatorial problems with a
quadratic speedup over classical exhaustive search.
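The control flow of the Dürr–Høyer algorithm can be emulated classically by replacing the Grover subroutine with uniform sampling over the marked set $\{x : f(x) < y\}$; the quadratic speedup, of course, comes only from the quantum subroutine this stands in for. A hypothetical sketch with our own names:

```python
import random

def durr_hoyer_sketch(f, domain, seed=0):
    """Threshold-update loop of Durr-Hoyer minimum finding, with the Grover
    search over marked states replaced by classical uniform sampling."""
    rng = random.Random(seed)
    x_best = rng.choice(domain)
    y = f(x_best)
    while True:
        marked = [x for x in domain if f(x) < y]   # states the oracle marks
        if not marked:
            return x_best, y
        x_best = rng.choice(marked)                # stand-in for Grover search
        y = f(x_best)
```

The threshold y strictly decreases each round, so the loop terminates with the global minimum once no marked states remain.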
In portfolio optimization, each asset class, such as stocks, bonds, futures, and options, is assigned a weight.
Additionally, all assets within the class are allocated in the portfolio depending on their respective risks, returns,
time to maturity, and liquidity. The variables controlling how much the portfolio should invest in an asset can be
continuous or discrete, and the overall problem can contain variables of both types. Continuous variables
are suitable for representing the proportion of the portfolio invested in the asset positions when non-discrete
allocations are allowed. Discrete variables are used in situations where assets can be purchased or sold only
in fixed amounts. The signs of these variables can also be used to indicate long or short positions. When
risk is accounted for, the problem is typically a quadratic program, usually with constraints. Depending on the
formulation, this problem can be solved with techniques for convex [291] or mixed-integer programming [292, 293].
Speeding up portfolio optimization and improving the quality of its solutions are particularly important when
the number of variables is large.
This section first focuses on combinatorial formulations using the algorithms mentioned in Section 6.1. As
mentioned there, the algorithms presented can be generalized to integer optimization problems. The second part
of this section focuses on convex formulations of the problem solved by using algorithms from Section 6.2. The
handling of mixed-integer programming was briefly mentioned at the beginning of Section 6.
where W is an $N \times N$ diagonal matrix with the entries of $\vec{w} \in [0, 1]^N$, such that $\|\vec{w}\|_1 = 1$, on the diagonal; $\vec{w}$
contains a fixed weight for each asset j indicating the proportion of the portfolio that should hold a long or short
position in j. In both cases, the authors utilized quantum annealing (Section 6.1.1).
Instead of just risk minimization, the problem can be reformulated to specify a desired fixed expected return,
$\mu \in \mathbb{R}$,
$$\min_{\vec{x} \in \mathbb{B}^N} \vec{x}^T \Sigma \vec{x} \;:\; \vec{r}^T\vec{x} = \mu,\ \vec{x}^T\vec{1} = \beta \qquad (13)$$
$$\min_{\vec{x} \in \mathbb{B}^N} q\,\vec{x}^T \Sigma \vec{x} - \vec{r}^T\vec{x} \;:\; \vec{x}^T\vec{1} = \beta, \qquad (14)$$
where $\vec{x}$ is a vector of Boolean decision variables, $\vec{r} \in \mathbb{R}^N$ contains the expected returns of the assets, $\Sigma$ is the
same as above, $q > 0$ is the risk level, and $\beta \in \mathbb{N}$ is the budget. In both cases, the Lagrangian of the problem,
where the dual variables are used to encode user-defined penalties, can be mapped to a QUBO (Section 6.1).
These formulations are known as mean-variance portfolio optimization problems [291]. More constraints can also
be added to these problems.
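To make the mapping concrete, the sketch below folds the budget constraint of Equation (14) into a quadratic penalty, $q\,\vec{x}^T\Sigma\vec{x} - \vec{r}^T\vec{x} + \lambda(\vec{1}^T\vec{x} - \beta)^2$, and assembles a single QUBO matrix using $x_i^2 = x_i$. The penalty weight λ plays the role of the dual-variable hyperparameter mentioned in Section 6.1; the function name is our own.

```python
import numpy as np

def mean_variance_qubo(Sigma, r, q, beta, lam):
    """Assemble Q so that x^T Q x = q x^T Sigma x - r.x + lam*(1^T x - beta)^2
    - lam*beta^2 for binary x (the constant lam*beta^2 is dropped)."""
    N = len(r)
    Q = q * np.asarray(Sigma, dtype=float)
    # linear terms go on the diagonal since x_i^2 = x_i for binary x
    Q[np.diag_indices(N)] += -np.asarray(r, dtype=float) + lam * (1 - 2 * beta)
    Q += lam * (np.ones((N, N)) - np.eye(N))   # off-diagonal part of (1^T x)^2
    return Q
```

A brute-force check over all binary vectors confirms that the QUBO cost agrees with the penalized objective up to the dropped constant $\lambda\beta^2$.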
Hodson et al. [297] and Slate et al. [298] both utilized QAOA to solve a problem similar to Equation (14).
They allowed for three possible decisions: long position, short position, or neither. In addition, they utilized
mixing operators (Section 6.1.2) that accounted for constraints. Alternatively, Gilliam et al. [278] utilized an
extension of Grover adaptive search (Section 6.1.5) to solve Equation (14) with a quadratic speedup over classical
unstructured search. Rosenberg et al. [299] and Mugel et al. [300] solved dynamic versions of Equation (14),
where portfolio decisions are made for multiple time steps. Additionally, an analysis of benchmarking quantum
annealers for portfolio optimization can be found in a paper by Grant et al. [301]. Furthermore, an approach
using reverse quantum annealing (Section 6.1.1) was proposed by Venturelli and Kondratyev (Section 8.2). Note
that any of the methods from Section 6.1 can be applied whenever quantum annealing can. Currently, however,
because quantum annealers possess more qubits than existing gate-based devices, more experimental
results with larger problems have been obtained with annealers.
Convex Formulations. This part of the section focuses on formulations of the portfolio optimization
problem that are convex. The original formulation of mean-variance portfolio optimization by Markowitz [291]
was convex and did not contain integer constraints, unlike those included in the discussion above. Thus,
if the binary variable constraints are relaxed, the optimization problem, represented by Equation (13), can be
reformulated as a convex quadratic program with linear equality constraints with both a fixed desired return and
budget (Equation 15):
$$\min_{\vec{w} \in \mathbb{R}^N} \vec{w}^T \Sigma \vec{w} \;:\; \vec{p}^T\vec{w} = \xi,\ \vec{r}^T\vec{w} = \mu, \qquad (15)$$
where $\vec{w} \in \mathbb{R}^N$ is the allocation vector. The ith entry of $\vec{w}$ represents the proportion of the portfolio
that should invest in asset i. This is in contrast to the combinatorial formulations, where the result was a Boolean
vector. The vectors $\vec{p} \in \mathbb{R}^N$ and $\vec{r} \in \mathbb{R}^N$ contain the assets’ prices and returns, respectively; $\xi \in \mathbb{R}$ is the budget;
and $\mu \in \mathbb{R}$ is the desired expected return. In contrast to Equation (13), Equation (15) admits a closed-form solution
[302]. However, more problem-specific conic constraints and cost terms (assuming these terms are convex) can be
added, increasing the complexity of the problem, in which case it can potentially be solved with more sophisticated
algorithms (e.g., interior-point methods). As mentioned, there exist polynomial speedups provided by quantum
computing for common cone programs (Section 6.2). Equation (15) with additional positivity constraints on the
allocation vector can be represented as an SOCP and solved with quantum IPMs [302].
Considering the exact formulation in Equation (15), however, we might be able to obtain superpolynomial
speedup if we relax the problem even further. Rebentrost and Lloyd [303] presented a relaxed problem where
the goals were to sample from the solution and/or compute statistics. Their approach was to apply the method
of Lagrange multipliers to Equation (15) and obtain a linear system. This system can be formulated as a
quantum linear systems problem and solved with QLS methods (Section 4.5). The result of solving the QLSP
(Section 4.5) is a normalized quantum state corresponding to the solution found with time complexity that is
polylogarithmic in the system size, N . Thus, if only partial information about the solution is required, QLS
methods potentially provide an exponential speedup, in N , over all known classical algorithms for this scenario,
although the sparsity and well-conditioned matrix assumptions mentioned in Section 4.5 have to be met to
obtain this speedup. As mentioned in Section 4.5, however, in scenarios involving low-rank matrices randomized
numerical linear algebra algorithms can be applied to obtain the same exponential speedup in dimension [304].
However, these methods typically have a high polynomial dependence on the rank and precision, which still allows
for a potential polynomial quantum advantage. This dequantized approach was benchmarked in [305].
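For intuition, the linear system that the method of Lagrange multipliers yields for Equation (15) can be written down and solved classically in a few lines; a QLS solver would instead prepare a quantum state proportional to the solution vector. A sketch under our own naming:

```python
import numpy as np

def markowitz_kkt_solve(Sigma, p, r, xi, mu):
    """KKT system for min w^T Sigma w s.t. p^T w = xi, r^T w = mu:
        [[2*Sigma, p, r], [p^T, 0, 0], [r^T, 0, 0]] [w; l1; l2] = [0; xi; mu]."""
    N = len(p)
    M = np.zeros((N + 2, N + 2))
    M[:N, :N] = 2 * Sigma
    M[:N, N] = p
    M[N, :N] = p
    M[:N, N + 1] = r
    M[N + 1, :N] = r
    rhs = np.zeros(N + 2)
    rhs[N] = xi
    rhs[N + 1] = mu
    sol = np.linalg.solve(M, rhs)
    return sol[:N]   # allocation vector w; sol[N:] are the multipliers
```

The returned allocation satisfies both equality constraints exactly, which is easy to verify on random positive-definite covariance matrices.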
where $\vec{x} \in \mathbb{B}^N$ is a vector of decision variables indicating whether to include the swap in the netted subset,
$\vec{p} \in \mathbb{R}^N$ is the vector of notional values associated with the swaps, and $\vec{d} \in \{+1, -1\}^N$ indicates the direction of
the fixed-rate interest payments: +1 if the clearing house pays the counterparty and −1 if the opposite is true.
$V \in \mathbb{R}^{N \times N}_{\geq 0}$ signifies how incompatible two potentially “nettable” swaps are, and α and β weigh the importance
of the second and third terms. The combined goal of the three terms is to maximize the total notional value
netted (first term) such that the notional values of the selected swaps cancel (second term) and the swaps are
compatible (third term). The QUBO problems were solved by utilizing quantum annealing (Section 6.1.1).
7 Machine Learning
The field of machine learning has become a crucial part of various applications in the finance industry. Rich
historical financial data and advances in machine learning make it possible, for example, to train sophisticated
models to detect patterns in stock markets, find outliers and anomalies in financial transactions, automatically
classify and categorize financial news, and optimize portfolios.
Quantum algorithms for machine learning can be further classified by whether they follow a fault-tolerant
approach, a near-term approach, or a mixture of the two approaches. The fault-tolerant approach typically
requires a fully error-corrected quantum computer, and the main goal is to find speedups compared with their
classical counterparts, achieved mostly by applying the quantum algorithms introduced in Section 4.5 as sub-
routines. Thus the applicability of these algorithms may be limited, since most of them begin by loading
the data into a quantum system, which can require exponential time [89, 322]. This issue can be addressed in
theory by having access to qRAM (Section 4.1); as of today, however, no concrete implementation exists. On
the other hand, with the development of more NISQ devices, near-term approaches aim to explore the capability
of these small-scale quantum devices. Quantum algorithms such as VQAs (Section 4.7) consider another type of
enhancement for machine learning: the quantum models might be able to generate correlations that are hard to
represent classically [323, 324]. Such models face many challenges, however, such as trainability, accuracy, and
efficiency, that need to be addressed in order to maintain the hope of achieving quantum advantage when scaling
up these near-term quantum devices [130]. Substantial effort is needed before we can answer whether quantum
machine learning algorithms can yield practical, useful applications, and the answer might take decades.
Nevertheless, quantum machine learning shows great promise for finding improvements and speedups that could
vastly elevate current capabilities.
7.1 Regression
Regression is the process of fitting a numeric function to a training data set. This process is often used
to understand how a value changes when the attributes vary, and it is a key tool for economic forecasting.
Typically, the problem is solved by applying least-squares fitting, where the goal is to find a continuous function
that approximates a set of N data points {xi , yi }. The fit function is of the form
$$f(x, \vec{\lambda}) := \sum_{j=1}^{M} f_j(x)\lambda_j,$$
where $\vec{\lambda}$ is a vector of parameters and $f_j(x)$ is a function of x that can be nonlinear. The optimal parameters can
be found by minimizing the least-squares error:
$$\min_{\vec{\lambda}} \sum_{i=1}^{N} \left(f(x_i, \vec{\lambda}) - y_i\right)^2 = \min_{\vec{\lambda}} |F\vec{\lambda} - \vec{y}|^2,$$
where the $N \times M$ matrix F is defined through $F_{ij} = f_j(x_i)$. Wiebe et al. [325] proposed a quantum algorithm to
solve this problem, where they encoded the optimal parameters of the model into the amplitudes of a quantum
state and developed a quantum algorithm for estimating the quality of the least-squares fit by building on the HHL
algorithm (Section 4.5). The algorithm consists of three subroutines: a quantum algorithm for performing the
pseudoinverse of F , an algorithm that estimates the fitting quality, and an algorithm for learning the parameter ~λ.
Later, Wang [326] proposed using the CKS matrix-inversion algorithm (Section 4.5), which has better dependence
on the precision in the output than HHL has, and used amplitude estimation (Section 4.4) to estimate the optimal
parameters. Kerenedis and Prakash developed algorithms utilizing QSVE (Section 4.5) to perform a coherent
gradient descent for normal and weighted least-squares regression [96]. Moreover, quantum annealing (Section
6.1.1) has been used to solve least-squares regression when formulated as a QUBO (Section 6.1) [327].
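For reference, the classical computation that these quantum routines accelerate is a single pseudoinverse solve of $F\vec{\lambda} \approx \vec{y}$; the basis-function interface below is our own.

```python
import numpy as np

def least_squares_fit(basis_fns, xs, ys):
    """Fit f(x, lambda) = sum_j f_j(x) * lambda_j by minimizing |F lambda - y|^2,
    where F_ij = f_j(x_i); solved via the pseudoinverse of F."""
    F = np.array([[fj(x) for fj in basis_fns] for x in xs])
    lam, *_ = np.linalg.lstsq(F, np.asarray(ys, dtype=float), rcond=None)
    return lam
```

For example, fitting the basis $\{1, x\}$ to data generated by $y = 2 + 3x$ recovers the coefficients (2, 3).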
A Gaussian process (GP) is another widely used model for regression in supervised machine learning. It has
been used, for example, in predicting the price behavior of commodities in financial markets. Given a training
set of N data points {xi , yi }, the goal is to model a latent function f (x) such that
$$y = f(x) + \text{noise},$$
where noise $\sim \mathcal{N}(0, \sigma_n^2)$ is independent and identically distributed Gaussian noise. A practical implementation
of the Gaussian process regression (GPR) model needs to compute a Cholesky decomposition and therefore
requires $O(N^3)$ running time. Zhao et al. [328] proposed using the HHL algorithm to speed up computations in
GPR. By repeated sampling of the results of specific quantum measurements on the output states of the HHL
algorithm, the mean predictor and the associated variance can be estimated with bounded error with potentially
an exponential speedup over classical algorithms.
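The $O(N^3)$ bottleneck referred to above is the Cholesky-based linear solve inside the GPR mean predictor, sketched below with an RBF kernel; the interface and kernel choice are our own assumptions, and the HHL-based approach of Zhao et al. targets exactly this linear solve.

```python
import numpy as np

def rbf_kernel(A, B, length=1.0):
    """Squared-exponential kernel matrix between row-wise point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def gpr_mean(X, y, Xs, sigma_n=0.1):
    """Posterior mean of GP regression: k(Xs, X) (K + sigma_n^2 I)^{-1} y,
    computed via the O(N^3) Cholesky factorization."""
    K = rbf_kernel(X, X) + sigma_n ** 2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return rbf_kernel(Xs, X) @ alpha
```

When the noise level is small, the posterior mean at the training inputs nearly interpolates the training targets, which serves as a quick sanity check.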
Another recent development is to use a feedforward quantum neural network with a quantum feature map,
i.e., a unitary with configurable parameters used to encode the classical input data, and a Hamiltonian cost
function evaluation for the purposes of continuous-variable regression in a technique called quantum circuit
learning (QCL) [329]. This technique allows for a low-depth circuit and has important application opportunities
in quantum algorithms for finance.
7.2 Classification
Classification is the process of placing objects into predefined groups. This type of process is also called pattern
recognition. This area of machine learning can be used effectively in risk management and large data processing
when the group information is of particular interest, for example, in creditworthiness identification and fraud
detection.
where $\vec{\alpha} = (\alpha_1, \cdots, \alpha_M)^T$, subject to the constraints $\sum_{j=1}^{M} \alpha_j = 0$ and $y_j \alpha_j \geq 0$. Here we have a key quantity
for the supervised machine learning problem [330], the kernel matrix $K_{jk} = k(\vec{x}_j, \vec{x}_k) = \vec{x}_j \cdot \vec{x}_k$. Solving the dual
form involves evaluating the dot products using the kernel matrix and then finding the optimal parameters $\alpha_j$
by quadratic programming, which takes $O(M^3)$ time in the non-sparse case. A quantum SVM was first proposed by
Rebentrost et al. [331], who showed that a quantum SVM can be implemented with $O(\log MN)$ runtime in both
the training and classification stages, under the assumption that there are oracles for the training data that return
quantum states $|\vec{x}_j\rangle = \frac{1}{\|\vec{x}_j\|_2} \sum_{k=1}^{N} (\vec{x}_j)_k |k\rangle$, the norms $\|\vec{x}_j\|_2$ and the labels $y_j$ are given, and the states are
constructed by using qRAM [70, 332]. The core of this quantum classification algorithm is a non-sparse matrix
exponentiation technique for efficiently performing a matrix inversion of the training data inner-product (kernel)
matrix. Lloyd et al. [332] showed that, effectively, a low-rank approximation to the kernel matrix is used due to
the eigenvalue filtering procedure used by HHL [88]. As mentioned in Section 6.2, Kerenedis et al. proposed
a quantum enhancement to second-order cone programming that was subsequently used to provide a small
polynomial speedup to the training of the classical `1 SVM [281]. In addition, classical SVMs using quantum-
enhanced feature spaces to construct quantum kernels have been proposed by Havlíček et al. [333]. One benefit
of quantum kernels is that there exist metrics for testing for potential quantum advantage [324]. In addition,
there have been techniques proposed to enable generalization [334, 335], which has been observed to be an issue
with standard quantum-kernel methods [336].
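Concretely, the quantum SVM of Rebentrost et al. works with a least-squares SVM reformulation, which replaces the quadratic program by the linear system sketched below (solved here classically; the quantum algorithm inverts the same matrix with HHL). The regularization parameter gamma and the function names are our own assumptions.

```python
import numpy as np

def ls_svm_train(X, y, gamma=10.0):
    """Solve the least-squares SVM system
        [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    with a linear kernel K_jk = x_j . x_k."""
    M = len(y)
    K = X @ X.T
    F = np.zeros((M + 1, M + 1))
    F[0, 1:] = 1.0
    F[1:, 0] = 1.0
    F[1:, 1:] = K + np.eye(M) / gamma
    sol = np.linalg.solve(F, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, coefficients alpha

def ls_svm_predict(X_train, b, alpha, x):
    """Classify x by the sign of sum_j alpha_j (x_j . x) + b."""
    return np.sign(alpha @ (X_train @ x) + b)
```

On a small linearly separable data set, the trained classifier reproduces the training labels.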
7.2.2 Quantum Nearest-Neighbors Algorithm
The k-nearest neighbors algorithm can be used for classification or regression. Given a data
set, the algorithm assumes that data points closer together are more similar; it uses distance calculations to
group close points together and defines a class based on the commonality of the nearby points. Classically, the
algorithm works as follows: (1) determine the distance between the query example and each other data point in
the set, (2) sort the data points in a list indexed from closest to the query example to farthest, and (3) choose the
first k entries and, for regression, return the mean of the k labels or, for classification, return the mode of the k labels.
The computationally expensive step is to compute the distance between elements in the data set. The quantum
nearest-neighbor classification algorithm was proposed by Wiebe et al. [337], who used the Euclidean distance
and the inner product as distance metrics. The distance between two points is encoded in the amplitude of a
quantum state. They used amplitude amplification (Section 4.2) to amplify the probability of creating a state
to store the distance estimate in a register, without measuring it. They then applied the Dürr–Høyer algorithm
(Section 6.1.5). Later, Ruan et al. proposed using the Hamming distance as the metric [338]. More recently,
Basheer et al. have proposed using a quantum k maxima-finding algorithm to find the k-nearest neighbors and
use the fidelity and inner product as measures of similarity [339].
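The classical baseline that these quantum variants accelerate follows steps (1)–(3) directly; a minimal sketch for the classification case, with our own names:

```python
import math
from collections import Counter

def knn_classify(points, labels, query, k=3):
    """Plain classical k-NN classification with Euclidean distances:
    (1) compute distances, (2) sort by closeness, (3) return the mode
    of the k nearest labels."""
    order = sorted(range(len(points)),
                   key=lambda i: math.dist(points[i], query))
    top = [labels[i] for i in order[:k]]
    return Counter(top).most_common(1)[0][0]
```

The distance computations in step (1) are the expensive part the quantum algorithms above target.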
7.3 Clustering
Clustering, or cluster analysis, is an unsupervised machine learning task. It explores and discovers the grouping
structure of the data. In finance, cluster analysis can be used to develop a trading approach that helps investors
build a diversified portfolio. It can also be used to analyze different stocks such that the stocks with high
correlations in returns fall into one basket.
7.4 Dimensionality Reduction
The primary linear technique for dimensionality reduction, principal component analysis, performs a linear map-
ping of the data to a lower-dimensional space in such a way that the variance of the data in the low-dimensional
representation is maximized. Generally, principal component analysis mathematically amounts to finding domi-
nant eigenvalues and eigenvectors of a very large matrix. The standard context for PCA involves a data set with
observations on M variables for each of N objects. The data defines an $N \times M$ data matrix X, in which the
jth column is the vector $\vec{x}_j$ of observations on the jth variable. The aim is to find a linear combination of the
columns of the matrix X with maximum variance. This process boils down to finding the largest eigenvalues and
corresponding eigenvectors of the covariance matrix. Lloyd et al. [353] proposed quantum PCA. In addition, they
showed that multiple copies of a quantum system with density matrix ρ can be used to construct the unitary
transformation $e^{-i\rho t}$, which leads to revealing the eigenvectors corresponding to the large eigenvalues in quantum
form for an unknown low-rank density matrix. He et al. introduced a low-complexity quantum principal
component analysis algorithm [354]. Other quantum algorithms for PCA were proposed in [355–357]. Lastly, Li
et al. [358] presented a quantum version of kernel PCA, Cong [359] presented quantum discriminant analysis,
and Kerenidis and Luongo [360] developed quantum slow feature analysis.
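The classical procedure just described fits in a few lines; the quantum algorithms above aim at the eigendecomposition step. A sketch with our own names:

```python
import numpy as np

def pca(X, k):
    """Classical PCA on an N x M data matrix X (rows = objects, columns =
    variables): top-k eigenpairs of the covariance matrix."""
    Xc = X - X.mean(axis=0)                 # center each variable
    cov = (Xc.T @ Xc) / (len(X) - 1)        # M x M covariance matrix
    vals, vecs = np.linalg.eigh(cov)        # eigh returns ascending order
    order = np.argsort(vals)[::-1][:k]
    return vals[order], vecs[:, order]      # top-k variances and components
```

On rank-one data the leading component aligns with the generating direction and the remaining variance is zero.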
For data sets that are high dimensional, incomplete, and noisy, the extraction of information is generally
challenging. Topological data analysis (TDA) is an approach to study qualitative features and analyze and
explore the complex topological and geometric structures of data sets. Persistent homology (PH) is a method
used in TDA to study qualitative features of data that persist across multiple scales. Lloyd et al. [361] proposed
a quantum algorithm to compute the Betti numbers, that is, the numbers of connected components, holes, and
voids in the data set at varying scales. Essentially, the algorithm operates by finding the eigenvectors and
eigenvalues of the combinatorial Laplacian and estimating Betti numbers to all orders and to accuracy δ in time
$O(n^5/\delta)$. The most significant speedup is for dense clique complexes [362]. An improved version as well as a
NISQ version of the quantum algorithm for PH has been proposed by Ubaru et al. [363]. Further improvements
for implementing the Dirac operator, required for PH, have been made by Kerenedis and Prakash [364].
qubit states, and (3) obtain the desired probability amplitudes through controlled rotation gates. In a later work
they tested it on IBM quantum hardware [373]. The applications in finance include portfolio simulation [374]
and decision-making modeling [375].
information processing. Quantum Fourier convolutional neural networks have been developed by Shen and Liu
[401]. Cong et al. [402] developed circuit-based quantum analogues to CNNs.
where γ ∈ [0, 1] is a discount factor and π(s, a) is the probability of the agent selecting action a at state s
according to policy π.
Dong et al. [405] proposed representing the state set with a quantum superposition state; the eigenstate
obtained by randomly observing the quantum state is the action. The probability of each eigen action is determined
by its probability amplitude, which is updated in parallel according to the rewards. The probability of a “good”
action is amplified by using iterations of Grover’s algorithm (Section 4.1). In addition, they showed that the
approach makes a good trade-off between exploration and exploitation using the probability amplitude and can
speed up learning through quantum parallelism. Paparo et al. [406] showed that the computational complexity
of a particular model, projective simulation, can be quadratically reduced. In addition, quantum neural networks
(Section 7.6) and quantum Boltzmann machines (Section 7.5.3) have been used for approximate RL [407–411].
Cherrat et al. described quantum reinforcement learning via policy iteration [412]. Additionally, Cornelissen
explored how to apply quantum gradient estimation to quantum reinforcement learning [413].
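The amplitude-amplification step in the approach of Dong et al. can be sketched numerically. The toy snippet below uses plain NumPy linear algebra rather than a quantum SDK, and the action count and "good" action index are made-up illustrative values; it applies the Grover operator (Section 4.1) to a uniform superposition over actions and shows the good action's probability being amplified:

```python
import numpy as np

n_actions, good = 8, 3             # toy action register and "good" action index
state = np.full(n_actions, 1 / np.sqrt(n_actions))   # uniform superposition

oracle = np.eye(n_actions)
oracle[good, good] = -1            # flip the sign of the good action's amplitude
s = np.full(n_actions, 1 / np.sqrt(n_actions))
diffuser = 2 * np.outer(s, s) - np.eye(n_actions)    # inversion about the mean

for _ in range(2):                 # ~ (pi/4) * sqrt(8) ≈ 2 Grover iterations
    state = diffuser @ (oracle @ state)

print(round(state[good] ** 2, 3))  # probability of the good action: 0.945
```

Starting from probability 1/8 = 0.125, two iterations raise the chance of sampling the good action to about 95%, which is the trade-off between exploration and exploitation mentioned above.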
7.8 Natural Language Modeling
Natural language processing (NLP) has flourished over the past couple of years as a result of advancements in
deep learning [414]. One important component of NLP is building language models [415]. A language model
is a statistical model used to predict which word is likely to appear next in a sentence. Today, deep
neural networks using transformers [416] are utilized to construct language models [417–419] for a variety of
tasks. Concerns remain, however, such as bias [420], controversial conclusions [421], energy consumption, and
environmental impact [422]. Thus it makes sense to look to quantum computing for potentially better language
models for NLP.
The DisCoCat framework, developed by Coecke et al. [423], combines meaning and grammar into a single
language model utilizing category theory, something not previously done. For grammar, they utilized pregroups
[424], sets with relaxed group axioms, to validate sentence grammar. Semantics is represented by a vector space
model, similar to those commonly used in NLP [425]. However, DisCoCat also allows for higher-order tensors to
distinguish different parts of speech.
Coecke et al. noted that pregroups and vector spaces are asymmetric and symmetric strict monoidal cate-
gories, respectively. Thus, they can be represented by utilizing the powerful diagrammatic framework associated
with monoidal categories [426]. The revelation made by this group [427] was that this same diagrammatic frame-
work is utilized by categorical quantum mechanics [428]. Because of the similarity between the two, the diagrams
can be represented by quantum circuits [427] utilizing the algebra of ZX-calculus [429]. Thus, because of this
connection and the use of tensor operations, the group argued that this natural language model is “quantum-
native,” meaning it is more natural to run the DisCoCat model on a quantum computer instead of a classical one
[427]. Coecke et al. ran multiple experiments on NISQ devices for a variety of simple NLP tasks [430–432].
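A toy numerical sketch of the tensor semantics underlying DisCoCat (the dimensions and word tensors below are arbitrary placeholders, and the pregroup bookkeeping is omitted): nouns are vectors, a transitive verb is an order-3 tensor, and the meaning of "subject verb object" is the corresponding tensor contraction, which is exactly the kind of operation a quantum circuit performs natively:

```python
import numpy as np

rng = np.random.default_rng(1)
d_noun, d_sent = 4, 2              # toy noun-space and sentence-space dimensions

alice = rng.normal(size=d_noun)    # nouns live in the noun vector space
bob = rng.normal(size=d_noun)
likes = rng.normal(size=(d_noun, d_sent, d_noun))  # transitive verb: order-3 tensor

# Sentence meaning is the contraction dictated by the pregroup grammar:
# the subject and object wires plug into the verb's noun legs.
meaning = np.einsum('i,isj,j->s', alice, likes, bob)
print(meaning.shape)               # a vector in the sentence space
```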
In addition, there are many ways in which QML models, such as QNNs (Section 7.6), could potentially
enhance the capacity of existing natural language neural architectures [433–435].
7.9.3 Implied Volatility
Implied volatility is a metric that captures the market’s forecast of a likely movement in the price of a security.
It is one of the deciding factors in the pricing of options. Sakuma [446] investigated the use of the deep quantum
neural network proposed by Beer et al. [383] in the context of learning implied volatility. Numerical results
suggest that such a QML model is a promising candidate for developing powerful methods in finance [6].
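For context on what such a model must learn, implied volatility is classically obtained by numerically inverting a pricing formula. A minimal sketch, assuming the Black-Scholes model and using bisection (all parameter values below are illustrative):

```python
from math import log, sqrt, exp, erf

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    N = lambda x: 0.5 * (1 + erf(x / sqrt(2)))      # standard normal CDF
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Invert Black-Scholes for sigma by bisection (call price is
    monotonically increasing in sigma)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check: price the option at sigma = 0.2, then recover it.
p = bs_call(S=100, K=105, T=1.0, r=0.01, sigma=0.2)
print(round(implied_vol(p, 100, 105, 1.0, 0.01), 4))  # 0.2
```

A learned model such as the QNN above would replace this inversion loop with a direct map from option data to volatility.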
Figure 1: Quantum amplitude estimation circuit for approximating the expected value of a portfolio
made up of one T-Bill. The algorithm uses 3 evaluation qubits, corresponding to 8 samples. (Image
source: [215]; use permitted under the Creative Commons Attribution 4.0 International License.)
Despite the observed speedup of the QMCI-based algorithm over classical Monte Carlo integration,
practical quantum advantage cannot be achieved by using this algorithm on NISQ devices because of the relatively
small qubit count, qubit connectivity and quality, and decoherence limitations present in modern quantum
processors (Section 3.3). Additionally, Monte Carlo for numerical integration can be efficiently parallelized,
imposing even stronger requirements on quantum algorithms in order to outperform standard classical methods.
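The scaling gap can be made concrete with a small classical experiment (the estimated quantity and sample sizes below are arbitrary choices): the root-mean-square error of classical Monte Carlo shrinks as O(1/√N), whereas quantum amplitude estimation achieves O(1/N) in the number of oracle queries.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(7)
true_p = 0.5 * (1 - erf(1 / sqrt(2)))      # P(Z > 1) ≈ 0.1587 for standard normal Z

def rmse(n, reps=300):
    """RMSE of the n-sample Monte Carlo estimate of P(Z > 1)."""
    est = (rng.standard_normal((reps, n)) > 1).mean(axis=1)
    return np.sqrt(np.mean((est - true_p) ** 2))

# Increasing the samples 16x should shrink the classical error by ~4x,
# the O(1/sqrt(N)) rate; amplitude estimation with N queries gives O(1/N).
e1, e16 = rmse(1_000), rmse(16_000)
print(e1 / e16)                            # ratio close to sqrt(16) = 4
```

This quadratic-versus-linear convergence is exactly why parallelized classical Monte Carlo raises the bar for a practical quantum advantage.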
Figure 2: Results of executing the QAE algorithm on the IBMQ 5 Yorktown (ibmqx2) chip for a range
of evaluation qubits, with 8,192 shots each. Because of the better convergence rate of the quantum
algorithm (blue line), it achieves lower error rates than does Monte Carlo (orange line) for the version
with 4 qubits, with the performance gap estimated to grow even more for larger instances on 5 and 6
qubits (blue dashed line). (Image source: [215]; use permitted under the Creative Commons Attribution
4.0 International License, http://creativecommons.org/licenses/by/4.0/)
need to be overcome in order to perform simulations on the D-Wave quantum annealer. First, the standard way
of enforcing the number of selected assets by introducing a high-energy penalty term can be problematic with
D-Wave devices because of precision issues and the physical limit on energy scales that can be programmed on
the chip. The authors observed that this constraint can be enforced by artificially shifting the Sharpe ratios,
used when computing the risk-adjusted returns of the assets, by a fixed amount. Second, embedding the Ising
optimization problem in the Chimera topology of D-Wave quantum annealers presents a strong limitation on
the problems that can be solved on the device (Section 6.1.1). To overcome this limitation, one can employ the
minor-embedding compilation technique for fully connected graphs [242, 449], allowing the necessary embedding
to be performed using an extended set of variables.
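The standard penalty-term construction that the authors sought to avoid can be sketched as follows (toy returns and covariances; the penalty weight is illustrative, and its required magnitude is exactly what strains the annealer's precision). The quadratic penalty A(Σᵢ xᵢ − k)² expands into diagonal and pairwise entries of the QUBO matrix:

```python
import numpy as np

def portfolio_qubo(mu, Sigma, k, risk_aversion=1.0, penalty=10.0):
    """QUBO matrix Q for min x^T Q x over x in {0,1}^n, encoding
    -mu^T x + risk_aversion * x^T Sigma x + penalty * (sum_i x_i - k)^2
    (the constant penalty * k^2 term is dropped)."""
    n = len(mu)
    Q = risk_aversion * Sigma.astype(float).copy()
    # Since x_i^2 = x_i for binaries, linear terms live on the diagonal.
    Q[np.diag_indices(n)] += -mu + penalty * (1 - 2 * k)
    Q += penalty * (np.ones((n, n)) - np.eye(n))  # cross terms of (sum_i x_i)^2
    return Q

mu = np.array([0.10, 0.12, 0.08, 0.03])       # toy expected returns
Sigma = np.diag([0.05, 0.20, 0.05, 0.01])     # toy (diagonal) covariance
Q = portfolio_qubo(mu, Sigma, k=2)

def cost(b):
    x = np.array([(b >> i) & 1 for i in range(4)])
    return x @ Q @ x

best = min(range(16), key=cost)               # brute-force the 2^4 bitstrings
best_bits = [(best >> i) & 1 for i in range(4)]
print(best_bits)  # exactly two assets selected: the cardinality penalty holds
```

On hardware, the large penalty entries must share the chip's limited programmable energy range with the small return and risk coefficients, which motivates the Sharpe-ratio-shifting workaround described above.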
The novelty of the work by Venturelli and Kondratyev is the use of the reverse annealing protocol [237],
which was found to be more than 100 times faster on average than forward annealing, as measured by the time to solution
(TTS) metric. Reverse quantum annealing, briefly mentioned in Section 6.1.1, is illustrated in Figure 3. It
is a new approach enabled by the latest D-Wave architecture designed to help escape local minima through a
combination of quantum and thermal fluctuations. The annealing schedule starts in some classical state provided
by the user, as opposed to the quantum superposition state, as in the case of a forward annealing. The transverse
field is then increased, driving backward annealing from the classical initial state to some intermediate quantum
superposition state. The annealing schedule functions A(t) and B(t) (Section 6.1.1) are then kept constant for a
fixed amount of time, allowing the system to perform the search for a global minimum. The reverse annealing
schedule is completed by removing the transverse field, effectively performing the forward annealing protocol
with the system evolving to the ground state of the problem Hamiltonian.
The DW2000Q device used in [448] allows embedding up to 64 logical binary variables on a fully connected
graph. The authors considered portfolio optimization problems with 24–60 assets, with 30 randomly generated
instances for each size, and calculated the TTS. The results for both forward and reverse quantum annealing
were then compared with the industry-established genetic algorithm (GA) approach used as a classical benchmark
heuristic. The TTS for reverse annealing with the shortest annealing and pause times was found to be several
orders of magnitude smaller than for the forward annealing and GA approaches, demonstrating a promising step
toward practical applications of NISQ devices.
The development of new quantum annealing protocols and approaches, such as the reverse quantum annealing
considered in this section, is an active area of research.
\[
\begin{bmatrix}
0 & 0 & \vec{r}^{\,T} \\
0 & 0 & \vec{p}^{\,T} \\
\vec{r} & \vec{p} & \Sigma
\end{bmatrix}
\begin{bmatrix}
\eta \\ \theta \\ \vec{w}
\end{bmatrix}
=
\begin{bmatrix}
\mu \\ \xi \\ \vec{0}
\end{bmatrix},
\qquad (17)
\]
where η, θ are the dual variables. This lends itself to quantum linear systems algorithms (Section 4.5).
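For intuition, the linear system of Equation (17) can be solved classically for a toy instance (the returns, prices, and covariances below are made up); a QLS algorithm would instead prepare a quantum state proportional to the same solution vector:

```python
import numpy as np

# Toy instance of the linear system in Equation (17) with n = 3 assets.
r = np.array([0.08, 0.10, 0.04])          # expected returns
p = np.array([1.0, 1.0, 1.0])             # (normalized) asset prices
Sigma = np.array([[0.10, 0.02, 0.00],
                  [0.02, 0.15, 0.01],
                  [0.00, 0.01, 0.05]])    # covariance matrix
mu, xi = 0.07, 1.0                        # target return and budget

n = len(r)
A = np.zeros((n + 2, n + 2))              # assemble the block matrix of Eq. (17)
A[0, 2:], A[1, 2:] = r, p
A[2:, 0], A[2:, 1], A[2:, 2:] = r, p, Sigma
b = np.concatenate(([mu, xi], np.zeros(n)))

eta, theta, *w = np.linalg.solve(A, b)    # classical solve; a QLS algorithm
w = np.array(w)                           # prepares |x> proportional to this
print(np.isclose(r @ w, mu), np.isclose(p @ w, xi))  # constraints satisfied
```

The first two rows enforce the return and budget constraints on the portfolio weights w, while the remaining rows are the stationarity conditions involving the dual variables η and θ.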
Figure 3: Forward (a) and reverse (b) quantum annealing protocols. During reverse quantum annealing,
the system is initialized in a classical state (c, left), evolved following a “backward” annealing schedule to
some quantum superposition state. At that point the evolution is interrupted for a fixed amount of time,
allowing the system to perform a global search (c, middle). The system’s evolution then is completed by
the forward annealing schedule (c, right). Adapted from [237].
To reiterate, solving a QLSP consists of returning a quantum state, with bounded error, that is proportional
to the solution of a linear system $A\vec{x} = \vec{b}$. Quantum algorithms for this problem on sparse and well-conditioned
matrices potentially achieve an exponential speedup in the system size N . Although a QLS algorithm cannot
provide classical access to amplitudes of the solution state while maintaining the exponential speedup in N ,
potentially useful financial statistics can be obtained from the state.
The QLS algorithm discussed in this section is HHL. This algorithm is typically viewed as one that is intended
for the fault-tolerant era of quantum computation. Two of the main components of the algorithm that contribute
to this constraint are quantum phase estimation (Section 4.3) and the loading of the inverse of the eigenvalue
estimations onto the amplitudes of an ancilla, called eigenvalue inversion [88].15 Yalovetzky et al. [450] developed
an algorithm called NISQ-HHL with the goal of applying it to portfolio optimization problems on NISQ devices.
While it does not retain the asymptotic speedup of HHL, the authors’ approach can potentially provide benefits
in practice. They applied their algorithm to small mean-variance portfolio optimization problems and executed
them on the trapped-ion Quantinuum System Model H1 device [63].
QPE is typically difficult to run on near-term devices because of the large number of ancillary qubits and
controlled operations required for the desired eigenvalue precision (depth scales as O(1/δ) for precision δ). An
alternative realization of QPE reduces the number of ancillary qubits to one and replaces all controlled-phase
gates in the quantum Fourier transform component of QPE with classically controlled single-qubit gates [451].
This method requires mid-circuit measurements, ground-state resets, and quantum conditional logic (QCL). These
features are available on the Quantinuum System Model H1 device. The paper calls this method QCL-QPE. The
differences between these two QPE methods are highlighted in Figure 4.
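The bit-by-bit logic of such a one-ancilla QPE can be mimicked classically. The sketch below tracks only the ancilla's relative phase rather than simulating a full circuit, and assumes the register is in an eigenstate whose phase has exactly m binary digits; the classically controlled corrections stand in for the mid-circuit measurements and QCL of the hardware implementation:

```python
import numpy as np

def iterative_qpe(phi, m):
    """One-ancilla iterative phase estimation of an m-bit phase phi,
    extracting digits from least significant to most significant."""
    bits = [0] * (m + 1)                   # bits[k] = k-th binary digit of phi
    for k in range(m, 0, -1):
        # Ancilla after H and controlled-U^(2^(k-1)): (|0> + e^(i*theta)|1>)/sqrt(2)
        theta = 2 * np.pi * (2 ** (k - 1)) * phi
        # Classically controlled correction removes the already-measured lower bits.
        theta -= 2 * np.pi * sum(bits[j] / 2 ** (j - k + 1)
                                 for j in range(k + 1, m + 1))
        p1 = np.sin(theta / 2) ** 2        # Pr[measure 1] after the final H
        bits[k] = int(p1 > 0.5)            # deterministic for an exact m-bit phi
    return sum(bits[k] / 2 ** k for k in range(1, m + 1))

print(iterative_qpe(phi=0.6875, m=4))  # 0.6875 = 0.1011 in binary, recovered exactly
```

Each round reuses the single ancilla, which is why this variant trades the ancilla register of textbook QPE for mid-circuit measurement, reset, and conditional gates.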
With regard to eigenvalue inversion, implementing the controlled rotations on near-term devices typically
requires a multicontrolled rotation for every possible eigenvalue that can be estimated by QPE, such as the
uniformly controlled rotation gate [452]. This circuit is exponentially deep in the number of qubits. Fault-
tolerant implementations can take advantage of quantum arithmetic methods to implement efficient algorithms
for the required arcsin computation. An alternative approach for the near term is a hybrid version of HHL
[453], which the developers of NISQ-HHL extended. This approach first uses QPE to estimate the eigenvalues
for which controlled rotations need to be performed. While the number of controlled rotations is thereby reduced, this approach
does not retain the asymptotic speedup provided by HHL because of the loss of coherence. A novel procedure
15 Hamiltonian simulation also contributes to this constraint.
Figure 4: Circuits for estimating the eigenvalues of the unitary operator U to three bits using QPE (left
circuit) or QCL-QPE (right circuit). S is the register that U is applied to, and j is a classical register;
H refers to the Hadamard gate; and Rk , for k = 2, 3, are the standard phase gates used by the quantum
Fourier transform [2]. Adapted from [450].
was developed by the authors for scaling the matrix by a factor γ to make it easier to distinguish the eigenvalues
in the output distribution of QPE and improve performance. Figure 5 displays experimental results collected
from the Quantinuum H1 device to show the effect of the scaling procedure.16
Figure 5: Probability distributions over the eigenvalue estimations from the QCL-QPE run using γ = 50
(left) and γ = 100 (right). The blue bars represent the experimental results on the H1 machine—1,000
shots (left) and 2,000 shots (right)—and we compare them with the theoretical eigenvalues (classically
calculated using a numerical solver) represented with the red dots and the results from the Qiskit [454]
QASM simulator represented with the orange dots. Adapted from [450].
These methods were executed on the Quantinuum H1 device for a few small portfolios (Table 2). A controlled-
SWAP test [455] was performed between the state from HHL and the solution to the linear system obtained
classically (loaded onto a quantum state) [450]. The number of H1 two-qubit ZZMax gates used is also displayed
in Table 2.
                               HHL    NISQ-HHL    Relative Change
Rotations                        7           6            -14.29%
Eigenvalue Inversion Depth     138         120            -13.04%
Inner Product in Simulation   0.59        0.83            +40.67%
Inner Product on H1-1         0.42        0.46             +9.52%
Table 2: Comparison of the number of rotations in the eigenvalue inversion, ZZMax depth of the HHL
plus the controlled-SWAP test circuits and the inner product calculated from the Qiskit state vector
simulator and the Quantinuum H1 results with 3,000 shots. Adapted from [450].
Statistics such as risk can be computed efficiently from the quantum state. Sampling can also be performed
if the solution state is sparse enough [303].
16 Let $\Pi_\lambda$ denote the eigenspace projector for an eigenvalue λ of the Hermitian matrix in Equation (17), and
let $|b\rangle$ be a quantum state proportional to the right side of Equation (17). To clarify, the y-axis value of the red
dot corresponding to λ, on either of the histograms in Figure 5, is $\|\Pi_\lambda |b\rangle\|_2^2$.
creation and first adaptation of new technologies. This is true also when it comes to quantum computing. In
fact, finance is estimated to be the first industry sector to benefit from quantum computing, not only in the
medium and long terms, but even in the short term, because of the large number of financial use cases that
lend themselves to quantum computing and their amenability to be solved effectively even in the presence of
approximations.
A common misconception about quantum computing is that quantum computers are simply faster processors
and hence speedups will be automatically achieved by porting a classical algorithm from a classical computer
to a quantum one, just as one would move a program to upgraded hardware. In reality, quantum computers
are fundamentally different from classical computers, so much so that the algorithmic process for solving an
application has to be completely redesigned based on the architecture of the underlying quantum hardware.
Most of the use cases that characterize the finance industry sector have high computational complexity,
and for this reason they lend themselves to quantum computing. However, the computational complexity of a
problem per se is not a guarantee that quantum computing can make a difference. To assess commercial viability,
we must first answer two questions. First, is there in fact any need for improved speed or accuracy of
calculations for the given application? Determining whether there is a gap in what the current
technology can provide us is essential and is a question that financial players typically ask when doing proof
of concept projects. Second, when adapting a classical finance application to quantum computing, can that
application achieve quantum speedup and, if so, when will that happen based on the estimated evolution of the
underlying quantum hardware?
Today’s quantum computers are not yet capable of solving real-life-scale problems in the industry more effi-
ciently and more accurately than classical computers can. Nevertheless, proofs of concept for quantum advantage
and even quantum supremacy have already been provided (see Section 3). There is hope that the situation will
change in the coming years with demonstrations of quantum advantage. It is crucial for enterprises, and particu-
larly financial institutions, to use the current time to become quantum ready in order to avoid being left behind
when quantum computers become operational in a production environment.
We are in the early stages of the quantum revolution, yet we are already observing a strong potential for
quantum technology to transform the financial industry. So far, the community has developed potential quantum
solutions for portfolio optimization, derivatives pricing, risk modeling, and several problems in the realm of
artificial intelligence and machine learning, such as fraud detection and NLP.
In this paper we have provided a comprehensive review of quantum computing for finance, specifically focusing
on quantum algorithms that can solve computationally challenging financial problems. We have described the
current landscape of the state of the industry. We hope this work will be used by the scientific community,
both in industry and in academia, not only as a reference, but also as a source of information to identify new
opportunities and advance the state of the art.
Competing Interests
The authors declare that they have no competing interests.
Funding
C.N.G.'s work was supported in part by the U.S. Department of Energy, Office of Science, Office of Workforce
Development for Teachers and Scientists (WDTS) under the Science Undergraduate Laboratory Internships
Program (SULI). Y.A.’s work at Argonne National Laboratory was supported by the U.S. Department of Energy,
Office of Science, under contract DE-AC02-06CH11357.
Disclaimer
This paper was prepared for information purposes by the teams of researchers from the various institutions
identified above, including the Future Lab for Applied Research and Engineering (FLARE) group of JPMorgan
Chase Bank, N.A. This paper is not a product of the Research Department of JPMorgan Chase & Co. or its
affiliates. Neither JPMorgan Chase & Co. nor any of its affiliates make any explicit or implied representation
or warranty and none of them accept any liability in connection with this paper, including, but not limited to,
the completeness, accuracy, reliability of information contained herein and the potential legal, compliance, tax
or accounting effects thereof. This document is not intended as investment research or investment advice, or
a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial
product or service, or to be used in any way for evaluating the merits of participating in any transaction.
Authors’ Contributions
Dylan Herman, Cody Ning Googin, and Xiaoyuan Liu contributed equally to the manuscript. All authors
contributed to the writing and reviewed and approved the final manuscript.
Acronyms
NISQ Noisy Intermediate-Scale Quantum
AQC Adiabatic Quantum Computing
qRAM Quantum Random Access Memory
QCL Quantum Conditional Logic
VQE Variational Quantum Eigensolver
QAOA Quantum Approximate Optimization Algorithm
QML Quantum Machine Learning
PCA Principal Component Analysis
TDA Topological Data Analysis
QNN Quantum Neural Network
PQC Parameterized Quantum Circuit
GAN Generative Adversarial Network
SVM Support Vector Machine
PH Persistent Homology
GP Gaussian Process
NP Non-Deterministic Polynomial
VQA Variational Quantum Algorithm
QPE Quantum Phase Estimation
NLP Natural Language Processing
RL Reinforcement Learning
QAA Quantum Amplitude Amplification
QAE Quantum Amplitude Estimation
QLSP Quantum Linear Systems Problem
QLS Quantum Linear System
QLSA Quantum Linear Systems Algorithm
HHL Harrow Hassidim Lloyd
QSVT Quantum Singular Value Transform
QSVE Quantum Singular Value Estimation
CKS Childs, Kothari, and Somma
QA Quantum Annealing
IP Integer Programming
IQP Integer Quadratic Programming
QUBO Quadratic Unconstrained Binary Optimization
ITE Imaginary-Time Evolution
VarQITE Variational Quantum Imaginary Time Evolution
MCI Monte Carlo Integration
QMCI Quantum Monte Carlo Integration
VaR Value at Risk
CVaR Conditional Value at Risk
CDO Collateralized Debt Obligation
ECR Economic Capital Requirement
CPU Central Processing Unit
GPU Graphics Processing Unit
References
[1] Alexandre Ménard, Ivan Ostojic, Mark Patel, and Daniel Volz. A game plan for quantum computing, 2020.
[2] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information. Cambridge
University Press, 2010.
[3] Daniel J Egger, Claudio Gambella, Jakub Marecek, Scott McFaddin, Martin Mevissen, Rudy Raymond,
Andrea Simonetto, Stefan Woerner, and Elena Yndurain. Quantum computing for finance: State-of-the-art
and future prospects. IEEE Transactions on Quantum Engineering, 1:1–24, 2020.
[4] Adam Bouland, Wim van Dam, Hamed Joorati, Iordanis Kerenidis, and Anupam Prakash. Prospects and
challenges of quantum finance. arXiv preprint arXiv:2011.06492, 2020.
[5] Roman Orus, Samuel Mugel, and Enrique Lizaso. Quantum computing for finance: Overview and prospects.
Reviews in Physics, 4:100028, 2019.
[6] Marco Pistoia, Syed Farhan Ahmad, Akshay Ajagekar, Alexander Buts, Shouvanik Chakrabarti, Dylan Her-
man, Shaohan Hu, Andrew Jena, Pierre Minssen, Pradeep Niroula, Arthur Rattew, Yue Sun, and Romina
Yalovetzky. Quantum Machine Learning for Finance. IEEE/ACM International Conference On Computer
Aided Design (ICCAD), November 2021. ICCAD Special Session Paper.
[7] Andrés Gómez, Álvaro Leitao, Alberto Manzano, Daniele Musso, María R Nogueiras, Gustavo Ordóñez, and
Carlos Vázquez. A survey on quantum computational finance for derivatives pricing and VaR. Archives of
Computational Methods in Engineering, pages 1–27, 2022.
[8] Paul Griffin and Ritesh Sampat. Quantum computing for supply chain finance. In 2021 IEEE International
Conference on Services Computing (SCC), pages 456–459. IEEE, 2021.
[9] Apoorva Ganapathy. Quantum computing in high frequency trading and fraud detection. Engineering
International, 9(2):61–72, 2021.
[10] Research and Markets. Financial Services Global Market Report 2021: COVID-19 Impact and Recovery to
2030, 2021.
[11] Basel Committee on Banking Supervision. High-level summary of Basel III reforms, 2017.
[12] Paul Glasserman. Monte Carlo methods in financial engineering, volume 53. Springer, 2004.
[13] Heath P. Terry, Debra Schwartz, and Tina Sun. The Future of Finance: Volume 3: The socialization of
finance, 2015.
[14] Salesforce. Trends in Financial Services, 2020.
[15] Tucker Bailey, Soumya Banerjee, Christopher Feeney, and Heather Hogsett. Cybersecurity: Emerging
challenges and solutions for the boards of financial-services companies, 2020.
[16] Peter W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum
computer. SIAM Journal on Computing, 26(5):1484–1509, Oct 1997.
[17] Craig Gidney and Martin Ekerå. How to factor 2048 bit RSA integers in 8 hours using 20 million noisy
qubits. Quantum, 5:433, Apr 2021.
[18] Martin Roetteler, Michael Naehrig, Krysta M Svore, and Kristin Lauter. Quantum resource estimates for
computing elliptic curve discrete logarithms. In International Conference on the Theory and Application of
Cryptology and Information Security, pages 241–270, 2017.
[19] Andrew M Childs and Wim Van Dam. Quantum algorithms for algebraic problems. Reviews of Modern
Physics, 82(1):1, 2010.
[20] D. J. Bernstein and T. Lange. Post-quantum cryptography. Nature, 2017.
[21] Nicolas Gisin, Grégoire Ribordy, Wolfgang Tittel, and Hugo Zbinden. Quantum cryptography. Reviews of
Modern Physics, 74(1):145, 2002.
[22] Matthew Amy, Olivia Di Matteo, Vlad Gheorghiu, Michele Mosca, Alex Parent, and John Schanck. Esti-
mating the cost of generic quantum pre-image attacks on SHA-2 and SHA-3. In International Conference on
Selected Areas in Cryptography, pages 317–337. Springer, 2016.
[23] Akinori Hosoyamada and Yu Sasaki. Quantum collision attacks on reduced SHA-256 and SHA-512. IACR
Cryptol. ePrint Arch., 2021:292, 2021.
[24] Samuel Jaques, Michael Naehrig, Martin Roetteler, and Fernando Virdia. Implementing Grover oracles for
quantum key search on AES and LowMC. Advances in Cryptology–EUROCRYPT 2020, 12106:280, 2020.
[25] Xavier Bonnetain, André Schrottenloher, and Ferdinand Sibleyras. Beyond quadratic speedups in quantum
attacks on symmetric schemes. IACR-EUROCRYPT-2022, 10 2021.
[26] Laurence A. Wolsey and George L Nemhauser. Integer and combinatorial optimization, volume 55. John
Wiley & Sons, 1999.
[27] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[28] Yuri Alexeev, Dave Bacon, Kenneth R. Brown, Robert Calderbank, Lincoln D. Carr, Frederic T. Chong,
Brian DeMarco, Dirk Englund, Edward Farhi, Bill Fefferman, Alexey V. Gorshkov, Andrew Houck, Jungsang
Kim, Shelby Kimmel, Michael Lange, Seth Lloyd, Mikhail D. Lukin, Dmitri Maslov, Peter Maunz, Christopher
Monroe, John Preskill, Martin Roetteler, Martin J. Savage, and Jeff Thompson. Quantum computer systems
for scientific discovery. PRX Quantum, 2(1), February 2021.
[29] Gagnesh Kumar and Sunil Agrawal. CMOS limitations and futuristic carbon allotropes. In 2017 8th IEEE
Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), pages 68–
71. IEEE, 2017.
[30] Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C. Bardin, Rami Barends, Rupak Biswas,
Sergio Boixo, Fernando G. S. L. Brandao, David A. Buell, et al. Quantum supremacy using a programmable
superconducting processor. Nature, 574(7779):505–510, 2019.
[31] John Preskill. Quantum computing and the entanglement frontier. arXiv preprint arXiv:1203.5813, 2012.
[32] John Preskill. Quantum computing in the NISQ era and beyond. Quantum, 2:79, Aug 2018.
[33] Ramamurti Shankar. Principles of quantum mechanics. Springer Science & Business Media, 1994.
[34] Oswald Veblen and John Wesley Young. Projective geometry, volume 2. Ginn, 1918.
[35] Steven Roman. Advanced linear algebra. Springer, 2008.
[36] John D. Dollard and Charles N. Friedman. Product Integration with Application to Differential Equations.
Encyclopedia of Mathematics and its Applications. London: Addison-Wesley, 1984.
[37] A. Yu. Kitaev, A. H. Shen, and M. N. Vyalyi. Classical and Quantum Computation. American Mathematical
Society, USA, 2002.
[38] Yuchen Wang, Zixuan Hu, Barry C. Sanders, and Sabre Kais. Qudits and high-dimensional quantum
computing. Frontiers in Physics, 8, Nov 2020.
[39] Samantha Buck, Robin Coleman, and Hayk Sargsyan. Continuous variable quantum algorithms: an intro-
duction. arXiv preprint arXiv:2107.02151, 2021.
[40] Dong-Sheng Wang. A comparative study of universal quantum computing models: towards a physical
unification. Quantum Engineering, page e85, 2021.
[41] Dorit Aharonov, Wim Van Dam, Julia Kempe, Zeph Landau, Seth Lloyd, and Oded Regev. Adiabatic
quantum computation is equivalent to standard quantum computation. SIAM Review, 50(4):755–787, 2008.
[42] Daniel A. Lidar and Todd A. Brun. Quantum Error Correction. Cambridge University Press, 2013.
[43] A. G. Fowler, M. Mariantoni, J. M. Martinis, and A. N. Cleland. Surface codes: Towards practical large-scale
quantum computation. Physical Review A, 86(3), Sep 2012.
[44] Arthur G Rattew, Yue Sun, Pierre Minssen, and Marco Pistoia. The efficient preparation of normal distri-
butions in quantum registers. Quantum, 5:609, 2021.
[45] Xin-Chuan Wu, Sheng Di, Emma Maitreyee Dasgupta, Franck Cappello, Hal Finkel, Yuri Alexeev, and
Frederic T. Chong. Full-state quantum circuit simulation by using data compression. Proceedings of the
International Conference for High Performance Computing, Networking, Storage and Analysis, November
2019.
[46] Xin-Chuan Wu, Sheng Di, Emma Maitreyee Dasgupta, Franck Cappello, Hal Finkel, Yuri Alexeev, and
Frederic T Chong. Full-state quantum circuit simulation by using data compression. In Proceedings of the
High Performance Computing, Networking, Storage and Analysis International Conference (SC19), Denver,
CO, USA, 2019. IEEE Computer Society.
[47] Xin-Chuan Wu, Sheng Di, Franck Cappello, Hal Finkel, Yuri Alexeev, and Frederic Chong. Memory-efficient
quantum circuit simulation by using lossy data compression. In Proceedings of the 3rd International Workshop
on Post-Moore Era Supercomputing (PMES) at SC18, Denver, CO, USA, 2018.
[48] Xin-Chuan Wu, Sheng Di, Franck Cappello, Hal Finkel, Yuri Alexeev, and Frederic T Chong. Amplitude-
aware lossy compression for quantum circuit simulation. In Proceedings of 4th International Workshop on
Data Reduction for Big Scientific Data (DRBSD-4) at SC18, 2018.
[49] Danylo Lykov, Roman Schutski, Alexey Galda, Valerii Vinokur, and Yuri Alexeev. Tensor network quantum
simulator with step-dependent parallelization. arXiv preprint arXiv:2012.02430, 2020.
[50] Danylo Lykov, Angela Chen, Huaxuan Chen, Kristopher Keipert, Zheng Zhang, Tom Gibbs, and Yuri
Alexeev. Performance evaluation and acceleration of the QTensor quantum circuit simulator on GPUs. 2021
IEEE/ACM Second International Workshop on Quantum Computing Software (QCS), nov 2021.
[51] Danylo Lykov and Yuri Alexeev. Importance of diagonal gates in tensor network simulations. 2021.
[52] Tameem Albash and Daniel A. Lidar. Adiabatic quantum computation. Reviews of Modern Physics, 90(1),
Jan 2018.
[53] Jérémie Roland and Nicolas J. Cerf. Quantum search by local adiabatic evolution. Physical Review A, 65(4),
Mar 2002.
[54] Jacob D. Biamonte and Peter J. Love. Realizable Hamiltonians for universal adiabatic quantum computers.
Physical Review A, 78(1), Jul 2008.
[55] P. Krantz, M. Kjaergaard, F. Yan, T. P. Orlando, S. Gustavsson, and W. D. Oliver. A quantum engineer’s
guide to superconducting qubits. Applied Physics Reviews, 6(2):021318, 2019.
[56] Colin D. Bruzewicz, John Chiaverini, Robert McConnell, and Jeremy M. Sage. Trapped-ion quantum
computing: Progress and challenges. Applied Physics Reviews, 6(2):021314, 2019.
[57] Antti P. Vepsäläinen, Amir H. Karamlou, John L. Orrell, Akshunna S. Dogra, Ben Loer, Francisca Vas-
concelos, David K. Kim, Alexander J. Melville, Bethany M. Niedzielski, Jonilyn L. Yoder, et al. Impact of
ionizing radiation on superconducting qubit coherence. Nature, 584(7822):551–556, 2020.
[58] Mohan Sarovar, Timothy Proctor, Kenneth Rudinger, Kevin Young, Erik Nielsen, and Robin Blume-Kohout.
Detecting crosstalk errors in quantum information processors. Quantum, 4:321, Sep 2020.
[59] Sergey Bravyi, Sarah Sheldon, Abhinav Kandala, David C. Mckay, and Jay M. Gambetta. Mitigating
measurement errors in multiqubit experiments. Phys. Rev. A, 103(4), 2021.
[60] Jacob R. West, Daniel A. Lidar, Bryan H. Fong, and Mark F. Gyure. High fidelity quantum gates via
dynamical decoupling. Phys. Rev. Lett., 105(23), Dec 2010.
[61] A. Yu. Kitaev. Fault-tolerant quantum computation by anyons. Annals of Physics, 303(1):2–30, 2003.
[62] Austin G Fowler and John M Martinis. Quantifying the effects of local many-qubit errors and nonlocal
two-qubit errors on the surface code. Physical Review A, 89(3):032316, 2014.
[63] J. Pino, Joan Dreiling, C. Figgatt, J. Gaebler, S. Moses, M. Allman, C. Baldwin, Michael Foss-Feig, D. Hayes,
K. Mayer, C. Ryan-Anderson, and Brian Neyenhuis. Demonstration of the trapped-ion quantum CCD com-
puter architecture. Nature, 2021.
[64] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to algorithms.
MIT press, 2009.
[65] Sanjeev Arora and Boaz Barak. Computational complexity: a modern approach. Cambridge University
Press, 2009.
[66] Andris Ambainis. Understanding quantum algorithms via query complexity. In Proceedings of the Interna-
tional Congress of Mathematicians: Rio de Janeiro 2018, pages 3265–3285. World Scientific, 2018.
[67] Lov K Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the twenty-eighth
annual ACM symposium on Theory of Computing, pages 212–219, 1996.
[68] Charles H. Bennett, Ethan Bernstein, Gilles Brassard, and Umesh Vazirani. Strengths and weaknesses of
quantum computing. SIAM Journal on Computing, 26(5):1510–1523, 1997.
[69] I. Kerenidis and A. Prakash. Quantum recommendation systems. arXiv preprint arXiv:1603.08675, 2016.
[70] Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. Quantum random access memory. Physical Review
Letters, 100(16), April 2008.
[71] Aram W Harrow. Small quantum computers and large classical data sets. arXiv preprint arXiv:2004.00026,
2020.
[72] Andris Ambainis. Quantum search with variable times. Theory of Computing Systems, 47(3):786–807, 2010.
[73] Gilles Brassard, Peter Høyer, Michele Mosca, and Alain Tapp. Quantum amplitude amplification and
estimation. Quantum Computation and Information, pages 53–74, 2002.
[74] Dominic W Berry, Andrew M Childs, Richard Cleve, Robin Kothari, and Rolando D Somma. Exponential
improvement in precision for simulating sparse Hamiltonians. In Proceedings of the forty-sixth annual ACM
symposium on Theory of Computing, pages 283–292, 2014.
[75] M. Boyer, G. Brassard, P. Høyer, and A. Tapp. Tight bounds on quantum searching. Fortschritte der
Physik: Progress of Physics, 46(4-5):493–505, 1998.
[76] Lov K. Grover. Fixed-point quantum search. Phys. Rev. Lett., 2005.
[77] Peter Høyer, Michele Mosca, and Ronald de Wolf. Quantum search on bounded-error inputs. Lecture Notes
in Computer Science, pages 291–299, 2003.
[78] Andris Ambainis. Variable time amplitude amplification and quantum algorithms for linear algebra problems.
In STACS’12 (29th Symposium on Theoretical Aspects of Computer Science), volume 14, pages 636–647.
LIPIcs, 2012.
[79] A. Yu. Kitaev. Quantum measurements and the Abelian stabilizer problem. arXiv preprint quant-
ph/9511026, 1995.
[80] R. Cleve, A. Ekert, C. Macchiavello, and M. Mosca. Quantum algorithms revisited. Proceedings of the
Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 454(1969):339–354,
Jan 1998.
[81] Anupam Prakash. Quantum algorithms for linear algebra and machine learning. University of California,
Berkeley, 2014.
[82] Yohichi Suzuki, Shumpei Uno, Rudy Raymond, Tomoki Tanaka, Tamiya Onodera, and Naoki Yamamoto.
Amplitude estimation without phase estimation. Quantum Information Processing, 19(2), Jan 2020.
[83] Dmitry Grinko, Julien Gacon, Christa Zoufal, and Stefan Woerner. Iterative quantum amplitude estimation.
npj Quantum Information, 7(1), Mar 2021.
[84] Tudor Giurgica-Tiron, Iordanis Kerenidis, Farrokh Labib, Anupam Prakash, and William Zeng. Low depth
algorithms for quantum amplitude estimation. arXiv preprint arXiv:2012.03348, 2020.
[85] Kirill Plekhanov, Matthias Rosenkranz, Mattia Fiorentini, and Michael Lubasch. Variational quantum
amplitude estimation. arXiv preprint arXiv:2109.03687, 2021.
[86] Danial Dervovic, Mark Herbster, Peter Mountney, Simone Severini, Naïri Usher, and Leonard Wossnig.
Quantum linear systems algorithms: a primer. arXiv preprint arXiv:1802.08227, 2018.
[87] Andrew M. Childs, Robin Kothari, and Rolando D. Somma. Quantum algorithm for systems of linear equa-
tions with exponentially improved dependence on precision. SIAM Journal on Computing, 46(6):1920–1950,
Jan 2017.
[88] Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd. Quantum algorithm for linear systems of equations.
Phys. Rev. Lett., 103:150502, Oct 2009.
[89] Scott Aaronson. Read the Fine Print. Nature Physics, 11(4):291–293, Apr 2015.
[90] Dominic W. Berry, Graeme Ahokas, Richard Cleve, and Barry C. Sanders. Efficient quantum algorithms
for simulating sparse Hamiltonians. Communications in Mathematical Physics, 270(2):359–371, Dec 2006.
[91] Andrew M. Childs, Yuan Su, Minh C. Tran, Nathan Wiebe, and Shuchen Zhu. Theory of Trotter error with
commutator scaling. Physical Review X, 11(1), Feb 2021.
[92] Dominic W. Berry, Andrew M. Childs, and Robin Kothari. Hamiltonian simulation with nearly optimal
dependence on all parameters. 2015 IEEE 56th Annual Symposium on Foundations of Computer Science,
Oct 2015.
[93] Guang Hao Low and Isaac L. Chuang. Optimal Hamiltonian simulation by quantum signal processing. Phys.
Rev. Lett., 118(1), Jan 2017.
[94] Pedro Costa, Dong An, Yuval R Sanders, Yuan Su, Ryan Babbush, and Dominic W Berry. Optimal scaling
quantum linear systems solver via discrete adiabatic theorem. arXiv preprint arXiv:2111.08152, 2021.
[95] Leonard Wossnig, Zhikuan Zhao, and Anupam Prakash. Quantum linear system algorithm for dense matri-
ces. Physical Review Letters, 120(5), Jan 2018.
[96] Iordanis Kerenidis and Anupam Prakash. Quantum gradient descent for linear systems and least squares.
Physical Review A, 101(2), Feb 2020.
[97] Shantanav Chakraborty, András Gilyén, and Stacey Jeffery. The power of block-encoded matrix powers:
improved regression techniques via faster Hamiltonian simulation. arXiv preprint arXiv:1804.01973, 2018.
[98] Sander Gribling, Iordanis Kerenidis, and Dániel Szilágyi. Improving quantum linear system solvers via a
gradient descent perspective. arXiv preprint arXiv:2109.04248, 2021.
[99] András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe. Quantum singular value transformation and
beyond: exponential improvements for quantum matrix arithmetics. Proceedings of the 51st Annual ACM
SIGACT Symposium on Theory of Computing, June 2019.
[100] Alexander Dranov, Johannes Kellendonk, and Rudolf Seiler. Discrete time adiabatic theorems for quantum
mechanical systems. Journal of Mathematical Physics, 39(3):1340–1349, 1998.
[101] Iordanis Kerenidis and Anupam Prakash. A quantum interior point method for LPs and SDPs. ACM
Transactions on Quantum Computing, 1(1), Oct 2020.
[102] Alan Frieze, Ravi Kannan, and Santosh Vempala. Fast Monte-Carlo algorithms for finding low-rank ap-
proximations. Journal of the ACM (JACM), 51(6):1025–1041, 2004.
[103] Ewin Tang. A quantum-inspired classical algorithm for recommendation systems. In Proceedings of the 51st
Annual ACM SIGACT Symposium on Theory of Computing. Association for Computing Machinery, 2019.
[104] Nai-Hui Chia, András Gilyén, Tongyang Li, Han-Hsuan Lin, Ewin Tang, and Chunhao Wang. Sampling-
based sublinear low-rank matrix arithmetic framework for dequantizing quantum machine learning. Proceed-
ings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, June 2020.
[105] Gregory F. Lawler and Vlada Limic. Random Walk: A Modern Introduction. Cambridge Studies in
Advanced Mathematics. Cambridge University Press, 2010.
[106] Fischer Black and Myron Scholes. The pricing of options and corporate liabilities. In World Scientific
Reference on Contingent Claims Analysis in Corporate Finance: Volume 1: Foundations of CCA and Equity
Valuation, pages 3–21. World Scientific, 2019.
[107] F. C. Klebaner. Introduction to Stochastic Calculus with Applications. Imperial College Press, 3rd edition,
2012.
[108] László Lovász. Random walks on graphs: A survey. In Combinatorics, Paul Erdős is Eighty. Bolyai Soc.
Math. Stud., 2, Jan 1993.
[109] J. R. Norris. Markov Chains. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge
University Press, 1997.
[110] J. Kempe. Quantum random walks: An introductory overview. Contemporary Physics, 44(4):307–327, July
2003.
[111] Mario Szegedy. Quantum speed-up of Markov chain based algorithms. In 45th Annual IEEE symposium
on foundations of computer science, 2004.
[112] Frédéric Magniez, Ashwin Nayak, Jérémie Roland, and Miklos Santha. Search via quantum walk. SIAM
Journal on Computing, 40(1):142–164, Jan 2011.
[113] Neil Shenvi, Julia Kempe, and K. Birgitta Whaley. Quantum random-walk search algorithm. Physical
Review A, 67(5), May 2003.
[114] Andris Ambainis, Julia Kempe, and Alexander Rivosh. Coins make quantum walks faster. In Proceedings
of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’05, pages 1099–1108, USA,
2005. Society for Industrial and Applied Mathematics.
[115] Andris Ambainis, Eric Bach, Ashwin Nayak, Ashvin Vishwanath, and John Watrous. One-dimensional
quantum walks. In Proceedings of the thirty-third annual ACM symposium on Theory of Computing, pages
37–49, 2001.
[116] D. Aharonov, A. Ambainis, J. Kempe, and U. Vazirani. Quantum walks on graphs. In Proceedings of the
thirty-third annual ACM symposium on Theory of Computing, pages 50–59, 2001.
[117] Cristopher Moore and Alexander Russell. Quantum walks on the hypercube. In Proceedings of the 6th
International Workshop on Randomization and Approximation Techniques, pages 164–178. Springer, 2002.
[118] Andris Ambainis. Quantum walk algorithm for element distinctness. SIAM Journal on Computing,
37(1):210–239, 2007.
[119] Simon Apers, András Gilyén, and Stacey Jeffery. A unified framework of quantum walk search. arXiv
preprint arXiv:1912.04233, 2019.
[120] R. D. Somma, S. Boixo, H. Barnum, and E. Knill. Quantum simulations of classical annealing processes.
Phys. Rev. Lett., Sep 2008.
[121] Andrew M. Childs and Jason M. Eisenberg. Quantum algorithms for subset finding. Quantum Info.
Comput., 5(7):593–604, Nov 2005.
[122] Andris Ambainis, András Gilyén, Stacey Jeffery, and Martins Kokainis. Quadratic speedup for finding
marked vertices by quantum walks. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory
of Computing, pages 412–424, New York, NY, USA, 2020. Association for Computing Machinery.
[123] Edward Farhi and Sam Gutmann. Quantum computation and decision trees. Physical Review A, 58(2):915–
928, Aug 1998.
[124] Andrew M. Childs, Edward Farhi, and Sam Gutmann. An example of the difference between quantum and
classical random walks. Quantum Information Processing, 2002.
[125] Andrew M. Childs, Richard Cleve, Enrico Deotto, Edward Farhi, Sam Gutmann, and Daniel A. Spielman.
Exponential algorithmic speedup by a quantum walk. Proceedings of the thirty-fifth ACM symposium on
Theory of computing - STOC ’03, 2003.
[126] Julia Kempe. Quantum random walks hit exponentially faster. arXiv preprint quant-ph/0205083, 2002.
[127] Yue Ruan, Samuel Marsh, Xilin Xue, Xi Li, Zhihao Liu, and Jingbo Wang. Quantum approximate algorithm
for NP optimization problems with constraints. arXiv preprint arXiv:2002.00943, 2020.
[128] Andrew M. Childs. On the relationship between continuous- and discrete-time quantum walk. Communi-
cations in Mathematical Physics, 294(2):581–603, Oct 2009.
[129] Salvador Elías Venegas-Andraca. Quantum walks: a comprehensive review. Quantum Information Pro-
cessing, 11(5):1015–1106, Jul 2012.
[130] M. Cerezo, Andrew Arrasmith, Ryan Babbush, Simon C. Benjamin, Suguru Endo, Keisuke Fujii, Jarrod R.
McClean, Kosuke Mitarai, Xiao Yuan, Lukasz Cincio, and et al. Variational quantum algorithms. Nature
Reviews Physics, 3(9):625–644, Aug 2021.
[131] Lennart Bittel and Martin Kliesch. Training variational quantum algorithms is NP-hard. Phys. Rev. Lett.,
127(12):120502, 2021.
[132] Alberto Peruzzo, Jarrod McClean, Peter Shadbolt, Man-Hong Yung, Xiao-Qi Zhou, Peter J. Love, Alán
Aspuru-Guzik, and Jeremy L. O’Brien. A variational eigenvalue solver on a photonic quantum processor.
Nat. Commun., 5(1), July 2014.
[133] Carlos Bravo-Prieto, Ryan LaRose, Marco Cerezo, Yigit Subasi, Lukasz Cincio, and Patrick J Coles.
Variational quantum linear solver. arXiv preprint arXiv:1909.05820, 2019.
[134] Hsin-Yuan Huang, Kishor Bharti, and Patrick Rebentrost. Near-term quantum algorithms for linear systems
of equations with regression loss functions. New Journal of Physics, 23(11):113021, Nov 2021.
[135] Patrick Huembeli and Alexandre Dauphin. Characterizing the loss landscape of variational quantum cir-
cuits. Quantum Science and Technology, 6(2):025011, Feb 2021.
[136] Edward Farhi, Jeffrey Goldstone, and Sam Gutmann. A quantum approximate optimization algorithm.
arXiv preprint arXiv:1411.4028, 2014.
[137] Stuart Hadfield, Zhihui Wang, Bryan O’Gorman, Eleanor Rieffel, Davide Venturelli, and Rupak Biswas.
From the quantum approximate optimization algorithm to a quantum alternating operator ansatz. Algorithms,
12(2):34, Feb 2019.
[138] Xiao Yuan, Suguru Endo, Qi Zhao, Ying Li, and Simon C. Benjamin. Theory of variational quantum
simulation. Quantum, 3:191, Oct 2019.
[139] A.D. McLachlan. A variational solution of the time-dependent Schrödinger equation. Molecular Physics,
8(1):39–44, 1964.
[140] K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii. Quantum circuit learning. Physical Review A, 98(3),
Sep 2018.
[141] Edward Farhi, Jeffrey Goldstone, Sam Gutmann, and Michael Sipser. Quantum computation by adiabatic
evolution. arXiv preprint quant-ph/0001106, 2000.
[142] Peter JM Van Laarhoven and Emile HL Aarts. Simulated annealing. In Simulated annealing: Theory and
applications, pages 7–15. Springer, 1987.
[143] Kostyantyn Kechedzhi and Vadim N. Smelyanskiy. Open-system quantum annealing in mean-field models
with exponential degeneracy. Physical Review X, 6(2), May 2016.
[144] Vasil S. Denchev, Sergio Boixo, Sergei V. Isakov, Nan Ding, Ryan Babbush, Vadim Smelyanskiy, John
Martinis, and Hartmut Neven. What is the computational value of finite-range tunneling? Physical Review
X, 6(3), Aug 2016.
[145] Edward Farhi, Jeffrey Goldstone, and Sam Gutmann. Quantum adiabatic evolution algorithms versus
simulated annealing. arXiv preprint quant-ph/0201031, 2002.
[146] Akshay Ajagekar, Travis Humble, and Fengqi You. Quantum computing based hybrid solution strategies
for large-scale discrete-continuous optimization problems. Computers & Chemical Engineering, 132:106630,
Jan 2020.
[147] Radford M Neal. Probabilistic inference using Markov chain Monte Carlo methods. Department of Com-
puter Science, University of Toronto, Toronto, ON, Canada, 1993.
[148] Christian P Robert and George Casella. Monte Carlo statistical methods, volume 2. Springer, 2004.
[149] Tito Homem-de-Mello and Güzin Bayraksan. Monte Carlo sampling-based methods for stochastic opti-
mization. Surveys in Operations Research and Management Science, 19(1):56–85, 2014.
[150] Gilles Brassard, Frederic Dupuis, Sebastien Gambs, and Alain Tapp. An optimal quantum algorithm to
approximate the mean and its application for approximating the median of a set of points over an arbitrary
distance. arXiv preprint arXiv:1106.4267, 2011.
[151] S. Heinrich. Quantum summation with an application to integration. Journal of Complexity, 18(1):1–50,
2002.
[152] Ashley Montanaro. Quantum speedup of Monte Carlo methods. Proceedings of the Royal Society A:
Mathematical, Physical and Engineering Sciences, 471(2181):20150301, Sep 2015.
[153] D. Ceperley and B. Alder. Quantum Monte Carlo. Science, 231(4738):555–560, 1986.
[154] Shouvanik Chakrabarti, Rajiv Krishnakumar, Guglielmo Mazzola, Nikitas Stamatopoulos, Stefan Woerner,
and William J. Zeng. A threshold for quantum advantage in derivative pricing. Quantum, 5:463, June 2021.
[155] Thomas Häner, Martin Roetteler, and Krysta M Svore. Optimizing quantum circuits for arithmetic. arXiv
preprint arXiv:1805.12445, 2018.
[156] Arjan Cornelissen and Sofiene Jerbi. Quantum algorithms for multivariate Monte Carlo estimation. 2021.
[157] Lov Grover and Terry Rudolph. Creating superpositions that correspond to efficiently integrable probability
distributions. arXiv preprint quant-ph/0208112, 2002.
[158] Steven Herbert. No quantum speedup with Grover–Rudolph state preparation for quantum Monte Carlo
integration. Physical Review E, 103(6), Jun 2021.
[159] Christa Zoufal, Aurélien Lucchi, and Stefan Woerner. Quantum generative adversarial networks for learning
and loading random distributions. npj Quantum Information, 5(1), 2019.
[160] Bruno Dupire. Pricing with a smile. Risk Magazine, pages 18–20, 1994.
[161] Gisiro Maruyama. On the transition probability functions of the Markov process. Nat. Sci. Rep. Ochano-
mizu Univ, 5:10–20, 1954.
[162] Kazuya Kaneko, Koichi Miyamoto, Naoyuki Takeda, and Kazuyoshi Yoshino. Quantum pricing with a
smile: Implementation of local volatility model on quantum computer. 2020.
[163] Koichi Miyamoto and Kenji Shiohara. Reduction of qubits in a quantum algorithm for Monte Carlo
simulation by a pseudo-random-number generator. Physical Review A, 102(2), Aug 2020.
[164] Michael B Giles. Multilevel Monte Carlo path simulation. Operations research, 56(3):607–617, 2008.
[165] Dong An, Noah Linden, Jin-Peng Liu, Ashley Montanaro, Changpeng Shao, and Jiasu Wang. Quantum-
accelerated multilevel Monte Carlo methods for stochastic differential equations in mathematical finance.
Quantum, 5:481, June 2021.
[166] Steven Herbert. Quantum Monte-Carlo integration: The full advantage in minimal circuit depth. arXiv
preprint arXiv:2105.09100, 2021.
[167] Ryan Babbush, Jarrod R. McClean, Michael Newman, Craig Gidney, Sergio Boixo, and Hartmut Neven.
Focus beyond quadratic speedups for error-corrected quantum advantage. PRX Quantum, 2(1), March 2021.
[168] Niladri Gomes, Anirban Mukherjee, Feng Zhang, Thomas Iadecola, Cai-Zhuang Wang, Kai-Ming Ho,
Peter P Orth, and Yong-Xin Yao. Adaptive variational quantum imaginary time evolution approach for
ground state preparation. Advanced Quantum Technologies, page 2100114, 2021.
[169] Arthur G. Rattew and Bálint Koczor. Preparing arbitrary continuous functions in quantum registers with
logarithmic complexity. 2022.
[170] Mark Kac. On distributions of certain Wiener functionals. Transactions of the American Mathematical
Society, 65(1):1–13, 1949.
[171] Richard P Feynman. The principle of least action in quantum mechanics. In Feynman’s Thesis—A New
Approach to Quantum Theory, pages 1–69. World Scientific, 2005.
[172] Christian Grossmann, Hans-Görg Roos, and Martin Stynes. Numerical treatment of partial differential
equations, volume 154. Springer, 2007.
[173] Jie Shen, Tao Tang, and Li-Lian Wang. Spectral methods: algorithms, analysis and applications, volume 41.
Springer Science & Business Media, 2011.
[174] Yudong Cao, Anargyros Papageorgiou, Iasonas Petras, Joseph Traub, and Sabre Kais. Quantum algorithm
and circuit design solving the Poisson equation. New Journal of Physics, 15(1):013021, 2013.
[175] Dominic W Berry. High-order quantum algorithm for solving linear differential equations. Journal of
Physics A: Mathematical and Theoretical, 47(10):105301, 2014.
[176] Dominic W Berry, Andrew M Childs, Aaron Ostrander, and Guoming Wang. Quantum algorithm for linear
differential equations with exponentially improved dependence on precision. Communications in Mathematical
Physics, 356(3):1057–1081, 2017.
[177] Andrew M Childs, Jin-Peng Liu, and Aaron Ostrander. High-precision quantum algorithms for partial
differential equations. Quantum, 5:574, 2021.
[178] Ashley Montanaro and Sam Pallister. Quantum algorithms and the finite element method. Physical Review
A, 93(3):032324, 2016.
[179] Pedro CS Costa, Stephen Jordan, and Aaron Ostrander. Quantum algorithm for simulating the wave
equation. Physical Review A, 99(1):012323, 2019.
[180] François Fillion-Gourdeau and Emmanuel Lorin. Simple digital quantum algorithm for symmetric first-
order linear hyperbolic systems. Numerical Algorithms, 82(3):1009–1045, 2019.
[181] Torsten Carleman. Application de la théorie des équations intégrales linéaires aux systèmes d’équations
différentielles non linéaires. Acta Mathematica, 59:63–87, 1932.
[182] Krzysztof Kowalski and W-H Steeb. Nonlinear dynamical systems and Carleman linearization. World
Scientific, 1991.
[183] Jin-Peng Liu, Herman Øie Kolden, Hari K Krovi, Nuno F Loureiro, Konstantina Trivisa, and Andrew M
Childs. Efficient quantum algorithm for dissipative nonlinear differential equations. Proceedings of the Na-
tional Academy of Sciences, 118(35), 2021.
[184] Sarah K Leyton and Tobias J Osborne. A quantum algorithm to solve nonlinear differential equations.
arXiv preprint arXiv:0812.4423, 2008.
[185] Seth Lloyd, Giacomo De Palma, Can Gokler, Bobak Kiani, Zi-Wen Liu, Milad Marvian, Felix Tennie, and
Tim Palmer. Quantum algorithm for nonlinear differential equations. arXiv preprint arXiv:2011.06571, 2020.
[186] Ilon Joseph. Koopman–von Neumann approach to quantum simulation of nonlinear classical dynamics.
Physical Review Research, 2(4):043102, 2020.
[187] Shi Jin and Nana Liu. Quantum algorithms for computing observables of nonlinear partial differential
equations. arXiv preprint arXiv:2202.07834, 2022.
[188] Shi Jin and Xiantao Li. Multi-phase computations of the semiclassical limit of the Schrödinger equation
and related problems: Whitham vs Wigner. Physica D: Nonlinear Phenomena, 182(1-2):46–85, 2003.
[189] Filipe Fontanela, Antoine Jacquier, and Mugad Oumgari. A quantum algorithm for linear PDEs arising in
finance. SIAM Journal on Financial Mathematics, 12(4):SC98–SC114, 2021.
[190] Hedayat Alghassi, Amol Deshmukh, Noelle Ibrahim, Nicolas Robles, Stefan Woerner, and Christa Zoufal.
A variational quantum algorithm for the Feynman–Kac formula. arXiv preprint arXiv:2108.10846, 2021.
[191] Michael Lubasch, Jaewoo Joo, Pierre Moinier, Martin Kiffner, and Dieter Jaksch. Variational quantum
algorithms for nonlinear problems. Physical Review A, 101(1):010301, 2020.
[192] Oleksandr Kyriienko, Annie E Paine, and Vincent E Elfving. Solving nonlinear differential equations with
differentiable quantum circuits. Physical Review A, 103(5):052416, 2021.
[193] Oleksandr Kyriienko, Annie E Paine, and Vincent E Elfving. Protocols for trainable and differentiable
quantum generative modelling. arXiv preprint arXiv:2202.08253, 2022.
[194] Annie E. Paine, Vincent E. Elfving, and Oleksandr Kyriienko. Quantum quantile mechanics: Solving
stochastic differential equations for generating time-series. 2021.
[195] Gyorgy Steinbrecher and William Shaw. Quantile mechanics. European Journal of Applied Mathematics,
19, Aug 2007.
[196] James E Gentle. Random number generation and Monte Carlo methods, volume 381. Springer, 2003.
[197] Kenji Kubo, Yuya O. Nakagawa, Suguru Endo, and Shota Nagayama. Variational quantum simulations of
stochastic differential equations. Physical Review A, 103(5), May 2021.
[198] Phelim P. Boyle. Option valuation using a three jump process. 1986.
[199] Hans Föllmer and Alexander Schied. Stochastic finance. In Stochastic Finance. de Gruyter, 2016.
[200] Stefan Weinzierl. Introduction to Monte Carlo methods. arXiv preprint hep-ph/0006269, 2000.
[201] Phelim P Boyle. Options: A Monte Carlo approach. Journal of Financial Economics, 4(3):323–338, 1977.
[202] Nikitas Stamatopoulos, Daniel J. Egger, Yue Sun, Christa Zoufal, Raban Iten, Ning Shen, and Stefan
Woerner. Option pricing using quantum computers. Quantum, 4:291, Jul 2020.
[203] Patrick Rebentrost, Brajesh Gupt, and Thomas R. Bromley. Quantum computational finance: Monte
Carlo pricing of financial derivatives. Physical Review A, 98(2), Aug 2018.
[204] Koichi Miyamoto and Kenji Kubo. Pricing multi-asset derivatives by finite difference method on a quantum
computer, 2021.
[205] Noah Linden, Ashley Montanaro, and Changpeng Shao. Quantum vs. classical algorithms for solving the
heat equation. 2020.
[206] Javier Gonzalez-Conde, Ángel Rodríguez-Rozas, Enrique Solano, and Mikel Sanz. Simulating option price
dynamics with exponential quantum speedup. 2021.
[207] Albert N Shiryaev. Optimal stopping rules, volume 8. Springer Science & Business Media, 2007.
[208] Francis A Longstaff and Eduardo S Schwartz. Valuing American options by simulation: a simple least-
squares approach. The Review of Financial Studies, 14(1):113–147, 2001.
[209] João F. Doriguello, Alessandro Luongo, Jinge Bao, Patrick Rebentrost, and Miklos Santha. Quantum
algorithm for stochastic optimal stopping problems with applications in finance, 2021.
[210] Koichi Miyamoto. Bermudan option pricing by quantum amplitude estimation and Chebyshev interpolation,
2021.
[211] Hao Tang, Anurag Pal, Tian-Yu Wang, Lu-Feng Qiao, Jun Gao, and Xian-Min Jin. Quantum computation
for pricing the collateralized debt obligations. Quantum Engineering, 3(4):e84, 2021.
[212] David X. Li. On default correlation: A copula function approach. The Journal of Fixed Income, 9(4):43–54,
2000.
[213] Ole E. Barndorff-Nielsen. Normal inverse Gaussian distributions and stochastic volatility modelling. Scan-
dinavian Journal of Statistics, 24(1):1–13, 1997.
[214] L. Jeff Hong, Zhaolin Hu, and Guangwu Liu. Monte Carlo methods for value-at-risk and conditional
value-at-risk: A review. ACM Trans. Model. Comput. Simul., 24(4), Nov 2014.
[215] Stefan Woerner and Daniel J. Egger. Quantum risk analysis. npj Quantum Information, 5(1), Feb 2019.
[216] D. J. Egger, R. Garcia Gutierrez, J. Mestre, and S. Woerner. Credit risk analysis using quantum computers.
IEEE Transactions on Computers, 70(12):2136–2145, Dec 2021.
[217] Nikitas Stamatopoulos, Guglielmo Mazzola, Stefan Woerner, and William J. Zeng. Towards quantum
advantage in financial market risk using quantum gradient algorithms. 2021.
[218] Stephen P. Jordan. Fast quantum algorithm for numerical gradient estimation. Phys. Rev. Lett., 95:050501,
July 2005.
[219] András Gilyén, Srinivasan Arunachalam, and Nathan Wiebe. Optimizing quantum optimization algorithms
via faster quantum gradient computation. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on
Discrete Algorithms, SODA ’19, pages 1425–1444, USA, 2019. Society for Industrial and Applied Mathematics.
[220] Guang Hao Low and Isaac L Chuang. Hamiltonian simulation by qubitization. Quantum, 3:163, 2019.
[221] Jianping Li. General explicit difference formulas for numerical differentiation. Journal of Computational
and Applied Mathematics, 183(1):29–52, 2005.
[222] Arjan Cornelissen. Quantum gradient estimation of gevrey functions. arXiv preprint arXiv:1909.13528,
2019.
[223] Andrew Green. XVA: credit, funding and capital valuation adjustments. John Wiley & Sons, 2015.
[224] Jeong Yu Han and Patrick Rebentrost. Quantum advantage for multi-option portfolio pricing and valuation
adjustments. 2022.
[225] Javier Alcazar, Andrea Cadarso, Amara Katabarwa, Marta Mauri, Borja Peropadre, Guoming Wang, and
Yudong Cao. Quantum algorithm for credit valuation adjustments. New Journal of Physics, 24(2):023036,
Feb 2022.
[226] Guoming Wang, Dax Enshan Koh, Peter D. Johnson, and Yudong Cao. Minimizing estimation runtime on
noisy quantum computers. PRX Quantum, 2(1), March 2021.
[227] Dax Enshan Koh, Guoming Wang, Peter D. Johnson, and Yudong Cao. Foundations for Bayesian inference
with engineered likelihood functions for robust amplitude estimation. Journal of Mathematical Physics,
63(5):052202, May 2022.
[228] J. Hromkovič. Algorithmics for hard problems: introduction to combinatorial optimization, randomization,
approximation, and heuristics. Springer Science & Business Media, 2013.
[229] Yurii Nesterov and Arkadii Nemirovskii. Interior-point polynomial algorithms in convex programming.
SIAM, 1994.
[230] Lee Braine, Daniel J. Egger, Jennifer Glick, and Stefan Woerner. Quantum algorithms for mixed binary
optimization applied to transaction settlement. IEEE Transactions on Quantum Engineering, 2:1–8, 2021.
[231] Gary Kochenberger, Jin-Kao Hao, Fred Glover, Mark Lewis, Zhipeng Lü, Haibo Wang, and Yang Wang.
The unconstrained binary quadratic programming problem: a survey. Journal of combinatorial optimization,
28(1):58–81, 2014.
[232] S. Okada, M. Ohzeki, and S. Taguchi. Efficient partition of integer optimization problems with one-hot
encoding. Scientific Reports, 2019.
[233] Andrew Lucas. Ising formulations of many NP problems. Frontiers in Physics, 2:5, 2014.
[234] Tadashi Kadowaki and Hidetoshi Nishimori. Quantum annealing in the transverse Ising model. Physical
Review E, 58(5):5355––5363, Nov 1998.
[235] Nicholas Chancellor, Stefan Zohren, and Paul A Warburton. Circuit design for multi-body interactions
in superconducting quantum annealing systems with applications to a scalable architecture. npj Quantum
Information, 3(1):1–7, 2017.
[236] Krzysztof Domino, Akash Kundu, Özlem Salehi, and Krzysztof Krawiec. Quadratic and higher-order
unconstrained binary optimization of railway dispatching problem for quantum computing. arXiv preprint
arXiv:2107.03234, 2021.
[237] James King, Masoud Mohseni, William Bernoudy, Alexandre Fréchette, Hossein Sadeghi, Sergei V
Isakov, Hartmut Neven, and Mohammad H Amin. Quantum-assisted genetic algorithm. arXiv preprint
arXiv:1907.00707, 2019.
[238] Masayuki Ohzeki. Breaking limitation of quantum annealer in solving optimization problems under con-
straints. Scientific Reports, 10, Feb 2020.
[239] Elisabeth Lobe and Annette Lutz. Minor embedding in broken chimera and pegasus graphs is NP-complete.
arXiv preprint arXiv:2110.08325, 2021.
[240] Jun Cai, William G Macready, and Aidan Roy. A practical heuristic for finding graph minors. arXiv
preprint arXiv:1406.2741, 2014.
[241] Timothy Goodrich, Blair D. Sullivan, and T. Humble. Optimizing adiabatic quantum program compilation
using a graph-theoretic framework. Quantum Information Processing, 17:1–26, 2018.
[242] Davide Venturelli, Salvatore Mandrà, Sergey Knysh, Bryan O’Gorman, Rupak Biswas, and Vadim Smelyan-
skiy. Quantum optimization of fully connected spin glasses. Phys. Rev. X, 5:031040, Sep 2015.
[243] Wolfgang Lechner, Philipp Hauke, and Peter Zoller. A quantum annealing architecture with all-to-all
connectivity from local interactions. Science Advances, 1(9):e1500838, 2015.
[244] Kilian Ender, Roeland ter Hoeven, Benjamin E Niehoff, Maike Drieb-Schön, and Wolfgang Lechner. Parity
quantum optimization: Compiler. arXiv preprint arXiv:2105.06233, 2021.
[245] Maike Drieb-Schön, Younes Javanmard, Kilian Ender, and Wolfgang Lechner. Parity quantum optimiza-
tion: Encoding constraints. arXiv preprint arXiv:2105.06235, 2021.
[246] Michael Fellner, Kilian Ender, Roeland ter Hoeven, and Wolfgang Lechner. Parity quantum optimization:
Benchmarks. arXiv preprint arXiv:2105.06240, 2021.
[247] Henrique Silvério, Sebastián Grijalva, Constantin Dalyac, Lucas Leclerc, Peter J Karalekas, Nathan
Shammah, Mourad Beji, Louis-Paul Henry, and Loïc Henriet. Pulser: An open-source package for the design
of pulse sequences in programmable neutral-atom arrays. arXiv preprint arXiv:2104.15044, 2021.
[248] Loïc Henriet, Lucas Beguin, Adrien Signoles, Thierry Lahaye, Antoine Browaeys, Georges-Olivier Reymond,
and Christophe Jurczak. Quantum computing with neutral atoms. Quantum, 4:327, September 2020.
[249] S. Ebadi, A. Keesling, M. Cain, T. T. Wang, H. Levine, D. Bluvstein, G. Semeghini, A. Omran, J.-G. Liu,
R. Samajdar, X.-Z. Luo, B. Nash, X. Gao, B. Barak, E. Farhi, S. Sachdev, N. Gemelke, L. Zhou, S. Choi,
H. Pichler, S.-T. Wang, M. Greiner, V. Vuletić, and M. D. Lukin. Quantum optimization of maximum
independent set using Rydberg atom arrays. Science, 376(6598):1209–1215, 2022.
[250] Ruslan Shaydulin, Stuart Hadfield, Tad Hogg, and Ilya Safro. Classical symmetries and the quantum
approximate optimization algorithm. Quantum Information Processing, 20(11):359, Oct 2021.
[251] Ruslan Shaydulin, Ilya Safro, and Jeffrey Larson. Multistart methods for quantum approximate optimiza-
tion. In 2019 IEEE High Performance Extreme Computing Conference (HPEC), pages 1–8, 2019.
[252] Alexey Galda, Xiaoyuan Liu, Danylo Lykov, Yuri Alexeev, and Ilya Safro. Transferability of optimal QAOA
parameters between random graphs. In 2021 IEEE International Conference on Quantum Computing and
Engineering (QCE), pages 171–180. IEEE, 2021.
[253] Michael Streif and Martin Leib. Training the quantum approximate optimization algorithm without access
to a quantum processing unit. Quantum Science and Technology, 5(3):034008, 2020.
[254] Sami Khairy, Ruslan Shaydulin, Lukasz Cincio, Yuri Alexeev, and Prasanna Balaprakash. Learning to
optimize variational quantum circuits to solve combinatorial problems. In Proceedings of the Thirty-Fourth
AAAI Conference on Artificial Intelligence (AAAI), 2020.
[255] Ruslan Shaydulin and Yuri Alexeev. Evaluating quantum approximate optimization algorithm: A case
study. In Proceedings of the 2nd International Workshop on Quantum Computing for Sustainable Computing,
2019.
[256] Zhen-Duo Wang, Pei-Lin Zheng, Biao Wu, and Yi Zhang. Quantum dropout for efficient quantum approx-
imate optimization algorithm on combinatorial optimization problems. arXiv preprint arXiv:2203.10101,
2022.
[257] Xiaoyuan Liu, Ruslan Shaydulin, and Ilya Safro. Quantum approximate optimization algorithm with
sparsified phase operator, 2022.
[258] Sahil Gulania, Bo Peng, Yuri Alexeev, and Niranjan Govind. Quantum time dynamics of 1D-Heisenberg
models employing the Yang–Baxter equation for circuit compression. 2021.
[259] Matthew P Harrigan, Kevin J Sung, Matthew Neeley, Kevin J Satzinger, Frank Arute, Kunal Arya, Juan
Atalaya, Joseph C Bardin, Rami Barends, Sergio Boixo, et al. Quantum approximate optimization of non-
planar graph problems on a planar superconducting processor. Nature Physics, 17(3):332–336, 2021.
[260] Johannes S Otterbach, Riccardo Manenti, Nasser Alidoust, A Bestwick, M Block, B Bloom, S Caldwell,
N Didier, E Schuyler Fried, S Hong, et al. Unsupervised machine learning on a hybrid quantum computer.
arXiv preprint arXiv:1712.05771, 2017.
[261] Pradeep Niroula, Ruslan Shaydulin, Romina Yalovetzky, Pierre Minssen, Dylan Herman, Shaohan Hu, and
Marco Pistoia. Constrained quantum optimization for extractive summarization on a trapped-ion quantum
computer. 2022.
[262] Dmitry A Fedorov, Bo Peng, Niranjan Govind, and Yuri Alexeev. VQE method: A short survey and recent
developments. Materials Theory, 6(1):1–21, 2022.
[263] Arthur G. Rattew, Shaohan Hu, Marco Pistoia, Richard Chen, and Steve Wood. A Domain-
agnostic, Noise-resistant, Hardware-efficient Evolutionary Variational Quantum Eigensolver. arXiv preprint
arXiv:1910.09694, 2020.
[264] David Amaro, Carlo Modica, Matthias Rosenkranz, Mattia Fiorentini, Marcello Benedetti, and Michael
Lubasch. Filtering variational quantum algorithms for combinatorial optimization. arXiv preprint
arXiv:2106.10055, 2021.
[265] Xuchen You, Shouvanik Chakrabarti, and Xiaodi Wu. A convergence theory for over-parameterized varia-
tional quantum eigensolvers. 2022.
[266] Gregory L Naber. The Geometry of Minkowski Spacetime: An Introduction to the Mathematics of the
Special Theory of Relativity, volume 92. Springer Science & Business Media, 01 2012.
[267] Konrad Osterwalder and Robert Schrader. Axioms for Euclidean Green’s functions. Communications in
mathematical physics, 31(2):83–112, 1973.
[268] G. C. Wick. Properties of Bethe-Salpeter wave functions. Phys. Rev., 96:1124–1134, Nov 1954.
[269] Sam McArdle, Tyson Jones, Suguru Endo, Ying Li, Simon C. Benjamin, and Xiao Yuan. Variational
ansatz-based quantum simulation of imaginary time evolution. npj Quantum Information, 5(1), Sep 2019.
[270] Mario Motta, Chong Sun, Adrian T. K. Tan, Matthew J. O’Rourke, Erika Ye, Austin J. Minnich, Fernando
G. S. L. Brandão, and Garnet Kin-Lic Chan. Determining eigenstates and thermal states on a quantum
computer using quantum imaginary time evolution. Nature Physics, 16(2):205–210, Nov 2019.
[271] Kosuke Mitarai and Keisuke Fujii. Methodology for replacing indirect measurements with direct measure-
ments. Physical Review Research, 1(1), Aug 2019.
[272] Marcello Benedetti, Mattia Fiorentini, and Michael Lubasch. Hardware-efficient variational quantum algo-
rithms for time evolution. Phys. Rev. Research, 3:033083, Jul 2021.
[273] Christa Zoufal, Aurélien Lucchi, and Stefan Woerner. Variational quantum Boltzmann machines. Quantum
Machine Intelligence, 3(1), Feb 2021.
[274] Ming-Cheng Chen, Ming Gong, Xiaosi Xu, Xiao Yuan, Jian-Wen Wang, Can Wang, Chong Ying, Jin Lin,
Yu Xu, Yulin Wu, and et al. Demonstration of adiabatic variational quantum computing with a supercon-
ducting quantum coprocessor. Physical Review Letters, 125(18), Oct 2020.
[275] Christoph Dürr and Peter Høyer. A quantum algorithm for finding the minimum. arXiv preprint quant-ph/9607014, 1996.
[276] William P Baritompa, David W Bulger, and Graham R Wood. Grover’s quantum algorithm applied to
global optimization. SIAM Journal on Optimization, 15(4):1170–1184, 2005.
[277] David Bulger, William P. Baritompa, and Graham R. Wood. Implementing pure adaptive search with
Grover’s quantum algorithm. Journal of optimization theory and applications, 116(3):517–529, 2003.
[278] Austin Gilliam, Stefan Woerner, and Constantin Gonciulea. Grover adaptive search for constrained poly-
nomial binary optimization. Quantum, 5:428, 2021.
[279] Dimitris Bertsimas and John N Tsitsiklis. Introduction to linear optimization, volume 6. Athena Scientific
Belmont, MA, 1997.
[280] Giacomo Nannicini. Fast quantum subroutines for the simplex method. In Integer Programming and
Combinatorial Optimization - 22nd International Conference, IPCO 2021, Atlanta, GA, USA, May 19-21,
2021, Proceedings, volume 12707 of Lecture Notes in Computer Science, pages 311–325. Springer, 2021.
[281] Iordanis Kerenidis, Anupam Prakash, and Dániel Szilágyi. Quantum algorithms for second-order cone
programming and support vector machines. Quantum, 5:427, April 2021.
[282] Renato Monteiro and Takashi Tsuchiya. Polynomial convergence of primal-dual algorithms for the second-
order cone program based on the MZ-family of directions. Mathematical programming, 88(1):61–83, 2000.
[283] Fernando G. S. L. Brandão and Krysta M. Svore. Quantum speed-ups for solving semidefinite programs. In
2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 415–426, 2017.
[284] Fernando G. S. L. Brandão, Amir Kalev, Tongyang Li, Cedric Yen-Yu Lin, Krysta M. Svore, and Xiaodi
Wu. Quantum SDP solvers: Large speed-ups, optimality, and applications to quantum learning. In 46th
International Colloquium on Automata, Languages, and Programming (ICALP 2019), volume 132 of Leibniz
International Proceedings in Informatics (LIPIcs), pages 27:1–27:14. Schloss Dagstuhl–Leibniz-Zentrum fuer
Informatik, 2019.
[285] Joran van Apeldoorn, András Gilyén, Sander Gribling, and Ronald de Wolf. Quantum SDP-solvers: Better
upper and lower bounds. Quantum, 4:230, Feb 2020.
[286] Sanjeev Arora and Satyen Kale. A combinatorial, primal-dual approach to semidefinite programs. J. ACM, 63(2), May 2016.
[287] Ruslan Shaydulin, Hayato Ushijima-Mwesigwa, Christian FA Negre, Ilya Safro, Susan M Mniszewski, and
Yuri Alexeev. A hybrid approach for solving optimization problems on small quantum computers. Computer,
52(6):18–26, 2019.
[288] Ruslan Shaydulin, Hayato Ushijima-Mwesigwa, Ilya Safro, Susan Mniszewski, and Yuri Alexeev. Network
community detection on small quantum computers. Advanced Quantum Technologies, 2(9):1900029, 2019.
[289] Hayato Ushijima-Mwesigwa, Ruslan Shaydulin, Christian F. A. Negre, Susan M. Mniszewski, Yuri Alexeev,
and Ilya Safro. Multilevel combinatorial optimization across quantum architectures. ACM Transactions on
Quantum Computing, 2(1), Feb 2021.
[290] Guillaume Chapuis, Hristo Djidjev, Georg Hahn, and Guillaume Rizk. Finding maximum cliques on the
D-Wave quantum annealer. Journal of Signal Processing Systems, 91(3):363–377, 2019.
[291] Harry Markowitz. Portfolio selection. The Journal of Finance, 7(1):77–91, 1952.
[292] S. Benati and R. Rizzi. A mixed integer linear programming formulation of the optimal mean/value-at-risk
portfolio problem. European Journal of Operational Research, 2007.
[293] Renata Mansini, Włodzimierz Ogryczak, and M. Grazia Speranza. Linear and mixed integer programming
for portfolio optimization. Springer, 2015.
[294] Gerard Cornuejols, Marshall L Fisher, and George L Nemhauser. Exceptional paper—location of bank
accounts to optimize float: An analytic study of exact and approximate algorithms. Management Science,
23(8):789–810, 1977.
[295] Angad Kalra, Faisal Qureshi, and Michael Tisi. Portfolio asset identification using graph algorithms on a
quantum annealer. Available at SSRN 3333537, 2018.
[296] Gili Rosenberg and Maxwell Rounds. Long-short minimum risk parity optimization using a quantum or
digital annealer. White Paper 1Qbit, 2018.
[297] Mark Hodson, Brendan Ruck, Hugh Ong, David Garvin, and Stefan Dulman. Portfolio rebalancing exper-
iments using the quantum alternating operator ansatz. arXiv preprint arXiv:1911.05296, 2019.
[298] N. Slate, E. Matwiejew, S. Marsh, and J. B. Wang. Quantum walk-based portfolio optimisation. Quantum,
5:513, July 2021.
[299] Gili Rosenberg, Poya Haghnegahdar, Phil Goddard, Peter Carr, Kesheng Wu, and Marcos Lopez de Prado.
Solving the optimal trading trajectory problem using a quantum annealer. IEEE Journal of Selected Topics
in Signal Processing, 10(6):1053–1060, Sep 2016.
[300] Samuel Mugel, Carlos Kuchkovsky, Escolástico Sánchez, Samuel Fernández-Lorenzo, Jorge Luis-Hita, En-
rique Lizaso, and Román Orús. Dynamic portfolio optimization with real datasets using quantum processors
and quantum-inspired tensor networks. Phys. Rev. Research, 4:013006, Jan 2022.
[301] Erica Grant, Travis S. Humble, and Benjamin Stump. Benchmarking quantum annealing controls with
portfolio optimization. Physical Review Applied, 15(1), Jan 2021.
[302] Iordanis Kerenidis, Anupam Prakash, and Dániel Szilágyi. Quantum algorithms for portfolio optimization.
In Proceedings of the 1st ACM Conference on Advances in Financial Technologies, AFT ’19, pages 147–155,
New York, NY, USA, 2019. Association for Computing Machinery.
[303] Patrick Rebentrost and Seth Lloyd. Quantum computational finance: quantum algorithm for portfolio
optimization. arXiv preprint arXiv:1811.03975, 2018.
[304] András Gilyén, Seth Lloyd, and Ewin Tang. Quantum-inspired low-rank stochastic regression with loga-
rithmic dependence on the dimension. 2018.
[305] Juan Miguel Arrazola, Alain Delgado, Bhaskar Roy Bardhan, and Seth Lloyd. Quantum-inspired algorithms
in practice. Quantum, 4:307, Aug 2020.
[306] G. Rosenberg, C. Adolphs, A. Milne, and A. Lee. Swap netting using a quantum annealer. White Paper
1Qbit, 2016.
[307] René M Stulz. Credit default swaps and the credit crisis. Journal of Economic Perspectives, 24(1):73–92,
2010.
[308] Henry TC Hu. Swaps, the modern process of financial innovation and the vulnerability of a regulatory
paradigm. U. Pa. L. Rev., 138:333, 1989.
[309] James Bicksler and Andrew H. Chen. An economic analysis of interest rate swaps. The Journal of Finance,
41(3):645–655, 1986.
[310] Darrell Duffie and Ming Huang. Swap rates and credit quality. The Journal of Finance, 51(3):921–949,
1996.
[311] Andrei Shleifer and Robert W. Vishny. The limits of arbitrage. The Journal of Finance, 52(1):35–55, 1997.
[312] Freddy Delbaen and Walter Schachermayer. The mathematics of arbitrage. Springer Science & Business
Media, 2006.
[313] Gili Rosenberg. Finding optimal arbitrage opportunities using a quantum annealer. White Paper 1Qbit,
2016.
[314] Boris V Cherkassky and Andrew V Goldberg. Negative-cycle detection algorithms. Mathematical Program-
ming, 85(2), 1999.
[315] Wanmei Soon and Heng-Qing Ye. Currency arbitrage detection using a binary integer programming model.
International Journal of Mathematical Education in Science and Technology, 42(3):369–376, 2011.
[316] Andrew Milne, Maxwell Rounds, and Phil Goddard. Optimal feature selection in credit scoring and
classification using a quantum annealer. White Paper 1Qbit, 2017.
[317] Matthew Elliott, Benjamin Golub, and Matthew O. Jackson. Financial networks and contagion. American
Economic Review, 104(10):3115–53, October 2014.
[318] Brett Hemenway and Sanjeev Khanna. Sensitivity and computational complexity in financial networks.
Algorithmic Finance, 5(3-4):95–110, 2016.
[319] Ilene Grabel. Predicting financial crisis in developing economies: astronomy or astrology? Eastern Eco-
nomic Journal, 29(2):243–258, 2003.
[320] Arturo Estrella and Frederic S. Mishkin. Predicting U.S. recessions: Financial variables as leading indica-
tors. The Review of Economics and Statistics, 1998.
[321] Román Orús, Samuel Mugel, and Enrique Lizaso. Forecasting financial crashes with quantum computing.
Phys. Rev. A, 99(6), Jun 2019.
[322] Martin Plesch and Časlav Brukner. Quantum-state preparation with universal gate decompositions. Phys.
Rev. A, 83:032302, Mar 2011.
[323] Jarrod R McClean, Jonathan Romero, Ryan Babbush, and Alán Aspuru-Guzik. The theory of variational
hybrid quantum-classical algorithms. New Journal of Physics, 18(2):023023, Feb 2016.
[324] Hsin-Yuan Huang, Michael Broughton, Masoud Mohseni, Ryan Babbush, Sergio Boixo, Hartmut Neven,
and Jarrod R McClean. Power of data in quantum machine learning. Nature Communications, 12(1):1–9,
2021.
[325] Nathan Wiebe, Daniel Braun, and Seth Lloyd. Quantum algorithm for data fitting. Phys. Rev. Lett.,
109:050505, Aug 2012.
[326] Guoming Wang. Quantum algorithm for linear regression. Phys. Rev. A, 96:012335, Jul 2017.
[327] Prasanna Date and Thomas Potok. Adiabatic quantum linear regression. Scientific Reports, 11(1):21905,
2021.
[328] Zhikuan Zhao, Jack K. Fitzsimons, and Joseph F. Fitzsimons. Quantum-assisted Gaussian process regres-
sion. Physical Review A, 99(5):052331, 2019.
[329] K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii. Quantum circuit learning. Phys. Rev. A, 98:032309,
Sep 2018.
[330] K.-R. Müller, S. Mika, G. Rätsch, K. Tsuda, and B. Schölkopf. An introduction to kernel-based learning algorithms. IEEE Transactions on Neural Networks, 12(2):181–201, 2001.
[331] Patrick Rebentrost, Masoud Mohseni, and Seth Lloyd. Quantum support vector machine for big data
classification. Phys. Rev. Lett., 113:130503, 2014.
[332] Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum algorithms for supervised and unsuper-
vised machine learning. arXiv preprint arXiv:1307.0411, 2013.
[333] Vojtěch Havlíček, Antonio D. Córcoles, Kristan Temme, Aram W. Harrow, Abhinav Kandala, Jerry M.
Chow, and Jay M. Gambetta. Supervised learning with quantum-enhanced feature spaces. Nature,
567(7747):209–212, March 2019.
[334] Ruslan Shaydulin and Stefan M. Wild. Importance of kernel bandwidth in quantum machine learning.
2021.
[335] Abdulkadir Canatar, Evan Peters, Cengiz Pehlevan, Stefan M. Wild, and Ruslan Shaydulin. Bandwidth
enables generalization in quantum kernel models. 2022.
[336] Jonas M. Kübler, Simon Buchholz, and Bernhard Schölkopf. The inductive bias of quantum kernels. 2021.
[337] N. Wiebe, A. Kapoor, and K. Svore. Quantum algorithms for nearest-neighbor methods for supervised and
unsupervised learning. Quantum Information & Computation, 15, 2015.
[338] Yue Ruan, Xiling Xue, Heng Liu, Jianing Tan, and Xi Li. Quantum algorithm for k-nearest neighbors classi-
fication based on the metric of Hamming distance. International Journal of Theoretical Physics, 56(11):3496–
3507, 2017.
[339] Afrad Basheer, A Afham, and Sandeep K Goyal. Quantum k-nearest neighbors algorithm. arXiv preprint
arXiv:2003.09187, 2020.
[340] James MacQueen. Some methods for classification and analysis of multivariate observations. In Proceedings
of the fifth Berkeley symposium on mathematical statistics and probability, volume 1, pages 281–297. Oakland,
CA, USA, 1967.
[341] Iordanis Kerenidis, Jonas Landman, Alessandro Luongo, and Anupam Prakash. q-means: A quantum
algorithm for unsupervised machine learning. arXiv preprint arXiv:1812.03584, 2018.
[342] Sumsam Ullah Khan, Ahsan Javed Awan, and Gemma Vall-Llosera. K-means clustering on noisy interme-
diate scale quantum computers. arXiv preprint arXiv:1909.12183, 2019.
[343] Hideyuki Miyahara, Kazuyuki Aihara, and Wolfgang Lechner. Quantum expectation-maximization algo-
rithm. Phys. Rev. A, 101:012326, Jan 2020.
[344] Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm.
In Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and
Synthetic, pages 849–856, 2001.
[345] Ammar Daskin. Quantum spectral clustering through a biased phase estimation algorithm. TWMS Journal
of Applied and Engineering Mathematics, 10(1):24–33, 2017.
[346] Simon Apers and Ronald de Wolf. Quantum speedup for graph sparsification, cut approximation and
Laplacian solving. In 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS),
pages 637–648, 2020.
[347] I. Kerenidis and J. Landman. Quantum spectral clustering. Physical Review A, 103(4):042415, 2021.
[348] J. S. Otterbach, R. Manenti, N. Alidoust, A. Bestwick, M. Block, B. Bloom, S. Caldwell, N. Didier,
E. Schuyler Fried, S. Hong, P. Karalekas, C. B. Osborn, A. Papageorge, E. C. Peterson, G. Prawiroatmodjo,
N. Rubin, Colm A. Ryan, D. Scarabelli, M. Scheer, E. A. Sete, P. Sivarajah, Robert S. Smith, A. Staley,
N. Tezak, W. J. Zeng, A. Hudson, Blake R. Johnson, M. Reagor, M. P. da Silva, and C. Rigetti. Unsupervised
machine learning on a hybrid quantum computer, 2017.
[349] E. Aïmeur, G. Brassard, and S. Gambs. Quantum clustering algorithms. In Proceedings of the 24th International Conference on Machine Learning (ICML), pages 1–8, 2007.
[350] E. Aïmeur, G. Brassard, and S. Gambs. Quantum speed-up for unsupervised learning. Machine Learning, 90(2):261–287, 2013.
[351] Iordanis Kerenidis, Alessandro Luongo, and Anupam Prakash. Quantum expectation-maximization for
Gaussian mixture models. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International
Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 5187–5197.
PMLR, 13–18 Jul 2020.
[352] Vaibhaw Kumar, Gideon Bass, Casey Tomlin, and Joseph Dulny. Quantum annealing for combinatorial
clustering. Quantum Information Processing, 17(2), Jan 2018.
[353] Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum principal component analysis. Nature
Physics, 10(9):631–633, July 2014.
[354] Chen He, Jiazhen Li, Weiqi Liu, and Z. Jane Wang. A low complexity quantum principal component
analysis algorithm, 2020.
[355] C.-H. Yu, F. Gao, S. Lin, et al. Quantum data compression by principal component analysis. Quantum Information Processing, 18:249, 2019.
[356] Jie Lin, Wan-Su Bao, Shuo Zhang, Tan Li, and Xiang Wang. An improved quantum principal component
analysis algorithm based on the quantum singular threshold method. Physics Letters A, 383(24):2862–2868,
2019.
[357] Armando Bellante, Alessandro Luongo, and Stefano Zanero. Quantum algorithms for data representation
and analysis, 2021.
[358] YaoChong Li, Ri-Gui Zhou, RuiQing Xu, WenWen Hu, and Ping Fan. Quantum algorithm for the nonlinear
dimensionality reduction with arbitrary kernel. Quantum Science and Technology, 6(1):014001, Nov 2020.
[359] Iris Cong and Luming Duan. Quantum discriminant analysis for dimensionality reduction and classification.
New Journal of Physics, 18(7):073011, July 2016.
[360] Iordanis Kerenidis and Alessandro Luongo. Classification of the MNIST data set with quantum slow feature
analysis. Phys. Rev. A, 101:062327, Jun 2020.
[361] Seth Lloyd, Silvano Garnerone, and Paolo Zanardi. Quantum algorithms for topological and geometric analysis of data. Nature Communications, 7(1):1–7, 2016.
[362] Casper Gyurik, Chris Cade, and Vedran Dunjko. Towards quantum advantage via topological data analysis.
arXiv preprint arXiv:2005.02607, 2020.
[363] Shashanka Ubaru, Ismail Yunus Akhalwaya, Mark S Squillante, Kenneth L Clarkson, and Lior
Horesh. Quantum topological data analysis with linear depth and exponential speedup. arXiv preprint
arXiv:2108.02811, 2021.
[364] Iordanis Kerenidis and Anupam Prakash. Quantum machine learning with subspace states. 2022.
[365] Marcello Benedetti, Delfina Garcia-Pintos, Oscar Perdomo, Vicente Leyton-Ortega, Yunseong Nam, and
Alejandro Perdomo-Ortiz. A generative modeling approach for benchmarking and training shallow quantum
circuits. npj Quantum Information, 5(1), May 2019.
[366] Jin-Guo Liu and Lei Wang. Differentiable learning of quantum circuit Born machines. Physical Review A,
98(6), Dec 2018.
[367] Brian Coyle, Maxwell Henderson, Justin Chan Jin Le, Niraj Kumar, Marco Paini, and Elham Kashefi.
Quantum versus classical generative modelling in finance. Quantum Science and Technology, 6(2):024013,
2021.
[368] Elton Yechao Zhu, Sonika Johri, Dave Bacon, Mert Esencan, Jungsang Kim, Mark Muir, Nikhil Mur-
gai, Jason Nguyen, Neal Pisenti, Adam Schouela, et al. Generative quantum learning of joint probability
distribution functions. arXiv preprint arXiv:2109.06315, 2021.
[369] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. The MIT
Press, 2009.
[370] Guang Hao Low, Theodore James Yoder, and Isaac L Chuang. Quantum inference on Bayesian networks.
Phys. Rev. A, 89, June 2014.
[371] Robert R Tucci. Quantum Bayesian nets. International Journal of Modern Physics B, 9(3):295–337,
1995.
[372] S. E. Borujeni, S. Nannapaneni, N. H. Nguyen, E. C. Behrman, and J. E. Steck. Quantum circuit repre-
sentation of Bayesian networks. Expert Systems with Applications, 2021.
[373] Sima E Borujeni, Nam H Nguyen, Saideep Nannapaneni, Elizabeth C Behrman, and James E Steck.
Experimental evaluation of quantum Bayesian networks on IBM QX hardware. In 2020 IEEE International
Conference on Quantum Computing and Engineering (QCE), pages 372–378. IEEE, 2020.
[374] G. Klepac. Chapter 12 – The Schrödinger equation as inspiration for a client portfolio simulation hybrid
system based on dynamic Bayesian networks and the REFII model. In Quantum Inspired Computational
Intelligence, pages 391–416. Morgan Kaufmann, Boston, 2017.
[375] Catarina Moreira and Andreas Wichert. Quantum-like Bayesian networks for modeling decision making.
Frontiers in Psychology, 7:11, 2016.
[376] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT Press, 2016.
[377] Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural computation,
14(8):1771–1800, 2002.
[378] Ruslan Salakhutdinov and Geoffrey Hinton. Deep Boltzmann machines. In Proceedings of the Twelth
International Conference on Artificial Intelligence and Statistics, volume 5, pages 448–455. PMLR, 2009.
[379] Marcello Benedetti, John Realpe-Gómez, Rupak Biswas, and Alejandro Perdomo-Ortiz. Estimation of
effective temperatures in quantum annealers for sampling applications: A case study with possible applications
in deep learning. Physical Review A, 94(2):022308, 2016.
[380] Vivek Dixit, Raja Selvarajan, Muhammad A Alam, Travis S Humble, and Sabre Kais. Training restricted
Boltzmann machines with a d-wave quantum annealer. Front. Phys., 9:589626, 2021.
[381] Mohammad H. Amin, Evgeny Andriyash, Jason Rolfe, Bohdan Kulchytskyy, and Roger Melko. Quantum
Boltzmann machine. Phys. Rev. X, 8:021050, May 2018.
[382] Seth Lloyd and Christian Weedbrook. Quantum generative adversarial learning. Phys. Rev. Lett.,
121:040502, July 2018.
[383] Kerstin Beer, Dmytro Bondarenko, Terry Farrelly, Tobias J. Osborne, Robert Salzmann, Daniel Scheier-
mann, and Ramona Wolf. Training deep quantum neural networks. Nature Communications, 11(1), Feb
2020.
[384] Yudong Cao, Gian Giacomo Guerreschi, and Alán Aspuru-Guzik. Quantum neuron: an elementary building
block for machine learning on quantum computers. arXiv preprint arXiv:1711.11240, 2017.
[385] Shuai Li, Kui Jia, Yuxin Wen, Tongliang Liu, and Dacheng Tao. Orthogonal deep neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(4):1352–1368, April 2021.
[386] Iordanis Kerenidis, Jonas Landman, and Natansh Mathur. Classical and quantum algorithms for orthogonal
neural networks. arXiv preprint arXiv:2106.07198, 2021.
[387] Jonathan Allcock, Chang-Yu Hsieh, Iordanis Kerenidis, and Shengyu Zhang. Quantum algorithms for
feedforward neural networks. ACM Transactions on Quantum Computing, 1(1):1–24, 2020.
[388] Edward Farhi and Hartmut Neven. Classification with quantum neural networks on near term processors.
arXiv preprint arXiv:1802.06002, 2018.
[389] Maxwell Henderson, Samriddhi Shakya, Shashindra Pradhan, and Tristan Cook. Quanvolutional neural
networks: powering image recognition with quantum circuits. Quantum Machine Intelligence, 2(1):1–9, 2020.
[390] Nathan Killoran, Thomas R. Bromley, Juan Miguel Arrazola, Maria Schuld, Nicolás Quesada, and Seth
Lloyd. Continuous-variable quantum neural networks. Phys. Rev. Research, 1:033063, Oct 2019.
[391] Henry Liu, Junyu Liu, Rui Liu, Henry Makhanov, Danylo Lykov, Anuj Apte, and Yuri Alexeev. Embedding
learning in hybrid quantum-classical neural networks, 2022.
[392] Maria Schuld, Ryan Sweke, and Johannes Jakob Meyer. Effect of data encoding on the expressive power
of variational quantum-machine-learning models. Physical Review A, 103(3):032430, 2021.
[393] A. Abbas, D. Sutter, C. Zoufal, A. Lucchi, A. Figalli, and S. Woerner. The Power of Quantum Neural
Networks. Nature Computational Science, 1(6):403–409, 2021.
[394] Maria Schuld. Supervised quantum machine learning models are kernel methods. arXiv preprint
arXiv:2101.11020, 2021.
[395] Dylan Herman, Rudy Raymond, Muyuan Li, Nicolas Robles, Antonio Mezzacapo, and Marco Pistoia.
Expressivity of variational quantum machine learning on the Boolean cube. 2022.
[396] Sofiene Jerbi, Lukas J. Fiderer, Hendrik Poulsen Nautrup, Jonas M. Kübler, Hans J. Briegel, and Vedran
Dunjko. Quantum machine learning beyond kernel methods. 2021.
[397] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization
in neural networks. arXiv preprint arXiv:1806.07572, 2018.
[398] Kouhei Nakaji, Hiroyuki Tezuka, and Naoki Yamamoto. Quantum-enhanced neural networks in the neural
tangent kernel framework. arXiv preprint arXiv:2109.03786, 2021.
[399] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 86(11):2278–2324, 1998.
[400] Iordanis Kerenidis, Jonas Landman, and Anupam Prakash. Quantum algorithms for deep convolutional
neural networks. arXiv preprint arXiv:1911.01117, 2019.
[401] Feihong Shen and Jun Liu. QFCNN: Quantum Fourier convolutional neural network. arXiv preprint arXiv:2106.10421, 2021.
[402] Iris Cong, Soonwon Choi, and Mikhail D. Lukin. Quantum convolutional neural networks. Nature Physics,
15(12):1273–1278, 2019.
[403] Guillaume Verdon, Trevor McCourt, Enxhell Luzhnica, Vikash Singh, Stefan Leichenauer, and Jack Hidary.
Quantum graph neural networks. arXiv preprint arXiv:1909.12264, 2019.
[404] Jaeho Choi, Seunghyeok Oh, and Joongheon Kim. A tutorial on quantum graph recurrent neural network
(QGRNN). In 2021 International Conference on Information Networking (ICOIN), pages 46–49, 2021.
[405] Daoyi Dong, Chunlin Chen, Hanxiong Li, and Tzyh-Jong Tarn. Quantum reinforcement learning. IEEE
Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 38(5):1207–1220, 2008.
[406] G. D. Paparo, V. Dunjko, A. Makmal, M. A. Martin-Delgado, and H. J. Briegel. Quantum speedup for
active learning agents. Phys. Rev. X, 4:031002, 2014.
[407] Samuel Yen-Chi Chen, Chao-Han Huck Yang, Jun Qi, Pin-Yu Chen, Xiaoli Ma, and Hsi-Sheng Goan.
Variational quantum circuits for deep reinforcement learning. IEEE Access, 8:141007–141024, 2020.
[408] Samuel Yen-Chi Chen, Chih-Min Huang, Chia-Wei Hsing, Hsi-Sheng Goan, and Ying-Jer Kao. Variational
quantum reinforcement learning via evolutionary optimization. Machine Learning: Science and Technology,
2021.
[409] Owen Lockwood and Mei Si. Reinforcement learning with quantum variational circuits. In Proceedings
of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, volume 16, pages
245–251. AAAI Press, 2020.
[410] Sofiene Jerbi, Lea M. Trenkwalder, Hendrik Poulsen Nautrup, Hans J. Briegel, and Vedran Dunjko. Quan-
tum enhancements for deep reinforcement learning in large spaces. PRX Quantum, 2:010328, Feb 2021.
[411] Daniel Crawford, Anna Levit, Navid Ghadermarzy, Jaspreet S Oberoi, and Pooya Ronagh. Reinforcement
learning using quantum Boltzmann machines. arXiv preprint arXiv:1612.05695, 2016.
[412] El Amine Cherrat, Iordanis Kerenidis, and Anupam Prakash. Quantum reinforcement learning via policy
iteration, 2022.
[413] Arjan Cornelissen. Quantum gradient estimation and its application to quantum reinforcement learning.
Master’s thesis, Delft, Netherlands, 2018.
[414] Daniel W. Otter, Julian R. Medina, and Jugal K. Kalita. A survey of the usages of deep learning for natural
language processing. IEEE Transactions on Neural Networks and Learning Systems, 32(2):604–624, 2021.
[415] Usman Naseem, Imran Razzak, Shah Khalid Khan, and Mukesh Prasad. A comprehensive survey on word
representation models: From classical to state-of-the-art word representation language models. ACM Trans.
Asian Low-Resour. Lang. Inf. Process., 20(5), Jun 2021.
[416] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz
Kaiser, and Illia Polosukhin. Attention is all you need. In Proceedings of the 31st International Conference
on Neural Information Processing Systems, NIPS’17, 2017.
[417] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirec-
tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American
Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long
and Short Papers). Association for Computational Linguistics, June 2019.
[418] Justin Sybrandt and Ilya Safro. CBAG: Conditional biomedical abstract generation. Plos one,
16(7):e0253905, 2021.
[419] Luciano Floridi and Massimo Chiriatti. GPT-3: Its nature, scope, limits, and consequences. Minds and
Machines, 30:1–14, 2020.
[420] Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. Language (technology) is power: A
critical survey of “bias” in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computa-
tional Linguistics. Association for Computational Linguistics, July 2020.
[421] Justin Sybrandt, Ilya Tyagin, Michael Shtutman, and Ilya Safro. Agatha: Automatic graph mining and
transformer based hypothesis generation approach. In Proceedings of the 29th ACM International Conference
on Information & Knowledge Management, pages 2757–2764, 2020.
[422] Payal Dhar. The carbon impact of artificial intelligence. Nature Machine Intelligence, 2(8):423–425, 2020.
[423] Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark. Mathematical foundations for a compositional
distributional model of meaning. arXiv preprint arXiv:1003.4394, 2010.
[424] Joachim Lambek. The mathematics of sentence structure. Journal of Symbolic Logic, 33(4):627–628, 1968.
[425] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations
in vector space. In ICLR: Proceeding of the International Conference on Learning Representations Workshop,
2013.
[426] Bob Coecke and Eric Oliver Paquette. Categories for the practising physicist. In New structures for physics,
pages 173–286. Springer, 2010.
[427] Bob Coecke, Giovanni de Felice, Konstantinos Meichanetzidis, and Alexis Toumi. Foundations for near-term
quantum natural language processing. arXiv preprint arXiv:2012.03755, 2020.
[428] Samson Abramsky and Bob Coecke. Categorical quantum mechanics. Handbook of quantum logic and
quantum structures, 2:261–325, 2009.
[429] Bob Coecke and Ross Duncan. Interacting quantum observables: categorical algebra and diagrammatics.
New Journal of Physics, 13(4):043016, Apr 2011.
[430] Giovanni de Felice, Alexis Toumi, and Bob Coecke. DisCoPy: Monoidal categories in Python. Electronic
Proceedings in Theoretical Computer Science, 333:183–197, Feb 2021.
[431] R. Lorenz, A. Pearson, K. Meichanetzidis, D. Kartsaklis, and B. Coecke. QNLP in practice: Running
compositional models of meaning on a quantum computer. arXiv preprint arXiv:2102.12846, 2021.
[432] Konstantinos Meichanetzidis, Alexis Toumi, Giovanni de Felice, and Bob Coecke. Grammar-aware question-
answering on quantum computers. arXiv preprint arXiv:2012.03756, 2020.
[433] Johannes Bausch. Recurrent quantum neural networks. In H. Larochelle, M. Ranzato, R. Hadsell, M. F.
Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 1368–1379.
Curran Associates, Inc., 2020.
[434] Samuel Yen-Chi Chen, Shinjae Yoo, and Yao-Lung L Fang. Quantum long short-term memory. arXiv
preprint arXiv:2009.01783, 2020.
[435] Yuto Takaki, Kosuke Mitarai, Makoto Negoro, Keisuke Fujii, and Masahiro Kitagawa. Learning temporal
data with a variational quantum recurrent neural network. Physical Review A, 103(5), May 2021.
[436] Mohiuddin Ahmed, Abdun Naser Mahmood, and Jiankun Hu. A survey of network anomaly detection
techniques. Journal of Network and Computer Applications, 60:19–31, 2016.
[437] Jarrod West and Maumita Bhattacharya. Intelligent financial fraud detection: A comprehensive review.
Computers & Security, 57:47–66, 2016.
[438] Guansong Pang, Chunhua Shen, Longbing Cao, and Anton Van Den Hengel. Deep learning for anomaly
detection: A review. ACM Comput. Surv., 54(2), March 2021.
[439] Daniel Herr, Benjamin Obert, and Matthias Rosenkranz. Anomaly detection with variational quantum
generative adversarial networks. Quantum Science and Technology, 6(4):045004, Jul 2021.
[440] T. Schlegl, P. Seeböck, S. M. Waldstein, U. Schmidt-Erfurth, and G. Langs. Unsupervised anomaly detection
with generative adversarial networks to guide marker discovery. In Information Processing in Medical Imaging.
Springer, 2017.
[441] Ming-Chao Guo, Hai-Ling Liu, Yong-Mei Li, Wen-Min Li, Su-Juan Qin, Qiao-Yan Wen, and Fei Gao.
Quantum algorithms for anomaly detection using amplitude estimation. arXiv preprint arXiv:2109.13820,
2021.
[442] Dongkuan Xu and Yingjie Tian. A comprehensive survey of clustering algorithms. Annals of Data Science,
2(2):165–193, 2015.
[443] Shihao Gu, Bryan Kelly, and Dacheng Xiu. Empirical asset pricing via machine learning. The Review of
Financial Studies, 33(5):2223–2273, 2020.
[444] L. Chen, M. Pelger, and J. Zhu. Deep learning in asset pricing. SSRN, 2020.
[445] Stefan Nagel. Machine Learning in Asset Pricing. Princeton University Press, 2021.
[446] T. Sakuma. Application of deep quantum neural networks to finance. arXiv preprint arXiv:2011.07319,
2020.
[447] Steven A Cuccaro, Thomas G Draper, Samuel A Kutin, and David Petrie Moulton. A new quantum
ripple-carry addition circuit. arXiv preprint quant-ph/0410184, 2004.
[448] Davide Venturelli and Alexei Kondratyev. Reverse quantum annealing approach to portfolio optimization
problems. Quantum Machine Intelligence, 1(1-2):17–30, Apr 2019.
[449] Tomas Boothby, Andrew D King, and Aidan Roy. Fast clique minor generation in Chimera qubit connectivity
graphs. Quantum Information Processing, 15(1):495–508, 2016.
[450] Romina Yalovetzky, Pierre Minssen, Dylan Herman, and Marco Pistoia. NISQ-HHL: Portfolio optimization
for near-term quantum hardware. arXiv preprint arXiv:2110.15958, 2021.
[451] Stephane Beauregard. Circuit for Shor’s algorithm using 2n+3 qubits. Quantum Info. Comput., 3(2):175–185,
March 2003.
[452] Mikko Möttönen, Juha J. Vartiainen, Ville Bergholm, and Martti M. Salomaa. Transformation of quantum
states using uniformly controlled rotations. Quantum Info. Comput., 5(6):467–473, Sep 2005.
[453] Yonghae Lee, Jaewoo Joo, and Soojoon Lee. Hybrid quantum linear equation algorithm and its experimental
test on IBM quantum experience. Scientific Reports, 9, March 2019.
[454] Gadi Aleksandrowicz, Thomas Alexander, Panagiotis Barkoutsos, Luciano Bello, Yael Ben-Haim, David
Bucher, Francisco Jose Cabrera-Hernández, Jorge Carballo-Franquis, Adrian Chen, Chun-Fu Chen, et al.
Qiskit: An open-source framework for quantum computing. 2019.
[455] R. G. Beausoleil, W. J. Munro, T. P. Spiller, and W. K. van Dam. Tests of quantum information. US
Patent 7,559,101 B2, 2008.