A Survey of Quantum Computing for Finance


Dylan A. Herman*1, Cody Googin*2, Xiaoyuan Liu*3, Alexey Galda4, Ilya Safro3, Yue Sun1, Marco Pistoia1, and Yuri Alexeev5

1 JPMorgan Chase Bank, N.A., New York, NY, USA — {dylan.a.herman,yue.sun,marco.pistoia}@jpmorgan.com
2 University of Chicago, Chicago, IL, USA — codygoogin@uchicago.edu
3 University of Delaware, Newark, DE, USA — {joeyxliu,isafro}@udel.edu
4 Menten AI, San Francisco, CA, USA — alexey.galda@menten.ai
5 Argonne National Laboratory, Lemont, IL, USA — yuri@anl.gov
arXiv:2201.02773v4 [quant-ph] 27 Jun 2022

Abstract. Quantum computers are expected to surpass the computational capabilities of classical computers during this decade and to have a transformative impact on numerous industry sectors, particularly finance. In fact, finance is estimated to be the first industry sector to benefit from quantum computing, not only in the medium and long terms, but even in the short term. This survey paper presents a comprehensive summary of the state of the art of quantum computing for financial applications, with particular emphasis on stochastic modeling, optimization, and machine learning, describing how these solutions, adapted to work on a quantum computer, can potentially help to solve financial problems, such as derivative pricing, risk modeling, portfolio optimization, natural language processing, and fraud detection, more efficiently and accurately. We also discuss the feasibility of these algorithms on near-term quantum computers with various hardware implementations and demonstrate how they relate to a wide range of use cases in finance. We hope this article will not only serve as a reference for academic researchers and industry practitioners but also inspire new ideas for future research.

* These authors contributed equally to this work.

Contents

1 Introduction
2 Applicability of Quantum Computing to Finance
  2.1 Macroeconomic Challenges for Financial Institutions
    2.1.1 Keeping Up with Regulations
    2.1.2 Addressing Customer Expectations and Big-Data Requirements
    2.1.3 Ensuring Data Security
  2.2 Financial Use Cases for Quantum Computing
3 Quantum Computing Concepts
  3.1 Fundamentals of Quantum Information
  3.2 Models of Quantum Computation
    3.2.1 Gate-Based Quantum Computing
    3.2.2 Adiabatic Quantum Computing
  3.3 Quantum Hardware Challenges
    3.3.1 Noise
    3.3.2 Connectivity
    3.3.3 Gate Speed
    3.3.4 Native Gates
4 Foundational Quantum Algorithms
  4.1 Quantum Unstructured Search
  4.2 Quantum Amplitude Amplification
  4.3 Quantum Phase Estimation
  4.4 Quantum Amplitude Estimation
  4.5 Quantum Linear System Algorithms
  4.6 Quantum Walks
  4.7 Variational Quantum Algorithms
  4.8 Adiabatic Quantum Optimization
5 Stochastic Modeling
  5.1 Monte Carlo Integration
  5.2 Numerical Solutions of Differential Equations
    5.2.1 Quantum-linear-system-based algorithms
    5.2.2 Variational algorithms
  5.3 Financial Applications
    5.3.1 Derivative Pricing (Options; Collateralized Debt Obligations)
    5.3.2 Risk Modeling (Value at Risk; Economic Capital Requirement; The Greeks; Credit Value Adjustments)
6 Optimization
  6.1 Combinatorial Optimization
    6.1.1 Quantum Annealing
    6.1.2 Quantum Approximate Optimization Algorithm
    6.1.3 Variational Quantum Eigensolver
    6.1.4 Variational Quantum Imaginary Time Evolution (VarQITE)
    6.1.5 Optimization by Quantum Unstructured Search
  6.2 Convex Optimization
  6.3 Large-Scale Optimization
  6.4 Financial Applications
    6.4.1 Portfolio Optimization (Combinatorial Formulations; Convex Formulations)
    6.4.2 Swap Netting
    6.4.3 Optimal Arbitrage
    6.4.4 Identifying Creditworthiness
    6.4.5 Financial Crashes
7 Machine Learning
  7.1 Regression
  7.2 Classification
    7.2.1 Quantum Support Vector Machine
    7.2.2 Quantum Nearest-Neighbors Algorithm
  7.3 Clustering
    7.3.1 Quantum k-Means Clustering
    7.3.2 Quantum Spectral Clustering
  7.4 Dimensionality Reduction
  7.5 Generative Models
    7.5.1 Quantum Circuit Born Machine
    7.5.2 Quantum Bayesian Networks
    7.5.3 Quantum Boltzmann Machines
    7.5.4 Quantum Generative Adversarial Networks
  7.6 Quantum Neural Networks
    7.6.1 Quantum Feedforward Neural Network
    7.6.2 Quantum Convolutional Neural Network
    7.6.3 Quantum Graph Neural Networks
  7.7 Quantum Reinforcement Learning
  7.8 Natural Language Modeling
  7.9 Financial Applications
    7.9.1 Anomaly Detection
    7.9.2 Asset Pricing
    7.9.3 Implied Volatility
8 Hardware Implementations of Use Cases
  8.1 Quantum Risk Analysis on a Transmon Device
  8.2 Portfolio Optimization with Reverse Quantum Annealing
  8.3 Portfolio Optimization on a Trapped-Ion Device
9 Conclusion and Outlook

1 Introduction
Quantum computation relies on a fundamentally different means of processing and storing information than
today’s classical computers use. The reason is that the information does not obey the laws of classical mechanics
but those of quantum mechanics. Usually, quantum-mechanical effects become apparent only at very small scales,
when quantum systems are properly isolated from the surrounding environments. These conditions, however,
make the realization of a quantum computer a challenging task. Even so, according to a McKinsey & Co.
report [1], finance is estimated to be the first industry sector to benefit from quantum computing (Section 2),
largely because of the potential for many financial use cases to be formulated as problems that can be solved
by quantum algorithms suitable for near-term quantum computers. This is important because current quantum
computers are small-scale and noisy, yet the hope is that we can still find use cases for them. In addition, a variety
of quantum algorithms will be more applicable when large-scale, robust quantum computers become available,
which will significantly speed up computations used in finance.
The diversity in the potential hardware platforms or physical realizations of quantum computation is com-
pletely unlike any of the classical computational devices available today. There exist proposals based on vastly
different physical systems: superconductors, trapped ions, neutral atoms, photons, and others (Section 3.3), yet
no clear winner has emerged. A large number of companies are competing to be the first to develop a quan-
tum computer capable of running applications useful in a production environment. In theory, any computation
that runs on a quantum computer may also be executed on a classical computer—the benefit that quantum
computing may bring is the potential reduction in time and memory space with which the computational tasks
are performed [2], which in turn may lead to unprecedented scalability and accuracy of the computations. In
addition, any classical algorithm can be modified (in a potentially nontrivial way) such that it can be executed
on a universal quantum computer, but without any speedup. Obtaining quantum speedup requires developing
new algorithms that specifically take advantage of quantum-mechanical properties. Thus, classical and quantum
computers will need to work together. Moreover, in order to solve a real-world problem, a classical computer must be able to efficiently load the necessary data into a quantum computer and retrieve the results from it.
This promised efficiency of quantum computers enables certain computations that are otherwise infeasible
for current classical computers to complete in any reasonable amount of time (Section 3). In general, however,
the speedup for each task can vary greatly or may even be currently unknown (Section 4). While these speedups,
if found, can have a tremendous impact in practice, they are typically difficult to obtain. And even if they are
discovered, the underlying quantum hardware must be powerful enough to minimize errors without introducing
an overhead that counteracts the algorithmic speedup. We do not know exactly when such robust hardware will
exist. Thus, the goal of quantum computing research is to develop quantum algorithms (Section 4) that solve
useful problems faster and to build robust hardware platforms to run them on. The industry needs to understand
the problems that can best benefit from quantum computing and the extent of these benefits, in order to make
full use of its revolutionary power when production-grade quantum devices are available.
To this end, we offer a comprehensive overview of the applicability of quantum computing to finance. Ad-
ditionally, we provide insight into the nature of quantum computation, focusing particularly on specific financial
problems for which quantum computing can provide potential speedups compared with classical computing. The
sections in this article can be grouped into two parts. The first part contains introductory material: Section
2 introduces computationally intensive financial problems that potentially benefit from quantum computing,
whereas Sections 3 and 4 introduce the core concepts of quantum computation and quantum algorithms. The
second part—the main focus of the article—reviews research performed by the scientific community to develop
quantum-enhanced versions of three major problem-solving techniques used in finance: stochastic modeling (Sec-
tion 5), optimization (Section 6), and machine learning (Section 7). The connections between financial problems
and these problem-solving techniques are outlined in Table 1. Section 8 covers experiments on current quantum
hardware that have been performed by researchers to solve financial use cases.

Overview of Earlier Surveys


Although the field of quantum computing is one of the most dynamic today, and some scientific conclusions, approaches, and technologies become obsolete relatively quickly, we reiterate the importance of getting information from different perspectives. Here we emphasize the differences between our survey and several existing surveys that have recently been published or appeared in the open domain. One of the key differences is that in our survey we not only summarize the recent achievements but also discuss the limitations of quantum devices and algorithmic approaches, which are frequently overhyped in the media.
In [3], Egger et al. focus on covering the hardware and algorithmic work done by IBM. We take a much
broader view and survey the entire landscape of quantum technologies and their applicability in the finance
domain. For example, we believe it is beneficial to also discuss the quantum annealing-based approaches and
other gate-based devices. Also, with regard to optimization applications, we discuss a much wider variety of
financial applications that make use of optimization and may take advantage of quantum hardware development.

The article by Bouland et al. [4] focuses on works done by the QC Ware team with a central emphasis on
quantum Monte Carlo integration. Some more recent work has been done in that area and is included in our
survey. Also Bouland et al. focus on portfolio optimization. While this is an important financial problem that
we also cover, we have tried to include other financial applications that use optimization as well. Moreover, we
tried to delve deeper into the various quantum machine learning approaches for generative modeling and neural
networks.
The survey by Orus et al. [5] does an excellent job at highlighting financial applications that make use of
quantum annealers. However, we believe the quantitative finance and quantum computing communities would
also benefit from hearing about other quantum optimization approaches and devices. We also tried to delve a
little bit deeper into how quantum annealing works and discuss universal adiabatic computation. Given that the
field of quantum computation is so dynamic, we believe the community can benefit from seeing more recent work.
A recent survey by Pistoia et al. [6] covers a variety of quantum machine learning algorithms applicable to
finance. In our review, we discuss a broader array of applications outside the realm of machine learning, such as
financial applications that make use of stochastic modeling and optimization.
There are also several problem-specific surveys, such as those on derivative pricing [7], supply chain finance [8], and high-frequency trading [9], that tackle different goals than ours.

2 Applicability of Quantum Computing to Finance


Numerous financial use cases require the ability to assess a wide range of potential outcomes. To this end,
financial institutions employ statistical models and algorithms to predict future outcomes. Such techniques are
fairly effective but not infallible. In a world where huge amounts of data are generated daily, computers that
can perform predictive computations accurately are becoming a predominant need. For this reason, several
financial institutions are turning to quantum computing, given its promise to analyze vast amounts of data and
compute results faster and more accurately than any classical computer has ever been able to do. Financial
institutions believe that once they are capable of leveraging quantum computing, they are likely to see important
benefits. This section introduces financial use cases that are projected to lend themselves to a quantum computing
approach.

2.1 Macroeconomic Challenges for Financial Institutions


The financial services industry is expected to reach a market cap of $28.53 trillion by 2025, with a compound
annual growth rate of 6% [10]. While this industry is often considered incredibly stable and insulated from
change, shifting government regulations, new technologies, and customer expectations present challenges that
require the industry to adapt. We discuss here three key macroeconomic financial trends that can be impacted by
quantum computing: keeping up with regulations, addressing customer expectations and big data requirements,
and ensuring data security.

2.1.1 Keeping Up with Regulations


The Third Basel Accord, commonly known as Basel III, is an international regulatory framework set up in response
to the 2008 financial crisis. Basel III sets requirements on capital, risk coverage, leverage, risk management and
supervision, market discipline, liquidity, and supervisory monitoring [11]. These regulations require computing
numerous risk metrics, such as value at risk (VaR) and conditional value at risk (CVaR). Because of the stochastic
nature of the parameters involved in these computations, closed-form analytical solutions are generally not
available, and hence numerical methods have to be employed. One of the most widely used numerical methods
is Monte Carlo integration, since it is generic and scalable to high-dimensional problems [12]. Quantum Monte
Carlo integration (Section 5.1) has been shown to offer up to a quadratic speedup over the performance of its
classical counterpart. Determining risk metrics with higher accuracy and efficiency will enable financial institutions to make better-informed loan-related decisions and financial projections.
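To make the classical baseline concrete, the sketch below estimates value at risk for a toy single-asset portfolio by plain Monte Carlo; the normal P&L model and its parameters are illustrative assumptions, not taken from this survey. The statistical error of such an estimate shrinks as O(1/√N) in the number of samples N, which is exactly the rate that quantum amplitude estimation improves to O(1/N).

```python
import numpy as np

def monte_carlo_var(mu, sigma, alpha=0.99, n_samples=100_000, seed=7):
    """Estimate value at risk (VaR) of a single-asset P&L by Monte Carlo.

    Assumes, for illustration only, that daily P&L is normally distributed
    with mean `mu` and volatility `sigma`.  VaR at confidence level `alpha`
    is the loss threshold exceeded with probability 1 - alpha.
    """
    rng = np.random.default_rng(seed)
    pnl = rng.normal(mu, sigma, n_samples)   # simulated P&L scenarios
    return -np.quantile(pnl, 1.0 - alpha)    # loss is negative P&L

# Classical convergence is O(1/sqrt(N)): quadrupling the samples only
# halves the statistical error.  Quantum Monte Carlo integration
# (Section 5.1) quadratically improves this sampling cost.
print(monte_carlo_var(mu=0.0, sigma=1.0))    # close to 2.326, the exact
                                             # 99% quantile of N(0, 1)
```

The same estimator structure (simulate scenarios, aggregate a statistic) underlies the CVaR and capital-requirement computations mentioned above.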

2.1.2 Addressing Customer Expectations and Big-Data Requirements


Customer personalization has become increasingly important in finance. For example, financial technology (FinTech), in contrast to traditional finance, thrives on providing personalized services. A Goldman Sachs report has
highlighted the necessity for financial institutions to provide new big-data-driven services to align with changing
consumer behavior [13]. According to a Salesforce 2020 publication [14], 52% of customers expect companies
to always personalize offers, while only 27% of customers feel that the financial services industry provides great service and support. Financial institutions must meet this expectation in order to retain their
customers. The current classical techniques for analyzing data have large computational demands and require

significant time to train algorithms, whereas quantum algorithms may be able to speed up these computationally
intensive and time-consuming components. As financial institutions continue to generate data, they must be
able to employ that data in a functional way to improve their business strategy. Additionally, data organization
can allow financial institutions to engage with customers’ finances more specifically and effectively, supporting
their customer service and keeping customers engaged despite alternatives such as FinTech. Much of this data analysis involves operating on large matrices, which can be computationally demanding for classical computers.

2.1.3 Ensuring Data Security


Ensuring data security is one of the most pressing concerns for the financial services industry, according to an
article by McKinsey [15] discussing rising cybersecurity issues. An increase in digitization has occurred due to
the rise of FinTech. Additionally, as a result of the COVID-19 pandemic, substantially more financial activity
is performed online. According to the McKinsey article, 95% of board committees discuss cyber and tech risks
four or more times a year. Financial institutions must stay up to date with online security and remain vigilant
against attackers. Quantum computing data modeling and machine learning techniques could allow financial
institutions to identify potential risks with higher accuracy. Furthermore, even if one does not plan to use
quantum computation, one must be knowledgeable about it because of its ability to break current public-key
cryptographic standards [2, 16, 17], such as RSA and Diffie–Hellman, including the common variants based on
elliptic curves [18]. This is due to quantum computing’s ability to efficiently solve the Abelian hidden subgroup
problem [19]. Although current quantum computers cannot achieve this task, financial institutions need to
become ready by investigating how to utilize quantum-safe protocols [20] and, potentially, quantum-enhanced
cryptography [21], each of which has its own potential usages in the layers of a network. While the situation is
by no means as serious for cryptographically secure hash functions or symmetric-key ciphers, it is important to
be aware of the potential quantum-enhanced attacks [22–25]. Here, however, we are going to focus on quantum
algorithms that have direct applications in finance, so protocols for quantum-enhanced cryptographic attacks,
quantum-safe cryptography, and quantum-enhanced cryptography are all out of the scope of this survey.
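To see why efficient factoring is so consequential for RSA, the toy sketch below (with deliberately tiny, insecure numbers of our own choosing) shows that knowing the prime factors of the public modulus immediately yields the private key; Shor's algorithm makes that factoring step efficient on a fault-tolerant quantum computer.

```python
# Toy RSA with tiny illustrative primes -- never use numbers this small.
p, q = 61, 53                 # the secret prime factors
n = p * q                     # public modulus (3233)
e = 17                        # public exponent
phi = (p - 1) * (q - 1)       # Euler's totient, computable only from p and q
d = pow(e, -1, phi)           # private exponent: e*d = 1 (mod phi); Python 3.8+

message = 42
ciphertext = pow(message, e, n)     # encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)   # decrypt with the derived private key
assert recovered == message

# An attacker who can factor n -- the step Shor's algorithm performs
# efficiently -- recomputes phi and d exactly as above, breaking the scheme.
```

The security of real RSA rests entirely on the classical hardness of recovering p and q from n, which is why a scalable quantum factoring device would force migration to quantum-safe protocols.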

2.2 Financial Use Cases for Quantum Computing


As discussed earlier, finance is one of the first industry sectors expected to benefit from quantum computing, largely because many financial problems lend themselves to being solved on near-term quantum computers. Since many financial problems take stochastic variables as inputs, they tend, more often than not, to be more tolerant of imprecision in their solutions than problems in chemistry and physics. Arguably, certain problems in chemistry and physics can also benefit from quantum computing. In this survey, however, we aim
to review common financial problems that can benefit from quantum computing and to discuss the associated
quantum algorithms applicable to solving them. We group these problems and quantum algorithms into three
broad categories: stochastic modeling, optimization, and machine learning.
Stochastic modeling is concerned with the study of the dynamics and statistical characteristics of stochastic
processes. In finance, one of the most commonly seen problems that involves stochastic modeling, using numerical
techniques, is estimating the prices of financial assets and their associated risks, whose values may depend
on certain stochastic processes. In Section 5, we will discuss two of the most widely used techniques for modeling
stochastic processes in finance—Monte Carlo methods and numerical solutions of differential equations, and the
corresponding quantum approaches for them, namely, quantum Monte Carlo integration (QMCI) and quantum
partial differential equation (PDE) solvers. We will also demonstrate the applicability of the two quantum
approaches to financial problems through examples of derivative pricing (Section 5.3.1) and risk modeling (Section
5.3.2). Stochastic modeling with quantum machine learning (QML) techniques will be discussed in Sections 7.9.2
and 7.9.3.
Optimization involves finding optimal inputs to a real-valued function so as to minimize or maximize the
value of the function. In Section 6 we will review prevailing quantum techniques for combinatorial and convex
optimization problems. We will cover adiabatic and variational quantum algorithms, as well as other quantum-
classical hybrid approaches. A large variety of financial problems that involve optimization could potentially
benefit from these quantum techniques, in particular, portfolio optimization (Section 6.4.1), hedging and swap
netting (Section 6.4.2), optimal arbitrage (Section 6.4.3), credit scoring (Section 6.4.4), and financial crash
prediction (Section 6.4.5).
Machine learning has become an essential aspect of modern business and research across various industries. It has revolutionized data processing and decision-making, empowering organizations large and small with unprecedented ability to leverage the ever-growing amount of easily accessible information. Quantum machine learning techniques can potentially speed up model training or other computationally intensive parts of the pipeline.
We will explore these techniques in Section 7. In finance, some of the most common use cases for machine learning
are anomaly detection (Section 7.9.1), natural language modeling (Section 7.8), asset pricing (Section 7.9.2), and

implied volatility calculation (Section 7.9.3). Applications of quantum machine learning in these use cases will
be discussed in the respective sections.
In Table 1 we present a summary of the quantum techniques and their applicable financial use cases covered
in this survey.

| Problem Category | Example Use Cases | Classical Solutions | Quantum Solutions |
|---|---|---|---|
| Stochastic Modeling | Derivative Pricing (Section 5.3.1), Risk Analysis (Section 5.3.2) | Monte Carlo Integration, Numerical PDE Solver, Machine Learning | Quantum Monte Carlo Integration (Section 5.1), Quantum PDE Solver (Section 5.2), Quantum Machine Learning (Section 7) |
| Optimization | Portfolio Optimization (Section 6.4.1), Hedging and Swap Netting (Section 6.4.2), Optimal Arbitrage (Section 6.4.3), Credit Scoring (Section 6.4.4), Financial Crash Prediction (Section 6.4.5) | Branch-and-Bound (with cutting planes, heuristics, etc.) for non-convex cases [26], Interior-Point Methods for certain convex cases [27] | Quantum Optimization (Section 6) |
| Machine Learning | Anomaly Detection (Section 7.9.1), Natural Language Modeling (Virtual Agents, Analyzing Financial Documents; Section 7.8), Risk Clustering (Section 7.3) | Deep Learning, Cluster Analysis | Quantum Machine Learning (Section 7), Quantum Cluster Analysis (Sections 7.3.1 and 7.3.2) |

Table 1: Financial Use Cases with Corresponding Classical and Quantum Solutions

3 Quantum Computing Concepts


Quantum computing is an emerging and promising field of science [28]. Global venture capital funds, public and
private companies, and governments have invested millions of dollars in this technology. Quantum computing
is driven by the need to solve computationally hard problems efficiently and accurately. For decades, the advancement of classical computing power has closely followed the well-known Moore's law. First articulated by Gordon Moore in 1965, Moore's law forecasts that the number of transistors in a classical computer chip will double every two years, resulting in lower prices for manufacturers and consumers. Today, transistors
are already at the size at which quantum-mechanical effects become apparent and problematic [29]. These effects
will only become more difficult to manage as classical chips decrease in size. This problem is partially circumvented by alternative forms of computing that do not use conventional transistors, such as photonic computing, neuromorphic computing, biocomputing, and special-purpose chips. However, these techniques still run classical algorithms and are subject to their scaling limits. Thus, it is imperative to investigate quantum computing, which promises scaling advantages that no classical architecture can achieve.
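Moore's doubling forecast is easy to make concrete; the snippet below uses an illustrative starting chip of our own choosing (roughly the 2,300-transistor scale of the earliest microprocessors) to show the exponential growth that is now running into quantum-mechanical limits.

```python
# Moore's law: transistor counts double roughly every two years.
# The initial count is an illustrative assumption, not a figure
# from this survey.
def transistors(years_elapsed, initial=2_300, doubling_period=2):
    """Projected transistor count after `years_elapsed` years of doubling."""
    return initial * 2 ** (years_elapsed / doubling_period)

# Fifty years of doubling every two years is 25 doublings: a 2**25-fold
# (about 33.5 million-fold) increase, regardless of the starting count.
growth = transistors(50) / transistors(0)
print(f"{growth:.0f}x")
```

The exponential form makes plain why shrinking transistors eventually reaches atomic scales, where the quantum effects mentioned above become unavoidable.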
Google has demonstrated an enormous computational speedup using quantum computing for the task of simulating the output of pseudo-random quantum circuits [30]. Specifically, Google claimed that what takes its quantum device, Sycamore, 200 seconds to accomplish would take Summit, one of the most powerful supercomputers, approximately 10,000 years; on this basis, Google claimed to have achieved quantum supremacy. As defined by John Preskill [31], quantum supremacy is the promise that certain computational (not necessarily useful) tasks can be executed exponentially faster on a quantum processor than on a classical one. The decisive demonstration

of quantum devices to solve more useful problems faster and/or more accurately than classical computers, such
as large-scale financial problems (Section 2.2), is called quantum advantage. Quantum advantage is more elusive
and, arguably, has not been demonstrated yet. The main challenge is that existing quantum hardware does not
yet seem capable of running algorithms on large enough problem instances. Current quantum hardware (Section
3.3) is in the noisy intermediate-scale quantum (NISQ) technology era [32], meaning that the current quantum
devices are underpowered and suffer from multiple issues. The fault-tolerant era refers to a yet unknown period
in the future in which large-scale quantum devices that are robust against errors will be present. In the next two
subsections we briefly discuss quantum information (Section 3.1) and models for quantum computation (Section
3.2). In the last subsection (Section 3.3) we give an overview of the current state of quantum hardware and the
challenges that must be overcome.

3.1 Fundamentals of Quantum Information


All computing systems rely on the fundamental ability to store and manipulate information. Today’s classical
computers operate on individual bits, which store information as binary 0 or 1 states. In contrast, quantum
computers use the physical laws of quantum mechanics to manipulate information. At this fundamental level,
the unit of information is represented by a quantum bit, or qubit. Physically, a qubit is any two-level quantum
system [2, 33]. Mathematically, the state space of a single qubit can be associated with the complex projective
line, denoted CP1 [34].1 However, one commonly considers qubit states as elements ψ, called state vectors, of a
two-dimensional complex vector space but restricts consideration to those that satisfy ‖ψ‖₂ = 1 and allows for ψ
and eiθ ψ to be used interchangeably (i.e., considers specific elements of the equivalence classes in CP1 ). A state
vector is usually denoted by using Dirac’s “bra-ket” notation: ψ is represented by the “ket” |ψi. Examples of two
single-qubit kets are the states |0i and |1i, which are analogous to the classical bits 0 and 1.
Multiqubit state spaces are expressed through the use of the vector space tensor product [35], which results
in a 2n -dimensional complex vector space for n qubits. The tensor product of two state vectors is denoted as
|ψ1 i⊗|ψ2 i, |ψ1 i |ψ2 i, or |ψ1 ψ2 i and extends naturally to multiple qubits. Entanglement is a quantum phenomenon
that can be present in multiqubit systems in which the states of the subsystems cannot be described independently.
This results in correlated observations (measurement results) even when physical communication between qubits,
which would correlate them in a classical sense, is impossible [2]. Entangled states are mathematically expressed
as tensors that are not factorable over the involved subsystems, in other words, non-simple tensors. A quantum
state is a product state of certain subsystems if those subsystems can be described independently and it is a
simple tensor with respect to those subsystems. Qubits are in a superposition state with respect to a given basis
for the state space if their state vector can be expressed as a nontrivial linear combination of basis states. Thus,
a qubit can be in a state that is an arbitrary superposition of |0i and |1i, whereas a classical bit can be in only
one of the analogous states 0 or 1. The coefficients of the superposition are complex numbers called amplitudes.
Although multiplying a state by a phase eiθ , called a global phase, has no physical significance, the relative phase
between the amplitudes of two basis states in a superposition does.
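The distinction between a global phase and a relative phase can be made concrete with a short numerical sketch. The following pure-Python illustration is ours, not drawn from the cited references; the helper names are hypothetical:

```python
import cmath, math

# Illustration (ours): a single-qubit state as a pair of complex amplitudes
# [a0, a1] with |a0|^2 + |a1|^2 = 1; the Born rule gives outcome probabilities.
def probabilities(state):
    return [abs(a) ** 2 for a in state]

def hadamard(state):
    """Change to the |+>, |-> basis."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return [s * (a0 + a1), s * (a0 - a1)]

plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]     # |+> = (|0> + |1>)/sqrt(2)
minus = [1 / math.sqrt(2), -1 / math.sqrt(2)]   # |-> = (|0> - |1>)/sqrt(2)

# A global phase e^{i*theta} changes no measurement probability ...
phased = [cmath.exp(0.7j) * a for a in plus]
assert all(abs(p - q) < 1e-12
           for p, q in zip(probabilities(plus), probabilities(phased)))

# ... but the relative phase separating |+> from |-> is observable once we
# measure in the Hadamard basis: the two states give opposite outcomes.
assert abs(probabilities(hadamard(plus))[0] - 1.0) < 1e-9
assert abs(probabilities(hadamard(minus))[1] - 1.0) < 1e-9
```

Both |+i and |−i give 50/50 statistics in the computational basis; only a change of measurement basis reveals the relative phase.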
A measurement in quantum mechanics consists of probing a system to obtain a numerical result. Measure-
ment of a quantum system is probabilistic. A projective measurement of a system with respect to a Hermitian
operator A, called an observable, results in the state vector of the system being orthogonally projected onto an
eigenspace, with orthogonal projector Πλ , and the observable quantity is the associated eigenvalue, λ. A poten-
tial projective measurement result λ is observed with probability equal to ‖Πλ |ψi‖₂². The expected value of the
measurement is equal to hψ|A|ψi, where hψ|, called a “bra,” is the Hermitian adjoint. In physics, the quantum
Hamiltonian is the observable for a system’s energy. A ground state of a quantum Hamiltonian is a state vector in
an eigenspace associated with the smallest eigenvalue and thus has the lowest energy. Any physical transforma-
tion of a quantum system can be represented by a completely positive non-trace increasing linear operator. Two
important special cases of such operations are unitary operators and measurements, which are not unitary. The
dynamics of a closed quantum system follow the Schrödinger equation, iℏ |ψ̇(t)i = H |ψ(t)i, where ℏ is the reduced
Planck constant and H is the system’s quantum Hamiltonian. The unitary operator e−iH∆/ℏ , which transforms
a solution |ψ(t)i to the Schrödinger equation at one point in time t into another one at a different point in time
t + ∆, is called the time-evolution operator or propagator. There can also be a time-dependent Hamiltonian,
H(t), where the operator H changes over time. However, the propagator is computed by the product integral [36]
to take into account that H(ti ) and H(tj ) may not commute for all ti , tj in the specified evolution time interval
[33]. The simulation of time evolution on a quantum computer is called Hamiltonian simulation. For quantum
algorithms, the Hamiltonian can be an arbitrary observable with no connection to a physical system, and ℏ := 1
is often assumed. Three important observables on a single qubit are the Pauli operators: X := |+ih+| − |−ih−|,
Y := |iihi| − |−iih−i|, and Z := |0ih0| − |1ih1|, where |±i := (|0i ± |1i)/√2, |±ii := (|0i ± i|1i)/√2, and |ψihψ|
denotes the vector outer product.2 For n qubits, the set {|ki | k ∈ {0, 1}n } forms the computational basis.
Measuring in the computational basis probabilistically projects the quantum state onto a computational basis
state. A measurement in the context of quantum computation, without further clarification, usually refers to a
measurement in the computational basis.

1
Generally, quantum states live in a projective Hilbert space, which can be infinite-dimensional [33].
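The projective-measurement rule can be checked numerically for the single-qubit observable Z. The sketch below is our own illustration (the function name is hypothetical): the projectors are P₊ = |0ih0| and P₋ = |1ih1|, outcome λ occurs with probability ‖Pλ|ψi‖₂², and the expected value is hψ|Z|ψi.

```python
import math

# Illustration (ours): projective measurement of a qubit with respect to
# Z = |0><0| - |1><1|.  Outcome +1 projects onto |0>, outcome -1 onto |1>.
def measure_z(state):
    a0, a1 = state
    p_plus = abs(a0) ** 2            # ||P_{+1}|psi>||^2, eigenvalue +1
    p_minus = abs(a1) ** 2           # ||P_{-1}|psi>||^2, eigenvalue -1
    expectation = (+1) * p_plus + (-1) * p_minus   # <psi|Z|psi>
    return p_plus, p_minus, expectation

psi = [math.cos(0.3), math.sin(0.3)]   # a real-amplitude superposition
p0, p1, exp_z = measure_z(psi)
assert abs(p0 + p1 - 1.0) < 1e-12                 # probabilities sum to 1
assert abs(exp_z - math.cos(2 * 0.3)) < 1e-12     # cos^2 - sin^2 = cos(2*0.3)
```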
A positive semi-definite operator ρ on the quantum state space with Tr(ρ) = 1 is called a density operator,
where Tr(·) is the trace operator. The classical probabilistic mixture of pure states |ψi i, in other words, state
vectors, with probabilities pi has density operator ρ := Σi pi |ψi ihψi | and is called a mixed state. In general, a
density operator ρ represents a mixed state if Tr(ρ2 ) < 1 and a pure state if Tr(ρ2 ) = 1. Fidelity is one of the
standard measures of closeness for quantum states ρ and σ, defined as F (ρ, σ) = Tr(√(ρ1/2 σρ1/2 )) [2]. The fidelity of an operation
refers to the computed fidelity between the quantum state that resulted after the operation was performed and
the expected state. Decoherence is the loss of information about a quantum state to its environment, due to
interactions with it. Decoherence constitutes an important source of error for near-term quantum computers.
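The purity criterion Tr(ρ2 ) can be checked directly on 2 × 2 matrices. The sketch below (ours, with hypothetical helper names) contrasts the pure state |+ih+| with the maximally mixed state: both give 50/50 computational-basis statistics, but only the latter has purity below 1.

```python
import math

# Illustration (ours): density operators as 2x2 nested lists; the purity
# Tr(rho^2) distinguishes pure states (= 1) from mixed states (< 1).
def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace2(A):
    return A[0][0] + A[1][1]

def density(state):
    """rho = |psi><psi| for a state vector [a0, a1]."""
    return [[a * b.conjugate() for b in state] for a in state]

s = 1 / math.sqrt(2)
pure = density([s, s])                    # |+><+|
mixed = [[0.5, 0.0], [0.0, 0.5]]          # equal mixture of |0> and |1>

assert abs(trace2(matmul2(pure, pure)) - 1.0) < 1e-12    # Tr(rho^2) = 1: pure
assert abs(trace2(matmul2(mixed, mixed)) - 0.5) < 1e-12  # Tr(rho^2) < 1: mixed
```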
For further details, the works of Nielsen and Chuang [2] and Kitaev et al. [37] are the standard references on
the subject of quantum computation. The former covers a variety of topics from quantum information theory,
while the latter focuses on quantum computational complexity theory.

3.2 Models of Quantum Computation


A qubit-based3 model of quantum computation is, simply, a methodology for performing computations on qubits.
A model can realize universal quantum computation if it can be used to approximate any n-qubit unitary, for all
n, with arbitrarily small error [2, 40]. Two universal models are polynomial-time equivalent if they can simulate
each other with only a polynomially-bounded overhead in computational resources [40, 41]. We briefly discuss
two polynomial-time-equivalent models that can be used to realize universal quantum computation: gate-based
quantum computing and adiabatic quantum computing. There are others based on qubits [40] that we will not
review since current quantum devices do not implement them, even in a non-universal fashion.
When running an algorithm on quantum hardware, one must contend with errors that can occur during
computation. In the near term, quantum error mitigation techniques are commonly used to reduce (mitigate)
errors, rather than eliminate them. However, error correction will be required in the long term. Techniques for
quantum error correction [42] or fault tolerance have been developed to construct logical qubits and operations
that tolerate errors. This strategy has been extensively studied for gate-based quantum computing [43].

3.2.1 Gate-Based Quantum Computing


Gate-based quantum computing, also known as the quantum circuit model, composes a countable sequence of
unitary operators, called gates, into a circuit to manipulate qubits that start in a fixed state [2, 37]. A circuit
can be viewed as a directed acyclic graph tracking the operations applied, which qubits they operate on, and
the time step at which they are applied. Typically, the circuit ends with measuring in the computational basis.
However, measurements of subsets of qubits before the end of the circuit and operations conditioned on the
results of measurements are also possible [44]. Sequences of gates that act on separate sets of qubits can, if
hardware supports it, be applied in parallel. The circuit depth is the longest sequence of gates in the circuit,
from initial state until measurement, that cannot be made shorter through further parallelization. The quantum
circuit model is analogous to the Boolean circuit model of classical computation [37]. An important difference
with quantum computation is that because of unitarity all gates are invertible, i.e., computation involving only
quantum gates is reversible. A shot is a single execution of a quantum circuit followed by measurements. Multiple
shots are required if the goal is to obtain statistics.
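As a toy example of the circuit model and shots, the following pure-Python statevector simulation (our own sketch, not an implementation from the references) prepares a Bell state with an H gate and a CNOT gate, then samples measurement outcomes:

```python
import math, random, collections

# Illustration (ours): two-qubit statevector simulation of H(q0); CNOT(q0,q1)
# acting on |00>, followed by repeated "shots" (computational-basis samples).
s = 1 / math.sqrt(2)
state = [1.0, 0.0, 0.0, 0.0]          # amplitudes for |00>, |01>, |10>, |11>

# H on qubit 0 (the left bit) mixes basis indices that differ in that bit.
state = [s * (state[0] + state[2]), s * (state[1] + state[3]),
         s * (state[0] - state[2]), s * (state[1] - state[3])]

# CNOT with control q0 and target q1 swaps |10> <-> |11>.
state[2], state[3] = state[3], state[2]

probs = [abs(a) ** 2 for a in state]   # Bell state: weight 1/2 on |00> and |11>
random.seed(0)
shots = collections.Counter(
    random.choices(["00", "01", "10", "11"], weights=probs, k=10_000))

# Entanglement shows up as perfectly correlated outcomes: only 00 and 11 occur.
assert shots["01"] == 0 and shots["10"] == 0
assert abs(shots["00"] / 10_000 - 0.5) < 0.05
```

Each draw from the distribution plays the role of one shot; the empirical frequencies converge to the Born-rule probabilities as the number of shots grows.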
This model can be utilized to realize universal quantum computation by considering circuits with n qubits
built by using any type of single-qubit gate and only one type of two-qubit gate [2]. Any n-qubit unitary can be
approximated in this manner to arbitrarily small error. The two-qubit gates can be used to mediate entanglement.
A discrete set of gates that can be used for universal quantum computation is called a basis gate set. The basis
gate set realized by a quantum device is called its native gate set. The native gate set typically consists of a finite
number of (parameterized and non-parameterized) single-qubit gates and one two-qubit entangling operation.
An arbitrary single-qubit gate can be approximately decomposed into the native single-qubit gates.
Some common single-qubit gates are the Hadamard gate (H), the Phase (S) and π/8-Phase (T) gates, the Pauli
gates (X, Y, Z), and the rotation gates generated by Pauli operators (RX (θ), RY (θ), RZ (θ)). Examples of two-qubit
gates are the controlled-NOT gate denoted CX or CNOT and the SWAP gate. The H, S, and CX gates generate,

2
The Pauli operators are also sometimes denoted as σx , σy , and σz , respectively.
3
There are other models of quantum computation that are not based on qubits, such as n-level quantum
systems, for n > 2, called qudits [38], and continuous-variable, infinite-dimensional systems [39]. This article
discusses only quantum models and algorithms based on qubits.

under the operation of composition, the Clifford group, which is the normalizer of the group of Pauli gates [2].
The time complexity of an algorithm is measured in terms of the depth of the circuit implementing it; when the
circuit is run on hardware, the depth is counted in native gates. The majority of commercially available quantum
computers implement the quantum circuit model.
We note that quantum circuits generated for gate-based quantum computers can also be simulated on classical
hardware. It is generally a computationally inefficient process, however, with enormous memory requirements
scaling exponentially with the number of qubits [45–48]. But for certain types of computations such as computing
the expectation values of observables, these requirements can be dramatically reduced by using tensor network
simulators [49–51]. Classical simulation is commonly used for the development of algorithms, verification, and
benchmarking, to name a few applications.

3.2.2 Adiabatic Quantum Computing


Adiabatic quantum computing (AQC) is a model for quantum computation that relies on the quantum adiabatic
theorem. This theorem states that if a system starts in an nth eigenstate of a time-dependent Hamiltonian, H(t),
at t = 0 and the evolution is sufficiently slow (quantified in [52]), then the system remains in an instantaneous
nth eigenstate of H(t) throughout the evolution [52]. A unitary operation can be
applied by following an adiabatic trajectory from an initial Hamiltonian Hi whose ground state is easy to prepare
(e.g., a product state of qubits) to the final Hamiltonian Hf whose ground state is the result that would be
obtained after applying the unitary. A computation consists of evolving a time-dependent Hamiltonian H(s)
for time tf according to a schedule s(t) : [0, tf ] 7→ [0, 1], which interpolates between the two (non-commuting)
Hamiltonians: H(s) = (1−s)Hi +sHf [52]. The time required for an adiabatic algorithm to produce the result to
some specified error is related to the minimum spectral gap among all instantaneous Hamiltonians. The spectral
gap of the Hamiltonian at time t is the difference between the nth eigenvalue and the next closest one (e.g.,
difference between the ground-state energy and the first excited level). This quantity helps define how slow the
evolution needs to be in order to remain on the adiabatic trajectory—although it is notoriously difficult to bound
in practice [52]. One example of where this has been done is for the adiabatic version of Grover’s algorithm
(Section 4.1) [53]. There exist classes of time-dependent Hamiltonians that through adiabatic evolution can
approximate arbitrary unitary operators applied to qubits [52, 54]. Thus AQC can realize universal quantum
computation. As of this writing, there does not exist a commercially available device for universal AQC. By
polynomial equivalence to the circuit model, however, gate-based devices can efficiently simulate it. There are
commercially available devices that implement a specific AQC algorithm for combinatorial optimization, called
quantum annealing, discussed in Section 4.8. It is believed to be unlikely that the transverse-field Ising (stoquastic
[52]) Hamiltonian used for quantum annealing is universal [54].
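The interpolation and the role of the spectral gap can be illustrated on a single qubit. In the sketch below (ours; the choice Hi = −X, with easy-to-prepare ground state |+i, and Hf = −Z, with target ground state |0i, is for illustration only), H(s) = (1 − s)Hi + sHf has the matrix [[−s, −(1 − s)], [−(1 − s), s]] in the computational basis, so its eigenvalues are ±√(s² + (1 − s)²) and the gap is 2√(s² + (1 − s)²).

```python
import math

# Illustration (ours): spectral gap of H(s) = (1-s)*(-X) + s*(-Z) for a single
# qubit.  The closed form 2*sqrt(s^2 + (1-s)^2) follows from the 2x2 matrix.
def gap(s):
    return 2 * math.sqrt(s * s + (1 - s) ** 2)

# Scan a linear schedule and locate the minimum gap, which governs how slowly
# the adiabatic evolution must proceed.
grid = [k / 1000 for k in range(1001)]
min_gap = min(gap(s) for s in grid)
s_star = min(grid, key=gap)

assert abs(s_star - 0.5) < 1e-9             # gap is smallest midway
assert abs(min_gap - math.sqrt(2)) < 1e-9   # 2*sqrt(1/4 + 1/4) = sqrt(2)
```

For this toy instance the minimum gap is a constant, so the evolution can be fast; hard optimization instances are precisely those whose minimum gap shrinks with problem size.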

3.3 Quantum Hardware Challenges


As mentioned, the quantum computers available today are called NISQ devices. Multiple physical realizations of
qubit-based quantum computation have been developed over the past decade. The majority of efforts to develop
quantum technologies at this time are driven by industry and, to a lesser degree, by academia. Superconducting-
based quantum computers [55] are manufactured by IBM, Google, D-Wave, and Rigetti, among others. Quantum
computers using trapped atomic ions [56] have been developed, for example, by IonQ, Quantinuum, and AQT.
These two technologies are currently the most widely available and thus have been frequently utilized for research.
However, other promising technologies also are under development, including photonic systems by PsiQuantum
and Xanadu; neutral atoms by ColdQuanta, QuEra Computing, Atom Computing, and Pasqal; spins in silicon by
Silicon Quantum Computing; quantum dots; molecular qubits; and topological quantum systems. As mentioned
in Section 3.2, most of the current quantum devices and those that are in development, with the exception of
the D-Wave quantum annealers, follow the quantum circuit model. All of the mentioned quantum devices have
system-level attributes, which need to be considered in multiqubit systems. Significant scientific and engineering
efforts are needed to improve and optimize these devices. The most important attributes are discussed in the
remainder of this section.
The technical challenges of NISQ devices discussed below have a major impact on the current state of
quantum algorithms. The algorithms can be roughly split into two camps: (1) near-term algorithms designed to
run on NISQ devices and (2) algorithms that have a theoretically proven advantage for when hardware advances
enough but require a large number of logical qubits with extremely high-fidelity quantum gates. In this work
we cover both types of algorithms. One should keep in mind that arguably none of the discussed algorithms
implemented on NISQ devices provide a decisive advantage over classical algorithms yet.

3.3.1 Noise
Qubits lose their desired quantum state or decohere (Section 3.1) over time. The decoherence times of each
of the qubits are important attributes of a quantum device. Various mechanisms of qubit decoherence exist,
such as quantum amplitude and phase relaxation processes and depolarizing quantum noise [2]. One potentially
serious challenge for superconducting systems is cosmic rays [57]. While single-qubit decoherence is relatively well
understood, the multiqubit decoherence processes, generally called crosstalk, pose more serious challenges [58].
Even two-qubit operations have an order of magnitude higher error rate than do single-qubit operations. This
makes it difficult to entangle a large number of qubits without errors. Various error-mitigation techniques [59, 60]
have been developed for the near term. In the long term, quantum error correction using logical operations, briefly
mentioned in Section 3.2, will be necessary [43, 61]. However, this requires multiple orders of magnitude more
physical qubits than available on today’s NISQ systems, and native operations must have sufficiently low error
rates. Thus, algorithms executed on current hardware must contend with the presence of noise and are typically
limited to low-depth circuits. In addition, current quantum error correction techniques theoretically combat local
errors [43, 58, 61]. The robustness of these techniques to non-local errors is still being investigated [62].

3.3.2 Connectivity
Quantum circuits need to be optimally mapped to the topology of a quantum device in order to minimize errors
and total run time. With current quantum hardware, qubits can interact only with neighboring subsets of other
qubits. Existing superconducting and trapped-ion processors have a fixed topology that defines the allowed
connectivity. However, trapped-ion devices can change the ordering of the ions in the trap to allow for, originally,
non-neighboring qubits to interact [63]. Since this approach utilizes a different degree of freedom from that used
to store the quantum information for computation, technically the device can be viewed as having an all-to-all
connected topology. For existing superconducting processors, the positioning of the qubits cannot change. As a
result, two-qubit quantum operations between remote qubits have to be mediated by a chain of additional two-
qubit SWAP gates via the qubits connecting them. Moreover, the two-qubit gate error of current NISQ devices
is high. Therefore, quantum circuit optimizers and quantum hardware must be developed with these limitations
in mind. Connectivity is also a problem with current quantum annealers, which is discussed in Section 6.1.1.

3.3.3 Gate Speed


Having fast quantum gates is important for achieving quantum supremacy and quantum advantage with NISQ
devices in the quantum circuit model. However, some types of quantum devices, such as trapped-ion processors,
execute gates much more slowly than superconducting devices do, although they typically have lower gate error
rates. There is a well-known trade-off between space, speed, and fidelity. Execution time is
particularly relevant for algorithms that require a large number of circuit execution repetitions, such as variational
algorithms (Section 4.7) and sampling circuits. Error rates typically increase sharply if the gate time is reduced
below a certain duration. Thus, one must find the right balance, which often requires tedious calibrations and
fine-tuning.

3.3.4 Native Gates


Another important attribute, specific to gate-based quantum devices, is the set of available quantum gates that
can be executed natively, namely, those that map to physical operations on the system (Section 3.2.1). The
existence of a diverse universal set of native gates is crucial for designing short-depth high-fidelity quantum
circuits. Developing advanced quantum compilers is, therefore, critical for efficiently mapping general quantum
gates to the native gates of a particular device.
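Such a mapping can be checked numerically. The sketch below (ours) verifies one standard decomposition of the Hadamard gate into rotations of the kind that superconducting devices commonly expose natively: H = e^{iπ/2} RZ(π/2) RX(π/2) RZ(π/2), where the leading factor is a physically irrelevant global phase.

```python
import cmath, math

# Illustration (ours): H equals Rz(pi/2) Rx(pi/2) Rz(pi/2) up to a global
# phase of i, so a compiler can replace H by three native rotations.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rz(t):
    return [[cmath.exp(-1j * t / 2), 0], [0, cmath.exp(1j * t / 2)]]

def rx(t):
    c, s = math.cos(t / 2), math.sin(t / 2)
    return [[c, -1j * s], [-1j * s, c]]

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
product = matmul(rz(math.pi / 2), matmul(rx(math.pi / 2), rz(math.pi / 2)))

phase = 1j  # global phase; no observable effect
assert all(abs(phase * product[i][j] - H[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```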

4 Foundational Quantum Algorithms


In this section we discuss foundational quantum algorithms that are the building blocks for more sophisticated
algorithms. These algorithms have been extended to address problems in different domains, particularly finance,
as described in Sections 5, 6, and 7. We briefly review the concepts from computational complexity theory that
are relevant to the discussions in this survey. We refer the reader to the references for more rigorous discussions
on the topic.
Computational complexity, the efficiency in time or space with which computational problems can be solved,
is presented by using the standard asymptotic “Big-O” notation [64]: the set O(f (n)) for an upper bound on
complexity (worst case) and the set Ω(g(n)) for a lower bound on complexity (best case). Also, h(n) ∈ Θ(f (n))

if and only if h ∈ O(f (n)) and h ∈ Ω(f (n)).4 Computational problems, which are represented as functions, are
divided into complexity classes. The most common classes are defined based on decision problems, which can
be represented as Boolean functions. P denotes the complexity class of decision problems that can be solved in
polynomial time (i.e., the time required is upper bounded, in “Big-O” terms, by a polynomial in the input size) by a
deterministic Turing machine (TM), in other words, a traditional classical computer [2, 37]. An amount of time or
space required for computation that is upper bounded by a polynomial is called asymptotically efficient, whereas,
for example, an amount upper bounded by an exponential function is not. However, asymptotic efficiency does
not necessarily imply efficiency in practice; the coefficients or order of the polynomial can be large. NP denotes
the class of decision problems for which proofs of correctness exist, called witnesses, that can be executed in
polynomial time on a deterministic TM [37].
A problem is hard for a complexity class if any problem in that class can be reduced in polynomial time to
it, and it is complete if it is both hard and contained in the class [65]. Problems in P can be solved efficiently on
a classical or quantum device. While NP contains P, it also contains many problems not known to be efficiently
solvable on either a quantum or classical device. #P is a set of counting problems. More specifically, it is the
class of functions that count the number of witnesses that can be computed in polynomial time on a deterministic
TM for NP problems [65]. BQP, which contains P, denotes the class of decision problems solvable in polynomial
time on a quantum computer with bounded probability of error [2]. A common way to represent the complexity
of quantum algorithms is by using query complexity [66]. Roughly speaking, the problem setup provides the
algorithm access to functions that the algorithm considers as “black boxes.” These are represented as unitary
operators called quantum oracles. The asymptotic complexity is computed based on the number of calls, or
queries, to the oracles.
The goal of quantum algorithms research is to develop quantum algorithms that provide computational
speedups, a reduction in computational complexity. In practice, however, one commonly finds algorithms with
improved efficiency without any proven reduction in complexity. These algorithms typically utilize heuristics. As
mentioned in Section 3.3, a majority of algorithms for NISQ fall into this category. When one can theoretically
show a reduction in asymptotic complexity, the algorithm has a provable speedup for the problem. In the
discussions that follow, we emphasize the category into which each algorithm currently falls.

4.1 Quantum Unstructured Search


Grover’s algorithm [67] is a quantum procedure for solving unstructured search with an asymptotic quadratic
speedup, in terms of the number of oracle queries made, over the best known classical algorithm. The goal of
unstructured search is as follows: Given an oracle f : X 7→ {0, 1}, find w ∈ X such that f (w) = 1. Sometimes w
is also called a marked element. More specifically, the algorithm amplifies the probability of measuring the state
|wi encoding w such that f (w) = 1. In Grover’s algorithm, a marked state is identified by utilizing a phase oracle,
Of |xi = (−1)f (x) |xi, which can be a composition of an oracle computing f into a register and a conditional phase
gate. One can represent f by a unitary utilizing techniques for reversible computation [2]. Classically, in the
worst case f must be evaluated on all N items in the set, i.e., O(N ) queries. Grover’s algorithm makes O(√N )
queries. The complexity stated for this algorithm is said to be in terms of quantum query complexity. In fact, this algorithm
is optimal for unstructured search [68].
Along with an efficient unitary simulating f , the algorithm requires the ability to query uniform superpositions
of states representing elements of X . This criterion is typically referred to as quantum access to classical data
[69]. The general case can be achieved by using quantum random access memory (qRAM) [70]. As of this
writing, the problems associated with constructing a physical qRAM have not been solved. However, a uniform
superposition over computational basis states (i.e., with Hadamard gates, Section 3.2.1) can be used for the
simple case where X = {0, . . . , N − 1}. Moreover, hybrid techniques have been developed to deal with larger
classical data sets without qRAM [71]. Ambainis [72] developed a variable-time version of the algorithm that
more efficiently handles cases when queries can take different times to complete. We note that loading arbitrary
classical data onto a quantum computer is generally exponentially expensive. It is believed that a quantum
computer may instead help with small but computationally complex datasets, which are comparatively easy to
load onto a quantum computer.
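A classical statevector simulation makes the quadratic speedup concrete. The sketch below (ours) implements the Grover iteration as a phase flip on the marked element followed by inversion about the mean amplitude; after ⌊(π/4)√N⌋ oracle queries the marked item is measured with near-certainty.

```python
import math

# Illustration (ours): statevector simulation of Grover's algorithm over
# N = 2^n items with a single marked element w.
def grover(n, w):
    N = 2 ** n
    amp = [1 / math.sqrt(N)] * N              # uniform superposition (Hadamards)
    iters = math.floor(math.pi / 4 * math.sqrt(N))
    for _ in range(iters):
        amp[w] = -amp[w]                      # phase oracle O_f
        mean = sum(amp) / N                   # diffusion: reflect about the mean
        amp = [2 * mean - a for a in amp]
    return amp[w] ** 2                        # success probability

p = grover(10, w=123)       # N = 1024: ~25 oracle queries instead of ~1024
assert p > 0.99             # vs. 1/1024 for a single uniform random guess
```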

4.2 Quantum Amplitude Amplification


The quantum amplitude amplification (QAA) algorithm [73] is a generalization of Grover’s algorithm (Section
4.1). Suppose an oracle Oφ exists that marks a quantum state |φi (e.g., Oφ only acts nontrivially on |φi, say,
by adding a phase shift by π). Also, assume there is a unitary U and |Φi := U |ψi for some input state |ψi. In

4
Let ∗ signify O, Ω, or Θ. It is common practice to abuse notation and use the symbol = instead of ∈ to
signify g(n) is in ∗(f (n)) [64]. Similarly, it is common to interchange the phrases “the complexity is ∗(f (n))” and
“the complexity is in ∗(f (n)).”

addition, if |Φi has nonzero overlap with the marked state |φi, namely, hΦ| φi 6= 0, then without loss of generality
|Φi = sin(θa ) |φi + cos(θa )|φ⊥ i, where hφ| φ⊥ i = 0 and sin(θa ) = hΦ| φi. For example, in the case of Grover’s
algorithm, |Φi is a uniform superposition over states corresponding to the elements of X , and |φi is the marked
state. QAA will amplify the amplitude sin(θa ) and as a consequence the probability sin2 (θa ) of measuring |φi
(using an observable with |φi as an eigenstate) to Ω(1)5 using O(1/ sin(θa )) queries to U and Oφ . This is done
utilizing a series of interleaved reflection operators involving U , |ψi, and Oφ [73].
This algorithm can be understood as a quantum analogue to classical probabilistic boosting techniques and
achieves a quadratic speedup [73]. Classically, O(1/ sin2 (θa )) samples would be required, in expectation, to measure
|φi with Ω(1) probability. QAA is an indispensable technique for quantum computation because of the inherent
randomness of quantum mechanics. Improvements to the algorithm have been made to handle issues that occur
when one cannot perform the required reflection about |ψi [74], the number of queries to make (which requires
knowing sin(θa )) is not known [75, 76], and Oφ is imperfect with bounded error [77]. The last two improvements
are also useful for Grover’s algorithm (Section 4.1). A version also exists for quantum algorithms (i.e., a unitary
U ) that can be decomposed into multiple steps with different stopping times [78].
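The amplification admits a simple rotation picture: after k rounds of QAA, the amplitude of the marked state is sin((2k + 1)θa ), so one picks k to rotate the angle close to π/2. The numerical sketch below is ours:

```python
import math

# Illustration (ours): success probability after k rounds of amplitude
# amplification is sin^2((2k+1)*theta_a).
def success_prob(theta_a, k):
    return math.sin((2 * k + 1) * theta_a) ** 2

theta_a = 0.01                                  # initial success prob ~ 1e-4
k_opt = round(math.pi / (4 * theta_a) - 0.5)    # rotate (2k+1)*theta_a to ~pi/2
assert success_prob(theta_a, k_opt) > 0.999

# Quadratic speedup in query counts:
quantum_queries = k_opt                          # O(1/sin(theta_a))
classical_samples = 1 / math.sin(theta_a) ** 2   # O(1/sin^2(theta_a)) expected
assert quantum_queries < 2 * math.sqrt(classical_samples)
```

Over-rotating past π/2 decreases the success probability again, which is why the improved variants cited above matter when sin(θa ) is unknown.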

4.3 Quantum Phase Estimation


The quantum phase estimation (QPE) algorithm [79] serves as a critical subroutine in many quantum algorithms
such as HHL (Section 4.5). Given a desired additive error of δ, QPE makes O(1/δ) queries to a unitary U to
produce an estimate θ̃j of the phase θj ∈ [0, 1) of one of the eigenvalues of U , ei2πθj , such that |θj − θ̃j | ≤ δ.
The probability of successfully measuring an estimate θ̃j of the actual eigenvalue phase θj within error δ is Ω(1)
and can be boosted to 1 − ε by adding and then discarding O(log(1/ε)) ancillary qubits [2, 80]. Alternatively,
it can be boosted to 1 − 1/poly(n) by using O(log(n)) repetitions of QPE and taking the most frequent estimate,
according to standard Chernoff bounds [81]. This brings the overall query complexity to O(1/(δε)) and O(log(n)/δ),
respectively.
respectively. In the cases where the actual eigenvalue phase θj , can be represented with finite precision, QPE can
perform the transformation |vj i |0i 7→ |vj i |θj i, when |vj i is an eigenvector of U . A more detailed explanation of
the algorithm and what happens when the eigenvalue phases cannot be perfectly represented with the specified
precision, δ, is provided by Nielsen and Chuang [2].
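These guarantees can be checked against the textbook QPE outcome distribution [2]: for a t-bit phase register and eigenphase θ, outcome k ∈ {0, . . . , 2^t − 1} occurs with probability |(1/2^t) Σj e^{2πij(θ−k/2^t)}|². The sketch below is our own illustration of that closed form, not a circuit simulation:

```python
import cmath, math

# Illustration (ours): the QPE measurement distribution for a t-bit register
# reading out an eigenphase theta of U.
def qpe_distribution(theta, t):
    M = 2 ** t
    probs = []
    for k in range(M):
        amp = sum(cmath.exp(2j * math.pi * j * (theta - k / M))
                  for j in range(M)) / M
        probs.append(abs(amp) ** 2)
    return probs

theta = 0.3
t = 6                                   # 6-bit register: delta = 1/64
probs = qpe_distribution(theta, t)
k_best = max(range(2 ** t), key=lambda k: probs[k])

assert abs(k_best / 2 ** t - theta) <= 1 / 2 ** t   # estimate within delta
assert probs[k_best] > 0.4                          # Omega(1) success prob
```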

4.4 Quantum Amplitude Estimation


The quantum amplitude estimation (QAE) algorithm [73] estimates the total probability of measuring states
marked by a quantum oracle. Consider the state |Φi, oracle Oφ , unitary U , and amplitude sin(θa ) as defined in
Section 4.2. QAE utilizes O(1/(δ sin(θa ))) queries to U and Oφ to estimate sin2 (θa ) to relative additive error δ sin2 (θa )
[81]. The algorithm utilizes QPE (Section 4.3) as a subroutine, which scales as O(1/δ), for desired additive error
δ. Some variants of QAE do not make use of QPE [82–84], including a variational (Section 4.7) one [85], and
are potentially more feasible on near-term devices. An important application of QAE to finance is to perform
Monte Carlo integration, discussed in Section 5.1. Other applications of QAE include estimating the probability
of success of a quantum algorithm and counting.

4.5 Quantum Linear System Algorithms


The quantum linear systems problem (QLSP) is defined as follows: Given an O(1)-sparse invertible Hermitian
matrix A ∈ RN ×N and a vector ~b ∈ RN , output a quantum state |xi that is the solution to A~x = ~b up to a
normalization constant with bounded error probability [86]. While this definition requires the sparsity to be
independent of the dimensions of the matrix, the quantum linear system algorithms (QLSAs) have a polynomial
dependence on sparsity. As can be understood by viewing the time complexities of these algorithms, we can
allow for O(polylog(N ))–sparse matrices, which is what is meant when we simply say “a sparse matrix” [87]. The
O(1)-sparsity specification comes from the decision problem version of QLSP that is BQP-complete [86, 88].
The Harrow–Hassidim–Lloyd (HHL) algorithm [88] is the first algorithm invented for solving the QLSP. It
provides an exponential speedup in the system size N for well-conditioned matrices (for QLSP this implies a
condition number in O(polylog(N ))) [89], over all known classical algorithms for a simulation of this problem.
For matrix sparsity s and condition number κ, HHL runs with worst-case time complexity O(s2 κ2 log(N )/ε),

5
Ω(1) probability means the probability that the event occurs is at least some constant. This is used to signify
that the desired output of the algorithm occurs with a probability that is independent of the variables in the
algorithm’s time complexity or query complexity. In addition, this implies an asymptotically efficient amount of
repetitions (classical probabilistic boosting) can be used to increase the probability of success close to 1 (e.g., see
Section 4.3).

for desired error ε. This complexity was computed under the assumption that a procedure based on higher-
order Suzuki–Trotter methods [90, 91] was used for sparse Hamiltonian simulation. However, a variety of more
efficient techniques have been developed since the paper’s inception [92, 93]. These potentially reduce the original
HHL’s quadratic dependence on sparsity. HHL’s query complexity, which is independent of the complexity of
Hamiltonian simulation, is O(κ2 /) [94].
Alternative quantum linear systems solvers also exist that have better dependence on some of the parameters,
such as an almost-linear dependence on the condition number from Ambainis [78] and a polylogarithmic dependence
on 1/ε for precision from Childs, Kothari, and Somma (CKS) [87]. In addition, Wossnig et al. [95] utilized the
quantum singular value estimation (QSVE) algorithm of Kerenidis and Prakash [69] to implement a QLS solver for
dense matrices. This algorithm has O(√N polylog(N)) dependence on N and hence offers no exponential speedup.
However, it still obtains a quadratic speedup over HHL for dense matrices. Following this, Kerenidis and Prakash
generalized the QSVE-based linear systems solver to handle both sparse and dense matrices and introduced
a technique for spectral norm estimation [96]. In addition, the quantum singular value transform (QSVT)
framework provides methods for QLS that have the improved dependencies on κ and 1/ε mentioned above
[97, 98]. The QSVT framework also provides algorithms for a variety of other linear algebraic routines, such as
implementing singular-value-threshold projectors [69, 99] and matrix-vector multiplication [97]. As an alternative
to QLS based on the QSVT, Costa et al. [94] devised a discrete-time adiabatic approach [100] to the QLSP that
has optimal [88] query complexity: linear in κ and logarithmic in 1/ε. Since QLS algorithms invert Hermitian
matrices, they can also be used to compute the Moore–Penrose pseudoinverse of an arbitrary matrix. This
requires computing the Hermitian dilation of the matrix and filtering out singular values that are near zero
[88, 99].
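As a classical illustration of this reduction (our own sketch, not an implementation from the references), the snippet below embeds a rectangular matrix A in its Hermitian dilation, inverts only the eigenvalues away from zero, and recovers the action of the Moore–Penrose pseudoinverse:

```python
import numpy as np

def pseudoinverse_via_dilation(A, b, tol=1e-10):
    """Classical analogue of the QLS trick: pseudo-invert the Hermitian
    dilation H = [[0, A], [A^dag, 0]] on the padded vector [b; 0]; the last
    block of the result is x = A^+ b."""
    m, n = A.shape
    H = np.block([[np.zeros((m, m)), A],
                  [A.conj().T, np.zeros((n, n))]])
    rhs = np.concatenate([b, np.zeros(n)])
    w, V = np.linalg.eigh(H)
    keep = np.abs(w) > tol          # filter out singular values near zero
    y = V[:, keep] @ ((V[:, keep].conj().T @ rhs) / w[keep])
    return y[m:]                    # x = A^+ b lives in the last n entries

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, 0.0, 1.0])
x = pseudoinverse_via_dilation(A, b)
# x agrees with np.linalg.pinv(A) @ b
```

The eigenvalue filter plays the role of the singular-value filtering step mentioned above; on a quantum device this filtering is done within the QLS or QSVT routine rather than by explicit diagonalization.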
While solving the QLSP does not provide classical access to the solution vector, this can still be done
by utilizing methods for vector-state tomography [101]. Applying tomography would no longer allow for an
exponential speedup in the system size N . However, polynomial speedups using tomography for some linear-
algebra-based algorithms are still possible (Section 6.2). In addition, without classical access to the solution,
certain statistics, such as the expectation of an observable with respect to the solution state, can still be obtained
without losing the exponential speedup in N . Providing quantum access to A and ~b is a difficult problem that can,
in theory, be dealt with by using quantum models for data access. Two main models of data access for quantum
linear algebra routines exist [97]: the sparse-data access model and the quantum-accessible data structure. The
sparse-data access model provides efficient quantum access to the nonzero elements of a sparse matrix. The
quantum-accessible data structure, suitable for fixed-sized, non-sparse inputs, is a classical data structure stored
in a quantum memory (e.g., qRAM) and provides efficient quantum queries to its elements [97].
For problems that involve only low-rank matrices, classical Monte Carlo methods for numerical linear algebra
(e.g., the FKV algorithm [102]) can be used. These techniques have been used to produce classical algorithms
that have an asymptotic exponential speedup in dimension for various problems involving low-rank linear algebra
(e.g., some machine learning problems) [103, 104]. As of this writing, these classical dequantized algorithms have
impractical polynomial dependence on other parameters, making the quantum algorithms still useful. In addition,
since these results do not apply to sparse linear systems, there is still the potential for provable speedups for
problems involving sparse, high-rank matrices.

4.6 Quantum Walks


Formally, random walks are discrete-time stochastic processes formed through the summation of independent
and identically distributed random variables [105] and play a fundamental role in finance [106]. In the limit,
such discrete-time processes approach the continuous-time Wiener process [107]. One type of stochastic process
whose logarithm follows a Wiener process is geometric Brownian motion (GBM), commonly used to model market
uncertainty. Random walks have also been generalized to the vertices of graphs [108]. In this case they can be
viewed as finite-state Markov chains over the set of vertices [109]. Quantum walks are a quantum mechanical
analogue to the processes mentioned above [110]. Generally, quantum walks in discrete time can be viewed as a
Markov chain over its edges (i.e., a quantum analogue of a classical bipartite walk) [111, 112] and consist of a series
of interleaved reflection operators. This approach generalizes QAA (Section 4.2) and Grover’s algorithm (Section
4.1) [113, 114]. Discrete-time quantum walks have been shown to provide polynomially faster mixing times [115–
117] as well as polynomially faster hitting times for Markov-chain-based search algorithms [111, 112, 118–122].
Alternatively, there are quantum walks in continuous time [123, 124], which have certain provable advantages
[125], using a different notion of hitting time. A similar result, using this notion, has also been shown for
discrete-time quantum walks [126]. In addition, the various mixing operators used in the Quantum Approximate
Optimization Algorithm (QAOA, Section 6.1.2) can be viewed as continuous-time quantum walks connecting
feasible solutions [127]. There have been multiple proposed unifications of quantum walks in continuous and
discrete time [128, 129], paralleling the known correspondence for the classical versions [107]. For a comprehensive review of
quantum walks, see the survey by Venegas–Andraca [129]; and for an overview and a discussion of connections
between the various quantum-walk-based search algorithms, see the work of Apers et al. [119].
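As a classical reference point for these processes, the GBM model mentioned above can be simulated by exponentiating a drift-adjusted scaled random walk, whose cumulative sum approximates the driving Wiener process. The sketch below is our own illustration, with arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

def gbm_paths(s0, mu, sigma, T, n_steps, n_paths):
    """Approximate GBM by exponentiating a scaled random walk: the cumulative
    sum of i.i.d. N(0, dt) steps approximates a Wiener process W_t, and
    S_t = s0 * exp((mu - sigma^2 / 2) * t + sigma * W_t)."""
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    W = np.cumsum(dW, axis=1)            # discrete walk -> Wiener limit
    t = dt * np.arange(1, n_steps + 1)
    return s0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)

# Illustrative parameters: one year of daily steps.
paths = gbm_paths(s0=100.0, mu=0.05, sigma=0.2, T=1.0, n_steps=252, n_paths=20000)
# Sanity check: under GBM, E[S_T] = s0 * exp(mu * T).
```

The quantum-walk constructions discussed above generalize the underlying walk, not this particular financial model; the snippet is only meant to ground the classical limit being referred to.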

4.7 Variational Quantum Algorithms
Variational quantum algorithms (VQAs) [130], also known as classical-quantum hybrid algorithms, are a class
of algorithms, typically for gate-based devices, that modify the parameters of a quantum (unitary) procedure
based on classical information obtained from running the procedure on a quantum device. This information is
typically in the form of a cost function dependent on the expectations of observables with respect to the state
that the quantum procedure produces. Generally, the training of quantum variational algorithms is NP-hard
[131], which precludes guarantees of efficiently reaching arbitrarily good approximate local minima. In addition,
these methods are heuristic. The prototypical quantum-circuit-based variational algorithm is the variational quantum
eigensolver (VQE) [132] and is used to compute the minimum eigenvalue of an observable (e.g., the ground-state
energy of a physically realizable Hamiltonian). However, it is also applicable to optimization problems (Section
6.1.3) and both linear (e.g., quantum linear systems [133, 134]) and nonlinear problems (Section 5.2.2). VQE
utilizes one of the quantum variational principles based on the Rayleigh quotient of a Hermitian operator (Section
6.1.3). The quantum procedure that provides the state, called the ansatz, is typically a parameterized quantum
circuit (PQC) built from Pauli rotations and two-qubit gates (Section 3.2.1). For VQAs in general, finding the
optimal set of parameters is a non-convex optimization problem [135]. A variation of this algorithm, specific
to combinatorial optimization problems, known as the quantum approximate optimization algorithm (QAOA)
[136], utilizes the alternating operator ansatz [137] and is typically used for observables that are diagonal in the
computational basis. There are also variational algorithms for approximating certain dynamics, such as real-time
and imaginary-time quantum evolution that utilize variational principles for dynamics [138] (e.g., McLachlan’s
principle [139]). Furthermore, variational algorithms utilizing PQCs and various cost functions have been applied
to machine learning tasks (Section 7.6) [140]. VQAs are seen as one of the leading potential applications of
near-term quantum computing.
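The VQE loop can be sketched classically as follows; this is our own toy example (a single qubit, a one-parameter Ry ansatz, and a grid search standing in for the classical optimizer), not code from the cited works:

```python
import numpy as np

# Toy "Hamiltonian": a 2x2 Hermitian observable whose minimum eigenvalue
# we want (on a real device this expectation would be measured term by term).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def ansatz(theta):
    """One-parameter PQC on one qubit: Ry(theta)|0> = [cos(t/2), sin(t/2)]."""
    return np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])

def energy(theta):
    """Cost function: the Rayleigh quotient <psi|H|psi> (psi is normalized)."""
    psi = ansatz(theta)
    return psi @ H @ psi

# Classical outer loop; a crude grid search stands in for a real optimizer.
thetas = np.linspace(0.0, 2.0 * np.pi, 2001)
theta_best = min(thetas, key=energy)
# energy(theta_best) approximates np.linalg.eigvalsh(H)[0]
```

By the Rayleigh variational principle, the minimum of this cost over all states is exactly the smallest eigenvalue; the ansatz restricts the search to a parameterized family, which is the essential trade-off of VQE.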

4.8 Adiabatic Quantum Optimization


Adiabatic quantum optimization [141], commonly referred to as quantum annealing (QA), is an AQC (Section
3.2.2) algorithm for quadratic unconstrained binary optimization (QUBO). QUBO is an NP-hard combinatorial
optimization problem. These problems are discussed in Section 6.1. The main reason behind the interest in QA
is that it provides a nonclassical heuristic, called quantum tunneling, that is potentially helpful for escaping from
local minima. Its closest classical analogue is simulated annealing [142], a Markov chain Monte Carlo method
that was inspired by classical thermodynamics and uses temperature as a heuristic to escape from local minima.6
So far QA has proven useful for solving QUBOs with tall yet narrow peaks in the cost function [143, 144].
The overall benefit of QA is still a topic of ongoing research [144, 145]. However, the current scale of quantum
annealers allows for potential applicability in the near term [146].
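For intuition on the classical baseline, the following is a minimal simulated-annealing sketch for a toy QUBO (the instance and all schedule parameters are illustrative, our own example):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_annealing_qubo(Q, n_sweeps=200, t_hot=2.0, t_cold=0.01):
    """Minimize x^T Q x over binary x with single-bit-flip Metropolis moves
    under a geometrically decreasing temperature schedule."""
    n = Q.shape[0]
    x = rng.integers(0, 2, n)
    for t in np.geomspace(t_hot, t_cold, n_sweeps):
        for i in range(n):
            x_new = x.copy()
            x_new[i] ^= 1                       # propose one bit flip
            delta = x_new @ Q @ x_new - x @ Q @ x
            # Accept downhill moves; accept uphill moves with prob e^{-delta/t}.
            if delta < 0 or rng.random() < np.exp(-delta / t):
                x = x_new
    return x, float(x @ Q @ x)

# Toy QUBO whose optimum (-3, e.g., at x = [0, 1, 0]) is easy to verify by brute force.
Q = np.array([[-2.0, 1.0, 0.0],
              [1.0, -3.0, 2.0],
              [0.0, 2.0, -1.0]])
x_best, e_best = simulated_annealing_qubo(Q)
```

Temperature here plays the role that tunneling plays in QA: it lets the chain escape local minima early on, while the cooling schedule freezes it into a low-energy configuration.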

5 Stochastic Modeling
Stochastic processes are commonly used to model phenomena in physical sciences, biology, epidemiology, and
finance. In finance, stochastic modeling is often used to help make investment decisions, usually with a goal of
maximizing return and minimizing risks. Quantities that are descriptive of the market condition, including stock
prices, interest rates, and their volatilities, are often modeled by stochastic processes and represented by random
variables. The evolution of such stochastic processes is governed by stochastic differential equations (SDEs), and
stochastic modeling aims to solve the SDEs for the expectation value of a certain random variable of interest,
such as the expected payoff of a financial derivative at a future time, which determines the price of the derivative.
Although analytical solutions for SDEs are available for a few simple cases, such as the well-known Black–
Scholes equation for European options [106], the vast majority of financial models involve SDEs of more complex
forms, for which one has to resort to numerical approaches.
In the following subsections, we review two commonly used numerical methods for solving SDEs, namely,
Monte Carlo integration (MCI, Section 5.1) and numerical solutions of differential equations (ODEs/PDEs,
Section 5.2), and we discuss respective quantum solutions and potential advantages. Then, in Section 5.3, we
show example financial applications for these quantum solutions.

5.1 Monte Carlo Integration


Monte Carlo methods utilize sampling to approximate the solutions to problems that are intractable to solve
analytically or with numerical methods that scale poorly for high-dimensional problems. Classical Monte Carlo

6 Somma et al. [120] produced a quantum algorithm with provable speedup, in terms of the spectral gap, for
simulated annealing using quantum walks (Section 4.6).
methods have been used for inference [147], integration [148], and optimization [149]. Monte Carlo integration
(MCI), the focus of this subsection, is critical to finance for pricing and risk predictions [12]. These tasks often
require large amounts of computational power and time to achieve the desired precision in the solution. Therefore,
MCI is another key technique, heavily used in finance, for which a quantum approach is appealing. Interestingly,
there exists a quantum algorithm with a proven advantage over classical Monte Carlo methods for numerical
integration.
In stochastic modeling, MCI typically is used to estimate the expectation of a quantity that is a function of
other random variables. The methodology usually starts with a stochastic model (e.g. SDEs) for the underlying
random variables from which samples can be taken and the corresponding values of the target quantity subse-
quently evaluated given the samples drawn. For example, consider the following sequence of random variables,
X0 , . . . , XT . These random variables are often assumed to follow a diffusion process [107], where each Xt with
t ∈ {0, 1, . . . , T } represents the value of the quantity of interest at time t, and the collective values of the ran-
dom variables from the outcome of a single drawing are known as a sample path. Suppose the quantity whose
expectation we want to compute is a function of this process at various time points: g(X0 , . . . , XT ). In order to
estimate the expectation, many sample paths are drawn, and sample averages of g are computed. The estimation
error decays as O(1/√N_s), independent of the problem dimension, where N_s is the number of paths taken. This is
in accordance with the law of large numbers and Chebyshev's inequality [148].
Quantum MCI (QMCI) [150–152] provides a quadratic speedup for Monte Carlo integration by making use
of the QAE algorithm (Section 4.4).7 With QAE, using the example from the previous paragraph, the error in
the expectation computation decays as O(1/N_q), where N_q is the number of queries made to a quantum oracle that
computes g(X_0, . . . , X_T). This is in contrast to the complexity in terms of the number of samples mentioned
earlier for MCI. Thus, if samples are considered as classical queries, QAE requires quadratically fewer queries
than classical MCI requires in order to achieve the same desired error.
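The classical O(1/√N_s) error decay is easy to observe numerically; the toy experiment below (our own illustration) estimates E[X²] = 1 for X ~ N(0, 1) and compares the RMS error at two sample sizes:

```python
import numpy as np

rng = np.random.default_rng(1)

def rms_error(n_samples, n_trials=200):
    """RMS error, over repeated trials, of the MCI estimate of E[X^2] = 1
    for X ~ N(0, 1) using n_samples samples per trial."""
    x = rng.normal(size=(n_trials, n_samples))
    estimates = (x**2).mean(axis=1)
    return np.sqrt(((estimates - 1.0) ** 2).mean())

err_small = rms_error(100)
err_large = rms_error(10000)
# With 100x more samples, the error shrinks roughly 10x, as O(1/sqrt(Ns)) predicts.
```

QMCI's O(1/N_q) decay means that, counted in oracle queries, the same hundredfold improvement in error would require only a tenfold increase in work.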
The general procedure for Monte Carlo integration utilizing QAE is outlined below [154]. Let Ω be the set
of potential sample paths ω of a stochastic process distributed according to p(ω), and let f : Ω → A be a
real-valued function on Ω, where A ⊂ R is bounded. The task is to compute E[f(ω)] [152], which can be achieved
with QAE in three steps as follows.

1. Construct a unitary operator P_l to load a discretized and truncated version of p(ω). The probability value
p(ω) translates to the amplitude of the quantum state |ω⟩ representing the discrete sample path ω. In
mathematical form, it is
P_l |0⟩ = Σ_{ω∈Ω} √(p(ω)) |ω⟩ .    (1)

2. Convert f into a normalized function f̃ : Ω → [0, 1], and construct a unitary P_f that computes f̃(ω) and
loads the value onto the amplitude of |ω⟩.8 The resultant state after applying P_l and P_f is
P_f P_l |0⟩ = Σ_{ω∈Ω} ( √((1 − f̃(ω)) p(ω)) |ω⟩ |0⟩ + √(f̃(ω) p(ω)) |ω⟩ |1⟩ ) .    (2)

3. Using the notation from Section 4.4, perform quantum amplitude estimation with U = P_f P_l and an
oracle, O_φ, that marks states with the last qubit being |1⟩. The result of QAE will be an approximation
to Σ_{ω∈Ω} f̃(ω) p(ω) = E[f̃(ω)]. This value can be estimated to a desired error ε utilizing O(1/ε) evaluations of
U and its inverse [152]. Then scale E[f̃(ω)] back to the original bounded range, A, of f to obtain E[f(ω)].
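The first two steps can be checked on a small statevector. The sketch below (a classical simulation with a made-up four-point distribution and payoff) builds the amplitudes of Eq. (2) directly and confirms that the probability of the flag qubit being |1⟩, the quantity QAE estimates, equals E[f̃(ω)]:

```python
import numpy as np

# Made-up four-point distribution p(w) and payoff rescaled into [0, 1].
p = np.array([0.1, 0.2, 0.3, 0.4])
f = np.array([0.0, 0.5, 0.25, 1.0])

# Amplitudes of |w>|0> and |w>|1> after applying P_f P_l to |0>.
amp0 = np.sqrt((1.0 - f) * p)
amp1 = np.sqrt(f * p)
state = np.ravel(np.column_stack([amp0, amp1]))   # basis order |w>|flag>

assert np.isclose(state @ state, 1.0)             # a valid (normalized) state
prob_flag_one = float(np.sum(amp1**2))            # Pr[flag qubit = |1>]
# prob_flag_one equals sum_w f(w) p(w) = E[f(w)], the value QAE would estimate.
```

A classical simulation like this scales exponentially in the number of qubits; the point of QAE is to estimate prob_flag_one with O(1/ε) coherent queries rather than by enumerating the statevector.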

Variants [152] of QMCI also exist that can be applied when f (ω) has bounded L2 norm and bounded variance
(both absolutely and relatively). The problem of estimating a multidimensional random variable is called multi-
variate Monte Carlo estimation. Quantum algorithms that obtain similar quadratic speedups in terms of the error ε
have been proposed by Cornelissen and Jerbi [156]. They solved this problem in the framework of Markov reward
processes, and thus it has applicability to reinforcement learning (Section 7.7).
As mentioned earlier (Section 4.1), preparing an arbitrary quantum state is a difficult problem. In the case
of QMCI, we need to prepare the state in (2). The Grover–Rudolph algorithm [157] is a promising technique
for loading efficiently integrable distributions (e.g., log-concave distributions). It requires the ability to compute
the probability mass over subregions of the sample space. However, it has been shown that when numerical

7 To disambiguate, there are classical computational techniques called quantum Monte Carlo methods used in
physics to classically simulate quantum systems [153]. These are not the topic of this discussion.
8 In general, quantum arithmetic methods [155] can be used to load arcsin(√f̃(ω)) into a register and perform
controlled rotations to load √f̃(ω) onto the amplitudes [154].
methods such as classical MCI are used to integrate the distribution, QMCI does not provide a quadratic speedup
when using this state preparation technique [158]. Additionally, variational methods (Section 4.7) have been
developed to load distributions [159], such as normal distributions [154]. Geometric Brownian motion (GBM) is
an example of a diffusion process whose solution is a series of log-normal random variables. These variables can be
implemented by utilizing procedures for loading standard normal distributions, followed by quantum arithmetic
[154, 155].
While GBM is common for modeling stock prices, it is often considered unrealistic [160] because it assumes
a constant volatility. In addition, unlike realistic models, the SDE of GBM can be solved in closed form; that
is, the distribution at every point in time is known, which avoids the need to use approximations involving the
SDE to simulate the process. The local volatility (LV) model is one approach for accounting for a changing
volatility by making it a function of the current price and time step. However, the corresponding SDE does
not have a closed-form solution, and hence numerical methods such as Euler–Maruyama schemes [161] must
be used to simulate paths of the SDE with discretized time steps. Kaneko et al. [162] proposed
quantum circuits implementing the required arithmetic for simulating the LV model, namely, the operation Pl ,
based on improved techniques for simulating stochastic dynamics on a quantum device [163]. However, the
discretized simulation of the SDE introduces additional error and complexities when computing expectations of
functions of the stochastic process the SDE defines. Popular classical approaches to this problem are known as
Multilevel Monte Carlo (MLMC) methods [164], which can be used to approximately recover the error scaling of
classical single-level MCI. An et al. [165] proposed a quantum-enhanced version of MLMC that utilizes QMCI as a
subroutine to compute all expectations involved. This allowed them to approximately recover the error scaling
of QMCI, and the approach benefits from being applicable to a wider variety of stochastic dynamics beyond
GBM and LV. With regard to implementing payoff functions, Herbert [166] developed a near-term technique for
functions that can be approximated well by truncated Fourier series. Furthermore, if fault tolerance (Section 3)
is to be taken into account, advancements in the field of quantum error correction are required in order to realize
quantum advantage [167].
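For reference, the classical Euler–Maruyama discretization used in these LV pipelines is straightforward; the snippet below simulates a toy local-volatility SDE (the volatility surface and all parameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def euler_maruyama_lv(s0, r, sigma_loc, T, n_steps, n_paths):
    """Euler-Maruyama paths of dS = r*S*dt + sigma_loc(S, t)*S*dW, the form
    of a local-volatility SDE, which in general has no closed-form solution."""
    dt = T / n_steps
    s = np.full(n_paths, s0)
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        s = s + r * s * dt + sigma_loc(s, k * dt) * s * dW
    return s

# Invented toy volatility surface: higher volatility at lower spot levels.
sigma_loc = lambda s, t: 0.2 + 0.1 * np.exp(-s / 100.0)
s_T = euler_maruyama_lv(s0=100.0, r=0.02, sigma_loc=sigma_loc, T=1.0,
                        n_steps=200, n_paths=50000)
# The sample mean of s_T should be close to the forward s0 * exp(r * T).
```

The time discretization is the source of the extra bias that MLMC, and its quantum-enhanced variant, are designed to control at reduced cost.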
Also worth noting are non-unitary methods for efficient state preparation [44, 168, 169]. These methods
are usually incompatible with QAE, however, because of the non-invertibility of the operations involved. The
methods would be more useful in a broader context where the inverse of the state preparation operation is not
required; this topic is out of the scope of this survey.

5.2 Numerical Solutions of Differential Equations


While Monte Carlo methods are widely used for modeling stochastic processes, other numerical methods are also
commonly used to calculate properties of stochastic systems. In fact, according to the Feynman–Kac formula
[170, 171], the expectation of certain random variables, whose evolution is described by SDEs, can be formulated
as the solution to parabolic partial differential equations (PDEs). This connection between SDEs and PDEs
allows one to study stochastic processes using deterministic methods. Specifically, this enables an alternative
to Monte Carlo methods for studying SDEs. A prime example of an alternative to MCI, applicable to certain
problems, is the Black–Scholes PDE, which forms the keystone for pricing financial derivatives.
Classical numerical methods9 for solving PDEs and ordinary differential equations (ODEs) usually require
discretization of the domain on which solutions are sought. Specifically, the finite difference method (FDM)
solves a PDE by approximating the solution and its derivatives on a predefined grid in both space and time
and advancing it in time on the grid. Similarly, the finite volume method (FVM) employs a grid to divide the
space into volumes, on which the function to be solved is integrated and the volume integrals are converted to
surface integrals using the divergence theorem. These surface integrals are then connected by the original PDE.
Another grid-based method, the finite element method (FEM), uses functions to form a basis of solutions on the
subdomains divided by the grid. These basis functions are then assembled into an approximation of the solution
in the entire domain.
In addition to the grid-based methods mentioned above, spectral methods may also be used to solve PDEs.
Spectral methods [173] expand the solution into a linear combination of basis functions in the entire domain,
whose spatial derivatives can be computed exactly. Although spectral methods do not require a grid when
computing the spatial derivatives, numerical solution on a computer still requires evaluation of the functions on
a grid. Moreover, spectral methods are often expensive to implement for PDEs with nonlinear terms. The reason
is that spectral methods often require Fourier transforms, which have a computational cost of O(N log N ), with
N being the number of points on which the solution is evaluated. In contrast, grid-based spatial differentiation
methods usually have a complexity of O(N ).
All these techniques can easily become computationally intractable for complex and high-dimensional prob-
lems, since they often require extremely large grid sizes to achieve desired accuracy and numerical stability in the

9 For details on numerical methods for PDEs, see, for example, [172].
solution. Specifically, these algorithms have complexities that scale at least linearly with the number of points N
on which the solution is to be evaluated and exponentially in the number of dimensions d of the spatial variable.
On the other hand, quantum algorithms often address the same problem by representing a vector of size N
using O(log N ) space, and operating on it in potentially O(poly(log N )) time.

5.2.1 Quantum-linear-system-based algorithms


Since classical algorithms for numerical solutions of PDEs such as the ones mentioned above often resort to the
solution of linear systems, a natural candidate for quantum algorithms for PDEs is to employ the quantum linear
system algorithms as mentioned in Section 4.5.
Because of the linear nature of QLS, QLSAs have been used to solve linear PDEs and ODEs [174–178]. These
algorithms often consider differential equations with the following form:

L(u(x)) = f (x),

where x ∈ C^d is a d-dimensional vector, f(x) ∈ C is a scalar function that accounts for the inhomogeneity
in the PDE, u(x) ∈ C is the scalar function that we are solving for, and L is a linear differential operator. The
differential equations are transformed into a set of linear equations by using the same approaches as the classical
algorithms mentioned above for computing numerical derivatives given a discretization. The resultant linear
system can then be solved by QLSAs. This approach can potentially give the PDE solver algorithm a complexity
of O(poly(d, log(1/ε))), where ε is the error tolerance, which is an exponential improvement in d compared with
that of the best-known classical algorithms.
In particular, FDM-based quantum algorithms [174–177, 179] use quantum states |a⟩ = Σ_x u(x) |x⟩ and
|b⟩ = Σ_x f(x) |x⟩ to encode the functions u(x) and f(x) (up to a normalization factor), with the N computational
|bi = x f (x) |xi to encode the functions u(x) and f (x) (up to a normalization factor), with the N computational
basis states representing the N discretized points in x and amplitudes proportional to the scalar values of the
functions at the respective points. A finite-difference scheme that defines how the derivatives are approximated
using values from neighboring grid points is then used to convert L into an N × N matrix operator A acting on
|a⟩, and hence the solution of the differential equations becomes a QLSP:

A |a⟩ = |b⟩,

where |a⟩ is the solution we would like to obtain.
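To make the reduction concrete, the classical sketch below (our own toy example) discretizes the 1D Poisson equation u''(x) = f(x) with central differences and solves the resulting tridiagonal system; a QLSA would be handed the same sparse A and the state |b⟩:

```python
import numpy as np

# Discretize u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0 (toy example).
n = 127                                  # interior grid points
h = 1.0 / (n + 1)
x = h * np.arange(1, n + 1)
f = np.sin(np.pi * x)                    # exact solution: u = -sin(pi x) / pi^2

# Second-order central differences turn L = d^2/dx^2 into a sparse
# (tridiagonal) N x N matrix A.
A = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / h**2

u = np.linalg.solve(A, f)                # classical stand-in for the QLS step
max_err = np.max(np.abs(u - (-np.sin(np.pi * x) / np.pi**2)))
# max_err is O(h^2), i.e., tiny on this grid
```

Classically the solve costs at least linear time in N; the quantum advantage claimed above comes from preparing |a⟩ in time polylogarithmic in N, at the price of only having quantum access to the solution.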


Similarly, quantum versions of FVM [180] and FEM [178] also use amplitude-encoded quantum states to
represent the solution or the basis functions used to approximate the solution, and generate the matrix A based
on the relation of the grid points as defined by differential equations and the respective methods. The same
applies to quantum algorithms based on spectral methods, in which the discretization is done in the spectral
space [177].
To extend the QLSA-based algorithms to the solution of nonlinear differential equations, linearization of
the equations is usually required. One approach is Carleman linearization [181, 182], which approximates the
nonlinear functions in an ODE with truncated Taylor expansions of them, so that the nonlinear ODE can be
approximated by a system of linear ODEs in the different powers of the solution. Specifically, Liu et al. [183]
utilize Carleman linearization to convert dissipative quadratic ODEs into linear ODEs, which are then solved
by QLSA. The overall complexity achieved by this algorithm is O(qT² poly(log T, log d, log(1/ε))/ε), where T is
the evolution time requested for the solution, q measures the decay of the solution, and d and ε are the same as
defined earlier.
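The idea can be sketched in the scalar case (our own toy example, not the construction of [183]): for x' = -a·x + b·x², the monomials y_k = x^k satisfy the linear system y_k' = -a·k·y_k + b·k·y_{k+1}, which is truncated at some order K:

```python
import numpy as np

# x' = -a*x + b*x^2; Carleman variables y_k = x^k obey the linear system
#   y_k' = -a*k*y_k + b*k*y_{k+1},  truncated at order K (set y_{K+1} := 0).
a, b, x0, T, K = 1.0, 0.2, 0.5, 2.0, 6

C = np.zeros((K, K))
for k in range(1, K + 1):
    C[k - 1, k - 1] = -a * k
    if k < K:
        C[k - 1, k] = b * k

n_steps = 20000
dt = T / n_steps

y = x0 ** np.arange(1, K + 1)            # initial condition (x0, x0^2, ...)
for _ in range(n_steps):                 # explicit Euler on the *linear* system
    y = y + dt * (C @ y)
x_carleman = y[0]

x = x0                                   # reference: Euler on the nonlinear ODE
for _ in range(n_steps):
    x = x + dt * (-a * x + b * x**2)
# For this dissipative instance, x_carleman and x agree to high accuracy.
```

Dissipation (a > 0) is what keeps the truncation error small here, mirroring the dissipativity condition required by the quantum algorithm.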
Another approach for implementing nonlinearity is to build multiple copies of the quantum state representing
the solution, apply linear transformations in the expanded space to simulate the nonlinearities in the differential
equations, and then trace out the additional copies to achieve the effective nonlinear transformation on a single
copy of the quantum state [184, 185]. In particular, Lloyd et al. [185] combined the methods for simulating the
dynamics of the nonlinear Schrödinger equation with QLSA-based quantum linear differential equation solvers
to solve nonlinear ODEs with an m-th order polynomial nonlinear term. It was shown that by using a sufficiently
large number of copies, n, of the quantum state, scaling quadratically with the evolution time T, the
approximation error ε is suppressed by a factor of 1/n.
While the aforementioned linearization approaches all involve approximations, the Koopman–von Neumann
approach [186] is another linearization method that converts nonlinear ODEs to linear transport equations, and
the conversion is exact. Specifically, Jin et al. [187] showed that a system of d nonlinear ODEs can be exactly
converted into a d + 1 dimensional linear transport PDE, which can be solved with QLSA-based linear PDE
solvers. Although this approach applies generally to all nonlinear ODEs, it was shown that the only possible
quantum advantage one may get in the general case is when there are many (M ≫ 1) initial data points over
which an ensemble average of some physical variables is to be estimated.
In the same work, it is also demonstrated that for certain types of nonlinear PDEs such as the Hamilton–
Jacobi and hyperbolic PDEs, an exact linearization exists using the level set formalism [188], which converts the
(d + 1)-dimensional (d spatial dimensions plus time) nonlinear PDE to a (2d + 1)- and a (d + 2)-dimensional PDE,
respectively.
For a general nonlinear PDE with d + 1 dimensions, one may discretize the nonlinear PDE and convert it to
a system of nonlinear ODEs [187], and then use the linearization techniques mentioned above to further convert
the nonlinear ODEs to linear PDEs or ODEs. Therefore, as with the approach for general nonlinear ODEs, this
algorithm will have a quantum advantage for general nonlinear PDEs only when M is large.
We note that in all of the QLSA-based algorithms discussed above, the output will be in the form of a
quantum state, which is generally difficult to access classically without losing information or voiding the
overall quantum speedup. Nevertheless, as discussed in Section 4.5, one can still efficiently obtain certain statistics
of the solution without classical access to the quantum output state. This is usually the assumption of these
QLSA-based algorithms.

5.2.2 Variational algorithms


In addition to QLSA-based PDE solvers, variational algorithms have been developed for solving linear and non-
linear PDEs. One approach is to transform the PDE to a Wick-rotated Schrödinger equation with an imaginary
time variable and then use variational quantum imaginary time evolution (VarQITE; see Section 6.1.4) to solve
the equation along the imaginary time axis. This approach has been demonstrated in solving linear PDEs such
as the Black–Scholes equation [189], the Feynman–Kac equation, and the Ornstein–Uhlenbeck equation [190],
and nonlinear PDEs such as the Hamilton–Jacobi and Bellman equations [190].
In another approach proposed by Lubasch et al. [191], a variational circuit is built to construct the quantum
state representing the solution of the PDE in an amplitude encoding. Nonlinearities are implemented by creating
multiple copies of the variational circuit and hence the quantum state. The algorithm then employs tensor net-
works to efficiently calculate the linear operations acting on the solution functions, and the variational parameters
are optimized based on cost functions built on the expectations of these linear operators on the quantum state,
obtained by measurements at the end of the circuit.
While these two variational algorithms both encode the solution into a quantum state such that the com-
putational basis states denote the discretized values of the spatial variable and the amplitudes are proportional
to the corresponding values of the function, Kyriienko et al. [192, 193] proposed a different approach to encode
the function u(x). Quantum circuits in this approach start with a quantum feature map in which values of the
function variables x are used as rotation angles in the circuit. The feature map is followed by a variational circuit
parameterized by θ, which produces the quantum state |u_θ(x)⟩. The encoded function value u(x) is then given
by the expectation value of a cost operator C, namely,
u(x) = ⟨u_θ(x)| C |u_θ(x)⟩.
Derivatives of u may be approximated by shifting the x parameters following a finite-difference scheme or com-
puted exactly by using quantum circuit differentiation with respect to x. Using the outputs of the quantum
circuits, a classical loss function is built to quantify how well the solution satisfies the PDE, and hence the best
solution can be obtained by minimizing the loss function through adjusting θ in the variational circuit.
Paine et al. [194] utilized the differentiable quantum circuit approach for simulating the quantile function of
an SDE. The quantile function, whose evolution satisfies a PDE [195], is the inverse of the cumulative distribution
function of the solution to the SDE and can be used for generating samples [196]. Kubo et al. [197] considered
the trinomial-tree model [198] for SDEs. The evolution of the probability distribution can be represented as the
solution to the Schrödinger equation with a non-Hermitian Hamiltonian and thus simulated with VarQITE. The
authors also proposed methods for computing expectation values of functions of the SDE variables.

5.3 Financial Applications


Two main financial applications can be solved via the quantum algorithms presented for stochastic modeling:
derivative pricing and risk modeling.

5.3.1 Derivative Pricing


The mathematical framework used for pricing financial assets uses techniques from measure-theoretic probability
theory [199]. Specifically, the market is modeled as a filtered probability space (Ω, F, P), where the sample space
Ω corresponds to potential states of the market, the filtration F := {F_t}_{t=0}^∞ corresponds to information and events
that become observable over time (i.e., sub-σ-algebras), and P is the real-world market probability measure. The
time t can be continuous or discrete. Nondeterministic financial quantities such as stock prices and interest rates
are represented as F-adapted stochastic processes. Powerful mathematical properties can be derived when the
market is assumed to be arbitrage free and complete. An arbitrage-free market is one in which there does not
exist a riskless profit with no initial investment. The arbitrage-free assumption is equivalent to the existence of at

least one probability measure, equivalent to the real-world one P, under which the discounted (stochastic) value
process, {V_t}_{t=0}^T, of a portfolio of financial instruments is a martingale. Such a measure is called an equivalent
martingale measure or risk-neutral measure Q. The value process being a Q martingale, also called the fair value,
implies that the current value of the portfolio is equal to the expected value at maturity:

Vt = EQ [VT |Ft ] = EQ [C(t, T )|Ft ]. (3)

The second equality follows since the definition of the fair value at maturity VT , as seen from time t, is the
discounted accumulated cash flows realized from time t until T , which from now on we denote C(t, T ). If the
stronger assumption of a complete market is included, namely, that every contingent claim can be replicated by
trading in the available instruments, then the risk-neutral measure is unique.
An important type of pricing problem in finance is the pricing of derivatives. A derivative is a contract
that derives its value from another source (e.g., collection of assets, financial benchmarks) called the underlying,
which is modeled by {X_t(ω)}_{t=0}^T, where ω ∈ Ω. The value of a derivative is typically calculated by simulating the
dynamics of the underlying and computing the payoff accordingly. The payoff is a sequence of Borel-measurable
functions zt , which map the evolution of the underlying to the (discounted) payoff process Zt := zt (X0 , . . . , Xt ).
The functions zt are based on the contract. Thus, for derivatives, C(t, T ) is the accumulation of all Zt occurring
from t until maturity T. We can apply Equation (3) under a risk-neutral measure. Thus, the value of the derivative
at t, Vt, is equal to EQ[C(t, T)|Ft]. This justifies the usage of MCI for derivative pricing [12, 200], assuming sample
paths follow the risk-neutral measure.
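Equation (3) is what classical Monte Carlo pricing implements: simulate the underlying under the risk-neutral measure and average the discounted payoff. A hedged sketch for a European call under GBM (parameter values illustrative, not from the text):

```python
import math
import random

def mc_european_call(s0, k, r, sigma, t, n_paths, seed=0):
    """Estimate e^{-rT} E_Q[(S_T - K)^+] by sampling S_T under risk-neutral GBM."""
    rng = random.Random(seed)
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        # Terminal price under the risk-neutral measure (lognormal draw)
        s_t = s0 * math.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
        payoff_sum += max(s_t - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths

price = mc_european_call(s0=100, k=100, r=0.05, sigma=0.2, t=1.0, n_paths=200_000)
```

The estimator converges at the O(1/√M) Monte Carlo rate in the number of samples M; QMCI targets a quadratic improvement in the error dependence.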
We next discuss the application of QMCI and quantum differential equation solvers to the pricing of financial
derivatives. Specifically, we consider two types of derivatives: options and collateralized debt obligations (CDOs).

Options. An option gives the holder the right, but not the obligation, to purchase (call option) or sell (put
option) an asset at a specified price (strike price) and within a given time frame (exercise window) in the future.
One of the most well-known models for options is the Black–Scholes model [106]. This model assumes that
the price of the underlying (risky) asset, {X_t}_{t=0}^T, evolves like a GBM with constant volatility. The Black–
Scholes PDE, which governs the evolution of the price of an option on the underlying asset, has a closed-form
solution, called the Black–Scholes formula, for the simple European option. A European call option has a payoff
that satisfies zt = 0, ∀t < T, and zT(XT) = e^{−rT}(XT − K)+.10 This payoff depends only on the state of the
underlying at maturity; in other words, it is path independent. The interest rate r is the return of a risk-free
asset, which is included in the Black–Scholes model. According to Equation (3), the fair value of the European
option at time t is Vt = e^{rt} EQ[zT(XT)|Xt], and the Black–Scholes formula provides Vt in closed form for all
t. However, many pricing tasks involve more complicated options that are often path dependent, and hence
information about the asset at multiple time steps is required; that is, zT depends on previous Xt . This is one of
the reasons MCI-based approaches [201] are more widely used than those based on directly solving PDEs, since
the path-dependent PDE can be complicated.
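For a path-dependent contract such as an arithmetic-average Asian call there is no Black–Scholes-style closed form, which is why MCI over discretized sample paths is the standard classical approach. A sketch (illustrative parameters):

```python
import math
import random

def mc_asian_call(s0, k, r, sigma, t, n_steps, n_paths, seed=0):
    """Arithmetic-average Asian call: payoff (mean of S over the grid - K)^+."""
    rng = random.Random(seed)
    dt = t / n_steps
    drift = (r - 0.5 * sigma**2) * dt
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        s, path_sum = s0, 0.0
        for _ in range(n_steps):          # simulate one discretized sample path
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            path_sum += s
        total += max(path_sum / n_steps - k, 0.0)
    return math.exp(-r * t) * total / n_paths

price = mc_asian_call(100, 100, 0.05, 0.2, 1.0, n_steps=50, n_paths=20_000)
```

Because the payoff depends on the whole path, each sample requires simulating every time step, which is what the quantum register-per-time-step constructions below mirror.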
Stamatopoulos et al. [202] constructed quantum procedures for the uncertainty distribution loading, Pl , and
the payoff function, Pf , for European options. The operator Pl loads a distribution over the computational basis
states representing the price at maturity. The operator Pf approximately computes the rectified linear function,
characteristic of the payoff for European options. QMCI has also been applied to path-dependent options such
as barrier and Asian options [202, 203], and to multi-asset options [202]. For these options, prices at different
points on a sample path or for different assets are represented by separate quantum registers, and the payoff
operator Pf is applied to all of these registers. Pricing of more complicated derivatives with QAE can be found
in the literature. For example, Chakrabarti et al. performed an in-depth resource estimation and error analysis
for QAE applied to the pricing of autocallable options and target accrual redemption forwards (TARFs) [154].
Miyamoto and Kubo [204] utilized the FDM-based approach to solve the multi-asset Black–Scholes PDE
accelerated by using QLSA, as mentioned in Section 5.2.1. However, since applying the quantum FDM method
to the Black–Scholes PDE produces a quantum state encoding the value of the option at exponentially many
different initial prices, the quantum speedup is lost if we need to read out a single amplitude. The authors avoided
this issue by reading out the expected value of the option at a future time point under the probability distribution
of the price of the underlying at the same time point, which is the current price of the option as a consequence
of the value process being a martingale under the risk-neutral measure. This approach results in an exponential
speedup in terms of the dimension of the PDE over the classical FDM-based approach. Linden et al. [205]
compared various quantum and classical approaches for solving the heat equation, which the Black–Scholes
PDE reduces to. The quantum approaches discussed make use of the QLSA-based PDE solvers and quantum
walks (Section 4.6). Fontanela et al. [189] transformed the Black–Scholes PDE into the Schrödinger equation
with a non-Hermitian Hamiltonian. They utilized VarQITE to simulate the Wick-rotated Schrödinger equation.
Similarly, Alghassi et al. [190] applied imaginary time evolution to the more general Feynman–Kac formula

10
(A)+ := max{A, 0}

for linear PDEs. Alternatively, Gonzalez–Conde et al. [206] used unitary dilation, that is, unitary evolution in
an expanded space, and postselection to simulate the nonunitary evolution associated with the non-Hermitian
Hamiltonian that results from mapping the Black–Scholes PDE to the Schrödinger equation. They also mentioned the
potential applicability beyond constant-volatility models.
Another prominent option is the American-style option [199]. While European, barrier, and Asian options
allow the holder to exercise their right only at maturity, American options allow the holder to buy/sell the
underlying asset at any point in time up to maturity, T . Thus, the buyer of the option needs to determine the
optimal exercise time that maximizes the expected payoff. More generally, finding the optimal execution time
is an optimal stopping problem [207]. In modern probability theory, a stopping time is an F-adapted stochastic
process τ : Ω → [0, ∞). The adaptedness of τ implies that making the decision to stop at time t only requires
information obtained at time steps up to and including t. The corresponding stopped process at time t, using
stopping time τ, is defined as (Z_τ)_t := Z_{min(t, τ(ω))}. This definition is used because the option payoff cannot
change once the buyer has decided to exercise the option.
The American option pricing problem involves a sequence of stopping times. An optimal stopping time τt
starting at time t consists of choosing to exercise at the current time when the payoff is greater than or equal
to the expected payoff that could be obtained from continuing, that is, E[Z_{τ_{t+1}} | Xt], and is called a continuation
value. This recursive definition implies that finding the optimal stopping time can be formulated as solving a
dynamic programming problem. The value of the American option is the maximum expected payoff; and since
an optimal stopping time maximizes the expected payoff, this price is equal to E[Z_{τ_0} | X0], where the current time
is 0. This can be computed by using Monte Carlo integration over sampled payoffs, with discretized time
steps, that have been optimally stopped.11 However, this also requires computing the continuation values. A popular
approach to estimate continuation values is through least-squares regression using a fixed set of basis functions.
The combined technique of regressing the conditional expectation values and sampling optimally-stopped payoffs
to solve the dynamic programming problem is called the least-squares Monte Carlo method (LSM) [208].
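A compact classical LSM sketch may make the regression step concrete. It prices a Bermudan-style put, regressing continuation values on the basis {1, S, S²} over in-the-money paths (basis choice and parameter values illustrative):

```python
import numpy as np

def lsm_american_put(s0, k, r, sigma, t, n_steps, n_paths, seed=0):
    """Least-squares Monte Carlo (Longstaff-Schwartz) for a Bermudan-style put.

    Continuation values E[Z_{tau_{t+1}} | S_t] are regressed on {1, S, S^2}
    using in-the-money paths only, then compared with immediate exercise.
    """
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    disc = np.exp(-r * dt)
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    s = s0 * np.exp(log_paths)                   # GBM paths at t_1 .. t_N
    cash = np.maximum(k - s[:, -1], 0.0)         # exercise value at maturity
    for step in range(n_steps - 2, -1, -1):      # backward induction
        cash *= disc                             # discount cashflows one step back
        itm = k - s[:, step] > 0
        if itm.sum() > 3:
            coeffs = np.polyfit(s[itm, step], cash[itm], 2)
            cont = np.polyval(coeffs, s[itm, step])   # estimated continuation value
            exercise = k - s[itm, step]
            cash[itm] = np.where(exercise > cont, exercise, cash[itm])
    return disc * cash.mean()

price = lsm_american_put(100, 100, 0.05, 0.2, 1.0, n_steps=50, n_paths=50_000)
```

Note how the stopping times are computed backward from maturity once; the quantum version discussed next must recompute them from T for each regression step.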
Doriguello et al. [209] proposed a quantum version of LSM that makes use of QMCI. The authors presented
unitary operators for computing the stopping time at time step t, τt , which was done by backtracking from
time step T . The stopping time τt−1 can be computed from the stopping time τt by using the definition of
the dynamic program for the optimal stopping time. Note that although in the classical case, we can store all
previously computed τt as we compute them backward in time, in the quantum case, we need to compute τt
for each linear regression step by starting at the maturity date T . As a result, while the quantum LSM has an
improved error dependence, it now depends quadratically on the maturity time T . Miyamoto [210] performed
a similar quantization of LSM for Bermudan options, similar to American options but the exercise times are
a discrete subset of time steps until maturity. Alghassi et al. [190] also mentioned the potential applicability
of their VarQITE approach, mentioned in Section 5.2.2, to the pricing of American options and options with a
stochastic volatility.

Collateralized Debt Obligations. A collateralized debt obligation (CDO) is a derivative that is
backed by a pool of loans and other assets that serve as collateral if the loan defaults. The tool is used to free
up capital and shift risk. A typical CDO pool is divided into three tranches: equity, mezzanine, and senior. The
equity tranche is the first to bear loss; the mezzanine tranche investors bear loss if the loss is greater than the first
attachment point; and the senior tranche investors lose money if the loss is greater than the second attachment
point. Although the senior tranche is protected from loss, other default events can cause the CDO to collapse,
such as events that caused the 2008 financial crisis [211].
Conditional independence models [212] are often used for estimating the chances of parties defaulting on the
credit in the CDO pool. In such models, given the systemic risk W , the default risks B1 , . . . , Bn of the assets
involved are independent. The random variable W is then used as a latent variable to introduce correlations
between the default risks. The distributions of the default risks and the systemic risk are usually either Gaussian
[212] or normal-inverse Gaussian [213].
Tang et al. [211] presented quantum procedures for both the Pl and Pf operators used for QMCI. The goal
was to utilize QAE to estimate the expected loss of a given tranche. Pl is a composition of rotations to load
the uncorrelated probabilities of default (i.e., Bernoulli random variables). Then a procedure is used for loading
the distribution of W , and controlled rotations are used to apply the correlations. Pf computes the loss under a
given realization of the variables B1 , . . . , Bn and uses a quantum comparator to determine whether the loss falls
within the specified tranche range. QAE is then used, instead of MCI, to compute the expectation of the total
loss given that it falls within the tranche range.
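The classical Monte Carlo baseline that this QAE approach aims to accelerate can be sketched for a one-factor Gaussian conditional independence model (pool composition, correlation, and tranche bounds hypothetical):

```python
import numpy as np
from statistics import NormalDist

def tranche_expected_loss(p_default, rho, attach, detach, exposures,
                          n_samples=200_000, seed=0):
    """Monte Carlo expected loss hitting the tranche [attach, detach).

    One-factor Gaussian model: asset i defaults when
    rho * W + sqrt(1 - rho^2) * eps_i < Phi^{-1}(p_default[i]),
    with W the systemic factor and eps_i idiosyncratic, all standard normal.
    """
    rng = np.random.default_rng(seed)
    thresholds = np.array([NormalDist().inv_cdf(p) for p in p_default])
    exposures = np.asarray(exposures, dtype=float)
    w = rng.standard_normal((n_samples, 1))                  # systemic risk W
    eps = rng.standard_normal((n_samples, len(p_default)))   # idiosyncratic risks
    defaults = rho * w + np.sqrt(1 - rho**2) * eps < thresholds
    loss = defaults @ exposures                              # total pool loss
    tranche = np.clip(loss - attach, 0.0, detach - attach)   # loss in the tranche
    return tranche.mean()

el = tranche_expected_loss(
    p_default=[0.02, 0.05, 0.1, 0.03], rho=0.3,
    attach=0.0, detach=2.0, exposures=[1.0, 1.0, 1.0, 1.0])
```

Conditioning on W makes the per-asset Bernoulli draws independent, which is exactly the structure Pl exploits with controlled rotations.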

11
Technically, the discretization of time converts the American option into a Bermudan option.

5.3.2 Risk Modeling
Risk modeling is another crucial problem for financial institutions, since risk metrics can determine the probability
and amount of loss under various financial scenarios. Risk modeling is also important in terms of compliance with
regulations. Because of the Basel III regulations mentioned earlier, financial institutions need to calculate and
meet requirements for their VaR, CVaR, and other metrics. However, the current classical approach of Monte
Carlo methods [12] is computationally intensive. Similar to the pricing problem mentioned above, a quantum
approach could also benefit this financial problem. In this subsection we explore the application of the presented
quantum algorithms to computing risk metrics such as VaR and CVaR. We also review quantum approaches for
sensitivity analysis and credit valuation adjustments.

Value at Risk. As first mentioned in Section 2.1.1, VaR and CVaR are common metrics used for risk
analysis. Mathematically, VaR at confidence level α ∈ [0, 1] is defined as VaRα[X] = inf_{x≥0} {x | P[X ≤ x] ≥ α}, usually
computed by using Monte Carlo integration [214]. CVaR is defined as CVaRα[X] = E[X | 0 ≤ X ≤ VaRα[X]]
[215]; α is generally around 99.9% for the finance industry. QAE can be used to evaluate VaRα and CVaRα
faster than with classical MCI. The procedure to do so, by Woerner and Egger [215, 216], is as follows.
Similar to the CDO case above (Section 5.3.1), the risk can be modeled by using a Gaussian conditional
independence model. Quantum algorithms for computing VaRα and CVaRα via QAE can utilize the same
realization of Pl as for the CDO case [215]. Note that P[X ≤ x] = E[h(X)], where h(X) = 1 for X ≤ x and
0 otherwise. Thus Pf implements a quantum comparator similar to both of the previous applications discussed
for derivatives. A bisection search can be used to identify the value of x such that P[X ≤ x] ≥ α using multiple
iterations of QAE. CVaRα uses a comparator to check against the obtained value for VaRα when computing the
conditional expectation of X.
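The bisection step can be illustrated classically: given any routine that estimates P[X ≤ x] (the role each QAE run plays above), bisection recovers VaRα. A sketch using the exact standard normal CDF as a stand-in for the per-point estimate:

```python
from statistics import NormalDist

def var_by_bisection(cdf, alpha, lo, hi, tol=1e-6):
    """Find VaR_alpha = inf{x : cdf(x) >= alpha} by bisection.

    `cdf` stands in for the estimate of P[X <= x] that each QAE iteration
    would return; classically it could be a Monte Carlo estimate.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cdf(mid) >= alpha:
            hi = mid      # mid already satisfies the constraint; search lower
        else:
            lo = mid      # constraint not yet met; search higher
    return hi

# Standard-normal losses: VaR_{0.999} should match the 99.9% quantile.
cdf = NormalDist().cdf
var = var_by_bisection(cdf, 0.999, -10.0, 10.0)
```

Each bisection iteration halves the search interval, so the number of CDF evaluations (QAE runs) is logarithmic in the target precision.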

Economic Capital Requirement. Economic capital requirement (ECR) summarizes the amount of
capital required to remain solvent at a given confidence level and time horizon [216]. Egger et al. [216] presented
a quantum method using QMCI to compute ECR. Consider a portfolio of K assets with L1, . . . , LK denoting the
losses associated with each asset. The expected value of the total loss, L, is E[L] = Σ_{m=1}^{K} E[Lm]. The losses can
be modeled by using the same Gaussian conditional independence model discussed above. ECR at confidence
level α is defined as

ECRα [L] = VaRα [L] − E[L],


where α is generally around 99.9% for the finance industry. The expected total loss E[L] can be efficiently
computed classically, so only the estimate of VaRα [L] is computed by using QAE [216].

The Greeks. Sensitivity analysis is an important component of financial risk modeling. The stochastic model
of the underlying assets of a financial derivative, which is the process Xt, typically contains multiple variables
representing the market state. Some common examples are the spot prices of the underlying assets S_t^{(i)}, current
volatilities σ_t^{(i)}, and the maturity times T^{(i)}. In mathematical finance, the partial derivatives, or sensitivities,
of Vt with respect to these quantities are called Greeks. For example, ∂Vt/∂S_t^{(i)} are called Deltas, ∂Vt/∂σ_t^{(i)} are called
Vegas, ∂²Vt/(∂S_t^{(i)})² are called Gammas, and ∂Vt/∂T^{(i)} are called Thetas. Classically, these can be computed utilizing finite-
difference methods and MCI. The optimal classical method, which uses variance reduction techniques, requires
O(k/ε²) samples to compute k Greeks. Stamatopoulos et al. [217] proposed using quantum finite-difference
methods, which were first proposed by Jordan [218], to accelerate the computation of Greeks.
The quantum setting uses what is called a probability oracle. In the context of financial sensitivity, this
makes use of the operation in (2) from the QMCI algorithm, namely, operators Pl and Pf . For computing the
k-dimensional numerical gradient, Jordan’s original algorithm utilized a single call to a quantum binary oracle,
which computes a fixed-point approximation to Vt into a register. The derivative of Vt is then encoded in the
relative phases of a quantum state utilizing phase kickback. In the case described above, however, we are provided
an analog quantity encoded in the amplitude of a quantum state, and thus we would require an analog-to-digital
conversion (e.g., via QAE) followed by a digital-to-analog conversion. The query complexity of QAE scales exponentially in the number of bits in
the digital representation. Gilyén, Arunachalam, and Wiebe (GAW) [219] proposed a conversion technique that
avoids the digitization step. Their approach takes as input a probability oracle and uses block-encoding-based
Hamiltonian simulation [97, 220] to perform the phase encoding. In other words, it converts the probability oracle
to a phase oracle. The GAW algorithm makes use of higher-order central difference methods [221]; and under
certain smoothness conditions of the function whose derivative we are taking [222], the algorithm uses O(√k/ε)
queries to a probability oracle to compute the k-dimensional gradient. Stamatopoulos et al. showed that this
can be used to produce a quadratic speedup for computing k Greeks over the classical MCI and finite-difference-
based methods. They also proposed an alternative technique that does not attain the full quadratic speedup

but avoids the Hamiltonian simulation required for the probability to phase oracle conversion. As mentioned
by Stamatopoulos et al., the number of Greeks, k, can be large in practice, which makes the quadratic speedup
significant.
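The classical finite-difference baseline is easy to sketch: a second-order central difference applied to a pricing function, here the Black–Scholes formula (parameter values illustrative):

```python
import math
from statistics import NormalDist

N = NormalDist()

def bs_call(s, k, r, sigma, t):
    """Black-Scholes European call price, used here as the pricing function V."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * N.cdf(d1) - k * math.exp(-r * t) * N.cdf(d2)

def central_diff(f, x, h=1e-4):
    """Second-order central finite difference for a first derivative."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

s0, k, r, sigma, t = 100.0, 100.0, 0.05, 0.2, 1.0
delta = central_diff(lambda s: bs_call(s, k, r, sigma, t), s0)    # dV/dS
vega = central_diff(lambda v: bs_call(s0, k, r, v, t), sigma)     # dV/dsigma
```

When V is only available through MCI rather than a closed form, each such difference quotient costs O(1/ε²) samples, which is the regime where the quantum gradient methods above apply.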

Credit Value Adjustments. Value adjustments [223], or XVAs, are a set of corrections to the value
of a financial instrument due to factors that are not included in the pricing model. Specifically, credit value
adjustments (CVAs) modify the price of a financial contract to take into account the risk of loss associated with
a counterparty defaulting or failing to meet their obligation, namely, the counterparty-credit risk (CCR). The
usual risk-neutral valuation does not take CCR into account. For example, from the perspective of a holder of
an option, if the seller of the contract defaults at time t, the loss to the holder is the value of the option at t.
For certain contracts, a proportion of the lost value can be recovered. This fraction is called the recovery rate,
R. The value that is lost after recovery is called the loss given default (LGD). The LGD is computed as 1 − R.
We consider a finite set of time points: t0 , t1 , . . . , tN , where tN is the largest maturity date of all contracts
in the portfolio. If we are performing a valuation of the CCR at time t0 , the expected loss if a default occurs at
ti > t0 requires determining the fair value of the instrument at time ti . Since ti is in the future, however, the
expected value lost if default occurs at time ti , called the expected exposure profile, is EQ [(Vti )+ |Ft0 ], where Vti
is the fair price of a portfolio of contracts at ti . Since default can occur before or at maturity probabilistically,
the CVA at t0 is defined to be
CVAt0 := E[(1 − R)(Vτ)+ |Ft0] = (1 − R) Σ_{i=0}^{N} P(ti < τ < ti+1) EQ[(Vti)+ |Ft0],    (4)

where measure P denotes the probability of defaulting in a given time period, the positive part (·)+ ensures that we
consider only obligations of the counterparty, and τ is a random stopping time representing the time of default.
The adjusted price is
Vt00 := Vt0 − CVAt0 . (5)
As mentioned by Han and Rebentrost [224], if the joint distribution can be represented as a simple product,
namely,
q_j^{(ti)} := P(ti < τ < ti+1) Q(j),    (6)
where j is a series of economic events, for example a path of the underlying assets, leading to a price of V_{ti}^{(j)}
at time ti occurring with risk-neutral probability Q, then the CVA computation can be viewed as the following
inner product:

CVAt0 = Σ_{i,j} q_j^{(ti)} V_{ti}^{(j)} := q⃗ · V⃗.    (7)

Han and Rebentrost presented a variety of quantum inner product estimation algorithms that make use of QMCI
to obtain a quadratic speedup over the classical approach of sampling the entries of V⃗ according to q⃗ and taking
sample averages. Their approach also applies to the problem of pricing a portfolio of derivatives. Along similar
lines, Alcazar et al. [225] proposed a more near-term approach to solving this problem that makes use of a
Bayesian approach to QAE [226, 227]. This Bayesian QAE approach has a complexity that varies according to
the expected hardware noise and allows for interpolating between the complexities of MCI and QMCI. They also
performed an end-to-end resource analysis of their approach.
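Once the factorization in Equation (6) holds, the classical computation in Equations (4), (5), and (7) is just an inner product. A sketch with entirely hypothetical inputs:

```python
import numpy as np

# Hypothetical inputs: flattened joint probabilities q over (time step i,
# scenario j) as in Equation (6), and the matching positive exposures V.
q = np.array([0.004, 0.006, 0.003, 0.002])   # P(default in period i) * Q(j)
v = np.array([12.0, 8.5, 15.0, 3.0])         # (V_{ti}^{(j)})^+ for each pair
recovery = 0.4                                # recovery rate R

cva = (1.0 - recovery) * np.dot(q, v)         # (1 - R) q . V, per Eq. (4)/(7)
adjusted_price = 30.0 - cva                   # Equation (5), with V_{t0} = 30
```

The quantum inner product estimation algorithms target the cost of estimating this dot product when the entries of V⃗ are themselves expensive Monte Carlo quantities.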

6 Optimization
Solving optimization problems (e.g., portfolio optimization, arbitrage) presents the most promising commercially
relevant applications on NISQ hardware (Section 3.3). In this section we discuss the potential of using quantum
algorithms to help solve various optimization problems. We start by discussing quantum algorithms for the
NP-hard12 combinatorial optimization problems [26, 228] in Section 6.1, with particular focus on those with
quadratic cost functions. Neither classical nor quantum algorithms are believed to be able to solve these NP-hard
problems asymptotically efficiently (Section 4). However, the hope is that quantum computers can solve these
problems faster than classical computers on realistic problems such that it has an impact in practice. Section 6.2
focuses on convex optimization. In its most general form, convex optimization is NP-hard. However, in a variety
of situations such problems can be solved efficiently [229] and enjoy many useful properties and a rich theory
[27]. There exist quantum algorithms that utilize QLS (Section 4.5) and other techniques to provide polynomial
speedups for certain convex, specifically conic, optimization problems. These can have a significant impact

12
By definition (Section 4), optimization problems are not in NP, but they can be NP-hard. However, there is
an analogous class for optimization problems called NPO [228].

in practice. Section 6.3 discusses large-scale optimization, which utilizes hybrid approaches that decompose
large problems into smaller subtasks solvable on a near-term quantum device. The more general mixed-integer
programming problems, in which there are both discrete and continuous variables, can potentially be handled by
using quantum algorithms for optimization (both those mentioned in Section 6.1 and, in certain scenarios, Section
6.2) as subroutines in a classical-quantum hybrid approach (i.e., decomposing the problem in a similar manner
to the ways mentioned in Section 6.3) [146, 230]. Many quantum algorithms for combinatorial optimization are
NISQ friendly and heuristic (i.e., with no asymptotically proven benefits). The convex optimization algorithms
are usually for the fault-tolerant era of quantum computation.

6.1 Combinatorial Optimization


In the general case, integer programming (IP) problems, which involve variables restricted to integers (including
those with integer variable constraints), are NP-hard [26]. This section focuses on combinatorial optimization,
sometimes used synonymously with discrete or integer optimization. In this paper, combinatorial optimization
refers specifically to integer optimization problems that only consist of binary variables, namely, binary integer
programs [26]. Problems that consist of selecting objects from a finite set to optimize some cost function, poten-
tially including constraints, fit this formalism. However, integer programs can be reduced to binary programs by
representing integers in binary or using one-hot encoding. Most of the financial optimization problems presented
in this article can be reduced to combinatorial problems with quadratic cost functions called binary quadratic
programming problems [26].
A binary quadratic program without constraints, which is also NP-hard, is called a quadratic unconstrained
binary optimization problem [231] and lends itself naturally to many quantum algorithms. A QUBO problem on
N binary variables can be expressed as
min_{x⃗ ∈ B^N}  x⃗^T Q x⃗ + x⃗^T b⃗,    (8)

where b⃗ ∈ R^N, Q ∈ R^{N×N}, and B = {0, 1}. Unconstrained integer quadratic programming (IQP) problems can
be converted to QUBO in the same way that IP can be reduced to general binary integer programming [232].
Constraints can be accounted for by optimizing the Lagrangian function [27] of the constrained problem, where
each dual variable is a hyperparameter that signifies a penalty for violating the constraint [233].
QUBO has a connection with finding the ground state of the generalized Ising Hamiltonian from statistical
mechanics [233]. A simple example of an Ising model is a two-dimensional lattice under an external longitudinal
magnetic field that is potentially site-dependent; in other words, the strength of the field at a site on the lattice
is a function of location. At each site j, the direction of the magnetic moment of spin zj may take on values in
the set {−1, +1}. The value of zj is influenced by both the moments of its neighboring sites and the strength of
the external field. This model can be generalized to an arbitrary graph where the vertices on the graph represent
the spin sites and weights on the edges signify the interaction strength between the sites, hence allowing for
long-range interactions. The classical Hamiltonian for this generalized model is
H = − Σ_{ij} J_{ij} z_i z_j − Σ_j h_j z_j,

where the magnitude of Jij represents the amount of interaction between sites i and j and its sign is the desired
relative orientation; hj represents the external field strength and direction at site j [234]. Using a change of
variables zj = 2xj − 1, a QUBO cost function as defined in Equation (8) can be transformed into a classical Ising
Hamiltonian, with xj being the jth element of ~ x. The classical Hamiltonian can then be “quantized” by replacing
the classical variable zj for the jth site with the Pauli-Z operator σjz (Section 3.1). Therefore, finding the optimal
solution for the QUBO is equivalent to finding the ground state of the corresponding Ising Hamiltonian.
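The change of variables z_j = 2x_j − 1 can be made concrete. The sketch below maps the Q and b⃗ of Equation (8) to Ising couplings and fields (with the sign convention of H absorbed into J and h) and verifies the equivalence by brute force:

```python
import itertools
import numpy as np

def qubo_energy(x, Q, b):
    """QUBO cost x^T Q x + x^T b from Equation (8)."""
    return x @ Q @ x + x @ b

def ising_from_qubo(Q, b):
    """Map the QUBO (Q, b) to an Ising model via x = (z + 1) / 2.

    Returns (J, h, c) such that x^T Q x + x^T b = z^T J z + h . z + c
    for z in {-1, +1}^N.
    """
    Qs = (Q + Q.T) / 2.0                  # symmetrize without changing the cost
    J = Qs / 4.0
    h = Qs.sum(axis=1) / 2.0 + b / 2.0
    c = Qs.sum() / 4.0 + b.sum() / 2.0
    return J, h, c

rng = np.random.default_rng(0)
n = 6
Q = rng.normal(size=(n, n))
b = rng.normal(size=n)
J, h, c = ising_from_qubo(Q, b)
# Brute-force check: both forms agree on every one of the 2^n assignments.
for bits in itertools.product([0, 1], repeat=n):
    x = np.array(bits, dtype=float)
    z = 2.0 * x - 1.0
    assert abs(qubo_energy(x, Q, b) - (z @ J @ z + h @ z + c)) < 1e-9
```

The constant offset c shifts all energies equally, so the minimizing bit string of the QUBO corresponds exactly to the Ising ground state.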
Higher-order combinatorial problems can be modeled by including terms for multispin interactions. Some of
the quantum algorithms presented in this section can handle such problems (e.g., QAOA in Section 6.1.2). From
the quantum point of view, these higher-order versions are observables that are diagonal in the computational
basis. The following subsections will more fully show why the quantum Ising or more generally the diagonal
observable formulation results in quantum algorithms for solving combinatorial problems. In Section 6.4 we
discuss a variety of financial problems that can be formulated as combinatorial optimization and solved by using
the quantum methods described in this section.

6.1.1 Quantum Annealing


Quantum annealing, initially mentioned in Section 4.8, consists of adiabatically evolving according to the time-
dependent transverse-field Ising Hamiltonian [234], which is mathematically defined as
H(t) = A(t) (− Σ_i σ_i^x) + B(t) (− Σ_{ij} J_{ij} σ_i^z σ_j^z − Σ_i h_i σ_i^z).    (9)

This formulation natively allows for solving QUBO. As discussed earlier, however, with proper encoding and
additional variables, unconstrained IQP problems can be converted to QUBO. Techniques also exist for converting
higher-order combinatorial problems into QUBO problems, at the cost of more variables [235, 236]. QUBO, as
mentioned earlier, can, through a transformation of variables, be encoded in the site-dependent longitudinal
magnetic field strengths {h_i} and coupling terms {J_ij} of an Ising Hamiltonian. The QA process initializes
the system in a uniform superposition of all states, which is the ground state of − Σ_i σ_i^x, where σ_i^x is the Pauli-X
(Section 3.2) operator applied to the ith qubit. This initialization implies the presence of a large transverse-field
component, A(t) in Equation (9). The annealing schedule, controlled by tuning the strength of the transverse
magnetic field, is defined by the functions A(t) and B(t) in Equation (9). If the evolution is slow enough,
according to the adiabatic theorem (Section (3.2.2)), the system will remain in the instantaneous ground state
for the entire duration and end in the ground state of the Ising Hamiltonian encoding the problem. This process
is called forward annealing. There also exists reverse annealing [237], which starts in a user-provided classical
state,13 increases the transverse-field strength, pauses and then, assuming an adiabatic path, ends in a classical
state. The pause is useful when reverse quantum annealing is used in combination with other classical optimizers.
The D-Wave devices are examples of commercially available quantum annealers [238]. These devices have
limited connectivity resulting in extra qubits being needed to encode arbitrary QUBO problems (i.e., Ising
models with interactions at arbitrary distances). Finding such an embedding, however, is in general a hard
problem [239]. Thus heuristic algorithms [240] or precomputed templates [241] are typically used. Assuming the
hardware topology is fixed, however, an embedding can potentially be efficiently found for QUBOs with certain
structure [242]. Alternatively, the embedding problem associated with restricted connectivity can be avoided
if the hardware supports three- and four-qubit interactions, by using the LHZ encoding [243] or its extension
by Ender et al. [244]. The extension by Ender et al. even supports higher-order binary optimization problems
[245, 246]. Moreover, the current devices for QA provide no guarantees on the adiabaticity of the trajectory that
the system follows. Despite the mentioned limitations of today’s annealers, a number of financial problems still
remain that can be solved by using these devices (Section 6.4).
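As a purely classical point of comparison (thermal, not quantum, annealing), simulated annealing on the same Ising cost illustrates the role of an annealing schedule; a sketch for a small ferromagnetic chain (schedule and instance illustrative):

```python
import math
import random

def simulated_annealing_ising(J, h, n_sweeps=2000, t0=2.0, t1=0.01, seed=0):
    """Classical simulated annealing for H = -sum_ij J_ij z_i z_j - sum_i h_i z_i.

    Single-spin flips are accepted under the Metropolis rule while the
    temperature decreases geometrically from t0 to t1.
    """
    rng = random.Random(seed)
    n = len(h)
    z = [rng.choice([-1, 1]) for _ in range(n)]

    def energy(state):
        e = -sum(h[i] * state[i] for i in range(n))
        e -= sum(J[i][j] * state[i] * state[j]
                 for i in range(n) for j in range(i + 1, n))
        return e

    e = energy(z)
    for sweep in range(n_sweeps):
        temp = t0 * (t1 / t0) ** (sweep / (n_sweeps - 1))
        for i in range(n):
            z[i] = -z[i]                          # propose a single spin flip
            e_new = energy(z)
            if e_new <= e or rng.random() < math.exp((e - e_new) / temp):
                e = e_new                          # accept the flip
            else:
                z[i] = -z[i]                       # reject, flip back
    return z, e

# Ferromagnetic 6-spin chain with a small field; the true ground state is
# all spins +1, with energy -(5 bonds) - 6 * 0.1 = -5.6.
n = 6
J = [[0.0] * n for _ in range(n)]
for i in range(n - 1):
    J[i][i + 1] = 1.0
h = [0.1] * n
z, e = simulated_annealing_ising(J, h)
```

Where simulated annealing escapes local minima thermally, QA relies on the transverse field and tunneling; the two schedules play analogous roles.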

6.1.2 Quantum Approximate Optimization Algorithm


QAOA is a variational algorithm (Section 4.7) initially inspired by the adiabatic evolution of quantum annealing.
The initial formulation of QAOA can be seen as a truncated or Trotterized [2] version of the QA evolution to a
finite number of time steps.14 QAOA follows the adiabatic trajectory in the limit of infinite time steps, which
provides evidence of good convergence properties [136]. This algorithm caters to gate-based computers. However,
it has been tested on an analog quantum simulator [247–249].
QAOA can be used to solve general unconstrained combinatorial problems. This capability allows QAOA
to solve, without additional variables, a broader class of problems than QA can. As for QA, constraints can be
encoded by using the Lagrangian of the optimization problem (i.e., penalty terms). Consider an unconstrained
combinatorial maximization problem with N binary variables {x_i}_{i=1}^N concatenated into the bit string z :=
x1 . . . xN and cost function f (z). The algorithm seeks a string z ∗ that maximizes f . This is done by first
preparing a parameter-dependent N -qubit quantum state (realizable as a PQC, Section 4.7):

|γ, β⟩ = U(B, βp) U(C, γp) · · · U(B, β1) U(C, γ1) |s⟩,    (10)

where γ = (γ1 , . . . , γp ), β = (β1 , . . . , βp ). The unitary U (C, γ) is called the phase operator or phase separator
and defined as U (C, γ) = e−iγC , where C is a diagonal observable that encodes the objective function f (z). In
addition, U (B, β) is the mixing operator defined as U (B, β) = e−iβB , where, in the initial QAOA formulation,
B = Σ_{i=1}^N σ_i^x. The initial state, |s⟩, is a uniform superposition, which is an excited state of B with maximum
energy. However, other choices for B exist that allow QAOA to incorporate constraints without penalty terms
[137]. Equation (10), where the choice of B can vary, is called the alternating operator ansatz [137]. Preparation
of the state is then followed by a measurement in the computational basis (Section 3.1), indicating the assignments
of the N binary variables. The parameters are updated such that the expectation of C, that is, ⟨γ, β| C |γ, β⟩,
is maximized. Note that we can multiply the cost function by −1 to convert it to a minimization problem. The
structure of the QAOA ansatz allows for finding good parameters purely classically for certain problem instances
[136]. In general, however, finding such parameters is a challenging task. As mentioned earlier (Section 4.7), the
training of quantum variational algorithms is NP-hard. This hardness does not prevent us from reaching good
approximate solutions or local minima; it merely indicates the complexity of finding the exact global

13 The term “classical state” refers to any computational basis state. The quantum Ising Hamiltonian has an
eigenbasis consisting of computational basis states.
14 In the case of standard QAOA, however, the adiabatic trajectory would move between instantaneous
eigenstates with the highest energy [136]. This still follows from the adiabatic theorem (Section 3.2.2).

minimum. However, much recent research has proposed practical approaches for estimating effective parameters
and the required circuit depth [250–255], as well as for reducing the complexity of the phase operator [256, 257]
and of the mixer [258]. Recent demonstrations of QAOA on gate-based quantum hardware have revealed the
capability of state-of-the-art quantum hardware technologies in handling unconstrained [259, 260] and constrained
optimization problems [261].
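As a concrete illustration of the ansatz in Equation (10) and its classical parameter loop, the following sketch simulates p = 1 QAOA for MaxCut on a triangle graph with a plain statevector; the graph, the grid-search "optimizer," and all names are hypothetical choices for illustration, not from the survey.

```python
import cmath, itertools, math

# Toy p = 1 QAOA for MaxCut on a triangle graph, simulated exactly with a
# statevector over 3 qubits. Hypothetical instance, for illustration only.
EDGES = [(0, 1), (1, 2), (0, 2)]
N = 3  # one qubit per graph vertex

def cost(z):
    # Number of cut edges for the bit string z (a tuple of 0/1 values).
    return sum(z[i] != z[j] for i, j in EDGES)

def qaoa_state(gamma, beta):
    # Start from the uniform superposition |s>.
    dim = 2 ** N
    amps = {z: 1 / math.sqrt(dim) for z in itertools.product((0, 1), repeat=N)}
    # Phase operator U(C, gamma) = exp(-i gamma C) is diagonal: apply phases.
    amps = {z: a * cmath.exp(-1j * gamma * cost(z)) for z, a in amps.items()}
    # Mixer U(B, beta) = prod_i exp(-i beta X_i): apply qubit by qubit, since
    # exp(-i beta X) = cos(beta) I - i sin(beta) X.
    for q in range(N):
        new = {}
        for z, a in amps.items():
            zf = z[:q] + (1 - z[q],) + z[q + 1:]
            new[z] = math.cos(beta) * a - 1j * math.sin(beta) * amps[zf]
        amps = new
    return amps

def expectation(amps):
    # <gamma, beta| C |gamma, beta>, the quantity the classical loop maximizes.
    return sum(abs(a) ** 2 * cost(z) for z, a in amps.items())

# Classical outer loop: a coarse grid search over (gamma, beta).
grid = [k * math.pi / 16 for k in range(16)]
best = max(((g, b) for g in grid for b in grid),
           key=lambda gb: expectation(qaoa_state(*gb)))
print(expectation(qaoa_state(*best)))  # exceeds 1.5, the uniform-state value
```

Measuring the optimized state in the computational basis then yields cut assignments with high cost, which is the sampling step a real device would perform.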

6.1.3 Variational Quantum Eigensolver


The VQE algorithm, mentioned in Section 4.7, is a prominent algorithm in the NISQ era because it can be run on
current hardware [262]. Essentially, the algorithm solves the problem of approximating the smallest eigenvalue
of some observable and gives a prescription for reconstructing an approximation of a corresponding eigenstate.
Given an observable, the expected value or equivalently the Rayleigh quotient (Section 4.7) with respect to any
state is an upper bound of the smallest eigenvalue. There is equality only when the state is an eigenstate with
the smallest eigenvalue. The variational procedure based on minimizing the Rayleigh quotient is known as the
Rayleigh–Ritz method [138].
VQE utilizes a parameterized ansatz to represent a space of candidate wave functions. A classical optimizer
modifies the parameters to minimize the expectation of the observable (i.e., the cost function) and thus tries to
approach the value of the minimum eigenvalue. The hybrid system works in an iterative loop: the trial wave
function is prepared on the quantum processor, its expected value is computed, and the classical computer feeds
the quantum processor new sets of parameters to improve the trial state. Like QAOA, VQE
can also be used to find a minimum eigenvalue state of an observable, which is diagonal in the computational
basis, used to encode a combinatorial optimization problem. VQE is more general than QAOA (Section 6.1.2):
QAOA specifically uses the alternating operator ansatz, whereas VQE can utilize any PQC as
an ansatz. However, certain ansätze are more suitable for particular problems because they take into account
prior information about the ground-state wave function(s) [132]. Alternatively, there exist evolutionary, noise-
resistant techniques for optimizing the structure of the ansatz and significantly increasing the space of candidate
wave functions [263]. Amaro et al. [264] introduced quantum variational filtering, which can be used to increase
the probability of sampling low-eigenvalue states of an observable when the filtering operator is applied to an
arbitrary quantum state. Their algorithm, Filtering VQE, was applied to combinatorial optimization problems
and outperformed both standard VQE and QAOA. A rigorous analysis of the convergence of VQE with an
overparameterized ansatz has been performed by Xuchen et al. [265].
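The iterative loop described above can be sketched classically for a small observable; the 2×2 matrix, the one-parameter ansatz, and the finite-difference optimizer below are illustrative choices, not the survey's.

```python
import math

# Minimal, classically simulated VQE sketch: a one-qubit trial state
# |phi(theta)> = (cos theta, sin theta) and a hypothetical 2x2 observable H.
H = [[1.0, 0.5],
     [0.5, -1.0]]

def rayleigh_quotient(theta):
    # <phi|H|phi> for the normalized trial state; this is the VQE cost function.
    phi = (math.cos(theta), math.sin(theta))
    Hphi = (H[0][0] * phi[0] + H[0][1] * phi[1],
            H[1][0] * phi[0] + H[1][1] * phi[1])
    return phi[0] * Hphi[0] + phi[1] * Hphi[1]

# "Classical optimizer": plain gradient descent with finite differences,
# standing in for the classical half of the hybrid loop.
theta, lr, eps = 0.1, 0.2, 1e-6
for _ in range(500):
    grad = (rayleigh_quotient(theta + eps) - rayleigh_quotient(theta - eps)) / (2 * eps)
    theta -= lr * grad
print(rayleigh_quotient(theta))  # ~ -1.118, the smallest eigenvalue of H
```

On hardware, each `rayleigh_quotient` call would be replaced by repeated state preparation and measurement; the classical update rule is unchanged.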

6.1.4 Variational Quantum Imaginary Time Evolution (VarQITE)


In relativistic terms, real-time evolution of a quantum state happens in Minkowski spacetime, which, unlike
Euclidean space, has an indefinite metric tensor [266]. Imaginary-time evolution (ITE) [267] transforms the
evolution to Euclidean space and is performed by making the replacement t 7→ it = τ , known as a Wick
rotation [268]. The imaginary time dynamics follow the Wick-rotated Schrödinger equation

|ψ̇(τ)⟩ = −(H − E_τ) |ψ(τ)⟩, (11)

where H is a quantum Hamiltonian and E_τ = ⟨ψ(τ)| H |ψ(τ)⟩. The time-evolved state (solution to Equation
(11)) is |ψ(τ)⟩ = C(τ) e^{−Hτ} |ψ(0)⟩, where C(τ) is a time-dependent normalization constant [269]. As τ → ∞, the
component associated with the smallest eigenvalue of H (the dominant eigenvalue of the non-unitary operator
e^{−Hτ}) dominates, and the state approaches a ground state of H. This is assuming the initial state has nonzero
overlap with a ground state. Thus, ITE presents another method for finding the minimum eigenvalue of an observable [269, 270].
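A minimal numerical sketch of this mechanism follows (hypothetical 2×2 Hamiltonian; plain normalized Euler integration of Equation (11) stands in for a variational ansatz).

```python
import math

# Euler-integrate the Wick-rotated Schrodinger equation
#   d|psi>/dtau = -(H - E_tau)|psi>,
# which keeps the state normalized and drives it toward a ground state.
# The Hamiltonian and step sizes here are hypothetical.
H = [[1.0, 0.5],
     [0.5, -1.0]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def energy(v):
    Hv = matvec(H, v)
    return v[0] * Hv[0] + v[1] * Hv[1]

psi, dtau = [1 / math.sqrt(2), 1 / math.sqrt(2)], 0.01
for _ in range(2000):
    E = energy(psi)
    Hpsi = matvec(H, psi)
    psi = [p - dtau * (hp - E * p) for p, hp in zip(psi, Hpsi)]
    norm = math.sqrt(psi[0] ** 2 + psi[1] ** 2)  # absorb Euler discretization drift
    psi = [p / norm for p in psi]
print(energy(psi))  # approaches the smallest eigenvalue of H, ~ -1.118
```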
For variational quantum ITE (VarQITE) [138], the time-dependent state |ψ(τ)⟩ is represented by a parameterized
ansatz (i.e., PQC), |φ(θ(τ))⟩, with a set of time-varying parameters, θ(τ). The evolution is projected onto
the circuit parameters, and the convergence is also dependent on the expressibility of the ansatz [138]. However,
quantum ITE using McLachlan’s principle is typically expensive to implement because of the required metric
tensor computations and matrix inversion [138, 271]. Benedetti et al. developed an alternative method for
VarQITE that avoids these computations and is gradient free [272]. Their variational time evolution approach is also
applicable to real-time evolution. VarQITE can be used to solve a combinatorial optimization problem by letting
H encode the combinatorial cost function, in a similar manner to QAOA and VQE, and is also not restricted to
QUBO. In addition, VarQITE has been used to prepare states that do not result from unitary evolution in real
time, such as the quantum Gibbs state [273].
We note that the variational principles for real-time evolution, mentioned in Section 4.7, can be used to
approximate adiabatic trajectories [274]. This can potentially be used for combinatorial optimization as well.

6.1.5 Optimization by Quantum Unstructured Search
Dürr and Høyer [275] presented an algorithm that applies quantum unstructured search (Section 4.1) to the
problem of finding the global [276] minimum of a black-box function. This approach makes use of the work of
Boyer et al. [75] to apply Grover’s search when the number of marked states is unknown. Unlike the metaheuristics
of quantum annealing and gate-based variational approaches, the Dürr–Høyer algorithm has a provable quadratic
speedup in query complexity. That is, given a search space X of size N, the algorithm requires O(√N) queries
to an oracle that evaluates the function f : X → R to be minimized. A classical program would require O(N)
evaluations.
The requirements to achieve the quadratic speedup are the same as for Grover's algorithm. The quantum oracle
in this case is O_{g_y} |x⟩ = (−1)^{g_y(x)} |x⟩, where g_y : X → {0, 1} is a classical function mapping x to 1 if
f(x) ≤ y and to 0 otherwise, and y is the current smallest value of f evaluated so far [276]. The Grover adaptive search framework
of Bulger et al. [277] generalizes and extends the Dürr–Høyer algorithm. Gilliam et al. [278] proposed a method
for efficiently implementing the oracle Ogy by converting to Fourier space. This applies particularly to binary
polynomials, and thus this method can be used to solve arbitrary constrained combinatorial problems with a
quadratic speedup over classical exhaustive search.
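The threshold-update structure of the Dürr–Høyer algorithm can be sketched classically. In the sketch below (hypothetical black-box function), the Grover subroutine, which is where the quadratic speedup resides, is replaced by uniform sampling from the set of states the oracle g_y would mark.

```python
import random

# Classical stand-in for the Durr-Hoyer minimum-finding loop. The call to
# random.choice(marked) replaces amplitude amplification over the marked set
# {x : f(x) < y}; everything else mirrors the threshold-update outer loop.
random.seed(0)
f = {x: (x * 37) % 101 for x in range(101)}  # hypothetical black-box function

def grover_min(f):
    x = random.choice(list(f))          # random initial threshold point
    for _ in range(len(f)):             # f(x) strictly decreases every round
        y = f[x]
        marked = [z for z in f if f[z] < y]   # states the oracle g_y marks
        if not marked:
            break                        # x already attains the global minimum
        x = random.choice(marked)        # stand-in for the Grover search step
    return x

xmin = grover_min(f)
print(xmin, f[xmin])  # the global minimum, f = 0 at x = 0
```

Because f(x) strictly decreases each round, the loop terminates at the global minimum; the quantum version reduces the cost of each marked-set search from linear to square-root in the domain size.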

6.2 Convex Optimization


Quantum algorithms have been invented for common convex optimization problems [27] such as linear program-
ming, second-order cone programming (SOCP), and semidefinite programming (SDP). These conic programs have
a variety of financial applications, such as portfolio optimization, discussed in Section 6.4.1. The quantum linear
systems algorithms from Section 4.5 have been used as subroutines to improve algorithms for linear programming
such as the simplex method [279] in the work by Nannicini [280] and interior-point methods (IPMs) [229] in the
work by Kerenidis and Prakash [101]. The quantum IPM utilizes QLS solvers with tomography to accelerate the
computation of the Newton linear system. Kerenidis and Prakash [281] also applied a quantum IPM to provide,
under certain conditions, small polynomial speedups for the primal-dual IPM method for SOCP [282].
Furthermore, various approaches using quantum computing to solve SDPs have been presented. Brandão [283], Brandão
et al. [284], and van Apeldoorn et al. [285] proposed quantum improvements to the multiplicative-weights method
of Arora and Kale [286] for SDP. Alternatively, Kerenidis and Prakash applied their quantum IPM to SDP [101].

6.3 Large-Scale Optimization


Near-term quantum devices are expected to have a limited number of qubits in the foreseeable future. This is
a major obstacle for real-world applications in finance, which are often large scale. The hybridization of
quantum and classical computers using decomposition-based approaches is a prominent solution for this problem
[287]. The idea behind this class of approaches is similar to what is done with numerical methods and
high-performance computing. The main driving routine, executed on a classical machine, decomposes the
problem into multiple parts. These subproblems are then each solved separately on a quantum device. The
classical computer then combines the solutions received from the quantum computer. For example, in quantum
local search for modularity optimization [288] and an extension of it to a multilevel approach [289], the main
driving routine identifies small subsets of variables and solves each subproblem on a quantum device. A similar
decomposition approach for finding maximum graph cliques was developed by Chapuis et al. [290].
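A schematic of this decomposition loop follows; the QUBO instance, the subset size, and the brute-force subproblem solver (standing in for a small quantum device) are all hypothetical.

```python
import itertools, random

# Decomposition sketch for large-scale problems: a classical driver repeatedly
# freezes most variables and exactly solves a small subproblem, standing in
# for calls to a qubit-limited quantum device. Instance data are hypothetical.
random.seed(1)
N, SUB = 8, 3
Q = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

def energy(x):
    # QUBO objective x^T Q x for a 0/1 assignment x.
    return sum(Q[i][j] * x[i] * x[j] for i in range(N) for j in range(N))

def patched(x, idx, bits):
    # Current assignment with the subproblem variables idx replaced by bits.
    y = list(x)
    for slot, i in enumerate(idx):
        y[i] = bits[slot]
    return y

x = [random.randint(0, 1) for _ in range(N)]
e0 = energy(x)
for _ in range(20):                     # classical driving routine
    idx = random.sample(range(N), SUB)  # variables handed to the "QPU"
    bits = min(itertools.product((0, 1), repeat=SUB),
               key=lambda b: energy(patched(x, idx, b)))
    x = patched(x, idx, bits)           # merge the subproblem solution back
print(e0, energy(x))  # the energy never increases across iterations
```

Each subproblem solve includes the current assignment among its candidates, so the combined energy is monotonically non-increasing, which is the basic soundness property of quantum local search.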

6.4 Financial Applications


In this section we explore multiple applications of the quantum optimization algorithms presented above to
financial problems.

6.4.1 Portfolio Optimization


One of the most common optimization problems in finance is portfolio optimization: the process
of selecting the best set of assets, and their quantities, from a pool of assets being considered,
according to some predefined objective. The objective can vary depending on the investor’s preference regarding
financial risk and expected return. Modern portfolio theory [291] focuses on the trade-offs between risk and
return to produce what is known as an efficient portfolio, which maximizes the expected return given a certain
amount of risk. This trade-off relationship is represented by a curve known as the efficient frontier. The expected
return and risk of a financial portfolio can often be modeled respectively by looking at the mean and variance
of static portfolio returns. The problem setup for portfolio optimization can be formulated as constrained utility
maximization [291].

In portfolio optimization, each asset class, such as stocks, bonds, futures, and options, is assigned a weight.
Additionally, all assets within the class are allocated in the portfolio depending on their respective risks, returns,
time to maturity, and liquidity. The variables controlling how much the portfolio should invest in an asset can be
continuous or discrete, and the overall problem can contain variables of both types. Continuous variables
are suitable for representing the proportion of the portfolio invested in the asset positions when non-discrete
allocations are allowed. Discrete variables are used in situations where assets can be purchased or sold only
in fixed amounts. The signs of these variables can also be used to indicate long or short positions. When
risk is accounted for, the problem is typically a quadratic program, usually with constraints. Depending on the
formulation, this problem can be solved with techniques for convex [291] or mixed-integer programming [292, 293].
Speeding up the execution and improving the quality of portfolio optimization are particularly important when
the number of variables is large.
This section first focuses on combinatorial formulations using the algorithms mentioned in Section 6.1. As
mentioned there, the algorithms presented can be generalized to integer optimization problems. The second part
of this section focuses on convex formulations of the problem solved by using algorithms from Section 6.2. The
handling of mixed-integer programming was briefly mentioned at the beginning of Section 6.

Combinatorial Formulations. The first combinatorial formulation considered is risk minimization or
hedging [294]. The risk term is quadratic and is derived from the covariance or correlation matrix computed
by using historical pricing information. A simplified version of this problem can be formulated as the following
QUBO:
min_{~x ∈ B^N} ~x^T Σ ~x, (12)

where Σ ∈ R^{N×N} is the covariance or correlation matrix. This problem can be modified to include budget
constraints. With regard to this problem, Kalra et al. used a time-indexed correlation graph such that the vertices
represent assets and an edge between vertices represents the existence of a significant correlation between them
[295]. Price changes are modeled from daily data. Instead of using continuous correlation values, Kalra et
al. applied a threshold to the correlations to decide whether to create edges between assets, in order to make
the graph sparse. One approach they took to solve risk minimization was formulating the problem as finding
a maximum-independent set in the graph constructed. This is similar to Equation (12). They also presented a
modified formulation suitable for the minimum graph coloring problem. Instead of only choosing whether to pick
an asset or not, Rosenberg and Rounds [296] used binary variables to represent whether to hold long or short
positions. This approach is known as long-short minimum risk parity optimization. The authors represented the
problem by an Ising model, which is equivalent to a QUBO problem, as follows:

min_{~s ∈ {−1,1}^N} ~s^T W^T Σ W ~s,

where W is an N × N diagonal matrix with the entries of ~w ∈ [0, 1]^N, such that ‖~w‖_1 = 1, on the diagonal; ~w
contains a fixed weight for each asset j indicating the proportion of the portfolio that should hold a long or short
position in j. In both cases, the authors utilized quantum annealing (Section 6.1.1).
Instead of just risk minimization, the problem can be reformulated to specify a desired fixed expected return,
µ ∈ R,
min_{~x ∈ B^N} ~x^T Σ ~x : ~r^T ~x = µ, ~x^T ~1 = β, (13)

or to maximize the expected return,

min_{~x ∈ B^N} q ~x^T Σ ~x − ~r^T ~x : ~x^T ~1 = β, (14)

where ~x is a vector of Boolean decision variables, ~r ∈ R^N contains the expected returns of the assets, Σ is the
same as above, q > 0 is the risk level, and β ∈ N is the budget. In both cases, the Lagrangian of the problem,
where the dual variables are used to encode user-defined penalties, can be mapped to a QUBO (Section 6.1).
These formulations are known as mean-variance portfolio optimization problems [291]. More constraints can also
be added to these problems.
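For a toy instance, the penalty-term construction for Equation (14) and a brute-force solve (standing in for an annealer or QAOA sampler) look as follows; the returns, covariances, and weights are hypothetical.

```python
import itertools

# Mean-variance asset selection as in Eq. (14), with the budget constraint
# x^T 1 = beta moved into the objective as a penalty term and the resulting
# QUBO solved by exhaustive search. All data are hypothetical.
r = [0.10, 0.07, 0.05, 0.12]                   # expected returns
Sigma = [[0.05, 0.01, 0.00, 0.02],             # covariance matrix
         [0.01, 0.04, 0.01, 0.00],
         [0.00, 0.01, 0.03, 0.01],
         [0.02, 0.00, 0.01, 0.06]]
q, beta, penalty = 0.5, 2, 10.0                # risk level, budget, penalty weight

def cost(x):
    risk = sum(q * Sigma[i][j] * x[i] * x[j] for i in range(4) for j in range(4))
    ret = sum(r[i] * x[i] for i in range(4))
    budget = (sum(x) - beta) ** 2              # penalty enforcing x^T 1 = beta
    return risk - ret + penalty * budget

best = min(itertools.product((0, 1), repeat=4), key=cost)
print(best, sum(best))  # the chosen portfolio satisfies the budget constraint
```

The penalty weight plays the role of the Lagrange dual variable mentioned above: it must be large enough that violating the budget is never worth the return gained.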
Hodson et al. [297] and Slate et al. [298] both utilized QAOA to solve a problem similar to Equation (14).
They allowed for three possible decisions: long position, short position, or neither. In addition, they utilized
mixing operators (Section 6.1.2) that accounted for constraints. Alternatively, Gilliam et al. [278] utilized an
extension of Grover adaptive search (Section 6.1.5) to solve Equation (14) with a quadratic speedup over classical
unstructured search. Rosenberg et al. [299] and Mugel et al. [300] solved dynamic versions of Equation (14),
where portfolio decisions are made for multiple time steps. Additionally, an analysis of benchmarking quantum
annealers for portfolio optimization can be found in a paper by Grant et al. [301]. Furthermore, an approach
using reverse quantum annealing (Section 6.1.1) was proposed by Venturelli and Kondratyev (Section 8.2). Note
that any of the methods from Section 6.1 can be applied whenever quantum annealing can. Currently, however,

because quantum annealers possess more qubits than existing gate-based devices do, more experimental
results with larger problems have been obtained with annealers.

Convex Formulations. This part of the section focuses on formulations of the portfolio optimization
problem that are convex. The original formulation of mean-variance portfolio optimization by Markowitz [291]
was convex and did not contain the integer constraints included in the discussion above. Accordingly, if
the binary variable constraints are relaxed, the optimization problem represented by Equation (13) can be
reformulated as a convex quadratic program with linear equality constraints with both a fixed desired return and
budget (Equation 15):

min_{~w ∈ R^N} ~w^T Σ ~w : ~p^T ~w = ξ, ~r^T ~w = µ, (15)

where ~w ∈ R^N is the allocation vector. The ith entry of ~w represents the proportion of the portfolio
that should invest in asset i. This is in contrast to the combinatorial formulations, where the result was a Boolean
vector. The vectors ~p ∈ R^N and ~r ∈ R^N contain the assets' prices and returns, respectively; ξ ∈ R is the budget;
and µ ∈ R is the desired expected return. In contrast to Equation (13), Equation (15) admits a closed-form solution
[302]. However, more problem-specific conic constraints and cost terms (assuming these terms are convex) can be
added, increasing the complexity of the problem, in which case it can potentially be solved with more sophisticated
algorithms (e.g., interior-point methods). As mentioned, there exist polynomial speedups provided by quantum
computing for common cone programs (Section 6.2). Equation (15) with additional positivity constraints on the
allocation vector can be represented as an SOCP and solved with quantum IPMs [302].
Considering the exact formulation in Equation (15), however, we might be able to obtain superpolynomial
speedup if we relax the problem even further. Rebentrost and Lloyd [303] presented a relaxed problem where
the goals were to sample from the solution and/or compute statistics. Their approach was to apply the method
of Lagrange multipliers to Equation (15) and obtain a linear system. This system can be formulated as a
quantum linear systems problem and solved with QLS methods (Section 4.5). The result of solving the QLSP
(Section 4.5) is a normalized quantum state corresponding to the solution found with time complexity that is
polylogarithmic in the system size, N . Thus, if only partial information about the solution is required, QLS
methods potentially provide an exponential speedup, in N , over all known classical algorithms for this scenario,
although the sparsity and well-conditioned matrix assumptions mentioned in Section 4.5 have to be met to
obtain this speedup. As mentioned in Section 4.5, however, in scenarios involving low-rank matrices randomized
numerical linear algebra algorithms can be applied to obtain the same exponential speedup in dimension [304].
However, these methods typically have a high polynomial dependence on the rank and precision, which still allows
for a potential polynomial quantum advantage. This dequantized approach was benchmarked in [305].
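The Lagrange-multiplier construction behind this approach can be sketched classically: stationarity of the Lagrangian of Equation (15) yields the saddle-point linear system assembled below, which is the system a quantum linear systems solver would be applied to. The asset data are hypothetical.

```python
# Closed-form solution of Eq. (15) via Lagrange multipliers (hypothetical data).
# Stationarity gives the linear system [2*Sigma p r; p^T 0 0; r^T 0 0] z = b
# in z = (w, lambda_1, lambda_2), solved here by plain Gaussian elimination.
Sigma = [[0.04, 0.01, 0.00],
         [0.01, 0.05, 0.01],
         [0.00, 0.01, 0.06]]
p = [1.0, 1.0, 1.0]     # asset prices
r = [0.08, 0.06, 0.10]  # expected returns
xi, mu = 1.0, 0.08      # budget and desired expected return

A = [[2 * Sigma[i][j] for j in range(3)] + [p[i], r[i]] for i in range(3)]
A += [p + [0.0, 0.0], r + [0.0, 0.0]]
b = [0.0, 0.0, 0.0, xi, mu]

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting on the augmented matrix.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda k: abs(M[k][c]))
        M[c], M[piv] = M[piv], M[c]
        for k in range(n):
            if k != c and M[k][c]:
                f = M[k][c] / M[c][c]
                M[k] = [a - f * v for a, v in zip(M[k], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

z = solve(A, b)
w = z[:3]  # optimal allocation; z[3:] holds the multipliers
print(sum(w), sum(wi * ri for wi, ri in zip(w, r)))  # budget ~1.0, return ~0.08
```

A QLS method prepares a quantum state proportional to z instead of the full vector, which is why only sampling or statistics of the solution retain the potential exponential speedup.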

6.4.2 Swap Netting


A swap is a contract where two parties agree to exchange cash flows periodically for a specific term [306]. Common
types of swaps are credit default swaps [307], foreign exchange currency swaps, and interest rate swaps [308].
The simplest example is the fixed-to-floating interest rate swap, where two parties exchange the responsibility
of paying fixed-interest rate and floating-interest rate payments based on a principal amount called the notional
value [309]. Examples of potential reasons to enter into such a contract could be to hedge or take advantage
of the opposite party’s comparative advantage [309]. Once an agreement between the parties has been made, a
clearing house converts an agreement between two parties into two separate agreements with the clearing house.
The clearing house potentially has multiple swaps with multiple parties. In the case that multiple cash flows
cancel, the clearing house would like to net all of the contracts that cancel into a new one for only the net flow
[310]. This is to reduce the risk exposure associated with having multiple contracts.
Rosenberg et al. [306] formulated the task of finding a set of “nettable” swaps associated with a single
counterparty that maximizes the total notional value as a QUBO. This optimization problem can be solved in
parallel for each subset of potentially nettable swaps. Since swaps can be netted for potentially many reasons, the
set of combinations can grow significantly. The procedure proposed by the authors is as follows. First, partition
the set of swaps associated with a counterparty into subsets that can be netted. Second, for a subset M in the
partition, with N := |M|, construct the following QUBO:
max_{~x ∈ B^N} ~x^T ~p − α ( Σ_{i∈M} x_i d_i p_i )^2 − β ~x^T V ~x,

where ~x ∈ B^N is a vector of decision variables indicating whether to include each swap in the netted subset,
~p ∈ R^N is the vector of notional values associated with the swaps, and ~d ∈ {+1, −1}^N indicates the direction of
the fixed-rate interest payments: +1 if the clearing house pays the counterparty and −1 if the opposite is true.

V ∈ R^{N×N}_{≥0} signifies how incompatible two potentially “nettable” swaps are, and α and β weigh the importance
of the second and third terms. The combined goal of the three terms is to maximize the total notional value
netted (first term) such that the notional values of the selected swaps cancel (second term) and the swaps are
compatible (third term). The QUBO problems were solved by utilizing quantum annealing (Section 6.1.1).
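A brute-force sketch of this three-term objective on hypothetical data follows; an annealer would instead sample low-energy bit strings of the same QUBO.

```python
import itertools

# Swap-netting objective from the text, evaluated exhaustively on a toy
# subset of four swaps. All notionals, directions, and weights are hypothetical.
p = [5.0, 3.0, 2.0, 4.0]        # notional values
d = [+1, -1, -1, +1]            # direction of fixed-rate payments
V = [[0, 0, 0, 9],              # pairwise incompatibility: swaps 0 and 3 clash
     [0, 0, 0, 0],
     [0, 0, 0, 0],
     [9, 0, 0, 0]]
alpha, beta = 1.0, 1.0

def objective(x):
    total = sum(x[i] * p[i] for i in range(4))                    # notional netted
    net = sum(x[i] * d[i] * p[i] for i in range(4)) ** 2          # flows should cancel
    clash = sum(x[i] * V[i][j] * x[j] for i in range(4) for j in range(4))
    return total - alpha * net - beta * clash

best = max(itertools.product((0, 1), repeat=4), key=objective)
print(best, objective(best))  # the selected swaps' cash flows cancel exactly
```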

6.4.3 Optimal Arbitrage


An arbitrage opportunity occurs when one can profit from buying and selling similar financial assets in different
markets because of price differences [311]. Arbitrage is due to the lack of market equilibrium; acting on arbitrage
opportunities should shift the market back into equilibrium [312]. Thus, it is to one’s advantage to be able to
identify these opportunities as quickly and efficiently as possible. An example is currency arbitrage, where one
can buy and sell currencies in foreign exchange (FX) markets such that when one ends up holding the initial
currency again, one has profited because of the differences in FX rates (conversion rates) [311].
One way to formulate the problem of arbitrage detection is to view the different assets as the vertices of a
graph with weighted edges between them indicating the negative logarithm of the conversion rate. The problem
then reduces to identifying negative cycles [313], for which multiple classical algorithms exist [314]. However, the
arbitrage opportunity detected is not necessarily the optimal one.
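The negative-cycle formulation can be sketched with Bellman–Ford on hypothetical FX rates: edge weights are the negative logarithms of the conversion rates, so a cycle whose rates multiply to more than one has negative total weight.

```python
import math

# Detect a currency-arbitrage opportunity as a negative cycle in the graph of
# -log(conversion rate) edge weights. The FX rates below are hypothetical.
rates = {('USD', 'EUR'): 0.9, ('EUR', 'GBP'): 0.9, ('GBP', 'USD'): 1.3}
edges = [(u, v, -math.log(r)) for (u, v), r in rates.items()]
nodes = {n for pair in rates for n in pair}

def has_negative_cycle(nodes, edges):
    # Bellman-Ford from a virtual source connected to every node (dist = 0).
    dist = {n: 0.0 for n in nodes}
    for _ in range(len(nodes) - 1):
        for u, v, w in edges:
            dist[v] = min(dist[v], dist[u] + w)
    # If one more relaxation round still improves a distance, a negative
    # cycle (an arbitrage loop) exists.
    return any(dist[u] + w < dist[v] - 1e-12 for u, v, w in edges)

print(has_negative_cycle(nodes, edges))  # True: 0.9 * 0.9 * 1.3 > 1
```

This detects the existence of an arbitrage loop in polynomial time; finding the optimal loop is the NP-hard problem addressed by the QUBO formulations discussed next.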
Soon and Ye [315] formulated the task of identifying the optimal currency arbitrage opportunity in the graph
structure mentioned above as a binary linear programming problem. Thus this problem is NP-hard (Section
6.1). Rosenberg [313] reformulated this as a QUBO and solved it using quantum annealing. Alternatively, this
can be solved by utilizing the other techniques for combinatorial optimization from Section 6.1. The QUBO
resulted from converting the constrained binary linear program of Soon and Ye into an unconstrained problem.
In addition, Rosenberg provided an alternative model that allows for arbitrage decisions that revisit the same
asset (i.e., the same vertex). By definition, this is not allowed in a cycle. Solving this formulation of the problem
effectively consists of selecting the assets to buy or sell and the point in the arbitrage loop at which the buying
or selling occurs. However, this results in a problem with significantly more binary variables [313].

6.4.4 Identifying Creditworthiness


Creditworthiness identification is an important problem in finance. With large sums of money being loaned
globally, it is important to be able to identify the key independent features that are influential in determining
the creditworthiness of the requester. The task of identifying features that balance independence and influence
can be expressed as a QUBO problem.
The problem setup for this combinatorial feature selection task is as follows. Assume that from an initial
set of N features, we want to select K to use. The data is cleaned and arranged in a matrix U ∈ R^{M×N}, where
each column represents a feature and each row contains the values of the features for one of the M past credit
recipients. The past decisions that were made regarding creditworthiness are represented in a column vector
~v ∈ B^M. The goal is to find a set of columns in U that are most correlated with ~v but not with each other.
The next step is to construct a quadratic objective function that is a convex combination of terms evaluating the
influence and independence of the features. The constraint of selecting K features can be accounted for by adding
a penalty term. Milne et al. [316] utilized quantum annealing to solve the resulting QUBO. However,
the problem can be solved by using any of the techniques mentioned in Section 6.1. Classical or quantum machine
learning algorithms (Section 7) can then be used to classify new applicants based on the features selected [316].
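A small sketch of such an objective follows. The data are hypothetical, and the correlation-based influence and independence terms are one plausible instantiation for illustration, not the exact objective of Milne et al.

```python
import itertools, math

# Feature-selection QUBO sketch: reward correlation with past decisions v,
# penalize correlation between chosen features, and enforce "choose K" with
# a penalty term. Solved by brute force; an annealer would sample it.
def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Columns of U: four candidate features over M = 6 past applicants (hypothetical).
U = [[1, 0, 1, 1, 0, 1],   # feature 0: matches v exactly
     [1, 0, 1, 0, 0, 1],   # feature 1: nearly duplicates feature 0
     [0, 1, 0, 1, 1, 0],   # feature 2
     [1, 1, 0, 0, 1, 0]]   # feature 3
v = [1, 0, 1, 1, 0, 1]     # past creditworthiness decisions
K, lam, penalty = 2, 0.5, 10.0

def cost(x):
    influence = sum(x[i] * abs(corr(U[i], v)) for i in range(4))
    overlap = sum(x[i] * x[j] * abs(corr(U[i], U[j]))
                  for i in range(4) for j in range(i + 1, 4))
    return -(influence - lam * overlap) + penalty * (sum(x) - K) ** 2

best = min(itertools.product((0, 1), repeat=4), key=cost)
print(best)  # K features chosen; feature 0 (most influential) is included
```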

6.4.5 Financial Crashes


A financial network can be viewed as a collection of entities whose valuations are interconnected with the valuations
of other members of the network. The analysis of financial networks is important for being able to predict crises.
Elliott et al. [317] developed a problem formulation for counting the number of institutions that fail in a financial
network. Their proposed problem was found to be NP-hard [318]. If solved, however, it allows for analyzing how
small changes in the network propagate through a web of complex interdependencies, something that is difficult
[319] with alternative techniques for crash prediction [320]. By modeling a financial system, mapping the system
to a QUBO problem, and perturbing the system to find the ground state it reaches, we can determine the effect
of small perturbations on the market.
We briefly discuss the formulation of Elliott et al. [317]. First, suppose the financial network consists of N
institutions and M assets, with prices ~p ∈ R^M_{≥0}, that the institutions can own. D ∈ R^{N×M}_{≥0} contains the percentage
of each asset owned by each institution. In addition, the institutions can have a stake (cross holdings) in each
other, represented by the matrix C ∈ R^{N×N}_{≥0}. The diagonal entries of C are set to 0, and the self-ownership is
represented by the diagonal matrix C̃ such that C̃_jj := 1 − Σ_i C_ij. The equity valuations, ~v, at equilibrium

satisfy
~v = C̃ (I − C)^{−1} (D~p − ~b(~v, ~p)), (16)

where ~b(~v, ~p) is a term that decreases the equity value further if it falls past a certain threshold. This signifies a
loss of investor confidence or the inability to pay operating costs, signifying failure [317].
Orús et al. [321] considered the problem of minimizing a quadratic function of the difference between the left
and right sides of Equation (16); the minimum is attained exactly when Equation (16) is satisfied, which allows
a variational method to be used. They then formulated this as a QUBO. Thus, after perturbing the system, the
new equilibrium can be found by solving the QUBO. The next step is to count the number of institutions that
go into failure as a function of the perturbation to assess the stability. The valuations ~v are continuous variables
that can be approximated by using finite binary expansions. This method converts the initial problem into a
binary optimization problem. Orús et al. solved the QUBO using quantum annealing, and Fellner et al. [246]
solved it with QAOA. However, the right side of Equation (16) contains terms that are not quadratic. Thus, the
resulting diagonal observable encoding it is not two-local (more specifically, not Ising). Orús et al. utilized the
approach of Chancellor et al. [235] to convert the higher-order terms into two-body interaction terms, at the cost
of extra variables. For the approach of Fellner et al., the problem Hamiltonian did not need to be mapped to an
Ising Hamiltonian (Section 6.1.2).
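A fixed-point sketch of Equation (16) for two institutions and two assets follows (all numbers hypothetical), showing how a price shock changes the number of failures at the new equilibrium, which is the quantity the QUBO approaches above are designed to compute at scale.

```python
# Iterate v <- Ctilde (I - C)^{-1} (D p - b(v, p)) to a fixed point and count
# failed institutions. Two institutions, two assets; all numbers hypothetical.
D = [[1.0, 0.0],          # institution 0 holds asset 0, institution 1 asset 1
     [0.0, 1.0]]
C = [[0.0, 0.3],          # cross-holdings; zero diagonal
     [0.2, 0.0]]
Ctilde = [1 - C[0][0] - C[1][0], 1 - C[0][1] - C[1][1]]  # C~_jj = 1 - sum_i C_ij
THRESH, DROP = 0.5, 0.4   # b(v, p): extra value drop once below the threshold

def step(v, p):
    rhs = [D[i][0] * p[0] + D[i][1] * p[1] - (DROP if v[i] < THRESH else 0.0)
           for i in range(2)]
    # y = (I - C)^{-1} rhs, written out in closed form for the 2x2 case.
    det = (1 - C[0][0]) * (1 - C[1][1]) - C[0][1] * C[1][0]
    y = [((1 - C[1][1]) * rhs[0] + C[0][1] * rhs[1]) / det,
         (C[1][0] * rhs[0] + (1 - C[0][0]) * rhs[1]) / det]
    return [Ctilde[0] * y[0], Ctilde[1] * y[1]]

def failures(p):
    v = [1.0, 1.0]
    for _ in range(100):
        v = step(v, p)
    return sum(vi < THRESH for vi in v)

print(failures([1.0, 1.0]), failures([0.25, 1.0]))  # 0 at baseline, 1 after a shock
```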

7 Machine Learning
The field of machine learning has become a crucial part of various applications in the finance industry. Rich
historical financial data and advances in machine learning make it possible, for example, to train sophisticated
models to detect patterns in stock markets, find outliers and anomalies in financial transactions, automatically
classify and categorize financial news, and optimize portfolios.
Quantum algorithms for machine learning can be further classified by whether they follow a fault-tolerant
approach, a near-term approach, or a mixture of the two approaches. The fault-tolerant approach typically
requires a fully error-corrected quantum computer, and the main goal is to find speedups compared with their
classical counterparts, achieved mostly by applying the quantum algorithms introduced in Section 4.5 as sub-
routines. Thus the applicability of these algorithms may be limited since most of them begin with first loading
the data into a quantum system, which can require exponential time [89, 322]. This issue can be addressed in
theory by having access to qRAM (Section 4.1); as of today, however, no concrete implementation exists. On
the other hand, with the development of more NISQ devices, near-term approaches aim to explore the capability
of these small-scale quantum devices. Quantum algorithms such as VQAs (Section 4.7) consider another type of
enhancement for machine learning: the quantum models might be able to generate correlations that are hard to
represent classically [323, 324]. Such models face many challenges, however, such as trainability, accuracy, and
efficiency, that need to be addressed in order to maintain the hope of achieving quantum advantage when scaling
up these near-term quantum devices [130]. Substantial effort is needed before we can answer the question of
whether quantum machine learning algorithms can deliver practical and useful applications, and it might take
decades. Nevertheless, quantum machine learning shows great promise for finding improvements and speedups
that vastly elevate current capabilities.

7.1 Regression
Regression is the process of fitting a numeric function to a training data set. This process is often used
to understand how a value changes as its attributes vary, and it is a key tool for economic forecasting.
Typically, the problem is solved by applying least-squares fitting, where the goal is to find a continuous function
that approximates a set of N data points {x_i, y_i}. The fit function is of the form

f(x, ~λ) := Σ_{j=1}^M f_j(x) λ_j,

where ~λ is a vector of parameters and f_j(x) is a function of x that can be nonlinear. The optimal parameters can
be found by minimizing the least-squares error:
min_{~λ} Σ_{i=1}^N ( f(x_i, ~λ) − y_i )^2 = |F~λ − ~y|^2,

where the N × M matrix F is defined through F_ij = f_j(x_i). Wiebe et al. [325] proposed a quantum algorithm to
solve this problem, where they encoded the optimal parameters of the model into the amplitudes of a quantum

state and developed a quantum algorithm for estimating the quality of the least-squares fit by building on the HHL
algorithm (Section 4.5). The algorithm consists of three subroutines: a quantum algorithm for performing the
pseudoinverse of F , an algorithm that estimates the fitting quality, and an algorithm for learning the parameter ~λ.
Later, Wang [326] proposed using the CKS matrix-inversion algorithm (Section 4.5), which has better dependence
on the precision in the output than HHL has, and used amplitude estimation (Section 4.4) to estimate the optimal
parameters. Kerenidis and Prakash developed algorithms utilizing QSVE (Section 4.5) to perform a coherent
gradient descent for normal and weighted least-squares regression [96]. Moreover, quantum annealing (Section
6.1.1) has been used to solve least-squares regression when formulated as a QUBO (Section 6.1) [327].
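Classically, the minimization above has the closed-form solution λ = F⁺y (the pseudoinverse applied to the targets), which is exactly the linear-algebraic step the quantum algorithms accelerate. A minimal sketch with a hypothetical polynomial basis f_j(x) = x^(j-1):

```python
import numpy as np

# Hypothetical polynomial basis f_j(x) = x^(j-1) with M = 3 terms.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + 3.0 * x**2 + 0.01 * rng.standard_normal(50)

# Design matrix F with F_ij = f_j(x_i).
F = np.vander(x, 3, increasing=True)

# Optimal parameters: lambda = F^+ y (pseudoinverse least-squares solution).
lam, *_ = np.linalg.lstsq(F, y, rcond=None)
residual = np.linalg.norm(F @ lam - y)
```

The recovered coefficients approximate the generating values (1, 2, 3) up to the injected noise.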
A Gaussian process (GP) is another widely used model for regression in supervised machine learning. It has
been used, for example, in predicting the price behavior of commodities in financial markets. Given a training
set of N data points {xi , yi }, the goal is to model a latent function f (x) such that
\[ y = f(x) + \text{noise}, \]
where noise ∼ N(0, σ_n²) is independent and identically distributed Gaussian noise. A practical implementation
of the Gaussian process regression (GPR) model needs to compute the Cholesky decomposition and therefore
requires O(N 3 ) running time. Zhao et al. [328] proposed using the HHL algorithm to speed up computations in
GPR. By repeated sampling of the results of specific quantum measurements on the output states of the HHL
algorithm, the mean predictor and the associated variance can be estimated with bounded error with potentially
an exponential speedup over classical algorithms.
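The O(N³) Cholesky step targeted by the quantum speedup is visible in a minimal classical GPR sketch (toy one-dimensional data and a squared-exponential kernel, both hypothetical choices):

```python
import numpy as np

def rbf(a, b, length=0.5):
    # Squared-exponential kernel k(a, b) = exp(-(a - b)^2 / (2 * length^2)).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

# Toy training set {x_i, y_i} with y = f(x) + noise, noise ~ N(0, sigma_n^2).
Xtr = np.array([0.0, 0.5, 1.0, 1.5])
ytr = np.sin(Xtr)
sigma_n = 0.1

# The O(N^3) step: Cholesky-factor K + sigma_n^2 I and solve for alpha.
K = rbf(Xtr, Xtr) + sigma_n**2 * np.eye(len(Xtr))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))

# Posterior mean predictor at a test point.
Xte = np.array([0.75])
mean = rbf(Xte, Xtr) @ alpha
```

The posterior mean at x = 0.75 lands close to the underlying sin(0.75), as expected for noiseless-in-expectation toy data.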
Another recent development is to use a feedforward quantum neural network with a quantum feature map,
i.e., a unitary with configurable parameters used to encode the classical input data, and a Hamiltonian cost
function evaluation for the purposes of continuous-variable regression in a technique called quantum circuit
learning (QCL) [329]. This technique allows for a low-depth circuit and has important application opportunities
in quantum algorithms for finance.

7.2 Classification
Classification is the process of placing objects into predefined groups. This type of process is also called pattern
recognition. This area of machine learning can be used effectively in risk management and large data processing
when the group information is of particular interest, for example, in creditworthiness identification and fraud
detection.

7.2.1 Quantum Support Vector Machine


In machine learning, support vector machines (SVMs) are supervised learning models used to analyze data for
classification and regression analysis. Given M training data elements of the form {(x_j, y_j) : x_j ∈ R^N, y_j = ±1},
the task for the SVM is to classify a vector into one of two classes; it finds a maximum-margin hyperplane with
normal vector w that separates the two classes. In addition, the margin is given by the two hyperplanes that
are parallel and separated by the maximum possible distance 2/||w||_2. Typically, we use the Lagrangian dual
formulation to maximize the function
\[ L(\alpha) = \sum_{j=1}^{M} y_j \alpha_j - \frac{1}{2} \sum_{j,k=1}^{M} \alpha_j K_{jk} \alpha_k, \]

where α = (α_1, ..., α_M)^T, subject to the constraints Σ_{j=1}^{M} α_j = 0 and y_j α_j ≥ 0. Here, we have a key quantity
for the supervised machine learning problem [330], the kernel matrix K_{jk} = k(x_j, x_k) = x_j · x_k. Solving the dual
form involves evaluating the dot products using the kernel matrix and then finding the optimal parameters αj
by quadratic programming, which takes O(M^3) time in the non-sparse case. A quantum SVM was first proposed by
Rebentrost et al. [331], who showed that a quantum SVM can be implemented with O(log MN) runtime in both
training and classification stages, under the assumption that there are oracles for the training data that return
the quantum states
\[ |x_j\rangle = \frac{1}{\|x_j\|_2} \sum_{k=1}^{N} (x_j)_k\, |k\rangle, \]
that the norms ||x_j||_2 and the labels y_j are given, and that the states are
constructed by using qRAM [70, 332]. The core of this quantum classification algorithm is a non-sparse matrix
exponentiation technique for efficiently performing a matrix inversion of the training data inner-product (kernel)
matrix. Lloyd et al. [332] showed that, effectively, a low-rank approximation to the kernel matrix is used due to
the eigenvalue filtering procedure used by HHL [88]. As mentioned in Section 6.2, Kerenidis et al. proposed
a quantum enhancement to second-order cone programming that was subsequently used to provide a small
polynomial speedup to the training of the classical `1 SVM [281]. In addition, classical SVMs using quantum-
enhanced feature spaces to construct quantum kernels have been proposed by Havlíček et al. [333]. One benefit
of quantum kernels is that there exist metrics for testing for potential quantum advantage [324]. In addition,
there have been techniques proposed to enable generalization [334, 335], which has been observed to be an issue
with standard quantum-kernel methods [336].
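The kernel-matrix inversion at the heart of the quantum SVM arises classically in the least-squares SVM variant, where training reduces to a single linear solve. A hedged sketch on hypothetical toy data (not from the cited works):

```python
import numpy as np

# Toy linearly separable data (hypothetical example, not from the survey).
X = np.array([[2.0, 2.0], [1.0, 2.5], [-2.0, -2.0], [-1.5, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
gamma = 10.0
M = len(y)

# Least-squares SVM: training reduces to one solve with the kernel matrix,
# the linear-system step that the quantum SVM accelerates.
K = X @ X.T
A = np.zeros((M + 1, M + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(M) / gamma
sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
b, alpha = sol[0], sol[1:]

def classify(x):
    # Decision function sign(sum_j alpha_j k(x_j, x) + b), linear kernel.
    return np.sign(alpha @ (X @ x) + b)
```

The bordered system enforces the constraint Σ_j α_j = 0; test points deep inside either cluster are classified correctly.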

7.2.2 Quantum Nearest-Neighbors Algorithm
The k-nearest neighbors technique is an algorithm that can be used for classification or regression. Given a data
set, the algorithm assumes that data points closer together are more similar and uses distance calculations to
group close points together and define a class based on the commonality of the nearby points. Classically, the
algorithm works as follows: (1) determine the distance between the query example and each other data point in
the set, (2) sort the data points in a list indexed from closest to the query example to farthest, and (3) choose the
first k entries and, if regression, return the mean of the k labels or, if classification, the mode of the k labels.
The computationally expensive step is to compute the distance between elements in the data set. The quantum
nearest-neighbor classification algorithm was proposed by Wiebe et al. [337], who used the Euclidean distance
and the inner product as distance metrics. The distance between two points is encoded in the amplitude of a
quantum state. They used amplitude amplification (Section 4.2) to amplify the probability of creating a state
to store the distance estimate in a register, without measuring it. They then applied the Dürr–Høyer algorithm
(Section 6.1.5). Later, Ruan et al. proposed using the Hamming distance as the metric [338]. More recently,
Basheer et al. have proposed using a quantum k maxima-finding algorithm to find the k-nearest neighbors and
use the fidelity and inner product as measures of similarity [339].
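The three classical steps can be sketched directly (hypothetical toy data; the quantum variants above replace the distance computation):

```python
import numpy as np
from collections import Counter

def knn_classify(X, labels, query, k=3):
    # (1) Euclidean distance from the query to every data point.
    d = np.linalg.norm(X - query, axis=1)
    # (2) indices sorted from closest to farthest.
    order = np.argsort(d)
    # (3) majority vote among the k nearest labels (classification case).
    votes = [labels[i] for i in order[:k]]
    return Counter(votes).most_common(1)[0][0]

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
labels = ["a", "a", "b", "b"]
pred = knn_classify(X, labels, np.array([0.95, 1.0]))
```

For a regression variant, step (3) would instead average the k nearest targets.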

7.3 Clustering
Clustering, or cluster analysis, is an unsupervised machine learning task. It explores and discovers the grouping
structure of the data. In finance, cluster analysis can be used to develop a trading approach that helps investors
build a diversified portfolio. It can also be used to analyze different stocks such that the stocks with high
correlations in returns fall into one basket.

7.3.1 Quantum k-Means Clustering


The k-means clustering algorithm (also known as Lloyd's algorithm) [340] clusters training examples into k
clusters based on their distances to each of the k cluster centroids. The algorithm for k-means clustering is as
follows: (1) choose initial values for the k centroids randomly or by some chosen method, (2) assign each vector
to the closest centroid, and (3) recompute the centroids for each cluster; then repeat steps (2) and (3) until the
clusters converge. The problem of optimally clustering data is NP-hard, and thus finding the optimal clustering
using this algorithm can be computationally expensive. Quantum computing can be leveraged to accelerate a
single step of k-means. For each iteration, the quantum Lloyd’s algorithm [332] estimates the distance to the
centroids in O(M log(M N )), while classical algorithms take O(M 2 N ), where M is the number of data points
and N is the dimension of the vector. Wiebe et al. [337] showed that, with the same technique they used for
the quantum nearest-neighbor algorithm described in Section 7.2, a step for k-means can be performed by using
a number of queries that scales as O(M√(k log(k))/ε). While a direct classical method requires O(kMN), the
quantum solution is substantially better if kN ≫ M. Kerenidis et al. [341] proposed q-means, which is the
quantum equivalent of δ-k-means. The q-means algorithm has a running time that depends polylogarithmically
on the number of data points. A NISQ version of k-means clustering using quantum computing has been proposed
by Khan et al. [342]. Another version of k-means clustering has been proposed by Miyahara et al. [343] based on
a quantum expectation-maximization algorithm for Gaussian mixture models.
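Steps (1)-(3) of Lloyd's algorithm can be sketched as follows (a purely classical toy example; the quantum algorithms above accelerate the distance-estimation step):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # (1) initialize centroids by sampling k distinct data points.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # (2) assign each vector to its closest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # (3) recompute each centroid as the mean of its cluster.
        centroids = np.array([X[assign == j].mean(axis=0) for j in range(k)])
    return centroids, assign

# Two well-separated toy blobs (hypothetical data).
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
centroids, assign = kmeans(X, 2)
```

On this toy input the two blobs are recovered exactly, with centroids at the blob means.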

7.3.2 Quantum Spectral Clustering


Spectral clustering methods [344] have had great successes in clustering tasks but suffer from a fast-growing
running time of O(N 3 ), where N is the number of points in the data set. These methods obtain a solution for
clustering by using the principal eigenvectors of the data matrix or the Laplacian matrix. Daskin [345] employed
QAE (Section 4.4) and QPE (Section 4.3) for spectral clustering. Apers et al. [346] proposed quantum algorithms
using the graph Laplacian for machine learning applications, including spectral k-means clustering. More recently,
Kerenidis and Landman [347] proposed a quantum algorithm to perform spectral clustering. They first created
a quantum state corresponding to the projected Laplacian matrix, which is efficient assuming qRAM access, and
then applied the q-means algorithm [341]. Both steps depend polynomially on the number of clusters k and
polylogarithmically on the dimension of the input vectors.
More discussions on unsupervised quantum machine learning techniques can be found in the works of Otter-
bach et al. [348] and Aïmeur et al. [349, 350]. Kerenidis et al. [351] discussed a quantum version of expectation-
maximization, a common tool used in unsupervised machine learning. Kumar et al. described a technique for
using quantum annealing for combinatorial clustering [352].

7.4 Dimensionality Reduction
The primary linear technique for dimensionality reduction, principal component analysis, performs a linear map-
ping of the data to a lower-dimensional space in such a way that the variance of the data in the low-dimensional
representation is maximized. Generally, principal component analysis mathematically amounts to finding domi-
nant eigenvalues and eigenvectors of a very large matrix. The standard context for PCA involves a data set with
observations on M variables for each of N objects. The data defines an N × M data matrix X, in which the
jth column is the vector ~xj of observations on the jth variable. The aim is to find a linear combination of the
columns of the matrix X with maximum variance. This process boils down to finding the largest eigenvalues and
corresponding eigenvectors of the covariance matrix. Lloyd et al. [353] proposed quantum PCA. In addition, they
showed that multiple copies of a quantum system with density matrix ρ can be used to construct the unitary
transformation e−iρt , which leads to revealing the eigenvectors corresponding to the large eigenvalues in quan-
tum form for an unknown low-rank density matrix. He et al. introduced a low-complexity quantum principal
component analysis algorithm [354]. Other quantum algorithms for PCA were proposed in [355–357]. Lastly, Li
et al. [358] presented a quantum version of kernel PCA, Cong [359] presented quantum discriminant analysis,
and Kerenidis and Luongo [360] developed quantum slow feature analysis.
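The classical computation that quantum PCA targets, extracting the dominant eigenpairs of the covariance matrix, can be sketched on hypothetical correlated data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data: N = 200 objects, M = 2 strongly correlated variables.
t = rng.standard_normal(200)
X = np.column_stack([t, t + 0.1 * rng.standard_normal(200)])

# Center the data and form the covariance matrix.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)

# Principal components are the dominant eigenvectors of the covariance.
eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalue order
top = eigvecs[:, -1]
explained = eigvals[-1] / eigvals.sum()
```

Because the two variables nearly coincide, the leading component points along the diagonal and captures almost all of the variance.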
For data sets that are high dimensional, incomplete, and noisy, the extraction of information is generally
challenging. Topological data analysis (TDA) is an approach to study qualitative features and analyze and
explore the complex topological and geometric structures of data sets. Persistent homology (PH) is a method
used in TDA to study qualitative features of data that persist across multiple scales. Lloyd et al. [361] proposed
a quantum algorithm to compute the Betti numbers, that is, the numbers of connected components, holes, and
voids in the data set at varying scales. Essentially, the algorithm operates by finding the eigenvectors and
eigenvalues of the combinatorial Laplacian and estimating Betti numbers to all orders and to accuracy δ in time
O(n^5/δ). The most significant speedup is for dense clique complexes [362]. An improved version as well as a
NISQ version of the quantum algorithm for PH has been proposed by Ubaru et al. [363]. Further improvements
for implementing the Dirac operator, required for PH, have been made by Kerenidis and Prakash [364].

7.5 Generative Models


Unsupervised generative modeling is at the forefront of deep learning research. The goal of generative modeling
is to model the probability distribution of observed data and generate new samples accordingly. One of the
most promising aspects of achieving potential quantum advantage lies in the sampling advantage, and many
applications in finance require generating samples from complex distributions. Therefore, further investigations
of generative quantum machine learning for finance are needed.

7.5.1 Quantum Circuit Born Machine


The quantum circuit Born machine (QCBM) [365] directly exploits the inherent probabilistic interpretation
of quantum wave functions and represents a probability distribution using a quantum pure state instead of
the thermal distribution. Liu et al. [366] developed an efficient gradient-based learning algorithm to train the
QCBM. Numerical simulations suggest that in the task of learning the distribution, QCBM at least matches
the performance of the restricted Boltzmann machine (Section 7.5.3) and demonstrates superior performance as
the model scales [367]. QCBM was also used to model copulas [368], a family of multivariate distributions with
uniform marginals. Recently, Kyriienko et al. [193] proposed separating the training and sampling stages. In
the training stage, they build a model in the latent space, followed by a variational circuit. They showed how
probability distributions can be trained and sampled efficiently, and how SDEs can act as differential constraints
on such trainable quantum models.
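The Born rule underlying the QCBM can be illustrated with a minimal classical simulation: a normalized pure state assigns probability |ψ_x|² to each basis string x (the state below is a hypothetical example, not a trained model):

```python
import numpy as np

# A hypothetical 2-qubit pure state (amplitudes must be normalized).
psi = np.array([0.5, 0.5j, -0.5, 0.5])
assert np.isclose(np.linalg.norm(psi), 1.0)

# Born rule: measurement outcomes follow p(x) = |<x|psi>|^2.
p = np.abs(psi) ** 2

# Sampling from the Born distribution over basis states 00, 01, 10, 11.
rng = np.random.default_rng(7)
samples = rng.choice(4, size=10_000, p=p)
freq = np.bincount(samples, minlength=4) / 10_000
```

Training a QCBM amounts to tuning circuit parameters so that this induced distribution matches the data distribution.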

7.5.2 Quantum Bayesian Networks


Bayesian networks are probabilistic graphical models [369] representing random variables and conditional de-
pendencies via a directed acyclic graph, where each edge corresponds to a conditional dependency and each
node corresponds to a unique random variable. Bayesian inference on this graph has many applications, such as
prediction, anomaly detection, diagnostics, reasoning, and decision-making under uncertainty. Exact inference
on such networks is #P-hard, but a quadratic speedup in certain parameters can be obtained by
using quantum technologies [370]. Mapping this problem to a quantum Bayesian network seems plausible since
quantum mechanics naturally describes a probabilistic distribution. Tucci [371] introduced the quantum Bayesian
network as an analogue to classical Bayesian networks, using quantum complex amplitudes to represent the con-
ditional probabilities in a classical Bayesian network. Borujeni et al. [372] proposed a procedure to design a
quantum circuit to represent a generic discrete Bayesian network: (1) map each node in the Bayesian network to
one or more qubits, (2) map conditional probabilities of each node to the probability amplitudes associated with
qubit states, and (3) obtain the desired probability amplitudes through controlled rotation gates. In a later work
they tested it on IBM quantum hardware [373]. The applications in finance include portfolio simulation [374]
and decision-making modeling [375].
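Step (3) of the procedure above maps probabilities to rotation angles: RY(θ) applied to |0⟩ yields P(|1⟩) = sin²(θ/2), so a node probability p is encoded with θ = 2 arcsin(√p). A minimal single-qubit sketch (hypothetical p, simulated with plain linear algebra):

```python
import numpy as np

def ry(theta):
    # Single-qubit RY rotation matrix.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# Encode a node probability p into a qubit amplitude.
p = 0.3
theta = 2 * np.arcsin(np.sqrt(p))
state = ry(theta) @ np.array([1.0, 0.0])   # RY(theta)|0>
prob_one = state[1] ** 2                   # P(|1>) = sin^2(theta/2) = p
```

Conditional probabilities are handled the same way, with the rotation controlled on the parent qubits.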

7.5.3 Quantum Boltzmann Machines


A Boltzmann machine is an undirected probabilistic graphical model (i.e., a Markov random field) [369] inspired
by thermodynamics. Inference, classically, is usually performed by utilizing Markov chain Monte Carlo methods
to sample from the model’s equilibrium distribution (i.e., the Boltzmann distribution) [376]. Because of the
intractability of the partition function in the general Boltzmann machine, the graph structure is typically bipartite,
resulting in a restricted Boltzmann machine [377, 378]. These methods were more popular prior to the current
neural-network-based deep learning revolution [376]. Quantum Boltzmann machines have been implemented
by utilizing quantum annealing (Section 6.1.1) [379–381]. This is possible because of the annealer’s ability
to approximately sample ground states of Ising models. In addition, a gate-based variational approach using
VarQITE (Section 6.1.4) has been designed by Zoufal et al. [273]. Since quantum states are always normalized,
this approach allows for implementing Boltzmann machines with more complex graph structures.

7.5.4 Quantum Generative Adversarial Networks


Generative adversarial networks (GANs) represent a powerful tool for classical machine learning: a generator
tries to create statistics for data that mimic those of the true data set, while a discriminator tries to discriminate
between the true and fake data. Lloyd and Weedbrook [382] introduced the notion of quantum generative
adversarial networks (QuGANs), where the data consists either of quantum states or of classical data and the
generator and discriminator are equipped with quantum information processors. They showed that when the
data consists of samples of measurements made on high-dimensional spaces, quantum adversarial networks may
exhibit an exponential advantage over classical adversarial networks. QuGANs have been used to learn and load
random distributions and can facilitate financial derivative pricing [159].

7.6 Quantum Neural Networks


Neural networks (NNs) have achieved many practical successes over the past decade, thanks in large part to the
computational power of GPUs. Meanwhile, the growing interest in quantum computing has led researchers to
develop different variants of quantum neural networks (QNNs). However, many challenges arise in developing
quantum analogues of the required components of NNs, such as the perceptron, an optimization algorithm, and
a loss function. A few different approaches to QNNs have been developed. Most, however, follow the same steps
[383]: initialize a network architecture; specify a learning task; implement a training algorithm; and simulate the
learning task.
Current proposals for QNNs, discussed in the following subsections, involve a diverse collection of ideas with
varying degrees of proximity to classical neural networks.

7.6.1 Quantum Feedforward Neural Network


Cao et al. [384] proposed a quantum neuron and demonstrated its applicability as a building block of quantum
neural networks. Their approach uses repeat-until-success techniques for quantum gate synthesis. Kerenidis et
al. developed a quantum analogue to the orthogonal feedforward neural network [385]; the orthogonality of the
weight matrix is naturally preserved by using the reconfigurable beam splitter gate [386]. In addition, quantum
algorithms can potentially speed up the operations used by classical feedforward neural networks [387]. Another
quantum feedforward NN has been developed by Beer et al. [383].
A different proposed path to designing quantum analogues to feedforward NNs on near-term devices is by
using parameterized quantum circuits (Section 4.7) [333, 388–391]. Various attempts have been made to analyze
the expressive power of the circuit-based QNN [324, 392–395]. Interestingly, the circuit-based QNN can be viewed
as a kernel method with a quantum-enhanced feature map [324, 333, 394, 396]. Furthermore, analysis has been
performed in the neural tangent kernel framework [397] to show the benefits of using a quantum-enhanced feature
map [333] in conjunction with a classical NN [398].

7.6.2 Quantum Convolutional Neural Network


Convolutional neural networks (CNNs) were originally developed by LeCun et al. [399] in the 1980s. Recently,
Kerenidis et al. [400] proposed a quantum convolutional neural network as a shallow circuit, completely
reproducing the classical CNN, by allowing nonlinearities and pooling operations. They used a new quantum
tomography algorithm with `∞ norm guarantees and new applications of probabilistic sampling in the context of
information processing. Quantum Fourier convolutional neural networks have been developed by Shen and Liu
[401]. Cong et al. [402] developed circuit-based quantum analogues to CNNs.

7.6.3 Quantum Graph Neural Networks


Graph neural networks have recently attracted a lot of attention from the machine learning community. Effective
neural networks are designed by modeling interactions (edges) between objects of interest (nodes) and assigning
features to nodes and edges. The approach exploits real-world interconnectedness in finance. Graph neural
networks are particularly helpful for training low-dimensional vector space representations of nodes that encode
both complex graph structure and node/edge features. Verdon et al. [403] introduced quantum graph neural
networks (QGNNs). This is a new type of quantum neural network ansatz that takes advantage of the graph
structure of quantum processes and makes them effective on distributed quantum systems. In addition, quantum
graph recurrent neural networks [404], quantum graph convolutional neural networks, and quantum spectral
graph convolutional neural networks are introduced as special cases. A networked quantum system is described
with an underlying graph G = (V, E) in which each node v corresponds to a quantum subsystem with its Hilbert
space H_v, such that the entire system forms a Hilbert space
\[ \mathcal{H} = \bigotimes_{v \in V} \mathcal{H}_v. \]
This can be extended to edges H_E, and each node can represent more complicated quantum systems than a single qubit (e.g., several qubits
or the entire part of a distributed quantum system). With this representation, the edges of a graph describe
communication between vertex subspaces. The QGNN ansatz is a parameterized quantum circuit on a network,
which consists of a sequence of different Hamiltonian evolutions, with the whole sequence repeated several times
with variational parameterization. In this setting, the parameterized Hamiltonians have a graph topology of
interactions.

7.7 Quantum Reinforcement Learning


Reinforcement learning is an area of machine learning that considers how agents ought to take actions in an
environment in order to maximize their reward. It has been applied to many financial applications, including
pricing and hedging of contingent claims, investment and portfolio allocation, buying and selling of a portfolio
of securities subject to transaction costs, market making, asset liability management, and optimization of tax
consequences.
The standard framework of reinforcement learning is based on discrete-time, finite-state Markov decision
processes (MDPs). It comprises five components {S, A_i, p_{ij}(a), r_{i,a}, V}, with i, j ∈ S and a ∈ A_i, where S is the state
space; A_i is the action space for state i; p_{ij}(a) is the transition probability; r is the reward function, defined as
r : Γ → [−∞, +∞], where Γ = {(i, a) | i ∈ S, a ∈ A_i}; and V is a criterion function or objective function. The
MDP has a history of successive states and decisions, defined as hn = (s0 , a0 , s1 , a1 , · · · , sn−1 , an−1 , sn ), and the
policy π is a sequence π = (π0 , π1 , · · · ). When the history at n is hn , the agent will make a decision according to
the probability distribution πn (·|hn ) on Asn . At each step t, the agent observes the state of the environment st
and then chooses an action at . The agent will then receive a reward rt+1 . The environment then will change to
the next state st+1 under action at , and the agent will again observe and act. The goal of reinforcement learning
is to learn a mapping from states to actions that maximizes the expected sum of discounted rewards or, formally,
to learn a policy π : S × ∪_{i∈S} A_i → [0, 1], such that

\[ \max V_s^{\pi} = \mathbb{E}\left[ r_{t+1} + \gamma r_{t+2} + \cdots \mid s_t = s, \pi \right] = \sum_{a \in A_s} \pi(s, a) \Big[ r_{sa} + \gamma \sum_{s'} p_{ss'} V_{s'}^{\pi} \Big], \]

where γ ∈ [0, 1] is a discount factor and π(s, a) is the probability of the agent selecting action a at state s
according to policy π.
Dong et al. [405] proposed representing the state set with a quantum superposition state; the eigenstate
obtained by randomly observing the quantum state is the action. The probability of the eigen action is determined
by the probability amplitude, which is updated in parallel according to the rewards. The probability of the “good”
action is amplified by using iterations of Grover’s algorithm (Section 4.1). In addition, they showed that the
approach makes a good trade-off between exploration and exploitation using the probability amplitude and can
speed up learning through quantum parallelism. Paparo et al. [406] showed that the computational complexity
of a particular model, projective simulation, can be quadratically reduced. In addition, quantum neural networks
(Section 7.6) and quantum Boltzmann machines (Section 7.5.3) have been used for approximate RL [407–411].
Cherrat et al. described quantum reinforcement learning via policy iteration [412]. Additionally, Cornelissen
explored how to apply quantum gradient estimation to quantum reinforcement learning [413].
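For intuition, the expected discounted-reward objective above can be computed exactly for a tiny MDP with tabular value iteration (a hypothetical two-state, two-action example, purely classical):

```python
import numpy as np

# P[a][s][s'] are transition probabilities, R[s][a] immediate rewards
# (hypothetical toy MDP, not from the cited works).
P = np.array([[[0.9, 0.1], [0.1, 0.9]],   # action 0
              [[0.5, 0.5], [0.5, 0.5]]])  # action 1
R = np.array([[1.0, 0.0],                 # rewards r_{s,a}
              [0.0, 2.0]])
gamma = 0.9

# Value iteration: V <- max_a [ r_{s,a} + gamma * sum_s' p_{ss'}(a) V_{s'} ].
V = np.zeros(2)
for _ in range(500):
    Q = R + gamma * np.einsum("ast,t->sa", P, V)
    V = Q.max(axis=1)
policy = Q.argmax(axis=1)
```

The iteration converges to the unique fixed point of the Bellman optimality equation; here the greedy policy takes action 0 in state 0 and action 1 in state 1.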

7.8 Natural Language Modeling
Natural language processing (NLP) has flourished over the past couple of years as a result of advancements in
deep learning [414]. One important component of NLP is building language models [415]. A language model
is a statistical model used to predict which word is likely to appear next in a sentence. Today, deep
neural networks using transformers [416] are utilized to construct language models [417–419] for a variety of
tasks. Concerns remain, however, such as bias [420], controversial conclusions [421], energy consumption, and
environmental impact [422]. Thus it makes sense to look to quantum computing for potentially better language
models for NLP.
The DisCoCat framework, developed by Coecke et al. [423], combines both meaning and grammar, something
not previously done, into a single language model utilizing category theory. For grammar, they utilized pregroups
[424], sets with relaxed group axioms, to validate sentence grammar. Semantics is represented by a vector space
model, similar to those commonly used in NLP [425]. However, DisCoCat also allows for higher-order tensors to
distinguish different parts of speech.
Coecke et al. noted that pregroups and vector spaces are asymmetric and symmetric strict monoidal cate-
gories, respectively. Thus, they can be represented by utilizing the powerful diagrammatic framework associated
with monoidal categories [426]. The revelation made by this group [427] was that this same diagrammatic frame-
work is utilized by categorical quantum mechanics [428]. Because of the similarity between the two, the diagrams
can be represented by quantum circuits [427] utilizing the algebra of ZX-calculus [429]. Thus, because of this
connection and the use of tensor operations, the group argued that this natural language model is “quantum-
native,” meaning it is more natural to run the DisCoCat model on a quantum computer instead of a classical one
[427]. Coecke et al. ran multiple experiments on NISQ devices for a variety of simple NLP tasks [430–432].
In addition, there are many ways in which QML models, such as QNNs (Section 7.6), could potentially
enhance the capacity of existing natural language neural architectures [433–435].

7.9 Financial Applications


The field of quantum machine learning is still in its infancy, and many exciting developments are under way.
We list here a few representative or potential financial applications. We also point readers to a recent survey on
quantum machine learning for finance [6].

7.9.1 Anomaly Detection


Anomaly detection is a critical task for financial institutions, from both a financial point of view and a technological
one. In finance, fraud detection is an extremely important form of anomaly detection. Some examples are
identifying fraudulent credit card transactions and financial documents. Also important is anomaly detection on
communication networks used by the financial industry [436]. These tasks are typically performed, classically,
by utilizing techniques from statistical analysis and machine learning [437]. The machine learning technique
commonly used is unsupervised deep learning [438].
Researchers have also proposed quantum versions of these approaches, such as quantum GANs (Section
7.5.4). Herr et al. [439] used parameterized quantum circuits to model the generator used in AnoGAN [440] for
anomaly detection. Quantum amplitude estimation (Section 4.4) was utilized to enhance density-estimation-
based anomaly detection [441]. Quantum variants of common clustering methods also exist, such as k-means
(Section 7.3), which could be useful for anomaly detection. Furthermore, quantum-enhanced kernels (Section 7.2)
could be utilized for kernel-based clustering methods [442]. Quantum Boltzmann machines (Section 7.5.3) have
also been used for fraud detection [273].

7.9.2 Asset Pricing


The fundamental goal of asset pricing is to understand the behavior of risk premiums. Classically, machine
learning has shown great potential to improve our empirical understanding of asset returns. For example, Gu et
al. [443] studied the problem of predicting returns in cross-section and time series. They studied a collection of
machine learning methods, including linear regression, generalized linear models, principal component regression,
and neural networks. Chen et al. [444] proposed an asset pricing model for individual stock returns using deep
neural networks.
The quantum version of these machine learning methods could potentially be applied to asset pricing, and
investigations of the applicability are needed. Nagel [445] goes into detail about classical machine learning
methods applied to asset pricing. Chen et al. [434] proposed a quantum version of the long short-term memory
architecture, which is a variant of the recurrent neural network that has been demonstrated to be effective for
temporal and sequence data modeling. Both make use of parameterized quantum circuits, and these QML models
may offer an advantage over classical models in terms of training complexity and prediction performance.

7.9.3 Implied Volatility
Implied volatility is a metric that captures the market’s forecast of a likely movement in the price of a security.
It is one of the deciding factors in the pricing of options. Sakuma [446] investigated the use of the deep quantum
neural network proposed by Beer et al. [383] in the context of learning implied volatility. Numerical results
suggest that such a QML model is a promising candidate for developing powerful methods in finance [6].

8 Hardware Implementations of Use Cases


In this section we discuss experiments that were performed on quantum hardware to solve financial problems
using some of the methods that have been presented. The hardware platforms include superconducting and
trapped-ion universal quantum processors, as well as superconducting quantum annealers.

8.1 Quantum Risk Analysis on a Transmon Device


In this section we present the experimental results collected by Woerner and Egger [215], using their method for
risk analysis mentioned in Section 5.3.2. They utilized the IBM transmon [55] quantum processors [215].
The authors tested the algorithm on two toy models: pricing a Treasury-bill, executed on quantum hardware,
and determining the financial risk of a two-asset portfolio, performed in simulations that included modeling
quantum noise on classical hardware. The QAE algorithm is used to estimate quantities related to a random
variable represented as a quantum state, usually using n qubits to map it to the interval {0, ..., N − 1}, with
N = 2n being the number of possible realizations. The authors presented a method for efficiently applying
an approximation to f (X), where QAE computes E[f (X)], given that f (x) = sin2 (p(x)) for some polynomial
p. They used this approach to compute statistical moments, such as the mean. The f (X) used for VaR, and
similarly CVaR, requires using a quantum comparator, as mentioned in Section 5.3.2. In addition, for computing
VaR, a bisection search is then performed utilizing multiple executions of QAE to find a threshold x such that
E[f (X)] = P[X ≤ x] ≥ α, for the chosen confidence α. For the comparator, the authors referenced the quantum
ripple-carry adder of Cuccaro et al. [447]. For each of these methods, they discussed the trade-off between
circuit depth, qubit count, and convergence rate. An example circuit for 3 evaluation qubits, corresponding to
2³ = 8 samples (i.e., quantum queries, Section 5.1), is shown in Figure 1 for computing the value of a T-Bill.
Increasing the number of evaluation qubits above 3 yields smaller estimation errors than with classical Monte
Carlo integration (Section 5.1), demonstrating the predicted quantum speedup; see Figure 2.
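The bisection search over QAE executions can be illustrated with a minimal classical sketch, in which the QAE subroutine is replaced by exact tail probabilities on an 8-point discretized distribution; the distribution and confidence level below are illustrative, not taken from [215].

```python
import numpy as np

def estimate_cdf(probs, x):
    # Stand-in for one QAE execution estimating P[X <= x]; on hardware this
    # is E[f(X)] with f the comparator-based indicator function.
    return probs[: x + 1].sum()

def var_bisection(probs, alpha):
    # Smallest threshold x with P[X <= x] >= alpha; each probe of the
    # bisection costs one full QAE run on a quantum device.
    lo, hi = 0, len(probs) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if estimate_cdf(probs, mid) >= alpha:
            hi = mid
        else:
            lo = mid + 1
    return lo

n = 3                      # distribution register: N = 2^n realizations
N = 2**n
x = np.arange(N)
probs = np.exp(-0.5 * ((x - 3.5) / 1.5) ** 2)
probs /= probs.sum()       # truncated normal, discretized onto 8 points

var = var_bisection(probs, alpha=0.95)
```

Here the bisection needs at most ⌈log₂ N⌉ = 3 probes; on hardware each probe is a separate QAE circuit with its own evaluation register.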

Figure 1: Quantum amplitude estimation circuit for approximating the expected value of a portfolio
made up of one T-Bill. The algorithm uses 3 evaluation qubits, corresponding to 8 samples. (Image
source: [215]; use permitted under the Creative Commons Attribution 4.0 International License.)
Despite the observed speedup of the QMCI-based algorithm, compared with classical Monte Carlo integration,
practical quantum advantage cannot be achieved by using this algorithm on NISQ devices because of the relatively
small qubit count, qubit connectivity and quality, and decoherence limitations present in modern quantum
processors (Section 3.3). Additionally, Monte Carlo for numerical integration can be efficiently parallelized,
imposing even stronger requirements on quantum algorithms in order to outperform standard classical methods.

8.2 Portfolio Optimization with Reverse Quantum Annealing


As explained in Section 6.4.1, portfolio optimization problems are highly amenable to quantum annealing
(Section 6.1.1). The commercially available D-Wave flux-qubit-based [55] quantum annealers have been
successfully used to solve portfolio optimization instances. Determining the optimal portfolio composition is one
of the most studied optimization problems in finance. A typical portfolio optimization problem can be cast into
a QUBO or, equivalently, an Ising Hamiltonian suitable for a quantum annealer; see Section 6.1. The following
presents the work of Venturelli and Kondratyev [448], who used a reverse-quantum-annealing-based approach.
Portfolio optimization was defined in Section 6.4.1. The pairwise relationships between the asset choices
give rise to the quadratic form of the objective function to be optimized. The following technical challenges

Figure 2: Results of executing the QAE algorithm on the IBMQ 5 Yorktown (ibmqx2) chip for a range
of evaluation qubits, with 8,192 shots each. Because of the better convergence rate of the quantum
algorithm (blue line), it achieves lower error rates than does Monte Carlo (orange line) for the version
with 4 qubits, with the performance gap estimated to grow even more for larger instances on 5 and 6
qubits (blue dashed line). (Image source: [215]; use permitted under the Creative Commons Attribution
4.0 International License, http://creativecommons.org/licenses/by/4.0/)
need to be overcome in order to perform simulations on the D-Wave quantum annealer. First, the standard way
of enforcing the number of selected assets by introducing a high-energy penalty term can be problematic with
D-Wave devices because of precision issues and the physical limit on energy scales that can be programmed on
the chip. The authors observed that this constraint can be enforced by artificially shifting the Sharpe ratios,
used when computing the risk-adjusted returns of the assets, by a fixed amount. Second, embedding the Ising
optimization problem in the Chimera topology of D-Wave quantum annealers presents a strong limitation on
the problems that can be solved on the device (Section 6.1.1). To overcome this limitation, one can employ the
minor-embedding compilation technique for fully connected graphs [242, 449], allowing the necessary embedding
to be performed using an extended set of variables.
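The Sharpe-ratio-shift trick can be checked by brute force on a toy QUBO. The model below is an illustrative stand-in, not the exact formulation of Venturelli and Kondratyev: diagonal terms reward each asset's (shifted) Sharpe ratio and off-diagonal terms penalize correlated pairs, so a uniform shift steers the ground state toward smaller portfolios without an explicit penalty term.

```python
import itertools
import numpy as np

rng = np.random.default_rng(7)
n = 6
sharpe = rng.uniform(0.5, 1.5, size=n)                  # risk-adjusted returns
corr = np.triu(rng.uniform(0.0, 0.4, size=(n, n)), 1)   # pairwise penalties

def ground_state(shift):
    # Brute force over all 2^n bitstrings (feasible only for tiny n;
    # the annealer searches this energy landscape physically).
    best, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = -np.dot(sharpe - shift, x) + x @ corr @ x
        if e < best_e:
            best, best_e = x, e
    return best

many = ground_state(shift=0.0).sum()   # no shift: more assets selected
few = ground_state(shift=1.2).sum()    # large shift: fewer assets selected
```

A standard exchange argument shows that the optimal cardinality is monotonically non-increasing in the shift, which is what makes the shift usable as a soft cardinality control.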
The novelty of this work by Venturelli and Kondratyev is the use of the reverse annealing protocol [237],
which was found to be, on average, more than 100 times faster than forward annealing, as measured by the time to solution
(TTS) metric. Reverse quantum annealing, briefly mentioned in Section 6.1.1, is illustrated in Figure 3. It
is a new approach enabled by the latest D-Wave architecture designed to help escape local minima through a
combination of quantum and thermal fluctuations. The annealing schedule starts in some classical state provided
by the user, as opposed to the quantum superposition state, as in the case of a forward annealing. The transverse
field is then increased, driving backward annealing from the classical initial state to some intermediate quantum
superposition state. The annealing schedule functions A(t) and B(t) (Section 6.1.1) are then kept constant for a
fixed amount of time, allowing the system to perform the search for a global minimum. The reverse annealing
schedule is completed by removing the transverse field, effectively performing the forward annealing protocol
with the system evolving to the ground state of the problem Hamiltonian.
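The protocol can be written down as an annealing schedule: a list of (time, s) pairs in which the normalized parameter s controls A(t) and B(t) (s = 1: problem Hamiltonian dominates; small s: transverse field dominates). D-Wave's Ocean SDK accepts such a list through its `anneal_schedule` argument together with an `initial_state`; the times and s values below are illustrative, not those used in [448].

```python
def reverse_anneal_schedule(s_target=0.45, ramp=5.0, pause=20.0):
    # Times in microseconds; s is the normalized annealing parameter.
    return [
        (0.0, 1.0),                # start in a user-supplied classical state
        (ramp, s_target),          # "backward" anneal: raise transverse field
        (ramp + pause, s_target),  # pause: quantum and thermal search
        (2 * ramp + pause, 1.0),   # forward anneal back to problem Hamiltonian
    ]

schedule = reverse_anneal_schedule()
```

The final segment of the schedule is exactly the forward annealing protocol, matching the description above.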
The DW2000Q device used in [448] allows embedding of up to 64 logical binary variables on a fully connected
graph. The authors considered portfolio optimization problems with 24–60 assets, with 30 randomly generated
instances for each size, and calculated the TTS. The results for both forward and reverse quantum annealing
were then compared with the industry-established genetic algorithm (GA) approach used as a classical benchmark
heuristic. The TTS for reverse annealing with the shortest annealing and pause times was found to be several
orders of magnitude smaller than for the forward annealing and GA approaches, demonstrating a promising step
toward practical applications of NISQ devices.
The development of new quantum annealing protocols and approaches, such as the reverse quantum annealing
considered in this section, is an active area of research.

8.3 Portfolio Optimization on a Trapped-Ion Device


As discussed in Section 6.4.1, the convex mean-variance portfolio optimization problem with linear equality
constraints can be solved as a linear system of equations. The method of Lagrange multipliers turns the convex
quadratic program (Equation (15)) into

 " #  
0 0 ~rT η µ
0 0 ~T  θ =  ξ  ,
p (17)
~r p
~ Σ w
~ ~0

where η, θ are the dual variables. This lends itself to quantum linear systems algorithms (Section 4.5).
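For intuition, Equation (17) can be assembled and solved classically with NumPy; a QLS algorithm would be asked to prepare a quantum state proportional to the same solution vector. The problem data below are randomly generated, not taken from [450].

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
r = rng.uniform(0.01, 0.10, n)      # expected returns
p = np.ones(n)                      # prices (budget constraint)
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)     # positive-definite covariance
mu, xi = 0.05, 1.0                  # target return and budget

# Block structure of Equation (17): [[0, 0, r^T], [0, 0, p^T], [r, p, Sigma]]
M = np.zeros((n + 2, n + 2))
M[0, 2:], M[1, 2:] = r, p
M[2:, 0], M[2:, 1] = r, p
M[2:, 2:] = Sigma
b = np.concatenate(([mu, xi], np.zeros(n)))

eta, theta, *w = np.linalg.solve(M, b)  # dual variables, portfolio weights
w = np.array(w)
```

The solution satisfies the two constraint rows (r·w = µ and p·w = ξ) and the stationarity block Σw + ηr + θp = 0.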

Figure 3: Forward (a) and reverse (b) quantum annealing protocols. During reverse quantum annealing,
the system is initialized in a classical state (c, left), evolved following a “backward” annealing schedule to
some quantum superposition state. At that point the evolution is interrupted for a fixed amount of time,
allowing the system to perform a global search (c, middle). The system’s evolution then is completed by
the forward annealing schedule (c, right). Adapted from [237].
To reiterate, solving a QLSP consists of returning a quantum state, with bounded error, that is proportional
to the solution of a linear system A~x = ~b. Quantum algorithms for this problem on sparse and well-conditioned
matrices potentially achieve an exponential speedup in the system size N . Although a QLS algorithm cannot
provide classical access to amplitudes of the solution state while maintaining the exponential speedup in N ,
potentially useful financial statistics can be obtained from the state.
The QLS algorithm discussed in this section is HHL. This algorithm is typically viewed as one that is intended
for the fault-tolerant era of quantum computation. Two of the main components of the algorithm that contribute
to this constraint are quantum phase estimation (Section 4.3) and the loading of the inverse of the eigenvalue
estimations onto the amplitudes of an ancilla, called eigenvalue inversion [88].15 Yalovetzky et al. [450] developed
an algorithm called NISQ-HHL with the goal of applying it to portfolio optimization problems on NISQ devices.
While it does not retain the asymptotic speedup of HHL, the authors’ approach can potentially provide benefits
in practice. They applied their algorithm to small mean-variance portfolio optimization problems and executed
them on the trapped-ion Quantinuum System Model H1 device [63].
QPE is typically difficult to run on near-term devices because of the large number of ancillary qubits and
controlled operations required for the desired eigenvalue precision (depth scales as O(1/δ) for precision δ). An
alternative realization of QPE reduces the number of ancillary qubits to one and replaces all controlled-phase
gates in the quantum Fourier transform component of QPE with classically controlled single-qubit gates [451].
This method requires mid-circuit measurements, ground-state resets, and quantum conditional logic (QCL). These
features are available on the Quantinuum System Model H1 device. The paper calls this method QCL-QPE. The
differences between these two QPE methods are highlighted in Figure 4.
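The bit-by-bit logic behind QCL-QPE can be simulated classically. The sketch below is an illustrative reconstruction, not the authors' code: it estimates an m-bit phase φ of the single-qubit unitary U = diag(1, e^{2πiφ}) using one ancilla measured m times, with classically conditioned phase corrections standing in for the QCL-controlled single-qubit gates.

```python
import numpy as np

def qcl_qpe(phi, m):
    # Iterative phase estimation: recover the bits of phi = 0.b1 b2 ... bm
    # (binary) from least significant to most significant.
    bits = []
    for j in range(m, 0, -1):
        # Phase picked up by the ancilla from controlled-U^(2^(j-1))
        # applied to the eigenstate
        angle = 2 * np.pi * (2 ** (j - 1)) * phi
        # Correction conditioned on bits measured so far (the QCL step)
        correction = 2 * np.pi * sum(
            b / 2 ** (len(bits) + 1 - k) for k, b in enumerate(bits)
        )
        theta = angle - correction
        # After the final Hadamard, P(outcome 0) = cos^2(theta / 2)
        p0 = abs(1 + np.exp(1j * theta)) ** 2 / 4
        bits.append(0 if p0 >= 0.5 else 1)
    # bits were produced least-significant first
    return sum(b * 2**k for k, b in enumerate(bits)) / 2**m
```

For phases that are exact m-bit fractions, every measurement outcome is deterministic and the phase is recovered exactly.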
With regard to eigenvalue inversion, implementing the controlled rotations on near-term devices typically
requires a multicontrolled rotation for every possible eigenvalue that can be estimated by QPE, such as the
uniformly controlled rotation gate [452]. This circuit is exponentially deep in the number of qubits. Fault-
tolerant implementations can take advantage of quantum arithmetic methods to implement efficient algorithms
for the required arcsin computation. An alternative approach for the near term is a hybrid version of HHL
[453], which the developers of NISQ-HHL extended. This approach first runs QPE to estimate the eigenvalues
for which the controlled rotations need to be performed. While the number of controlled rotations is thereby
reduced, this approach
does not retain the asymptotic speedup provided by HHL because of the loss of coherence. A novel procedure

15 Hamiltonian simulation also contributes to this constraint.

Figure 4: Circuits for estimating the eigenvalues of the unitary operator U to three bits using QPE (left
circuit) or QCL-QPE (right circuit). S is the register that U is applied to, and j is a classical register;
H refers to the Hadamard gate; and Rk , for k = 2, 3, are the standard phase gates used by the quantum
Fourier transform [2]. Adapted from [450].
was developed by the authors for scaling the matrix by a factor γ to make it easier to distinguish the eigenvalues
in the output distribution of QPE and improve performance. Figure 5 displays experimental results collected
from the Quantinuum H1 device to show the effect of the scaling procedure.16
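The effect of the scaling factor can be seen in a short calculation: an m-bit QPE register resolves phases only on the grid k/2^m, so nearby eigenvalues can collide in the same bin until the matrix is rescaled by γ. The eigenvalues, evolution time, and γ values below are illustrative, not those of [450].

```python
import numpy as np

m = 4                     # evaluation bits: phases resolved on a 2^m grid
grid = 2**m

def qpe_bins(eigs, gamma, t=2 * np.pi / 100):
    # Phase-register outcome for eigenvalue lam of A under U = exp(i*A*t),
    # after scaling A by gamma: round(gamma * lam * t / (2*pi) * 2^m)
    return {int(round(gamma * lam * t / (2 * np.pi) * grid)) % grid
            for lam in eigs}

eigs = [10.0, 12.0, 40.0]
coarse = qpe_bins(eigs, gamma=1.0)   # 10 and 12 collide on the 4-bit grid
fine = qpe_bins(eigs, gamma=2.0)     # after scaling, all three are distinct
```

The scaling must keep γλt/(2π) below 1 for the largest eigenvalue, or the estimated phase wraps around.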

Figure 5: Probability distributions over the eigenvalue estimations from the QCL-QPE run using γ = 50
(left) and γ = 100 (right). The blue bars represent the experimental results on the H1 machine (1,000
shots, left; 2,000 shots, right), and we compare them with the theoretical eigenvalues (classically
calculated using a numerical solver), represented by the red dots, and the results from the Qiskit [454]
QASM simulator, represented by the orange dots. Adapted from [450].
These methods were executed on the Quantinuum H1 device for a few small portfolios (Table 2). A controlled-
SWAP test [455] was performed between the state from HHL and the solution to the linear system obtained
classically (loaded onto a quantum state) [450]. The number of H1 two-qubit ZZMax gates used is also displayed
in Table 2.
                                HHL    NISQ-HHL    Relative Change
Rotations                         7           6            -14.29%
Eigenvalue Inversion Depth      138         120            -13.04%
Inner Product in Simulation    0.59        0.83             40.67%
Inner Product on H1-1          0.42        0.46              9.52%

Table 2: Comparison of the number of rotations in the eigenvalue inversion, the ZZMax depth of the HHL
plus controlled-SWAP test circuits, and the inner product calculated from the Qiskit state-vector simulator
and from the Quantinuum H1 results with 3,000 shots. Adapted from [450].
Statistics such as risk can be computed efficiently from the quantum state. Sampling can also be performed
if the solution state is sparse enough [303].

9 Conclusion and Outlook


Quantum computers are expected to surpass the computational capabilities of classical computers and achieve
disruptive impact on numerous industry sectors, such as global energy and materials, pharmaceuticals and medical
products, telecommunication, travel, logistics, and finance. In particular, the finance sector has a history of

16 Let Πλ denote the eigenspace projector for an eigenvalue λ of the Hermitian matrix in Equation (17), and
let |b⟩ be a quantum state proportional to the right-hand side of Equation (17). To clarify, the y-axis value of
the red dot corresponding to λ, on either of the histograms in Figure 5, is ∥Πλ|b⟩∥₂².

creation and first adaptation of new technologies. This is true also when it comes to quantum computing. In
fact, finance is estimated to be the first industry sector to benefit from quantum computing, not only in the
medium and long terms, but even in the short term, because of the large number of financial use cases that
lend themselves to quantum computing and their amenability to being solved effectively even in the presence of
approximations.
A common misconception about quantum computing is that quantum computers are simply faster processors
and hence speedups will be automatically achieved by porting a classical algorithm from a classical computer
to a quantum one—just like moving a program to a system-upgraded hardware. In reality, quantum computers
are fundamentally different from classical computers, so much so that the algorithmic process for solving an
application has to be completely redesigned based on the architecture of the underlying quantum hardware.
Most of the use cases that characterize the finance industry sector have high computational complexity,
and for this reason they lend themselves to quantum computing. However, the computational complexity of a
problem per se is not a guarantee that quantum computing can make a difference. To assess commercial viability,
we must first answer a couple of questions. First, is there in fact any need for improved speed of calculation
or computational accuracy for the given application? Determining whether there is a gap in what the current
technology can provide us is essential and is a question that financial players typically ask when doing proof
of concept projects. Second, when adapting a classical finance application to quantum computing, can that
application achieve quantum speedup and, if so, when will that happen based on the estimated evolution of the
underlying quantum hardware?
Today’s quantum computers are not yet capable of solving real-life-scale problems in the industry more effi-
ciently and more accurately than classical computers can. Nevertheless, proofs of concept for quantum advantage
and even quantum supremacy have already been provided (see Section 3). There is hope that the situation will
change in the coming years with demonstrations of practical quantum advantage. It is crucial for enterprises, and
particularly financial institutions, to use the current time to become quantum ready in order to avoid being left behind
when quantum computers become operational in a production environment.
We are in the early stages of the quantum revolution, yet we are already observing a strong potential for
quantum technology to transform the financial industry. So far, the community has developed potential quantum
solutions for portfolio optimization, derivatives pricing, risk modeling, and several problems in the realm of
artificial intelligence and machine learning, such as fraud detection and NLP.
In this paper we have provided a comprehensive review of quantum computing for finance, specifically focusing
on quantum algorithms that can solve computationally challenging financial problems. We have described the
current landscape of the state of the industry. We hope this work will be used by the scientific community,
both in industry and in academia, not only as a reference, but also as a source of information to identify new
opportunities and advance the state of the art.

Competing Interests
The authors declare that they have no competing interests.

Funding
C.N.G.’s work was supported in part by the U.S. Department of Energy, Office of Science, Office of Workforce
Development for Teachers and Scientists (WDTS) under the Science Undergraduate Laboratory Internships
Program (SULI). Y.A.’s work at Argonne National Laboratory was supported by the U.S. Department of Energy,
Office of Science, under contract DE-AC02-06CH11357.

Disclaimer
This paper was prepared for information purposes by the teams of researchers from the various institutions
identified above, including the Future Lab for Applied Research and Engineering (FLARE) group of JPMorgan
Chase Bank, N.A. This paper is not a product of the Research Department of JPMorgan Chase & Co. or its
affiliates. Neither JPMorgan Chase & Co. nor any of its affiliates make any explicit or implied representation
or warranty and none of them accept any liability in connection with this paper, including, but not limited to,
the completeness, accuracy, or reliability of information contained herein and the potential legal, compliance, tax
or accounting effects thereof. This document is not intended as investment research or investment advice, or
a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial
product or service, or to be used in any way for evaluating the merits of participating in any transaction.

Authors’ Contributions
Dylan Herman, Cody Ning Googin, and Xiaoyuan Liu contributed equally to the manuscript. All authors
contributed to the writing and reviewed and approved the final manuscript.

Acronyms
NISQ Noisy-Intermediate Scale Quantum
AQC Adiabatic Quantum Computing
qRAM Quantum Random Access Memory
QCL Quantum Conditional Logic
VQE Variational Quantum Eigensolver
QAOA Quantum Approximate Optimization Algorithm
QML Quantum Machine Learning
PCA Principal Component Analysis
TDA Topological Data Analysis
QNN Quantum Neural Network
PQC Parameterized Quantum Circuit
GAN Generative Adversarial Network
SVM Support Vector Machine
PH Persistent Homology
GP Gaussian Process
NP Non-Deterministic Polynomial
VQA Variational Quantum Algorithm
QPE Quantum Phase Estimation
NLP Natural Language Processing
RL Reinforcement Learning
QAA Quantum Amplitude Amplification
QAE Quantum Amplitude Estimation
QLSP Quantum Linear Systems Problem
QLS Quantum Linear System
QLSA Quantum Linear Systems Algorithm
HHL Harrow Hassidim Lloyd
QSVT Quantum Singular Value Transform
QSVE Quantum Singular Value Estimation
CKS Childs, Kothari, and Somma
QA Quantum Annealing
IP Integer Programming
IQP Integer Quadratic Programming
QUBO Quadratic Unconstrained Binary Optimization
ITE Imaginary-Time Evolution
VarQITE Variational Quantum Imaginary Time Evolution
MCI Monte Carlo Integration
QMCI Quantum Monte Carlo Integration
VaR Value at Risk
CVaR Conditional Value at Risk
CDO Collateralized Debt Obligation
ECR Economic Capital Requirement
CPU Central Processing Unit
GPU Graphics Processing Unit

References
[1] Alexandre Ménard, Ivan Ostojic, Mark Patel, and Daniel Volz. A game plan for quantum computing, 2020.
[2] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information. Cambridge
University Press, 2010.
[3] Daniel J Egger, Claudio Gambella, Jakub Marecek, Scott McFaddin, Martin Mevissen, Rudy Raymond,
Andrea Simonetto, Stefan Woerner, and Elena Yndurain. Quantum computing for finance: State-of-the-art
and future prospects. IEEE Transactions on Quantum Engineering, 1:1–24, 2020.
[4] Adam Bouland, Wim van Dam, Hamed Joorati, Iordanis Kerenidis, and Anupam Prakash. Prospects and
challenges of quantum finance. arXiv preprint arXiv:2011.06492, 2020.
[5] Roman Orus, Samuel Mugel, and Enrique Lizaso. Quantum computing for finance: Overview and prospects.
Reviews in Physics, 4:100028, 2019.
[6] Marco Pistoia, Syed Farhan Ahmad, Akshay Ajagekar, Alexander Buts, Shouvanik Chakrabarti, Dylan Her-
man, Shaohan Hu, Andrew Jena, Pierre Minssen, Pradeep Niroula, Arthur Rattew, Yue Sun, and Romina
Yalovetzky. Quantum Machine Learning for Finance. IEEE/ACM International Conference On Computer
Aided Design (ICCAD), November 2021. ICCAD Special Session Paper.
[7] Andrés Gómez, Álvaro Leitao, Alberto Manzano, Daniele Musso, María R Nogueiras, Gustavo Ordóñez, and
Carlos Vázquez. A survey on quantum computational finance for derivatives pricing and VaR. Archives of
Computational Methods in Engineering, pages 1–27, 2022.
[8] Paul Griffin and Ritesh Sampat. Quantum computing for supply chain finance. In 2021 IEEE International
Conference on Services Computing (SCC), pages 456–459. IEEE, 2021.
[9] Apoorva Ganapathy. Quantum computing in high frequency trading and fraud detection. Engineering
International, 9(2):61–72, 2021.
[10] Research and Markets. Financial Services Global Market Report 2021: COVID-19 Impact and Recovery to
2030, 2021.
[11] Basel Committee on Banking Supervision. High-level summary of Basel III reforms, 2017.
[12] Paul Glasserman. Monte Carlo methods in financial engineering, volume 53. Springer, 2004.
[13] Heath P. Terry, Debra Schwartz, and Tina Sun. The Future of Finance: Volume 3: The socialization of
finance, 2015.
[14] Salesforce. Trends in Financial Services, 2020.
[15] Tucker Bailey, Soumya Banerjee, Christopher Feeney, and Heather Hogsett. Cybersecurity: Emerging
challenges and solutions for the boards of financial-services companies, 2020.
[16] Peter W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum
computer. SIAM Journal on Computing, 26(5):1484–1509, Oct 1997.
[17] Craig Gidney and Martin Ekerå. How to factor 2048 bit RSA integers in 8 hours using 20 million noisy
qubits. Quantum, 5:433, Apr 2021.
[18] Martin Roetteler, Michael Naehrig, Krysta M Svore, and Kristin Lauter. Quantum resource estimates for
computing elliptic curve discrete logarithms. In International Conference on the Theory and Application of
Cryptology and Information Security, pages 241–270, 2017.
[19] Andrew M Childs and Wim Van Dam. Quantum algorithms for algebraic problems. Reviews of Modern
Physics, 82(1):1, 2010.
[20] D. J. Bernstein and T. Lange. Post-quantum cryptography. Nature, 2017.
[21] Nicolas Gisin, Grégoire Ribordy, Wolfgang Tittel, and Hugo Zbinden. Quantum cryptography. Reviews of
Modern Physics, 74(1):145, 2002.
[22] Matthew Amy, Olivia Di Matteo, Vlad Gheorghiu, Michele Mosca, Alex Parent, and John Schanck. Esti-
mating the cost of generic quantum pre-image attacks on SHA-2 and SHA-3. In International Conference on
Selected Areas in Cryptography, pages 317–337. Springer, 2016.
[23] Akinori Hosoyamada and Yu Sasaki. Quantum collision attacks on reduced SHA-256 and SHA-512. IACR
Cryptol. ePrint Arch., 2021:292, 2021.
[24] Samuel Jaques, Michael Naehrig, Martin Roetteler, and Fernando Virdia. Implementing Grover oracles for
quantum key search on AES and LowMC. Advances in Cryptology–EUROCRYPT 2020, 12106:280, 2020.
[25] Xavier Bonnetain, André Schrottenloher, and Ferdinand Sibleyras. Beyond quadratic speedups in quantum
attacks on symmetric schemes. IACR-EUROCRYPT-2022, 10 2021.
[26] Laurence A. Wolsey and George L Nemhauser. Integer and combinatorial optimization, volume 55. John
Wiley & Sons, 1999.
[27] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[28] Yuri Alexeev, Dave Bacon, Kenneth R. Brown, Robert Calderbank, Lincoln D. Carr, Frederic T. Chong,
Brian DeMarco, Dirk Englund, Edward Farhi, Bill Fefferman, Alexey V. Gorshkov, Andrew Houck, Jungsang
Kim, Shelby Kimmel, Michael Lange, Seth Lloyd, Mikhail D. Lukin, Dmitri Maslov, Peter Maunz, Christopher
Monroe, John Preskill, Martin Roetteler, Martin J. Savage, and Jeff Thompson. Quantum computer systems
for scientific discovery. PRX Quantum, 2(1), February 2021.

[29] Gagnesh Kumar and Sunil Agrawal. CMOS limitations and futuristic carbon allotropes. In 2017 8th IEEE
Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), pages 68–
71. IEEE, 2017.
[30] Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C. Bardin, Rami Barends, Rupak Biswas,
Sergio Boixo, Fernando G. S. L. Brandao, David A. Buell, et al. Quantum supremacy using a programmable
superconducting processor. Nature, 574(7779):505–510, 2019.
[31] John Preskill. Quantum computing and the entanglement frontier. arXiv preprint arXiv:1203.5813, 2012.
[32] John Preskill. Quantum computing in the NISQ era and beyond. Quantum, 2:79, Aug 2018.
[33] Ramamurti Shankar. Principles of quantum mechanics. Springer Science & Business Media, 1994.
[34] Oswald Veblen and John Wesley Young. Projective geometry, volume 2. Ginn, 1918.
[35] Steven Roman. Advanced linear algebra. Springer, 2008.
[36] John D. Dollard and Charles N. Friedman. Product Integration with Application to Differential Equations.
Encyclopedia of Mathematics and its Applications. London: Addison-Wesley, 1984.
[37] A. Yu. Kitaev, A. H. Shen, and M. N. Vyalyi. Classical and Quantum Computation. American Mathematical
Society, USA, 2002.
[38] Yuchen Wang, Zixuan Hu, Barry C. Sanders, and Sabre Kais. Qudits and high-dimensional quantum
computing. Frontiers in Physics, 8, Nov 2020.
[39] Samantha Buck, Robin Coleman, and Hayk Sargsyan. Continuous variable quantum algorithms: an intro-
duction. arXiv preprint arXiv:2107.02151, 2021.
[40] Dong-Sheng Wang. A comparative study of universal quantum computing models: towards a physical
unification. Quantum Engineering, page e85, 2021.
[41] Dorit Aharonov, Wim Van Dam, Julia Kempe, Zeph Landau, Seth Lloyd, and Oded Regev. Adiabatic
quantum computation is equivalent to standard quantum computation. SIAM Review, 50(4):755–787, 2008.
[42] Daniel A. Lidar and Todd A. Brun. Quantum Error Correction. Cambridge University Press, 2013.
[43] A. G. Fowler, M. Mariantoni, J. M. Martinis, and A. N. Cleland. Surface codes: Towards practical large-scale
quantum computation. Physical Review A, 86(3), Sep 2012.
[44] Arthur G Rattew, Yue Sun, Pierre Minssen, and Marco Pistoia. The efficient preparation of normal distri-
butions in quantum registers. Quantum, 5:609, 2021.
[45] Xin-Chuan Wu, Sheng Di, Emma Maitreyee Dasgupta, Franck Cappello, Hal Finkel, Yuri Alexeev, and
Frederic T. Chong. Full-state quantum circuit simulation by using data compression. Proceedings of the
International Conference for High Performance Computing, Networking, Storage and Analysis, November
2019.
[46] Xin-Chuan Wu, Sheng Di, Emma Maitreyee Dasgupta, Franck Cappello, Hal Finkel, Yuri Alexeev, and
Frederic T Chong. Full-state quantum circuit simulation by using data compression. In Proceedings of the
High Performance Computing, Networking, Storage and Analysis International Conference (SC19), Denver,
CO, USA, 2019. IEEE Computer Society.
[47] Xin-Chuan Wu, Sheng Di, Franck Cappello, Hal Finkel, Yuri Alexeev, and Frederic Chong. Memory-efficient
quantum circuit simulation by using lossy data compression. In Proceedings of the 3rd International Workshop
on Post-Moore Era Supercomputing (PMES) at SC18, Denver, CO, USA, 2018.
[48] Xin-Chuan Wu, Sheng Di, Franck Cappello, Hal Finkel, Yuri Alexeev, and Frederic T Chong. Amplitude-
aware lossy compression for quantum circuit simulation. In Proceedings of 4th International Workshop on
Data Reduction for Big Scientific Data (DRBSD-4) at SC18, 2018.
[49] Danylo Lykov, Roman Schutski, Alexey Galda, Valerii Vinokur, and Yuri Alexeev. Tensor network quantum
simulator with step-dependent parallelization. arXiv preprint arXiv:2012.02430, 2020.
[50] Danylo Lykov, Angela Chen, Huaxuan Chen, Kristopher Keipert, Zheng Zhang, Tom Gibbs, and Yuri
Alexeev. Performance evaluation and acceleration of the QTensor quantum circuit simulator on GPUs. 2021
IEEE/ACM Second International Workshop on Quantum Computing Software (QCS), nov 2021.
[51] Danylo Lykov and Yuri Alexeev. Importance of diagonal gates in tensor network simulations. 2021.
[52] Tameem Albash and Daniel A. Lidar. Adiabatic quantum computation. Reviews of Modern Physics, 90(1),
Jan 2018.
[53] Jérémie Roland and Nicolas J. Cerf. Quantum search by local adiabatic evolution. Physical Review A, 65(4),
Mar 2002.
[54] Jacob D. Biamonte and Peter J. Love. Realizable Hamiltonians for universal adiabatic quantum computers.
Physical Review A, 78(1), Jul 2008.
[55] P. Krantz, M. Kjaergaard, F. Yan, T. P. Orlando, S. Gustavsson, and W. D. Oliver. A quantum engineer’s
guide to superconducting qubits. Applied Physics Reviews, 6(2):021318, 2019.
[56] Colin D. Bruzewicz, John Chiaverini, Robert McConnell, and Jeremy M. Sage. Trapped-ion quantum
computing: Progress and challenges. Applied Physics Reviews, 6(2):021314, 2019.

[57] Antti P. Vepsäläinen, Amir H. Karamlou, John L. Orrell, Akshunna S. Dogra, Ben Loer, Francisca Vas-
concelos, David K. Kim, Alexander J. Melville, Bethany M. Niedzielski, Jonilyn L. Yoder, et al. Impact of
ionizing radiation on superconducting qubit coherence. Nature, 584(7822):551–556, 2020.
[58] Mohan Sarovar, Timothy Proctor, Kenneth Rudinger, Kevin Young, Erik Nielsen, and Robin Blume-Kohout.
Detecting crosstalk errors in quantum information processors. Quantum, 4:321, Sep 2020.
[59] Sergey Bravyi, Sarah Sheldon, Abhinav Kandala, David C. Mckay, and Jay M. Gambetta. Mitigating
measurement errors in multiqubit experiments. Phys. Rev. A, 103(4), 2021.
[60] Jacob R. West, Daniel A. Lidar, Bryan H. Fong, and Mark F. Gyure. High fidelity quantum gates via
dynamical decoupling. Phys. Rev. Lett., 105(23), Dec 2010.
[61] A. Yu. Kitaev. Fault-tolerant quantum computation by anyons. Annals of Physics, 303(1):2–30, 2003.
[62] Austin G Fowler and John M Martinis. Quantifying the effects of local many-qubit errors and nonlocal
two-qubit errors on the surface code. Physical Review A, 89(3):032316, 2014.
[63] J. Pino, Joan Dreiling, C. Figgatt, J. Gaebler, S. Moses, M. Allman, C. Baldwin, Michael Foss-Feig, D. Hayes,
K. Mayer, C. Ryan-Anderson, and Brian Neyenhuis. Demonstration of the trapped-ion quantum CCD com-
puter architecture. Nature, 2021.
[64] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to algorithms.
MIT press, 2009.
[65] Sanjeev Arora and Boaz Barak. Computational complexity: a modern approach. Cambridge University
Press, 2009.
[66] Andris Ambainis. Understanding quantum algorithms via query complexity. In Proceedings of the Interna-
tional Congress of Mathematicians: Rio de Janeiro 2018, pages 3265–3285. World Scientific, 2018.
[67] Lov K Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the twenty-eighth
annual ACM symposium on Theory of Computing, pages 212–219, 1996.
[68] Charles H. Bennett, Ethan Bernstein, Gilles Brassard, and Umesh Vazirani. Strengths and weaknesses of
quantum computing. SIAM Journal on Computing, 26(5):1510–1523, 1997.
[69] I. Kerenidis and A. Prakash. Quantum recommendation systems. arXiv preprint arXiv:1603.08675, 2016.
[70] Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. Quantum random access memory. Physical Review
Letters, 100(16), April 2008.
[71] Aram W Harrow. Small quantum computers and large classical data sets. arXiv preprint arXiv:2004.00026,
2020.
[72] Andris Ambainis. Quantum search with variable times. Theory of Computing Systems, 47(3):786–807, 2010.
[73] Gilles Brassard, Peter Høyer, Michele Mosca, and Alain Tapp. Quantum amplitude amplification and
estimation. Quantum Computation and Information, page 53–74, 2002.
[74] Dominic W Berry, Andrew M Childs, Richard Cleve, Robin Kothari, and Rolando D Somma. Exponential
improvement in precision for simulating sparse Hamiltonians. In Proceedings of the forty-sixth annual ACM
symposium on Theory of Computing, pages 283–292, 2014.
[75] M. Boyer, G. Brassard, P. Høyer, and A. Tapp. Tight bounds on quantum searching. Fortschritte der
Physik: Progress of Physics, 46(4-5):493–505, 1998.
[76] Lov K. Grover. Fixed-point quantum search. Phys. Rev. Lett., 2005.
[77] Peter Høyer, Michele Mosca, and Ronald de Wolf. Quantum search on bounded-error inputs. Lecture Notes
in Computer Science, pages 291–299, 2003.
[78] Andris Ambainis. Variable time amplitude amplification and quantum algorithms for linear algebra problems.
In STACS’12 (29th Symposium on Theoretical Aspects of Computer Science), volume 14, pages 636–647.
LIPIcs, 2012.
[79] A. Yu. Kitaev. Quantum measurements and the Abelian stabilizer problem. arXiv preprint quant-ph/9511026, 1995.
[80] R. Cleve, A. Ekert, C. Macchiavello, and M. Mosca. Quantum algorithms revisited. Proceedings of the
Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 454(1969):339–354,
Jan 1998.
[81] Anupam Prakash. Quantum algorithms for linear algebra and machine learning. University of California,
Berkeley, 2014.
[82] Yohichi Suzuki, Shumpei Uno, Rudy Raymond, Tomoki Tanaka, Tamiya Onodera, and Naoki Yamamoto.
Amplitude estimation without phase estimation. Quantum Information Processing, 19(2), Jan 2020.
[83] Dmitry Grinko, Julien Gacon, Christa Zoufal, and Stefan Woerner. Iterative quantum amplitude estimation.
npj Quantum Information, 7(1), Mar 2021.
[84] Tudor Giurgica-Tiron, Iordanis Kerenidis, Farrokh Labib, Anupam Prakash, and William Zeng. Low depth
algorithms for quantum amplitude estimation. arXiv preprint arXiv:2012.03348, 2020.
[85] Kirill Plekhanov, Matthias Rosenkranz, Mattia Fiorentini, and Michael Lubasch. Variational quantum
amplitude estimation. arXiv preprint arXiv:2109.03687, 2021.
[86] Danial Dervovic, Mark Herbster, Peter Mountney, Simone Severini, Naïri Usher, and Leonard Wossnig.
Quantum linear systems algorithms: a primer. arXiv preprint arXiv:1802.08227, 2018.
[87] Andrew M. Childs, Robin Kothari, and Rolando D. Somma. Quantum algorithm for systems of linear equa-
tions with exponentially improved dependence on precision. SIAM Journal on Computing, 46(6):1920–1950,
Jan 2017.
[88] Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd. Quantum algorithm for linear systems of equations.
Phys. Rev. Lett., 103:150502, Oct 2009.
[89] Scott Aaronson. Read the Fine Print. Nature Physics, 11(4):291–293, 04 2015.
[90] Dominic W. Berry, Graeme Ahokas, Richard Cleve, and Barry C. Sanders. Efficient quantum algorithms
for simulating sparse Hamiltonians. Communications in Mathematical Physics, 270(2):359–371, Dec 2006.
[91] Andrew M. Childs, Yuan Su, Minh C. Tran, Nathan Wiebe, and Shuchen Zhu. Theory of Trotter error with
commutator scaling. Physical Review X, 11(1), Feb 2021.
[92] Dominic W. Berry, Andrew M. Childs, and Robin Kothari. Hamiltonian simulation with nearly optimal
dependence on all parameters. 2015 IEEE 56th Annual Symposium on Foundations of Computer Science,
Oct 2015.
[93] Guang Hao Low and Isaac L. Chuang. Optimal Hamiltonian simulation by quantum signal processing. Phys.
Rev. Lett., 118(1), Jan 2017.
[94] Pedro Costa, Dong An, Yuval R Sanders, Yuan Su, Ryan Babbush, and Dominic W Berry. Optimal scaling
quantum linear systems solver via discrete adiabatic theorem. arXiv preprint arXiv:2111.08152, 2021.
[95] Leonard Wossnig, Zhikuan Zhao, and Anupam Prakash. Quantum linear system algorithm for dense matri-
ces. Physical Review Letters, 120(5), Jan 2018.
[96] Iordanis Kerenidis and Anupam Prakash. Quantum gradient descent for linear systems and least squares.
Physical Review A, 101(2), Feb 2020.
[97] Shantanav Chakraborty, András Gilyén, and Stacey Jeffery. The power of block-encoded matrix powers:
improved regression techniques via faster Hamiltonian simulation. arXiv preprint arXiv:1804.01973, 2018.
[98] Sander Gribling, Iordanis Kerenidis, and Dániel Szilágyi. Improving quantum linear system solvers via a
gradient descent perspective. arXiv preprint arXiv:2109.04248, 2021.
[99] András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe. Quantum singular value transformation and
beyond: exponential improvements for quantum matrix arithmetics. Proceedings of the 51st Annual ACM
SIGACT Symposium on Theory of Computing, June 2019.
[100] Alexander Dranov, Johannes Kellendonk, and Rudolf Seiler. Discrete time adiabatic theorems for quantum
mechanical systems. Journal of Mathematical Physics, 39(3):1340–1349, 1998.
[101] Iordanis Kerenidis and Anupam Prakash. A quantum interior point method for LPs and SDPs. ACM
Transactions on Quantum Computing, 1(1), Oct 2020.
[102] Alan Frieze, Ravi Kannan, and Santosh Vempala. Fast Monte-Carlo algorithms for finding low-rank ap-
proximations. Journal of the ACM (JACM), 51(6):1025–1041, 2004.
[103] Ewin Tang. A quantum-inspired classical algorithm for recommendation systems. In Proceedings of the 51st
Annual ACM SIGACT Symposium on Theory of Computing. Association for Computing Machinery, 2019.
[104] Nai-Hui Chia, András Gilyén, Tongyang Li, Han-Hsuan Lin, Ewin Tang, and Chunhao Wang. Sampling-
based sublinear low-rank matrix arithmetic framework for dequantizing quantum machine learning. Proceed-
ings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, June 2020.
[105] Gregory F. Lawler and Vlada Limic. Random Walk: A Modern Introduction. Cambridge Studies in
Advanced Mathematics. Cambridge University Press, 2010.
[106] Fischer Black and Myron Scholes. The pricing of options and corporate liabilities. In World Scientific
Reference on Contingent Claims Analysis in Corporate Finance: Volume 1: Foundations of CCA and Equity
Valuation, pages 3–21. World Scientific, 2019.
[107] F. C. Klebaner. Introduction to Stochastic Calculus with Applications. Imperial College Press, 3rd edition,
2012.
[108] László Lovász. Random walks on graphs: A survey. In Combinatorics, Paul Erdős is Eighty, volume 2 of Bolyai Soc. Math. Stud., Jan 1993.
[109] J. R. Norris. Markov Chains. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge
University Press, 1997.
[110] J. Kempe. Quantum random walks: An introductory overview. Contemporary Physics, 44(4):307–327, July
2003.
[111] Mario Szegedy. Quantum speed-up of Markov chain based algorithms. In 45th Annual IEEE symposium
on foundations of computer science, 2004.
[112] Frédéric Magniez, Ashwin Nayak, Jérémie Roland, and Miklos Santha. Search via quantum walk. SIAM
Journal on Computing, 40(1):142–164, Jan 2011.
[113] Neil Shenvi, Julia Kempe, and K. Birgitta Whaley. Quantum random-walk search algorithm. Physical
Review A, 67(5), May 2003.
[114] Andris Ambainis, Julia Kempe, and Alexander Rivosh. Coins make quantum walks faster. In Proceedings
of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’05, pages 1099–1108, USA,
2005. Society for Industrial and Applied Mathematics.
[115] Andris Ambainis, Eric Bach, Ashwin Nayak, Ashvin Vishwanath, and John Watrous. One-dimensional
quantum walks. In Proceedings of the thirty-third annual ACM symposium on Theory of Computing, pages
37–49, 2001.
[116] D. Aharonov, A. Ambainis, J. Kempe, and U. Vazirani. Quantum walks on graphs. In Proceedings of the
thirty-third annual ACM symposium on Theory of Computing, pages 50–59, 2001.
[117] Cristopher Moore and Alexander Russell. Quantum walks on the hypercube. In Proceedings of the 6th
International Workshop on Randomization and Approximation Techniques, pages 164–178. Springer, 2002.
[118] Andris Ambainis. Quantum walk algorithm for element distinctness. SIAM Journal on Computing,
37(1):210–239, 2007.
[119] Simon Apers, András Gilyén, and Stacey Jeffery. A unified framework of quantum walk search. arXiv
preprint arXiv:1912.04233, 2019.
[120] R. D. Somma, S. Boixo, H. Barnum, and E. Knill. Quantum simulations of classical annealing processes.
Phys. Rev. Lett., Sep 2008.
[121] Andrew M. Childs and Jason M. Eisenberg. Quantum algorithms for subset finding. Quantum Info.
Comput., 5(7):593–604, Nov 2005.
[122] Andris Ambainis, András Gilyén, Stacey Jeffery, and Martins Kokainis. Quadratic speedup for finding
marked vertices by quantum walks. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory
of Computing, pages 412–424, New York, NY, USA, 2020. Association for Computing Machinery.
[123] Edward Farhi and Sam Gutmann. Quantum computation and decision trees. Physical Review A, 58(2):915–
928, Aug 1998.
[124] Andrew M. Childs, Edward Farhi, and Sam Gutmann. An example of the difference between quantum and
classical random walks. Quantum Information Processing, 2002.
[125] Andrew M. Childs, Richard Cleve, Enrico Deotto, Edward Farhi, Sam Gutmann, and Daniel A. Spielman.
Exponential algorithmic speedup by a quantum walk. Proceedings of the thirty-fifth ACM symposium on
Theory of computing - STOC ’03, 2003.
[126] Julia Kempe. Quantum random walks hit exponentially faster. arXiv preprint quant-ph/0205083, 2002.
[127] Yue Ruan, Samuel Marsh, Xilin Xue, Xi Li, Zhihao Liu, and Jingbo Wang. Quantum approximate algorithm
for NP optimization problems with constraints. arXiv preprint arXiv:2002.00943, 2020.
[128] Andrew M. Childs. On the relationship between continuous- and discrete-time quantum walk. Communi-
cations in Mathematical Physics, 294(2):581–603, Oct 2009.
[129] Salvador Elías Venegas-Andraca. Quantum walks: a comprehensive review. Quantum Information Pro-
cessing, 11(5):1015–1106, Jul 2012.
[130] M. Cerezo, Andrew Arrasmith, Ryan Babbush, Simon C. Benjamin, Suguru Endo, Keisuke Fujii, Jarrod R.
McClean, Kosuke Mitarai, Xiao Yuan, Lukasz Cincio, and et al. Variational quantum algorithms. Nature
Reviews Physics, 3(9):625–644, Aug 2021.
[131] Lennart Bittel and Martin Kliesch. Training variational quantum algorithms is NP-hard. Phys. Rev. Lett.,
127(12):120502, 2021.
[132] Alberto Peruzzo, Jarrod McClean, Peter Shadbolt, Man-Hong Yung, Xiao-Qi Zhou, Peter J. Love, Alán
Aspuru-Guzik, and Jeremy L. O’Brien. A variational eigenvalue solver on a photonic quantum processor.
Nat. Commun., 5(1), July 2014.
[133] Carlos Bravo-Prieto, Ryan LaRose, Marco Cerezo, Yigit Subasi, Lukasz Cincio, and Patrick J Coles.
Variational quantum linear solver. arXiv preprint arXiv:1909.05820, 2019.
[134] Hsin-Yuan Huang, Kishor Bharti, and Patrick Rebentrost. Near-term quantum algorithms for linear systems
of equations with regression loss functions. New Journal of Physics, 23(11):113021, Nov 2021.
[135] Patrick Huembeli and Alexandre Dauphin. Characterizing the loss landscape of variational quantum cir-
cuits. Quantum Science and Technology, 6(2):025011, Feb 2021.
[136] Edward Farhi, Jeffrey Goldstone, and Sam Gutmann. A quantum approximate optimization algorithm.
arXiv preprint arXiv:1411.4028, 2014.
[137] Stuart Hadfield, Zhihui Wang, Bryan O’Gorman, Eleanor Rieffel, Davide Venturelli, and Rupak Biswas.
From the quantum approximate optimization algorithm to a quantum alternating operator ansatz. Algorithms,
12(2):34, Feb 2019.
[138] Xiao Yuan, Suguru Endo, Qi Zhao, Ying Li, and Simon C. Benjamin. Theory of variational quantum
simulation. Quantum, 3:191, Oct 2019.
[139] A.D. McLachlan. A variational solution of the time-dependent Schrödinger equation. Molecular Physics,
8(1):39–44, 1964.
[140] K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii. Quantum circuit learning. Physical Review A, 98(3),
Sep 2018.
[141] Edward Farhi, Jeffrey Goldstone, Sam Gutmann, and Michael Sipser. Quantum computation by adiabatic
evolution. arXiv preprint quant-ph/0001106, 2000.
[142] Peter JM Van Laarhoven and Emile HL Aarts. Simulated annealing. In Simulated annealing: Theory and
applications, pages 7–15. Springer, 1987.
[143] Kostyantyn Kechedzhi and Vadim N. Smelyanskiy. Open-system quantum annealing in mean-field models
with exponential degeneracy. Physical Review X, 6(2), May 2016.
[144] Vasil S. Denchev, Sergio Boixo, Sergei V. Isakov, Nan Ding, Ryan Babbush, Vadim Smelyanskiy, John
Martinis, and Hartmut Neven. What is the computational value of finite-range tunneling? Physical Review
X, 6(3), Aug 2016.
[145] Edward Farhi, Jeffrey Goldstone, and Sam Gutmann. Quantum adiabatic evolution algorithms versus
simulated annealing. arXiv preprint quant-ph/0201031, 2002.
[146] Akshay Ajagekar, Travis Humble, and Fengqi You. Quantum computing based hybrid solution strategies
for large-scale discrete-continuous optimization problems. Computers & Chemical Engineering, 132:106630,
Jan 2020.
[147] Radford M Neal. Probabilistic inference using Markov chain Monte Carlo methods. Department of Com-
puter Science, University of Toronto Toronto, ON, Canada, 1993.
[148] Christian P Robert and George Casella. Monte Carlo statistical methods, volume 2.
Springer, 2004.
[149] Tito Homem-de-Mello and Güzin Bayraksan. Monte Carlo sampling-based methods for stochastic opti-
mization. Surveys in Operations Research and Management Science, 19(1):56–85, 2014.
[150] Gilles Brassard, Frederic Dupuis, Sebastien Gambs, and Alain Tapp. An optimal quantum algorithm to
approximate the mean and its application for approximating the median of a set of points over an arbitrary
distance. arXiv preprint arXiv:1106.4267, 2011.
[151] S. Heinrich. Quantum summation with an application to integration. Journal of Complexity, 18(1):1–50,
2002.
[152] Ashley Montanaro. Quantum speedup of Monte Carlo methods. Proceedings of the Royal Society A:
Mathematical, Physical and Engineering Sciences, 471(2181):20150301, Sep 2015.
[153] D. Ceperley and B. Alder. Quantum Monte Carlo. Science, 231(4738):555–560, 1986.
[154] Shouvanik Chakrabarti, Rajiv Krishnakumar, Guglielmo Mazzola, Nikitas Stamatopoulos, Stefan Woerner,
and William J. Zeng. A threshold for quantum advantage in derivative pricing. Quantum, 5:463, June 2021.
[155] Thomas Häner, Martin Roetteler, and Krysta M Svore. Optimizing quantum circuits for arithmetic. arXiv
preprint arXiv:1805.12445, 2018.
[156] Arjan Cornelissen and Sofiene Jerbi. Quantum algorithms for multivariate Monte Carlo estimation. 2021.
[157] Lov Grover and Terry Rudolph. Creating superpositions that correspond to efficiently integrable probability
distributions. arXiv preprint quant-ph/0208112, 2002.
[158] Steven Herbert. No quantum speedup with Grover-Rudolph state preparation for quantum Monte Carlo
integration. Physical Review E, 103(6), Jun 2021.
[159] Christa Zoufal, Aurélien Lucchi, and Stefan Woerner. Quantum generative adversarial networks for learning
and loading random distributions. npj Quantum Information, 5(1), 2019.
[160] Bruno Dupire. Pricing with a smile. Risk Magazine, pages 18–20, 1994.
[161] Gisiro Maruyama. On the transition probability functions of the Markov process. Nat. Sci. Rep. Ochano-
mizu Univ, 5:10–20, 1954.
[162] Kazuya Kaneko, Koichi Miyamoto, Naoyuki Takeda, and Kazuyoshi Yoshino. Quantum pricing with a
smile: Implementation of local volatility model on quantum computer. 2020.
[163] Koichi Miyamoto and Kenji Shiohara. Reduction of qubits in a quantum algorithm for Monte Carlo
simulation by a pseudo-random-number generator. Physical Review A, 102(2), Aug 2020.
[164] Michael B Giles. Multilevel Monte Carlo path simulation. Operations research, 56(3):607–617, 2008.
[165] Dong An, Noah Linden, Jin-Peng Liu, Ashley Montanaro, Changpeng Shao, and Jiasu Wang. Quantum-
accelerated multilevel Monte Carlo methods for stochastic differential equations in mathematical finance.
Quantum, 5:481, June 2021.
[166] Steven Herbert. Quantum Monte-Carlo integration: The full advantage in minimal circuit depth. arXiv
preprint arXiv:2105.09100, 2021.
[167] Ryan Babbush, Jarrod R. McClean, Michael Newman, Craig Gidney, Sergio Boixo, and Hartmut Neven.
Focus beyond quadratic speedups for error-corrected quantum advantage. PRX Quantum, 2(1), March 2021.
[168] Niladri Gomes, Anirban Mukherjee, Feng Zhang, Thomas Iadecola, Cai-Zhuang Wang, Kai-Ming Ho,
Peter P Orth, and Yong-Xin Yao. Adaptive variational quantum imaginary time evolution approach for
ground state preparation. Advanced Quantum Technologies, page 2100114, 2021.
[169] Arthur G. Rattew and Bálint Koczor. Preparing arbitrary continuous functions in quantum registers with
logarithmic complexity. 2022.
[170] Mark Kac. On distributions of certain Wiener functionals. Transactions of the American Mathematical
Society, 65(1):1–13, 1949.
[171] Richard P Feynman. The principle of least action in quantum mechanics. In Feynman’s Thesis—A New
Approach to Quantum Theory, pages 1–69. World Scientific, 2005.
[172] Christian Grossmann, Hans-Görg Roos, and Martin Stynes. Numerical treatment of partial differential
equations, volume 154. Springer, 2007.
[173] Jie Shen, Tao Tang, and Li-Lian Wang. Spectral methods: algorithms, analysis and applications, volume 41.
Springer Science & Business Media, 2011.
[174] Yudong Cao, Anargyros Papageorgiou, Iasonas Petras, Joseph Traub, and Sabre Kais. Quantum algorithm
and circuit design solving the Poisson equation. New Journal of Physics, 15(1):013021, 2013.
[175] Dominic W Berry. High-order quantum algorithm for solving linear differential equations. Journal of
Physics A: Mathematical and Theoretical, 47(10):105301, 2014.
[176] Dominic W Berry, Andrew M Childs, Aaron Ostrander, and Guoming Wang. Quantum algorithm for linear
differential equations with exponentially improved dependence on precision. Communications in Mathematical
Physics, 356(3):1057–1081, 2017.
[177] Andrew M Childs, Jin-Peng Liu, and Aaron Ostrander. High-precision quantum algorithms for partial
differential equations. Quantum, 5:574, 2021.
[178] Ashley Montanaro and Sam Pallister. Quantum algorithms and the finite element method. Physical Review
A, 93(3):032324, 2016.
[179] Pedro CS Costa, Stephen Jordan, and Aaron Ostrander. Quantum algorithm for simulating the wave
equation. Physical Review A, 99(1):012323, 2019.
[180] François Fillion-Gourdeau and Emmanuel Lorin. Simple digital quantum algorithm for symmetric first-
order linear hyperbolic systems. Numerical Algorithms, 82(3):1009–1045, 2019.
[181] Torsten Carleman. Application de la théorie des équations intégrales linéaires aux systèmes d’équations
différentielles non linéaires. Acta Mathematica, 59:63–87, 1932.
[182] Krzysztof Kowalski and W-H Steeb. Nonlinear dynamical systems and Carleman linearization. World
Scientific, 1991.
[183] Jin-Peng Liu, Herman Øie Kolden, Hari K Krovi, Nuno F Loureiro, Konstantina Trivisa, and Andrew M
Childs. Efficient quantum algorithm for dissipative nonlinear differential equations. Proceedings of the Na-
tional Academy of Sciences, 118(35), 2021.
[184] Sarah K Leyton and Tobias J Osborne. A quantum algorithm to solve nonlinear differential equations.
arXiv preprint arXiv:0812.4423, 2008.
[185] Seth Lloyd, Giacomo De Palma, Can Gokler, Bobak Kiani, Zi-Wen Liu, Milad Marvian, Felix Tennie, and
Tim Palmer. Quantum algorithm for nonlinear differential equations. arXiv preprint arXiv:2011.06571, 2020.
[186] Ilon Joseph. Koopman–von Neumann approach to quantum simulation of nonlinear classical dynamics.
Physical Review Research, 2(4):043102, 2020.
[187] Shi Jin and Nana Liu. Quantum algorithms for computing observables of nonlinear partial differential
equations. arXiv preprint arXiv:2202.07834, 2022.
[188] Shi Jin and Xiantao Li. Multi-phase computations of the semiclassical limit of the Schrödinger equation
and related problems: Whitham vs Wigner. Physica D: Nonlinear Phenomena, 182(1-2):46–85, 2003.
[189] Filipe Fontanela, Antoine Jacquier, and Mugad Oumgari. A quantum algorithm for linear PDEs arising in
finance. SIAM Journal on Financial Mathematics, 12(4):SC98–SC114, 2021.
[190] Hedayat Alghassi, Amol Deshmukh, Noelle Ibrahim, Nicolas Robles, Stefan Woerner, and Christa Zoufal.
A variational quantum algorithm for the Feynman–Kac formula. arXiv preprint arXiv:2108.10846, 2021.
[191] Michael Lubasch, Jaewoo Joo, Pierre Moinier, Martin Kiffner, and Dieter Jaksch. Variational quantum
algorithms for nonlinear problems. Physical Review A, 101(1):010301, 2020.
[192] Oleksandr Kyriienko, Annie E Paine, and Vincent E Elfving. Solving nonlinear differential equations with
differentiable quantum circuits. Physical Review A, 103(5):052416, 2021.
[193] Oleksandr Kyriienko, Annie E Paine, and Vincent E Elfving. Protocols for trainable and differentiable
quantum generative modelling. arXiv preprint arXiv:2202.08253, 2022.
[194] Annie E. Paine, Vincent E. Elfving, and Oleksandr Kyriienko. Quantum quantile mechanics: Solving
stochastic differential equations for generating time-series. 2021.
[195] Gyorgy Steinbrecher and William Shaw. Quantile mechanics. European Journal of Applied Mathematics, 19, Aug 2007.
[196] James E Gentle. Random number generation and Monte Carlo methods, volume 381. Springer, 2003.
[197] Kenji Kubo, Yuya O. Nakagawa, Suguru Endo, and Shota Nagayama. Variational quantum simulations of
stochastic differential equations. Physical Review A, 103(5), May 2021.
[198] Phelim P. Boyle. Option valuation using a three jump process. 1986.
[199] Hans Föllmer and Alexander Schied. Stochastic finance. In Stochastic Finance. de Gruyter, 2016.
[200] Stefan Weinzierl. Introduction to Monte Carlo methods. arXiv preprint hep-ph/0006269, 2000.
[201] Phelim P Boyle. Options: A Monte Carlo approach. Journal of Financial Economics, 4(3):323–338, 1977.
[202] Nikitas Stamatopoulos, Daniel J. Egger, Yue Sun, Christa Zoufal, Raban Iten, Ning Shen, and Stefan
Woerner. Option pricing using quantum computers. Quantum, 4:291, Jul 2020.
[203] Patrick Rebentrost, Brajesh Gupt, and Thomas R. Bromley. Quantum computational finance: Monte
Carlo pricing of financial derivatives. Physical Review A, 98(2), Aug 2018.
[204] Koichi Miyamoto and Kenji Kubo. Pricing multi-asset derivatives by finite difference method on a quantum
computer, 2021.
[205] Noah Linden, Ashley Montanaro, and Changpeng Shao. Quantum vs. classical algorithms for solving the
heat equation. 2020.
[206] Javier Gonzalez-Conde, Ángel Rodríguez-Rozas, Enrique Solano, and Mikel Sanz. Simulating option price
dynamics with exponential quantum speedup. 2021.
[207] Albert N Shiryaev. Optimal stopping rules, volume 8. Springer Science & Business Media, 2007.
[208] Francis A Longstaff and Eduardo S Schwartz. Valuing American options by simulation: a simple least-
squares approach. The Review of Financial Studies, 14(1):113–147, 2001.
[209] João F. Doriguello, Alessandro Luongo, Jinge Bao, Patrick Rebentrost, and Miklos Santha. Quantum
algorithm for stochastic optimal stopping problems with applications in finance, 2021.
[210] Koichi Miyamoto. Bermudan option pricing by quantum amplitude estimation and Chebyshev interpolation,
2021.
[211] Hao Tang, Anurag Pal, Tian-Yu Wang, Lu-Feng Qiao, Jun Gao, and Xian-Min Jin. Quantum computation
for pricing the collateralized debt obligations. Quantum Engineering, 3(4):e84, 2021.
[212] David X. Li. On default correlation: A copula function approach. The Journal of Fixed Income, 9(4):43–54,
2000.
[213] Ole E. Barndorff-Nielsen. Normal inverse Gaussian distributions and stochastic volatility modelling. Scan-
dinavian Journal of Statistics, 24(1):1–13, 1997.
[214] L. Jeff Hong, Zhaolin Hu, and Guangwu Liu. Monte Carlo methods for value-at-risk and conditional
value-at-risk: A review. ACM Trans. Model. Comput. Simul., 24(4), Nov 2014.
[215] Stefan Woerner and Daniel J. Egger. Quantum risk analysis. npj Quantum Information, 5(1), Feb 2019.
[216] D. J. Egger, R. Garcia Gutierrez, J. Mestre, and S. Woerner. Credit risk analysis using quantum computers.
IEEE Transactions on Computers, 70(12):2136–2145, Dec 2021.
[217] Nikitas Stamatopoulos, Guglielmo Mazzola, Stefan Woerner, and William J. Zeng. Towards quantum
advantage in financial market risk using quantum gradient algorithms. 2021.
[218] Stephen P. Jordan. Fast quantum algorithm for numerical gradient estimation. Phys. Rev. Lett., 95:050501,
July 2005.
[219] András Gilyén, Srinivasan Arunachalam, and Nathan Wiebe. Optimizing quantum optimization algorithms
via faster quantum gradient computation. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on
Discrete Algorithms, SODA ’19, pages 1425–1444, USA, 2019. Society for Industrial and Applied Mathematics.
[220] Guang Hao Low and Isaac L Chuang. Hamiltonian simulation by qubitization. Quantum, 3:163, 2019.
[221] Jianping Li. General explicit difference formulas for numerical differentiation. Journal of Computational
and Applied Mathematics, 183(1):29–52, 2005.
[222] Arjan Cornelissen. Quantum gradient estimation of gevrey functions. arXiv preprint arXiv:1909.13528,
2019.
[223] Andrew Green. XVA: credit, funding and capital valuation adjustments. John Wiley & Sons, 2015.
[224] Jeong Yu Han and Patrick Rebentrost. Quantum advantage for multi-option portfolio pricing and valuation
adjustments. 2022.
[225] Javier Alcazar, Andrea Cadarso, Amara Katabarwa, Marta Mauri, Borja Peropadre, Guoming Wang, and
Yudong Cao. Quantum algorithm for credit valuation adjustments. New Journal of Physics, 24(2):023036,
Feb 2022.
[226] Guoming Wang, Dax Enshan Koh, Peter D. Johnson, and Yudong Cao. Minimizing estimation runtime on
noisy quantum computers. PRX Quantum, 2(1), March 2021.
[227] Dax Enshan Koh, Guoming Wang, Peter D. Johnson, and Yudong Cao. Foundations for Bayesian inference
with engineered likelihood functions for robust amplitude estimation. Journal of Mathematical Physics,
63(5):052202, May 2022.
[228] J. Hromkovič. Algorithmics for hard problems: introduction to combinatorial optimization, randomization,
approximation, and heuristics. Springer Science & Business Media, 2013.
[229] Yurii Nesterov and Arkadii Nemirovskii. Interior-point polynomial algorithms in convex programming.
SIAM, 1994.
[230] Lee Braine, Daniel J. Egger, Jennifer Glick, and Stefan Woerner. Quantum algorithms for mixed binary
optimization applied to transaction settlement. IEEE Transactions on Quantum Engineering, 2:1–8, 2021.
[231] Gary Kochenberger, Jin-Kao Hao, Fred Glover, Mark Lewis, Zhipeng Lü, Haibo Wang, and Yang Wang.
The unconstrained binary quadratic programming problem: a survey. Journal of combinatorial optimization,
28(1):58–81, 2014.
[232] S. Okada, M. Ohzeki, and S. Taguchi. Efficient partition of integer optimization problems with one-hot
encoding. Scientific Reports, 2019.
[233] Andrew Lucas. Ising formulations of many NP problems. Frontiers in Physics, 2:5, 2014.
[234] Tadashi Kadowaki and Hidetoshi Nishimori. Quantum annealing in the transverse Ising model. Physical
Review E, 58(5):5355–5363, Nov 1998.
[235] Nicholas Chancellor, Stefan Zohren, and Paul A Warburton. Circuit design for multi-body interactions
in superconducting quantum annealing systems with applications to a scalable architecture. npj Quantum
Information, 3(1):1–7, 2017.
[236] Krzysztof Domino, Akash Kundu, Özlem Salehi, and Krzysztof Krawiec. Quadratic and higher-order
unconstrained binary optimization of railway dispatching problem for quantum computing. arXiv preprint
arXiv:2107.03234, 2021.
[237] James King, Masoud Mohseni, William Bernoudy, Alexandre Fréchette, Hossein Sadeghi, Sergei V
Isakov, Hartmut Neven, and Mohammad H Amin. Quantum-assisted genetic algorithm. arXiv preprint
arXiv:1907.00707, 2019.
[238] Masayuki Ohzeki. Breaking limitation of quantum annealer in solving optimization problems under con-
straints. Scientific Reports, 10, Feb 2020.
[239] Elisabeth Lobe and Annette Lutz. Minor embedding in broken Chimera and Pegasus graphs is NP-complete.
arXiv preprint arXiv:2110.08325, 2021.
[240] Jun Cai, William G Macready, and Aidan Roy. A practical heuristic for finding graph minors. arXiv
preprint arXiv:1406.2741, 2014.
[241] Timothy Goodrich, Blair D. Sullivan, and T. Humble. Optimizing adiabatic quantum program compilation
using a graph-theoretic framework. Quantum Information Processing, 17:1–26, 2018.
[242] Davide Venturelli, Salvatore Mandrà, Sergey Knysh, Bryan O’Gorman, Rupak Biswas, and Vadim Smelyan-
skiy. Quantum optimization of fully connected spin glasses. Phys. Rev. X, 5:031040, Sep 2015.
[243] Wolfgang Lechner, Philipp Hauke, and Peter Zoller. A quantum annealing architecture with all-to-all
connectivity from local interactions. Science Advances, 1(9):e1500838, 2015.
[244] Kilian Ender, Roeland ter Hoeven, Benjamin E Niehoff, Maike Drieb-Schön, and Wolfgang Lechner. Parity
quantum optimization: Compiler. arXiv preprint arXiv:2105.06233, 2021.
[245] Maike Drieb-Schön, Younes Javanmard, Kilian Ender, and Wolfgang Lechner. Parity quantum optimiza-
tion: Encoding constraints. arXiv preprint arXiv:2105.06235, 2021.
[246] Michael Fellner, Kilian Ender, Roeland ter Hoeven, and Wolfgang Lechner. Parity quantum optimization:
Benchmarks. arXiv preprint arXiv:2105.06240, 2021.
[247] Henrique Silvério, Sebastián Grijalva, Constantin Dalyac, Lucas Leclerc, Peter J Karalekas, Nathan
Shammah, Mourad Beji, Louis-Paul Henry, and Loïc Henriet. Pulser: An open-source package for the design
of pulse sequences in programmable neutral-atom arrays. arXiv preprint arXiv:2104.15044, 2021.
[248] Loïc Henriet, Lucas Beguin, Adrien Signoles, Thierry Lahaye, Antoine Browaeys, Georges-Olivier Reymond,
and Christophe Jurczak. Quantum computing with neutral atoms. Quantum, 4:327, September 2020.
[249] S. Ebadi, A. Keesling, M. Cain, T. T. Wang, H. Levine, D. Bluvstein, G. Semeghini, A. Omran, J.-G. Liu,
R. Samajdar, X.-Z. Luo, B. Nash, X. Gao, B. Barak, E. Farhi, S. Sachdev, N. Gemelke, L. Zhou, S. Choi,
H. Pichler, S.-T. Wang, M. Greiner, V. Vuletić, and M. D. Lukin. Quantum optimization of maximum
independent set using Rydberg atom arrays. Science, 376(6598):1209–1215, 2022.
[250] Ruslan Shaydulin, Stuart Hadfield, Tad Hogg, and Ilya Safro. Classical symmetries and the quantum
approximate optimization algorithm. Quantum Information Processing, 20(11):359, Oct 2021.
[251] Ruslan Shaydulin, Ilya Safro, and Jeffrey Larson. Multistart methods for quantum approximate optimiza-
tion. In 2019 IEEE High Performance Extreme Computing Conference (HPEC), pages 1–8, 2019.
[252] Alexey Galda, Xiaoyuan Liu, Danylo Lykov, Yuri Alexeev, and Ilya Safro. Transferability of optimal QAOA
parameters between random graphs. In 2021 IEEE International Conference on Quantum Computing and
Engineering (QCE), pages 171–180. IEEE, 2021.
[253] Michael Streif and Martin Leib. Training the quantum approximate optimization algorithm without access
to a quantum processing unit. Quantum Science and Technology, 5(3):034008, 2020.
[254] Sami Khairy, Ruslan Shaydulin, Lukasz Cincio, Yuri Alexeev, and Prasanna Balaprakash. Learning to
optimize variational quantum circuits to solve combinatorial problems. In Proceedings of the Thirty-Fourth
AAAI Conference on Artificial Intelligence (AAAI), 2020.
[255] Ruslan Shaydulin and Yuri Alexeev. Evaluating quantum approximate optimization algorithm: A case
study. In Proceedings of the 2nd International Workshop on Quantum Computing for Sustainable Computing,
2019.
[256] Zhen-Duo Wang, Pei-Lin Zheng, Biao Wu, and Yi Zhang. Quantum dropout for efficient quantum approx-
imate optimization algorithm on combinatorial optimization problems. arXiv preprint arXiv:2203.10101,
2022.
[257] Xiaoyuan Liu, Ruslan Shaydulin, and Ilya Safro. Quantum approximate optimization algorithm with
sparsified phase operator, 2022.
[258] Sahil Gulania, Bo Peng, Yuri Alexeev, and Niranjan Govind. Quantum time dynamics of 1D-Heisenberg
models employing the Yang–Baxter equation for circuit compression. 2021.
[259] Matthew P Harrigan, Kevin J Sung, Matthew Neeley, Kevin J Satzinger, Frank Arute, Kunal Arya, Juan
Atalaya, Joseph C Bardin, Rami Barends, Sergio Boixo, et al. Quantum approximate optimization of non-
planar graph problems on a planar superconducting processor. Nature Physics, 17(3):332–336, 2021.
[260] Johannes S Otterbach, Riccardo Manenti, Nasser Alidoust, A Bestwick, M Block, B Bloom, S Caldwell,
N Didier, E Schuyler Fried, S Hong, et al. Unsupervised machine learning on a hybrid quantum computer.
arXiv preprint arXiv:1712.05771, 2017.
[261] Pradeep Niroula, Ruslan Shaydulin, Romina Yalovetzky, Pierre Minssen, Dylan Herman, Shaohan Hu, and
Marco Pistoia. Constrained quantum optimization for extractive summarization on a trapped-ion quantum
computer. 2022.
[262] Dmitry A Fedorov, Bo Peng, Niranjan Govind, and Yuri Alexeev. VQE method: A short survey and recent
developments. Materials Theory, 6(1):1–21, 2022.
[263] Arthur G. Rattew, Shaohan Hu, Marco Pistoia, Richard Chen, and Steve Wood. A Domain-
agnostic, Noise-resistant, Hardware-efficient Evolutionary Variational Quantum Eigensolver. arXiv preprint
arXiv:1910.09694, 2020.
[264] David Amaro, Carlo Modica, Matthias Rosenkranz, Mattia Fiorentini, Marcello Benedetti, and Michael
Lubasch. Filtering variational quantum algorithms for combinatorial optimization. arXiv preprint
arXiv:2106.10055, 2021.
[265] Xuchen You, Shouvanik Chakrabarti, and Xiaodi Wu. A convergence theory for over-parameterized varia-
tional quantum eigensolvers. 2022.
[266] Gregory L Naber. The Geometry of Minkowski Spacetime: An Introduction to the Mathematics of the
Special Theory of Relativity, volume 92. Springer Science & Business Media, 01 2012.
[267] Konrad Osterwalder and Robert Schrader. Axioms for Euclidean Green’s functions. Communications in
mathematical physics, 31(2):83–112, 1973.
[268] G. C. Wick. Properties of Bethe-Salpeter wave functions. Phys. Rev., 96:1124–1134, Nov 1954.
[269] Sam McArdle, Tyson Jones, Suguru Endo, Ying Li, Simon C. Benjamin, and Xiao Yuan. Variational
ansatz-based quantum simulation of imaginary time evolution. npj Quantum Information, 5(1), Sep 2019.
[270] Mario Motta, Chong Sun, Adrian T. K. Tan, Matthew J. O’Rourke, Erika Ye, Austin J. Minnich, Fernando
G. S. L. Brandão, and Garnet Kin-Lic Chan. Determining eigenstates and thermal states on a quantum
computer using quantum imaginary time evolution. Nature Physics, 16(2):205–210, Nov 2019.
[271] Kosuke Mitarai and Keisuke Fujii. Methodology for replacing indirect measurements with direct measure-
ments. Physical Review Research, 1(1), Aug 2019.
[272] Marcello Benedetti, Mattia Fiorentini, and Michael Lubasch. Hardware-efficient variational quantum algo-
rithms for time evolution. Phys. Rev. Research, 3:033083, Jul 2021.
[273] Christa Zoufal, Aurélien Lucchi, and Stefan Woerner. Variational quantum Boltzmann machines. Quantum
Machine Intelligence, 3(1), Feb 2021.
[274] Ming-Cheng Chen, Ming Gong, Xiaosi Xu, Xiao Yuan, Jian-Wen Wang, Can Wang, Chong Ying, Jin Lin,
Yu Xu, Yulin Wu, and et al. Demonstration of adiabatic variational quantum computing with a supercon-
ducting quantum coprocessor. Physical Review Letters, 125(18), Oct 2020.
[275] Christoph Dürr and Peter Høyer. A quantum algorithm for finding the minimum. arXiv preprint quant-ph/9607014, 1996.
[276] William P Baritompa, David W Bulger, and Graham R Wood. Grover’s quantum algorithm applied to
global optimization. SIAM Journal on Optimization, 15(4):1170–1184, 2005.
[277] David Bulger, William P. Baritompa, and Graham R. Wood. Implementing pure adaptive search with
Grover’s quantum algorithm. Journal of optimization theory and applications, 116(3):517–529, 2003.
[278] Austin Gilliam, Stefan Woerner, and Constantin Gonciulea. Grover adaptive search for constrained poly-
nomial binary optimization. Quantum, 5:428, 2021.
[279] Dimitris Bertsimas and John N Tsitsiklis. Introduction to linear optimization, volume 6. Athena Scientific
Belmont, MA, 1997.
[280] Giacomo Nannicini. Fast quantum subroutines for the simplex method. In Integer Programming and
Combinatorial Optimization - 22nd International Conference, IPCO 2021, Atlanta, GA, USA, May 19-21,
2021, Proceedings, volume 12707 of Lecture Notes in Computer Science, pages 311–325. Springer, 2021.
[281] Iordanis Kerenidis, Anupam Prakash, and Dániel Szilágyi. Quantum algorithms for second-order cone
programming and support vector machines. Quantum, 5:427, April 2021.
[282] Renato Monteiro and Takashi Tsuchiya. Polynomial convergence of primal-dual algorithms for the second-
order cone program based on the MZ-family of directions. Mathematical programming, 88(1):61–83, 2000.
[283] Fernando G.S.L. Brandao and Krysta M. Svore. Quantum speed-ups for solving semidefinite programs. In
2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 415–426, 2017.
[284] Fernando G. S. L. Brandão, Amir Kalev, Tongyang Li, Cedric Yen-Yu Lin, Krysta M. Svore, and Xiaodi
Wu. Quantum SDP solvers: Large speed-ups, optimality, and applications to quantum learning. In 46th
International Colloquium on Automata, Languages, and Programming (ICALP 2019), volume 132 of Leibniz
International Proceedings in Informatics (LIPIcs), pages 27:1–27:14. Schloss Dagstuhl–Leibniz-Zentrum fuer
Informatik, 2019.
[285] Joran van Apeldoorn, András Gilyén, Sander Gribling, and Ronald de Wolf. Quantum SDP-solvers: Better
upper and lower bounds. Quantum, 4:230, Feb 2020.
[286] Sanjeev Arora and Satyen Kale. A combinatorial, primal-dual approach to semidefinite programs. J. ACM,
63(2), May 2016.
[287] Ruslan Shaydulin, Hayato Ushijima-Mwesigwa, Christian FA Negre, Ilya Safro, Susan M Mniszewski, and
Yuri Alexeev. A hybrid approach for solving optimization problems on small quantum computers. Computer,
52(6):18–26, 2019.
[288] Ruslan Shaydulin, Hayato Ushijima-Mwesigwa, Ilya Safro, Susan Mniszewski, and Yuri Alexeev. Network
community detection on small quantum computers. Advanced Quantum Technologies, 2(9):1900029, 2019.
[289] Hayato Ushijima-Mwesigwa, Ruslan Shaydulin, Christian F. A. Negre, Susan M. Mniszewski, Yuri Alexeev,
and Ilya Safro. Multilevel combinatorial optimization across quantum architectures. ACM Transactions on
Quantum Computing, 2(1), Feb 2021.
[290] Guillaume Chapuis, Hristo Djidjev, Georg Hahn, and Guillaume Rizk. Finding maximum cliques on the
D-Wave quantum annealer. Journal of Signal Processing Systems, 91(3):363–377, 2019.
[291] Harry Markowitz. Portfolio selection. The Journal of Finance, 7(1):77–91, 1952.
[292] S. Benati and R. Rizzi. A mixed integer linear programming formulation of the optimal mean/value-at-risk
portfolio problem. European Journal of Operational Research, 2007.
[293] Renata Mansini, Włodzimierz Ogryczak, and M. Grazia Speranza. Linear and mixed integer programming
for portfolio optimization. Springer, 2015.
[294] Gerard Cornuejols, Marshall L Fisher, and George L Nemhauser. Exceptional paper—location of bank
accounts to optimize float: An analytic study of exact and approximate algorithms. Management Science,
23(8):789–810, 1977.
[295] Angad Kalra, Faisal Qureshi, and Michael Tisi. Portfolio asset identification using graph algorithms on a
quantum annealer. Available at SSRN 3333537, 2018.
[296] Gili Rosenberg and Maxwell Rounds. Long-short minimum risk parity optimization using a quantum or
digital annealer. White Paper 1Qbit, 2018.
[297] Mark Hodson, Brendan Ruck, Hugh Ong, David Garvin, and Stefan Dulman. Portfolio rebalancing exper-
iments using the quantum alternating operator ansatz. arXiv preprint arXiv:1911.05296, 2019.
[298] N. Slate, E. Matwiejew, S. Marsh, and J. B. Wang. Quantum walk-based portfolio optimisation. Quantum,
5:513, July 2021.
[299] Gili Rosenberg, Poya Haghnegahdar, Phil Goddard, Peter Carr, Kesheng Wu, and Marcos Lopez de Prado.
Solving the optimal trading trajectory problem using a quantum annealer. IEEE Journal of Selected Topics
in Signal Processing, 10(6):1053–1060, Sep 2016.
[300] Samuel Mugel, Carlos Kuchkovsky, Escolástico Sánchez, Samuel Fernández-Lorenzo, Jorge Luis-Hita, En-
rique Lizaso, and Román Orús. Dynamic portfolio optimization with real datasets using quantum processors
and quantum-inspired tensor networks. Phys. Rev. Research, 4:013006, Jan 2022.
[301] Erica Grant, Travis S. Humble, and Benjamin Stump. Benchmarking quantum annealing controls with
portfolio optimization. Physical Review Applied, 15(1), Jan 2021.
[302] Iordanis Kerenidis, Anupam Prakash, and Dániel Szilágyi. Quantum algorithms for portfolio optimization.
In Proceedings of the 1st ACM Conference on Advances in Financial Technologies, AFT ’19, pages 147–155,
New York, NY, USA, 2019. Association for Computing Machinery.
[303] Patrick Rebentrost and Seth Lloyd. Quantum computational finance: quantum algorithm for portfolio
optimization. arXiv preprint arXiv:1811.03975, 2018.
[304] András Gilyén, Seth Lloyd, and Ewin Tang. Quantum-inspired low-rank stochastic regression with loga-
rithmic dependence on the dimension. 2018.
[305] Juan Miguel Arrazola, Alain Delgado, Bhaskar Roy Bardhan, and Seth Lloyd. Quantum-inspired algorithms
in practice. Quantum, 4:307, Aug 2020.
[306] G. Rosenberg, C. Adolphs, A. Milne, and A. Lee. Swap netting using a quantum annealer. White Paper
1Qbit, 2016.
[307] René M Stulz. Credit default swaps and the credit crisis. Journal of Economic Perspectives, 24(1):73–92,
2010.
[308] Henry TC Hu. Swaps, the modern process of financial innovation and the vulnerability of a regulatory
paradigm. U. Pa. L. Rev., 138:333, 1989.
[309] James Bicksler and Andrew H. Chen. An economic analysis of interest rate swaps. The Journal of Finance,
41(3):645–655, 1986.
[310] Darrell Duffie and Ming Huang. Swap rates and credit quality. The Journal of Finance, 51(3):921–949,
1996.
[311] Andrei Shleifer and Robert W. Vishny. The limits of arbitrage. The Journal of Finance, 52(1):35–55, 1997.
[312] Freddy Delbaen and Walter Schachermayer. The mathematics of arbitrage. Springer Science & Business
Media, 2006.
[313] Gili Rosenberg. Finding optimal arbitrage opportunities using a quantum annealer. White Paper 1Qbit,
2016.
[314] Boris V Cherkassky and Andrew V Goldberg. Negative-cycle detection algorithms. Mathematical Program-
ming, 85(2), 1999.
[315] Wanmei Soon and Heng-Qing Ye. Currency arbitrage detection using a binary integer programming model.
International Journal of Mathematical Education in Science and Technology, 42(3):369–376, 2011.
[316] Andrew Milne, Maxwell Rounds, and Phil Goddard. Optimal feature selection in credit scoring and
classification using a quantum annealer. White Paper 1Qbit, 2017.
[317] Matthew Elliott, Benjamin Golub, and Matthew O. Jackson. Financial networks and contagion. American
Economic Review, 104(10):3115–53, October 2014.
[318] Brett Hemenway and Sanjeev Khanna. Sensitivity and computational complexity in financial networks.
Algorithmic Finance, 5(3-4):95–110, 2016.
[319] Ilene Grabel. Predicting financial crisis in developing economies: astronomy or astrology? Eastern Eco-
nomic Journal, 29(2):243–258, 2003.
[320] Arturo Estrella and Frederic S. Mishkin. Predicting U.S. recessions: Financial variables as leading indica-
tors. The Review of Economics and Statistics, 1998.
[321] Román Orús, Samuel Mugel, and Enrique Lizaso. Forecasting financial crashes with quantum computing.
Phys. Rev. A, 99(6), Jun 2019.
[322] Martin Plesch and Časlav Brukner. Quantum-state preparation with universal gate decompositions. Phys.
Rev. A, 83:032302, Mar 2011.
[323] Jarrod R McClean, Jonathan Romero, Ryan Babbush, and Alán Aspuru-Guzik. The theory of variational
hybrid quantum-classical algorithms. New Journal of Physics, 18(2):023023, Feb 2016.
[324] Hsin-Yuan Huang, Michael Broughton, Masoud Mohseni, Ryan Babbush, Sergio Boixo, Hartmut Neven,
and Jarrod R McClean. Power of data in quantum machine learning. Nature Communications, 12(1):1–9,
2021.
[325] Nathan Wiebe, Daniel Braun, and Seth Lloyd. Quantum algorithm for data fitting. Phys. Rev. Lett.,
109:050505, Aug 2012.
[326] Guoming Wang. Quantum algorithm for linear regression. Phys. Rev. A, 96:012335, Jul 2017.
[327] Prasanna Date and Thomas Potok. Adiabatic quantum linear regression. Scientific reports, 11(1):21905,
2021.
[328] Zhikuan Zhao, Jack K. Fitzsimons, and Joseph F. Fitzsimons. Quantum-assisted Gaussian process regres-
sion. Physical Review A, 99(5):052331, 2019.
[329] K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii. Quantum circuit learning. Phys. Rev. A, 98:032309,
Sep 2018.
[330] K.-R. Muller, S. Mika, G. Ratsch, K. Tsuda, and B. Scholkopf. An introduction to kernel-based learning
algorithms. IEEE Transactions on Neural Networks, 12(2):181–201, 2001.
[331] Patrick Rebentrost, Masoud Mohseni, and Seth Lloyd. Quantum support vector machine for big data
classification. Phys. Rev. Lett., 2014.
[332] Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum algorithms for supervised and unsuper-
vised machine learning. arXiv preprint arXiv:1307.0411, 2013.
[333] Vojtěch Havlíček, Antonio D. Córcoles, Kristan Temme, Aram W. Harrow, Abhinav Kandala, Jerry M.
Chow, and Jay M. Gambetta. Supervised learning with quantum-enhanced feature spaces. Nature,
567(7747):209–212, March 2019.
[334] Ruslan Shaydulin and Stefan M. Wild. Importance of kernel bandwidth in quantum machine learning.
2021.
[335] Abdulkadir Canatar, Evan Peters, Cengiz Pehlevan, Stefan M. Wild, and Ruslan Shaydulin. Bandwidth
enables generalization in quantum kernel models. 2022.
[336] Jonas M. Kübler, Simon Buchholz, and Bernhard Schölkopf. The inductive bias of quantum kernels. 2021.
[337] N. Wiebe, A. Kapoor, and K. Svore. Quantum algorithms for nearest-neighbor methods for supervised and
unsupervised learning. Quantum Information & Computation, 15, 2015.
[338] Yue Ruan, Xiling Xue, Heng Liu, Jianing Tan, and Xi Li. Quantum algorithm for k-nearest neighbors classi-
fication based on the metric of Hamming distance. International Journal of Theoretical Physics, 56(11):3496–
3507, 2017.
[339] Afrad Basheer, A Afham, and Sandeep K Goyal. Quantum k-nearest neighbors algorithm. arXiv preprint
arXiv:2003.09187, 2020.
[340] James MacQueen. Some methods for classification and analysis of multivariate observations. In Proceedings
of the fifth Berkeley symposium on mathematical statistics and probability, volume 1, pages 281–297. Oakland,
CA, USA, 1967.
[341] Iordanis Kerenidis, Jonas Landman, Alessandro Luongo, and Anupam Prakash. q-means: A quantum
algorithm for unsupervised machine learning. arXiv preprint arXiv:1812.03584, 2018.
[342] Sumsam Ullah Khan, Ahsan Javed Awan, and Gemma Vall-Llosera. K-means clustering on noisy interme-
diate scale quantum computers. arXiv preprint arXiv:1909.12183, 2019.
[343] Hideyuki Miyahara, Kazuyuki Aihara, and Wolfgang Lechner. Quantum expectation-maximization algo-
rithm. Phys. Rev. A, 101:012326, Jan 2020.
[344] Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm.
In Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and
Synthetic, pages 849—-856, 2001.
[345] Ammar Daskin. Quantum spectral clustering through a biased phase estimation algorithm. TWMS Journal
of Applied and Engineering Mathematics, 10(1):24–33, 2017.
[346] Simon Apers and Ronald de Wolf. Quantum speedup for graph sparsification, cut approximation and
Laplacian solving. In 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS),
pages 637–648, 2020.
[347] I. Kerenidis and J. Landman. Quantum spectral clustering. Physical Review A, 103(4):042415, 2021.
[348] J. S. Otterbach, R. Manenti, N. Alidoust, A. Bestwick, M. Block, B. Bloom, S. Caldwell, N. Didier,
E. Schuyler Fried, S. Hong, P. Karalekas, C. B. Osborn, A. Papageorge, E. C. Peterson, G. Prawiroatmodjo,
N. Rubin, Colm A. Ryan, D. Scarabelli, M. Scheer, E. A. Sete, P. Sivarajah, Robert S. Smith, A. Staley,
N. Tezak, W. J. Zeng, A. Hudson, Blake R. Johnson, M. Reagor, M. P. da Silva, and C. Rigetti. Unsupervised
machine learning on a hybrid quantum computer, 2017.
[349] E. Aïmeur, G. Brassard, and S. Gambs. Quantum clustering algorithms. In Proceedings of the 24th
International Conference on Machine Learning (ICML), pages 1–8, 2007.
[350] E. Aïmeur, G. Brassard, and S. Gambs. Quantum speed-up for unsupervised learning. Machine Learning,
90(2):261–287, 2013.
[351] Iordanis Kerenidis, Alessandro Luongo, and Anupam Prakash. Quantum expectation-maximization for
Gaussian mixture models. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International
Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 5187–5197.
PMLR, 13–18 Jul 2020.
[352] Vaibhaw Kumar, Gideon Bass, Casey Tomlin, and Joseph Dulny. Quantum annealing for combinatorial
clustering. Quantum Information Processing, 17(2), Jan 2018.
[353] Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum principal component analysis. Nature
Physics, 10(9):631–633, July 2014.
[354] Chen He, Jiazhen Li, Weiqi Liu, and Z. Jane Wang. A low complexity quantum principal component
analysis algorithm, 2020.
[355] Chao-Hua Yu, Fei Gao, Song Lin, et al. Quantum data compression by principal component analysis.
Quantum Information Processing, 18:249, 2019.
[356] Jie Lin, Wan-Su Bao, Shuo Zhang, Tan Li, and Xiang Wang. An improved quantum principal component
analysis algorithm based on the quantum singular threshold method. Physics Letters A, 383(24):2862–2868,
2019.
[357] Armando Bellante, Alessandro Luongo, and Stefano Zanero. Quantum algorithms for data representation
and analysis, 2021.
[358] YaoChong Li, Ri-Gui Zhou, RuiQing Xu, WenWen Hu, and Ping Fan. Quantum algorithm for the nonlinear
dimensionality reduction with arbitrary kernel. Quantum Science and Technology, 6(1):014001, Nov 2020.
[359] Iris Cong and Luming Duan. Quantum discriminant analysis for dimensionality reduction and classification.
New Journal of Physics, 18(7):073011, July 2016.
[360] Iordanis Kerenidis and Alessandro Luongo. Classification of the MNIST data set with quantum slow feature
analysis. Phys. Rev. A, 101:062327, Jun 2020.
[361] Seth Lloyd, Silvano Garnerone, and Paolo Zanardi. Quantum algorithms for topological and geometric
analysis of data. Nat. Commun., 7(1):1–7, 2016.
[362] Casper Gyurik, Chris Cade, and Vedran Dunjko. Towards quantum advantage via topological data analysis.
arXiv preprint arXiv:2005.02607, 2020.
[363] Shashanka Ubaru, Ismail Yunus Akhalwaya, Mark S Squillante, Kenneth L Clarkson, and Lior
Horesh. Quantum topological data analysis with linear depth and exponential speedup. arXiv preprint
arXiv:2108.02811, 2021.
[364] Iordanis Kerenidis and Anupam Prakash. Quantum machine learning with subspace states. 2022.
[365] Marcello Benedetti, Delfina Garcia-Pintos, Oscar Perdomo, Vicente Leyton-Ortega, Yunseong Nam, and
Alejandro Perdomo-Ortiz. A generative modeling approach for benchmarking and training shallow quantum
circuits. npj Quantum Information, 5(1), May 2019.
[366] Jin-Guo Liu and Lei Wang. Differentiable learning of quantum circuit Born machines. Physical Review A,
98(6), Dec 2018.
[367] Brian Coyle, Maxwell Henderson, Justin Chan Jin Le, Niraj Kumar, Marco Paini, and Elham Kashefi.
Quantum versus classical generative modelling in finance. Quantum Science and Technology, 6(2):024013,
2021.
[368] Elton Yechao Zhu, Sonika Johri, Dave Bacon, Mert Esencan, Jungsang Kim, Mark Muir, Nikhil Mur-
gai, Jason Nguyen, Neal Pisenti, Adam Schouela, et al. Generative quantum learning of joint probability
distribution functions. arXiv preprint arXiv:2109.06315, 2021.
[369] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. The MIT
Press, 2009.
[370] Guang Hao Low, Theodore James Yoder, and Isaac L Chuang. Quantum inference on Bayesian networks.
Phys. Rev. A, 89, June 2014.
[371] Robert R Tucci. Quantum Bayesian nets. International Journal of Modern Physics B, 09(03):295–337,
1995.
[372] S. E. Borujeni, S. Nannapaneni, N. H. Nguyen, E. C. Behrman, and J. E. Steck. Quantum circuit repre-
sentation of Bayesian networks. Expert Systems with Applications, 2021.
[373] Sima E Borujeni, Nam H Nguyen, Saideep Nannapaneni, Elizabeth C Behrman, and James E Steck.
Experimental evaluation of quantum Bayesian networks on IBM QX hardware. In 2020 IEEE International
Conference on Quantum Computing and Engineering (QCE), pages 372–378. IEEE, 2020.
[374] G. Klepac. Chapter 12 – The Schrödinger equation as inspiration for a client portfolio simulation hybrid
system based on dynamic Bayesian networks and the REFII model. In Quantum Inspired Computational
Intelligence, pages 391–416. Morgan Kaufmann, Boston, 2017.
[375] Catarina Moreira and Andreas Wichert. Quantum-like Bayesian networks for modeling decision making.
Frontiers in Psychology, 7:11, 2016.
[376] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT Press, 2016.
[377] Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural computation,
14(8):1771–1800, 2002.
[378] Ruslan Salakhutdinov and Geoffrey Hinton. Deep Boltzmann machines. In Proceedings of the Twelfth
International Conference on Artificial Intelligence and Statistics, volume 5, pages 448–455. PMLR, 2009.
[379] Marcello Benedetti, John Realpe-Gómez, Rupak Biswas, and Alejandro Perdomo-Ortiz. Estimation of
effective temperatures in quantum annealers for sampling applications: A case study with possible applications
in deep learning. Physical Review A, 94(2):022308, 2016.
[380] Vivek Dixit, Raja Selvarajan, Muhammad A Alam, Travis S Humble, and Sabre Kais. Training restricted
Boltzmann machines with a d-wave quantum annealer. Front. Phys., 9:589626, 2021.
[381] Mohammad H. Amin, Evgeny Andriyash, Jason Rolfe, Bohdan Kulchytskyy, and Roger Melko. Quantum
Boltzmann machine. Phys. Rev. X, 8:021050, May 2018.
[382] Seth Lloyd and Christian Weedbrook. Quantum generative adversarial learning. Phys. Rev. Lett.,
121:040502, July 2018.
[383] Kerstin Beer, Dmytro Bondarenko, Terry Farrelly, Tobias J. Osborne, Robert Salzmann, Daniel Scheier-
mann, and Ramona Wolf. Training deep quantum neural networks. Nature Communications, 11(1), Feb
2020.
[384] Yudong Cao, Gian Giacomo Guerreschi, and Alán Aspuru-Guzik. Quantum neuron: an elementary building
block for machine learning on quantum computers. arXiv preprint arXiv:1711.11240, 2017.
[385] Shuai Li, Kui Jia, Yuxin Wen, Tongliang Liu, and Dacheng Tao. Orthogonal deep neural networks. IEEE
transactions on pattern analysis and machine intelligence, 43(4):1352–1368, April 2021.
[386] Iordanis Kerenidis, Jonas Landman, and Natansh Mathur. Classical and quantum algorithms for orthogonal
neural networks. arXiv preprint arXiv:2106.07198, 2021.
[387] Jonathan Allcock, Chang-Yu Hsieh, Iordanis Kerenidis, and Shengyu Zhang. Quantum algorithms for
feedforward neural networks. ACM Transactions on Quantum Computing, 1(1):1–24, 2020.
[388] Edward Farhi and Hartmut Neven. Classification with quantum neural networks on near term processors.
arXiv preprint arXiv:1802.06002, 2018.
[389] Maxwell Henderson, Samriddhi Shakya, Shashindra Pradhan, and Tristan Cook. Quanvolutional neural
networks: powering image recognition with quantum circuits. Quantum Machine Intelligence, 2(1):1–9, 2020.
[390] Nathan Killoran, Thomas R. Bromley, Juan Miguel Arrazola, Maria Schuld, Nicolás Quesada, and Seth
Lloyd. Continuous-variable quantum neural networks. Phys. Rev. Research, 1:033063, Oct 2019.
[391] Henry Liu, Junyu Liu, Rui Liu, Henry Makhanov, Danylo Lykov, Anuj Apte, and Yuri Alexeev. Embedding
learning in hybrid quantum-classical neural networks, 2022.
[392] Maria Schuld, Ryan Sweke, and Johannes Jakob Meyer. Effect of data encoding on the expressive power
of variational quantum-machine-learning models. Physical Review A, 103(3):032430, 2021.
[393] A. Abbas, D. Sutter, C. Zoufal, A. Lucchi, A. Figalli, and S. Woerner. The Power of Quantum Neural
Networks. Nature Computational Science, 1(6):403–409, 2021.
[394] Maria Schuld. Supervised quantum machine learning models are kernel methods. arXiv preprint
arXiv:2101.11020, 2021.
[395] Dylan Herman, Rudy Raymond, Muyuan Li, Nicolas Robles, Antonio Mezzacapo, and Marco Pistoia.
Expressivity of variational quantum machine learning on the Boolean cube. 2022.
[396] Sofiene Jerbi, Lukas J. Fiderer, Hendrik Poulsen Nautrup, Jonas M. Kübler, Hans J. Briegel, and Vedran
Dunjko. Quantum machine learning beyond kernel methods. 2021.
[397] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization
in neural networks. arXiv preprint arXiv:1806.07572, 2018.
[398] Kouhei Nakaji, Hiroyuki Tezuka, and Naoki Yamamoto. Quantum-enhanced neural networks in the neural
tangent kernel framework. arXiv preprint arXiv:2109.03786, 2021.
[399] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 86(11):2278–2324, 1998.
[400] Iordanis Kerenidis, Jonas Landman, and Anupam Prakash. Quantum algorithms for deep convolutional
neural networks. arXiv preprint arXiv:1911.01117, 2019.
[401] Feihong Shen and Jun Liu. QFCNN: Quantum Fourier convolutional neural network. arXiv preprint
arXiv:2106.10421, 2021.
[402] Iris Cong, Soonwon Choi, and Mikhail D. Lukin. Quantum convolutional neural networks. Nature Physics,
15(12):1273–1278, 2019.
[403] Guillaume Verdon, Trevor McCourt, Enxhell Luzhnica, Vikash Singh, Stefan Leichenauer, and Jack Hidary.
Quantum graph neural networks. arXiv preprint arXiv:1909.12264, 2019.
[404] Jaeho Choi, Seunghyeok Oh, and Joongheon Kim. A tutorial on quantum graph recurrent neural network
(QGRNN). In 2021 International Conference on Information Networking (ICOIN), pages 46–49, 2021.
[405] Daoyi Dong, Chunlin Chen, Hanxiong Li, and Tzyh-Jong Tarn. Quantum reinforcement learning. IEEE
Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 38(5):1207–1220, 2008.
[406] G. D. Paparo, V. Dunjko, A. Makmal, M. A. Martin-Delgado, and H. J. Briegel. Quantum speedup for
active learning agents. Phys. Rev. X, 2014.
[407] Samuel Yen-Chi Chen, Chao-Han Huck Yang, Jun Qi, Pin-Yu Chen, Xiaoli Ma, and Hsi-Sheng Goan.
Variational quantum circuits for deep reinforcement learning. IEEE Access, 8:141007–141024, 2020.
[408] Samuel Yen-Chi Chen, Chih-Min Huang, Chia-Wei Hsing, Hsi-Sheng Goan, and Ying-Jer Kao. Variational
quantum reinforcement learning via evolutionary optimization. Machine Learning: Science and Technology,
2021.
[409] Owen Lockwood and Mei Si. Reinforcement learning with quantum variational circuits. In Proceedings
of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, volume 16, pages
245–251. AAAI Press, 2020.
[410] Sofiene Jerbi, Lea M. Trenkwalder, Hendrik Poulsen Nautrup, Hans J. Briegel, and Vedran Dunjko. Quan-
tum enhancements for deep reinforcement learning in large spaces. PRX Quantum, 2:010328, Feb 2021.
[411] Daniel Crawford, Anna Levit, Navid Ghadermarzy, Jaspreet S Oberoi, and Pooya Ronagh. Reinforcement
learning using quantum Boltzmann machines. arXiv preprint arXiv:1612.05695, 2016.
[412] El Amine Cherrat, Iordanis Kerenidis, and Anupam Prakash. Quantum reinforcement learning via policy
iteration, 2022.
[413] Arjan Cornelissen. Quantum gradient estimation and its application to quantum reinforcement learning.
Master’s thesis, Delft, Netherlands, 2018.
[414] Daniel W. Otter, Julian R. Medina, and Jugal K. Kalita. A survey of the usages of deep learning for natural
language processing. IEEE Transactions on Neural Networks and Learning Systems, 32(2):604–624, 2021.
[415] Usman Naseem, Imran Razzak, Shah Khalid Khan, and Mukesh Prasad. A comprehensive survey on word
representation models: From classical to state-of-the-art word representation language models. ACM Trans.
Asian Low-Resour. Lang. Inf. Process., 20(5), Jun 2021.
[416] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz
Kaiser, and Illia Polosukhin. Attention is all you need. In Proceedings of the 31st International Conference
on Neural Information Processing Systems, NIPS’17, 2017.
[417] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirec-
tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American
Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long
and Short Papers). Association for Computational Linguistics, June 2019.
[418] Justin Sybrandt and Ilya Safro. CBAG: Conditional biomedical abstract generation. Plos one,
16(7):e0253905, 2021.
[419] Luciano Floridi and Massimo Chiriatti. GPT-3: Its nature, scope, limits, and consequences. Minds and
Machines, 30:1–14, 2020.
[420] Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. Language (technology) is power: A
critical survey of “bias” in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computa-
tional Linguistics. Association for Computational Linguistics, July 2020.
[421] Justin Sybrandt, Ilya Tyagin, Michael Shtutman, and Ilya Safro. Agatha: Automatic graph mining and
transformer based hypothesis generation approach. In Proceedings of the 29th ACM International Conference
on Information & Knowledge Management, pages 2757–2764, 2020.
[422] Payal Dhar. The carbon impact of artificial intelligence. Nature Machine Intelligence, 2(8):423–425, 2020.
[423] Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark. Mathematical foundations for a compositional
distributional model of meaning. arXiv preprint arXiv:1003.4394, 2010.
[424] Joachim Lambek. The mathematics of sentence structure. Journal of Symbolic Logic, 33(4):627–628, 1968.
[425] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations
in vector space. In ICLR: Proceeding of the International Conference on Learning Representations Workshop,
2013.
[426] Bob Coecke and Eric Oliver Paquette. Categories for the practising physicist. In New structures for physics,
pages 173–286. Springer, 2010.
[427] Bob Coecke, Giovanni de Felice, Konstantinos Meichanetzidis, and Alexis Toumi. Foundations for near-term
quantum natural language processing. arXiv preprint arXiv:2012.03755, 2020.
[428] Samson Abramsky and Bob Coecke. Categorical quantum mechanics. Handbook of quantum logic and
quantum structures, 2:261–325, 2009.
[429] Bob Coecke and Ross Duncan. Interacting quantum observables: categorical algebra and diagrammatics.
New Journal of Physics, 13(4):043016, Apr 2011.
[430] Giovanni de Felice, Alexis Toumi, and Bob Coecke. DisCoPy: Monoidal categories in Python. Electronic
Proceedings in Theoretical Computer Science, 333:183–197, Feb 2021.
[431] R. Lorenz, A. Pearson, K. Meichanetzidis, D. Kartsaklis, and B. Coecke. QNLP in practice: Running
compositional models of meaning on a quantum computer. arXiv preprint arXiv:2102.12846, 2021.
[432] Konstantinos Meichanetzidis, Alexis Toumi, Giovanni de Felice, and Bob Coecke. Grammar-aware question-
answering on quantum computers. arXiv preprint arXiv:2012.03756, 2020.
[433] Johannes Bausch. Recurrent quantum neural networks. In H. Larochelle, M. Ranzato, R. Hadsell, M. F.
Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 1368–1379.
Curran Associates, Inc., 2020.
[434] Samuel Yen-Chi Chen, Shinjae Yoo, and Yao-Lung L Fang. Quantum long short-term memory. arXiv
preprint arXiv:2009.01783, 2020.
[435] Yuto Takaki, Kosuke Mitarai, Makoto Negoro, Keisuke Fujii, and Masahiro Kitagawa. Learning temporal
data with a variational quantum recurrent neural network. Physical Review A, 103(5), May 2021.
[436] Mohiuddin Ahmed, Abdun Naser Mahmood, and Jiankun Hu. A survey of network anomaly detection
techniques. Journal of Network and Computer Applications, 60:19–31, 2016.
[437] Jarrod West and Maumita Bhattacharya. Intelligent financial fraud detection: A comprehensive review.
Computers & Security, 57:47–66, 2016.
[438] Guansong Pang, Chunhua Shen, Longbing Cao, and Anton Van Den Hengel. Deep learning for anomaly
detection: A review. ACM Comput. Surv., 54(2), March 2021.
[439] Daniel Herr, Benjamin Obert, and Matthias Rosenkranz. Anomaly detection with variational quantum
generative adversarial networks. Quantum Science and Technology, 6(4):045004, Jul 2021.
[440] T. Schlegl, P. Seeböck, S. M. Waldstein, U. Schmidt-Erfurth, and G. Langs. Unsupervised anomaly detec-
tion with generative adversarial networks to guide marker discovery. In Information Processing in Medical
Imaging. Springer, 2017.
[441] Ming-Chao Guo, Hai-Ling Liu, Yong-Mei Li, Wen-Min Li, Su-Juan Qin, Qiao-Yan Wen, and Fei Gao.
Quantum algorithms for anomaly detection using amplitude estimation. arXiv preprint arXiv:2109.13820,
2021.
[442] Dongkuan Xu and Yingjie Tian. A comprehensive survey of clustering algorithms. Annals of Data Science,
2(2):165–193, 2015.
[443] Shihao Gu, Bryan Kelly, and Dacheng Xiu. Empirical asset pricing via machine learning. The Review of
Financial Studies, 33(5):2223–2273, 2020.
[444] L. Chen, M. Pelger, and J. Zhu. Deep learning in asset pricing. SSRN, 2020.
[445] Stefan Nagel. Machine Learning in Asset Pricing. Princeton University Press, 2021.
[446] T. Sakuma. Application of deep quantum neural networks to finance. arXiv preprint arXiv:2011.07319,
2020.
[447] Steven A Cuccaro, Thomas G Draper, Samuel A Kutin, and David Petrie Moulton. A new quantum
ripple-carry addition circuit. arXiv preprint quant-ph/0410184, 2004.
[448] Davide Venturelli and Alexei Kondratyev. Reverse quantum annealing approach to portfolio optimization
problems. Quantum Machine Intelligence, 1(1-2):17–30, Apr 2019.
[449] Tomas Boothby, Andrew D King, and Aidan Roy. Fast clique minor generation in Chimera qubit connec-
tivity graphs. Quantum Information Processing, 15(1):495–508, 2016.
[450] Romina Yalovetzky, Pierre Minssen, Dylan Herman, and Marco Pistoia. NISQ-HHL: Portfolio optimization
for near-term quantum hardware. arXiv preprint arXiv:2110.15958, 2021.
[451] Stephane Beauregard. Circuit for Shor’s algorithm using 2n+3 qubits. Quantum Info. Comput., 3(2):175–185, March 2003.
[452] Mikko Möttönen, Juha J. Vartiainen, Ville Bergholm, and Martti M. Salomaa. Transformation of quantum
states using uniformly controlled rotations. Quantum Info. Comput., 5(6):467–473, Sep 2005.
[453] Yonghae Lee, Jaewoo Joo, and Soojoon Lee. Hybrid quantum linear equation algorithm and its experimental
test on IBM quantum experience. Scientific Reports, 9, March 2019.
[454] Gadi Aleksandrowicz, Thomas Alexander, Panagiotis Barkoutsos, Luciano Bello, Yael Ben-Haim, David
Bucher, Francisco Jose Cabrera-Hernández, Jorge Carballo-Franquis, Adrian Chen, Chun-Fu Chen, et al.
Qiskit: An open-source framework for quantum computing, 2019.
[455] R. G. Beausoleil, W. J. Munro, T. P. Spiller, and W. K. van Dam. Tests of quantum information. US
Patent 7,559,101 B2, 2008.