An Optimizing Method for Performance and Resource
In recent years, the phenomenon of quantum computing has received global attention1. Quantum computational
theory goes back to the works by Feynman and Deutsch in the 1980s2 and after that many new quantum comput-
ing algorithms have been proposed. Machine learning is the science and art of building computers that learn from data how to solve problems instead of being explicitly programmed. Machine learning and quantum computing are therefore two very important research areas, and by combining them, new solutions to today's challenges are proposed3. There are some challenges in implementing quantum machine learning algorithms due to the processing of large datasets. One usual way to meet these challenges is to deploy these algorithms in a cloud system. With the help of the computing power of the cloud system, the problems are partially solved. However,
data storage and management in heterogeneous distributed networks introduces a number of other problems. Physicists take a different approach to computation by exploiting the "Superposition" and "Entanglement" properties9. This approach has markedly increased the speed of solving certain problems compared to classical algorithms4,5. Problem-solving is very different in quantum and classical systems. In fact, some problems that would take a classical system several years to solve are known to be solvable on a quantum system in a few hours6. On the other
1Department of Computer Engineering, Ferdowsi University of Mashhad, Mashhad, Iran. 2Department of Computer Science, Faculty of Computer Science and Telecommunications, Cracow University of Technology, Krakow, Poland. 3Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, Gliwice, Poland. 4Department of Physics, Isfahan University of Technology, Isfahan 84156-83111, Iran. 5Institute for Quantum Science and Technology, and Department of Physics and Astronomy, University of Calgary, Calgary, T2N 1N4, Alberta, Canada. 6Basque Center for Applied Mathematics (BCAM), Bilbao, Spain. *email: zomorodi@pk.edu.pl
Figure 1. (a) An example of a quantum circuit: each line represents a qubit, and operations (gates) are applied to them. (b) Representation of unitary two-qubit gates in the quantum circuit with matrix formalism17.
hand, in recent years many types of research have been carried out on the subject of big data. The challenge is the inefficiency of the computations of classical machine learning algorithms and metaheuristics for processing such a large volume of data7–9. The unit of quantum processing is the "quantum bit" or "qubit". One of the capabilities of a quantum computer is that by increasing its number of qubits, the processing power improves exponentially10. Quantum algorithms usually express computations in terms of primitive quantum gates, and there are different approaches to implementing these algorithms. Therefore, it is useful to find an implementation that uses the fewest resources, especially for large-scale quantum circuits with complex designs. To this end, we apply optimization, which is a fundamental task in almost all areas of quantum computing, including monolithic and distributed quantum circuits10–13. This work has developed and implemented a framework for
quantum circuit optimization algorithms to optimize the desired circuits which are designed particularly for
machine learning tasks. We also show how to optimize the repetition of quantum circuits and reduce the required
resources for large-scale quantum circuits. While the original functionality of the algorithm is preserved, the final
quantum circuit has fewer time steps, a shorter execution time, and a lower quantum cost compared to the original circuit. As input, we assume that the quantum circuit (QC) consists of a set of quantum gates, each acting on at most 2 qubits. The ultimate goal of optimizing the quantum circuit of a machine learning algorithm is to reduce the number of gates, time steps, and quantum cost. The quantum cost of a circuit is the number of 1×1 and 2×2 quantum gates in its design14. For this purpose, this paper proposes a method to optimize the quantum cost of machine learning algorithms. In principle, it can be said that the operations involved in quantum machine learning circuits can be numerous, so it is worth reducing them. Quantum circuits typically use single-qubit gates such as NOT, Hadamard, and rotation gates, together with two-qubit CNOT gates. If there are three-qubit gates such as Bridge and Swap, or other multi-qubit gates, we decompose them into single-qubit and two-qubit gates in a preprocessing step.
In “Quantum memory”, we briefly review the quantum register model used in this work. In “Related work”, we discuss prior work on quantum computing systems for machine learning algorithms, as well as optimization algorithms for quantum circuits. Then, in “Methods”, the proposed method is explained, and at the end our results and discussion are presented and we conclude the paper.
Quantum memory
The memory of a classical computer can easily be built by writing an arbitrary bit string to any position. Classical memory performance, however, is not optimal for processing big data. In order to address the capacity limits of ordinary and associative memory, quantum memory has been used successfully. In many applications of quantum computers, a quantum register is used instead of classical memory to simulate a physical system. This quantum memory consists of a tensor product of qubit states in a multidimensional Hilbert space that is first prepared in a simple state. For a quantum memory consisting of two qubits |q0⟩ and |q1⟩, its state |qR⟩ is given by Eq. (1), where the symbol ⊗ denotes the tensor product. For example, for a 4-qubit quantum register, its state is represented as Eq. (2), where the probability of measuring each of its basis states is given by the four terms illustrated in Eq. (3)18,19. Giovannetti et al.20 have demonstrated how classical data can be represented in the language of quantum mechanics through the study of quantum random access memory (QRAM).
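As an informal illustration of this register construction (Eqs. (1)–(3) are not reproduced here), the following sketch builds small register states with NumPy; the specific single-qubit amplitudes are hypothetical values chosen only for the example.

```python
import numpy as np

# Hypothetical single-qubit states |q0> = a|0> + b|1>, normalized.
q0 = np.array([1, 1]) / np.sqrt(2)            # |+> state
q1 = np.array([np.sqrt(0.8), np.sqrt(0.2)])   # example amplitudes

# Two-qubit register state |qR> = |q0> (x) |q1>  (tensor product, Eq. (1)-style).
qR = np.kron(q0, q1)

# For a 4-qubit register the same pattern is repeated, e.g. |+> (x) |+> (x) |+> (x) |+>.
plus = np.array([1, 1]) / np.sqrt(2)
reg4 = plus
for _ in range(3):
    reg4 = np.kron(reg4, plus)

# Measurement probabilities of the basis states are the squared amplitudes,
# and they sum to 1 (cf. Eq. (5)).
probs = np.abs(qR) ** 2
print(probs, probs.sum())
```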
where

$$\sum_{i=0}^{2^{n}-1} |c_i|^2 = 1 \qquad (5)$$
In the classical system, to evaluate Eq. (6) the operation must be repeated $2^n$ times. In the quantum system, however, all computational basis states of the variables can be examined simultaneously, assuming that the operator $U_f$ implements the function f(x)9:

$$U_f \, \frac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} |x\rangle \;=\; \frac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} U_f |x\rangle \;=\; \frac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} |f(x)\rangle \qquad (6)$$
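As a rough classical simulation of this idea (a sketch only, assuming for simplicity that f is a bijection on n-bit strings so that $U_f$ can be written as a permutation matrix), a single matrix–vector product applies $U_f$ to all $2^n$ basis states of the uniform superposition at once, whereas a classical evaluation of f would loop over all $2^n$ inputs:

```python
import numpy as np

n = 3
N = 2 ** n

# Hypothetical bijective function f(x); here a simple modular shift.
def f(x: int) -> int:
    return (x + 3) % N

# U_f as a permutation matrix: U_f |x> = |f(x)>.
Uf = np.zeros((N, N))
for x in range(N):
    Uf[f(x), x] = 1.0

# Uniform superposition (1/sqrt(2^n)) * sum_x |x>.
psi = np.full(N, 1.0 / np.sqrt(N))

# One application of U_f acts on all 2^n amplitudes simultaneously.
out = Uf @ psi
print(np.allclose(out, psi))  # True: the uniform state is permutation-invariant
```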
Related work
In this section, we first present some works done in implementing a quantum circuit for quantum machine
learning algorithms and then methods for quantum circuit optimization are presented.
Quantum circuits of machine learning algorithms. Recently, quantum machine learning has been considered a suitable solution for increasing the speed of execution of algorithms. This has led to the introduction of various quantum algorithms for machine learning that exploit quantum features. In this paper, we first examine the K nearest neighbor algorithm. In this regard, Lloyd et al.21 and Wiebe et al.22 use similar approaches, such as quantum amplitude estimation or the Grover algorithm23, to obtain the quantum state of the nearest neighbor algorithm. In the next method for implementing the nearest neighbor algorithm, Buhrman et al.24 use quantum parallelism and the test circuit to calculate the distance between two vectors and provide a quantum solution. The Euclidean distance can then be calculated as $\sqrt{2 - 2|\langle x|y\rangle|}$. The next method for K nearest neighbor, by Ruan et al.25, is used in document classification, image classification, and similar tasks, and is based on the Hamming distance. The feature vector is defined as a bit vector using a hash function and then converted to an equivalent quantum state; after that, the input vector bits are compared with the training vector. The number of differing properties is counted by the Kaye circuit25 and the distance between the two vectors is estimated. The next algorithm, the support vector machine (SVM), is a supervised algorithm developed in the quantum setting by Saeedi and Arodz26 and also by Rebentrost et al.27, which classifies vectors in a specific space based on training data. In comparison with the classical support vector machine for binary classification, they achieved a logarithmic speedup. These methods use the Grover algorithm and the adiabatic algorithm. The next algorithm is the neural network. Transfer learning is an interesting technique in neural networks in which a pre-trained model is reused as the starting model for a new task. One of the works on quantum neural networks in this field was developed by Acar et al.28 and uses the quantum transfer learning method. This is a hybrid machine learning method consisting of a classical feature-extraction network and a variational quantum classification circuit. There is other work by Zen et al.29 that applies transfer learning toward scalable quantum neural-network states. A protocol was proposed in47 for machine translation based on quantum long short-term memory for translating sentences from English to Persian. In another work, Mishra et al.30 used the design and operation of a classical neural network to design a quantum neural network capable of working on a 10-qubit system. By demonstrating the network's performance, they tried to use the basic principles of machine learning to manage data that can be used in cancer detection.
to search for the discrete space of a quantum circuit. These changes led to the improvement of various circuits.
Using this method, the median has been increased to 244.7% and 44.4% for the grid and complete graph models
of quantum computation. Median reduction in the number of two-qubit gates is 33.3% and 20.8%, respectively. In
another paper, Alam et al.34 proposed a method to accelerate the execution of the quantum approximate optimization algorithm (QAOA). First, a connection is made between the classical optimizer and the quantum computer, and then two parameters named δ and β, with initial values of zero, are inserted into the loop. The classical optimizer initially sets the randomly defined variables to some random values; if the values are not ideal, it establishes a connection to the quantum computer. This iteration increases the depth of the circuit, which is undesirable and should be reduced. For this reason, artificial intelligence techniques are used to determine the appropriate distance for the parameters and to accelerate the process. This method shows that the number of optimization iterations can be reduced by 44.9% on average over 264 graphs. Haner et al.15 optimized
the circuit using Hoare triples35. This method checks the correctness of the execution of specific programs. For each circuit level, pre-conditions and post-conditions are defined, and based on the conditions of the previous level, the operating conditions for the next level's operation can be decided. When using this Hoare-based optimization strategy, the circuit depth decreases for n ≥ 2 according to the relation (4(n − 2) + n)/n. In the next method, of Childs and Maslov16, the automated optimization of large circuits is accomplished using continuous parameters. This method also preserves the main structure of the algorithm and performs better optimizations than state-of-the-art approaches. In fact, it uses a set of heuristic rules that reduces the number of gates. This technique first represents the quantum circuit as a netlist and then preprocesses and simplifies the circuit. Then, it divides the circuit into sub-circuits and optimizes the sub-circuits according to rules 1–4. In36, Abdessaied
et al. used several algorithms to synthesize reversible functions into quantum circuits and to reduce the number of Hadamard gates. Reducing the Hadamard gates in turn reduces the number and depth of T gates, which improves the resulting circuits. By applying this method, the authors improved the T-gate count and depth by 88% compared with other optimization methods. Another approach to quantum circuit optimization is based on the ZX-calculus, a graphical language for expressing quantum computation49. This optimization approach uses the rules of the ZX-calculus to simplify ZX-diagrams50. The authors show that their simplification procedure works well when there are few non-Clifford gates in the original circuit. Using different quantum circuit optimization techniques, the aim of this paper is to improve the performance of quantum machine learning circuits and to reduce their cost. To this end, we optimize quantum machine learning circuits in terms of quantum gates and time steps.
Methods
Implementing machine learning algorithms with big data in quantum systems is a major challenge due to the
excessive increase in the number of gates, the depth of the circuit, and the execution time of the algorithm.
Optimizing quantum circuits is an effective way to overcome these problems. In this section, the details of
the optimization algorithm for quantum machine learning circuits are explained. This method is then used to optimize the quantum circuits of two machine learning algorithms: transfer learning and neural networks. Initially, in the preprocessing step, the quantum circuit is represented as a list of gates that are applied sequentially. The following transformation rules are then applied to optimize the quantum machine learning circuits.
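A minimal sketch of this preprocessing representation is shown below. The Gate record and helper names are our own illustrative choices, not the authors' implementation; a circuit is simply an ordered list of gates, each with a name, an optional rotation angle, and the qubits it acts on.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Gate:
    name: str                      # e.g. "H", "X", "CNOT", "RZ"
    qubits: List[int]              # acted-on qubit indices; control first for CNOT
    angle: Optional[float] = None  # rotation angle for parameterized gates

# A quantum circuit as an ordered gate list (applied left to right).
Circuit = List[Gate]

# Example: H on qubit 0, CNOT with control 0 and target 1, RZ(0.5) on qubit 1.
example: Circuit = [
    Gate("H", [0]),
    Gate("CNOT", [0, 1]),
    Gate("RZ", [1], angle=0.5),
]
```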
Rule 1: First, if there is a NOT gate in the circuit, the next gate is checked. In this case, there are three different possibilities for the next gates16 (a code sketch follows the list below):
• If the next gate is a TOFFOLI gate: in this case the control qubit of the TOFFOLI is reversed and the NOT
gate is removed.
• If the next gate is a NOT gate: in this case the two NOT gates are removed.
• If the next gate is a CNOT gate: in this case the control qubit is reversed and the NOT gate is removed.
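A sketch of Rule 1 in the gate-list form introduced above (illustrative code only, assuming the hypothetical Gate record defined earlier; just the NOT-before-NOT and NOT-before-CNOT cases are shown):

```python
def apply_rule1(circuit: Circuit) -> Circuit:
    """Propagate or cancel NOT (X) gates according to Rule 1 (sketch)."""
    out: Circuit = []
    i = 0
    while i < len(circuit):
        g = circuit[i]
        nxt = circuit[i + 1] if i + 1 < len(circuit) else None
        if g.name == "X" and nxt is not None and nxt.qubits and g.qubits[0] == nxt.qubits[0]:
            if nxt.name == "X":
                i += 2                      # X followed by X cancels: drop both gates
                continue
            if nxt.name == "CNOT":
                # X on the control line flips the control polarity; here we record
                # it as a negatively controlled CNOT and drop the X gate.
                out.append(Gate("CNOT_NEG_CTRL", nxt.qubits))
                i += 2
                continue
        out.append(g)
        i += 1
    return out
```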
Rule 2: Remove gates that are directly adjacent to their inverse. For a two-qubit gate, it is often possible to simplify or eliminate the gate by moving it between the other gates of the circuit. In fact, for each gate U in the circuit, the optimizer searches for an instance of U†; if one is present, U is cancelled with that instance of U†.
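A minimal sketch of the adjacent-inverse cancellation in Rule 2 (illustrative code under the same assumed Gate record; only self-inverse gates acting on the same qubits are handled here):

```python
# Gates that are their own inverse in this simplified model.
SELF_INVERSE = {"X", "H", "Z", "CNOT"}

def apply_rule2(circuit: Circuit) -> Circuit:
    """Cancel a gate with a directly adjacent inverse (Rule 2, sketch)."""
    out: Circuit = []
    for g in circuit:
        if out and out[-1].name == g.name and out[-1].qubits == g.qubits \
                and g.name in SELF_INVERSE:
            out.pop()          # U followed by U^dagger is the identity: remove both
        else:
            out.append(g)
    return out
```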
Rule 3: For two rotation gates RZ(θi) and RZ(θj) that have a shared control line, according to Eq. (7) we can merge the two rotations37. For example, in Fig. 2 the two rotation gates RZ(θ1) and RZ(θ4) can be combined16:

$$R_z(\theta_1)\cdot R_z(\theta_2) = R_z(\theta_1+\theta_2) \qquad (7)$$
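A sketch of the rotation-merging step of Eq. (7) in the same illustrative gate-list form (adjacent RZ gates on the same qubit are fused; rotations reduced to a zero angle could be dropped in a further pass):

```python
import math

def apply_rule3(circuit: Circuit) -> Circuit:
    """Merge adjacent RZ rotations on the same qubit: RZ(a) . RZ(b) = RZ(a + b)."""
    out: Circuit = []
    for g in circuit:
        if (g.name == "RZ" and out and out[-1].name == "RZ"
                and out[-1].qubits == g.qubits):
            merged = (out[-1].angle or 0.0) + (g.angle or 0.0)
            out[-1] = Gate("RZ", g.qubits, angle=merged % (2 * math.pi))
        else:
            out.append(g)
    return out
```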
Figure 3. (a) Equivalent circuits of Swap gates. (b) Equivalent circuits of Bridge gates38.
Figure 5. (a) Commuting of the rotation gates and the CNOT gate. (b) Commuting of two CNOT gates in two
different circumstances38.
Figure 6. The flowchart of the steps of the optimization approach using rules 1–4.
Rule 4: Gates in the circuit are moved around (commuted) and all locations where the gates can be placed are examined. Then, rules 1–4 are re-examined by the algorithm and the circuit is simplified if the conditions hold.
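Putting the rules together, the overall procedure described next (and in Algorithm 1 and Fig. 6) can be sketched as a fixed-point loop over the rule passes; this is again illustrative code built on the hypothetical helpers above, not the authors' implementation:

```python
def optimize(circuit: Circuit) -> Circuit:
    """Apply rules 1-3 (and, conceptually, the gate movement of Rule 4)
    repeatedly until no further improvement is obtained."""
    current = list(circuit)
    while True:
        before = len(current)
        for rule in (apply_rule1, apply_rule2, apply_rule3):
            current = rule(current)
        if len(current) >= before:   # no gate was removed or merged in this pass
            return current
```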
These defined rules are applied in a loop until no further improvement is obtained. Algorithm 1 and Fig. 6
present the steps of our optimization approach using the above rules. Using this framework, we optimized the
quantum machine learning circuit of a classification task for medical diagnosis using quantum transfer learning28.
This circuit has been tested on several real quantum processors as well as on various simulators. The quantum circuit aims at distinguishing a sick person from a healthy person based on computed tomography images. The circuit consists of four steps: the Hadamard gate is first applied to all qubits; then, with the help of the U operator defined in28, the classical data is encoded and entanglement is created; the dotted box in Fig. 7 shows one application of this operator; finally, the qubits are measured. Figure 7 demonstrates the quantum circuit of this quantum machine learning algorithm with only one repetition of the sub-circuit.
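As a rough illustration of these four steps in the gate-list form used above (the actual encoding operator U is defined in28; here a layer of RY rotations is used purely as a hypothetical stand-in for it), one repetition of the sub-circuit might be assembled as follows:

```python
def transfer_learning_layer(n_qubits: int, angles: list) -> Circuit:
    """One repetition of the sub-circuit: Hadamards, a (stand-in) encoding
    layer, an entangling layer of CNOTs, then measurement."""
    circ: Circuit = []
    circ += [Gate("H", [q]) for q in range(n_qubits)]                     # step 1: Hadamards
    circ += [Gate("RY", [q], angle=angles[q]) for q in range(n_qubits)]   # step 2: stand-in for U
    circ += [Gate("CNOT", [q, q + 1]) for q in range(n_qubits - 1)]       # step 3: entanglement
    circ += [Gate("MEASURE", [q]) for q in range(n_qubits)]               # step 4: measurement
    return circ

layer = transfer_learning_layer(4, [0.1, 0.2, 0.3, 0.4])
```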
Table 1. Results of applying the proposed optimization approach to different quantum circuits (original vs. optimized execution time, time steps, and number of gates, with the corresponding improvements).

Circuit              | Ref. | Qubits | Exec. time (orig.) | Time steps (orig.) | Gates (orig.) | Exec. time (opt.) | Time steps (opt.) | Gates (opt.) | Time impr. (%) | Steps impr. (%) | Gates impr. (%) | Time reduction
Q transfer learning  | 28   | 4      | 6.8                | 13                 | 28            | 6.2               | 13                | 25           | 8.82           | –               | 10.71           | 0.6
Q neural network     | 30   | 10     | 6.9                | 17                 | 67            | 6.6               | 16                | 57           | 4.34           | 5.88            | 14.92           | 0.3
Grover 1             | 41   | 4      | 6.4                | 26                 | 41            | 6.2               | 16                | 21           | 3.12           | 38.46           | 48.78           | 0.2
Grover 2             | 41   | 4      | 6.7                | 21                 | 39            | 5.9               | 17                | 27           | 11.94          | 19.09           | 30.76           | 0.8
QAOA                 | 42   | 2      | 6.7                | 7                  | 11            | 6.2               | 5                 | 7            | 7.46           | 28.57           | 36.36           | 0.5
Test circuit K-Means | 43   | 3      | 6.5                | 14                 | 26            | 6.2               | 14                | 24           | 4.6            | 0               | 7.7             | 0.3
TNN                  | 44   | 4      | 6                  | 10                 | 27            | 5.9               | 8                 | 14           | 1.7            | 20              | 48.14           | 0.1
KNN                  | 45   | 4      | 7.8                | 20                 | 33            | 4.7               | 8                 | 13           | 39.74          | 60              | 60.60           | 3.1
its implementation cost improve. In this case, the proposed model will be a more efficient model. In order to
verify our approach, we first tested our approach on different general quantum circuits and the results are shown
in Table 1. In this table, each row corresponds to a quantum circuit and, for each circuit, we show the improvement caused by our optimization approach.
Also, Table 2 compares our proposed approach with other works in the literature: ZX-calculus50, AQCEL51, tket52, and Quilc53. It can be seen from this table that our approach performs better in terms of circuit depth than the other approaches, is better in terms of the number of 2-qubit gates for many circuits, and its execution time is better than that of all the other approaches.
In the proposed method, assuming that the number of time steps is N and the number of qubits is Q, the time complexity of the algorithm is O(NQ). As shown in Tables 1 and 2, applying our method to a variety of quantum circuits reduces the number of gates, time steps, and execution time of the quantum circuits significantly. In the second part of the experiments, our optimization approach was applied to the quantum machine learning circuits. One of these circuits uses the transfer learning method for a potential application in medical diagnosis. By applying the proposed method to the above quantum circuit, only the U sub-circuit of the circuit is improved, as shown in Fig. 8. In Fig. 8a it can be seen that the original circuit from28 has 28 quantum gates. Figure 8b shows the improved circuit diagram with a 10.7% reduction in the number of gates. This is the amount of quantum cost reduction for one repetition of the U sub-circuit in the main circuit. For cases where this sub-circuit is repeated many times in the main circuit, the rate of improvement increases. In this case, by applying the proposed method to circuits with big data, desirable results will be obtained. The results of the implementation of the proposed method on the quantum circuit of transfer learning are shown in Fig. 9 before and after optimization.
We verified the outputs in IBM Q and the results are demonstrated in Fig. 10. Figure 10a is the output of the
original circuit and Fig. 10b is the output after we applied our optimization algorithm. Since the output is the same in both cases, the transformation has been performed correctly.
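One simple way to check such functional equivalence numerically (an illustrative NumPy sketch, not the IBM Q verification used here, built on the hypothetical gate-list form above, ignoring measurements and global phase, and handling only single-qubit H and X gates) is to compare the unitaries of the original and optimized circuits:

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])

def gate_matrix(g: Gate, n_qubits: int) -> np.ndarray:
    """Expand a single-qubit H/X gate to the full n-qubit space (sketch only)."""
    mats = [H if (g.name == "H" and q == g.qubits[0])
            else X if (g.name == "X" and q == g.qubits[0])
            else I2
            for q in range(n_qubits)]
    full = mats[0]
    for m in mats[1:]:
        full = np.kron(full, m)
    return full

def circuit_unitary(circuit: Circuit, n_qubits: int) -> np.ndarray:
    U = np.eye(2 ** n_qubits)
    for g in circuit:
        U = gate_matrix(g, n_qubits) @ U
    return U

def equivalent(c1: Circuit, c2: Circuit, n_qubits: int) -> bool:
    return np.allclose(circuit_unitary(c1, n_qubits), circuit_unitary(c2, n_qubits))

# Example: X followed by X cancels to the identity, so both circuits agree.
print(equivalent([Gate("X", [0]), Gate("X", [0])], [], 1))  # True
```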
The next quantum machine learning circuit that we used in this work is the quantum circuit of the neural network for cancer detection30, which follows the design and operation of a classical neural network but is a quantum neural network capable of working on a 10-qubit system. By demonstrating the network's performance, the authors tried to use the basic principles of machine learning to manage data. The graphical representation of this circuit is shown in Fig. 11. Figure 11a shows the original circuit from30, which is implemented in 17 time steps with 67 quantum gates. Figure 11b shows the improved circuit which, in addition to a 14.9% reduction in the number of gates, has its time steps reduced to 16. The comparison result of applying the proposed method to
this circuit is shown in Fig. 12.
The output results of the circuits are shown in Fig. 13. Figure 13a is the output of the original circuit and
Fig. 13b is the output after we applied our optimization algorithm. Since the output is the same in both cases, it
shows that the transformation of the proposed optimization is correct.
The next quantum machine learning circuit that we used in this work is the quantum repeater circuit shown in Fig. 14, which is used as a test for the KNN algorithm in45. Figure 14a shows the original circuit, which is implemented in 20 time steps with 33 quantum gates. Figure 14b shows the improved circuit which, in addition to a 60.60% reduction in the number of gates, has its time steps reduced to 8. The comparison result of applying the proposed method to this circuit is shown in Fig. 15.
We verified the outputs in IBM Q and the results are demonstrated in Fig. 16. Figure 16a is the output of the original circuit and Fig. 16b is the output after we applied our optimization algorithm. Since the output is the same in both cases, the transformation has been performed correctly.
Table 2. Comparison results of the proposed method with state-of-the-art optimization methods on different
quantum circuits.
Conclusion
Realizing machine learning algorithms for big data in a quantum system is a real challenge, but one with the remarkable advantages of using quantum computers. In quantum circuits, as the number of gates increases, the number of time steps and the execution time also increase, which is why optimizing quantum circuits is an effective way to overcome these problems. In this study, a new general framework for quantum circuit optimization was presented and, in particular, quantum machine learning algorithms for big data were investigated in order to improve their quantum circuit model, which in turn leads to a reduction in the number of required quantum computation resources. In fact, by applying the proposed method, quantum circuits were implemented in less time than the original circuits, with the same functionality as the original design. In addition, applying this method also reduces the quantum cost. Several quantum circuits with different functionality and algorithms were used to evaluate the proposed method. The results of the improved circuits showed that the number of quantum gates, the time steps, and the execution time in the evaluated circuits were all reduced. In particular, the proposed method was investigated on the quantum circuits of transfer learning and the neural network. Our
Figure 8. Demonstration of the original and improved quantum transfer learning circuits. Diagram (a) shows the non-optimal circuit and diagram (b) shows the improved circuit, with the number of gates reduced by 10.7%.
Figure 9. Demonstration of optimized and non-optimized diagrams of quantum transfer learning circuits.
Figure 10. The simulation result of the output of the quantum transfer learning circuit before and after
optimization in (a) and (b), respectively. The results are identical.
Figure 11. Demonstration of (a) the original30 and (b) the improved quantum neural network circuits used for cancer diagnosis. The non-optimal circuit (a) is executed in 17 time steps, while the improved circuit, which has a 14.9% reduction in the number of gates, has its time steps reduced to 16.
Figure 12. Comparison between optimized and non-optimized quantum neural networks circuits.
approach reduced the number of gates by 10.71% in the transfer learning circuit, and reduced the number of time steps and gates by 27.2% and 14.9%, respectively, in the neural network circuit. More importantly, this was the amount of reduction for one iteration of the U sub-circuit in the main circuit of the transfer learning algorithm. So, for cases where this sub-circuit is repeated more often in the main circuit, the optimization gain is even greater. Hence, by applying the proposed method to circuits with big data, better results would be obtained.
Figure 13. The simulation result of the output of the quantum neural network circuit before and after
optimization in (a) and (b) respectively. The results are identical.
Figure 14. Demonstration of (a) the original30 and (b) the improved quantum circuits used as a test for the KNN algorithm. The non-optimal circuit (a) is executed in 20 time steps, while the improved circuit, which has a 60.60% reduction in the number of gates, has its time steps reduced to 8.
Figure 15. Comparison between optimized and non-optimized quantum circuits test for the KNN algorithm.
Figure 16. The simulation result of the output of the quantum circuit test for the KNN algorithm before and after optimization in (a) and (b), respectively. The results are identical.
Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
References
1. Humble, T. S., Thapliyal, H., Munoz-Coreas, E., Mohiyaddin, F. A. & Bennink, R. S. Quantum computing circuits and devices.
IEEE Des. Test. 36(3), 69–94 (2019).
2. Benenti, G., Casati, G., & Strini, G. Principles of Quantum Computation and Information-Volume II: Basic Tools and Special
Topics. World Scientific Publishing Company (2007).
3. Schuld, M. & Petruccione, F. Supervised learning with quantum computers (Springer, Berlin, 2018).
4. Grover, L.K. A fast quantum mechanical algorithm for database search. In Proceedings of the twenty-eighth annual ACM symposium on Theory of computing pp. 212–219 (1996).
5. Humble, T. S., Thapliyal, H., Munoz-Coreas, E., Mohiyaddin, F. A. & Bennink, R. S. Quantum computing circuits and devices.
IEEE Des. Test. 36(3), 69–94 (2019).
6. Gyongyosi, L. & Imre, S. A survey on quantum computing technology. Comput. Sci. Rev. 1(31), 51–71 (2019).
7. Dang, Y., Jiang, N., Hu, H., Ji, Z. & Zhang, W. Image classification based on quantum K-Nearest-Neighbor algorithm. Quantum
Inf. Process. 17(9), 1–8 (2018).
8. Beheshti Roui, M., Zomorodi, M., Sarvelayati, M., Abdar, M., Noori, H., Pławiak, P., Tadeusiewicz, R., Zhou, X., Khosravi, A.,
Nahavandi, S. & Acharya, U. R. A novel approach based on genetic algorithm to speed up the discovery of classification rules on
GPUs. Knowl Based Syst. 231, 107419 https://doi.org/10.1016/j.knosys.2021.107419 (2021).
9. Ruan, Y., Xue, X., Liu, H., Tan, J. & Li, X. Quantum algorithm for k-nearest neighbors classification based on the metric of ham-
ming distance. Int. J. Theor. Phys. 56(11), 3496–507 (2017).
10. Savchuk, M. M. & Fesenko, A. V. Quantum Computing: Survey and Analysis. Cybern. Syst. Anal. 55(1), 10–21 (2019).
11. Ghodsollahee, I., Davarzani, Z., Zomorodi, M., Pławiak, P., Houshmand, M. & Houshmand, M. Connectivity matrix model of quantum circuits and its application to distributed quantum circuit optimization. Quantum Inf. Process. 20(7), 235. https://doi.org/10.1007/s11128-021-03170-5 (2021).
12. Gilyén, A., Arunachalam, S., & Wiebe, N. Optimizing quantum optimization algorithms via faster quantum gradient computa-
tion. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms 2019 (pp. 1425–1444). Society for
Industrial and Applied Mathematics.
13. Daei, O., Navi, K. & Zomorodi-Moghadam, M. Optimized quantum circuit partitioning. Int. J. Theor. Phys. 59(12), 3804–3820. https://doi.org/10.1007/s10773-020-04633-8 (2020).
14. Thapliyal, H. & Ranganathan, N. Design of reversible sequential circuits optimizing quantum cost, delay, and garbage outputs.
ACM J. Emerg. Technol. Comput. Syst. (JETC). 6(4), 1–31 (2010).
15. Häner, T., Hoefler, T., & Troyer, M. Using Hoare logic for quantum circuit optimization. ArXiv e-prints. (2018).
16. Nam, Y., Ross, N.J., Su, Y., Childs, A.M., & Maslov, D. Automated optimization of large quantum circuits with continuous param-
eters. NPJ Quant. Inf. 4(1), 1–2 (2018).
17. Schuld, M., Sinayskiy, I. & Petruccione, F. An introduction to quantum machine learning. Contemp. Phys. 56(2), 172–85 (2015).
18. Hagouel, P.I., & Karafyllidis, I.G. Quantum computers: Registers, gates and algorithms. In 2012 28th International Conference on
Microelectronics Proceedings pp. 15-21. IEEE (2012).
19. Soklakov, A. N. & Schack, R. Efficient state preparation for a register of quantum bits. Phys. Rev. A 73(1), 012307 (2006).
20. Giovannetti, V., Lloyd, S. & Maccone, L. Quantum random access memory. Phys. Rev. Lett. 100(16), 160501 (2008).
21. Lloyd, S., Garnerone, S. & Zanardi, P. Quantum algorithms for topological and geometric analysis of data. Nat. Commun. 7(1),
1–7 (2016).
22. Wiebe, N., Granade, C., Ferrie, C. & Cory, D. Quantum Hamiltonian learning using imperfect quantum resources. Phys. Rev. A
89(4), 042314 (2014).
23. Grover, L.K. A fast quantum mechanical algorithm for database search. In Proceedings of the twenty-eighth annual ACM symposium
on Theory of computing pp. 212–219 (1996).
24. Buhrman, H., Cleve, R., Watrous, J. & De Wolf, R. Quantum fingerprinting. Phys. Rev. Lett. 87(16), 167902 (2001).
25. Kaye, P. Reversible addition circuit using one ancillary bit with application to quantum computing. arXiv preprint arXiv:quant-ph/0408173 (2004).
26. Saeedi, S., & Arodz, T. Quantum sparse support vector machines. arXiv preprint arXiv:1902.01879 (2019).
27. Rebentrost, P., Mohseni, M. & Lloyd, S. Quantum support vector machine for big data classification. Phys. Rev. Lett. 113(13),
130503 (2014).
28. Acar, E. & Yilmaz, I. COVID-19 detection on IBM quantum computer with classical-quantum transfer learning. Turk. J. Electr.
Eng. Comput. Sci. 29(1), 46–61 (2021).
29. Zen, R. et al. Transfer learning for scalability of neural-network quantum states. Phys. Rev. E 101(5), 053301 (2020).
30. Mishra, N., Bisarya, A., Kumar, S., Behera, B.K., Mukhopadhyay, S., & Panigrahi PK. Cancer Detection Using Quantum Neural
Networks: A Demonstration on a Quantum Computer. arXiv preprint arXiv:1911.00504 (2019).
31. Bae, J. H., Alsing, P. M., Ahn, D. & Miller, W. A. Quantum circuit optimization using quantum Karnaugh map. Sci. Rep. 10(1), 1–8
(2020).
32. Basak, A., Sadhu, A., Das, K. & Sharma, K. K. Cost Optimization Technique for Quantum Circuits. Int. J. Theor. Phys. 58(9),
3158–79 (2019).
33. Li, L., Fan, M., Coram, M., Riley, P. & Leichenauer, S. Quantum optimization with a novel gibbs objective function and ansatz
architecture search. Phys. Rev. Res. 2(2), 023074 (2020).
34. Alam, M., Ash-Saki, A., & Ghosh, S. Accelerating quantum approximate optimization algorithm using machine learning. In 2020
Design, Automation & Test in Europe Conference & Exhibition (DATE) (pp. 686–689, 2020). IEEE.
35. Hoare, C. A. An axiomatic basis for computer programming. Commun. ACM 12(10), 576–80 (1969).
36. Abdessaied, N., Soeken, M., & Drechsler, R. Quantum circuit optimization by Hadamard gate reduction. In International Confer-
ence on Reversible Computation pp. 149–162. Springer, Cham (2014).
37. Zomorodi-Moghadam, M. & Navi, K. Rotation-based design and synthesis of quantum circuits. J. Circ. Syst. Comput. 25(12),
1650152 (2016).
38. Itoko, T., Raymond, R., Imamichi, T. & Matsuo, A. Optimization of quantum circuit mapping using gate transformation and com-
mutation. Integration. 1(70), 43–50 (2020).
39. Curry, M. Symbolic quantum circuit simplification in SymPy.
40. Variational Quantum Classifier - Syed Farhan (born-2learn.github.io)
41. Mandviwalla, A., Ohshiro, K. & Ji, B. Implementing Grover's algorithm on the IBM quantum computers. In 2018 IEEE International Conference on Big Data (Big Data) pp. 2531–2537 (2018). IEEE.
42. Karalekas, P. J. et al. A quantum-classical cloud platform optimized for variational hybrid algorithms. Quant. Sci. Technol. 5(2),
024003 (2020).
43. Larose, R. Overview and comparison of gate level quantum software platforms. Quantum 25(3), 130 (2019).
44. https://cds.cern.ch/record/2716204/plots.
45. LaBorde, M. L., Rogers, A. C. & Dowling, J. P. Finding broken gates in quantum circuits: Exploiting hybrid machine learning.
Quant. Inf. Process. 19(8), 1–8 (2020).
46. McKay, D.C., et al. Qiskit backend specifications for openqasm and openpulse experiments. arXiv preprint arXiv:1809.03452
(2018).
47. Abbaszade, M., Salari, V., Mousavi, S. S., Zomorodi, M. & Zhou, X. Application of quantum natural language processing for lan-
guage translation. IEEE Access. 30(9), 130434–48 (2021).
48. Salari, V. et al. Quantum face recognition protocol with ghost imaging. Preprint at arXiv:2110.10088 [quant-ph].
49. Coecke, B. & Duncan, R. Interacting quantum observables: Categorical algebra and diagrammatics. New J. Phys. 13(4), 043016
(2011).
50. Duncan, R., Kissinger, A., Perdrix, S. & Van De Wetering, J. Graph-theoretic Simplification of Quantum Circuits with the ZX-
calculus. Quantum 4(4), 279 (2020).
51. Jang, W., Terashi, K., Saito, M., Bauer, C.W., Nachman, B., Iiyama, Y., Kishimoto, T., Okubo, R., Sawada, R., & Tanaka, J. Quantum
gate pattern recognition and circuit optimization for scientific applications. In EPJ Web of Conferences 2021 (Vol. 251, p. 03023).
EDP Sciences.
52. Sivarajah, S. et al. t|ket⟩: A retargetable compiler for NISQ devices. Quant. Sci. Technol. 6(1), 014003 (2020).
53. Smith, R. S., Peterson, E. C., Skilbeck, M. G. & Davis, E. J. An open-source, industrial-strength optimizing compiler for quantum
programs. Quant. Sci. Technol. 5(4), 044001 (2020).
Acknowledgments
VS thanks the financial support from Natural Sciences and Engineering Research Council (NSERC) and the
National Research Council (NRC) of Canada.
Author contributions
T.S. contributed to implementation of the research and the main conceptual ideas, and performing the experi-
ments. M.Z. contributed to the original idea, took the lead in writing the manuscript with input from all authors,
and was in charge of overall direction. P.P. supervised and commented on the manuscript and contributed to
the final version. M.A. contributed to the final version of the manuscript and commented on the manuscript.
V.S. contributed to the analysis of the results and also contributed to the writing of the manuscript.
Competing interests
The authors declare no competing interests.
Additional information
Correspondence and requests for materials should be addressed to M.Z.
Reprints and permissions information is available at www.nature.com/reprints.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International
License, which permits use, sharing, adaptation, distribution and reproduction in any medium or
format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the
Creative Commons licence, and indicate if changes were made. The images or other third party material in this
article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons licence and your intended use is not
permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.