Re-imagining Drug Discovery with Quantum Computing: A Framework and Critical Benchmark Analysis for Achieving Quantum Economic Advantage

By
Johannes Galatsanos-Dueck

Submitted to the MIT Sloan School of Management in partial fulfillment of the requirements for the degree of
MASTER OF SCIENCE IN MANAGEMENT OF TECHNOLOGY
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
May 2024

©2024 Johannes Galatsanos. All rights reserved.

The author hereby grants to MIT a nonexclusive, worldwide, irrevocable, royalty-free license to exercise any and all rights under copyright, including to reproduce, preserve, distribute and publicly display copies of the thesis, or release the thesis under an open-access license.

Authored by: Johannes Galatsanos-Dueck
MIT Sloan
May 10, 2024

Certified by: Michael A. Cusumano
MIT Sloan
Sloan Management Review Distinguished Professor of Management

Accepted by: Johanna Hising DiFabio, Director
MIT Sloan Fellows and EMBA Programs

Re-imagining Drug Discovery with Quantum Computing: A Framework and Critical Benchmark Analysis for Achieving Quantum Economic Advantage

By
Johannes Galatsanos-Dueck
Submitted to the MIT Sloan School of Management at the Massachusetts Institute of Technology on May 10, 2024 in Partial Fulfillment of the Requirements for the Degree of Master of Science in Management of Technology
**Abstract** :
Quantum computing's (QC) promise of solving computationally hard problems has captured
public attention and imagination, leading to significant private and public capital investments in
recent years. At the same time, we are at the cusp of a biomedical revolution powered by
computer-aided drug discovery (CADD). Drug discovery companies are rapidly transitioning to
the use of artificial intelligence to expedite and enhance research and development. However,
many of the classical AI use cases scale exponentially fast and face computational power ceilings.
QC can potentially accelerate these processes by several orders of magnitude in the future. As
such, an open question for drug discovery companies is when and how to adopt QC.
This thesis summarizes quantum CADD methods and useful applications in drug discovery. The
current state and trajectory of quantum computing are critically analyzed based on multiple
benchmarks and manufacturer roadmaps. Furthermore, 11 industry decision-makers were
interviewed to identify the current behaviors of end customers in investing in QC. To answer the
question of correct timing and sizing of investments for a drug discovery company, the concept of
_net quantum economic advantage_ is introduced, considering all direct and indirect costs and
benefits. A framework for drug discovery companies to monitor and invest in QC to reach a net
quantum economic advantage is provided.
The most useful QC algorithms for CADD, Quantum Phase Estimation and Quantum Machine Learning, will provide practical value only beyond >2,000 logical qubits and circuit sizes of >10^11 gates, a far cry from today's performance of single-digit logical qubits. Based on manufacturer timelines,
these benchmarks may be achieved in the mid-2030s. However, other use cases might become
interesting in the coming years, and preparing a company to take advantage of QC has a long lead
time. As such, drug discovery companies should move to an _active quantum monitoring phase_
soon.
**Thesis Supervisor** : Prof. Michael A. Cusumano
**Title** : Sloan Management Review Distinguished Professor of Management
**Acknowledgements**
To my parents, Harda and Tony, who always encouraged me in my endeavors and supported me
unconditionally.
To my nephews, Aurel, Aenias, and Aris, who inspire me with their curiosity. I am looking
forward to seeing them grow in their pursuits and interests.
I would like to thank Prof. Michael Cusumano for his tireless support as my supervisor and for
encouraging me to explore all dimensions of the thesis' central questions.
Thanks also to all the industry advisors who gave me their expert insights and made this thesis
possible.
I would also like to thank Profs. William Oliver and Jonathan Ruane for their content support,
and the Sloan Fellows '24 cohort for their encouragement.
Lastly, I would like to acknowledge Profs. Peter Shor, Aram Harrow, Paola Cappellaro, Isaac
Chuang, and Jake Cohen, as well as Asif Sinay. Their lectures at MIT and the action lab project with
QEDMA ultimately inspired me to write this thesis.
## Table of Contents
- 1. Introduction and Methodology
- 2. Computer-Aided Drug Discovery
- 2.1. Classical Computer-Aided Drug Discovery
- 2.2. Quantum Computer-Aided Drug Discovery
- 3. Quantum Algorithms, Software and Hardware
- 3.1. Advantages and Limitations of Quantum Algorithms
- 3.2. Software and Integration
- 3.3. Hardware Modalities
- 4. Quantum Computing Benchmarks
- 4.1. Physical-Level Benchmarks
- 4.2. Aggregated Benchmarks
- 4.3. Application-Level Benchmarks
- 4.4. Summary of Benchmarks and Industry Perspective
- 5. Quantum Advantage
- 5.1. Quantum Advantage for Algorithms and Hardware
- 5.2. Application-Specific Quantum Advantage
- 5.3. Quantum Economic Advantage
- 6. Trajectory and Roadmaps of Quantum Computing
- 7. Framework for Investing in Quantum Computational Drug Discovery
- 7.1. Timing and Size of Investment
- 7.2. Moving to Active Quantum Monitoring
- 8. Conclusion and Outlook
- Appendix
- A – List of Interviewees
- B – Qualitative Interview Themes
- Table of Figures
- References
## 1. Introduction and Methodology
The quantum computing industry has taken upon its shoulders the tremendous promise of tackling some of the world's most challenging computational problems. With the unique combination of
benefits of superposition, entanglement, observation collapse, and tunneling, QC is the most
promising and advanced paradigm of computing to tackle computationally hard problems, but
also to simulate quantum mechanics. Some of these problems have very real applications in the
industry, for example, in optimization, cryptography, and machine learning. A particularly
interesting application is quantum mechanical simulation to accelerate chemical and biological
research, and the drug discovery process.
In 2023, private investments into quantum ventures reached $1.7B, and total public funding reached $42B. The field has left the pure research space, entered the commercial applications field, and may grow to $30-70B by 2035 (McKinsey, 2024). Quantum hardware, software and services
companies and startups are trying to push the boundaries of computing, while simultaneously
keeping public and private investors interested and attracted. Various claims of benchmark
achievements and _quantum advantage_ are made at frequent intervals, confusing potential end
customers. Even for more technical audiences, a plethora of benchmarks like quantum volume,
algorithmic qubits, and logical qubits are introduced, revised, and replaced by newer ones, adding
to the confusion across all layers of potential end customers.
This thesis attempts to fill a gap in current research, namely the perspective of end customers,
specifically drug discovery companies, that are considering investing in quantum computing. These
companies need to determine the appropriate timing and sizing of quantum computing
investments, but also the right focus areas and the right expectations on timelines for a return on
their investments (ROI). To define a strong quantum computing strategy, a combination of
insights is required, from evaluating the current state of quantum computing based on useful
benchmarks, to identifying realistic drug discovery applications, to generating expected
trajectories and timelines for useful applications. Furthermore, a more transparent framework to
understand when quantum investments create an economic advantage is required, considering all
direct and indirect costs and benefits to such investments in drug discovery. Understanding and
projecting the benefits better will provide more consistent support for such projects from senior
management and other stakeholders inside the pharmaceutical companies, ultimately driving
faster adoption of quantum computing into the drug discovery process.
**Structure**
This thesis has the following structure:
Chapter 2 briefly reviews current methods and challenges in classical computer-aided drug discovery and the promise of quantum algorithms in this area.
Chapter 3 gives an overview of the advantages and limitations of quantum algorithms, software,
and hardware modalities.
Chapter 4 is the core analysis of this thesis, exploring the state of current quantum computers across a
plethora of physical, aggregated, and application-level benchmarks, as well as comparison
benchmarks with classical computing.
Chapter 5 critically examines the term quantum advantage from an algorithmic and hardware
perspective, as well as what it would mean to have a _net quantum economic advantage_ from the
perspective of a drug discovery company.
Chapter 6 analyses the current trajectory of quantum computing and the different roadmaps given
by manufacturers and researchers to achieve useful QC drug discovery.
Chapter 7 concludes with a framework for decision-makers in drug discovery companies for the
right time and size to invest in quantum computing.
The Appendix summarizes the industry perspectives of the qualitative interviews I have
conducted.
A basic understanding of quantum information theory and the drug discovery process is assumed; this thesis purposefully does not go deep into chemistry, quantum mechanics, or mathematics, given the limited time and space for this work and because these topics are covered in numerous textbooks, such as the famous _Mike & Ike_ (Nielsen & Chuang, 2010).
**Literature Review**
To address the key questions of this thesis, a literature review was conducted across key papers
in the areas of different QC modalities, history of QC, quantum complexity theory, quantum
advantage, quantum economic advantage, QC trajectory, QC for chemistry and drug
development, and quantum benchmarks. In addition to academic literature, benchmarks and
roadmaps for each key hardware manufacturer were investigated.
**Qualitative Interviews and Analysis**
To capture the industry perspective, 11 leaders (Appendix A) were interviewed. They represent
companies active in pharmaceutical and chemical research, as well as hardware suppliers and
consulting firms. They were selected as a cross-sample of large pharmaceutical companies and are responsible for their respective company's quantum investments and strategy; their seniority ranges from Senior Manager to C-level decision makers. I recruited them via LinkedIn or
personal contacts based on these criteria. In total, there were eight senior-level executives at pharmaceutical companies, one at a telecommunications company, one from a consulting firm, and one from a hardware provider.
A semi-structured qualitative interview approach with probing questions and thematic analysis
was chosen. The key themes from the thematic analysis are summarized in Appendix B and
highlighted in the appropriate sections throughout the document. The interviews were conducted
online and in person and averaged 45 minutes, ranging from 30 to 90 minutes. Interview recordings or
notes were transcribed after each interview, and a thematic analysis was conducted to address the
overall questions. The purpose of the interviews was to understand key decision makers' approaches and challenges in the pharmaceutical and chemical industry regarding internal investments
for quantum computing. In particular, the following questions were asked:
- _Do you think there is any practical application for QC in the NISQ era (next 3-5 years)_
_in your company?_
- _How, and to what size have you built your teams for QC?_
- _Which are your key motivations for investing/not investing in QC currently?_
- _Who initiated, supported, and is driving the QC initiatives at your company?_
- _How can pharmaceutical companies cooperate better with industry, academia, and QC_
_suppliers to achieve advancements in relevant problems for actual drug discovery?_
- _What are the key benchmarks you are monitoring in QC (hardware/software) to trigger_
_decisions on increasing (or reducing) investments?_
- _Do you feel you have sufficient funding for QC initiatives in your company?_
- _What internal KPI do you use to justify quantum investments towards your management?_
- _Which are some promising use cases and quantum algorithms in the drug discovery space_
_for NISQ era? Which are promising more mid- to long-term?_
- _Are there sufficient algorithms and use cases for drug discovery?_
- _What benefits are you projecting for your company from these applications?_
- _When do you expect to achieve a positive ROI from your QC investments? How does that_
_expectation influence your investment strategy?_
- _How do you approach QC talent management based on the evolution of QC benchmarks?_
- _Did the surge in AI or other technologies reduce your overall budget for QC investments?_
## 2. Computer-Aided Drug Discovery
### 2.1. Classical Computer-Aided Drug Discovery
Drug discovery is an inherently expensive and long process, costing billions of dollars, with compounds requiring over a decade from their initial stages to commercialization. With worldwide revenues close to 1.5 trillion USD (Mikulic, 2024), the pharmaceutical business has both the
resources and incentives to invest in accelerating drug discovery.
A key area of drug discovery is _hit identification_ , in which compound screening assays are executed
to identify a compound to address the drug target. The hit is a _compound with the desired activity
in a compound screen and whose activity is confirmed upon retesting_ (Hughes, Rees, Kalindjian,
& Philpott, 2011). A wide variety of techniques, such as High-Throughput (HTS), Focused, Fragment, Physiological, and Virtual screening, or structure-aided drug design, is used to identify hit candidates from compound libraries drawn from a chemical space of up to 10^60 molecules (Polishchuk, Madzhidov, & Varnek, 2013); molecules can also be designed _de novo_. Out of all potential targets,
_lead compounds_ are identified based on several attributes: medicinal activity (i.e., the ability to
bind to the target), ADMET properties (absorption, distribution, metabolism, excretion, and
toxicity), and pharmacokinetics. To determine these properties, in-vitro techniques like HTS and
_in-silico_ computer-aided drug design (CADD) techniques like vHTS (virtual HTS) are used.
CADD is used to simulate the physics of molecular systems for small molecules and biological
targets during target identification, hit search, lead discovery, and lead optimization. Cao et al.
(Cao, Fontalvo, & Aspuru-Guzik, 2018) split CADD methods into **Structure-based** methods
like target structure prediction, molecular docking, de novo design, and binding affinity calculation, and **Ligand-based** methods like quantitative structure-activity relationship (QSAR) modeling.
**Target identification**
Structure-based CADD uses the 3-D molecular structure of the target protein to predict its ability
to bind, while the less effective ligand-based methods predict the activity based on reference
molecules. To determine the structure of the target, proteins are studied with nuclear magnetic
resonance spectroscopy or X-ray crystallography. The structures can also be predicted via protein
folding based on amino acid sequences or using comparative modeling comparing them to other
known proteins from e.g., the Protein Data Bank. Next to the structure, the binding sites for the
ligand (candidate compound) must be identified. Once these attributes are known, the conformation of the ligand in the target binding site needs to be determined ( _molecular docking_ ), and the strength of the interaction between ligand and target ( _binding affinity scoring_ ) must be calculated. In the absence of structure information, QSAR can predict activity based on
molecular descriptors like molecular weight, geometry, volume, surface areas, ring content, 3-D
geometrical information, and others. Additionally, binding to _anti-target_ sites needs to be calculated, i.e., targets that should not be bound for safety and efficacy reasons.
**Hit Search**
Traditional HTS is a manual comparison of compounds with the right structure and activity from
a database and is very slow and expensive. Virtual HTS can find suitable molecules faster based
on different molecular descriptors of known structures, or again via QSAR if properties of
particular compounds are missing. QSARs are trained with datasets of particular descriptors and
are applied to small sets of compounds that are already characterized comprehensively. As such,
the results of QSAR are highly dependent on the initial training, as well as the chosen descriptors
and compound sets, and limit the diversity of the resulting candidates. QSAR and alchemical
perturbation methods are becoming more sophisticated with the explosion of generative models
and are continuously evolving fields.
Alternatively to QSAR, structure-based searches can be employed to predict protein-ligand
geometries and scoring. This is done with molecular simulations or with statistical and AI
methods^1. The scoring itself is related to the binding affinity of the candidate with the target, i.e.,
the free energy of the formation of the ligand-target complex. Ideally, ab initio methods that
simulate the quantum mechanical effects of the binding are employed, but classical computers are
very limited in these simulations. Up to 2^n amplitudes must be tracked for n qubits, which would bring modern high-performance computers to a limit of roughly 50 qubits (Lund, Bremner, & Ralph, 2017). This is too small for molecule simulations by several orders of magnitude, as I will highlight
later, so this method is not used widely today. Quantum computing can fill this gap and enable
us to re-imagine the Hit Search process.
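For a rough sense of this scaling, the back-of-the-envelope sketch below (my own illustrative estimate, assuming a dense double-precision statevector with 16 bytes per complex amplitude) computes the memory required to track all amplitudes of an n-qubit state:

```python
# Back-of-the-envelope sketch (assumption: dense statevector, 16 bytes per
# complex double-precision amplitude) of why exact classical simulation of
# quantum states hits a wall around ~50 qubits, as referenced above.
for n in (30, 40, 50, 60):
    amplitudes = 2 ** n
    memory_bytes = amplitudes * 16        # complex128 = 16 bytes per amplitude
    print(f"{n:2d} qubits: {amplitudes:.1e} amplitudes, ~{memory_bytes:.1e} bytes")
# At 50 qubits this is already ~1.8e16 bytes, i.e., tens of petabytes of memory.
```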
**De Novo Design**
Next to vHTS to find a suitable candidate, a _de novo_ design of a ligand can be employed by _ligand
growing_ or _ligand linking_. In the former method, a binding site is docked with a known ligand,
and then structures are added to that ligand for better binding. In the latter method, multiple
ligands are docked and then linked to improve scoring. Again, pharmacokinetics and reaction
paths must be predicted here, an area where AI methods are rapidly evolving (Allen, et al., 2022),
(Jayatunga, Xie, Ruder, Schulze, & Meier, 2022), but are fundamentally limited as opposed to
QC simulations due to the exponential complexity scaling.
(^1) I will use AI and Machine Learning (ML) interchangeably in this thesis.
**Lead Discovery and Optimization**
Next, the lead candidates must be identified among the hits and optimized iteratively,
particularly their activity, pharmacokinetics, and ADMET. This is performed by alternating
between in vitro, _in vivo_ (inside an organism), and CADD methods, since small changes to
structure can have big effects on optimizing those properties. Similarly to previous steps, QSAR and structural methods are used, but they share the limitation that Machine Learning (ML) models yield low simulation accuracy. The techniques used to estimate the
binding affinity of lead candidates are semiempirical methods (SE), density functional theory
(DFT), or Hartree–Fock (HF) calculations with increasing layers of accuracy but decreasing size
of predicted systems, as seen in Figure 1.
_Figure 1: Zoom in on the compound intermediate of cytochrome-c peroxidase (PDB 1ZBZ). a. Force fields/semi-empirical methods can model large systems but cannot fully describe quantum-mechanical effects. b. To model the central portion of the protein, Hartree–Fock/DFT methods can be exploited; DFT includes electronic correlation. c. Coupled-cluster (CC) methods. d. The full configuration interaction (FCI) method delivers the exact energy of the electronic-structure problem but can deal only with a handful of atoms. Source: (Santagati R. A.-G., 2024)_
Going one level deeper, the thermodynamic calculation of the binding free energies is much more
difficult, requiring a quantum mechanical free-energy simulation or perturbation (FES and FEP)
and Coupled-Cluster Method with Single, Double, and Triple Excitations (CCSD(T)). More recent
techniques that have shown success are Molecular Mechanics Poisson Boltzmann Surface Area
(MM-PBSA) and Linear Interaction Energy (LIE) (King, Aitchison, Li, & Luo, 2021). However,
the accuracy of current methods, even when combined with modern ML techniques is often not
sufficient to be used at all. The needed accuracy is within 1.0 kcal mol^-1, because a deviation of 1.5 kcal mol^-1 at body temperature can give a dosage estimate that is wrong by an order of magnitude. As Blunt et al. (Blunt, et al., 2022) mention, many current high-throughput workflows do not even use current state-of-the-art methods like DFT. It is important to note that free-energy calculations of several thousands of atoms require billions of energy and force calculations, as they also must
include water as a solvent. Quantum computing can overcome this challenge with the appropriate
size and change this paradigm.
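To illustrate why accuracy at the level of roughly 1 kcal mol^-1 matters, the short calculation below (my own back-of-the-envelope check, assuming the ideal relation between binding constant and free energy, K ∝ exp(-ΔG/RT)) shows that about 1.4 kcal mol^-1 of free-energy error already corresponds to an order-of-magnitude error in the predicted binding constant at body temperature:

```python
# Quick check (assumption: ideal thermodynamic relation K ~ exp(-dG/RT)) of the
# free-energy error that corresponds to a 10x error in the binding constant.
import math

R = 1.987e-3        # gas constant in kcal/(mol*K)
T = 310.0           # approximate body temperature in K
ddG_per_decade = R * T * math.log(10)
print(f"Free-energy error per 10x change in binding constant: {ddG_per_decade:.2f} kcal/mol")
# Prints ~1.42 kcal/mol, consistent with the ~1.5 kcal/mol figure cited above.
```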
### 2.2. Quantum Computer-Aided Drug Discovery
**Introduction to Quantum computing**
In May 1981, at a conference, Richard Feynman gave a talk on the topic of _Simulating physics
with Computers_. The same talk was published shortly after (Feynman, 1981) and arguably
launched the field of Quantum Computing. The proposed idea was to simulate physics by a
universal computer, and since the world is quantum mechanical, this simulation should also be
quantum mechanical. Probably the most famous quote from this talk, however, was:
```
Nature is not classical, dammit, and if you want to make a simulation of Nature, you'd better make it quantum mechanical, and by golly it is a wonderful problem because it doesn't look so easy.
```
Feynman saw the limitations of classical computing being used to simulate physics in a better-than-approximative manner. Specifically, he states that:
```
This is called the hidden variable problem: It is impossible to represent the results of
quantum mechanics with a classical universal device.
```
As this idea matured, Deutsch (Deutsch, 1985) formalized the notion of a quantum computer and
Deutsch and Jozsa (Deutsch & Jozsa, 1992) formulated the first algorithm showing a computing advantage. The **Deutsch-Jozsa** problem gives an oracle that implements a function f: {0,1}^n → {0,1} with an input of n binary values. The task is to determine, by querying the oracle, whether f is a constant function (i.e., it always returns the same value, 0 or 1) or balanced (i.e., it returns 0 for exactly half of the inputs and 1 for the other half, e.g., the parity function). This algorithm obviously has no practical
application, but it is simple to understand and demonstrates a clear speedup of using a quantum
computer versus a classical computer, kicking off the quantum computing algorithm exploration.
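To make the oracle setting concrete, the minimal sketch below (my own NumPy statevector simulation for n = 2 input qubits, not an SDK or hardware implementation) runs the Deutsch-Jozsa circuit: Hadamards on all qubits, a single oracle query, Hadamards on the input register, and one readout distinguishes constant from balanced functions:

```python
# Minimal sketch (assumption: illustrative NumPy statevector simulation with
# n = 2 input qubits plus one ancilla; not an SDK or hardware implementation).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

def oracle(f, n):
    # Permutation matrix for U_f |x>|y> = |x>|y XOR f(x)>
    dim = 2 ** (n + 1)
    U = np.zeros((dim, dim))
    for x in range(2 ** n):
        for y in (0, 1):
            U[(x << 1) | (y ^ f(x)), (x << 1) | y] = 1
    return U

def deutsch_jozsa(f, n=2):
    state = np.zeros(2 ** (n + 1)); state[1] = 1.0        # |0...0>|1>
    H_all, H_inputs = H, I2
    for _ in range(n):
        H_all = np.kron(H, H_all)                         # H on every qubit
        H_inputs = np.kron(H, H_inputs)                   # H on input qubits only
    state = H_inputs @ (oracle(f, n) @ (H_all @ state))   # one oracle query
    p_all_zero = abs(state[0]) ** 2 + abs(state[1]) ** 2  # input register reads |0..0>
    return "constant" if np.isclose(p_all_zero, 1.0) else "balanced"

print(deutsch_jozsa(lambda x: 0))      # constant function          -> "constant"
print(deutsch_jozsa(lambda x: x & 1))  # balanced (parity of a bit) -> "balanced"
```

A classical algorithm needs up to 2^(n-1) + 1 oracle queries in the worst case to make the same distinction with certainty, while the quantum circuit needs exactly one.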
After this, Simon (Simon, 1997) described a superpolynomial speedup on another oracle-type
problem formulated earlier by Bernstein and Vazirani (Bernstein & Vazirani, 1997). This inspired
Peter Shor to develop both the quantum version of the **Fourier Transformation**, which can be used to compute discrete logarithms, and the famous **Shor's algorithm** for prime factorization
(Shor, 1999). This is arguably one of the most important advancements in the field of quantum
computing, moving the field to an explosion in growth and interest, as it promised to crack secure
encryption protocols still used to this day.
A plethora of interesting algorithms have been introduced since then, such as **Grover's** algorithm
(Grover, 1996), which provides a quadratic speedup for searching unstructured databases. In
classical computing, searching an unstructured database has a linear time complexity, while
Grover's algorithm can do it in roughly the square root of that time, making it significantly faster
for very large databases and search queries.
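For a rough sense of scale (my own illustrative arithmetic, assuming an idealized oracle model with a single marked item and ignoring error correction and other hardware overheads), the snippet below compares expected classical query counts with the number of Grover iterations:

```python
# Rough illustration (assumption: idealized query model, single marked item,
# no error-correction or hardware overheads) of Grover's quadratic speedup.
import math

for N in (10**6, 10**9, 10**12):
    classical_avg = N / 2                                   # expected classical queries
    grover_iters = math.floor(math.pi / 4 * math.sqrt(N))   # optimal Grover iterations
    print(f"N = {N:.0e}: classical ~{classical_avg:.1e} queries, Grover ~{grover_iters:.1e}")
```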
**Quantum Computer-Aided Drug Discovery Methods**
Since quantum computing started with Feynman's idea to simulate Nature, chemistry and drug
discovery are possibly the most obvious use cases for its application. Figure 2 summarizes the
areas of application of quantum computing methods for near-term and long-term techniques based
on the CADD areas and pipeline phases described earlier.
_Figure 2: a) General workflow of the drug discovery process. Here, Cao et al. focus on the early
phase where computationally intensive quantum chemical analyses are involved. (b) Components
of each stage of drug discovery that heavily involve quantum chemistry or machine learning
techniques. (c) Quantum techniques that can be applied to the components listed in (b) and
potentially yield an advantage over known classical methods. Here, they make the separation
between techniques for NISQ devices and FTQC devices. Source: (Cao, Fontalvo, & Aspuru-
Guzik, 2018)_
**Variational Quantum Eigensolver (VQE)**
**VQE** (Peruzzo, McClean, & Shadbolt, 2014) is a variational algorithm for finding the eigenvalues
of a Hamiltonian. It is used in quantum chemistry as a heuristic hybrid quantum-classical
algorithm determining the ground state energy of a Hamiltonian, which is important for
understanding chemical properties and reactions. It minimizes the energy expectation of the
output Hamiltonian by tuning the parameters of the quantum circuit. This algorithm is currently
useful because it can be implemented on noisy intermediate-scale quantum (NISQ) computers.
A drawback VQE shares with all variational techniques is that it requires a reasonably good
_ansatz_ i.e., a starting guess close to the optimum, so it can avoid getting stuck in a non-useful
local optimum. Since classical methods like HF outperform quantum algorithms on current
hardware, they can be used to create an ansatz that is reasonably good for VQE to optimize. The
resulting circuit is relatively short, so it can work on near-term NISQ devices, and by having
randomness in the variational approach, inherent errors from noisy quantum hardware can be
compensated to an extent.
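To make the variational loop concrete, here is a minimal sketch (my own toy example, assuming an arbitrary two-qubit Hamiltonian and a classical NumPy statevector standing in for quantum hardware) of the structure VQE uses: a classical optimizer tunes circuit parameters to minimize the measured energy expectation value:

```python
# Minimal VQE-style sketch (assumption: toy two-qubit Hamiltonian, classical
# NumPy statevector in place of a quantum device, two-parameter ansatz).
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 1], [1, 0]]); Z = np.diag([1.0, -1.0]); I2 = np.eye(2)
H = np.kron(Z, Z) + 0.5 * np.kron(X, I2) + 0.5 * np.kron(I2, X)  # toy Hamiltonian
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

def ry(theta):
    # Single-qubit Y rotation
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

def energy(params):
    # Hardware-efficient-style ansatz: two Ry rotations plus one entangling CNOT
    psi = CNOT @ (np.kron(ry(params[0]), ry(params[1])) @ np.array([1.0, 0, 0, 0]))
    return float(np.real(psi.conj() @ H @ psi))   # <psi|H|psi>

result = minimize(energy, x0=[0.1, 0.1], method="COBYLA")
print("VQE estimate (variational upper bound):", round(result.fun, 4))
print("Exact ground-state energy:             ", round(np.linalg.eigvalsh(H)[0], 4))
```

Because the two-parameter ansatz is restrictive, the optimizer returns an upper bound on the true ground-state energy; this mirrors the ansatz-quality issue discussed above.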
VQE has been further optimized and altered, e.g., as per Elfving et al. (2020): VQE-UCC combines the Variational Quantum Eigensolver algorithm with the chemistry-inspired Unitary Coupled Cluster ansatz, and VQE-HF performs the quantum version of the Hartree-Fock procedure. The
Quantum Equation-of-Motion VQE (qEOM-VQE) and Variational Quantum Deflation (VQD)
methods are used to compute excited state energies (Elfving, et al., 2020).
However, a limitation of VQE is that the solution is, like its classical alternatives, an
approximation without adjustable accuracy levels. Furthermore, it needs a classically calculated
ansatz, and, most importantly, the runtime estimations for large-scale calculations are not
predictable, i.e., it is not a provably efficient algorithm. Due to these severe limitations, the
conclusion of Blunt (Blunt, et al., 2022), and the interviews I conducted^2 is that, while VQE is
useful for QC research purposes in the NISQ era, it will not be used in any realistic scenario of
drug research to the scale and precision required.
**QPE**
The **Quantum Phase Estimation (QPE)** (Kitaev, 1995) estimates the phase (or eigenvalue)
of an eigenvector of a unitary operator. This approach is more promising for drug discovery than
VQE. It is a Hamiltonian phase estimation to extract the eigenenergies associated with prepared
eigenstates of a Hamiltonian. The needed accuracy can be defined for QPE, and the runtime scales polynomially with the input size.
(^2) As mentioned earlier, the summary of these interviews is in Appendix B.
QPE uses the dynamical evolution under the target Hamiltonian,
which is implemented using techniques such as trotterization, taylorization and qubitization. They
require the fermionic Hamiltonian and the Initial state constraints as an input and then calculate
the energy level of the state and the eigenvectors, which represent ground electron configurations.
More formally, given a fermionic Hamiltonian _H_ and a parametrized state |ψ(x)⟩, it finds the x that minimizes λ in _H_ |ψ(x)⟩ = λ|ψ(x)⟩, where λ is the eigenvalue of a specific eigenstate. Both _H_ and |ψ(x)⟩ scale quickly, since each additional orbital doubles the number of potential configurations.
Due to this, Quantum Phase Estimation assumes Fault-Tolerant Quantum Computing (FTQC)-
era logical qubits, so it is currently very limited in its usage. Recently, Yamamoto et al.
(Yamamoto, Duffield, Kikuchi, & Ramo, 2023) demonstrated running a Bayesian QPE with
Quantinuum's trapped-ion machine using two logical qubits. However, this was only done for a two-qubit Hydrogen model, which is very far away from any useful applications requiring thousands
of logical qubits as I will highlight later.
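The numerical sketch below (my own toy example, assuming a 2x2 Hamiltonian and an exact matrix exponential in place of a real phase-estimation circuit) illustrates the principle QPE exploits: the eigenphases of U = exp(-iHt) encode the eigenenergies of H, which the algorithm reads out:

```python
# Minimal numerical sketch (assumption: toy 2x2 Hamiltonian, exact matrix
# exponential instead of a quantum circuit) of the principle behind QPE:
# the eigenphases of U = exp(-iHt) encode the eigenenergies of H.
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5],
              [0.5, -0.3]])          # toy Hermitian "molecular" Hamiltonian
t = 0.1                              # evolution time chosen so phases stay in (-pi, pi]

U = expm(-1j * H * t)                # time-evolution operator
phases = np.angle(np.linalg.eigvals(U))
energies_from_phases = sorted(-phases / t)       # recover E from e^{-iEt}
exact_energies = sorted(np.linalg.eigvalsh(H))

print("Energies recovered from phases:", np.round(energies_from_phases, 6))
print("Exact eigenenergies of H:      ", np.round(exact_energies, 6))
```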
QPE can be applied to the mentioned _ab initio_ estimations for docking but also for _de novo_ design,
which is limited today due to the challenges in simulating reaction paths to determine the
candidate's synthesizability. All quantum mechanical simulations mentioned in Section 2.1 could
be improved with these quantum methods, ultimately significantly improving the results of
classical methods like DFT. These methods are combined with classical methods, but also
iteratively optimized to reduce the number of necessary calculations: Chen et al. (Chen, Huang, Preskill, & Zhou, 2023), for example, do not directly determine the ideal ground state but instead find the quantum system's local minimum energy state, which quantum computers can do with a clear speed advantage.
**Annealing**
Next to the gate-based methods, the modality of Quantum Annealing offers a few techniques for
Drug Discovery. A recent example is given by Mato et al. (Mato, Mengoni, Ottaviani, & Palermo,
2022) in which a phase of the molecular docking procedure is simulated with molecular unfolding
into a simpler state to manipulate it within the target cavity using Quadratic Unconstrained Binary Optimization (QUBO) (Farhi, Goldstone, Gutmann, & Sipser, 2000). QUBO tackles specific optimization problems by mapping a real-life problem onto annealing computers, which I will discuss later.
Another annealing example is the discrete-continuous optimization algorithm (Genin, Ryabinkin,
& Izmaylov, 2019), which finds the lowest eigenstate of the qubit coupled cluster (QCC) method, using annealing to solve the discrete part of the Hamiltonian eigenstate problem. There are also
applications demonstrated in protein folding (Babej, Ing, & Fingerhuth, 2018) and de novo genome
assembly (Boev, Rakitko, & Usmanov, 2021). While annealing methods provide advantages in the
NISQ era and show promise in accelerating individual classical methods with a hybrid approach,
there are significant challenges with scaling these approaches to useful sizes necessary for drug
discovery.
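For intuition on what an annealer actually minimizes, the toy sketch below (my own three-variable example, solved by brute force rather than on annealing hardware) shows the kind of objective a QUBO formulation encodes; an annealer would map the matrix Q onto an Ising Hamiltonian and settle into its ground state instead of enumerating bitstrings:

```python
# Toy QUBO sketch (assumption: hand-picked 3-variable problem solved by brute
# force; a quantum annealer would minimize the same objective by annealing).
import itertools
import numpy as np

Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])    # upper-triangular QUBO matrix

best = min(itertools.product([0, 1], repeat=3),
           key=lambda x: float(np.array(x) @ Q @ np.array(x)))
print("lowest-energy bitstring:", best)   # (1, 0, 1) for this toy Q
```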
**Quantum Machine Learning**
Many of the discussed classical CADD algorithms are enhanced by ML methods, so it is natural
that speed enhancements created by Quantum Machine Learning (QML) would apply to quantum
CADD as well. Generally, any speedup to a classical approach could be interesting even in the
NISQ era, considering that computing power is arguably the biggest bottleneck in AI currently.
However, considering the net computing cost savings possible^3 , NISQ algorithms will not become
a cost-efficient alternative to current high-performance computation. Looking at FTQC, there are
applications like Bayesian inference (Low, Yoder, & Chuang, 2014), using quantum perceptrons
(Cao, Guerreschi, & Aspuru-Guzik, 2017), HHL^4 , and other QML techniques. However, QML may
provide advantages only in a similar timeframe as the harder quantum simulations (Schuld &
Killoran, 2022). While it is impossible to predict what ML methods will be used for classical
CADD in the 2030s when quantum drug discovery becomes feasible at the earliest, it seems like
executing the molecular simulations via QPE is more promising than using _hardware-accelerated_
ML techniques of classical CADD for the same purpose. However, a _double-hybrid_ approach of
quantum simulations and classical CADD ML methods accelerated by QML may very likely be
used in the FTQC era. A recent example of that approach is HypaCADD, a hybrid classical-
quantum workflow for finding ligands binding to proteins (Lau, et al., 2023).
**Use Cases and Summary**
Looking at particular use cases, some applications explored are active space calculations (CAS) of
cytochrome P450 enzymes (Goings, et al., 2022), to evaluate their chemical mechanism of
reactivity and describe their energy states. P450 is the largest family of hemoproteins, with over 300,000 members known and 57 isoforms encoded in the human genome. These membrane-bound heme-containing enzymes are particularly interesting because they are more realistic simulation use cases and because they function primarily as monooxygenases for the detoxification of organisms.
Protein folding has been an interesting use case explored with annealing methods, as mentioned earlier, but also with gate-based methods, e.g., with VQE and the **Quantum Approximate Optimization Algorithm (QAOA)** (Robert, Barkoutsos, & Woerner, 2021). QAOA (Farhi, Goldstone, & Gutmann, 2014) is designed to tackle combinatorial optimization problems and is another variational algorithm that can be run on NISQ computers. It also has potential applications in logistics, scheduling, and machine learning, among other areas.
(^3) Discussed in Section 5.3.
(^4) The Harrow-Hassidim-Lloyd (HHL) algorithm developed by (Harrow, Hassidim, & Lloyd, 2008) can solve certain systems of linear equations exponentially faster than the best-known classical algorithms.
Classical methods can help with simulations of simpler oral drugs because they are small closed-
shell organic molecules passing through the gut wall. These generally lack strong electronic
structure correlation, so they can be addressed with lower accuracy methods. However, drug
molecules with metal centers have a stronger correlation and some of them can be used for cancer
treatments. Those drugs might be more unexplored due to inherent unwanted effects, or due to
limits in classical simulation, again to be overcome with QC.
Beyond qCADD discussed here, there are other algorithms and use cases that may potentially
impact a drug discovery company. Grover's algorithm can accelerate database searches over very large strings, which are common in, e.g., DNA sequencing. This could accelerate personalized medicine approaches and R&D in the genomics area. QAOA, QUBO, and other algorithms can help execute optimizations in clinical operations, supply chain management, or other areas. However, drug
discovery is not the area with the most significant supply challenges. Larger supply chain
optimization problems can be found e.g., in manufacturing at scale for pharmaceutical, fast-
moving consumer goods, logistics and transportation companies, which may see earlier use cases
than drug discovery.
Per my interviews, better algorithms beyond the existing ones will be required to drive a
significant improvement in drug discovery, especially in the earlier FTQC era. Academia and
industry should focus on finding useful applications in drug discovery in the earlier FTQC period.
This can be achieved, e.g., with contests like Q4BIO, which is searching for a useful algorithm
and use case utilizing 100-200 logical qubits and a 10^5-10^7 circuit depth, expected to be achieved in 3-5 years (Welcome Leap, 2023). A very recent example of such a promising algorithm is the
Generative Quantum Eigensolver (GQE), an evolution of VQE inspired by generative AI with
potential application in ground state search (Nakaji, et al., 2024).
In summary, QC, specifically QPE and derivative methods in a hybrid classical and quantum ML
setting, show big promise in accelerating classical CADD methods in the Target Id, Hit Search,
and Optimization phases of drug discovery. In the following sections, I will explore the current
state of QC hardware, and the trajectory of QC performance to have an actual impact on realistic
drug discovery use cases.
## 3. Quantum Algorithms, Software and Hardware
### 3.1. Advantages and Limitations of Quantum Algorithms
The idea to conduct quantum simulations with a quantum computer is well-argued in Feynman's
paper, as well as in further research on the topic. It fits the intuitive understanding of using
quantum mechanics to simulate quantum mechanics. However, it is not so obvious that quantum
computers could be used to tackle computationally complex problems like Shor's, Grover's, and
others, which are surprisingly unrelated to quantum mechanical problems (Watrous, 2008).
Fundamentally, quantum algorithms create their advantage over classical algorithms by relying
on superposition, measurement collapse, entanglement, and tunneling^5. The idea is that n qubits
need 2n bits of information to be described due to their probabilistic superposition, as compared
to the classical bits. However, due to measurement collapse, the information that can be deducted
from a single measurement of a qubit is again a binary 0 or 1. This alone would not give any
advantage, but by utilizing entanglement, larger circuits that can do useful calculations can be
built.
**Overview of complexity theory**
In classical computation theory, the terms P and NP have been used since the 1970s to compare the complexity of problems. A problem being part of P means that there is an algorithm that can solve it in polynomial time, i.e., that it requires n^k calculation steps for an n-sized input, annotated in _big O_ notation as T(n) = O(n^k) for some positive constant k. A problem being in NP (nondeterministic polynomial time) means its answer can be verified in polynomial time by a deterministic Turing machine and that it is solvable in polynomial time by a nondeterministic Turing machine. The question of P = NP is extremely important in computer science and one of the
seven Millennium Prize Problems from the Clay Mathematics Institute (Clay Mathematics Institute, 2024), with prize money of $1,000,000. The significance is that if P=NP were true, many real-life problems would have practical solutions that scale polynomially and are, as such, possible to calculate with classical computers. Some examples of these problems are 3-SAT, Clique, Vertex Cover, Hamiltonian Cycle, and Travelling Salesman, which have applications in cryptography, scheduling,
combinatorial optimization, process monitoring, and other areas. These problems fall under the
NP-complete category, a subset of the hardest NP problems. Formally, a decision problem C in
NP is NP-complete if every problem in NP is reducible to C in polynomial time.
(^5) Tunneling is mostly used by annealing algorithms; quantum teleportation is another useful property with potential applications in quantum networking and error correction.
**Potential of Quantum Algorithms**
Since quantum computers have the advantage of exponential information, there is a complexity
class BQP of efficiently quantum computable problems. However, this does not mean that BQP contains the NP-complete problems. According to Fortnow (Fortnow, 2009):
```
Even if we could build these machines, Shor's algorithm relies heavily on the algebraic structures of numbers that we don't see in the known NP-complete problems. We know that his algorithm cannot be applied to generic "black-box" search problems so any algorithm would have to use some special structure of NP-complete problems that we don't know about. We have used some algebraic structure of NP-complete problems for interactive and zero-knowledge proofs but quantum algorithms would seem to require much more.
```
As for Grover's algorithm, Fortnow mentions:
```
Lov Grover did find a quantum algorithm that works on general NP problems but that
algorithm only achieves a quadratic speed-up and we have evidence that those techniques
will not go further.
```
This assessment does not necessarily exclude the achievement of exponential speed-ups brought about by quantum algorithms for problems beyond the known ones, like Shor's. However, Fortnow
goes one step further and concludes _that BQP actually contains no interesting complexity classes
outside of BPP_ (Fortnow & Rogers, 1999)_._ BPP (efficient probabilistic computation) is the
probabilistic cousin of BQP and contains the problems that can be solved in polynomial time by
a probabilistic Turing machine with an error bound of 1/3.
This conclusion would limit the space of useful quantum algorithms to specific use cases like
simulations, Shor's algorithm, graph isomorphism, finding a short vector in a lattice, as well as accelerations like Grover's quadratic one, as seen in Figure 3.
_Figure 3: The BQP (bounded-error, quantum, polynomial time) class of problems. Source (MIT
Open Courseware, 2010)_
More recently, Pirnay et al. (Pirnay, et al., 2024) claimed that quantum computers feature a
super-polynomial advantage over classical computers in approximating combinatorial optimization
problems for specific instances of the problem, as seen in Figure 4. These problems include the
Traveling Salesman problem, a key algorithm used in supply chain and logistical optimization.
_Figure 4: (Pirnay et al., 2024)'s work (arrow) shows that a certain part of the combinatorial problems can be solved much better with quantum computers, possibly even exactly._
As this is a very recent paper, the question of whether BPP is (effectively) equal to BQP remains hotly debated in computational complexity theory. Whatever the answer, the algorithmic space for quantum
computing is uncertain, and one generally cannot infer that efficient quantum algorithms will be
found for more interesting problems in the future, ultimately possibly limiting quantum
computationโs usefulness in the long run.
**Shots**
Another parameter to consider when comparing classical to quantum algorithms is the net factor
of the number of shots. Many quantum algorithms like VQE, QAOA, Hamiltonian simulations
etc. are probabilistic, meaning several shots must be executed to get statistically relevant results.
The exact number of shots varies based on the algorithm, precision, and circuit size. However,
this multiplicative factor of shots needs to be considered when comparing the quantum algorithm
runtime to a classical counterpart. In addition to algorithm-required shots, error correction and
mitigation techniques like, e.g., Zero Noise Extrapolation (Giurgica-Tiron, Hindy, LaRose, Mari,
& Zeng, 2020) used currently by IBM also require multiple shots to error-correct, which increases
runtime by another factor when considering the quantum advantage of an algorithm versus its classical counterpart.
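As a rough illustration of this shot overhead (my own sampling-statistics estimate, assuming shot noise that scales as 1/sqrt(shots) and ignoring error-mitigation overhead), the snippet below shows how quickly the required shot count grows with the target precision of an estimated expectation value:

```python
# Back-of-the-envelope sketch (assumption: statistical shot noise ~ 1/sqrt(shots),
# no error-mitigation overhead) of how shot counts scale with target precision.
for epsilon in (1e-1, 1e-2, 1e-3, 1.6e-3):   # 1.6e-3 Hartree is roughly chemical accuracy
    shots = int(1 / epsilon ** 2)
    print(f"target precision {epsilon:.1e} -> on the order of {shots:.0e} shots")
```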
### 3.2. Software and Integration
Quantum algorithms must be brought to life using appropriate software. In QC, an algorithm is
implemented on a physical machine as a _quantum circuit_. Circuits use logical one-qubit gates like
the X, Y, Z, and Hadamard gates, which perform unitary matrix operations on a single qubit, i.e.,
are rotations on the _Bloch Sphere_. Operations are also performed between multiple qubits with, e.g., two-qubit gates like controlled-NOT (CNOT), controlled-Z (CZ), and SWAP, or the three-qubit Toffoli (CCNOT) gate.
Below that, error correction and lastly, quantum hardware operations are performed to execute
these circuits, as seen in Figure 5.
_Figure 5: Different layers of Hardware over logical qubits to Algorithms. Source: (Santagati,
Aspuru-Guzik, & Babbush, 2024)._
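As a concrete example of gates as unitary matrices (my own plain NumPy sketch, independent of any SDK), a Hadamard followed by a CNOT turns |00⟩ into the entangled Bell state (|00⟩ + |11⟩)/√2:

```python
# Minimal sketch (assumption: plain NumPy statevector math, no quantum SDK) of
# a two-qubit circuit as a sequence of unitary matrices.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1.0, 0, 0, 0])      # |00>
state = np.kron(H, I2) @ state        # Hadamard on the first qubit
state = CNOT @ state                  # entangling CNOT
print(np.round(state, 3))             # [0.707 0. 0. 0.707] = Bell state
```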
Quantum software is developed on classical computers, so an important consideration is which
programming language and Software Development Kit (SDK) to use. Beyond that, it is important
to decide how to access the hardware layer, i.e., if a cloud platform is used and how the quantum
computers will interact with classical computers.
**Coding environments**
The main programming languages used in QC are based on Python and C. Based on these, most
manufacturers have created their own SDKs and languages, like IBM's Qiskit, Azure's Modern QDK (formerly Q#), Intel's Quantum SDK, the Braket SDK, CUDA-Q, Google's Cirq, Rigetti's pyQuil, and Quantinuum's pytket, but there are also third-party offerings like PennyLane or OpenQASM. Some platform
SDKs like qBraid can convert between many of these languages to allow easy compatibility across
different machines, which is further assisted by the fact that most of these SDKs have a very
similar syntax based on Python. All these SDKs provide access to the hardware of the respective
provider or to the more cost-effective _simulators_, i.e., classical high-performance computers like NVIDIA's A100 simulating QC, which are today more performant in executing QC code than the actual QPUs (Quantum Processing Units). However, some SDKs are more performant than others; e.g., Qiskit was perceived as less performant in my interviews.
The SDKs mentioned above visualize circuits but are low-level programming languages, which is
why, for example, QASM stands for quantum assembly language (named after the low-level
_assembly language_ in classical computing). This low level of control is important in the current
stage of QC, as quantum programs are circuits, and quantum algorithms are defined at the circuit
level. Many low-level tasks are executed on the software level, from working around connectivity
with SWAP gates for individual qubits, to performing error mitigation techniques between
individual qubits. However, as hardware gets more performant and circuits get bigger, easier
abstractions and a move towards high-level languages will be required. This niche is currently
covered by software platforms like Classiq, which offer automatic characterization and
optimization, accelerating the development of large circuits.
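As a small illustration of how circuit-centric these SDKs are (a sketch assuming the open-source Qiskit package is installed; the equivalent program looks very similar in Cirq, pytket, or the Braket SDK), even a trivial two-qubit program is written gate by gate on individual qubits:

```python
# Minimal sketch (assumption: the open-source Qiskit SDK is installed).
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)    # two qubits, two classical readout bits
qc.h(0)                      # Hadamard on qubit 0
qc.cx(0, 1)                  # CNOT entangling qubits 0 and 1
qc.measure([0, 1], [0, 1])   # measure both qubits into the classical bits
print(qc.draw())             # text diagram of the circuit
```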
Beyond that, AI copilots like the recent Microsoft AI Copilot for Quantum Computing and the
Wolfram Quantum Framework are promising developments. Looking at current rates of evolution
and utility of copilots for classical computing, it is likely that future versions can automate a great deal of quantum programming as well. All these endeavors for simplifying and automating coding
will be critical for an end customer adopting quantum computing, as upskilling data scientists or
programmers into quantum computing coders is a very time-intensive process, as per my
interviews.
The automation and AI aspects will be especially critical for CADD use cases. When going through
Hit Search or Lead Discovery and Optimization, each of the potentially thousands or millions of
molecular simulations will require individualized circuits of >>10^10 Toffoli gates (e.g., for a
CAS(50,50) FeMo-co simulation). Building circuits of this size is an impossible task with manual
coding techniques and will require very efficient automation of circuit design and execution for
molecules to make QC feasible in CADD.
**Cloud platforms and on-premise computing**
A big question for an organization getting into QC is whether to choose a cloud provider or acquire an on-premise computer. Based on my interviews, business end customers currently clearly prefer cloud
services, especially if they are not in the _active quantum research_ phase^6. Cloud platforms offer
massive simplification for end customers as they have no upfront investment, do not require
specialized hardware maintenance efforts, and upgraded machines are made available immediately.
The main cloud platforms offering simulators and hardware from multiple manufacturers are
Amazon Braket, Microsoft Azure Quantum, NVIDIA, qBraid, Quantum Inspire, QM Ware, and OVH Cloud (providing simulators from European manufacturers). Most manufacturers with
commercially available machines also provide a cloud service for their computers, like IBM Q,
Google AI Quantum Cloud via their library Cirq, Rigetti's Quantum Cloud Services, Xanadu, or
OQC Cloud. Alibaba used to have a quantum cloud offering but spun the quantum arm off in
2023 and shut down its quantum cloud.
On the other hand, many educational institutes and research centers currently prefer on-premise
computers, and so do some end customer companies. One motivation for this is to have direct
access to the hardware for research purposes. Another argument is the protection of secrets
(especially for the defense area) and IP that should not be shared with cloud providers, especially
across international borders. National and security agencies will opt for on-premise machines, and
drug discovery companies may follow a similar approach for highly sensitive data.
However, the most crucial consideration would be how to best utilize speed advantages. Currently,
code execution via the cloud is both waitlisted and throttled for most users but can be prioritized
with premium access. Even with full prioritization, since quantum computers work hybrid with
classical ones, the connection speed (latency) between both modalities is important. Both today
in the NISQ era and the future in FTQC, the principle is to perform load-balancing and utilize
the benefits of classical and hybrid computers for the calculations they are best at. For example,
in drug discovery, a _double-hybrid_ approach of using QC simulations assisted by classical methods,
which are expedited themselves by QML, is likely.
Running both classical and quantum software on the same cloud provider in the same physical
location can provide critical speed benefits. Providers like Google AI Quantum, NVIDIA, and QM
Ware have adopted this thinking, providing AI and QC services via the same cloud with QPUs
(^6) The second most active of the four total stages of quantum adoption for a company, which I will
introduce in section 7.
sitting in the same physical location as classical CPUs. Another benefit of cloud computing could
be _multi-QPU backends_ like the one NVIDIA offers. This allows the optimization of specific
calculations on different QPUs with different modalities, utilizing the benefits of each, e.g., combining the gate speed of superconducting QPUs with the superior connectivity of neutral atom QPUs, as I
will highlight later.
**Summary**
In summary, companies adopting QC should carefully consider an appropriate set of SDKs and
cloud providers. To allow future use cases with trillions of gates, a flexible, performant SDK
should be chosen that can address multiple modalities of QPUs and integrates well with classical
CPUs and ML applications. The platform should also be able to utilize automation and AI co-
pilots to accelerate the development of large circuits.
### 3.3. Hardware Modalities
After Deutschโs foundational work on quantum systems, different experimental groups started
constructing quantum computers. In 2000, the famous 5+2 DiVincenzo criteria were formulated
for implementing a quantum computer (DiVincenzo, 2000):
_1. A scalable physical system with well characterized qubits
2. The ability to initialize the state of the qubits to a simple fiducial state, such as |000...⟩
3. Long relevant decoherence times, much longer than the gate operation time
4. A "universal" set of quantum gates
5. A qubit-specific measurement capability
6. The ability to interconvert stationary and flying qubits
7. The ability faithfully to transmit flying qubits between specified locations_
Several different modalities have been invented and developed that fulfill these criteria, which I
will explore below. I will highlight a few of the manufacturers, although the list of hardware and
software suppliers is quite large and very dynamic, as seen in Figure 6.
_Figure 6: List of Quantum Software and hardware providers. Source: Quantum Insider_
**Errors and Error-Correction**
An infamous article from Haroche and Raimond (Haroche & Raimond, 1996) stated that _the large-
scale quantum machine, though it may be the computer scientist's dream, is the experimenter's
nightmare._ Even though this rather pessimistic article came very early in QC history, their
arguments remain strong today. Contrary to classical computing hardware, all known quantum
hardware modalities encounter the issue of having relatively high error rates while executing
operations. While these error rates have improved by several orders of magnitude since 1996, they
are still large enough to make large-scale computation infeasible, since errors propagate and scale
multiplicatively with every layer of gates. Haroche & Raimond describe a way forward based on
error correction, akin to what was later coined as logical qubits, which is still the focus of the
hardware industry today.
Quantum error correction codes (QEC) like surface-, repetition-, Hamming-, Shor- or Steane-codes
(Shor, 1995) (Shor, 1996) (Steane, 1996) are vital components of any quantum computer and have
been a significant field of study ever since. These codes combine multiple physical qubits at the
hardware level to create an error-corrected logical qubit.
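For intuition on why combining physical qubits helps, the toy sketch below (my own simplification, assuming a classical three-bit repetition code with independent bit-flip probability p per physical bit; real quantum codes such as the surface code are far more involved) shows the logical error rate dropping to roughly 3p^2 when p is small:

```python
# Toy sketch (assumption: classical 3-bit repetition code, independent bit-flip
# probability p; real QEC codes are far more complex, but the suppression
# intuition is similar).
def logical_error_rate(p):
    # Majority vote fails only if 2 or 3 of the 3 physical bits flip
    return 3 * p**2 * (1 - p) + p**3

for p in (1e-2, 1e-3, 1e-4):
    print(f"physical error rate {p:.0e} -> logical error rate {logical_error_rate(p):.1e}")
```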
Current methods combine physical and algorithmic techniques, and reducing error rates has been
a critical driver of the development of components like a _quantum error correction chip_ by
Riverlane, or new computing architectures like the _Topological Majorana_ by Microsoft^7.
Despite error correction codes, the high error rates inherent to current quantum computers led
Preskill (Preskill, 2018) to name the current era of quantum computers as NISQ (noisy
intermediate-scale quantum)^8. Since most quantum algorithms rely on logical qubits and gates,
there are currently very limited applications for these algorithms at scale. The next era to move
into is FTQC (Fault-Tolerant Quantum Computing), with sufficient logical qubits to execute
algorithms based on these error-corrected qubits.
**Annealers**
Fundamentally there are three paradigms of quantum computing: the first one is gate-based,
simulating a gate architecture and algorithmic approach akin to classical computers. The second
one is analog, explained later. The third approach is called adiabatic computing, executed by
annealers and inspired by the metallurgical process with the same name. The underlying
phenomenon is that quantum systems will always settle into the energetic minimum. Using this phenomenon, an optimization problem can be mapped as an energy landscape of possible solutions, with the lowest energy being the best solution. By _annealing_, i.e., adjusting the system parameters, the optimal solution will emerge at the lowest energy state using QUBO. There are also hybrid approaches trying to tackle large Ising problems by segmenting the problem and iterating it
between gate-based VQE and smaller annealing Hamiltonian calculations (Liu & Goan, 2022).
The company D-Wave made the first adiabatic quantum computer, which was also the first commercially available quantum computer of any type, released in 2010 with 128 qubits. To this day, D-Wave produces commercial machines, currently offering the 2000Q and Advantage machines, which reach 5,000 qubits. In fact, in hardware benchmarks like Q-Score, D-Wave's machines currently perform at the top levels compared to other modalities.
Annealers are based on supercooled superconducting qubits utilizing Josephson junctions, which
is a very similar architecture to the gate-based superconducting modality. However, the layout is
different, allowing easier scaling and optimization to solve QUBO and Ising problems, by mapping objective functions to graphs and then to Hamiltonians that can be solved with annealing. Adiabatic computers utilize quantum tunneling to achieve solutions faster but do not
(^7) This research has raised some concerns (Frolov, 2021), and Microsoft has not yet demonstrated a working
prototype of this technology, so I will not explore this modality further in this thesis.
(^8) I will explain why we have not yet moved beyond NISQ, even though Microsoft has made contrary claims
recently.
use circuits or gates to address individual qubits, and as such are even more specialized machines
than gate-based quantum computers. Formally, adiabatic computers are equivalent to gate-based
quantum computers (Aharonov, et al., 2007), but this applies only to perfect adiabatic qubits
which are far off from current D-Wave annealers. It is also doubtful whether this method can actually scale to problem sizes that are interesting enough for practical applications while remaining non-stoquastic (i.e., not efficiently simulable by classical computers) (Preskill, 2018).
**Trapped Ions**
The first quantum computer ever was built by Chuang et al. (Chuang, Gershenfeld, & Kubinec,
1998) with nuclear magnetic resonance techniques using a solution of chloroform molecules
creating two physical qubits. However, this technique did not catch on due to significant
deficiencies in coherence and scaling. Instead, trapped ions became the next modality to be built
and remain in commercial use today. This modality was suggested early on by Cirac and Zoller (Cirac &
Zoller, 1995) and implemented experimentally by Monroe et al. (Monroe, Meekhof, King, Itano,
& Wineland, 1995), demonstrating the first quantum logic gate, work that contributed to the 2012
Nobel Prize for Wineland.
Trapped Ions are possibly the most intuitive approach to implement gate-based quantum
computing, where qubits are ions held in place by radio-frequency Paul traps (for Quantinuum
these are ytterbium ions). Lasers or microwaves are directed at individual ions, allowing gate
operations, and photon detectors measure the state of the ions. Trapped ions have the benefit of
longer coherence and higher fidelities than other modalities. Also, individual trapped ions can be
moved around the device with DC electrodes, enabling very flexible connectivity. However, they
do not scale as well, so the number of qubits in current market leading machines is lower than
e.g., for superconducting qubits.
Trapped Ions are considered to be well-suited for running QPE^9. The reason is their high fidelity
and coherence, combined with the fact that Toffoli gates, which are a key component to QPE,
require fewer operations to be executed in a trapped ion computer and, as such, have lower overall
error rates allowing larger circuit sizes.
The current market leaders in this modality are Quantinuum (a merger of Honeywell Quantum
Solutions and Cambridge Quantum Computing) with their H2 model with 32 qubits and 99.9% two-qubit gate fidelity, which
(^9) Even though this is currently not demonstratable at scale, as the most powerful trapped ion machine by
Quantinuum only has a couple of logical qubits and is limited to simulating only molecules as small as
Hydrogen with QPE.
recently demonstrated four logical qubits (Silva, et al., 2024), and IonQ with the 36 qubit Forte
machine (IonQ, 2024).
**Superconducting Qubits**
Superconducting computers are electronic circuits created with lithographical methods used for
classical computing fabrication. These circuits also use Josephson junctions and are cooled to
millikelvin temperatures to create entangled _artificial_ qubits. Their main benefits are fast gate
speeds and the fact that they can be manufactured more easily than other modalities as they use
classical chip fabrication techniques. On the downside, the coherence is lower, and the cooling and
control systems necessary are bulky and energy intensive, so there are questions on physical scaling
and operation.
Superconducting qubits are the modality with the most contenders in the market, with IBM,
Google, IQM, Rigetti, and Oxford Quantum Circuits providing commercial offerings, reaching up
to 133 qubits with IBM Heron. In fact, the first quantum computer offered to the cloud was
superconducting (IBM, 2016), as well as the first commercial quantum computer (Aron, 2019).
Superconducting qubits so far require a higher ratio of physical to logical qubits than Ion Traps,
and it remains to be seen if current architectures can scale to large enough logical qubit counts.
However, they also offer a very dynamic field for using new approaches e.g., to reduce errors, like
with the _Cat Qubits_ explored by _Alice & Bob_ or Fluxonium Qubits explored by _Atlantic Quantum_.
Another approach still in the early research phase is _Silicon quantum dot spin qubits_ explored
by Intel, reaching 6 qubits (Philips, Mądzik, & Amitonov, 2022) and making progress in
fabricating their silicon qubits on a 300mm diameter wafer (Neyens, O.K., & Watson, 2024).
**Neutral Atoms**
Shortly after the trapped ion papers from Zoller and Cirac, Briegel et al. (Briegel, Calarco, Jaksch,
Cirac, & Zoller, 1999) proposed the idea of creating quantum computers with Rydberg atoms,
which was further developed by Saffman et al. (Saffman, Walker, & Mølmer, 2010). However, up
until recently, this modality did not have any commercial offerings, mainly because of low fidelity
and the fact that atoms get _knocked out_ by other atoms during gate operations (Wang, Zhang,
Corcovilos, Kumar, & Weiss, 2015). This changed in 2022 when the MIT/Harvard spinout QuEra
offered Aquila, the machine with the highest qubit count (256), with an aggressive
roadmap to release a machine with over ten thousand qubits (QuEra, 2024) by 2026. Around the
same time, the French company Pasqal announced a _closed beta_ cloud offering of their device.
Other contenders are Infleqtion, with a target to reach 100 logical qubits in 2028 (Siegelwax,
2024), Atom Computing, which plans to offer a 1000-qubit machine (Atom Computing, 2022),
and ColdQuanta.
Neutral atom computers use atoms produced by heated alkaline earth metal sources, cooling them
down with lasers and magnetic fields and trapping them into vacuum chambers. Lasers can
entangle the atoms and excite them into the Rydberg state, allowing gate operations with these
_optical tweezers_ , while the readout is done optically with a fluorescence-detecting camera. This
technology has several benefits: it does not require cryogenic cooling like superconducting
computers, it has high fidelity and coherence times, it allows creating multi-qubit-gates natively
by direct multiple qubit entanglement^10 , and is possibly highly scalable, as the QuEra and Atom
Computing roadmaps suggest.
Neutral atoms are analog quantum computers, meaning that they use continuous variables to
express states and represent information. Digital quantum computers, like trapped ions, have a
discrete representation of qubits and gates and are built in a way to allow implementation of
algorithms like Shor's or Grover's. The _analog continuous work mode_ allows executing simulations,
similar to annealers^11. However, neutral atom machines like the QuEra Aquila can be equipped with
modules that add a _digital mode_ with discrete gates, making them quite flexible and able to
execute hybrid gate/analog algorithms in tandem.
Despite all these benefits, it is not clear if analog simulation techniques can be applied to complex
problems with large qubit numbers, due to noise and limitations in applying error correction to
this modality, as hypothesized by Hauke et al. (Hauke, Cucchietti, Tagliacozzo, Deutsch, &
Lewenstein, 2012). Using the current generation of neutral atoms to execute digital algorithms, a
concern is that gate operation speeds are extraordinarily slow, roughly 1000x slower
than superconducting and 10x slower than trapped ions. The knock-out effect is also still not fully
solved, which means atoms must be reset and reloaded after measurement in each round, so
different techniques like atom reservoir loading in Pause et al. (Pause, Preuschoff, Schäffner,
Schlosser, & Birkl, 2023) are being developed. It remains to be seen whether manufacturers can
scale this modality as aggressively as planned and tackle the issues of knock-out and low gate
speeds.
(^10) As, for example, needed for the Toffoli gate.
(^11) Technically, annealers also work in an _analog_ -like way and are used for similar use cases. However, they
are constructed with digital components like superconducting computers, so they cannot be described as
_true_ analog computers like neutral atoms.
**Photonics**
The last modality explored here is Photonics. Companies like Xanadu, PsiQuantum, ORCA
computing, and Quandela explore this technology, with Xanadu offering their hardware
commercially via the cloud.
These computers use photons as qubits. The qubit states can then be defined by the orientation
of the photonโs electric field oscillations (e.g., horizontal |0> and vertical |1> polarizations) or the
spatial paths the photons take ( _dual-rail encoding_ ). Entanglement happens via interferometry or
directly at the source before computation. Since photons move through the circuit at the speed of
light, the gates cannot be implemented like with trapped ions via excitation with lasers or
microwaves. Instead, the photons pass through Photonic Integrated Circuits consisting of beam
splitters and phase shifters, which simulate quantum gates and are read out via photodetectors.
Photonic computing has good coherence times, is resilient to some forms of noise that affect e.g.,
trapped ions, and information moves at the speed of light so it can be used for quantum networks
via fiber-optic cables over long distances. However, photonic computers suffer from photon loss, require
elaborate error correction techniques, and some of their components require cryogenic cooling. Photonic
computers have not reached the same commercial maturity as other modalities, as there are still
technical challenges to be overcome to become competitive. However, PsiQuantum recently
announced a very aggressive timeline of delivering a one-million-qubit machine to Australia by 2027
(GQI, 2024). It remains to be seen if this machine will be useful for actual use cases and can
overcome other challenges besides scaling qubit numbers, as currently their Bell-state two-qubit gate
fidelity is at 99.22% (Alexander, Bahgat, & Benyamini, 2024), well below the 99.9% (_triple-nine_)
of trapped ions.
## 4. Quantum Computing Benchmarks
Quantifying the performance of quantum computers is crucial for end customers, investors, and
providers alike. The discussion on metrics is not straightforward due to the complexity of the
topic. In classical computing, various benchmarks have been used to indicate progress, as, for
example, examined in Figure 7 (Rupp, 2018), using transistors, single-thread performance,
frequency, logical cores, and also power consumption.
_Figure 7: How progress in microprocessor figures of merit (single-thread performance, clock
speed, and number of logical cores) has slowed down, in relation to total power consumption.
Source: (Rupp, 2018)_
At the same time, a bridge needs to be built comparing classical and quantum performance for
like-for-like use cases. Lastly, I will examine whether a _Moore's Law_ equivalent (Moore, 2006) exists in
Quantum Computing to predict progress and, as such, extrapolate timelines for useful computing.
One note to make is that benchmarking has intrinsic difficulties, starting with the fact that different
hardware modalities are suited to different types of applications. Annealing and analog computing
are better suited to (or practically limited to) specific applications like simulations and optimizations,
while gate-based computers are focused on gate-based algorithms like Shor's, Grover's and
simulations like QPE. As such, modalities must be compared in like-for-like scenarios, and one cannot
infer a direct overall advantage of one modality over another. Ultimately, it is too early in the race
for quantum advantage to crown individual winners, a perception my interviewees shared.
For the following sections, I will use the benchmark categorizations introduced by Wang et al.
(Wang, Guo, & Shan, 2022) updated to the current situation.
### 4.1. Physical-Level Benchmarks
At the lowest level of performance metrics, the most prevalent benchmarks are Qubits, Fidelity,
Gate Speed, Coherence, and Architecture.
**Qubits**
A seemingly obvious metric for measuring a quantum computer's performance is the number of
its qubits, similar to the transistor count used in classical computing. For neutral atoms, the
highest qubit count machine has been QuEra's Aquila with 256 qubits (Wurtz, et al., 2023). For
ion traps, it is 36 and 32 qubits for IonQ Forte (IonQ, 2023) and Quantinuum's H2-1
(Quantinuum, 2023), respectively. For superconducting QC, the highest reported number is 1121
qubits for IBM Condor (Gambetta, 2023)^12. Interestingly, the same announcement also introduced
their 133 Qubit Processor, Heron, and in fact mentioned that it _will form foundation of our
hardware roadmap going forward._ This is because Heron is _offering a five-times improvement over
the previous best records set by IBM Eagle_ with 127 Qubits (IBM Newsroom, 2023) as per the
benchmarks introduced by McKay et al. (McKay, et al., 2023). IBM does not offer its 1121-qubit
Condor or even its 433-qubit predecessor, Osprey, on its cloud service, but does offer the 133-qubit
Heron machine. This marks a clear change in direction for IBM, away from incremental increases
in qubit numbers and toward experimenting with different architectures that use fewer qubits at
higher fidelities, ultimately providing better end results.
Looking at different quantum computing modalities, D-Wave's largest annealing system,
Advantage, reaches over 5000 qubits (McGeoch & Farré, 2022). However, these qubits practically
only work for annealing algorithms and not for gate-based architectures, so again, the numbers cannot
be compared directly to those of gate-based machines.
Thus, qubit count alone is an important but not a very useful metric for the purpose of measuring the
performance of quantum computers, as seen in Figures 8 and 9.
_Figure 8: Qubit count progress over time. Source: (Ezratty, 2023)_
(^12) Similarly, another modality not covered by these metrics is photonics, e.g., from PsiQuantum. Due to
their photonic specificities, papers like (Bartolucci, 2023) focus on different metrics like fusions, i.e., entangling
measurements performed on the qubits of small constant-sized entangled resource states.
_Figure 9: Trajectory of Qubits for different modalities Source: (Cappellaro, 2024)_
**Qubit Connectivity and Architecture**
Another benchmark is the qubit connectivity, which is related to the inherent architecture of each
modality. Connectivity indicates the way qubits are directly coupled with each other. Coupling
qubits allows them to execute a multi-qubit gate operation or entangle them directly. Important
to note is that qubits that are not directly coupled can still be made to interact via intermediate
qubits and SWAP gates, but this increases the number of operations and the circuit size and, as such,
directly reduces performance.
Specifically, in the superconducting architecture, the qubits are arranged in a 2D grid, so only the
direct neighbors are coupled. IBM has a one-to-three connectivity with its heavy-hex lattice
architecture, while Google and IQM have a square one-to-four connectivity.
In contrast, Ion traps like the Quantinuum H1 have _all-to-all_ connectivity, moving and regrouping
qubits into arbitrary pairs around the device with DC electrodes. However, it remains to be seen
if this can scale for larger processors.
Neutral atoms have _any-to-any_ connectivity, meaning every circuit can be programmed to couple
individual qubits with each other. This is enabled via _shuttling_ , which allows maximum flexibility
in circuits and has some applications in error correction. On the flip side, shuttling is a very slow
operation itself, which limits the coherence time. Larger connectivity is preferred as it allows for
more flexible optimizations to reduce the circuit size but does come with a cost on available
coherence time.^13
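As a toy illustration of the routing overhead just described (my own sketch, not tied to any specific device), the following Python snippet counts the extra two-qubit gates needed to execute a CNOT between two distant qubits on a 1D chain, assuming each SWAP is compiled into three CNOTs and the routing SWAPs are undone afterwards; real compilers often avoid part of this cost.
```
def routing_overhead_1d(i, j, cnots_per_swap=3):
    """Extra two-qubit gates to bring qubits i and j adjacent on a 1D chain and back.
    A simple compilation moves one qubit next to the other with |i - j| - 1 SWAPs,
    applies the CNOT, then undoes the SWAPs; each SWAP costs ~3 CNOTs."""
    distance = abs(i - j)
    swaps = 2 * max(distance - 1, 0)          # route there and back
    return swaps * cnots_per_swap

# On a 20-qubit chain, a CNOT between qubits 0 and 19 incurs 2*18*3 = 108 extra CNOTs,
# while an all-to-all device (trapped ions) or any-to-any device (neutral atoms) incurs none.
print(routing_overhead_1d(0, 19))   # 108
```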
Beyond connectivity, different modalities also have benefits and drawbacks in native gates and
creating gate structures. Some algorithms use more gates of a particular type, so computers that
implement that gate more efficiently will perform better for those algorithms. For example, Toffoli
gates, which are very important to quantum chemistry simulations discussed here, are more easily
executed on Ion traps, providing inherent benefits for QPE.
**Decoherence - T1 and T2**
By virtue of being quantum mechanical, qubits are subject to vibrational energy relaxation,
meaning they will inevitably decay from a high energy state |1> to the low energy state of |0>,
as they are exchanging energy with their environment. The time in which that happens is called
energy relaxation time or qubit lifetime and is abbreviated as T1.
Tφ denotes the pure dephasing time, i.e., the time scale on which the qubit phase is randomized,
and T2 is the combined effect of energy relaxation and pure dephasing.
T1, T2 and Tφ are important variables as they dictate for how long calculations can be executed
on a qubit before its state is lost. Neutral atom and especially ion trap modalities have longer T2 times than superconducting
qubits by several orders of magnitude, leading to an inherent advantage.
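For reference, the standard textbook relation between these time scales (not stated explicitly in the cited benchmarks, but widely used) can be written as a small helper:
```
def t2_from_t1_and_tphi(t1, t_phi):
    """Standard relation 1/T2 = 1/(2*T1) + 1/T_phi, so T2 is bounded above by 2*T1."""
    return 1.0 / (1.0 / (2.0 * t1) + 1.0 / t_phi)

# Example with illustrative values: T1 = 100 us, T_phi = 150 us  ->  T2 ~ 85.7 us
print(t2_from_t1_and_tphi(100e-6, 150e-6))
```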
Interestingly, qubit lifetimes within the superconducting modality have increased dramatically,
from the nanosecond to the millisecond range, as better qubit designs have been found, as seen in Figure 10.
(^13) One more modality not mentioned here is the annealers of D-Wave, which have a 15-qubit
connectivity and use couplers as a metric (35000 for the _Advantage_ machine). However, this
connectivity doesnโt have the same implication as there are no gates.
_Figure 10: Evolution of superconducting lifetime over time. Source: (Ezratty, 2023)_
**Fidelity & Errors**
Quantum computers currently generate more errors than classical ones by several orders of
magnitude. These errors are unavoidable and caused by imperfections in many areas like control
pulses, inter-qubit couplings, and qubit state measurements. To measure these errors, several
benchmarks have been introduced. These include _single-qubit gate fidelity_ , _two-qubit gate fidelity_ ,
_readout fidelity_, _state preparation and measurement_ (SPAM) error, _memory error_ per qubit (e.g.,
at average depth circuit), and _crosstalk_ error, although manufacturers may use different
terminologies and variants of these measurements. Each of these benchmarks measures
inaccuracies in executing a particular operation necessary for executing a quantum circuit.
Arguably, single-qubit and two-qubit gate fidelities are especially important as their effect scales directly
with the circuit size. Trapped ions have historically had the highest two-qubit gate fidelity, followed
by the superconducting and neutral atom modalities, as shown in Figure 11.
_Figure 11: Two Qubit Gate Performance for different modalities. Source: (Monroe C. R., 2024)_
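To see why gate fidelity scales directly with circuit size, a rough rule of thumb (my own back-of-envelope, ignoring error correction, SPAM, crosstalk and single-qubit errors) multiplies the per-gate fidelity over all two-qubit gates in a circuit:
```
def naive_circuit_success(two_qubit_fidelity, num_two_qubit_gates):
    """Crude estimate: probability that a circuit runs without any two-qubit gate error,
    treating gate errors as independent and ignoring all other error sources."""
    return two_qubit_fidelity ** num_two_qubit_gates

# With 99.9% (triple-nine) fidelity, ~1000 two-qubit gates already drop the success rate to ~37%;
# with 99.0% fidelity the same circuit succeeds with probability of only ~0.004%.
for f in (0.999, 0.99):
    print(f, round(naive_circuit_success(f, 1000), 6))
```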
**Gate Speed**
Gate speed indicates how fast a one- or two-qubit gate operation is executed. It is directly connected to
decoherence as together they determine how many operations can be conducted in a system before
the qubit decoheres. Interestingly, inversely to coherence, superconducting systems have a much
faster gate speed than trapped ions and neutral atoms. Current superconducting computers run
somewhere in the 1-100 MHz range, which contrasts with current top-end CPUs running at 5
GHz.
**Summary of Physical-Level Benchmarks**
Every physical-level benchmark is an important indicator, so aggregated benchmarks are necessary
to make better like-for-like comparisons between modalities. It is also instructive to plot these
metrics against one another. Examining Figure 12, trapped ions, superconducting qubits, and
neutral atoms sit at different points on roughly the same Pareto front of gate speed versus the
number of operations executable before an error occurs (which folds in fidelity and decoherence).
_Figure 12: Mapping Gate Speed to 1Q and 2Q Gate fidelity for different modalities. Source:
(Oliver, 2024)_
### 4.2. Aggregated Benchmarks
Due to the complexity and interconnection of different physical benchmarks, they have been
aggregated in various forms for simplification, primarily for commercial communication of
progress. Notable endeavors of such metrics include the Quantum Volume, CLOPS, and Circuit
depth coined by IBM, Mirror Benchmarks, the Algorithmic Qubit coined by IonQ, and the Logical
Qubit.
**Quantum Volume**
Quantum Volume (QV) was introduced by Bishop et al. (Bishop, Bravyi, Cross, Gambetta, &
Smolin, 2017) by IBM as an aggregation of different physical-level benchmarks as seen in Figure
13. QV identifies the largest square-shaped circuit that can be run on a given quantum device.
_Figure 13: Quantum Volume is a composite of different lower-level metrics. Source: (Silvestri,
2020)_
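As a rough illustration of how QV aggregates qubit count and gate fidelity (a deliberately crude estimate of my own, not the formal heavy-output protocol), one can ask for the largest square model circuit whose estimated heavy-output probability stays above the 2/3 pass threshold:
```
import math

IDEAL_HEAVY = (1 + math.log(2)) / 2    # ~0.85, asymptotic heavy-output probability of ideal model circuits
THRESHOLD = 2 / 3                       # pass criterion used in the QV protocol

def rough_quantum_volume(n_qubits, two_qubit_error):
    """Optimistic QV estimate from qubit count and two-qubit error alone: a width-n, depth-n
    model circuit has roughly n*n/2 two-qubit gates; all other error sources are ignored."""
    best = 1
    for n in range(2, n_qubits + 1):
        circuit_fidelity = (1 - two_qubit_error) ** (n * n / 2)
        heavy = circuit_fidelity * IDEAL_HEAVY + (1 - circuit_fidelity) * 0.5
        if heavy > THRESHOLD:
            best = 2 ** n
        else:
            break
    return best

# Illustrative inputs only; real devices report lower QV because SPAM, memory and crosstalk errors
# are ignored here, so this is an upper bound rather than a prediction.
print(rough_quantum_volume(n_qubits=32, two_qubit_error=0.002))
```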
An intriguing property of Quantum Volume is that exponential scaling similar to Moore's law
has been consistently delivered by IBM and Quantinuum, as seen in Figures 14-16, which could
also be a key reason why it is still used and promoted by both companies.
_Figure 14: Quantum Volume evolution for IBM. Source: (Jurcevic, Zajac, Stehlik, Lauer, &
Mandelbaum, 2022)_
_Figure 15: Quantum Volume evolution for Quantinuum. Source: (Quantinuum, 2024)_
_Figure 16: Quantum Volume across all modalities. Source: (Metriq, 2024)_
QV is criticized by papers like Proctor et al. (Proctor, Rudinger, & Young, 2022) but also by IBM
itself via a paper from Wack et al. (Wack, et al., 2021) as being insufficient for three main reasons:
it is not useful for devices with more qubits than log2(QV), it requires classical computation of
the circuits and is, as such, not future-proof, and it relies on square circuits. This is artificial, as
circuits in real-life applications have no reason to be square^14 and would certainly not primarily
consist of CNOT gates. Despite that, QV has been widely adopted by hardware providers and is
still used in spec sheets and marketing materials. While QV served to demonstrate Moore's law-
like behavior and to raise awareness that metrics beyond qubit count matter, I do not
foresee a long-term establishment of QV, considering its flaws. More accurate benchmarks will
(^14) For example, VQE and QAOA algorithms rely on the repetition of gate layers, and Shor's circuit depth
is n^3 for n qubits.
likely replace QV gradually, especially if they work in the manufacturers' favor by
demonstrating consistent exponential growth of capabilities.
**CLOPS & EPLG**
Building on the criticism of QV and inspired by the established classical computing benchmark
of FLOPS (FLoating point Operations Per Second), IBM introduced a quantum equivalent,
CLOPS (Circuit Layer Operations per Second) (Wack, et al., 2021)^15. CLOPS measures the
number of quantum circuits a quantum processing unit can execute per second and is a speed-
oriented metric, as seen in Figure 17. It is defined as _the number of QV layers executed per second
using a set of parameterized QV circuits, where each QV circuit has D = log2(QV) layers_. As such,
the Quantum Volume established for a machine needs to be used in this metric.
_Figure 17: Benchmarking pyramid showing how quality and speed can be benchmarked and
Quantum Volume is associated with CLOPS and lower-level metrics. Source: (Wack, et al.,
2021)._
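A sketch of how such a speed metric is computed follows; the D = log2(QV) factor comes from the definition quoted above, while the template, update, and shot counts are illustrative placeholders rather than IBM's exact protocol parameters.
```
import math

def clops_estimate(quantum_volume, elapsed_seconds,
                   num_templates=100, num_updates=10, num_shots=100):
    """Layer-throughput estimate in the spirit of CLOPS: total QV layers executed divided by
    wall-clock time. num_templates/num_updates/num_shots are assumed values for illustration."""
    depth = math.log2(quantum_volume)                     # D = log2(QV) layers per circuit
    total_layers = num_templates * num_updates * num_shots * depth
    return total_layers / elapsed_seconds

# Example: a QV 128 machine that needs 250 s for the whole batch -> ~2.8k layers per second.
print(round(clops_estimate(quantum_volume=128, elapsed_seconds=250)))
```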
IBM recognized that CLOPS assumed an idealized version of how circuits run and updated
its definition to CLOPS-h to compensate for that, while also introducing EPLG (error per layered
gate) and its exponential inverse, the Layer Fidelity (Wack & McKay, 2023). Again, there is a
similar exponential improvement pattern for EPLG in Figure 18.
(^15) The same paper also introduced the Cross-Entropy Benchmark (XEB), which was likewise criticized by Gao et
al. (Gao, et al., 2021) and Martiel et al. (Martiel, Ayral, & Allouche, 2021); I will not expand on it further
in this thesis. However, this benchmark is still used in technical papers like the logical qubit demonstration
of QuEra (Bluvstein, 2024).
_Figure 18: Layer fidelity of IBM machines. Source: (Wack & McKay, 2023)_
CLOPS-h & EPLG are used in the spec sheets in IBMโs cloud platform (IBM Quantum Platform,
2024), indicating 3.8k CLOPS and 0.8% EPLG for 133-Qubit Heron r1, and 5k CLOPS and 1.7%
EPLG for the 127-Qubit Eagle r3. CLOPS is also being used by superconducting competitor IQM
which has reached 2.6k CLOPS. However, CLOPS was criticized by Quantinuum (Quantinuum,
2022) for not appropriately balancing run time with fidelity, which disadvantages trapped ion
computers. Since CLOPS is complementary to mirroring benchmarks, without showing a clear
advantage over them or offering significant criticisms of them, it has seen only limited adoption across the industry.
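To connect the two spec-sheet numbers, a small conversion sketch follows; it assumes the relation that layer fidelity is the product of per-gate success over a chain of N qubits (N-1 two-qubit gates), which should be treated as my reading of the definition rather than IBM's exact measurement procedure.
```
def layer_fidelity_from_eplg(eplg, n_qubits):
    """Assumed relation: LF = (1 - EPLG)**(n_qubits - 1) for a linear chain of n_qubits."""
    return (1 - eplg) ** (n_qubits - 1)

# Using the spec-sheet figure of 0.8% EPLG and assuming a chain spanning all 133 qubits:
print(round(layer_fidelity_from_eplg(0.008, 133), 3))   # ~0.35 under this assumption
```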
**Mirroring Benchmarks**
Due to the limitations of QV and other benchmarks, Mirror-RB was introduced by Proctor et al.
(Proctor, et al., 2022) (Proctor, Rudinger, & Young, 2022) to test how well a quantum processor
can run almost any class of circuits/programs. This benchmark takes a circuit as an input and
creates mirror circuits constructed from random disordered logic gate sequences, ordered, periodic
sequences, or quantum algorithm kernels. These mirror circuits are similar to the input circuit
but are efficiently verifiable. Due to its high customization potential, Mirror-RB can be used
application-specifically to mirror specific types of problems. However, it was also criticized by
Martiel et al. (Martiel, Ayral, & Allouche, 2021) for giving too little information on real-life
scenario applications and not reflecting errors like crosstalk, which are more prevalent in actual
circuits versus randomized ones. Despite that, Mirror-RB has been positively received in the
academic space as a mature benchmark but is not widely used by hardware suppliers in their spec
sheets.^16
**Circuit Size & Gates**
IBMโs latest Quantum Hardware roadmap (IBM Newsroom, 2023) measures their progress in
qubits and the number of gates in a circuit that can be executed with reasonable accuracy and
time requirements, which is essentially a scale-based metric. Based on this timeline, IBM Heron
is claimed to achieve 5K gates, reaching 100M in 2029 and 1B in 2033+, achieving _1000s logical
qubits._ This gate number is the product of the number of utilized qubits and the deepest possible circuit (of
CNOT gates); for a square circuit, the qubit count and the depth are each the square root of the total gate
number. It is, as such, connected to quantum volume and the logical qubits discussed later. This definition of the circuit benchmark goes back to
two targets set by IBM, the 100 qubits as a goal to accelerate scientific discovery with quantum
computing (Mandelbaum, Davis, Janechek, & Letzter, 2023), and the 100x100 challenge
(Gambetta, 2022) consisting of running a quantum circuit of size 100x100=10K in less than a day
producing unbiased results. However, this form of measurement, as a square circuit, has the same
disadvantage as that mentioned for QV.
If circuit size is instead adjusted to a more realistic use case, e.g., layers of T or Toffoli gates for
QPE, it can be used as an important benchmark. I will highlight this aspect further in section 5.2
when discussing application-specific quantum advantage for chemical and drug discovery
applications.
**Algorithmic Qubit**
The Algorithmic Qubit (AQ) was coined by IonQ and published in (Nam, 2020) along with a
demonstration of a ground-state energy estimation of the water molecule on IonQ's pre-
commercial computer with AQ 4. It is a modification of the _algorithmic volumetric benchmarking_
metric introduced originally by Lubinski et al. (Lubinski, et al., 2021) and is a simplification of
QV. As per IonQ (IonQ, 2024), _the expression #AQ = N, can be defined as a side of the largest
box of width N and depth N² that can be drawn on this volumetric background, such that every
circuit within this box meets the success criteria._ IonQ has used this metric extensively in their
publications and press releases, most recently for their 35 AQ IonQ Forte computer (IonQ, 2024).
However, AQ is not used by any other notable manufacturer and has been criticized by
(^16) There are further mirroring variations like Inverse-free RB from (Hines, Hothem, Blume-Kohout, &
Whaley, 2023) or KPIs from (Becker, et al., 2022), which I will not dive deeper into here, as they have
similar advantages and limited adoption rates as Mirror-RB.
Quantinuum (Quantinuum, 2024) on the grounds that the _algorithmic qubits test turns out to be very susceptible
to tricks that can make a quantum computer look much better than it actually is._ Considering the
current reactions of the industry and academia, AQ is unlikely to be adopted as a widely used
benchmark.
**Logical Qubits**
Recently, Boger (Boger, 2024) stated that _Quantum Computing Has Entered the Logical Qubit
Era_, and technical media like EETimes (Skyrme, 2024) mention that _the number of logic qubits
per system is becoming an increasingly important criterion for assessing the long-term success
potential of quantum hardware_. The Logical Qubit is increasingly used by manufacturers like
Microsoft, Quantinuum, QuEra, Google and IBM to indicate progress in their hardware
performance (Haah, 2024) (Svore, 2023) (Neven & Kelly, 2023) (QuEra, 2023) (Silva, et al., 2024)
(Bluvstein, 2024) (Bravyi, et al., 2023) (Google Quantum AI, 2023).
Fundamentally, a logical qubit is an abstraction, in particular an encoding using a collection of
physical qubits to protect against errors, using error correction codes discussed in Section 3.3. In
contrast, a physical qubit is the physical implementation of that qubit, e.g., a neutral atom, a
trapped ion, a photon, or a cryogenic superconducting circuit serving as a qubit that can be
controlled and measured.^17
Part of the allure of using this benchmark is that it combines the benchmarks of qubit, fidelity,
and error correction together into one metric and simultaneously builds a bridge to quantum
algorithms, which are described using theoretically _perfect_ qubits without error rates. Since a
logical qubit aggregates physical qubits to approximate a noise-free qubit, the error correction code used for each type of
qubit, together with the individual fidelities, leads to different methods of, and results for,
defining and counting logical qubits.
Several manufacturers have recently claimed to have achieved logical qubits, most notably the 48
logical qubits (Bluvstein, 2024) from QuEra based on 280 physical qubits, the four logical qubits
(Silva, et al., 2024) from Quantinuum and Microsoft based on 30 physical qubits, and IBMโs 12
logical qubits out of 288 physical ones (Bravyi, et al., 2023).
Interestingly, the ratio of physical to logical qubits based on the above publications is different
for each modality: roughly 8:1 for ion traps (30:4), roughly 6:1 for neutral atoms (280:48),
(^17) Technically, a physical qubit can also become a logical qubit on its own. For example, the company Nord
Quantique experiments with bosonic codes to create a self-correcting physical qubit acting as stand-alone
logical qubits (Lachance-Quirion, et al., 2024). Creating logical qubits is an integrated effort combining hardware
and software techniques.
while superconducting is at ~24:1 (288:12) based on the mentioned papers^18. Note, though, that these
claimed _logical_ qubits have important differences. The Quantinuum/Microsoft experiment can
only exhibit the low error rates necessary for a logical qubit (an 800x improvement over physical qubits)
for the first gate operation; for subsequent gates, the error rate falls back to par with
physical qubits^19. In any real-life scenario, having a logical qubit only for the first operation is
of negligible value.
Looking at the QuEra logical qubits, these are not fully error-correcting but only error-detecting
_logical_ qubits. QuEra's current roadmap (QuEra, 2024) states that the current Aquila computer
has zero fully error-corrected logical qubits and is expected to grow to 10 logical qubits based on its
256 physical qubits, and then to 30 and 100 in 2025 and 2026, respectively, based on much larger
machines, giving it a ratio of roughly 100 physical qubits per logical qubit.
Despite this misuse of the logical qubit term, the current trend suggests that logical qubits will
be adopted and pushed more strongly by the industry as a benchmark, especially as we reach the
technological milestone where they can be achieved. Notable exceptions are IonQ persisting with
AQ, IBM using gate counts, at least in its marketing materials, and smaller manufacturers like OQC
(OQC, 2024), Atom Computing (Atom Computing, 2022), and Rigetti (Rigetti, 2024)^20 using
different combinations of qubits, fidelity types, and coherence for their latest quantum system
specification sheets. However, a reason for not using this metric could be that these manufacturers
have not yet demonstrated the appropriate sizes and fidelities to enable logical qubits.
The logical qubit metric can help end customers to map the capabilities of a particular machine
to the usage in their respective algorithms, so it seems a useful metric that could persist for a
longer period. Looking at the themes of the interviews I conducted, the logical qubit was the most
prominent metric used as an external benchmark for hardware progress.
However, a more precise and universally accepted definition of this term and associated
measurement techniques are necessary to avoid incomplete comparisons. Even though the exact
definition of the logical qubit is tied to the error correction code used, there are no universally
accepted error-rate thresholds for qualifying as a logical qubit. That said, Google (Google Quantum
(^18) Before the IBM paper using qLDPC (quantum low-density parity check) codes, the ratio for
superconducting qubits was closer to 1000:1.
(^19) As per Figure 7 of (Silva, et al., 2024) and highlighted in a blog post comment by (Gidney, 2024).
(^20) As mentioned earlier, photonics providers are not using logical qubits as a metric either, like PsiQuantum
proposing fusions in (Bartolucci, 2023). However, fusions are fundamentally tackling the same issue as
fidelity/qubits and logical qubits, i.e., trying to demonstrate a practical applicability for running quantum
algorithms on these devices with error-corrected photonic qubits.
AI, 2024) has provided roadmap targets for logical qubits that scale from a current logical error rate
of 10^-2 to 10^-6 in _2025+_ and to 10^-13 when the physical qubit number
reaches 10^6. This kind of scaling of logical qubit error rates with the number of physical qubits
and the size of the maximum possible circuits should be more widely adopted.
Lastly, judging by QuEra's publication, the distinction between error correction and mere error detection needs
to be clarified, and judging by the Quantinuum publication, the exact method of measurement (at a specific
circuit depth or averaged) required to qualify as a logical qubit needs to be defined as well.
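As context for such error rate targets, the following sketch applies the standard surface-code scaling argument (a textbook approximation of my own choosing, not any vendor's published model): the logical error rate falls exponentially with the code distance d, while the physical qubit cost per logical qubit grows as roughly 2d^2.
```
def surface_code_estimate(physical_error, target_logical_error,
                          threshold=0.01, prefactor=0.1):
    """Textbook approximation: p_logical ~ prefactor * (p/threshold)**((d+1)/2).
    Returns the smallest odd code distance d meeting the target and the
    approximate physical qubits per logical qubit (~2*d*d for a rotated surface code)."""
    d = 3
    while prefactor * (physical_error / threshold) ** ((d + 1) / 2) > target_logical_error:
        d += 2                          # surface-code distances are odd
    return d, 2 * d * d

# Illustrative numbers only: a 10^-3 physical error rate targeting a 10^-6 logical error rate.
d, overhead = surface_code_estimate(1e-3, 1e-6)
print(d, overhead)    # distance 9 and ~162 physical qubits per logical qubit under these assumptions
```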
### 4.3. Application-Level Benchmarks
Similarly to classical computing, a logical way to benchmark performance for quantum computers
is via running relevant algorithms on the respective machines. For this purpose, several
benchmarks have been created.
Since, in the NISQ era, variational algorithms like VQE are more prevalent, a lot of benchmarks
have been created in that space. For example, a chemical-simulation-specific VQE benchmark was
introduced by McCaskey et al. (McCaskey, Parks, & Jakowski, 2019), as well as QPack, F-VQE,
VQF, and other VQE or variational benchmarks. However, since these are limited to variational
methods, which will not be useful in FTQC, these benchmarks will not persist.
An interesting approach has been presented by Lubinski et al. (Lubinski, et al., 2021), from a consortium
around QED-C, Princeton, and some of the hardware manufacturers, as an aggregation of different
benchmarks running a plethora of important quantum algorithms (e.g., Deutsch-Jozsa, Bernstein-
Vazirani, Quantum Fourier Transform, Grover's, Hamiltonian Simulation, Monte Carlo Sampling,
VQE and Shor's) on different hardware as in Figure 19, determining a _volumetric benchmark_ for
each system via their normalized circuit width and depth. This approach has been picked up and
modified by various manufacturers e.g., as discussed earlier by IonQ to determine Algorithmic
Qubits, and Quantinuum to criticize AQ. However, this method has similar limitations to
Quantum Volume, as a classical computer needs to validate the results.
_Figure 19: Results from running the suite of application-oriented benchmarks on a quantum
simulator (colored squares) on top of a volumetric background (grey-scale squares) extrapolated
from the quantum volume. The simulator uses all-to-all qubit connectivity and two different
error rate scales that result in quantum volumes of 32 and 2048, as shown in each plot. Source:
(Lubinski, et al., 2021)._
Inspired by the LINPACK benchmark used in classical high-performance computing, Dong and
Lin (Dong & Lin, 2020) introduced a quantum-equivalent benchmark called RAndom Circuit
Block-Encoded Matrix (RACBEM). Atos improved upon this benchmark and presented the Q-
Score in 2021 (Martiel, Ayral, & Allouche, 2021). The Q-score measures the maximum number of
qubits that a quantum computer can use to solve the Max Cut^21 problem using QAOA. The Q-
Score was recently improved to work on photonics devices and annealers (Schoot, Wezeman,
(^21) Maximum-cut is an NP-hard problem: finding a cut in a graph whose size is at least as large as the
size of any other cut.
Neumann, Phillipson, & Kooij, 2023)^22. These metrics have been used to benchmark some existing
analog computers like the D-Wave Advantage, reaching a score of 140 (Schoot, Leermakers,
Wezeman, Neumann, & Phillipson, 2022), Pasqal's neutral atom system reaching a score of 80
(Coelho, D'Arcangelo, & Henry, 2022), and IQM's machine reaching 11. However, these metrics have not
been adopted yet in the materials and publications of gate-based manufacturers, presumably
because their scores have come in much lower, in the range of 20. Another criticism is that QAOA is
a variational algorithm with little practical application in FTQC, so it does not give a good
indication for any drug discovery uses.
### 4.4. Summary of Benchmarks and Industry Perspective
A wide variety of benchmarks are used by both academia and manufacturers to signal progress in
quantum hardware capabilities. There is no consensus on aggregated or application-specific
benchmarks, and even fundamental benchmarks are not used consistently across all
manufacturers. Application-based benchmarks will certainly remain dynamic over time to
accommodate advancements in hardware architecture but also shifts in the applications of interest,
similarly to how current classical benchmarks have shifted towards AI. Especially as we move
from the NISQ to the FTQC era, the focus will shift away from VQE and QAOA benchmarks
towards QPE for chemical simulations and other useful algorithms like HHL, QML, Grover's, Shor's,
etc.
My observation is that manufacturers pick individual benchmarks supportive of their modality
even if they are clearly flawed, like Quantum Volume or Algorithmic Qubits, while ignoring others
that are less flattering.
At the same time, a key theme from the conducted interviews is that end consumers do not focus
on any specific metrics and that it is difficult to measure and compare results from individual
providers without doing actual proof of concepts on the hardware. Physical-level benchmarks like
fidelity or qubits are not considered particularly useful stand-alone, as they would require at least
a comparison in relation to each other. Due to the sheer number of benchmarks and the constant
churn in their definitions, general confusion and even mistrust towards these benchmarks
have developed among end users. Due to that, the primary focus of pharmaceutical end
customers in benchmarking hardware has shifted to three approaches:
1) Building relationships with trustworthy manufacturers who have a perceived potential to
achieve quantum advantage for their chosen applications.
2) Running proof-of-concepts with a variety of modalities and comparing the results in
practice.
3) Monitoring and awaiting signals from agencies like DARPA/IARPA and from companies in
energy, logistics, finance, chemicals, and materials science, since they all have use cases
that are easier to materialize than drug discovery use cases.
(^22) Another recently proposed interesting improvement to RACBEM is QASM Bench (Li, Stein,
Krishnamoorthy, & Ang, 2023), which has not seen much adoption, so will not be highlighted here further.
Building on these insights, the quantum hardware industry would be well-advised to align on
universal benchmarks to make comparisons between modalities and against classical computing
easier and more useful for end customers, as well as to introduce economic benchmarks. This would
allow easier comparison of spec sheets and reduce the lock-in of end customers to individual
hardware manufacturers.
To realize that goal, a quantum equivalent to the classical computing TOP500 HPC ranking and
the introduction of a standardized method like LINPACK could be one approach. Other
organizations like DARPA^23 and the UK government's National Quantum Computing Centre
have their own benchmark groups and could take the lead in fair, comprehensive benchmarking^24,
supported by NIST and the QED-C consortium it initiated.
## 5. Quantum Advantage
### 5.1. Quantum Advantage for Algorithms and Hardware
Quantum computers are special-purpose machines executing a limited number of interesting
algorithms which outperform their classical counterparts. The term quantum advantage is used to
determine if quantum computers perform better than classical ones. IBM (Kim, Eddins, & Anand,
2023) defines it as follows:
Quantum advantage can be approached in two steps: first, by demonstrating the ability of
existing devices to perform accurate computations at a scale that lies beyond brute-force
classical simulation, and second, by finding problems with associated quantum circuits that
derive an advantage from these devices.
Quantum algorithms in the BQP space can have a quadratic, superpolynomial, or exponential
speedup over classical algorithms. However, a dynamic race exists between newly invented quantum
algorithms with quantum advantage and newly invented classical algorithms reducing said
(^23) Recently enlisting Riverlane, a quantum software company (Manoff, 2024), to develop those benchmarks.
(^24) Interestingly, NQCC uses quantum volume and logical qubits in their current roadmap, so better
benchmarks are necessary for them as well.
advantage. Classical algorithms with approximate ML methods in combination with classical
hardware can become good enough for practical applications and problem sizes, making their
BQP counterparts practically useless in that space. Furthermore, since it has not been proven
that any of the quantum algorithms in BQP will have a permanent advantage over classical
algorithms in P or BPP, the race for algorithmic advantage will remain relatively dynamic in the
future.
Next to algorithmic advantage, one must also examine whether the hardware can reach quantum
advantage by running those algorithms in a time frame shorter than the equivalent runtime on a
classical supercomputer. This hardware quantum advantage is a much more dynamic target than
the algorithmic quantum advantage discussed earlier since classical high-performance clusters
(HPCs) continuously evolve, and of course, so does QC hardware. It is also important to note
that practical improvements in classical computing performance can come from improvements in
parallelization, hardware architecture, and software performance engineering (Leiserson, 2020),
further fueling this race.
In recent history, several announcements have been made on reaching _quantum advantage_ and
_supremacy_^25 by major players. For example, Google (Arute, Arya, & Babbush, 2019) claimed in
2019 to have achieved quantum advantage on the Sycamore 53-qubit model with a three-minute
calculation that would have taken thousands of years on a classical computer. Just three months
later, IBM (Pednault, Gunnels, Nannicini, Horesh, & Wisnieff, 2019) countered that claim with
evidence that classical computers could do the same calculation in just days by optimizing memory
storage. Specifically in quantum simulations, classical tensor networks are proving to be capable
and accurate methods. For example, Tindall et al. (Tindall, Fishman, Stoudenmire, & Sels, 2024)
produced more accurate solutions to the same spin dynamics problem using tensor networks than
IBM's 127-qubit Eagle processor did.
On the more optimistic side, there are promising papers such as those from Noh et al. (Noh, Jiang, &
Fefferman, 2020) and Cheng et al. (Cheng, et al., 2021) on efficient simulations on noisy quantum
hardware. There is also the photonic experiment from Xanadu (Madsen, et al., 2022) achieving
quantum advantage but in a less relevant area of Gaussian boson sampling with no practical
applications. IBM has been more cautious with using claims of quantum advantage but has used
the Eagle processor (Kim, Eddins, & Anand, 2023) to show that there is a path toward quantum
(^25) The term quantum supremacy was coined by John Preskill in 2012, but others, including himself, have
recognized the controversial association of this term (Preskill, 2019). As such, I am not going to use this
term other than in the context of citations, and instead use quantum advantage, which has been used
prevalently.
advantage. The simulation was of an oversimplified, unrealistic model of a material, but the study
indicated that by increasing the fidelity and qubit number of the hardware, valuable applications
can be achieved.
Another recent development from King et al. (King, et al., 2024) claimed quantum advantage in
quantum simulations on annealing hardware. Such papers from hardware providers (in this case,
D-Wave) should be approached with caution, especially in the pre-peer-review phase. While the
authors claim an advantage over classical computing simulations, it has to be noted that the spin-
glass systems studied in the paper are fairly basic and nowhere near the complexity of
materials like a LiHoF4 magnetic alloy, which would be interesting in real-world material science
research (Schechter, 2023).
Beyond the terms of NISQ and FTQC, several frameworks are used to describe the different eras
of quantum computing. For example, Microsoft (Nayak, 2023) uses three levels:
Level 1โFoundational: Quantum systems that run on noisy physical qubits which includes
all of todayโs Noisy Intermediate Scale Quantum (NISQ) computers.
Level 2โResilient: Quantum systems that operate on reliable logical qubits.
Level 3โScale: Quantum supercomputers that can solve impactful problems which even
the most powerful classical supercomputers cannot.
Microsoft (Zander, 2023) claimed that all manufacturers are still in Level 1, but more recently
claimed to have advanced into Level 2 with the achievement of four logical qubits (Zander, 2024)
(Silva, et al., 2024). Similarly to the aggregations of metrics introduced by manufacturers, such
claims should be approached with caution. In this case, it is not clear why Microsoft and
Quantinuum are claimed to have crossed into the Level 2 resilient computing phase with their
four logical qubits, when IBM and QuEra published logical qubits earlier^26.
Whatever the case, it is certain that quantum advantage claims will continue to be made frequently by
different parties, but these do not necessarily translate into any actual advantage for the end customer.
### 5.2. Application-Specific Quantum Advantage
Since different quantum hardware modalities have advantages for different algorithms and use
cases, another angle to consider is a quantum advantage in the context of a specific application.
(^26) Especially in the context of my earlier criticisms for logical qubit claims for Quantinuum, QuEra, and
IBM.
In molecular chemistry, as per Elfving et al. (Elfving, et al., 2020), quantum advantage means that
quantum computers must be more proficient in at least one of three dimensions: speed, accuracy, and
molecule size. However, some types of quantum advantage are irrelevant, for example, because
easy and accurate physical/chemical experiments are possible and, as such, no simulations are
necessary, or because there is no practical application for simulating particular molecules. While seemingly
obvious, it is essential to refine our definition of quantum advantage to exclude such cases, which
happen to be (mis-)used by a few publications mentioned earlier.
Application-specific benchmarks can test quantum advantage for specific applications. However,
this advantage is only achieved for problems above a minimal size, so many use cases, e.g., searches
over short strings, may simply be too small for a quantum computer to pay off.
**Minimal Problem sizes for quantum speedups**
Choi et al. (Choi, Moses, & Thompson, 2023) introduce a formula to calculate speed advantages
for quantum vs classical computers, comparing their gate speeds, logical-to-physical qubit ratio, and
the algorithmic speedup. With this formula, the number of logical qubits necessary to
achieve quantum advantage for a particular algorithm at a given time can be calculated. The
formula gives the net speed advantage of a classical computer over a quantum computer through
```
C = Cspeed × Cgate overhead × Calg. constant
```
Where:
- Cspeed is the ratio of a classical computer's gate speed divided by the quantum computer's
gate speed. The authors assume speeds for classical=5GHz vs Superconducting
Quantum=200MHz, which brings CSpeed to 10^5. Trapped ions and neutral atoms have
much slower gate speeds as discussed in Figure 12. I expect this constant to improve more
strongly for quantum gate speed, while classical computing gate speed has stabilized at
the 5GHz range.
- Cgate overhead: the additional work a quantum computer needs to do to create a
logical qubit, i.e., the ratio of physical to logical qubits. Cgate overhead was assumed to be 10^2 in the
paper, which is a useful approximation^27. Again, this is a dynamic ratio that may improve
with better hardware. Even though this is not explicitly mentioned in the paper, the
number of shots necessary to perform error correction is another multiplicative factor to
be included here, so this factor should be about two orders of magnitude higher with current techniques.
(^27) Even though the demonstrations discussed earlier suggest fewer than ten physical qubits per logical qubit
for trapped ions and neutral atoms, and about 24 for superconducting qubits, I criticized those current
definitions of logical qubits, so a 1:100 ratio is a reasonable average.
- Calg. constant: This is the multiplicative ratio of the classical algorithm's time complexity
divided by the complexity of the quantum algorithm. For example, for Grover's, the
speedup ratio is linear vs. square root, and for Shor's, it is exponential vs. quadratic. This ratio
remains constant as long as no new algorithms are invented for classical or quantum
computing for the respective use case. However, it is important to take all realistic
parameters of a quantum algorithm into account for this ratio, specifically the time to read in
the data from a classical computer (for Grover's, this is O(n), reducing the speedup advantage) and
the general overhead of a hybrid classical/quantum setup, as highlighted by (Neukart,
2024). Finally, the number of shots needed to execute the algorithm must be considered; for
Grover's, it is 1, but for, e.g., VQE, it is a multiplicative factor.
Comparing the runtimes of a classical and an equivalent quantum algorithm, f(n) and g(n), solving
f(n*) = C·g(n*) gives the minimal problem size n* at which quantum computing can provide a speedup.
Based on that definition, Choi et al. found that quantum computers would create an economic
advantage for Grover's (searching a string of text for a particular sub-string like a DNA sequence)
at a minimal problem size of 10^12. It is essential to remember that any advantage here holds
only for problems larger than the calculated size. In the case of drug discovery, this size
is in the ballpark for genomic searching, so an effective speedup is expected.
Choi et al. also conclude that this would be achieved with O(log2(10^12)) ≈ 40 logical qubits, as per
Figure 20. However, this estimate does not include the time for reading in the data, hybrid integration,
or imperfect qubit connectivity; the required logical qubit number would, as such, be higher.
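To make this crossover calculation concrete, the following Python sketch solves f(n*) = C·g(n*) numerically for a Grover-style speedup. The constants are illustrative assumptions chosen only to reproduce the orders of magnitude discussed above, not the exact parameters used by Choi et al., and the algorithmic ratio enters through f(n) and g(n) rather than as a separate constant.
```
import math

def minimal_problem_size(c_speed, c_gate_overhead, f_classical, g_quantum,
                         n_lo=1.0, n_hi=1e30, iters=200):
    """Bisection for the crossover size n* where C * g(n*) = f(n*),
    with C = c_speed * c_gate_overhead. Assumes f outgrows g."""
    C = c_speed * c_gate_overhead
    lo, hi = n_lo, n_hi
    for _ in range(iters):
        mid = math.sqrt(lo * hi)                 # geometric midpoint: sizes span many orders
        if f_classical(mid) < C * g_quantum(mid):
            lo = mid                             # classical still faster -> crossover is larger
        else:
            hi = mid
    return hi

# Grover-style speedup: classical unstructured search ~ n steps, quantum ~ sqrt(n) iterations.
n_star = minimal_problem_size(
    c_speed=1e4,           # assumed classical/quantum gate-speed ratio (illustrative only)
    c_gate_overhead=1e2,   # assumed error-correction overhead (illustrative only)
    f_classical=lambda n: n,
    g_quantum=lambda n: math.sqrt(n),
)
# For C = 1e6 the crossover is n* = C**2 = 1e12, needing roughly log2(1e12) ~ 40 logical qubits.
print(f"crossover problem size n* ~ 10^{math.log10(n_star):.0f}")
print(f"register size ~ {math.ceil(math.log2(n_star))} logical qubits")
```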
_Figure 20: Conceptual representation of why a better quantum algorithm only outpaces a
classical one when problem sizes are sufficiently large, based on the example of Grover's
algorithm. Source: (Choi, Moses, & Thompson, 2023)_
**Problem sizes beyond classical limits**
The examined formula gives an understanding of the minimal sizes of problems that can be
executed faster and more cost-effectively. The good news is that for Drug Discovery, most useful
problems are larger than the minimal problem size. As such, the flipside angle to explore is the
size of the smallest interesting and useful problems to tackle. As mentioned earlier, modern high-
performance computers are limited at simulating circuits with around 50 (logical) qubits (Boixo,
et al., 2018), which will create a lower bound for this analysis. As visualized in Figure 21, a
machine with 50 logical qubits and 1250 layers of gates (in this case Mølmer-Sørensen gates as
used by IonQ) at a _triple-nine_ fidelity of 99.9% will achieve a quantum advantage.
_Figure 21: Quantum Advantage demonstrated at 50 logical qubits with a gate depth of 50. Source:
(Leviatan, et al., 2024)_
Searching for use cases above 50 logical qubits, there are quantum simulations of Fermi-Hubbard
(FH), a simplified model of electrons in materials that can be applied to study superconductivity
at high temperatures and metallic and insulating behaviors. These simulations would have more
applicability for material sciences than for drug discovery but are the earliest demonstrations of
quantum advantage as per the comprehensive algorithm analysis from Dalzell et al. (Dalzell, et
al., 2023). The smallest FH problem can be tackled with about 100 logical qubits to address some
questions on the physics of jellium, while more useful cases require around 200-250 logical qubits
to determine the dynamics and ground state energy of a 2D 10x10 model, a simplified
description of electron interactions in a solid material (Kivlichan, et al., 2020).
Looking at more practical applications in chemistry, Reiher et al. (Reiher, Wiebe, Svore, & Troyer,
2017) estimated the resources for a useful simulation of biological nitrogen fixation by the enzyme
nitrogenase, which could be used for fertilizer production, a much-hyped use case for quantum computing.
In their work, they conclude that around 110 logical qubits would be needed for this simulation. However, it
should be noted that their baseline estimations in 2017 for physical qubits, gate speeds, and error
rates are far off from today's benchmarks. Additionally, Elfving et al. (Elfving, et al., 2020)
criticized that the assumptions made by Reiher et al. underestimated the complexity of doing a
practical simulation for that molecule, so a much higher threshold for a successful simulation can
be assumed.
A good way of determining quantum advantage in quantum chemistry is by using methods to
calculate Complete Active Spaces (CAS) in the dimensions of (N, M) complexity, with N electrons
in M spatial orbitals, calculating configuration interaction (CI) and coupled clusters (CC). The
classical methods calculating CAS reach their limits around CAS (26, 26), CI at around (24, 24),
and CC at (90, 1569) with 53 logical qubits for the qubitization strategy and 1366 for the
trotterisation strategy^28. The quantum advantage can thus come at around that threshold,
although interesting applications might only start at much higher values.
For example, looking at the quantum advantage milestone for Reiher's FeMoco model, Elfving
et al. suggest a CAS (113, 76) is needed, instead of the CAS(50, 50) as proposed by Reiher,
elevating the number of logical qubits to the tens or hundreds of thousands instead of the few
hundred originally proposed, as seen in Figure 22.
_Figure 22: Comparison of molecular features relevant for possible short-term quantum computer
applications and a schematic placement of their CASs estimated to be necessary for achieving
chemical accuracy. Source: (Elfving, et al., 2020)_
(^28) Trotterisation uses the full Hamiltonian, and qubitisation uses a truncated Hamiltonian to remove small
terms up to an error budget, dramatically improving execution time performance by up to a factor of
45000x (Blunt, et al., 2022).
Another use case was explored by Burg et al. (Burg, et al., 2020): a ruthenium catalyst that can
transform carbon dioxide into methanol, with an approximate CAS between the sizes of (48-76, 52-
65), requiring around 4000 logical qubits.
Another study (Goings, et al., 2022) calculated that assessing the electronic structure of
cytochrome P450 would require about 1400-2100 logical qubits. Also, Blunt et al. (Blunt, et al.,
2022) determined ~2200 qubits for simulating CAS (32,32) on Ibrutinib, a drug approved for the
treatment of non-Hodgkin lymphoma by the FDA in 2015.
Looking at other examples in quantum chemistry, simulations of ethylene carbonate or LiPF6 also
require logical qubits in the ~2000 range (Su, Berry, Wiebe, Rubin, & Babbush, 2021). A good
overview of different use cases is provided by Kim et al. (Kim, et al., 2022), as per Figure 23, including
another valuable use case, lithium-ion batteries, at ~2000-3000 logical qubits, compared with
other applications.
_Figure 23: Ratio of magic state distillation (MSD) footprint to total computational footprint for
different numbers of qubits and T-count (Toffoli gates). Footprint is measured in terms of
number of RSGs required. Includes Simulation of the Fermi-Hubbard model; of crystalline
materials; of FeMoco; and breaking RSA encryption. Source: (Kim, et al., 2022)_
An interesting and prevalent use case referenced in Figure 23 is RSA 2048 encryption, requiring
14-20k logical qubits to execute the factorization in a realistic timeframe (Gidney & Ekerå, 2021).
However, this number has come down to ~2,900 logical qubits with improved circuits (Häner,
Jaques, Naehrig, Roetteler, & Soeken, 2020).
In addition to logical qubits, another dimension that needs to be mentioned is the maximum number of T
gates (a single-qubit non-Clifford gate) and Toffoli gates required to run these simulations, given
in Figure 24. For a CAS(60, 60) simulation, 10^11 Toffoli and 10^16 T gates are required, which is very
far away from the circuit sizes of 5x10^3 indicated by IBM for its current Heron machine.
_Figure 24: From left to right: Number of T Gates, Toffoli Gates, and Logical Qubits required for
CAS(N, N) simulations with N orbitals. Source: (Elfving, et al., 2020)_
Lastly, in addition to logical qubits and circuit sizes, fast gate speeds and appropriate
architectures are required to run QPE efficiently. For example, a neutral atom machine with slow
gate speeds but a huge number of logical qubits may not be useful for drug discovery at the logical
qubit and Toffoli gate scales mentioned. At the current gate speed difference, a simulation that
would take 4 hours on superconducting machines would take 24 weeks on neutral atom machines
(assuming all other physical benchmarks like coherence, logical qubits, and circuit sizes were equal,
and appropriate parallelization and hybrid computing is executed to even allow such long
calculations). Considering a realistic scenario in drug discovery, waiting for weeks for simulation
results would make any theoretical quantum advantage obsolete, as the company would use
classical CADD or wet labs instead. This also makes the _total time to solution_ an important factor
to consider for achieving _application-specific quantum advantage_.
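As a rough illustration of the total-time-to-solution argument, the sketch below simply scales a reference runtime by an assumed gate-speed ratio. The ~1000x factor is the one implied by the 4-hour vs. 24-week comparison above, not a measured benchmark, and the linear scaling is itself an assumption:

```python
# Illustrative sketch: how total time to solution scales with gate speed,
# assuming (as above) that all other benchmarks are equal and that runtime
# scales linearly with gate time.

def time_to_solution_hours(reference_hours: float, gate_speed_ratio: float) -> float:
    """Scale a reference runtime by the relative gate-speed disadvantage."""
    return reference_hours * gate_speed_ratio

superconducting_hours = 4.0   # reference runtime from the example above
gate_speed_ratio = 1000.0     # assumed ~1000x slower gates (illustrative)

neutral_atom_hours = time_to_solution_hours(superconducting_hours, gate_speed_ratio)
print(f"{neutral_atom_hours / (24 * 7):.1f} weeks")  # ~23.8 weeks, matching the ~24 weeks above
```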
Based on these insights, the goalposts of around 2,000-4,000 logical qubits and circuit sizes of 10^11 are
a far cry from today's handful of logical qubits, even with a very generous definition of that term,
as discussed earlier. This is also reflected in the current situation in Life Sciences, where there is no
use case currently in deployment, as per Figure 25.
_Figure 25: Number of use cases distributed across industries and for each of the industries also
segmented by the implementation status. Life Sciences does not have a single use case in
deployment. Source: (Baczyk, 2024)_
### 5.3. Quantum Economic Advantage
**Computation Cost**

Beyond pure quantum advantage, a key consideration for end users is a comparison of quantum
to classical computing that can translate to a positive ROI for them. The most straightforward
aspect to consider is the cost of computation, i.e., calculations per second per $1,000, as Kurzweil
used (Kurzweil, 2005) for classical computing in Figure 26.

_Figure 26: Progress in computing since 1900 in calculations per second per 1000$. Source:
(Kurzweil, 2005, p. 67)_
The same metric can be applied to quantum computing by comparing the dollars required for
calculations with quantum algorithms on hardware^29, showing a reduction of quantum computing cost over time.
**Net Quantum Economic Advantage**
Formalizing this aspect, Choi et al. (Choi, Moses, & Thompson, 2023) introduced the term
**quantum economic advantage** (QEA):
```
[is achieved] if there exists a quantum computer that can solve a problem instance for $X
and all existing classical systems with a computation budget $X or less would solve the
problem in more time.
```
This definition considers the relative cost per hour of classical vs. quantum computing. To calculate the
_computation_ cost savings, the difference in total computation cost (hourly rate times runtime) is used:
```
Computation cost savings = (Classical cost/hr × Classical hours) − (Quantum cost/hr × Quantum hours)
```
For example, a quantum calculation of 1 hr at $200/hr vs. a classical computation of 100 hrs at
$10/hr gives us a computation cost saving of $800 for this task.
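A minimal sketch of this calculation in Python, reproducing the worked example above:

```python
# Minimal sketch of the computation cost savings formula above.

def computation_cost_savings(classical_rate: float, classical_hours: float,
                             quantum_rate: float, quantum_hours: float) -> float:
    """Classical total cost minus quantum total cost for the same task."""
    return classical_rate * classical_hours - quantum_rate * quantum_hours

# Worked example from the text: 1 hr of quantum at $200/hr vs. 100 hrs classical at $10/hr
print(computation_cost_savings(10, 100, 200, 1))  # 800.0 -> $800 saved
```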
This formula gives us an economic perspective on the cost of computation for a particular
calculation. However, for many businesses like drug discovery companies, beyond the pure
**computation cost savings**, there are potentially much higher **cost savings** and **revenue
increases** from executing algorithms faster or tackling larger problems. If the lead drug
candidate can be identified more quickly, or a better candidate is identified, the chances of success,
efficacy, and safety for that trial can be improved. By saving one week in total time until
commercialization, an additional revenue of ~$20m (roughly $1B divided by 52 weeks) can be
generated for a blockbuster drug with $1B in annual sales. This angle of economic advantage was
examined by Bova et al. (Bova, Goldfarb, & Melko, 2022), who argue that QEA should also consider
the speed advantage of the quantum calculation vs. the classical one, even when the quantum
calculation is more expensive.
On the cost side, due to the inherent complexities and overhead of integrating quantum
computers with classical ones, a razor-thin computation cost-saving advantage
will generate no immediate positive ROI for a traditional drug discovery company switching its
use cases to quantum. For that, one must also consider **Quantum Investments**, i.e., the
overhead of training quantum technical and business teams, building and maintaining software
(and possibly hardware^30), and bug-fixing and integration efforts. Additionally, one must consider
(^29) Clearly, the number of calculations is not directly comparable to classical computing, since those for
quantum algorithms should be exponentially fewer than for their classical counterparts (e.g., Shor's).
(^30) As discussed in Section 3.2, businesses today prefer cloud services, so hardware investments may be quite limited.
the **Opportunity Costs** of lost productivity in the current R&D pipeline during the transition
time, when key business and technical staff and resources are diverted from it. We can formalize
both costs and benefits for a particular use case with an updated definition of the QEA, which I
call the _net quantum economic advantage_ for a project:
```
Net quantum economic advantage for a project =
(Computation cost savings + Business cost savings + Revenue increases)
/ (Quantum Investments + Opportunity Costs)
```
This formula calculates a **direct ROI** of applying quantum computing to a particular use case.
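A minimal sketch of this project-level calculation follows; all figures are illustrative placeholders, with the weekly revenue term corresponding to the blockbuster example above and the $800 computation saving to the earlier worked example:

```python
# Minimal sketch of the net quantum economic advantage (direct ROI) for a project.
# All numbers are illustrative placeholders, not estimates.

def net_quantum_economic_advantage(computation_cost_savings: float,
                                   business_cost_savings: float,
                                   revenue_increase: float,
                                   quantum_investments: float,
                                   opportunity_costs: float) -> float:
    """Ratio of total benefits to total costs for a single quantum use case."""
    benefits = computation_cost_savings + business_cost_savings + revenue_increase
    costs = quantum_investments + opportunity_costs
    return benefits / costs

# Example: a use case that shaves one week off time-to-market for a $1B/yr drug
weekly_revenue = 1_000_000_000 / 52          # ~$19m, the "~$20m" figure above
roi = net_quantum_economic_advantage(
    computation_cost_savings=800,            # from the worked example above
    business_cost_savings=0,
    revenue_increase=weekly_revenue,
    quantum_investments=15_000_000,          # team, software, integration (illustrative)
    opportunity_costs=5_000_000,             # pipeline productivity loss (illustrative)
)
print(f"direct ROI ratio: {roi:.2f}")        # a ratio > 1 would indicate a net advantage
```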
Beyond the direct ROI for each project, there are **indirect benefits** to using quantum computing
for the whole company. As prominently seen in the current GenAI boom, there is an early
innovator advantage:
- Investments and the **cost of entry** are proportionally cheaper early on. Building a
quantum team from scratch and executing this framework would take at least a few years.
- Quantum **Talent** is easier to attract early, as demand and salaries will rise while getting
closer to quantum economic advantage. As highlighted in a McKinsey report (Mohr, Peltz,
Zemmel, & Zesko, 2022), the gap for quantum talent will increase strongly, and early
investments in upskilling are critical to keep up with the demand.
- Having a strong quantum strategy can **increase the attractiveness** of the company to
hotly contested non-quantum talent, e.g., in the AI and Data Science space, but also
chemists, biologists, and business talent.
- In case quantum use cases don't evolve as fast as predicted, there is a **hedged bets**
element in that learnings from the team can be applied to classical and AI CADD. An
example of this is the company Zapata, which pivoted from specializing in CADD quantum
software to providing quantum-inspired classical CADD software and services.
- Unlike classical code, quantum circuits can be patented under current
patent law. **Patents** can give a competitive advantage to an early innovator, as they could
hinder the entry of competitors using the same circuits for their applications. In fact, Wells
Fargo holds one of the highest numbers of quantum patents, close to some hardware
manufacturers. However, it remains to be seen how quantum circuit patent lawsuits play
out in court once economic advantage is reached. Also, patents might not be the best way
to generate a competitive advantage; it could be more beneficial to keep trade secrets
or even defensively publish circuits to prevent patenting.
**Summary**
Based on the above definitions, there are several milestones for quantum advantage from the
perspective of an end customer:
- **Quantum advantage** : a useful algorithm outperforms a classical counterpart
(algorithmic view).
- **Net quantum economic advantage for the company**: QC is introduced to the
company across multiple use cases and generates a net positive effect for the company,
considering direct ROI and indirect benefits.
- **Net quantum economic advantage for a project**: QC is applied to a specific use case
and directly generates a positive ROI for it.
Visualizing this, one can project the progression of quantum performance over time, assuming an
exponential increase of performance per USD. The exact quantum performance metric does
not have to be specified, but it has to be relevant for benchmarking performance for a use case, e.g.,
QPE. With these assumptions, a timeline landscape like the one in Figure 27 can be drawn. I will use this
graph in Section 7 for the framework on decision-making for investments in QC.
_Figure 27: Quantum advantage versus economic quantum advantage, assuming a Moore's law-
like exponential evolution of a suitable Quantum Performance Benchmark / USD on a linear
performance scale. The used Q Benchmark is not specified, but could be e.g., an aggregation of
Logical qubits, maximum Toffoli gates, Gate Speed and architectural benefits, to indicate
suitability for executing a particular algorithm like QPE._
## 6. Trajectory and Roadmaps of Quantum Computing
Accurate predictions about technological progress are impossible to make, especially for such a
nascent technology. However, a few data points can give insight into current trends. A key
question in the quantum computing industry is whether a Moore's law equivalent can be observed.
_Rose's law_, named after Geordie Rose, the founder of D-Wave, has shown a similar pattern in qubit
number extrapolation for adiabatic qubits, at least until 2020 (Tanburn, 2014). However, as
mentioned in Section 3, we don't have a good theoretical basis to assume this scaling can continue
beyond NISQ and into interesting applications of QAOA requiring ~600 logical qubits, such as 1st-
order Trotter adiabatic algorithms (Sanders, et al., 2020).
Similar _Moore-esque_ patterns have been seen for other metrics like Quantum Volume, Fidelity,
and Coherence. The same goes for qubit number, so by extrapolating from progress made so far
by IonQ and IBM, a similar pattern of growth is observed in Figure 28.
_Figure 28: Plots of the number of physical qubits of IBM & IonQ over time based on their
published roadmaps and extrapolations. Source: (Choi, Moses, & Thompson, 2023)._
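A minimal sketch of the kind of Moore's-law-style extrapolation behind such plots is shown below; the starting count and doubling period are chosen purely for illustration and are not vendor data:

```python
# Illustrative sketch of a Moore's-law-style extrapolation of qubit counts.
# Starting point and doubling period are placeholders, not vendor data.

def extrapolate_qubits(start_year: int, start_qubits: float,
                       doubling_period_years: float, target_year: int) -> float:
    """Project a qubit count forward assuming a constant doubling period."""
    elapsed = target_year - start_year
    return start_qubits * 2 ** (elapsed / doubling_period_years)

for year in (2026, 2030, 2034):
    print(year, round(extrapolate_qubits(2024, 1_000, 1.5, year)))
```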
However, it is yet to be seen if the more important benchmarks of Toffoli Gates and Logical
Qubits show a similar increase. Since we are only in the domain of a few logical qubits, a big move
is required in the number of qubits and fidelity, as seen in Figure 29, especially considering the
requirements for applications in Quantum Chemistry and Drug Discovery.
_Figure 29: from NISQ to FTQC and logical qubit error rate requirements. Source: (Jong, 2019)
with annotations from (Ezratty, 2023)_
Comparing the earlier goalposts of a few thousand logical qubits with today's results, we are still
far away from these goalposts, as examined in Figure 30.
_Figure 30: the paths to FTQC and NISQ. The number of physical qubits per logical qubits taken
here in the left of the chart is based on typical assumptions on algorithms depth, which condition
this ratio. However, the "usable zone" used in this figure is very lenient, starting at 100 logical
qubits at least for Drug Discovery as discussed in section 4.3. Source: (Ezratty, 2023)_
To continue these exponential increases, significant engineering challenges and potentially physical
limits need to be overcome, as e.g., Ezratty (Ezratty, 2023) has highlighted, and similarly GQI in
Figure 31.
_Figure 31: Challenges across the hardware stack of different modalities. Source: (Global
Quantum Intelligence (GQI), 2024)_
Based on today's insights, it is unclear whether existing architectures or modalities can reach the several
thousand logical qubit area, and that timeline is too far out to make any reasonable predictions
on the exact challenges that will be faced. However, manufacturer and national roadmaps can
give a better understanding.
**Roadmaps**
Looking at the roadmaps of major manufacturers, Google, IBM, QuEra, Infleqtion, and IonQ have
all presented goals with metrics, as per Figures 32-37. IBM and IonQ provide more exact timelines,
with IonQ reaching 256 and 1024 AQ in 2026 and 2028, respectively.
_Figure 32: IonQ's roadmap. Source: (IonQ, 2024)_
As I criticized earlier, the AQ metric has no exact mapping to logical qubits, so I
cannot yet speak to the usefulness of the 1024 AQ machine in relation to the chemistry quantum
advantage discussed in Section 4.3, although IonQ claims in Figure 32 that there are _chemistry
applications_.
_Figure 33: QuEra's roadmap. Source: (QuEra, 2024)_
_Figure 34: Infleqtion's roadmap. Source: (Siegelwax, 2024)_
Infleqtion is the latest contestant in the manufacturer race with its neutral atom machine,
planning to reach >100 logical qubits in 2028. However, neutral atom machines at scale have
many challenges to overcome, as discussed in Section 3.3, especially the very slow gate speeds. The
logical qubit targets discussed in Section 4.3 were oriented towards the higher speeds of the
superconducting and ion trap modalities, so we cannot apply them equally to neutral atoms.
_Figure 35: Google's Quantum Roadmap. Source: (Google Quantum AI, 2024)_
_Figure 36: IBM Quantum roadmap. Source: (IBM Newsroom, 2023)_
IBM uses (physical) qubits, gates, and CLOPS in its roadmap, none of which can be directly
mapped to logical qubits, although it could be assumed that the 2029 and 2033+ targets refer to
achieving 200 and 2,000 logical qubits, respectively, and not physical ones. Judging by circuit sizes,
2033+ would achieve circuit sizes of 10^9, which would not be sufficient for a CAS(60, 60)
simulation requiring 10^11 Toffoli gates, according to Figure 24. However, algorithm
and gate optimizations might make it possible to run larger simulations with 1B gates.
It should be noted that IBM has been consistent in reaching its announced timelines, so it is probably the
most reliable manufacturer prediction to date.
_Figure 37: Summary of all roadmap metrics used. Source: (Siegelwax, 2024)._
These indications for timelines are also supported by the UK Quantum Strategy (UK Department
of Science, Innovation and Technology, 2023). It formulated missions for the next decade targeting
simulation of chemical processes by 2028, demonstrating large-scale error correction capabilities
with a billion quantum operations, with applications including accelerated drug discovery by 2032,
and enabling applications such as optimizing the production of clean hydrogen by 2035.

Another perspective is to consider revenue and market size predictions from different research
institutes. GQI, in Figure 38, predicts drug discovery to be a negligible market for QC until the
early-to-mid 2030s, with a bigger peak in 2035.
_Figure 38: Quantum Computing QAM forecast for the 2024-2035 period. The total addressable
market is divided into specific Quantum Computing use case groups. Pharma Drug Discovery
simulations (in light blue) are expected to have a negligible market size until the early-to-mid 2030s,
with a big peak in 2035. Source: (Baczyk, 2024)_

Obviously, these are very long-term predictions and must be taken as an orientation rather than a
certain prediction. However, there is currently rough agreement between all these timelines,
with the early to mid-2030s giving us the first machines capable of running interesting,
application-specific workloads for drug discovery, with several thousand logical qubits and large
enough circuit sizes.
## 7. Framework for Investing in Quantum Computational Drug Discovery
### 7.1. Timing and Size of Investment
Using the insights from previous chapters, I will introduce a framework to consider investments
into quantum computing for drug discovery. As I have established so far, the most likely timeline
for direct-ROI applications of QC in drug discovery using QPE is in the timeframe of ~10 years.
Possibly, other algorithms could reach a positive net quantum economic advantage for drug
discovery in a shorter timeframe with fewer logical qubits and smaller circuit sizes. This framework
for the timing and phase of investments can be applied to any such algorithm whenever it is identified.
In that case, timelines discussed later can be adjusted based on the timelines for manufacturers
achieving net quantum economic advantage.
Staying with QPE, the timeframe of ~10 years is clearly a very long timeline for any technology,
and it is even harder to predict in this case, where multiple technologies converge. Considering this, the
biggest open question for a drug discovery company not addressed yet is the right timing and size
of investment into quantum computing.
Companies can reach different adoption stages for QC:
i. **Full quantum adoption**: companies have fully invested in QC, have sizable teams, and
use QC directly in their operational processes to achieve positive ROI. Today, no end
customer company, interviewed or otherwise, is at that stage, as Figure 25 suggests.
ii. **Active quantum research**: companies have created teams of 5-10+ full-time
employees dedicated to QC projects, with a budget, actively developing use cases, and
possibly publishing their research and filing QC patents. Four of the interviewed pharma
and chemical companies are in that stage, as well as three from other industries.
iii. **Active quantum monitoring**: companies have no dedicated quantum teams but actively
observe the progress of quantum computing. They have a dedicated quantum computing
lead for the company and are active participants in conferences and possibly quantum
consortia like QUTAC. Four of the interviewed pharma companies are in that stage.
iv. **Passive in quantum**: companies don't have any individuals formally monitoring the QC
space. None of the interviewed companies are in this category, but it is assumed to have
the most participants.
Interestingly, the interviewed companies in the middle two categories have similarly realistic
understandings of timelines, as none of them sees any positive direct-ROI use case of
QC in the NISQ era or in the next 3-5 years. One explanation for the difference in phase is the general
strategy of each company in positioning itself as a leader in driving supporting technology like AI
and QC (i.e., technology not directly related to the drug product itself). One interviewee of an
active quantum monitoring company stated, _it is hard to justify investments without ROI when
there's so much better ways to improve patient health short term with investing in AI for drug
discovery with direct ROI_.
Using the insights on timing from Section 6, the advantage milestones for drug discovery from Section
5.2, and the net quantum economic advantage from Section 5.3, and assuming manufacturer
milestones are correct, I will assume rough timings for reaching the milestones for QPE in drug
discovery: quantum advantage in 2026-2029 and net quantum economic advantage in 2032-2035+,
as in Figure 39.
_Figure 39: Investment decision points for companies using the Net Quantum Economic Advantage
framework for QPE for drug discovery. The estimated timelines for achieving these milestones
are based on manufacturer timelines analyzed in Section 6. The sizes of qubits necessary for each
era are based on sizes for useful Drug Discovery applications with QPE from Section 5.2. An
exponential increase in quantum performance to cost ratio is assumed as per Section 6. It is also
assumed 2 years for a company to onboard a functioning quantum team to utilize the next
milestone._
Considering the lead time needed to be able to exploit each milestone for their use cases, the right investment
time is around two years before reaching the respective milestone. Using the timelines from Section
6 and the insights on useful applications from Section 5 would imply the following strategy for
companies that would only apply QPE to their use cases^31.
Companies in the _active quantum research_ phase should start moving towards the _full quantum
adoption_ phase at the _latest_ two years prior to reaching net quantum economic advantage.
Considering only QPE in Drug Discovery use cases, the timing for that would be in the early
2030s. That would imply to move at the _latest_ to _active quantum research_ around 2028-2030, and
to _active quantum monitoring_ around 2026 -2028. However, these timelines would apply to an _early_
(^31) To note is that these are estimations based on manufacturer predictions and can change significantly with
progress or delays both on quantum and classical algorithms and hardware capabilities, as I have highlighted
previously.
_or late majority_^32 company adopting QC just in time when it makes direct economic sense for a
particular use case in Drug Discovery.
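A minimal sketch of this backward scheduling, using the milestone assumptions above and the ~2-year lead time per phase assumed in this framework (all years indicative only):

```python
# Illustrative sketch: back-calculate latest phase-entry years from the
# milestone assumptions above (net quantum economic advantage ~2032-2035+)
# and the ~2-year lead time per phase assumed in this framework.

LEAD_TIME_YEARS = 2

def latest_entry_year(milestone_year: int, phases_before_milestone: int) -> int:
    """Latest year to enter a phase, counting backwards from the milestone."""
    return milestone_year - phases_before_milestone * LEAD_TIME_YEARS

net_qea_year = 2032  # lower end of the assumed 2032-2035+ range

print("full quantum adoption by:    ", latest_entry_year(net_qea_year, 1))  # 2030
print("active quantum research by:  ", latest_entry_year(net_qea_year, 2))  # 2028
print("active quantum monitoring by:", latest_entry_year(net_qea_year, 3))  # 2026
```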
Instead, an early adopter company might want to move earlier so as not to lose indirect benefits, e.g.,
the economic advantage of a first mover or holding patents, or to apply QC to other areas with
lower requirements than drug discovery, like optimization. For such a company, the move to
_active quantum monitoring_ should be done now (2024), with the move to _active quantum research_
done at the latest around the point of reaching quantum advantage around 2026-2029, as shown
in Figure 40.
_Figure 40: Timelines for drug discovery companies entering respective QC phases considering
QPE for Drug Discovery._
For an early adopter, the timeline to move to _active quantum research_ can be much earlier. In
fact, some interviewed companies have already moved to this phase. These _innovator_ companies
expect higher indirect benefits and have the capability for early investments, bringing the _net
economic advantage plus indirect benefits_ milestone a few years closer than the pure _net economic
advantage_ milestone. However, these companies should have clear and realistic expectations and
timelines in mind to communicate to their stakeholders. Pharmaceutical companies are uniquely
positioned to make long-term bets like QC since their core business of drug discovery is inherently
based on decade-long bets on individual drug candidates. As such, clearly communicated
(^32) As per Rogers' model of Diffusion of Innovations (Rogers, 1962).
milestones based on calculated indirect benefits and direct ROIs can create a healthy long-term
investment portfolio for a pharmaceutical company^33.
A company should enter the _active quantum monitoring_ phase at least two years before switching
to the _active quantum research_ phase so that it is able to calculate the direct ROI and indirect
benefits. The recommendation for pharmaceutical companies in the passive category would be to
move to the active monitoring category at the latest in 2026, but that would mean that they
would miss utilizing any potential indirect benefits they are unaware of. Considering the high
value of missed benefits, the volatility of quantum technology and algorithm timelines, and the
low cost of switching to active quantum monitoring, moving into the active monitoring phase as
soon as possible makes the most sense from a game-theoretical perspective. This recommendation is
also in line with an MIT SMR article (Ruane, McAfee, & Oliver, 2022), which suggests that managers
be a) vigilant, i.e., keep track of metrics and benchmarks achieved, like logical qubits, using
sources such as expert panels and forecasting tournaments; and b) visioning, i.e., have in place
a team of people who understand the implications of quantum computing and can identify the
company's future needs, opportunities, and potential vulnerabilities.

A company that has invested in the active quantum monitoring category can calculate the direct
ROI and indirect benefits for its use cases. With this in hand, a much better-timed decision can be
made to move to the active quantum research phase. Based on the company's other priorities,
overall investment capabilities, and overall NPV vs. other opportunities, a company can decide on
its entry point to the active research phase.
### 7.2. Moving to Active Quantum Monitoring
Considering that the advice for a proactive company is to immediately move to an active
monitoring phase, I will examine more closely the activities necessary to achieve this phase. There
is a plethora of articles and opinions published by individuals, suppliers, and consulting companies
on how to approach quantum investments and projects; however, it is hard to separate out the
conflicts of interest of such suppliers. The following is an attempt to provide an unbiased approach
from the perspective of an end customer drug discovery company, looking at four key areas of
activities. Activities in the four areas of Use Cases, Process Integration, Technology, and Monitoring and
(^33) Some of the interviewed companies have a venture capital or investment arm that is actively
investing in pure-play quantum computing companies that they may also partner with, e.g., Mitsui with
Quantinuum.
Collaboration must be executed by the end customer to create a holistic approach to QC
investments and move to the _active quantum monitoring_ phase, as shown in Figure 41.
_Figure 41: Overview of key activities to be conducted in the Active Quantum Monitoring phase._
**Use Cases**
As a first step, the **areas of business** which can be addressed with quantum acceleration must
be identified. These would be in drug discovery, supply chain, and other areas. They must be
**mapped to quantum algorithms** that have an acceleration or can tackle larger problem sizes
as opposed to currently used algorithms. It is important to **compare** the expected results to
currently used methods like CADD with ML. Business and technical teams must identify
**interesting use cases** to simulate, e.g., investigating particular molecular structures with open
questions, expanding the HTS databases for structural methods, etc. However, not all use
cases must be in drug discovery; as discussed, there is a much closer value space for non-molecule-
simulation problems like supply chain. Once all use cases are identified, the problems must be
**prioritized** based on the company strategy; if, e.g., supply chain optimization is the key focus,
consider focusing on optimization problems, as in the sketch below.
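As a purely illustrative sketch of this prioritization step, the following ranks hypothetical use cases with a simple weighted score; the use cases, weights, and scores are invented placeholders, not recommendations:

```python
# Illustrative sketch: prioritize candidate quantum use cases by a simple
# weighted score. Use cases, weights, and scores are invented placeholders.

use_cases = [
    # (name, strategic fit 1-5, expected quantum speed-up 1-5, near-term readiness 1-5)
    ("Lead candidate QM simulation (QPE)", 5, 5, 1),
    ("Supply chain optimization (annealing)", 3, 3, 4),
    ("QML for screening data", 4, 3, 2),
]

weights = {"fit": 0.5, "speedup": 0.3, "readiness": 0.2}

def score(fit: int, speedup: int, readiness: int) -> float:
    """Weighted sum of the three illustrative criteria."""
    return weights["fit"] * fit + weights["speedup"] * speedup + weights["readiness"] * readiness

for name, fit, speedup, readiness in sorted(use_cases, key=lambda u: -score(*u[1:])):
    print(f"{score(fit, speedup, readiness):.2f}  {name}")
```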
**Technology**
Based on each algorithm, the **quantum modality** to be used needs to be identified. Annealers,
analog, photonic, and gate-based quantum computers excel at different types of problems.
Each modality's particularities must be identified, e.g., neutral atom computers and ion traps
have better connectivity and are better suited for QPE. The use cases must be **clustered** based
on a particular modality and programming paradigm (e.g., annealing or gate-based). It may be
better to focus on one cluster initially since different modalities require different vendors,
programing paradigms, optimizations, etc. Next, the appropriate **coding environments,
libraries, and SDKs** must be considered. For example, CUDA-Q or Modern SDK might
currently be more performant than Qiskit, as per my interviews. Also, since these languages are
low-level, kits with easier abstraction for larger circuits, like Classiq, must be considered.
Major benefits can also come from using **automation and AI copilots** , like the recent Microsoft
AI Copilot for Quantum Computing. An important decision needs to be made on using a **cloud
platform or on-premise machines** , e.g., platforms like Amazon Braket and Azure Quantum,
but also last-mile integrators like QBraid based on considerations elaborated in section 3.2.
Finally, the **technical integration** with the existing ecosystem needs to be considered, since
quantum computers work in a hybrid fashion with classical ones. The bottlenecks of future hybrid
classical-quantum end-to-end workflows must be assessed, taking into account the connected
classical systems and data in- and out-flows.
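To give a sense of the abstraction level of the gate-based SDKs mentioned above, here is a minimal Qiskit sketch (assuming a standard Qiskit installation); it builds only a toy two-qubit circuit, orders of magnitude below the gate counts discussed in Section 5.2:

```python
# Minimal Qiskit sketch: build a toy two-qubit circuit and inspect its size.
# Chemistry-scale QPE circuits would require orders of magnitude more gates.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)                       # put qubit 0 into superposition
qc.cx(0, 1)                   # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])    # measure both qubits into classical bits

print(qc.draw())              # text diagram of the circuit
print("depth:", qc.depth())
print("gate counts:", dict(qc.count_ops()))
```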
**Process Integration**
An underestimated area of integration is on the process side. Specifically, groups executing current
workflows must **re-think and re-design processes** to use quantum technologies. As, e.g., Blunt
et al. (Blunt, et al., 2022) mention, current high-throughput workflows don't even use current
state-of-the-art methods like DFT, due to the way drug discovery workflows are designed.
Personnel must be **trained** to use new methods in their processes. Obviously, technical/IT
delivery talent needs to be trained, i.e., programmers to code quantum algorithms, business
analysts to translate business requirements into IT requirements, and project managers to get
experience in managing quantum projects. At the same time, the chemists, biologists, statisticians,
and pharmacists working on drug discovery must learn to work with and integrate quantum
methods into their existing processes. Lastly, management and leadership must understand the
implications and push the new ways of working into their organization. **Upskilling talent** to be
able to integrate quantum computing into a process or code takes time. One of the interview
themes was that upskilling is a slow process and can take several years even for skilled AI experts,
as quantum computing is based on fundamentally different conceptual principles than data science
and AI.
Training incurs costs for training materials and back-filling, but also **opportunity costs** in that
some of the most skilled personnel in drug discovery will be working less on the current pipeline
until they are fully productive in applying the new quantum methods. However, upskilling existing
top talent to quantum computing is not just a cost factor; it can be a big factor in the retention
of that talent.
**Monitoring and collaboration**
To determine the optimal exit point to the next phases, the net economic advantage needs to be
**calculated** , i.e., financial benefits, and direct and opportunity costs for each use case.
Additionally, the indirect benefits must be calculated.
A key principle of the calculations, but also all other activities in this phase, is that they must be
**continuously monitored and updated** after the initial assessment. New classical and quantum
algorithms are coming out, hardware is continuously progressing, business areas are developing,
and new priorities are emerging. This monitoring is more effective if it happens **actively**, via
participation in relevant conferences, collaboration with industry and academic consortia,
participation in research, and the running of hackathons.
## 8. Conclusion and Outlook
Drug discovery has great potential to be accelerated with modern CADD methods. A current
limitation to applying CADD to, e.g., target identification and de novo design is the inherent
difficulty of classical computers in simulating quantum mechanics. Classical machine learning
methods are evolving rapidly and might lead to significant discoveries in that space. However,
these methods have not seen huge adoption due to the high costs of integration and scaling
limitations similar to those of classical CADD. On the other hand, quantum computers promise to fully simulate
molecules at the required scale to become interesting, strongly enhancing CADD, in vitro, and in
vivo techniques. These quantum techniques may unlock fundamentally new ways of drug
discovery, simulating hundreds of molecule bindings, and identifying and designing lead candidates
that would not be identifiable otherwise.
However significant the promise is, current NISQ quantum computers are in a nascent stage.
Quantum advantage claims by various manufacturers are based on algorithms without any
practical applications. The current hardware is in the realm of a handful of logical qubits with
circuit sizes around 10^5, which is a far cry from the hundreds of logical qubits at the right gate speeds
and coherence times needed to execute even the simplest material science simulations. For useful
applications in drug discovery simulations, 2-4k logical qubits and circuit sizes in the realm of
10^11 will be required. If manufacturers can keep to their current timelines, this performance level can
be achieved in the mid-2030s. However, many engineering and scaling problems must be addressed
first across all modalities.
End customers must formulate their requirements using useful benchmarks like logical qubits and
circuit sizes, combined with gate speeds and coherence, but also understand the intricacies of each
modality and what it means for their use cases. Additionally, they must understand how quantum
and classical hardware and software will integrate, how the personnel can be upskilled and trained,
and how their processes can be re-imagined to utilize any quantum advantage. Since quantum
advantage would just mean a razor-thin calculation advantage, they must also understand the
totality of benefits, investments, and opportunity costs required to adopt quantum computation,
which means understanding and monitoring their _net quantum economic advantage_.
From a timing perspective, the right time for early adopter end customers to start understanding
these aspects and move to the phase of _active quantum monitoring_ would be now, to allow
exploration of use cases beyond drug discovery. Innovator companies considering moving to or
already in the _active quantum research_ phase should be realistic in their expectations for executing
a long-term deep tech bet with long timelines without creating confusion on potential quantum
advantage. Both approaches have merit and are important to move quantum computing beyond
the nascent phase. However, the only way to move to a sustainable _active quantum research phase_ ,
as per my interviews, is to have sustained support from top management. Some of the initiatives
of interviewed companies, which were previously in _active quantum research_ but not supported
by top management, have degraded to the monitoring phase.
Crucial for all companies would be to cooperate with each other in this early stage, as the quantum
budgets are too small to replicate the same experiments everywhere. The insights on successful
and unsuccessful experiments should be shared, both in publications and in
more informal networks like conferences and consortiums, e.g., QUTAC in Germany.
Beyond drug discovery companies, this thesis should also serve as a call to action for several other
parties:
- **Standardization agencies and consortia** should create standardized benchmark
measurements that are fair to different modalities and manufacturers, highlighting possible
benefits and drawbacks for each modality and hardware, and publishing results available
to the public, like in a quantum index report.
- **QC suppliers and manufacturers** should publish transparent, standardized
benchmarks, be more accurate with claims about achieving milestones like a logical qubit or
quantum advantage, and be clear on their modality's strengths and weaknesses for
particular applications (like ion traps for QPE).
- **Governments** need to understand the real potential of QC for drug discovery and live
up to their role in providing grants to suppliers, academia, and drug discovery companies
to develop useful end customer use cases in the near future. At this nascent stage
of QC, grants are primarily responsible for pushing the technology further.
- **QC consultants and advisors** should be more accurate on the timelines and the hype
presented to clients. Early adopter and early majority pharmaceutical companies will soon
need support to move into _active quantum monitoring_, but this should be done with a realistic
net quantum economic advantage calculation and timelines for actual use cases in mind.
- **QC academia and researchers** should collaborate more strongly with drug discovery
companies in identifying and achieving practical use cases and algorithms for near-term
FTQC.
Having a more careful, transparent, and scientific approach to claims and trajectories of QC from
all parties will be essential for the field to avoid _quantum winters_ like the ones experienced in AI,
and the disappointed investors and clients that come with them.
Looking at necessary future work, universal and application-specific benchmarks must be
established and monitored transparently in collaboration with academia, manufacturers and
consortia. As the individual modalities and algorithms advance, the application-specific
benchmarks should be applied to them, and timelines for the framework introduced in this thesis
must be updated. Lastly, the concept of net economic advantage needs to be applied in more
detail for other use cases to make sense to investors, government, and the public alike.
## Appendix
### A โ List of Interviewees
| Company | Name | Role |
| --- | --- | --- |
| AbbVie | Brian Martin | Head of AI in R&D Information Research |
| Amgen | Zoran Krunic | Sr. Manager Data Science |
| Bayer | Dr. Bijoy Sagar | EVP and Chief Information Technology and Digital Transformation Officer |
| Boehringer Ingelheim | Clemens Utschig-Utschig | Head of IT Technology Strategy, CTO |
| Deloitte | Dr. Renata Jovanovic | Partner; Sector Lead Energy & Chemicals Consulting; Global Quantum Ambassador |
| J&J | Varun Ramdevan | Global Technology and Digital Health Early Innovation, Johnson & Johnson Innovation |
| Merck DE | Dr. Tomas Ehmer | Business Technologies R&D Science and Technology - Innovation Incubator |
| Mitsui | Shimon Toda | General Manager, Quantum Innovation |
| Mitsui | Shigeyuki Toya | General Manager, New Business Development |
| Novo Nordisk Foundation | Dr. Morten Bache | Scientific Director |
| NVIDIA | Elica Kyoseva | Director of Quantum Algorithm Engineering |
| T-Systems | Joern Kellermann | SVP Global Portfolio and Technology Excellence |
### B - Qualitative Interview Themes
Conducting the set of interviews with industry experts was crucial to guide the thinking process
and highlight issues covered in this thesis. Most insights are already covered in the respective
chapters, but this appendix summarizes takeaways from the interviewees after applying a thematic
analysis. The key themes which emerged are Applications of QC, Motivation in Investments,
Technology, Management Support, Budget, Benchmarks and KPIs, Collaboration, and Talent
and Integration. All opinions expressed here are attributed to the interviewees.
**Applications for Quantum Computing in Drug Discovery**
The interviewees didn't see any QC CADD use cases in the NISQ era or in the next 3-5 years.
There may be some optimization problems to be tackled sooner, e.g., in supply chain with
annealing. Also, QML might give an advantage in a shorter time frame than QC QM simulations.
From an algorithmic perspective, VQE does not seem to have useful applications in the long-term
due to its scaling uncertainty, but QPE is a very promising algorithm. New algorithms must be
invented to provide an earlier advantage in the 100-200 logical qubit range.
Looking at use cases, metabolic disorders driving cardiovascular diseases, oncology, personalized
medicine, and cellular morphology seem promising areas for application. We are currently treating
end-states but not approaching drug design on cellular states, like with membrane functions.
Brian Martin mentioned **Feynman's curse**: Feynman's postulate was that if one wants to
simulate QM, one should use a quantum computer. Feynman's curse is the inverse belief: if one
is building a quantum computer, one should use it only to simulate QM. Practically, the curse
there is a much closer potential value space for other use cases, like supply chain optimization.
**Motivation in Investments**
All interviewees see QC as a deep tech bet. The main reasons for them to invest early are the low
cost of entry at this point, the fact that they can register patents for a competitive advantage,
and that they believe in sizable long-term benefits.
The interviewed companies are split equally across the _active quantum monitoring_ and _active
quantum research_ phases. They share similar insights in that the current state of QC is far from
any useful applications, but they apply different strategies for their research.
More individually attributed motivations were that they could use the insights for quantum-
inspired algorithms, even if quantum computing doesn't reach an economic advantage. Furthermore,
GenAI has shown that the cycle for adopting technology has become much shorter, and pharma
companies cannot keep up with that pace. The lesson applied to QC is to start using and
investing in the technology earlier.
**Technology**
The interviewees recognized that all hardware modalities have application-specific advantages and
disadvantages. Ion traps and neutral atoms are seen as the most promising modalities for
supporting QPE due to their superior connectivity. Network models could possibly be encoded
efficiently in neutral atom computers. Annealers are interesting in the near term for
optimization problems, but their long-term advantage is unclear. However, companies prefer not
to crown a hardware modality winner so early in the race. Some companies have worked on
partnerships on this topic, and some of the companies' investment arms have invested in QC
companies.
Lastly, the interviewees recognize that the SDKs and copilots to build applications can be
potential bottlenecks for use cases at a larger scale.
**Management Support**
Almost all interviewees from the chemical and pharmaceutical companies are part of the business
or IT/Digital Group of the R&D department. The _active quantum research_ companies have their
quantum initiatives supported by upper management at the C-level or at least the R&D leadership
level. Companies in the _passive_ or _active monitoring_ phases did not receive management support
at the C-level. Those initiatives were grassroots-driven and of smaller size, but without further
engagement from top management, they were discontinued due to the lack of direct ROI.
When looking at internal KPIs used to justify quantum investments, _active quantum research_
companies do not have specific KPIs or benchmarks to achieve. It is understood by their
management that these are deep tech bets with no direct ROI. Some have a milestone-based
approach to showcase the work and progress of the quantum teams, for example, exploration of a
new modality, benchmarking a use case, or determining the scaling behavior of a program applied
to more powerful hardware.
**Budget**
Overall, all interviewees felt comfortable with being in their current phase of QC investment and
that they had sufficient budget and resources for that phase. While additional resources would
allow them to move to a more engaged phase, most interviewees didn't feel they could justify
additional investments to their management.
GenAI has created competition and is taking away resources for companies in the _active quantum
monitoring_ stage. For companies that moved to _active quantum research_ , the budgets have not
been impacted negatively by GenAI. This can be partially explained by the fact that the premise
of QC being a deep tech bet hasn't changed with the increase in GenAI investments.
**Benchmarks and KPIs**
None of the interviewed companies currently track exact benchmarks of manufacturers. Logical
qubits and circuit size give some indications, but the number of metrics used and their rate of
change are often confusing and do not allow reasonable comparisons. The best way to compare
technologies is to design and run the actual circuits on their platforms and compare the outcomes
based on use case-specific benchmarks. Additionally, companies are monitoring competitors' and
other leadersโ reactions, especially in chemistry, energy, and government.
**Collaboration**
QC is seen as a nascent technology, so it is too early to isolate and tackle problems as individual
companies. All interviewees expressed the need to collaborate on use cases, share costs and
insights, and publish both what works and what doesn't. Since compute hours are so
expensive and talent so scarce, there is no point in replicating things that don't work.
Big breakthroughs in software, hardware, and algorithms are expected to come through specialized
QC companies and startups. Drug discovery companies do not try to compete in these dimensions
but would rather collaborate and invest early in promising companies (either directly or via their
VC arm). The interviewees believe that government grants are still necessary in this nascent
phase, as there is no clear ROI for companies to invest right now.
Some interviewees would appreciate _quantum showcase studios_ in their area, where basic education
for quantum computing can take place for decision-makers, and easily accessible QC testbeds are
available for companies. An example of such a testbed is Quantum Basel.
Some interviewees are unsure about how to address IP issues and whether these create constraints
on collaborating with third parties. IP is considered to be handled differently in quantum vs. AI, as
circuits can be patented.
**Talent and Integration**
All interviewed companies feel that they have sufficient talent for their current phase. However,
they foresee bottlenecks in getting sufficient talent in the next 5+ years. Currently, however, none
of them has a talent or recruitment strategy for quantum computing.
The team sizes in _active quantum research_ companies are about 5-10+ full-time equivalents and
consist of a mix of PhD physicists and upskilled data and computer scientists. So far, a problem
has been finding individual talent with deep expertise in both CADD and QC areas.
Interviewees felt that significant upskilling is required for personnel currently engaged in drug
discovery to use QC effectively. In fact, processes must be fundamentally changed to implement
QC capabilities, and it would not be a straightforward replacement of classical to quantum
computing.
## Table of Figures
Figure 1: Zoom in on the compound intermediate of cytochrome-c peroxidase (PDB 1ZBZ). a: Force fields/semi-empirical methods can model large systems but not fully describe quantum-mechanical effects. b: To model the central portion of the protein, Hartree–Fock/DFT methods can be exploited. DFT includes electronic correlation. c: Coupled-cluster (CC) methods. d: The full configuration interaction (FCI) method delivers the exact energy of the electronic-structure problem but can deal only with a handful of atoms. Source: (Santagati R. A.-G., 2024)
Figure 2: (a) General workflow of the drug discovery process. Here, Cao et al. focus on the early phase where computationally intensive quantum chemical analyses are involved. (b) Components of each stage of drug discovery that heavily involve quantum chemistry or machine learning techniques. (c) Quantum techniques that can be applied to the components listed in (b) and potentially yield an advantage over known classical methods. Here, they make the separation between techniques for NISQ devices and FTQC devices. Source: (Cao, Fontalvo, & Aspuru-Guzik, 2018)
Figure 3: The BQP (bounded-error, quantum, polynomial time) class of problems. Source: (MIT Open Courseware, 2010)
Figure 4: (Pirnay et al., 2024)'s work (arrow) shows that a certain part of the combinatorial problems can be solved much better with quantum computers, possibly even exactly.
Figure 5: Different layers of Hardware over logical qubits to Algorithms. Source: (Santagati, Aspuru-Guzik, & Babbush, 2024)
Figure 6: List of Quantum Software and hardware providers. Source: Quantum Insider
Figure 7: How microprocessor figures of merit progress slowed down with single thread performance, clock speed, and number of logical cores, in relation with total power consumption. Source: (Rupp, 2018)
Figure 8: Qubit count progress over time. Source: (Ezratty, 2023)
Figure 9: Trajectory of Qubits for different modalities. Source: (Cappellaro, 2024)
Figure 10: Evolution of superconducting lifetime over time. Source: (Ezratty, 2023)
Figure 11: Two Qubit Gate Performance for different modalities. Source: (Monroe C. R., 2024)
Figure 12: Mapping Gate Speed to 1Q and 2Q Gate fidelity for different modalities. Source: (Oliver, 2024)
Figure 13: Quantum Volume is a composite of different lower-level metrics. Source: (Silvestri, 2020)
Figure 14: Quantum Volume evolution for IBM. Source: (Jurcevic, Zajac, Stehlik, Lauer, & Mandelbaum, 2022)
Figure 15: Quantum Volume evolution for Quantinuum. Source: (Quantinuum, 2024)
Figure 16: Quantum Volume across all modalities. Source: (Metriq, 2024)
Figure 17: Benchmarking pyramid showing how quality and speed can be benchmarked and Quantum Volume is associated with CLOPS and lower-level metrics. Source: (Wack, et al., 2021)
Figure 18: Layer fidelity of IBM machines. Source: (Wack & McKay, 2023)
Figure 19: Results from running the suite of application-oriented benchmarks on a quantum simulator (colored squares) on top of a volumetric background (grey-scale squares) extrapolated from the quantum volume. The simulator uses all-to-all qubit connectivity and two different error rate scales that result in quantum volumes of 32 and 2048, as shown in each plot. Source: (Lubinski, et al., 2021)
Figure 20: Conceptual representation of why a better quantum algorithm only outpaces a classical one when problem sizes are sufficiently large, based on the example of Grover's algorithm. Source: (Choi, Moses, & Thompson, 2023)
Figure 21: Quantum Advantage demonstrated at 50 logical qubits with a gate depth of 50. Source: (Leviatan, et al., 2024)
Figure 22: Comparison of molecular features relevant for possible short-term quantum computer applications and a schematic placement of their CASs estimated to be necessary for achieving chemical accuracy. Source: (Elfving, et al., 2020)
Figure 23: Ratio of magic state distillation (MSD) footprint to total computational footprint for different numbers of qubits and T-count (Toffoli gates). Footprint is measured in terms of number of RSGs required. Includes Simulation of the Fermi-Hubbard model; of crystalline materials; of FeMoco; and breaking RSA encryption. Source: (Kim, et al., 2022)
Figure 24: From left to right: Number of T Gates, Toffoli Gates, and Logical Qubits required for CAS(N, N) simulations with N orbitals. Source: (Elfving, et al., 2020)
Figure 25: Number of use cases distributed across industries and for each of the industries also segmented by the implementation status. Life Sciences does not have a single use case in deployment. Source: (Baczyk, 2024)
Figure 26: Progress in computing since 1900 in calculations per second per 1000$. Source: (Kurzweil, 2005, p. 67)
Figure 27: Quantum advantage versus economic quantum advantage, assuming a Moore's law-like exponential evolution of a suitable Quantum Performance Benchmark / USD on a linear performance scale. The used Q Benchmark is not specified, but could be, e.g., an aggregation of Logical qubits, maximum Toffoli gates, Gate Speed and architectural benefits, to indicate suitability for executing a particular algorithm like QPE.
Figure 28: Plots of the number of physical qubits of IBM & IonQ over time based on their published roadmaps and extrapolations. Source: (Choi, Moses, & Thompson, 2023)
Figure 29: From NISQ to FTQC and logical qubit error rate requirements. Source: (Jong, 2019) with annotations from (Ezratty, 2023)
Figure 30: The paths to FTQC and NISQ. The number of physical qubits per logical qubit taken here in the left of the chart is based on typical assumptions on algorithm depth, which conditions this ratio. However, the "usable zone" used in this figure is very lenient, starting at 100 logical qubits, at least for Drug Discovery as discussed in section 4.3. Source: (Ezratty, 2023)
Figure 31: Challenges across the hardware stack of different modalities. Source: (Global Quantum Intelligence (GQI), 2024)
Figure 32: IonQ's roadmap. Source: (IonQ, 2024)
Figure 33: QuEra's roadmap. Source: (QuEra, 2024)
Figure 34: Infleqtion's roadmap. Source: (Siegelwax, 2024)
Figure 35: Google's Quantum Roadmap. Source: (Google Quantum AI, 2024)
Figure 36: IBM Quantum roadmap. Source: (IBM Newsroom, 2023)
Figure 37: Summary of all roadmap metrics used. Source: (Siegelwax, 2024)
Figure 38: Quantum Computing QAM forecast for the 2024-2035 period. The total addressable market is divided into specific Quantum Computing use case groups. Pharma Drug Discovery simulations (in light blue) are expected to have a negligible market size until the early-mid 2030s, with a big peak in 2035. Source: (Baczyk, 2024)
Figure 39: Investment decision points for companies using the Net Quantum Economic Advantage framework for QPE for drug discovery. The estimated timelines for achieving these milestones are based on manufacturer timelines analyzed in Section 6. The sizes of qubits necessary for each era are based on sizes for useful Drug Discovery applications with QPE from Section 5.2. An exponential increase in quantum performance to cost ratio is assumed as per Section 6. It is also assumed 2 years for a company to onboard a functioning quantum team to utilize the next milestone.
Figure 40: Timelines for drug discovery companies entering respective QC phases considering QPE for Drug Discovery.
Figure 41: Overview of key activities to be conducted in the Active Quantum Monitoring phase.