Quantum chemistry has been an important benchmark application for emerging quantum computers. In this article, we revisit the big-picture motivation for this application and discuss how the iterative nature of the algorithm presents unique challenges and opportunities for improving the quality of results on noisy quantum machines.
Drug discovery: importance and challenges
The COVID-19 pandemic has showcased the need for accelerated scientific innovation in the field of medicine, placing renewed urgency on vastly improving our ability to develop novel, effective drugs to combat diseases. Drug discovery is the process through which potential new medicines are identified, from an initial hypothesis to a fully commercialized product. Today, this process can often take more than a decade and billions of dollars in expenditure before a molecule is recognized as a drug.
A significant portion of these resources is invested in identifying molecules that exhibit significant medicinal activity against a disease, through what is known as Computer-Aided Drug Design (CADD). A major challenge in CADD is accurately analyzing every drug candidate's interaction with its biological target, which is largely constrained by the computational cost of accurate molecular simulation: the cost of simulating a many-electron molecular system grows exponentially with the number of electrons.
Quantum computing for molecular chemistry
In what is now loosely considered the genesis of quantum computing, Feynman suggested that computers which obey quantum mechanics could efficiently simulate quantum systems, such as the molecular systems described above. Fast-forward 40 years: quantum computers are a reality today (albeit with many hurdles to surmount), and we are poised for a potential paradigm shift in quantum chemical simulation. Quantum computing could deliver efficient and highly accurate solutions to many classically intractable problems, such as finding the potential energy of a molecule, i.e., the energy required to break the molecule into its sub-atomic components, at various bond lengths (shown for H2 in the Figure). These energies are then used to estimate chemical reaction rates, which are integral to accelerating effective drug discovery, along with other vital applications in materials science, chemical engineering, and nuclear physics.
Variational Quantum Algorithms
But today's quantum computers are extremely noisy, suffering from high error rates in the form of SPAM errors, gate errors, decoherence, crosstalk, etc. And we still have some way to go before building devices with enough qubits to effectively correct these errors (though there are promising roadmaps!). Thus, we should expect to live with non-negligible levels of noise for the immediate and intermediate future of quantum devices.
Fortunately, the molecular chemistry applications described above can be tackled on reasonably noisy quantum devices via hybrid quantum-classical schemes known as variational quantum algorithms (VQAs), and more specifically the variational quantum eigensolver (VQE). VQE estimates the minimum value of an objective function, where the objective is the expectation value of a "Hamiltonian", a mathematical representation of the classically intractable target problem. To achieve this, VQE iteratively searches for parameters of a parameterized quantum circuit (called an "ansatz") that minimize the expectation value of the target Hamiltonian, adapting along the way to the noise profile of the NISQ device. By constructing Hamiltonians for the molecular systems of interest, estimations can be made regarding their electronic structure, chemical reactivity, and much more, all of which play a key role in the molecular chemistry problems described earlier.
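To make the loop concrete, here is a heavily simplified, noiseless sketch of VQE in plain NumPy. The single-qubit Hamiltonian, the one-parameter Ry ansatz, and the COBYLA optimizer are all illustrative assumptions, standing in for a real molecular Hamiltonian, a real ansatz, and an actual (noisy) quantum device.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-qubit Hamiltonian (an assumption for illustration):
# H = 0.5*Z + 0.3*X, whose exact ground energy is -sqrt(0.5^2 + 0.3^2).
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = 0.5 * Z + 0.3 * X

def ansatz_state(theta):
    # One-parameter ansatz: Ry(theta) applied to |0>
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(params):
    # The "quantum" step: evaluate <psi(theta)|H|psi(theta)>
    psi = ansatz_state(params[0])
    return float(psi @ H @ psi)

# The classical step: a derivative-free optimizer tunes the circuit parameter.
result = minimize(energy, x0=[0.1], method="COBYLA")
exact = -np.sqrt(0.5**2 + 0.3**2)
print(result.fun, exact)  # optimized energy approaches the exact ground energy
```

On real hardware, `energy` would be estimated from repeated circuit shots rather than computed exactly, which is where noise and measurement costs enter the picture.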
VQA: Challenges and Opportunities
While there is clear promise, estimating the VQE global optimum with high accuracy has proven challenging in the NISQ era. Energy estimations on today's machines, even for relatively small molecular systems, are still orders of magnitude worse than the target accuracy requirements. Thus, it is imperative to develop classical support and error mitigation techniques that can push VQA over this final hump. Below, we describe some integral VQA components (illustrated above), the state of the art, and future challenges and opportunities:
Ansatz: Many ansatz structures are suitable for VQAs. In the context of VQE for molecular chemistry, the Unitary Coupled Cluster Singles and Doubles (UCCSD) ansatz is considered the gold standard. Unfortunately, the UCCSD ansatz generally requires considerable circuit depth, making it less suitable for today's NISQ machines. More suitable for the NISQ era are hardware-efficient ansatze, which are low-depth parameterized circuits. But a hardware-efficient ansatz can be limited in its capabilities: it is application-agnostic, can cover the target Hilbert space inefficiently, requires many tuning parameters, and is prone to barren plateaus. Promising middle-ground solutions include dynamically evolving ansatze, such as the one proposed in ADAPT-VQE, but these can suffer high tuning costs. Constructing an ansatz that is well suited to a problem, while also meeting NISQ device limitations, is an open research problem.
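As a sketch of what "hardware efficient" means structurally, the statevector simulation below alternates layers of single-qubit Ry rotations with a linear chain of CNOT entanglers. The layer layout, entangler pattern, and parameter counts are illustrative, not tied to any particular device or library.

```python
import numpy as np

def ry(theta):
    # Single-qubit Ry rotation matrix
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, qubit, n):
    # Build the full n-qubit operator via Kronecker products (qubit 0 leftmost)
    ops = [np.eye(2)] * n
    ops[qubit] = gate
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full @ state

def apply_cnot(state, control, target, n):
    # Permute basis amplitudes: flip the target bit when the control bit is 1
    new = np.zeros_like(state)
    for i in range(2 ** n):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = int("".join(map(str, bits)), 2)
        new[j] = state[i]
    return new

def hardware_efficient_ansatz(params, n_qubits, n_layers):
    state = np.zeros(2 ** n_qubits)
    state[0] = 1.0  # start from |00...0>
    p = iter(params)
    for _ in range(n_layers):
        for q in range(n_qubits):          # rotation layer
            state = apply_1q(state, ry(next(p)), q, n_qubits)
        for q in range(n_qubits - 1):      # linear CNOT entangler chain
            state = apply_cnot(state, q, q + 1, n_qubits)
    return state

# 3 qubits, 2 layers -> 6 parameters (illustrative sizes)
params = np.random.default_rng(0).uniform(0, np.pi, 6)
state = hardware_efficient_ansatz(params, 3, 2)
print(np.linalg.norm(state))  # the circuit is unitary, so the state stays normalized
```

Note how the structure is dictated by what the hardware can do cheaply (native rotations, nearest-neighbor entanglers) rather than by the problem, which is exactly the application-agnosticism discussed above.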
Optimizer: The classical tuner/optimizer variationally updates the parameterized circuit until the measured objective converges to a minimum. With the high noise levels in NISQ machines, the optimization surface can often be non-convex and non-smooth. The surface contour worsens as problem complexity increases, due to increases in circuit depth, number of parameters, and entanglement spread. While multiple tuners exist, such as SPSA, there is still much work to be done in identifying the optimal tuners and their hyper-parameters for each application, or even for each application phase. Constraints to consider include: derivative-based vs. derivative-free approaches, the number of samples per iteration, navigating barren plateaus, performance with bounds/constraints, robustness to transient noise, classical computational costs, and much more.
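SPSA, mentioned above, estimates the gradient with only two objective evaluations per iteration regardless of the number of parameters, which is attractive when each evaluation costs many noisy circuit shots. Below is a minimal sketch; the hyper-parameters and decay schedules are illustrative defaults, not tuned values, and a simple quadratic stands in for a noisy energy landscape.

```python
import numpy as np

def spsa_minimize(f, x0, n_iters=200, a=0.2, c=0.1, alpha=0.602, gamma=0.101, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iters + 1):
        ak = a / k ** alpha   # decaying step size
        ck = c / k ** gamma   # decaying perturbation size
        # Random simultaneous perturbation of ALL parameters (Rademacher +-1)
        delta = rng.choice([-1.0, 1.0], size=x.shape)
        # Two evaluations total, regardless of dimension
        grad_est = (f(x + ck * delta) - f(x - ck * delta)) / (2 * ck) * (1.0 / delta)
        x = x - ak * grad_est
    return x

# Stand-in objective with a known minimum at [1, 1, 1]
f = lambda x: np.sum((x - 1.0) ** 2)
x_opt = spsa_minimize(f, np.zeros(3))
print(x_opt)  # approaches [1, 1, 1]
```

The per-iteration gradient estimate is noisy, but its expectation points downhill, which is why SPSA tolerates the stochastic objective values produced by shot-limited, noisy devices.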
Initialization: A well-chosen initialization of the ansatz can help avoid considerable noise effects as well as barren plateaus, enabling fast, accurate convergence. For VQE, a popular and simple approach to constructing a fair initial state is derived from Hartree-Fock (HF) theory. HF yields an initial state with no entanglement by assuming that each electron can be described as a stand-alone particle, independent of the instantaneous motion of the other electrons (and is therefore limited in its accuracy). Initialization provides enticing research opportunities for the classical computing community. A good choice can provide tremendous benefits: since initialization decisions are made in a noise-free classical setting, they can reduce the noisy quantum real estate that the NISQ device has to tune over.
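As a small illustration, the sketch below prepares the HF reference as a computational-basis bitstring (lowest spin-orbitals occupied) and starts the ansatz parameters at zero, so that an ansatz like UCCSD, which reduces to the identity at zero parameters, begins its search exactly at the HF state. The qubit, electron, and parameter counts are illustrative assumptions.

```python
import numpy as np

def hartree_fock_statevector(n_qubits, n_electrons):
    # Occupy the first n_electrons spin-orbitals: the bitstring |1...10...0>
    index = int("1" * n_electrons + "0" * (n_qubits - n_electrons), 2)
    state = np.zeros(2 ** n_qubits)
    state[index] = 1.0  # a single basis state, i.e., no entanglement
    return state

# e.g., 4 spin-orbitals and 2 electrons (roughly H2 in a minimal basis)
hf = hartree_fock_statevector(n_qubits=4, n_electrons=2)
theta0 = np.zeros(8)  # zero initial parameters keep the first iterate at the HF state
print(int(np.argmax(hf)))  # basis index of |1100> = 12
```

All of this runs classically and noise-free; only the subsequent parameter tuning has to pay the price of the noisy quantum device.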
Measurement: Another key VQE challenge is the large number of circuit measurements required, proportional to the number of terms in the Hamiltonian, the number of shots per term, and the number of iterations to convergence. The number of Hamiltonian terms itself scales as O(N⁴), where N is the number of qubits. Term measurements can be partly reduced by techniques such as simultaneous measurement of commuting terms and term truncation, while the number of shots can be reduced through intelligent shot allocation. The number of iterations will, of course, shrink with better optimizers and less noisy devices. Measurements can be a practical limiter for VQE, and significant research toward measurement reduction is critical.
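Simultaneous measurement typically relies on grouping terms that commute qubit-wise: Pauli strings that agree on every non-identity position can be read out from a single circuit execution. Below is a greedy grouping sketch; the two-qubit terms are made up for illustration, not taken from a real molecular Hamiltonian.

```python
def qubit_wise_commute(p, q):
    # Two Pauli strings commute qubit-wise if, at every position,
    # they either match or one of them is the identity.
    return all(a == b or a == "I" or b == "I" for a, b in zip(p, q))

def greedy_qwc_groups(paulis):
    # Greedily place each term into the first compatible group.
    groups = []
    for p in paulis:
        for g in groups:
            if all(qubit_wise_commute(p, q) for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

terms = ["ZZ", "ZI", "IZ", "XX", "XI", "IX", "YY"]
groups = greedy_qwc_groups(terms)
print(len(terms), "terms ->", len(groups), "measurement settings")
```

Here seven terms collapse to three measurement settings; on realistic Hamiltonians with O(N⁴) terms, such grouping (and smarter variants of it) is what keeps measurement counts tractable.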
Error Mitigation: Multiple error mitigation strategies have been proposed to correct different forms of NISQ device errors. While these techniques have the potential to greatly improve execution fidelity, there is often a significant disconnect between their theoretical expectations and how they perform on real hardware. Dynamically tailoring specific features of error mitigation techniques to a machine's actual noisy execution environment is a perfect fit for the variational approach, which already tunes gate-angle parameters as part of its framework. Prior work has shown that the iterative nature of variational algorithms can be exploited to improve the deployment of decoherence error mitigation techniques. There is abundant opportunity to explore new (and improve existing) mitigation techniques within the variational harness.
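One widely studied example is zero-noise extrapolation: run the circuit at several artificially amplified noise levels (e.g., via gate folding) and extrapolate the measured observable back to the zero-noise limit. The sketch below assumes a simple linear noise model purely for illustration; real devices are messier, which is exactly why tailoring the extrapolation to observed behavior matters.

```python
import numpy as np

def noisy_expectation(noise_scale, ideal=-1.0, damping=0.15):
    # Assumed toy model: the measured expectation is damped
    # linearly as the noise level is amplified.
    return ideal * (1.0 - damping * noise_scale)

# "Measure" the observable at amplified noise levels (1x, 2x, 3x)
scales = np.array([1.0, 2.0, 3.0])
values = np.array([noisy_expectation(s) for s in scales])

# Richardson-style linear fit, evaluated at zero noise
coeffs = np.polyfit(scales, values, deg=1)
zero_noise_estimate = np.polyval(coeffs, 0.0)
print(zero_noise_estimate)  # recovers the ideal value -1.0 under this toy model
```

In a variational setting, the extrapolation order, noise scales, and even whether to mitigate at all could themselves be tuned across iterations, which is the kind of dynamic tailoring discussed above.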
Near-term quantum computing domains like variational quantum algorithms have the potential to make substantial strides in the real world, in critical fields like drug discovery and other areas of chemistry and optimization. There is ample opportunity for classical / quantum / cross-stack research across the expanse of VQA, and our community is well poised to contribute in a variety of ways to push the quantum needle forward!
About the authors:
Gokul Subramanian Ravi is a CIFellow postdoc at the University of Chicago, mentored by Prof. Fred Chong. His research targets hardware-software approaches towards practical quantum computing systems, with focus on variational quantum algorithms, NISQ error mitigation, classical optimizations and (recently) error correction. He received his PhD from UW-Madison in 2020 and was advised by Prof. Mikko Lipasti. Gokul will be on the 2022-23 academic job market.
Fred Chong is the Seymour Goodman Professor of Computer Architecture at the University of Chicago and the Chief Scientist for Quantum Software at ColdQuanta. He is the Lead Principal Investigator of EPiQC (Enabling Practical-scale Quantum Computation), an NSF Expedition in Computing, and a member of the STAQ Project. He is also an advisor to Quantum Circuits, Inc.
Many ideas from this blog stem from conversations with the rest of the EPiQC team: Ken Brown, Ike Chuang, Diana Franklin, Danielle Harlow, Aram Harrow, Andrew Houck, Robert Rand, John Reppy, David Schuster, and Peter Shor.
Disclaimer: These posts are written by individual contributors to share their thoughts on the Computer Architecture Today blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGARCH or its parent organization, ACM.