By offering a $1M prize for the first demonstration of a quantum advantage over solely classical computation, quantum computing startup Rigetti hopes to highlight the potential of hybrid algorithms that cooperatively compute on both quantum and classical computers. In fact, this hybrid model is one of the most promising ways that most quantum companies and researchers hope to demonstrate useful quantum computation in the next few years.
While quantum computers are (currently) small and unreliable, a great way to exploit their special, but limited, abilities is to adopt a hybrid model which leverages both quantum and classical computation. Perhaps the most promising example is in quantum chemistry, where Variational Quantum Eigensolver (VQE) algorithms perform a kind of heuristic search by iterating between a quantum machine and a classical supercomputer. The goal is to find the lowest energy state of a chemical compound (the ground state). We start from the best known configuration of electrons from a classical supercomputer and estimate the energy of that configuration using the quantum machine. This estimate is then given back to a classical supercomputer to guide its search towards a configuration with lower energy. In this way, the quantum machine acts as an accelerator for the energy modeling part of the computation. By solving for lowest energy under different configurations and constraints, we can explore a range of molecular reactions.
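The shape of this iteration can be sketched in a few lines of Python. Everything here is a toy stand-in: `estimate_energy` simulates the quantum measurement step with a simple classical cost surface (a real VQE would prepare and measure a trial state on the quantum device), and the classical optimizer is a plain finite-difference gradient descent rather than the optimizers used in practice.

```python
import numpy as np

# Toy stand-in for the quantum step: a real VQE prepares a trial state
# parameterized by `angles` on the quantum device and measures its energy.
# Here we fake it with a smooth classical surface whose minimum energy
# is 0 at angles = (pi, pi/2).
def estimate_energy(angles):
    return np.cos(angles[0]) + 0.5 * np.cos(2.0 * angles[1]) + 1.5

# Classical outer loop: start from a classically derived guess, then
# repeatedly query the (simulated) quantum device and take a
# finite-difference gradient step toward lower energy.
angles = np.array([2.5, 1.0])   # initial configuration guess
step, eps = 0.1, 1e-4
for _ in range(200):
    grad = np.array([(estimate_energy(angles + eps * np.eye(2)[i])
                      - estimate_energy(angles - eps * np.eye(2)[i])) / (2 * eps)
                     for i in range(2)])
    angles -= step * grad       # classical update guided by quantum estimates

energy = estimate_energy(angles)
```

Each pass through the loop mirrors one round trip of the hybrid scheme: a handful of quantum energy estimates feeding a single classical search step.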
This hybrid example has some great advantages. First, it sidesteps the innovator’s dilemma by leveraging an initial guess derived from our incumbent classical technology, rather than directly competing with that technology. Second, it allows us to effectively utilize the small number of instructions a quantum machine can reliably execute. Third, it allows us to pick small but classically challenging problems (chemical compounds) that can be represented in a small number of quantum bits. Fourth, we have a clear measure of success, as we know that classically-computed ground-state energies can be significantly higher than experimentally-observed values, even for small compounds. If our hybrid approach can get closer to the experimental values, then our quantum machine has helped compute something that could not be computed classically! Long-term, an improved understanding of molecular reactions could lead to better materials, more efficient photovoltaics, and lower-energy fertilizer production.
Ken Brown from Duke gives some great context for quantum chemistry in his talk from a recent ISCA Tutorial. Ken notes that we have known how to model molecular energy since Dirac in 1929, but that the hard part has been the computation of determining where the electrons are. Nature uses only n electrons to “model” n electrons. A classical computer needs on the order of k^n bits to model n electrons, while a quantum computer needs only on the order of kn qubits.
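The gap between k^n and kn is easy to make concrete with k = 2, the case of qubits: a classical state-vector simulation of n qubits must store 2^n complex amplitudes, while the quantum machine needs only the n qubits themselves.

```python
# Memory for a classical state-vector simulation of n qubits:
# 2**n complex amplitudes at 16 bytes each (complex128),
# versus just n physical qubits on a quantum machine.
for n in (10, 30, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30
    print(f"{n:>2} qubits -> {amplitudes:.2e} amplitudes ({gib:.3g} GiB)")
```

Around 30 qubits already fills a commodity server’s memory (16 GiB of amplitudes), and 50 qubits needs roughly 16 million GiB, far beyond any classical machine.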
With such a substantial advantage, why use a hybrid approach at all? Perhaps we will eventually dispense with classical co-processing, but current quantum machines are noisy and error-prone. Although interesting molecules could soon be represented by quantum machines with around 100 qubits, the number of quantum instructions (perhaps 100,000 total) we can execute before experiencing an error will be a limiting factor. It is critical to focus the available quantum computation on what can be best accelerated (e.g., modeling the quantum interactions) and not other mechanics of the algorithm (e.g., gradient descent to search for other configurations). In the long run, we will have enough qubits to use error correction codes and extend executable instructions substantially, but the first demonstration of a practical quantum computation will likely involve too few quantum instructions to be useful without a hybrid approach.
Even as quantum machines scale, quantum algorithms are likely to remain specialized, making the quantum device a very domain-specific accelerator. Most practical applications will still require a combination of general classical and specialized quantum processing to be useful.
The hybrid approach implies a number of interesting research challenges for system designers. Traditional quantum algorithms, such as Shor’s factorization algorithm and Grover’s search algorithm, can be statically compiled with a high level of optimization using known input parameters. With hybrid algorithms, some of a quantum program’s input parameters can change each iteration. For example, a compiler may spend hours optimizing for quantum instructions that include quantum rotations for specific input angles to solve a chemistry problem, but now we find that the angles change every iteration. This suggests that we need a partial compilation strategy in which programs are optimized for unchanging parameters, but then quickly re-optimized each iteration for parameters that change.
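A minimal sketch of that split, with all names hypothetical and circuits represented as plain (gate, argument) lists: the expensive structural pass runs once before the hybrid loop starts, and only a cheap parameter-binding pass runs each iteration.

```python
# Circuits are lists of (gate_name, arg) pairs; a string arg names a
# rotation-angle placeholder to be filled in each iteration.

def compile_static(template):
    """Expensive one-time pass over the parts that never change.
    Here: cancel adjacent Hadamard pairs (H followed by H is identity)."""
    compiled = []
    for gate in template:
        if compiled and gate == ("H", None) and compiled[-1] == ("H", None):
            compiled.pop()
        else:
            compiled.append(gate)
    return compiled

def bind_angles(skeleton, values):
    """Cheap per-iteration pass: substitute this iteration's rotation
    angles for the named placeholders, leaving other gates untouched."""
    return [(name, values.get(arg, arg)) for name, arg in skeleton]

template = [("H", None), ("H", None), ("RZ", "theta0"),
            ("CNOT", None), ("RZ", "theta1")]
skeleton = compile_static(template)                        # once, slow
circuit = bind_angles(skeleton, {"theta0": 0.3, "theta1": 1.2})  # every iteration, fast
```

Production frameworks express the same idea with parameterized circuits that are optimized once and re-bound with fresh angle values on each round trip.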
These optimizations are important, as quantum machines will be noisy and the key challenge will be to achieve enough precision to improve the result (e.g., the energy estimate for a molecule). Other methods for improving precision include physical qubit ensembles and circuits that reduce noise, lightweight error-mitigation codes, algorithm-level redundancy, better gradient descent methods, and better sampling methods across multiple executions.
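One of those methods, sampling across multiple executions, works because shot noise averages out: with zero-mean noise, the standard error of the mean shrinks as 1/sqrt(shots). A toy model in which the “device” is just a random-number generator (the target value is illustrative, not a real molecule):

```python
import random

random.seed(0)
TRUE_ENERGY = -1.137  # illustrative target, not a real molecule

def one_shot():
    """One noisy 'measurement': the true value plus zero-mean noise."""
    return TRUE_ENERGY + random.gauss(0.0, 0.5)

# Averaging more shots tightens the estimate roughly as 1/sqrt(shots).
for shots in (10, 1_000, 100_000):
    estimate = sum(one_shot() for _ in range(shots)) / shots
    print(f"{shots:>7} shots -> estimate {estimate:+.4f}")
```

More shots buy precision at the cost of quantum machine time, so sampling budgets interact directly with the other precision techniques listed above.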
Hybrid algorithms also require that more thought be given to the hardware and software communication mechanisms between quantum and classical hardware, as well as to how such ensembles might be presented as compute resources to users. IBM was the first to make a physical quantum machine accessible via the cloud, which has greatly grown the quantum computing community and enabled research into adapting to the physical properties of real machines. The IBM machines, however, are cumbersome for hybrid computation, as the batch-queue interface is really designed for stand-alone quantum programs, and the latency of coupling with classical computation is high. Rigetti has taken the next step by releasing a cloud resource that integrates classical and quantum hardware. Others will surely follow soon.
About the Author: Fred Chong is the Seymour Goodman Professor of Computer Architecture at the University of Chicago. He is the Lead Principal Investigator of the EPiQC Project (Enabling Practical-scale Quantum Computation), an NSF Expedition in Computing. Many ideas from this blog stem from conversations with the EPiQC Co-PI’s (Ken Brown, Ike Chuang, Diana Franklin, Danielle Harlow, Aram Harrow, Margaret Martonosi, John Reppy, David Schuster, and Peter Shor) as well as our many students and staff.
Disclaimer: These posts are written by individual contributors to share their thoughts on the Computer Architecture Today blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGARCH or its parent organization, ACM.