Computer Architecture Today

Informing the broad computing community about current activities, advances and future directions in computer architecture.

Camel Up is a light-hearted board game. Fueled by the randomness of dice, camels race around a cardboard track to be the first to cross the finish line. Throughout the race, players place bets, trying to predict which camel will ultimately win. Unfortunately, correct predictions are difficult to make. Camels can ride on each other, benefiting from an opponent's success, and external forces constantly shuffle the rankings. The result is a game filled with uncertainty: a camel that lags behind in one turn can find itself leading the race in the next.

The board game Camel Up (JIP, CC BY-SA 3.0, via Wikimedia Commons)

Today, the field of quantum computing can be construed as a game of Camel Up. The camels in our race are the different qubit technologies: superconducting circuits, trapped ions, neutral atoms, electron spins, and photons. The finish line is a demonstration of scalable, practical quantum advantage.

Betting on Camels

Imagine you're a player in this quantum game of Camel Up, trying to choose which camel to place a bet on. One strategy you might employ is to benchmark each camel on relevant metrics. If you believe the finish line lies in the Noisy Intermediate-Scale Quantum (NISQ) regime, these would be physical system metrics: gate fidelities, measurement error rates, and coherence times. You might even calculate a single number to summarize them, such as Quantum Volume, to make comparison easier. Had you done this six years ago, you would have found a race with two camels in the lead: superconducting qubits and trapped-ion qubits. Both technologies have features that are attractive for NISQ: they are mature, have good error rates, and offer flexible control for implementing general quantum programs.
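
As a refresher on how such a single-number summary works, here is a rough sketch of the idea behind Quantum Volume (a simplification of the actual protocol, with made-up numbers): run random "square" model circuits of m qubits and depth m, apply the heavy-output test, and report 2^m for the largest m that passes.

```python
# Rough sketch of the idea behind Quantum Volume (QV); not the full protocol.
# Fractions below are fabricated for illustration: the measured fraction of
# "heavy" outputs for random square circuits of each width/depth m.
heavy_output_fraction = {2: 0.81, 3: 0.76, 4: 0.71, 5: 0.68, 6: 0.63}

# The heavy-output test passes when the fraction exceeds 2/3.
passed = [m for m, frac in heavy_output_fraction.items() if frac > 2 / 3]
quantum_volume = 2 ** max(passed)
print(quantum_volume)  # 2**5 = 32 for these made-up numbers
```

The real protocol adds confidence intervals and circuit-construction details, but the takeaway is the same: a single number that folds many physical metrics into one comparison point.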

However, quantum hardware is error-prone, much more so than classical hardware, and optimistic projections put physical error rates between 1 in 1,000 and 1 in 10,000. This is incompatible with large-scale applications in materials science, chemistry, and cryptography that require millions or even trillions of operations. In response, many researchers now believe the finish line lies in the regime of Fault-Tolerant Quantum Computing (FTQC), which achieves far lower error rates through Quantum Error Correction (QEC).
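
A back-of-the-envelope calculation makes this mismatch concrete. The sketch below treats errors as independent per operation, which is a simplification, and takes the optimistic end of those projections:

```python
# Chance a circuit finishes with no error at all, assuming independent errors.
p_error = 1e-3  # optimistic physical error rate: 1 in 1,000
for n_ops in (1_000, 1_000_000, 1_000_000_000_000):
    p_success = (1 - p_error) ** n_ops
    print(f"{n_ops:,} operations -> success probability {p_success:.3g}")

# ~0.368 for a thousand operations, but ~1e-435 for a million (which
# underflows to 0 in double precision), and effectively 0 for a trillion.
```

Even at 1-in-1,000, an uncorrected million-operation circuit essentially never finishes correctly; closing that gap is precisely what QEC is for.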

But if you believe the finish line lies in the FTQC regime, how should you determine which camel to bet on? General physical system metrics are no longer sufficient. What ultimately matters is logical-level performance, and that depends on which QEC code is used, of which there are hundreds.

Co-Designing with Camels

One way to answer this question is to look at the state of hardware today. With the shift towards FTQC, hardware design decisions have changed, and the co-design of quantum hardware with QEC codes has become increasingly popular. Google’s superconducting chips are designed to meet the connectivity requirements of a planar surface code, IBM’s roadmap now incorporates plans to implement nonlocal c-couplers necessary for a family of quantum LDPC codes, and many companies involved in DARPA’s Quantum Benchmarking Initiative have co-designed quantum hardware with QEC codes, such as those for cat qubits and photonically linked silicon spin qubits.

This increase in co-design reflects a fundamental difference between designing systems for FTQC and for NISQ. In FTQC, the role of the hardware is not to run arbitrary quantum programs directly, but to implement a layer of quantum error correction on which logical programs execute.

What Makes a Camel Good at QEC

If QEC is critical to a camel's success, it's worth asking which hardware features matter most for QEC. Surprisingly, the physical instruction set needed is quite simple: most codes can be implemented with a collection of Controlled-NOT and Hadamard gates. The key challenges are instead scalability and connectivity. Large-scale systems are expected to require hundreds to thousands of QEC code blocks running simultaneously, which, depending on their encoding rates, can push resource estimates into the millions of physical qubits. This cost can be mitigated by using QEC codes with higher encoding rates, but that depends on the available connectivity: in particular, it's known that higher degrees of connectivity are necessary to implement codes with higher encoding rates. Ideal hardware for QEC is therefore hardware that scales effectively in size while maintaining a high degree of connectivity between physical qubits.
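
To make the "simple instruction set" point concrete, below is a minimal sketch of one round of syndrome extraction for a single surface-code-style plaquette: a weight-4 Z stabilizer and a weight-4 X stabilizer, each measured using only CNOTs, Hadamards, and an ancilla readout. Qiskit is used purely for illustration, and the qubit labels are not tied to any particular device.

```python
# Minimal sketch: measure one weight-4 Z stabilizer and one weight-4 X
# stabilizer using only H, CNOT, and measurement. Assumes Qiskit is installed.
from qiskit import QuantumCircuit

data = [0, 1, 2, 3]    # four data qubits shared by the two stabilizers
z_anc, x_anc = 4, 5    # one ancilla per stabilizer

qc = QuantumCircuit(6, 2)

# Z stabilizer (Z0 Z1 Z2 Z3): CNOTs from the data qubits into the ancilla
# accumulate the Z-basis parity, which the ancilla measurement reads out.
for d in data:
    qc.cx(d, z_anc)
qc.measure(z_anc, 0)

# X stabilizer (X0 X1 X2 X3): Hadamards move the ancilla to the X basis, and
# CNOTs from the ancilla into the data qubits pick up the X-basis parity.
qc.h(x_anc)
for d in data:
    qc.cx(x_anc, d)
qc.h(x_anc)
qc.measure(x_anc, 1)

print(qc.draw())
```

Note that each ancilla has to interact directly with four data qubits. Codes with higher encoding rates generally require ancillas that touch more, and more distant, data qubits, which is exactly the connectivity pressure described above.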

A Camel Case Study: Neutral Atoms

One camel that might fit this bill is the neutral atom array, a newer hardware platform built from 2D arrays of optically trapped atoms. Although neutral atoms first received interest for NISQ applications, they've become increasingly popular in recent years as a platform for QEC. They offer high connectivity between atoms: two-qubit interactions can go beyond nearest neighbors, and atoms can even be moved dynamically at runtime with reconfigurable traps. They also scale well: existing demonstrations have shown more than 6,000 trapped atoms, with expectations of 10,000 atoms on a single device in the future.

However, these features come with challenges. Individual control of atoms in large arrays can be difficult and impractical, logical operation times have historically been slow, and atoms lost during execution require lengthy reloading steps. Co-designs of neutral atoms with QEC have therefore aimed to address these challenges.

Researchers at Harvard and QuEra realized a system that avoids challenges in control scalability through mid-circuit movement with zoned control, and mitigates the impact of slow cycle times through fast, transversal error-corrected operations. Movement-based systems have further been co-designed with high-encoding-rate QEC codes, such as hypergraph product codes and generalized bicycle codes (see the animation below), for effective quantum memories.

Animation: a generalized bicycle code memory implemented with atom movement

Another system, based on trapping two atomic species, was demonstrated at UChicago and led to our own corresponding co-design with QEC. In that work, we studied ways to mitigate control costs by exploiting global control within each species. We also studied how to avoid slow cycle times, proposing interleaved QEC blocks to enable fast, transversal operations without the need for movement.
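
Both of these co-designs lean on transversal logical operations, and it's worth seeing why they're fast: for CSS codes, a logical CNOT between two code blocks reduces to pairwise physical CNOTs between corresponding qubits, so the logical gate has constant depth regardless of block size. The sketch below again uses Qiskit for illustration, with a seven-qubit block size chosen arbitrarily (e.g., a [[7,1,3]] Steane block).

```python
# Sketch of a transversal logical CNOT between two CSS code blocks:
# one physical CNOT per qubit pair, so constant depth regardless of block size.
from qiskit import QuantumCircuit

n = 7  # physical qubits per code block; illustrative, not tied to a real layout
control_block = range(0, n)
target_block = range(n, 2 * n)

qc = QuantumCircuit(2 * n)
for c, t in zip(control_block, target_block):
    qc.cx(c, t)  # i-th qubit of one block to i-th qubit of the other

print(qc.depth())  # 1: all CNOTs act on disjoint pairs and can run in parallel
```

In a neutral atom array, this qubit pairing can be realized either by moving one block on top of the other, as in the zoned, movement-based approach, or by interleaving the two blocks so that corresponding atoms are already adjacent, as in our dual-species co-design.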

The Race Ahead

While neutral atoms are a promising platform for the FTQC regime, they're only one camel in the race. Today, it's still difficult to predict which hardware platform will demonstrate scalable, practical quantum advantage. A more immediate question, however, is which hardware implements QEC best, and answering it requires the continued co-design of new QEC architectures with evolving quantum hardware.

About the Authors

Joshua Viszlai is a PhD student at the University of Chicago advised by Fred Chong. His research studies the co-design of quantum error correction and underlying quantum hardware as well as software systems in the fault-tolerant regime. His work has addressed a range of architectural questions in surface codes, quantum LDPC codes, neutral atom arrays, and QEC decoding. He will be on the job market looking for academic positions this upcoming year.

Fred Chong is the Seymour Goodman Professor of Computer Architecture at the University of Chicago and the Chief Scientist for Quantum Software at Infleqtion. He was the Lead Principal Investigator of EPiQC (Enabling Practical-scale Quantum Computation), an NSF Expedition in Computing, as well as the Lead PI of a Wellcome Leap Q4Bio project. He is also an advisor to Quantum Circuits, Inc.

Disclaimer: These posts are written by individual contributors to share their thoughts on the Computer Architecture Today blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGARCH or its parent organization, ACM.