
Dagstuhl Seminar 24212

Classical-Quantum Synergies in the Theory and Practice of Quantum Error Correction

( May 20 – May 23, 2024 )


Permalink
Please use the following short URL to reference this page: https://www.dagstuhl.de/24212

Organizers
  • Carmen G. Almudéver (Technical University of Valencia, ES)
  • Leonid Pryadko (University of California at Riverside, US)
  • Valentin Savin (CEA - Grenoble, FR)
  • Bane Vasic (University of Arizona - Tucson, US)


Summary

Background and Motivation: From Classical to Quantum Error Correction and Fault-Tolerance

A fundamental consequence of the mathematical theory of information laid down by Shannon, error correcting codes play a vital role in ensuring the integrity of data in systems exposed to noise or errors. Classical error correcting codes have been crucial to the success of modern communication and data storage systems (from the Internet to mobile, satellite, and deep-space communications, and from disk to flash memory storage) and have found applications in other areas, such as pattern recognition, group testing, cryptography, and fault-tolerant computing.

Likewise, quantum error correcting codes are at the heart of all quantum information processing, from fault-tolerant quantum computing to reconciliation in quantum key distribution, quantum sensing, and reliable optical communications.

Computation in the presence of noise is a long-standing problem, going back to the 1950s and the celebrated works of von Neumann, Elias, Taylor, Kuznetsov, Winograd, Cowan, Dobrushin, Pippenger, and many others. The first attempt to apply general error correction techniques to the design of fault-tolerant computing systems is due to Elias (Computation in the presence of noise, 1962), and one of the first attempts to derive fundamental limits in fault-tolerant computing is due to Winograd and Cowan (Reliable computation in the presence of noise, 1963). These works focused on fault-tolerant classical (Boolean logic based) computation, prior to the advent of ultra-high reliability integrated circuits based on complementary metal-oxide-semiconductor (CMOS) technology, but they still inspire and resonate with current approaches to fault tolerance, e.g., to support the ongoing miniaturization of emerging data processing and storage devices (technology scaling).

In parallel, recent years have seen significant advances in the field of quantum technologies, promising a disruptive impact on information and computing technologies. Basic requirements for quantum computation have been demonstrated in various technologies, including semiconductor and superconducting materials, photons, and trapped ions. Nonetheless, to unleash the full computational power that quantum computers can bring, a critical task is to protect the quantum computation from the inherent quantum noise. The discovery of quantum error correcting codes in the mid-90s paved the way to noise-resilient quantum computation, developed through the works of Calderbank, Shor, Steane, Sloane, Gottesman, Knill, Kitaev, Freedman, Meyer, Preskill, and many others.
The integration of quantum error correction (QEC) into quantum computation led to the development of the fault-tolerant quantum computing framework, aimed at countering the effects of noise on stored quantum information, faulty state preparation, faulty quantum gates, and faulty measurements. Such an integration of QEC and fault-tolerance techniques in quantum computing systems is key to the development of a universal large-scale quantum computer that achieves its expected exceptional potential.

While classical and quantum error correction may be regarded as different paradigms, involving different ways of thinking and, to a certain extent, different research communities, it turns out that they are closely related. One may mention here the formalism of quantum stabilizer codes, which notably allows moving from a continuous to a discrete model of quantum error correction; of particular interest within this formalism is the Calderbank-Shor-Steane (CSS) construction of a quantum code from a pair of orthogonal classical binary codes.
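The CSS condition can be checked directly in a few lines. The following is a minimal numpy sketch (the Steane code, built from the [7,4] Hamming code, is a standard example; the helper `gf2_rank` is our own illustrative routine, not a library call):

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code; its dual (the [7,3]
# simplex code) is contained in the Hamming code, so H @ H.T = 0 mod 2.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

# CSS construction: choose H_X = H_Z = H. Every X-stabilizer commutes
# with every Z-stabilizer iff H_X @ H_Z.T = 0 over GF(2).
H_X, H_Z = H, H
assert not (H_X @ H_Z.T % 2).any(), "CSS condition violated"

def gf2_rank(M):
    """Row-reduce a binary matrix over GF(2) and return its rank."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return rank

# Number of logical qubits of a CSS code: k = n - rank(H_X) - rank(H_Z).
n = H.shape[1]
k = n - gf2_rank(H_X) - gf2_rank(H_Z)
print(n, k)  # 7 1 -> the [[7,1,3]] Steane code
```

The same check applies verbatim to any pair of orthogonal classical binary codes, which is precisely the discrete-model viewpoint mentioned above.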

CSS codes can alternatively be described as chain complexes involving three spaces, where the boundary operators are defined (up to a choice of bases) by the two orthogonal classical codes. This homological point of view is essentially the one adopted by topological constructions, where quantum codes are produced from cellular decompositions of surfaces (e.g., the torus) or higher-dimensional manifolds. In parallel, the powerful machinery of abstract homological algebra has proved very effective in providing new constructions of quantum codes, among which of particular interest are codes with constant-weight stabilizer generators, referred to as quantum low-density parity-check (qLDPC) codes. The class of qLDPC codes encompasses the above topological constructions, and is the only class of quantum codes known to contain families with both constant non-zero rate and non-zero fault-tolerant error-correction threshold. Also worth mentioning are the recent constructions of asymptotically good qLDPC codes (with constant rate and relative minimum distance), auguring practical constructions with increased error correction capacity or reduced qubit overhead.

However, unlike their classical counterparts, which are equipped with efficient message-passing decoding algorithms, qLDPC codes are difficult to decode. Decoding a qLDPC code requires locating not a single most likely error, but the most likely equivalence class of mutually degenerate errors (degeneracy is an inherent characteristic of any qLDPC code), which tends to inhibit the convergence of message-passing algorithms designed for classical codes. Moreover, the time budget available to perform a single error correction round varies with the quantum technology, but a first-order approximation is a period of hundreds of nanoseconds.
Hardware implementations meeting such a time constraint will require massive parallel processing, which has to be enabled by both the structure of the quantum code and the decoding algorithm.
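One concrete bridge between the homological machinery and practical code construction is the hypergraph-product construction of Tillich and Zémor, which turns any classical parity-check matrix into a CSS qLDPC code. A minimal numpy sketch (the seed code here, a length-3 repetition code, is an illustrative choice that yields a small surface-code-like example):

```python
import numpy as np

def hypergraph_product(H1, H2):
    """Hypergraph product of two classical parity-check matrices:
    returns (H_X, H_Z) of a CSS code on n1*n2 + m1*m2 physical qubits.
    Sparse (LDPC) seeds yield sparse (qLDPC) stabilizers."""
    m1, n1 = H1.shape
    m2, n2 = H2.shape
    H_X = np.hstack([np.kron(H1, np.eye(n2, dtype=np.uint8)),
                     np.kron(np.eye(m1, dtype=np.uint8), H2.T)])
    H_Z = np.hstack([np.kron(np.eye(n1, dtype=np.uint8), H2),
                     np.kron(H1.T, np.eye(m2, dtype=np.uint8))])
    return H_X % 2, H_Z % 2

# Seed: parity-check matrix of the length-3 repetition code.
H = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=np.uint8)

H_X, H_Z = hypergraph_product(H, H)
# Stabilizers commute: H_X @ H_Z.T = H1 (x) H2^T + H1 (x) H2^T = 0 mod 2,
# which is exactly the CSS condition.
assert not (H_X @ H_Z.T % 2).any()
print(H_X.shape, H_Z.shape)  # (6, 13) (6, 13): 3*3 + 2*2 = 13 qubits
```

The CSS condition holds by construction for any pair of seeds, which is what makes the homological viewpoint such a productive source of new qLDPC families.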

To tackle these challenges, this Dagstuhl Seminar aimed at promoting interactions among coding theorists, quantum physicists, mathematicians, and computer and hardware engineers, to discuss achievements, strategies, and remaining gaps in the integration of QEC and fault-tolerance techniques into practical quantum computers, towards a comprehensive and mutual understanding of theory and engineering practice.

Topics Covered by the Seminar

Classical and Quantum LDPC codes. The quest for low-complexity decoders of classical LDPC codes has resulted in the emergence of soft-decision iterative message-passing decoders, e.g., based on the belief-propagation (BP) or min-sum (MS) algorithms. In the quantum case, decoding a CSS qLDPC code boils down to decoding the two constituent classical LDPC codes (e.g., assuming separate decoding of X and Z errors, which does not preclude taking into account possible correlations between the two types of error). In homological terms, the goal of the decoder is to find the most likely chain (error) - or more specifically, the most likely class of chains - corresponding to a given boundary (syndrome), where two chains are equivalent if their sum is in the trivial homology class.
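To make the syndrome-decoding loop concrete, here is a toy hard-decision bit-flip decoder (a Gallager-style cousin of BP/MS, not the seminar's method); the code and error below are illustrative choices:

```python
import numpy as np

def bit_flip_decode(H, syndrome, max_iter=20):
    """Bit-flip decoding: repeatedly flip the bit touching the largest
    number of unsatisfied checks until the estimate explains the syndrome."""
    n = H.shape[1]
    error = np.zeros(n, dtype=np.uint8)
    for _ in range(max_iter):
        residual = (syndrome + H @ error) % 2   # checks still unsatisfied
        if not residual.any():
            return error                        # syndrome explained
        votes = residual @ H                    # unsatisfied checks per bit
        error[np.argmax(votes)] ^= 1            # flip the worst offender
    return error  # may fail to converge within max_iter

# Parity checks of the length-5 repetition code (a classical toy
# stand-in for one sector, X or Z, of a CSS code).
H = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1]], dtype=np.uint8)

true_error = np.array([0, 0, 1, 0, 0], dtype=np.uint8)
syndrome = H @ true_error % 2
estimate = bit_flip_decode(H, syndrome)
print(estimate)  # [0 0 1 0 0]
```

In the quantum setting the decoder only needs to return an error in the correct equivalence class, not `true_error` itself; this degeneracy is precisely what trips up off-the-shelf classical message passing.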

Maximum-likelihood decoders exist for the toric code (yet their complexity is too high for practical applications), but they are out of reach for arbitrary topological or qLDPC codes. Developing new approaches to accurate and hardware-friendly decoding of quantum codes is a crossroads of theory and practice, and of classical and quantum coding.

Classical-quantum synergies can plausibly provide meaningful insights into the theory and practice of qLDPC codes, for example by devising optimized constructions for short qLDPC codes, improving the decoding performance through modified message-passing or smart post-processing techniques, using knowledge of quantum trapping sets to cope with code degeneracy, devising machine-learning based decoding solutions, conceiving efficient decoding algorithms that exploit soft information on measurement errors, or developing codes and decoding algorithms amenable to single-shot error correction.

Particular challenges discussed during the seminar were broadly related to novel constructions of qLDPC codes and expansion properties of the associated graphs; novel decoding algorithms for topological and qLDPC codes, including message-passing based decoding, tensor-network decoding, and machine-learning based decoding; applications of quantum error correction in areas such as quantum computing or quantum networks; and the design of entanglement-assisted quantum codes.

Fault-Tolerant Quantum Computation. Quantum memory with a topological or, more generally, qLDPC stabilizer code can be implemented with repeated syndrome measurements, where errors are detected by the difference between syndromes measured in consecutive rounds. It is also worth noting that QEC with a sufficiently short syndrome measurement cycle is needed throughout the operation of a quantum computer, and measurement circuits have to be designed with fault tolerance in mind, e.g., to prevent a single error from spreading to multiple qubits. More generally, when non-trivial gates are executed on the logical subspace, detection events have to be chosen for each particular circuit. The gate errors for the hardware in use, as well as the specific choice of the circuit and of the detection events, determine the error model and the structure of the quantum error-correcting code that has to be decoded.

Pauli error channels associated with specific gates on specific qubits are most commonly used for decoding. Actual error probabilities may also depend on the parameters chosen for each qubit (e.g., the working frequencies chosen for individual qubits in the case of superconducting qubits), as well as on manufacturing variability. Other important error types include non-Pauli errors (decay, unitary errors, etc.), as well as leakage from the computational subspace. Furthermore, with some hardware, syndrome measurement may provide additional soft information about the measurement outcome; taking such information into account may dramatically improve the decoding accuracy. While in theoretical analysis such details can often be ignored, in practice, for a quantum computer operating close to the threshold, a relatively small improvement in the decoding accuracy can reduce the required overhead by orders of magnitude, or even be required to attain fault tolerance.
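The notion of a detection event as a difference between consecutive syndrome rounds can be sketched in a few lines; the measurement record below is a made-up toy example, not data from any experiment:

```python
import numpy as np

# Toy record of repeated syndrome measurements: each row is one round of
# the same four stabilizer checks (values are purely illustrative).
rounds = np.array([[0, 0, 0, 0],   # reference round
                   [0, 1, 1, 0],   # a data error fires two checks...
                   [0, 1, 1, 0],   # ...and its syndrome then persists
                   [0, 1, 0, 0]],  # a measurement error flips one check
                  dtype=np.uint8)

# A detection event marks a *change* between consecutive rounds, so a
# persistent syndrome produces no further events, whereas a measurement
# error produces events in consecutive rounds on the same check.
detections = rounds[1:] ^ rounds[:-1]
print(detections)
```

Decoding then operates on the space-time graph of these events rather than on a single syndrome vector, which is how measurement errors are handled alongside data errors.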

Particular challenges discussed during the seminar were broadly related to a variety of Pauli error channels, including those derived from Clifford circuits with gate error models customized for specific hardware, related unification of decoding protocols for qubit-based codes, decoding using soft syndrome information, coherent noise and quantum error correction, subsystem and Floquet codes, effective consideration of geometric and connectivity requirements, fault-tolerant quantum computation, and fault-tolerant design of algorithms and protocols.

From Noisy Intermediate Scale Devices to Large Scale Quantum Computing. While QEC is the only presently known gateway to reap the benefits of computational quantum algorithms, a robust, scalable, and fully functional QEC technique that allows performing fault-tolerant quantum computations has not yet been demonstrated experimentally. Arguably, QEC is the only technology still lacking to realize the vision of useful large-scale quantum computation. However, there are already a few demonstrations of the potential to protect quantum information on noisy intermediate-scale quantum (NISQ) processors based on superconducting qubits, such as: i) the experimental implementation of a distance-3 surface code on the Zuchongzhi 2.1 superconducting quantum processor, showing that by executing several consecutive error correction cycles, the logical error can be significantly reduced after applying corrections (Realization of an Error-Correcting Surface Code with Superconducting Qubits); and ii) the experimental demonstration that increasing the code distance leads to better logical qubit performance, using an expanded Sycamore device with 72 transmon qubits (Suppressing quantum errors by scaling a surface code logical qubit). NISQ technology may serve as a first step towards demonstrating a number of QEC protocols suitable to the intermediate scale, which in the long term may also have useful implications for large-scale quantum technologies. Yet, in a large-scale quantum computer, the QEC decoder design faces significant challenges arising from the need to integrate various system constraints, such as accuracy, bandwidth, latency, power consumption, and scalability. QEC decoders need to be powerful enough to accurately correct quantum errors, fast enough to keep pace with qubit decoherence, energy efficient enough to meet stringent power-consumption requirements, and highly scalable to meet the needs of fault tolerance.
Achieving all these constraints is extremely challenging, and might not be possible with existing solutions. Recent research has focused on the design of hardware architectures capable of efficiently accommodating QEC techniques, where considerations such as timing, latency, power, and wiring between the quantum chip and the QEC processor take a prominent place, as they are critical for creating a viable solution.

The main challenges discussed during the seminar ranged from low-qubit overhead fault-tolerant schemes and efficient implementation of small QEC on NISQ processors to scalable modular quantum computing architectures for quantum error correction and large scale fault tolerance, while also considering software implementation of quality decoders, decoding architectures that lend themselves to high-speed and low energy consumption, and recent progress on the hardware implementation and prototyping of QEC decoders.

Organization of the Seminar

The seminar brought together 30 participants, both senior and talented young researchers, from 13 countries (Denmark, Finland, France, Germany, Great Britain, India, Ireland, Netherlands, Russia, Switzerland, Spain, Taiwan, and the United States), with research expertise in relevant areas, e.g., classical and quantum coding theory, hardware architectures and designs of error correcting codes, quantum information processing and software, fault-tolerant quantum computation and fault-tolerant design of algorithms and protocols, quantum technologies, and quantum computer architecture design.

The primary objective of the seminar was to foster an exchange of ideas on challenges faced by quantum error correction, evolving through presentations as well as discussions aimed at realizing the potential of a large community bringing diverse viewpoints to the table. To facilitate this, the two-and-a-half-day program of the seminar comprised a series of 14 invited talks, organized in seven plenary sessions, as well as five time slots for breakout sessions, giving more time for discussions and the organization of ad-hoc working groups (bringing together a large part of the participants). The full report includes the abstracts of all talks and of the three working groups.

Copyright Carmen G. Almudéver, Leonid Pryadko, Valentin Savin, and Bane Vasic

Motivation

Recent years have seen significant advances in the field of quantum technologies, consolidating the development of the basic requirements for quantum computation. Protecting quantum computation from noise and decoherence has become more topical than ever, bringing quantum error correction fairly close to integration into practical quantum computers. However, to make such an integration viable, further innovations and advances are required in both theoretical research and engineering practice. This Dagstuhl Seminar is intended to be an interaction forum for senior and talented junior researchers, crossing the boundaries between classical and quantum coding theory and related areas of quantum technology and engineering. The aim is to exchange ideas on the challenges and the toolsets that have emerged in the different areas, towards creating a diverse and inclusive research network where researchers from different domains can share ideas and knowledge, and inspire each other. Topics to be covered include:

Quantum error correction and fault tolerant quantum computation:

  • Connections and interactions between classical and quantum coding theory,
  • Theory and practice of topological quantum codes, quantum LDPC, and quantum Polar codes,
  • Decoding aspects of topological and quantum LDPC codes,
  • Non-qubit based quantum error-correcting codes, in particular a variety of oscillator-based codes and Gottesman-Kitaev-Preskill codes and their concatenation with quantum stabilizer codes,
  • A variety of Pauli error channels, including those derived from Clifford circuits with gate error models customized for specific hardware,
  • Related unification of decoding protocols for qubit-based codes,
  • Decoding using soft syndrome information, decoding in the presence of leakage errors,
  • Codes and decoding algorithms for single-shot error correction,
  • Subsystem codes (both in their own right and as part of fault-tolerant gadgets).

From noisy intermediate scale quantum era to large-scale fault tolerance:

  • Low-qubit overhead fault-tolerant schemes and demonstration of small quantum error correcting codes in noisy intermediate-scale quantum devices,
  • Software implementation of quality decoders,
  • Quantum hardware architectures for quantum error correction and large-scale fault tolerance,
  • Optimization of quantum error correction for specific technology constraints or noise models,
  • Hardware implementations and prototyping of quantum error correction decoders,
  • Challenges of the integration of quantum error correction and control systems.
Copyright Carmen Garcia Almudéver, Leonid Pryadko, Valentin Savin, and Bane Vasic

Participants


On-site
  • Alexei Ashikhmin (Bell Labs - Murray Hill, US)
  • Kenneth R. Brown (Duke University - Durham, US)
  • Michael Epping (DLR - Sankt Augustin, DE)
  • Abdul Fatah (Atlantic Technological University - Galway, IE)
  • Omar Fawzi (ENS - Lyon, FR)
  • Carmen G. Almudéver (Technical University of Valencia, ES)
  • Shayan Srinivasa Garani (Indian Institute of Science - Bangalore, IN)
  • Francisco García Herrero (Complutense University of Madrid, ES)
  • Ashutosh Goswami (University of Copenhagen, DK)
  • Robert König (TU München, DE)
  • Ching-Yi Lai (National Yang Ming Chiao Tung University - Hsinchu, TW)
  • Mehdi Mhalla (LIG - Grenoble, FR)
  • Ioana Moflic (Aalto University, FI)
  • Davide Orsucci (DLR - Oberpfaffenhofen, DE)
  • Alexandru Paler (Aalto University, FI)
  • Leonid Pryadko (University of California at Riverside, US)
  • Joseph M. Renes (ETH Zürich, CH)
  • Eleanor Rieffel (NASA - Moffett Field, US)
  • Laura Rodríguez Soriano (Technical University of Valencia, ES)
  • Joschka Roffe (University of Edinburgh, GB)
  • Pradeep Sarvepalli (Indian Institute of Technology Madras, IN)
  • Valentin Savin (CEA - Grenoble, FR)
  • Emina Soljanin (Rutgers University - Piscataway, US)
  • Matt Steinberg (TU Delft, NL)
  • Barbara Terhal (TU Delft, NL)
  • Bane Vasic (University of Arizona - Tucson, US)
  • Gilles Zémor (University of Bordeaux, FR)
Remote:
  • Mackenzie Hooper Shaw (TU Delft, NL)
  • Liang Jiang (University of Chicago, US)
  • Pavel Panteleev (Moscow State University, RU)

Classification
  • Information Theory

Keywords
  • quantum error correction
  • fault-tolerant quantum computing
  • quantum LDPC codes
  • hardware implementation and prototyping
  • noise models