Benchmarking and Assessment

Sandia has long been at the cutting edge of the benchmarking and assessment of quantum computers. These efforts are led by the Quantum Performance Laboratory (QPL), a research and development (R&D) group within Sandia National Laboratories. The QPL helps address the need for benchmarking and characterization through mathematical theory, numerical analysis, creation of new algorithms and software, and experimental tests and demonstrations in real-world quantum computing systems. The QPL studies the performance of quantum computing devices, and develops practical methods to assess it. Our research produces:

  • insight into the failure mechanisms of real-world quantum computing processors,
  • well-motivated metrics of low- and high-level performance,
  • predictive models of multi-qubit quantum processors, and
  • concrete, tested protocols for evaluating as-built experimental processors.

Areas of Research

The QPL aims to extend the frontiers of understanding the performance of quantum computers and quantum computing components — e.g., qubits, gates, logical components and subroutines, and fully integrated quantum computing systems. Key areas of research include:

Gate Set Tomography (GST)

GST is a widely used technique for comprehensive, self-calibrating, and high-precision reconstruction of a full set of quantum logic gates. GST is a tool for characterizing one- or two-qubit gate sets, and the team is actively extending and adapting it to solve a variety of problems in quantum computer characterization: extensions to more qubits, to physics-based error models, to gate sets containing mid-circuit measurements, and to time-resolved tracking of quantum gates. Many of these cutting-edge techniques are available in pyGSTi.
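For a sense of what this looks like in practice, the following is a minimal sketch of a single-qubit GST run using pyGSTi's model-pack workflow. It is illustrative only: the exact function names and default options differ between pyGSTi versions, and the model pack (smq1Q_XYI), depolarization strengths, circuit depths, and sample counts here are assumptions chosen for the example, with simulated rather than experimental data.

    import pygsti
    from pygsti.modelpacks import smq1Q_XYI

    # Ideal single-qubit model (X(pi/2), Y(pi/2), idle) and a standard GST experiment design
    target_model = smq1Q_XYI.target_model()
    exp_design = smq1Q_XYI.create_gst_experiment_design(max_max_length=32)

    # Stand-in for real data: simulate counts from a slightly depolarized copy of the target
    noisy_model = target_model.depolarize(op_noise=0.01, spam_noise=0.001)
    dataset = pygsti.data.simulate_data(noisy_model, exp_design.all_circuits_needing_data,
                                        num_samples=1000, seed=2023)

    # Run standard GST and write an interactive HTML report
    data = pygsti.protocols.ProtocolData(exp_design, dataset)
    results = pygsti.protocols.StandardGST().run(data)
    pygsti.report.construct_standard_report(results, title="Single-qubit GST example").write_html("gst_report")

In a real experiment, the simulated dataset would be replaced by counts collected from hardware for the circuits in the experiment design.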

Randomized Benchmarking (RB)

RB is a widely used method for measuring the average performance of a set of quantum gates. QPL scientists developed direct randomized benchmarking, which can measure the performance of more qubits at once than the industry-standard RB method. Subsequently, the team developed an even more streamlined RB method that can benchmark hundreds of qubits, and introduced time-resolved RB.
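At its core, RB data analysis is an exponential-decay fit. The sketch below shows the standard fit and the conventional conversion of the decay constant into an average error rate; the depths, success probabilities, and one-qubit rescaling factor are illustrative assumptions, not measured data.

    import numpy as np
    from scipy.optimize import curve_fit

    # RB summary statistic: mean success probability vs. benchmark depth m,
    # fit to the standard exponential-decay model P(m) = A + B * f**m.
    def rb_decay(m, A, B, f):
        return A + B * f**m

    # Illustrative placeholder numbers (NOT measured data)
    depths = np.array([1, 2, 4, 8, 16, 32, 64, 128])
    success = np.array([0.99, 0.98, 0.97, 0.95, 0.92, 0.85, 0.74, 0.60])

    (A, B, f), _ = curve_fit(rb_decay, depths, success, p0=[0.5, 0.5, 0.99])

    # Conventional conversion from the decay constant f to an average error rate
    # per random layer for n qubits: r = (2**n - 1) / 2**n * (1 - f)
    n = 1
    r = (2**n - 1) / 2**n * (1 - f)
    print(f"decay f = {f:.4f}, average error rate r = {r:.2e}")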

Volumetric Benchmarking

The QPL introduced volumetric benchmarking, which is a framework for diverse and informative benchmarking of quantum computers. We used this framework to demonstrate scalable benchmarking of real quantum computing hardware, and to help construct the first commercially-focused benchmarking suite.
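Conceptually, a volumetric benchmark records, for each circuit shape on a width-by-depth grid, whether the processor clears some success threshold. The sketch below illustrates that bookkeeping only; the circuit shapes, the 2/3 threshold, and the placeholder run_test_suite function are assumptions standing in for a real benchmark suite and real hardware runs.

    import numpy as np

    # Volumetric benchmarking organizes results on a (width, depth) grid:
    # at each circuit shape, record whether a success metric clears a threshold.
    widths = [1, 2, 4, 8]          # number of qubits
    depths = [2, 4, 8, 16, 32]     # number of circuit layers
    threshold = 2.0 / 3.0          # illustrative pass/fail criterion

    def run_test_suite(width, depth):
        """Placeholder for running the test circuits of this shape on hardware
        (or a simulator) and returning a mean success probability."""
        rng = np.random.default_rng(1000 * width + depth)
        return float(np.clip(1.0 - 0.005 * width * depth + rng.normal(0, 0.02), 0.0, 1.0))

    grid = {(w, d): run_test_suite(w, d) >= threshold for w in widths for d in depths}
    for w in widths:
        print(f"width {w:2d}:", ["pass" if grid[(w, d)] else "fail" for d in depths])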

Diagnosing Complex Sources of Errors

Real quantum computers can suffer from many types of complex errors or noise. Usually, these are either condensed into “error rates” or modeled using quantum process matrices, but real-world faults often don’t fit into these constrained frameworks. A prominent theme of QPL research is the development of theories to understand these subtle and complex errors, and of techniques to diagnose them. Types of errors that the QPL studies include crosstalk, non-Markovian errors, and more.
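As a concrete example of those constrained frameworks, a gate's error is often compressed into a single number such as the average gate infidelity computed from its process (Pauli transfer) matrix. The sketch below does this for a depolarizing channel; the channel and its strength are illustrative assumptions.

    import numpy as np

    d = 2  # Hilbert-space dimension of one qubit

    def depolarizing_ptm(p):
        """Pauli transfer matrix of a single-qubit depolarizing channel (illustrative)."""
        return np.diag([1.0, 1.0 - p, 1.0 - p, 1.0 - p])

    def average_gate_infidelity(ptm_actual, ptm_target):
        # Entanglement fidelity from Pauli transfer matrices: F_ent = Tr(R_target^T R_actual) / d^2
        f_ent = np.trace(ptm_target.T @ ptm_actual) / d**2
        # Average gate fidelity: F_avg = (d * F_ent + 1) / (d + 1)
        return 1.0 - (d * f_ent + 1.0) / (d + 1.0)

    ideal = np.eye(4)  # PTM of the ideal identity (idle) gate
    print(average_gate_infidelity(depolarizing_ptm(0.01), ideal))  # -> 0.005 (= p/2)

Crosstalk, non-Markovian dynamics, and other structured errors are precisely the effects that such single-number summaries and static process matrices fail to capture.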

Featured Product

pyGSTi Software

The QPL has developed and maintains the open-source pyGSTi software package. It provides researchers and engineers around the world with optimized, reliable implementations of the QPL’s methods for assessing performance of quantum computers. PyGSTi is a mature Python package providing powerful tools for simulation, tomography, benchmarking, data analysis, robust reporting and data visualization. In addition to extensive documentation and tutorials, a survey article describes the capabilities of pyGSTi. 

Although pyGSTi originated as a reference implementation for gate set tomography (GST), it grew to contain many protocols for characterizing quantum computing components and quantifying their performance. These include focused characterization protocols for 1-2 qubits, and holistic benchmarks for hundreds of qubits.

PyGSTi has been used by groups around the world to test and characterize many types of quantum computing hardware.

To learn more about the Quantum Performance Lab team, its publications, events, and job opportunities, please visit qpl.sandia.gov.