SESSION 9a: Usage of NISQ devices -- Oneill

Chair: Pablo Poggi (University of New Mexico)
3:45pm - 4:15pm Kevin Kuper, University of Arizona
Native and Trotter errors during intermediate-depth quantum simulations on a small, highly accurate quantum processor
Abstract. Noisy, intermediate-scale quantum (NISQ) devices are improving rapidly but remain far short of the requirements for fault-tolerant computation. In the meantime, much of the effort in the field is focused on the development of analog quantum simulators that operate without error correction. We are currently exploring the capabilities and limitations of such devices, using as our test bed a small, highly accurate quantum (SHAQ) processor based on the combined electron-nuclear spin of a single Cs-133 atom in the electronic ground state. The Cs-atom-based SHAQ processor is controlled with rf and microwave magnetic fields, is fully programmable in its accessible 16-dimensional Hilbert space, and provides for direct measurement of the fidelity of the evolving quantum state, as well as more conventional quantum simulation of the time evolution of observables such as magnetization. We have used this SHAQ processor to study the impact of both native and Trotter errors on such simulations, finding that “macroscopic” properties such as magnetization are quantitatively less sensitive to errors than “microscopic” properties such as quantum state fidelities and survival probabilities. Lastly, we find that the trade-off between native and Trotter errors leads to an optimal point of operation where their joint effects are minimized.
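The interplay described above can be illustrated numerically. The sketch below (not from the talk; a hypothetical two-spin Ising model with a transverse field) shows how first-order Trotter error shrinks as the number of steps grows; on real hardware, native gate errors grow with step count, so their combined effect has a minimum at a finite number of steps.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Two-spin Hamiltonian H = A + B with non-commuting parts:
# A = ZZ coupling, B = transverse field on each spin
A = np.kron(Z, Z)
B = np.kron(X, I2) + np.kron(I2, X)
H = A + B

t = 1.0
U_exact = expm(-1j * H * t)

def trotter(n):
    """First-order Trotter approximation of exp(-iHt) with n steps."""
    step = expm(-1j * A * t / n) @ expm(-1j * B * t / n)
    return np.linalg.matrix_power(step, n)

# Trotter error (spectral norm) shrinks roughly as 1/n; on hardware,
# native errors accumulate with n, producing an optimal finite n.
errors = [np.linalg.norm(U_exact - trotter(n), 2) for n in (1, 4, 16, 64)]
```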
4:15pm - 4:45pm Andrew Sornborger, Los Alamos National Laboratory
Variational fast forwarding for quantum simulation beyond the coherence time
Abstract. Trotterization-based, iterative approaches to quantum simulation are restricted to simulation times less than the coherence time of the quantum computer, which limits their utility in the near term. Here, we present a hybrid quantum-classical algorithm, called Variational Fast Forwarding (VFF), for decreasing the quantum circuit depth of quantum simulations. VFF seeks an approximate diagonalization of a short-time simulation to enable longer-time simulations using a constant number of gates. Our error analysis provides two results: (1) the simulation error of VFF scales at worst linearly in the fast-forwarded simulation time, and (2) our cost function's operational meaning as an upper bound on average-case simulation error provides a natural termination condition for VFF. We implement VFF for the Hubbard, Ising, and Heisenberg models on a simulator. Finally, we implement VFF on Rigetti's quantum computer to show simulation beyond the coherence time.
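The fast-forwarding idea can be sketched with exact linear algebra standing in for the variational diagonalization. In this toy single-qubit example (all parameters hypothetical; VFF would find the diagonalization variationally on hardware rather than exactly), the short-time step U(dt) is diagonalized as W D W⁻¹, so U(N·dt) is obtained as W Dᴺ W⁻¹ with a circuit depth independent of N.

```python
import numpy as np
from scipy.linalg import expm

# Toy single-qubit Hamiltonian; in VFF the diagonalization below would
# be found variationally on the quantum computer, not exactly.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.7 * X + 0.3 * Z

dt = 0.1
U_dt = expm(-1j * H * dt)            # short-time simulation circuit

# Approximate diagonalization U_dt = W D W^{-1} (here exact)
eigvals, W = np.linalg.eig(U_dt)
D = np.diag(eigvals)

# Fast-forward: N*dt of evolution at constant circuit depth
N = 1000
U_ff = W @ np.linalg.matrix_power(D, N) @ np.linalg.inv(W)
U_exact = expm(-1j * H * (N * dt))
err = np.linalg.norm(U_ff - U_exact, 2)
```

Because the diagonalization here is exact, the fast-forwarding error is at numerical precision; in VFF the approximation quality of the variational diagonalization sets the error, which scales at worst linearly in the fast-forwarded time.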
4:45pm - 5:15pm Yukio Kawashima, 1QB Information Technologies
Scaling up quantum chemistry simulations using density matrix embedding theory
Abstract. The simulation of large molecules on quantum computers is promising: the required computational resources scale only polynomially with molecular size, whereas they scale exponentially on classical computers. However, the computational resources of current quantum devices remain limited, so computational costs must be reduced. Problem decomposition (PD) techniques are powerful tools for reducing the computational cost of quantum chemistry simulations while maintaining accuracy, and their application shows promise in helping to scale up the simulation of larger molecules. We have developed QEMIST (Quantum-Enabled Molecular ab Initio Simulation Toolkit), a platform for both classical and quantum simulation of large molecules that employs PD techniques. One PD technique implemented in QEMIST is density matrix embedding theory (DMET). DMET decomposes a molecule into fragments, each of which is treated as an open quantum system entangled with the remaining fragments, which together constitute its environment. We created an interface between DMET and quantum algorithms to perform quantum chemistry simulations. A DMET-based simulation of a ring of 10 hydrogen atoms reduced the required number of qubits from 20 to 4; the error in the calculated molecular energy was within 1.0 kcal/mol of the exact value. Employing DMET improved our ability to simulate larger molecules relative to conventional simulation techniques that do not make use of PD.
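The qubit-count reduction quoted above can be reproduced with a toy DMET bath construction. The sketch below is an assumption-laden stand-in (a Hückel-like nearest-neighbor hopping model in place of the H10 ring, half filling, a one-atom fragment): the bath orbitals come from an SVD of the environment-fragment block of the mean-field density matrix, so one fragment orbital plus one bath orbital gives 2 spatial orbitals, i.e. 4 spin-orbital qubits, versus 20 for the full ring.

```python
import numpy as np

# Hückel-like nearest-neighbor hopping model standing in for the H10 ring
n = 10
h = np.zeros((n, n))
for i in range(n):
    h[i, (i + 1) % n] = h[(i + 1) % n, i] = -1.0

# Mean-field reference: occupy the 5 lowest orbitals (half filling)
_, C = np.linalg.eigh(h)
C_occ = C[:, :5]
D = C_occ @ C_occ.T              # one-particle density matrix

# DMET bath construction for a one-atom fragment
frag = [0]
env = [i for i in range(n) if i not in frag]
# SVD of the environment-fragment coupling yields at most |frag| bath orbitals
_, s, _ = np.linalg.svd(D[np.ix_(env, frag)], full_matrices=False)
n_bath = int(np.sum(s > 1e-8))
n_spatial = len(frag) + n_bath   # orbitals in the embedding problem
n_qubits = 2 * n_spatial         # spin orbitals -> qubits (4, versus 20)
```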
5:15pm - 5:45pm Jaimie S. Stephens, Sandia National Laboratories
A hybrid quantum approximate optimization algorithm incorporating classical heuristics
Abstract. The Quantum Approximate Optimization Algorithm (QAOA) (Farhi et al. 2014) can approximately solve NP-hard problems. However, the performance of QAOA is not well understood, especially relative to problem-specific classical heuristics. We propose boosting the performance of QAOA by leveraging classical heuristics as black-box oracles in a generic and automatic way. This allows QAOA to benefit from improved classical algorithms as they are discovered. We replace the QAOA cost Hamiltonian with an implicit cost operator derived from a classical heuristic of choice, allowing QAOA to optimize over the output of the classical heuristic. Our approach also eliminates the need for specially designed mixing Hamiltonians for constrained problems. We demonstrate our hybrid QAOA on several discrete optimization problems using high-quality classical heuristics, including local search. We observe that (i) the performance of our hybrid QAOA improves as the computational cost of the local search is increased, and (ii) our hybrid QAOA outperforms both QAOA and the selected classical heuristics on their own. We thus offer a new means for QAOA to automatically benefit from classical advances. Sandia National Laboratories is managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a subsidiary of Honeywell International, Inc., for the U.S. DOE, National Nuclear Security Administration under contract DE-NA0003525.
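The implicit-cost-operator idea can be sketched as a small statevector simulation. In the toy below (a triangle MaxCut instance; `heuristic_cost` is a hypothetical stand-in that simply returns the raw cut value, whereas the hybrid scheme would run a classical heuristic such as local search seeded from the measured bitstring), the diagonal phase is built from the oracle's outputs rather than from an explicit cost Hamiltonian.

```python
import numpy as np
from itertools import product

# Toy MaxCut instance: a triangle
edges = [(0, 1), (1, 2), (0, 2)]
n = 3

def heuristic_cost(bits):
    # Black-box classical oracle. Here it is just the raw cut value; in
    # the hybrid scheme this would instead run a heuristic (e.g. local
    # search) seeded from `bits` and return the improved objective.
    return sum(bits[i] != bits[j] for i, j in edges)

# Implicit diagonal cost operator built from the oracle's outputs
costs = np.array([heuristic_cost(b) for b in product([0, 1], repeat=n)],
                 dtype=float)

def qaoa_expectation(gamma, beta):
    state = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)   # |+>^n
    state = np.exp(-1j * gamma * costs) * state             # oracle phase
    # Standard transverse-field mixer exp(-i*beta*X) on each qubit
    mix = np.array([[np.cos(beta), -1j * np.sin(beta)],
                    [-1j * np.sin(beta), np.cos(beta)]])
    for q in range(n):
        s = np.moveaxis(state.reshape([2] * n), q, 0).reshape(2, -1)
        s = mix @ s
        state = np.moveaxis(s.reshape([2] * n), 0, q).reshape(-1)
    return float(np.real(np.vdot(state, costs * state)))

# Classical outer loop: grid search over the two angles
best = max(qaoa_expectation(g, b)
           for g in np.linspace(0, np.pi, 21)
           for b in np.linspace(0, np.pi, 21))
```

At gamma = 0 the expectation equals the uniform-sampling average of the oracle's outputs, so any optimized angles can only match or beat random sampling of the heuristic.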
5:45pm - 6:15pm Arik Avagyan, National Institute of Standards and Technology, Boulder
State tomography with photon counting after a beam splitter
Abstract. Quantum optics offers several proposed routes to a scalable quantum computer. In order to characterize such a computer, one needs to be able to perform state tomography on quantum states of light. A popular tomographic procedure, homodyne detection, interferes the unknown state with a strong coherent state, called the local oscillator (LO), on a beam splitter. The output beams are measured by photodiodes whose signals are subtracted and normalized. By changing the LO phase, it is possible to infer the optical state in the mode matching the LO. In this work we determine what can additionally be learned about the contents of the modes not matching the LO by counting photons in one or both outgoing paths after the beam splitter, keeping the LO mode fixed but varying its phase and amplitude. We prove that, given the photon-count probabilities of just one of the counters as a function of LO amplitude, it is possible to determine the content of the unknown optical state in the mode matching the LO, conditional on each number of photons in orthogonal modes on the same path. If the unknown optical state has at most n photons, we determine finite sets of LO amplitudes sufficient for inferring the state. Such a setup thus allows a more extensive characterization of the quantum state of light than standard homodyne tomography.
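The measurement setup can be sketched numerically: mix an unknown single-mode state with a coherent LO on a 50:50 beam splitter and record the photon-count distribution in one outgoing path. The sketch below (truncated Fock space; all parameters hypothetical, and only two example input states) shows that different input states produce different count distributions, which is the information the tomographic scheme exploits.

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

N = 15                                          # Fock truncation per mode
a = np.diag(np.sqrt(np.arange(1, N + 1)), 1)    # annihilation operator
ad = a.conj().T

# 50:50 beam splitter acting on the (signal, LO) mode pair
theta = np.pi / 4
BS = expm(theta * (np.kron(ad, a) - np.kron(a, ad)))

def coherent(alpha):
    """Truncated coherent state |alpha>, renormalized after truncation."""
    k = np.arange(N + 1)
    c = alpha ** k / np.sqrt([factorial(int(m)) for m in k])
    return c / np.linalg.norm(c)

def count_dist(psi_signal, alpha):
    """Photon-count distribution in the signal output path."""
    out = BS @ np.kron(psi_signal, coherent(alpha))
    return (np.abs(out.reshape(N + 1, N + 1)) ** 2).sum(axis=1)

vac = np.zeros(N + 1); vac[0] = 1.0     # unknown state: vacuum
one = np.zeros(N + 1); one[1] = 1.0     # unknown state: single photon

p_vac = count_dist(vac, 1.0)
p_one = count_dist(one, 1.0)
```

For a vacuum signal, the chosen output path carries a coherent state of amplitude alpha/sqrt(2), so its zero-count probability is exp(-|alpha|^2/2); a single-photon signal shifts the distribution visibly, so scanning the LO amplitude distinguishes the inputs.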

SQuInT Chief Organizer
Akimasa Miyake, Associate Professor

SQuInT Co-Organizer
Brian Smith, Associate Professor UO

SQuInT Program Committee
Postdoctoral Fellows:
Markus Allgaier (UO OMQ)
Sayonee Ray (UNM CQuIC)
Pablo Poggi (UNM CQuIC)
Valerian Thiel (UO OMQ)

SQuInT Event Co-Organizers (Oregon)
Jorjie Arden
Holly Lynn

SQuInT Event Administrator (Oregon)
Brandy Todd

SQuInT Administrator (CQuIC)
Gloria Cordova
505 277-1850

SQuInT Founder
Ivan Deutsch, Regents' Professor, CQuIC Director
