Abstracts

Advantages of versatile neural-network decoders for topological codes

Presenting Author: Tomas Jochym-O'Connor, California Institute of Technology
Contributing Author(s): Nishad Maskara, Aleksander Kubica

Decoding generic stabilizer codes is a computationally hard problem, even for simple noise models. While the task is simplified for codes with some structure, such as topological codes with geometrically local stabilizer generators, finding optimal decoders remains challenging. In our work, we analyze the versatility and performance of neural-network decoders. We rephrase the decoding problem as a classification task, which is well suited to machine learning. We demonstrate the versatility of the approach by studying two-dimensional variants of the toric and color codes under different error models: bit-flip and phase-flip noise, as well as nearest-neighbor depolarizing noise. The resulting decoders achieve improved performance and thresholds compared with previously known methods. We believe that neural-network decoding will play a key role in error correction for near-term experiments, where unknown noise sources could severely degrade the performance of the code.
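
To make the classification framing concrete, the sketch below trains a small feed-forward network to map measured syndrome bit-strings to logical (homology) classes of the error. This is only an illustration of the general idea: the framework (PyTorch), network size, number of classes, and the randomly generated placeholder data are assumptions, not the authors' architecture or training set.

# Minimal sketch of "decoding as classification" (illustrative, not the authors' code).
import torch
import torch.nn as nn

NUM_SYNDROME_BITS = 18   # assumed number of stabilizer measurements for a small code
NUM_LOGICAL_CLASSES = 4  # logical equivalence classes of the physical error

# Simple multilayer perceptron acting as the decoder.
decoder = nn.Sequential(
    nn.Linear(NUM_SYNDROME_BITS, 128),
    nn.ReLU(),
    nn.Linear(128, 128),
    nn.ReLU(),
    nn.Linear(128, NUM_LOGICAL_CLASSES),
)

# Placeholder training data; in practice these would be syndrome/class pairs
# sampled from the chosen noise model acting on the code.
syndromes = torch.randint(0, 2, (1024, NUM_SYNDROME_BITS)).float()
classes = torch.randint(0, NUM_LOGICAL_CLASSES, (1024,))

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(decoder(syndromes), classes)
    loss.backward()
    optimizer.step()

# At decoding time, the predicted class selects which logical correction (if any)
# to apply on top of a fixed recovery operator consistent with the syndrome.
with torch.no_grad():
    predicted_class = decoder(syndromes[:1]).argmax(dim=1)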

(Session 9b: Friday from 4:45pm-5:15pm)
