
From @south_korea_ln:

From a condensed matter perspective, different qubit platforms, such as superconducting circuits, spin-based systems, or proposed topological qubits, seem to rely on quite different physical mechanisms for suppressing errors. Do you think those differences could ever lead to qualitatively different scaling in how hard it is to keep a large system coherent, or are they mostly differences in prefactors rather than something more fundamental?

I think they're ultimately differences in the prefactors (or at any rate, lower-order terms), much like the differences between possible architectures for classical computers (integrated circuits, vacuum tubes...). Fault-tolerance should ultimately "work" in all of them, and yield the same class of problems solvable in quantum polynomial time; it's mostly a difference in how expensive and hard the engineering problems are. The biggest difference of this kind is that superconducting qubits are fixed in place on a 2D grid, whereas with trapped-ion and neutral-atom qubits you can pick them up and move them around. That's a huge advantage for the latter, maybe more than compensating for their 1000x slower gate speed. Even there, though, we're talking about lower-order factors, since even with superconducting qubits, you can simulate all-to-all connectivity more slowly by using cascades of nearest-neighbor swaps.
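To put a rough number on that last point: a minimal sketch (plain Python, with hypothetical helper names) of why SWAP cascades only cost a lower-order factor. Routing a two-qubit gate between qubits at Manhattan distance d on a grid takes about d - 1 SWAPs, so on an n-qubit 2D grid the worst-case overhead per gate grows like O(sqrt(n)), a polynomial factor rather than anything qualitative.

```python
# Sketch of the SWAP-routing overhead on a 2D grid of qubits.
# Assumption: one SWAP moves a qubit to an adjacent site, so making two qubits
# adjacent takes (Manhattan distance - 1) SWAPs along a shortest path.

from math import isqrt

def manhattan(a, b):
    """Manhattan distance between two grid coordinates (row, col)."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def swap_overhead(a, b):
    """SWAPs needed to make the qubits at positions a and b adjacent."""
    return max(manhattan(a, b) - 1, 0)

def worst_case_overhead(n_qubits):
    """Worst-case SWAP count for one gate on a roughly square grid of n_qubits."""
    side = isqrt(n_qubits)
    return swap_overhead((0, 0), (side - 1, side - 1))  # opposite corners

if __name__ == "__main__":
    for n in (64, 256, 1024, 4096):
        print(f"{n:>5} qubits: worst case ~{worst_case_overhead(n)} SWAPs per long-range gate")
```

Running it shows the overhead roughly doubling each time the qubit count quadruples, i.e. sqrt(n) scaling, which is why fixed 2D connectivity shows up as a polynomial slowdown rather than a different complexity class.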
