I work in superconducting qubits largely by cosmic circumstance. 10,000 years ago, when I defended my dissertation, there were very few non-superconducting QC companies, none of which I knew about, being a transplant from another field. IonQ had barely gotten started, and most of the other quantum startups had yet to be founded! I knew about some approaches, like electrons on helium, since they were adjacent to my graduate work, but EeroQ didn’t incorporate until much later. The only game in town, really, was superconducting qubits. D-Wave was well known (and controversial), along with Google, IBM, Rigetti, and various QC efforts scattered among national labs.
Since then I’ve tried to cultivate a better appreciation for the other QC approaches, especially since the field has grown so much. Unfortunately, due to the nature of, well, everything, it’s very hard to develop a nuanced appreciation for any of these architectures, since there’s substantial incentive to loudly trumpet successes while keeping the many closet skeletons safely tucked away in the literature.
In order to help members of the community better appreciate the pros and cons of the various flavors of quantum computing, Maciej over at Reading the Quantum asked me to contribute a few words about superconducting qubits, why they’re cool, and what horrors lurk beneath their surface. Also, if YOU have strong opinions about a qubit type, write it up and Maciej will link to it!
Tunable, Modular, Interactive-odular
I’ve talked a little about my general bullishness on superconducting qubits elsewhere, but it all boils down to the extraordinary parameter space unlocked by adding a few non-linear elements to simple LC circuits. That is to say, even adding a simple DC SQUID as the inductive part of an LC oscillating circuit lets you create a transmon with tunable energy levels!
Judicious parameter selection for this exceedingly simple qubit almost trivially gives you exponentially suppressed sensitivity to charge noise (the bane of superconducting qubits pre-2007) and first-order insensitivity to flux noise. This is pretty much why the transmon is the workhorse qubit in the three leading gate-model SC qubit efforts: Google, IBM, and Rigetti. They’re simple, relatively easy to make, trivially tunable, and offer pretty good coherence for solid-state qubits.
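You can see that exponential suppression of charge sensitivity directly by diagonalizing the Cooper-pair-box Hamiltonian in the charge basis and watching the 0→1 transition's dependence on offset charge collapse as EJ/EC grows. A minimal sketch (energies in units of EC; the charge-basis truncation and the particular EJ/EC values are just illustrative):

```python
import numpy as np

def transmon_levels(ej_over_ec, ng, n_charge=15):
    """Eigenenergies of the Cooper-pair box in the charge basis, with Ec = 1.

    H = 4*Ec*(n - ng)^2 - (Ej/2) * sum_n (|n><n+1| + h.c.)
    """
    n = np.arange(-n_charge, n_charge + 1)
    H = np.diag(4.0 * (n - ng) ** 2)          # charging term
    off = -0.5 * ej_over_ec * np.ones(len(n) - 1)
    H += np.diag(off, 1) + np.diag(off, -1)   # Josephson tunneling term
    return np.linalg.eigvalsh(H)

def charge_dispersion(ej_over_ec):
    """Variation of the 0->1 transition between the ng extremes 0 and 0.5."""
    e0 = transmon_levels(ej_over_ec, ng=0.0)
    e5 = transmon_levels(ej_over_ec, ng=0.5)
    return abs((e0[1] - e0[0]) - (e5[1] - e5[0]))

for r in (1, 10, 50):
    print(f"Ej/Ec = {r:3d}: f01 charge dispersion ~ {charge_dispersion(r):.1e} * Ec")
```

In the charge regime (EJ/EC ≈ 1) the transition frequency swings by order EC as the offset charge drifts; deep in the transmon regime (EJ/EC ≈ 50) the swing is orders of magnitude smaller.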
Since then, the versatility of the superconducting architecture has shone brightly as teams have started investigating parameter regimes that offer ‘noise protection’ against both charge and flux. What started with fluxonium (which recently demonstrated millisecond-scale coherence) has proliferated to heavy/light fluxonium, 0-pi, bifluxon, and other styles. Without changing our fabrication techniques, we are able to start searching for qubits that will be superior to the transmon. As fabrication quality improves and better techniques are invented, I expect the superconducting qubit family to grow rapidly and a new optimal qubit1 to be identified. It’s only a matter of time before one of these finds its way into an experimental processing architecture that goes beyond just one or two coupled qubits.
We’ve seen that modern lithographic techniques can give us a wide variety of superconducting qubits with vastly different properties. They also give us the ability to tune the local electromagnetic qubit environment to our specifications. Most of the work I’m aware of that touches upon this comes from the Houck group, the Kollar group, and the Painter group. In short, it is shockingly easy to engineer the impedance of superconducting circuits to make them opaque or transparent to various bands of electromagnetic radiation. You can imagine embedding a qubit in a ‘metamaterial’ lattice of non-qubit superconducting circuitry which provides further protection from certain portions of the electromagnetic spectrum.
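As a toy picture of why periodic circuitry can be opaque to whole bands, consider an infinite ladder of series inductors and shunt capacitors. Cascading the ABCD matrix of one unit cell and applying Bloch's theorem gives cos(ka) = 1 − ω²LC/2, so any frequency where |cos(ka)| > 1 cannot propagate and is exponentially attenuated. The component values below are made up for illustration, not taken from any published device:

```python
import numpy as np

# Illustrative unit cell: 1 nH series inductor, 100 fF shunt capacitor.
L, C = 1e-9, 100e-15
f_cut = 1 / (np.pi * np.sqrt(L * C))   # |cos(ka)| = 1 at omega = 2/sqrt(LC)

def bloch_cos(f):
    """cos(k*a) for one series-L / shunt-C unit cell, from its ABCD matrix."""
    w = 2 * np.pi * f
    return 1 - (w ** 2) * L * C / 2

print(f"cutoff ~ {f_cut / 1e9:.1f} GHz")
for f in (1e9, 10e9, 100e9):
    regime = "propagating" if abs(bloch_cos(f)) <= 1 else "evanescent (stopband)"
    print(f"{f / 1e9:6.1f} GHz: {regime}")
```

Real metamaterial waveguides use fancier unit cells to place stopbands exactly where you want them, but the mechanism is the same: the lattice, not any lossy filter, decides which bands get through.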
In fact, lithography permits us to tailor not only the electromagnetic environment to our purposes, but also the phononic environment! One of the most interesting MM2022 talks I watched was “Progress on engineering the phonon bath for TLS” by Mo Chen from the Painter group. Not only can the qubits themselves be engineered to exhibit desirable properties, but the substrate on which the qubits sit can also be engineered to permit or suppress certain portions of the phononic spectrum. Whether this actually works to suppress unwanted qubit interactions with thermally activated TLS remains to be seen2, but it is a tantalizing glimpse at the sheer customizability of superconducting qubits and their local environment.
The Bad
One of the prime difficulties with superconducting qubits is their vulnerability to fabrication variations, which leave the user with a collection of very similar, but not identical, qubits on any given chip. This is in stark contrast to neutral-atom, electron, and ionic qubits, which are all much, much more uniform. Indeed, one of the reasons superconducting qubit users like having tunability is to counteract these very process variations. There is some evidence out in the wild of how troublesome these issues can be. One major example: IBM had to refine and improve Josephson junction laser annealing in order to reduce the variance of their qubit frequencies and improve the (expected) yield of their (forthcoming) 400+ qubit chips from ‘abysmal’ to ‘this could work’.
In principle, fabrication can always be improved with better tools, techniques, and materials, but there are some downsides to superconducting devices that are more fundamental. In general, superconducting qubits are coupled to each other either capacitively with metal paddles or inductively using superconductive loops. In each of these cases device designers must carefully consider the effect of parasitic inductances and capacitances to local ground planes, as well as the total inductance/capacitance budget of the qubit.
Since superconducting qubits are made up of finite lengths of superconducting wire, it is impossible to perfectly achieve the desired inductance and capacitance targets. In all cases, the superconducting wire will have capacitance to the ground plane, as well as extra inductance contributed by those same lengths. Often, this contribution is small, especially for single qubits. However, as we start coupling our qubits together, some L and C must also be devoted to achieving the required coupling strengths. As we increase the number of coupled qubits, it is possible for parasitic loads to drive the qubit parameters into sub-optimal regimes. Fixing this requires trade-offs, often in coupling strength or a reduction in the number of coupled qubits3.
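To get a feel for how parasitic loading pulls qubit parameters around, here's a back-of-the-envelope sketch using the standard transmon approximation f01 ≈ √(8·EJ·EC) − EC with EC = e²/2C. The 15 GHz Josephson energy, 70 fF base shunt capacitance, and 5 fF of parasitic capacitance per coupler are invented round numbers for illustration, not any particular device:

```python
import numpy as np

e = 1.602176634e-19   # electron charge (C)
h = 6.62607015e-34    # Planck constant (J*s)

def transmon_f01(c_total_farads, ej_ghz=15.0):
    """Approximate transmon 0->1 frequency in GHz from its total capacitance."""
    ec_ghz = e ** 2 / (2 * c_total_farads * h) / 1e9   # Ec = e^2 / 2C
    return np.sqrt(8 * ej_ghz * ec_ghz) - ec_ghz

c_qubit = 70e-15                                # designed shunt capacitance
f_bare = transmon_f01(c_qubit)
print(f"bare qubit: f01 ~ {f_bare:.2f} GHz")
for n_couplers in (1, 2, 4):
    c_loaded = c_qubit + n_couplers * 5e-15     # +5 fF parasitic per coupler
    shift_mhz = (f_bare - transmon_f01(c_loaded)) * 1e3
    print(f"{n_couplers} couplers: pulled down by ~{shift_mhz:.0f} MHz")
```

Each added coupler drags the qubit frequency down by tens to hundreds of MHz in this toy model, which is exactly the kind of budget pressure that forces the trade-offs above.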
A related problem is that, currently, high-quality transmons are Far Too Large. This is primarily a consequence of the gigantic capacitor paddles in use, the size of which helps prevent qubit decoherence and relaxation. Given that current hardware estimates require millions of qubits, not to mention couplers, resonators, etc., there will eventually be an imperative to attempt qubit miniaturization. IBM has explored the merged-element transmon4 (MET), in which the capacitor paddles also act as the Josephson junction leads, while MIT and Raytheon5 have experimented with hexagonal boron nitride (hBN) as a low-loss dielectric for compact parallel-plate capacitors, but these experimental devices have not made their way to any commercially available chips.
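The footprint savings from going parallel-plate are easy to estimate from A = C·d/(ε0·εr). All numbers below are illustrative orders of magnitude, not a specific published device: I assume hBN's out-of-plane relative permittivity is ~3.5, a 20 nm film, and ~0.1 mm² for a conventional pair of coplanar paddles:

```python
eps0 = 8.8541878128e-12   # vacuum permittivity (F/m)

def plate_area_um2(c_farads, eps_r, gap_m):
    """Parallel-plate area A = C*d/(eps0*eps_r), returned in square microns."""
    return c_farads * gap_m / (eps0 * eps_r) * 1e12

a_pp = plate_area_um2(70e-15, eps_r=3.5, gap_m=20e-9)   # 20 nm hBN gap
a_coplanar_um2 = 1e5                                    # ~0.1 mm^2 of paddles

print(f"parallel-plate hBN capacitor: ~{a_pp:.0f} um^2")
print(f"coplanar paddles:             ~{a_coplanar_um2:.0f} um^2")
print(f"area reduction:               ~{a_coplanar_um2 / a_pp:.0f}x")
```

Under these assumptions the same ~70 fF fits in orders of magnitude less area, which is why the approach is attractive despite the added fabrication risk of introducing a new dielectric.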
Superconducting qubits are also plagued by quasiparticles, with some interesting work suggesting that cosmic rays and gamma radiation could be responsible for bursts of correlated errors in QPUs. Here’s my recent post on the subject. The solution to this problem, and to elevated quasiparticle populations in general, is likely a combination of better filtering, improved qubit geometry, and adding normal-metal (non-superconducting) layers as quasiparticle traps on the QPU chips themselves. The latter two approaches will introduce additional technical risk to the design and fabrication process.
Finally, there is the other bugbear of superconducting quantum computing: the dreaded Two-Level System (TLS). The TLS bath has no single source: TLS can be defects in dielectric materials, contaminants on surfaces, and so on. They exist throughout the frequency band used by SC qubits and are another major cause of relaxation and dephasing in these systems. Additionally, it’s hard to pin down the exact causes, since one of the features of TLS ensembles is that they display many of the same properties regardless of their source! Getting to grips with the TLS bath has already consumed substantial effort across the whole field and will require much more.
Conclusion
Overall, I think the future is bright for superconducting qubits. The interesting parameter spaces have just started to be explored, and we’re starting to see effective gates for novel qubits. The modularity, tunability, and engineerability of the superconducting qubit and its local environment will continue to be a selling point, while at the same time the intrinsic disadvantages of non-vacuum substrates will continue to pose substantial challenges with respect to dephasing and decoherence.
1. Or, perhaps more likely, an optimal set of qubits.
2. I’ll be F5’ing the arXiv until that day comes.
3. In my mind this is among the reasons why we see so many nearest-neighbor coupled architectures in SC qubits. Getting more connectivity eats up the budget and is hard to do in planar processes!