Disclosure: I have a modest long position in $IONQ. Also none of this is financial advice, etc etc.
Why-onQ?
The $DMYI ticker officially converted to $IONQ last Friday and immediately started dropping in price. Early this week $IONQ bottomed out at $7.51 per share, basically incinerating those short-dated OTM call options I suggested you reconsider buying. This price action can be painful for holders of common shares, but it absolutely annihilates OTM call holders.
Explanations for this turn of events abound, ranging from those damn hedge funds and market makers (or are they market manipulators???), to ‘purely technical’ hedging from the PIPE participants.
Let me submit that this literally doesn’t matter. I mean, your realized or mark-to-market losses suck, yeah, but, my friend, what the fuck did you think you were buying?
Here’s a quick sampling of Risks from their S-1 filing.
We have a history of operating losses and expect to incur significant expenses and continuing losses for the foreseeable future.
We incurred net losses of $15.4 million and $17.3 million for the year ended December 31, 2020 and the six months ended June 30, 2021, respectively. As of June 30, 2021, we had an accumulated deficit of $56.9 million. We believe that we will continue to incur operating and net losses each quarter until at least the time we begin significant production of our quantum computers, which is not expected to occur until 2025, at the earliest, and may occur later, or never. Even with significant production, such production may never become profitable.
Building quantum computer expensive, building useful quantum computer hard.
We may need additional capital to pursue our business objectives and respond to business opportunities, challenges or unforeseen circumstances, and we cannot be sure that additional financing will be available.
Building quantum computer VERY expensive.
We have not produced a scalable quantum computer and face significant barriers in our attempts to produce quantum computers. If we cannot successfully overcome those barriers, our business will be negatively impacted and could fail.
Producing quantum computers is a difficult undertaking. There are significant engineering challenges that we must overcome to build our quantum computers. We are still in the development stage and face significant challenges in completing development of our quantum computers and in producing quantum computers in commercial volumes. Some of the development challenges that could prevent the introduction of our quantum computers include, but are not limited to, failure to find scalable ways to flexibly manipulate qubits, failure to transition quantum systems to leverage low-cost, commodity optical technology, and failure to realize multicore quantum computer technology.
Additional development challenges we face include:
Gate fidelity, error correction and miniaturization may not commercialize from the lab and scale as hoped or at all;
It could prove more challenging and take materially longer than expected to operate parallel gates within a single ion trap and maintain gate fidelity;
The photonic interconnect between ion traps could prove more challenging and take longer to perfect than currently expected. This would limit our ability to scale beyond a single ion trap of approximately 22 logical qubits;
It could take longer to tune the qubits in a single ion trap, as well as preserve the stability of the qubits within a trap as we seek to maximize the total number of qubits within one trap;
The gate speed in our technology could prove more difficult to improve than expected; and
The scaling of fidelity with qubit number could prove poorer than expected, limiting our ability to achieve larger quantum volume.
Here are all of the reasons why building quantum computer is hard!
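To make that last risk concrete: here's a rough, hypothetical sketch of why fidelity scaling limits quantum volume. The depth heuristic below (you can run circuits of depth roughly 1/(n·ε) on n qubits before a two-qubit error rate ε dominates) is a common rule of thumb, and the function is my own illustration, not anything from the S-1.

```python
# Crude illustration: quantum volume is 2^m, where m is the largest n such
# that you can run an n-qubit circuit of depth n before errors swamp you.
# Depth budget ~ 1/(n * eps) is a rough heuristic, not IonQ's methodology.

def log2_quantum_volume(n_qubits: int, error_rate: float) -> int:
    """Largest m = min(n, achievable depth) over square circuits up to n_qubits."""
    best = 0
    for n in range(2, n_qubits + 1):
        depth = int(1 / (n * error_rate))  # circuit layers before errors dominate
        best = max(best, min(n, depth))
    return best

# With 22 qubits on hand:
print(log2_quantum_volume(22, 0.01))   # at 1% two-qubit error, far fewer usable
print(log2_quantum_volume(22, 0.001))  # at 0.1%, all 22 contribute
```

The point: past a certain size, adding qubits does nothing for quantum volume unless fidelity improves in lockstep, which is exactly the "scaling of fidelity with qubit number" risk above.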
Our 32-qubit system, which is an important milestone for our technical roadmap and commercialization, is not yet available for customers and may never be available.
We are developing our next-generation 32-qubit quantum computer system, which has not yet been made available to customers. We expect this system to have 22 algorithmic qubits, i.e., qubits that are usable to run quantum algorithms, but the number of algorithmic qubits available in this system has not been finalized and may be fewer than planned. The availability of this generation of quantum computer system for customer use or independent verification by a third party may be materially delayed, or even never occur. Additionally, the future success of our technical roadmap will depend upon our ability to approximately double the number of qubits in each subsequent generation of our quantum computer. Accordingly, our technical roadmap may be delayed or may never be achieved, either of which would have a material impact on our business, financial condition or results of operations.
Later in the S-1 they state that the roadmap was NOT relied upon in preparing the S-1. Also I think they promised the 32-qubit computer for 2020, so, again, building quantum computer hard.
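For a sense of what "approximately double the number of qubits in each subsequent generation" implies, here's a trivial projection starting from the 22 algorithmic qubits the S-1 cites for the 32-qubit system. The generation count is arbitrary; only the starting figure and the doubling assumption come from the filing.

```python
# Project algorithmic qubit counts under the S-1's doubling-per-generation
# assumption. Starting point (22) is from the filing; everything else is
# illustrative.

def roadmap(start_qubits: int, generations: int) -> list:
    counts = [start_qubits]
    for _ in range(generations):
        counts.append(counts[-1] * 2)  # "approximately double" each generation
    return counts

print(roadmap(22, 4))  # [22, 44, 88, 176, 352]
```

Exponential roadmaps look great on slides; each doubling is a fresh engineering campaign, which is why the S-1 hedges so hard.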
We are highly dependent on our co-founders, and our ability to attract and retain senior management and other key employees, such as quantum physicists and other key technical employees, is critical to our success. If we fail to retain talented, highly-qualified senior management, engineers and other key employees or attract them when needed, such failure could negatively impact our business.
There aren’t many people who know how to build quantum computer, and they’ve got a lot of places they can work. This connects to rumors I’ve heard of one or more high-profile departures, but since nothing has been announced, we’ll wait and see if this is real. But also, there aren’t THAT many new PhDs minted familiar with ion trapping (Dr. Monroe and Dr. Kim are responsible for a substantial portion, I bet), and they can go work at Honeywell (Helios?) or AQT. Or at a cold-atom start-up.
On balance, I think IonQ did exactly the right thing. Or, more accurately, took a very reasonable course of action when faced with the substantial risks, uncertainties, and financial requirements of building a quantum computer. Instead of raising venture rounds piecemeal, with possible milestone requirements to get more money, IonQ gets a beefy $600M infusion right away.
Realistically, that might not even be enough. Per IonQ’s (defunct?) roadmap, they will spend down cash reserves to about $200M by 2026, and hit $0 free cash flow by 2027. If they fall a bit short, they can take on debt, or issue more shares if the share price recovers by then (preferably on good news and impressive technical demos).
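The back-of-the-envelope math here is simple. Using the roadmap figures above ($600M in, ~$200M left by 2026), and assuming the spend-down happens over roughly five years, a minimal sketch:

```python
# Illustrative runway arithmetic from the roadmap figures quoted above.
# The 5-year window is my assumption; the dollar figures are the roadmap's.

cash_start = 600.0  # $M, post-SPAC war chest
cash_2026 = 200.0   # $M, projected reserves by 2026
years = 5           # ~2021 through 2026, assumed

avg_burn = (cash_start - cash_2026) / years          # $M per year
runway_after_2026 = cash_2026 / avg_burn             # years, if burn continues

print(f"Average burn: ~${avg_burn:.0f}M/yr")         # ~$80M/yr
print(f"Runway past 2026 at that burn: ~{runway_after_2026:.1f} years")
```

At an ~$80M/yr average burn, the remaining $200M buys about 2.5 more years, which is why hitting $0 free cash flow by 2027 (or raising again) matters.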
D-Wave goodbye to Annealing
In other unexpected (by me) quantum news, D-Wave is pivoting to gate-model computing!
Many people might know D-Wave from Scott Aaronson’s non-stop dunking on various D-Wave claims, although this has ebbed in the past 4 years. So I guess the dunking does stop.
What many people may not know is that the D-Wave chips, whatever their merits or deficiencies, are incredible technical feats. Fabricating, coupling, controlling, and reading out thousands of flux qubits is really hard! As far as I know, D-Wave was a first or early adopter of the flux qubit architecture, experimented with CCJJ flux qubits, with QFPs, with individual DC addressability of multiple qubits and QFPs, with readout for each of those qubits, and more.
Listen, I can’t speak to the quantumness of D-Wave’s machines, or the durability of their claimed speedups against the best classical annealing algorithms. I’m just a humble experimentalist, but from that perspective, I think it’s very clear that D-Wave employs some Very Serious people with Very Serious technical capability. I wouldn’t discount their efforts at gate-model computing, even if it took them a while to come around to it.
While their roadmap doesn’t provide much in the way of specifics regarding their gate-model architecture, they do mention leveraging ‘20 years of experience in pioneering superconducting quantum annealing systems to build a scalable gate-model quantum computing technology in a multi-layer fabrication stack’. I think most fabrication technologies have been trending toward multi-layer or multi-chip stacks, just because this is the only real way to get the wiring density required for many qubits. I’m not totally sure what the D-Wave multi-layer stack looks like, but you’d hope they’re not going to embed their qubits of choice in some random dielectric, as that will have… consequences. There’s a reason the best qubits are fabbed in planar processes, or suspended in 3D cavities.
We know that D-Wave has a lot of experience with flux qubits, but not exactly the nice, high-coherence flux qubits in use at MIT. However, with a few tweaks (apologies to all of my fabrication brothers and sisters) to their junction critical current densities, they could conceivably be in the high-coherence regime. Of course, they might decide to really go hog wild and try out some of the more novel qubit options out there, something like fluxonium? [As an aside, I think I will attempt a post about noise-protected qubits, now that there’s a review paper out. These qubits dominate APS novel qubit sessions, but you pretty much never see them incorporated in any multi-qubit architecture. We’ll find out whether it’s cowardice or something more.]
There’s always the path-more-travelled: bog-standard transmons with D-Wave filtering best practices and well-controlled stray coupling to crush gate errors?
If I were in charge, I’d probably pick high-coherence flux qubits and try very hard to shoe-horn them into some sort of scheme that maximally utilizes already extant D-Wave IP.
I’m pretty excited to see what they come up with!
If you’ve made it this far, why not subscribe? You get my updates into your inbox, and I get to see Number Go Up!