Quantum Computers: A Brief Assessment of Progress in the Past Decade

In this post I give a brief assessment of progress in the past decade, triggered by a recent article in Forbes Magazine that mentions my view on the matter.

Waging War On Quantum – A Forbes Article by Arthur Herman

Arthur Herman is a popular historian and a senior fellow at the Hudson Institute. At Forbes Magazine he “comments on quantum computing and AI and American national security”, and his recent Forbes article Waging War on Quantum (thanks, Alexander Vlasov) starts as follows:

Quantum computing will never work. Keeping enough qubits stable long enough to do any significant calculating or processing, is a mathematical impossibility. The whole idea that one day quantum computers will discover new miracle drugs, or crack public encryption systems, is a mirage. Even worse, it’s a hoax.

That’s been the message from so-called quantum skeptics for a decade or more, including physicists like Gil Kalai of Hebrew University and Mikhail Dyakonov of the University of Montpellier—all in spite of the fact that quantum computers have continued to grow in sophistication and qubit power. Most experts now agree it’s not a question if a large-scale quantum [computer] will emerge that can break into public encryption systems using Shor’s algorithm, but when.

The first paragraph gives a reasonable description of my views; however, I never referred to the whole idea of quantum computing as a hoax. Regarding the second paragraph, it is indeed correct that quantum computers have continued to grow in sophistication and qubit power; however, my theory (based on a computational complexity argument) is that progress in reducing the error rate will reach a wall, and that recent progress merely approaches this limit. Let me elaborate a little on the developments of the past decade as I see them.

Before moving to my assessment I would like to note that Arthur Herman offers an outrageous conclusion to his article. He suggests that skepticism of quantum computers (and of the company IonQ) puts the skeptics’ countries at risk. In my opinion, the militant rhetoric of the title and of the conclusion is very inappropriate.

Assessment of progress in the past decade

The past quantum computing decade is characterized by notable progress, adjustment of expectations, larger investments, much enthusiasm, and some hype. The overall picture is unclear and might become clearer 5-10 years from now.

The following picture (click to enlarge) describes the shift in the community view over the last decade (as I see it).

[Figure: DiVincenzo’s seven steps as depicted by Devoret and Schoelkopf (left), and the revised road map of the past decade with NISQ computers and “quantum supremacy” as intermediate goals (right).]

On the left you can see David DiVincenzo’s famous 7-step road map to quantum computers. DiVincenzo put forward these steps in his 2000 paper The physical implementation of quantum computation, and the above picture on the left is a graphic description of these steps in a 2013 review paper by Michel Devoret and Rob Schoelkopf. The caption under the figure asserts that “Superconducting qubits are the only solid-state implementation at the third stage, and they now aim at reaching the fourth stage (green arrow). In the domain of atomic physics and quantum optics, the third stage had been previously attained by trapped ions and by Rydberg atoms. No implementation has yet reached the fourth stage, where a logical qubit can be stored, via error correction, for a time substantially longer than the decoherence time of its physical qubit components.” The fourth step, “logical memory with (substantial) longer lifetime than physical qubits”, looked to many like a near-term goal ten years ago.

One important development of the last ten years was the introduction of building NISQ computers and achieving “quantum supremacy” (and related tasks like high “quantum volume”) as intermediate goals towards DiVincenzo’s fourth step. (See the picture on the right.) Of course, there is nothing wrong with setting intermediate goals; we do it all the time, and this can be very fruitful.

For me, from the skeptical point of view, these intermediate goals were an opportunity, allowing me to present a clear complexity-theoretic argument for why “quantum supremacy” is out of reach and to connect the problem with the theory of noise sensitivity and noise stability and with the Fourier methods that my colleagues and I developed in the 1990s.
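For readers who would like a formula to anchor the terminology, here is the standard definition from the Boolean-functions literature (this is background notation, not a summary of my specific argument). For a Boolean function f with Fourier–Walsh expansion f = \sum_S \hat f(S) W_S, the noise stability at correlation level \rho is

Stab_\rho(f) = \sum_S \rho^{|S|} \hat f(S)^2.

A sequence of functions is noise sensitive when Stab_\rho(f) - \hat f(\emptyset)^2 tends to zero for every fixed \rho < 1; equivalently, apart from the constant term, the Fourier weight is concentrated on large sets S.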

Adding the intermediate goal of quantum supremacy also represented a much slower timetable than people had previously anticipated. For example, nine years ago, in 2013, John Martinis gave a lecture at QSTART, the opening conference of our HUJI quantum science center. At that time, John expected the ability to build distance-3 and distance-5 surface codes within a few years, and to demonstrate logical gates and logical qubits with 10^{-15} error rates some years later. John also mentioned the ability to control 20 qubits within one month (to which Ray Laflamme commented that it was going to be a long month). All these targets are today still out of reach. It is undisputed that considerably lower noise rates are required even for achieving distance-3 surface codes, and it is still not possible to have good control of 20-qubit (and perhaps even 10-qubit) quantum computation.
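To give a sense of the numbers, a commonly used back-of-the-envelope heuristic from the surface-code literature (the constants and exact exponent vary from paper to paper, and nothing here is specific to any particular device) predicts a logical error rate of roughly

p_L \approx A (p/p_{th})^{(d+1)/2},

for a distance-d surface code with physical error rate p and threshold p_{th} of the order of 10^{-2}. With p around 10^{-3}, each increase of d by 2 buys roughly one order of magnitude of suppression, so distance 3 and distance 5 give only a modest gain, while logical error rates near 10^{-15} call for distances in the high twenties and hence on the order of a thousand physical qubits or more per logical qubit. The heuristic also makes plain why below-threshold physical error rates are a prerequisite for the codes to help at all.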

Of course, John Martinis himself was the leader of the Google efforts towards “quantum supremacy”, which are now being carefully evaluated, and his vision and technology from 2013 were important for the Sycamore NISQ experiments. Let me mention that Google’s fantastic “quantum supremacy” claims were largely (but not fully) refuted.

There was a similar level of optimism from various other researchers. It was expected that coherence times would increase by a factor of ten every three years, and there was a proposed “double exponential law” prediction for the classical computational power required to simulate quantum devices as time proceeds. I personally don’t regard these specific claims as hype but rather as (at times, over-the-top) reasoned optimism; still, both the reasoning and the predictions themselves should be carefully examined.
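(To spell out the arithmetic behind these predictions, as I understand it: a factor of ten every three years means coherence times of roughly T(t) \approx T(0)\cdot 10^{t/3}, i.e. a factor of about 2000 over a decade; and the “double exponential” prediction comes from combining exponential growth in the number of qubits with the 2^{n} cost of classically simulating n qubits, giving a simulation cost that grows roughly like 2^{c e^{at}}.)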

NISQ computers are interesting, and they allow for interesting quantum physics experiments. Herman asserts that “quantum hybrid systems are making the qubit revolution something that’s happening now, not just a distant dream”, and this echoes the hopes of several researchers in academia and industry. My analysis asserts that NISQ computers are, from the computational complexity perspective, primitive classical computational devices with inherent chaotic behavior, and therefore I don’t see how hybrid systems and the interface with conventional computers would turn them into useful computational devices. (They can still be useful for quantum physics experimentation.)

Let me repeat: slower progress than anticipated is very common, and setting new intermediate goals is both common and welcome. By themselves these do not imply that the target of “large-scale quantum computers that can break into public encryption systems using Shor’s algorithm” is unrealistic, and indeed many experts in the field believe that it is only a matter of time until this ultimate goal is reached. My view is different: I try to explain my argument to other experts, and to offer experimental predictions and theoretical implications. There are good reasons to hope that the matter will be tested experimentally in the years to come, but my assessment is that the experimental picture from the past decade is not clear.

IonQ, trapped ion quantum computation, and Elon Musk

Herman’s article was triggered by a 183-page document written by a group called “Scorpion Capital.” The document attacks the Maryland-based quantum computer company IonQ, and among various concerns it also briefly mentions Michel Dyakonov’s position on quantum computers and mine. I myself share the common view that ion-trap methods for quantum computers form a major avenue and that Chris Monroe (a co-founder of IonQ) is a major player in this direction. I don’t know much about IonQ’s specific efforts, but I would expect that large-scale investment is required to put ion-trap methods to the test, and I personally would like to see them tested. So I would be quite pleased to see Elon Musk deciding to buy IonQ or the Israeli trapped-ion QC company of Roee Ozeri (or both) 🙂 and to make trapped-ion technology his quantum computing signature. Incidentally, the comment section of my 2018 Quanta Magazine interview presented an interesting exchange between Monroe and me (starting here).

A few more remarks:

1) Herman’s article raises several other interesting issues, such as when (and whether) it is appropriate to transfer to “post-quantum cryptography” protocols.

2) There are a few researchers skeptical of quantum computers who actually conduct research and write papers (and books) about it. (There are others who regard the idea as absurd nonsense of absolutely no interest.) A notable researcher who has written several important papers in the skeptical direction since the late 1990s is Robert Alicki from the University of Gdańsk.

3) Here is Herman’s crazy conclusion: “No one is saying the Scorpion Capital short-sellers are in Chinese pay, or that skeptics like Dyakonov and Kalai are knowingly putting their countries at risk. But waging war on the U.S. quantum industry can have serious consequences, unless quantum companies and labs show that they are not intimidated, and reassure the public that the quantum future doesn’t rest on hype but significant achievements—achievements that will make our country and our world safer, stronger, and more confident about our future as a whole.”

4) I changed the title to reflect the main topic of the post.

Update 2 (June 6, 2022):

When (and if) is the right time to transfer to post quantum cryptography?

Here is my uneducated recommendation (which I keep separate, to the best of my ability, from my overall quantum computing skepticism). Here I take it as an assumption that the aim is to maximize communication security, and that the reason for transferring to “post-quantum” cryptographic protocols is that large-scale quantum computers would enable breaking most current cryptosystems. Note that pressure to transfer to new protocols and standards earlier rather than later may reflect commercial or other interests, and not the objective of maximizing communication security. (I take no view on these other interests.)

The efforts to build post-quantum cryptography are intellectually interesting: I am thrilled to see Oded Regev’s LWE (learning with errors) and Ajtai–Dwork lattice-based cryptography getting used. (A toy sketch of how LWE-based encryption works appears at the end of this update.) I certainly support investing (large amounts of resources) in developing post-quantum cryptography. The question is when to transfer to new protocols, and my recommendation is:

Hold off on implementing new encryption standards based on post-quantum cryptography until DiVincenzo’s stage 4 is firmly established.

(For example, until distance-5 surface codes are built.)

Note that moving forward from good-quality quantum error-correcting codes (like distance-5 surface codes) to the very good quality quantum error-correcting codes needed for quantum fault tolerance (like distance-11 surface codes), implementing logical quantum gates, and later on implementing fault tolerance, is likely to be a slow process that may take quite a few decades.

The crucial thing to consider is that transferring to new cryptographic methods is by itself a serious communication security risk even (in fact, especially) when it comes to classical attacks. Giving ample time to check new suggested protocols somewhat reduces this risk.
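As promised above, here is a minimal toy sketch, in Python, of how Regev-style LWE encryption works. This is purely a pedagogical illustration with tiny, insecure parameters that I chose for readability; real schemes (such as the lattice-based NIST candidates) use structured lattices, carefully chosen parameters, and many engineering details that are absent here.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy parameters: far too small to be secure, chosen only for illustration.
    n, m, q = 16, 64, 3329    # secret dimension, number of samples, modulus

    def keygen():
        s = rng.integers(0, q, n)            # secret vector s in Z_q^n
        A = rng.integers(0, q, (m, n))       # public random matrix
        e = rng.integers(-2, 3, m)           # small noise in {-2, ..., 2}
        b = (A @ s + e) % q                  # "noisy inner products" A*s + e
        return (A, b), s

    def encrypt(pk, bit):
        A, b = pk
        r = rng.integers(0, 2, m)            # random 0/1 combination of the rows
        u = (r @ A) % q                      # first ciphertext component
        v = (r @ b + bit * (q // 2)) % q     # second component encodes the bit
        return u, v

    def decrypt(s, ct):
        u, v = ct
        d = (v - u @ s) % q                  # equals (small noise) + bit*(q//2) mod q
        d = min(d, q - d)                    # distance from 0 modulo q
        return int(d > q // 4)               # near q/2 means the bit was 1

    pk, sk = keygen()
    assert all(decrypt(sk, encrypt(pk, bit)) == bit for bit in (0, 1, 1, 0))
    print("toy LWE encryption round-trips correctly")

The point of the sketch is only to show the mechanism: the public key hides the secret behind small noise, and decryption works because the accumulated noise stays well below q/4. The hardness rests on the difficulty of recovering s from (A, b), which is exactly Regev’s learning-with-errors problem.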


Update 1 (June 1, 2022):
Very relevant to the discussion above: Yosi Avron told me about a very recent breakthrough on the distance-3 surface code that is reported in the paper Realizing repeated quantum error correction in a distance-three surface code, by S. Krinner et al. (Here is a link to the arXiv version; the researchers are from ETH, the Jülich center, Quebec, and other places.) Michael Rothschild (and others) told me about a recent breakthrough in creating entangled logical qubits by a team led by Thomas Monz of the University of Innsbruck and Markus Müller of Aachen University and Forschungszentrum Jülich in Germany. The paper is Entangling logical qubits with lattice surgery by Erhard et al.; see also here.

Also, now that Scott Aaronson has outsourced quantum computing commentary to this very post of mine, I will end with a quote from Scott taken from the interesting Facebook thread related to this post. These days, Scott is busy dealing with deplorable rude remarks and attacks directed at him over the Internet. In my view, and this brings us back also to Herman’s piece, belligerent attacks are not appropriate in general and are certainly not constructive in academic discussions.

Scott commented that he chose to “sit this one out for now” regarding IonQ, and I asked him whether he is still a believer in Google/Sycamore. Here is Scott’s response (which I find rather reasonable; see further discussion between us in the thread).

Scott Aaronson[:] I think it [Google/Sycamore] represented a huge advance in scaling up and benchmarking NISQ devices. I think it showed the major result that the circuit fidelity scaled simply like the gate fidelity to the number of gates, and if that continues to be the case, then contrary to what you say, fault-tolerance will ultimately work. I also think that tensor network and other methods have gotten better at spoofing Sycamore’s Linear XEB score classically, nearly wiping out the quantum advantage as measured by time, though a significant quantum advantage remains as measured by floating-point ops or energy expenditure for the same LXEB score. We knew that quantum supremacy would be a moving target; recent progress underscores the need for better gate fidelities (which, fortunately, seem to be happening anyway) to stay ahead of classical computing on these benchmarks.
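(A back-of-the-envelope version of the fidelity-scaling claim Scott refers to, with illustrative numbers of my own rather than the actual Sycamore figures: if each of g gates succeeds independently with fidelity f, the model predicts a circuit fidelity of roughly F \approx f^{g}; for instance, f = 0.995 and g = 1000 give F \approx e^{-5} \approx 0.007, a tiny but positive signal of the kind the linear cross-entropy benchmark is designed to detect.)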
Further updates (Dec. 2023): In the comment section Greg Kuperberg (March ’23) drew my attention to three advances: 1) the Quantinuum ion-trap experiments: https://arxiv.org/abs/2107.07505, https://arxiv.org/abs/2208.01863; 2) the Yale experiment: https://arxiv.org/abs/2211.09116; 3) the Google experiment: https://arxiv.org/abs/2207.06431 (see also the comment by Craig Gidney). To these we can add 4) a recent paper by Bluvstein et al. (Harvard/MIT/QuEra group), https://arxiv.org/abs/2312.03982, which is also discussed in this SO post.
A further update (Jan. 2024): Here is an interesting skeptical perspective on quantum computers, The quantum house of cards, by Xavier Waintal.

Late addition (June 21, 2022): I looked more carefully at the Scorpion report, which contains various concerns and claims of varying nature and quality. Overall it is a strange document. Here is a link to the report, a link to IonQ’s response, a news item about a class action based on the report, and a blog commentary. Of course, one needs to be skeptical about factual matters that are included in the report, and one also has to take IonQ’s claims with a grain of salt.

The 183-page document of “Scorpion Capital” devotes roughly two pages (16 and 17) to Michel Dyakonov’s views and mine, and to the possibility that quantum computation might be impossible in principle. (It devotes a few additional pages to the “hype problem” of quantum computers and the gaps between expectations and achievements. Most of the report deals specifically with IonQ.) Overall I think it is a good idea that investors be aware of these views and this possibility. Investing in quantum computing is a large-risk, huge-gain endeavor, and in-principle obstacles are a small part of the overall risk in deciding on a particular investment. Note that quantum computing enthusiasts like John Preskill and Scott Aaronson also considered failure-in-principle a serious (if remote) possibility. For example, Scott Aaronson wrote in 2006 that “It’s entirely conceivable that quantum computing will turn out to be impossible for a fundamental reason,” and I am not aware of him changing this view since 2006. Another prominent quantum computing researcher, Aram Harrow, is certain that quantum computers are possible in principle but nevertheless opined in 2012 (in his opening statement of our debate) that “There are many reasons why quantum computers may never be built,” which is, for investors, just as bad a scenario as an in-principle obstruction.

So, as a scientist, if you believe that there is a 20% chance that quantum computers are impossible in principle, this already gives a strong incentive to explore this direction. But as an investor, it hardly matters, because risks that are specific to one particular investment avenue among the many are usually higher and matter more.

A few words about hype: I never cared too much about hype, and I think that hype itself is over-hyped. An important issue is the clear-cut scientific question of whether quantum computers are possible, and there are other important scientific and technological issues related to quantum computation.

The paragraph from the Scorpion report referring (also) to me starts as follows:

One prominent scientist in the field after another – including ex employees of IonQ we interviewed – echoes this view, forcefully stating that quantum computers can’t even work in principle, given that quantum decoherence undermines the entire theory. Anyone searching with the keywords “quantum computing” and “decoherence” or “hype” quickly encounters a barrage of papers by quantum computing insiders –researchers who have dedicated their careers only to arrive at the bitter truth.

I am not aware of researchers who have dedicated their careers to making quantum computers a reality only to arrive later at the bitter truth that quantum computers can’t even work in principle. (And I am certainly not aware of a “barrage of papers” by quantum computing researchers who have changed their minds.) In any case, from my point of view, more important than the opinions of researchers (or policy-makers, or people from the general public) is what the research itself tells us (both theoretical and experimental research).

The report goes on to describe me:

As an example, we note an interview with a mathematics professor at Yale and in Israel, who has studied decoherence for a decade. He states he was initially “quite enthusiastic, like everybody else” and then expounds on decoherence and “the mirage” of quantum computing.

As for me, in the 1990s I was “quite enthusiastic”, but I was not a researcher in this area. My quantum computing research was, from the start (2005), in the skeptical direction. I thought that this direction was neglected, and I also thought it might be related to my theory of noise sensitivity and noise stability. (A connection was found only in 2013.)

I was and still am quite enthusiastic about quantum computation and quantum information as an academic field, and I was certainly happy when my friends and colleagues Dorit Aharonov and Michael Ben-Or proved (along with two other teams) the “threshold theorem”, which shows how quantum fault-tolerant computation is possible for low rates of noise.
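(In rough terms, and this is my paraphrase of the standard statement rather than a quotation from any of the papers: there is a constant threshold p_{th} > 0 such that, if each gate fails independently with probability p < p_{th}, then any ideal quantum circuit with T gates can be simulated to within error \epsilon by a noisy circuit whose size is larger only by a factor polylogarithmic in T/\epsilon.)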

To avoid portraying a too pastoral, yet inaccurate, picture of academic sisterhood, I should mention that it goes without saying that most research efforts in quantum computing (as well as in post-quantum cryptography) are considerably more important if quantum computers can be built, and, similarly, my own research would have considerably more value if my argument and predictions are correct :).


23 Responses to Quantum Computers: A Brief Assessment of Progress in the Past Decade

  1. Ilan Karpas says:

    “No one is saying”, yet he still thought it appropriate to write what no one is saying.

    Very unscientific way of writing about a scientific debate from Forbes.

    • Gil Kalai says:

      Ilan, indeed the aggressive tone of the article is highly inappropriate.

      • Yiftach says:

        Gil, you can invite him to a duel where your weapon will be a standard computer and his a quantum computer. 🙂

      • Gil Kalai says:

        Yiftach, I don’t want to encourage combative attitudes. For me this is primarily a fascinating scientific debate with interesting conceptual and technical facets and many issues to clarify.

  2. Gil Kalai says:

    For “quantum supremacy” efforts and tasks in the NISQ regime there was also a gradual process of relaxing the goals more and more.

    a) Early on (2014, say) the task was to give a good approximation of the noiseless (ideal) quantum state. For the corresponding sampling task, the goal was to demonstrate a sampling distribution that approximates the ideal noiseless distribution.

    b) Later on (from 2018), the task was relaxed to achieving a low-fidelity approximation of the ideal model.

    c) Later on (from 2020, also while looking back at the Google 2019 experiment), the task was relaxed to achieving a distribution with a prescribed linear cross-entropy (“linear XEB”) estimate of the fidelity; in other words, a probability distribution that can be very far from the ideal distribution, yet exhibits a positive correlation with it (a small numerical illustration is sketched at the end of this comment).

    Every such relaxation weakens the statement about computational hardness, and also weakens the relevance to the primary goal of creating quantum error-correcting codes.
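    To illustrate point (c) numerically, here is a minimal toy sketch in Python. This is my own toy model with made-up parameters, not a description of the Google analysis: the “noisy device” outputs a sample from the ideal distribution with probability phi and a uniformly random bitstring otherwise, and the linear cross-entropy fidelity estimator recovers phi on average even though most of the samples are simply uniform noise.

        import numpy as np

        rng = np.random.default_rng(1)

        n = 12                    # number of qubits (toy size)
        N = 2 ** n                # number of bitstrings
        phi = 0.02                # "fidelity" of the toy noisy device
        samples = 500_000

        # Stand-in for the ideal output distribution of a random circuit:
        # exponentially distributed probabilities (Porter-Thomas-like), normalized.
        p_ideal = rng.exponential(1.0, N)
        p_ideal /= p_ideal.sum()

        # Toy noise model: with probability phi sample from the ideal distribution,
        # otherwise output a uniformly random bitstring.
        ideal_draws = rng.choice(N, samples, p=p_ideal)
        uniform_draws = rng.integers(0, N, samples)
        use_ideal = rng.random(samples) < phi
        x = np.where(use_ideal, ideal_draws, uniform_draws)

        # Linear cross-entropy fidelity estimator: 2^n * E[p_ideal(x)] - 1.
        f_xeb = N * p_ideal[x].mean() - 1
        print(f"true phi = {phi}, estimated linear XEB fidelity = {f_xeb:.4f}")

    With these toy numbers the sampled distribution is very far from the ideal one (98% of the samples are uniformly random), yet the estimator comes out positive and close to phi; this is the gap between “approximating the ideal distribution” and “achieving a prescribed fidelity estimate” that stages (a)-(c) above describe.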

  3. Pingback: Shtetl-Optimized » Blog Archive » An understandable failing?

  4. Gil Kalai says:

    More from the Facebook thread

    [Screenshot: a Facebook exchange between Scott Aaronson and me.]

    This was the starting point of an interesting exchange on FB between Scott and me regarding the Google supremacy claims. Summary: we largely (but not fully) agree (about the evaluation of past advances, not about predictions for the future).

    The next commentator recalled from Chris Monroe’s lecture in Beer Sheva (22.02.2022) that IonQ has 2 billion dollars in cash and does not need Elon Musk.

    [Screenshot: the follow-up Facebook comment.]

  5. Gil Kalai says:

    I added some thoughts on the question: when (and if) is the right time to transfer to post quantum cryptography?

  6. Gil Kalai says:

    I added a proposal regarding developing and implementing post-quantum cryptography (one that I hope quantum computing enthusiasts could also endorse):

    1) Fiercely research post-quantum cryptography protocols and devote substantial resources to their study.

    2) Hold off on implementing new encryption standards based on post-quantum cryptography until DiVincenzo’s stage 4 is firmly established.

    (For a little more detail see the update in the post itself.)

    • Anna Johnston says:

      Thank you for posting this proposal, particularly given recent attacks on some of the NIST submissions: one of the round 3 algorithms (Rainbow) was broken over a weekend on a laptop, and even a round 4 candidate (SIKE) was recently broken in record time (SIKEp434 broken in an hour using a single core).

      • Frank says:

        Wow, and you failed to mention that there are dozens and dozens of submissions that are safe! Making quantum-safe crypto is not a hard task if you don’t care about performance; the reason the few protocols that were broken got broken is that their designers were taking design risks to make them fast(er) to win the contest. Btw, since NIST has already standardized these protocols in August, why don’t you just try to attack the chosen ones?

      • Anna Johnston says:

        WRT Frank’s comment from 10.10.22: First, there is no such thing as safe — if by ‘safe’ you mean perfect security. There are just dozens and dozens of submissions that have yet to be attacked. Second, NIST (as of April 2023) hasn’t actually standardized these algorithms — they’ve only completed the fourth round. Since that completion, they’ve called for new signature algorithms to be submitted (deadline is 1 June 2023). There have also been improvements to existing lattice attacks (https://arxiv.org/pdf/2205.13983.pdf, https://zenodo.org/record/6412487#.ZDnKVC_MKfA) which could threaten the current choices.

      • Frank says:

        Hi Anna Johnston,

        The result reported in the paper you posted has already “refuted” your argument (see Table 2 of the paper): the cost to attack the “weakest” Kyber512 is still 2^99.6 (considering that we have fewer than 10^82 atoms in the universe, this is more than safe).

        In the crypto world the statement “improvements to existing lattice attacks” has practically zero value. As we have seen with RSA over the years, people find “more efficient” ways to “attack” NP problems from time to time (this is expected), but these attacks are rarely significant enough to make a difference. Even when they are, people can simply work around them by increasing the key size (considering the disk space we have, this is a very cheap move). For example, the cost to attack Kyber1024 is still 2^208 according to the paper.

        “NIST (as of April 2023) hasn’t actually standardized these algorithms”

        The candidates were finalized last August; unless something dramatic happens, we expect NIST will move them to the “standardized” phase in 2024.

  7. Pingback: Quantum Computers: A Brief Assessment of Progress in the Past Decade |

  8. Big Joe says:

    There is no such thing as a free lunch.

  9. Gil Kalai says:

    The Google AI team reported on some implementations of distance-3 and distance-5 surface codes in this paper: https://arxiv.org/abs/2207.06431

  10. Gil Kalai says:

    Greg Kuperberg wrote to me:

    Here are the three recent experiments on quantum error correction to which I would draw your attention:

    1) The Quantinuum ion trap experiment:

    https://arxiv.org/abs/2107.07505
    https://arxiv.org/abs/2208.01863

    2) The Yale experiment:

    https://arxiv.org/abs/2211.09116

    3) The Google experiment:

    https://arxiv.org/abs/2207.06431

    The point that I want to make is that I think that you should take these experiments seriously in the context of your own topic, beyond just mentioning them in passing, etc.

  11. Pingback: Greg Kuperberg @ Tel Aviv University | Combinatorics and more

  12. Gil Kalai says:

    Here is a video from a very nice talk by Dolev Bluvstein at the Simons Institute about quantum error-correction and fault tolerance with neutral atoms.

  13. Pingback: Three Remarkable Quantum Events at the Simons Institute for the Theory of Computing in Berkeley | Combinatorics and more
