Hi everybody, my post summarizes an ongoing 5-year research project and four papers about the 2019 Google experiment. The timing of the post was indeed related to Google's Willow announcement and the fantastic septillion assertion. It is not clear why Google added a hyped, undocumented, fantastic claim to the announcement of nice published results about quantum error correction. I think that our work on Google's 2019 experiment provides useful information for evaluating Google's scientific conduct.
Welcome to Hacker News, Gil! I'm a big fan of your work in complexity theory and have thought long and hard about the entropy-influence conjecture, revisiting it again recently after Hao Huang's marvelous proof of the sensitivity conjecture.
To answer your question about why the hyped fantastic claim: as you must know, the people who provide the funds for quantum computing research almost certainly do not understand the research they are funding, and they need as feedback a steady stream of fantastic “breakthroughs” to justify writing the checks.
This has made QC research ripe for applied physicists who are skilled in the art of bullshitting about Hilbert spaces. While I don't doubt the integrity of a plurality of the scientists involved, I can say with certainty that approximately all of the people working on quantum computing research would not take me up on my bet of $2048 that RSA-2048 will not be factored by 2048, yet they would happily accept $204,800,000 to make arrays of quantum-related artifacts. Investors require breakthroughs, or the physicists will lose their budget for liquid gases, which certainly exceeds $2048.
While there might be interesting science discovered along the way, I think of QC a little like alchemy: the promise of unlimited gold attracted both bullshitters and serious physicists (Newton included) for centuries, but the physical laws that eventually emerged showed that turning lead into gold is not scalable. Similarly, it would be useful to determine the scaling laws for quantum computers. How big would an RSA key have to be before even a QC would need more particles than the universe contains to factor it in reasonable time? Is 2048 good enough that we can shelve all the peripheral number-theory research in post-quantum cryptography? Let's not forget the mathematicians too!
You should not put any weight on surveys like this.
I'm an ML/AI researcher. I get similar surveys regularly. I don't reply, and neither do my colleagues. The people who do reply are a self-selected group who are heavily biased toward thinking that AGI will happen soon, or who have a financial interest in creating hype.
Most of the experts from that report have a direct financial benefit from claiming that this will happen really soon now.
I think Shor's algorithm scales linearly, if it's possible to do the fine-grained control of the most significant bits. Some people don't think that's a problem, but if it is, then growing keys will be an effective defense.
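To put rough numbers on "scales linearly", here is a back-of-the-envelope sketch using assumed ballpark constants (roughly 3n logical qubits and on the order of n^3 Toffoli gates for an n-bit modulus; these figures are illustrative assumptions, not precise resource estimates):

```python
# Rough Shor's-algorithm resource scaling vs. RSA key size.
# The constants (3n logical qubits, 0.3*n^3 Toffoli gates) are assumed
# ballpark figures in the spirit of published resource estimates.
# The point: qubit count grows only linearly with key size, gate count
# polynomially, so growing keys raises the bar but never astronomically.

def shor_resources(n_bits, qubit_factor=3, toffoli_factor=0.3):
    logical_qubits = qubit_factor * n_bits
    toffoli_gates = toffoli_factor * n_bits ** 3
    return logical_qubits, toffoli_gates

for n in (1024, 2048, 4096, 8192, 16384):
    q, g = shor_resources(n)
    print(f"RSA-{n:>5}: ~{q:>6} logical qubits, ~{g:.1e} Toffoli gates")
```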
There's a specific view (as I understand it) that QFTs don't scale (https://arxiv.org/abs/2306.10072), but some folks seem to dismiss this for reasons I just don't grok at all.
There are two major issues with the paper you linked.
First, it says if you can't do accurate rotations then you can't factor. But the premise is false. Quantum error correction allows you to do arbitrarily accurate rotations. Specifically, you can achieve arbitrarily accurate 45 degree rotations around the X and Z axes by using state distillation and gate teleportation [1], and all rotations can be approximated by a sequence of these 45 degree rotations with the error decreasing exponentially versus the length of the sequence [2]. The paper's justification for saying you can't do accurate rotations is that they think quantum mechanics will end up being wrong (see section 4).
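If you want to see the idea behind [2] in action without reading the synthesis literature, here is a crude brute-force numpy sketch (my own toy illustration, not the efficient synthesis algorithms in the references): it enumerates products of H and T gates and reports the best achievable approximation error to an arbitrary Z-rotation. The error shrinks as longer sequences are allowed (slowly for brute force; real synthesis algorithms do far better).

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1.0, np.exp(1j * np.pi / 4)])   # 45-degree rotation about Z, up to global phase

def Rz(theta):
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def phase_fix(U):
    # Fix the global phase so duplicate unitaries can be detected.
    i = np.unravel_index(np.argmax(np.abs(U)), U.shape)
    return U * (np.conj(U[i]) / np.abs(U[i]))

def dist(U, V):
    # 0 when U equals V up to global phase, approaching 1 otherwise.
    return np.sqrt(max(0.0, 1.0 - abs(np.trace(U.conj().T @ V)) / 2))

target = Rz(0.3)                  # an arbitrary rotation angle
seen = set()
frontier = [np.eye(2, dtype=complex)]
best = np.inf
for length in range(1, 21):
    next_frontier = []
    for U in frontier:
        for G in (H, T):
            V = phase_fix(G @ U)
            key = tuple(np.round(V.flatten(), 8))
            if key not in seen:
                seen.add(key)
                next_frontier.append(V)
    frontier = next_frontier
    best = min(best, min(dist(V, target) for V in frontier))
    print(f"H/T sequences of length <= {length:2d}: best error {best:.4f}")
```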
Second, even if you assume for the sake of argument that rotations are inherently noisy, the conclusion doesn't actually follow. The mistake the paper makes is to assume the algorithm will use the textbook QFT circuit [3], which uses n^2/2 rotations for an n-qubit QFT, allowing large amounts of error to accumulate. But in practice, because this QFT is at the end of a circuit, you would use the qubit-recycling QFT which only performs n rotations [4]. Alternatively, if rotations were really such a problem, you could perform ~30 rotations to prepare what's known as a phase gradient state. You can then achieve later rotations via the phase kickback of adding a constant into the phase gradient controlled by the qubit you want to rotate [5]. In other words, the paper asserts you need millions of rotations to factor a 2048 bit number when actually you only need dozens. Everything else can be done with Clifford+Toffoli gates.
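And here is a tiny numpy check of the phase-kickback mechanism in [5] (a sketch of the mechanism only, not the full adder construction): controlled addition of a constant c into a register holding the phase gradient state rotates the control qubit by 2*pi*c/2^n and leaves the register untouched and unentangled.

```python
import numpy as np

n = 8                       # register size
N = 2 ** n
c = 37                      # constant to add => rotation angle 2*pi*c/N

# Phase gradient state: (1/sqrt(N)) * sum_k exp(-2*pi*i*k/N) |k>
pg = np.exp(-2j * np.pi * np.arange(N) / N) / np.sqrt(N)

# Control qubit in |+>, joint state |+> (x) |PG>, stored as a 2 x N array.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
state = np.kron(plus, pg).reshape(2, N)

# Controlled addition of c (mod N): on the control = |1> branch, permute
# basis states |k> -> |k + c mod N>; the |0> branch is untouched.
state[1] = np.roll(state[1], c)

# Expected result: register still |PG>, control picked up the relative
# phase exp(2*pi*i*c/N), i.e. a Z-rotation of the control qubit.
rotated_control = np.array([1.0, np.exp(2j * np.pi * c / N)]) / np.sqrt(2)
expected = np.outer(rotated_control, pg)
print("phase kickback gives the rotation:", np.allclose(state, expected))
```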
So yeah, this paper is not at all a convincing takedown of quantum factoring. The formal part focuses on the wrong cost and the informal justification is that they think quantum mechanics is wrong.
That's a tough argument given that there are already known algorithms to scale it. It's possible QM is just broken, but if it's not, it's hard to see how error correction wouldn't work.
Would be interested to hear your response to Scott Aaronson’s comment:
“Gil’s problem is that the 2019 experiment was long ago superseded anyway: besides the new and more inarguable Google result, IBM, Quantinuum, QuEra, and USTC have now all also reported Random Circuit Sampling experiments with good results.”
I think that I responded on Scott's blog, but I can respond again, perhaps from a different angle. I think that it is important to scrutinize one (major) experiment at a time.
We studied the Google 2019 claims, and along the way we also developed tools that can be applied in further work and identified methodological problems that could be relevant in other cases (or, better, could be avoided in newer experiments). Of course, other researchers can study other papers.
I don't see in what sense the new results by Google, Quantinuum, QuEra, and USTC are more inarguable, and I don't know what experiment by IBM Scott refers to. I also don't see why it matters regarding our study.
Actually, in our fourth paper there is a section about quantum circuit experiments that deserve to be scrutinized (a list that can now be supplemented with a few more), and I think we relate to all the examples given by Scott (except IBM) and more. (Correction: we do mention IBM's 127-qubit experiment; I forgot.)
“Gil Kalai #23: So we’re perfectly clear, from my perspective your position has become like that of Saddam Hussein’s information minister, who repeatedly went on TV to explain how Iraq was winning the war even as American tanks rolled into Baghdad. I.e., you are writing to us from an increasingly remote parallel universe.
The smooth exponential falloff of circuit fidelity with the number of gates has by now been seen in separate experiments from Google, IBM, Quantinuum, QuEra, USTC, and probably others I’m forgetting right now. Yes, IBM’s gate fidelity is a little lower than Google’s, but the exponential falloff pattern is the same.
And, far from being “statistically unreasonable,” this exponential falloff is precisely what the simplest model of the situation (i.e., independent depolarizing noise on each qubit) would predict. You didn’t predict it, because you started from the axiom that quantum error-correction had to fail somehow—but the rest of us, who didn’t start from that axiom, did predict it!”
Hi Dave, nice to see you. Our quantum computer discussions go back to 2006, and as a member of the Google team you can certainly tell us about your perspective and personal angle, if you were involved in either of the two recent assertions.
It is disappointing that you endorse Scott's uncalled-for and somewhat juvenile analogy. I think it is a wrong analogy whether I am right or wrong (both on the general question of quantum computation and on the specific question of my evaluation of the Google supremacy efforts).
In any case here is my response to Scott's comment:
"Hi everybody,
1) I found the analogy in #39 offensive and inappropriate.
2) As I said many times, I don’t take it as axiomatic that scalable quantum computing is impossible. Rather, I take the question of the possibility of scalable quantum computing as one of the greatest scientific problems of our time.
3) The question today is if Google’s current fantastic claim of “septillion years beyond classic” advances us in our quest for a scientific answer. Of course, we need to wait for the paper and data but based on our five-year study of the 2019 Google experiment I see serious reasons to doubt it.
4) Regarding our claim that the fitness of the digital prediction (Formula (77)) and the fidelity estimations are unreasonable, Scott wrote: “And, far from being “statistically unreasonable,” this exponential falloff is precisely what the simplest model of the situation (i.e., independent depolarizing noise on each qubit) would predict. You didn’t predict it, because you started from the axiom that quantum error-correction had to fail somehow—but the rest of us, who didn’t start from that axiom, did predict it!”
Scott, our concern is not with the exponential falloff. It is with the actual deviations of Formula (77)'s predictions (the “digital prediction”) from the reported fidelities. These deviations are statistically unreasonable (too small). The Google team provided a statistical explanation for this agreement based on three premises. These premises are unreasonable as well, and they contradict various other experimental findings. My post gives a few more details, and our papers go into it in much greater detail. I will gladly explain and discuss the technical statistical reasons why the deviations are statistically unreasonable.
5) “Yes, IBM’s gate fidelity is a little lower than Google’s, but the exponential falloff pattern is the same”
Scott, do you have a reference or link to this claim that the exponential falloff pattern is the same? Of course, one way (that I always suggested) to study the concern regarding the “too good to be true” a priori prediction in Google’s experiment is to compare with IBM quantum computers."
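For readers who have not looked at the papers: schematically (this is the generic form of such a prediction; see the Google supplement for the exact Formula (77)), the digital prediction estimates the full-circuit fidelity as a product of the individual component fidelities,

$$ \hat{F} \;=\; \prod_{g \in \text{gates}} \left(1 - e_g\right)\; \prod_{q \in \text{qubits}} \left(1 - e_q^{\mathrm{meas}}\right), $$

and our statistical concern is about how closely the reported fidelities track this product.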
In 2019 you asserted [1] that attempts to create a distance-5 surface code would fail. Do you think you were wrong? If so, what was your mistake, and why do you think you made it? If not, what's the problem with Google's results? Has your estimation of the feasibility of quantum computers changed in light of this publication?
I have a silly question, and I'm going to shamelessly use HN to ask it.
In Kitaev's construction of the high-purity approximation to a magic state, he starts with the assumption that we begin with a state which can be represented as the tensor product of n mixed states which are "close enough". I don't understand where this separability property comes from. My (very) naive assumption would be that there is some big joint state of which you have a piece, and the information that I have about this piece is n of its partial traces, which are indeed n copies of the "poor man's" magic state.
Can I know more than that? There's lots of stuff in the preimage of these partial traces. Why am I allowed to assert that I have the nicest one?
Distillation will still work if the inputs are slightly entangled with each other or with other qubits.
I recommend just simulating the specific case you're worried about. It's only a 15 qubit circuit; not at all expensive to check. You'll either see it working and stop worrying, or have an amazing concrete counter example to publish.
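To make the "not at all expensive" part concrete: a 15-qubit state vector is only 2^15 = 32,768 complex amplitudes, about half a megabyte. Here is a bare-bones numpy scaffold for that kind of check (the gates applied below are random placeholders just to show the cost; you would substitute the actual distillation circuit you are worried about):

```python
import numpy as np

n = 15                                    # qubits
state = np.zeros(2 ** n, dtype=complex)   # 32768 amplitudes (~0.5 MB)
state[0] = 1.0                            # start in |00...0>

def apply_1q(state, gate, q):
    """Apply a 2x2 gate to qubit q (qubit 0 = most significant bit)."""
    s = state.reshape(2 ** q, 2, 2 ** (n - q - 1))
    return np.einsum('ab,ibj->iaj', gate, s).reshape(-1)

def apply_cnot(state, control, target):
    """Apply CNOT by permuting amplitudes where the control bit is 1."""
    idx = np.arange(2 ** n)
    cbit = (idx >> (n - 1 - control)) & 1
    flipped = idx ^ (1 << (n - 1 - target))
    out = state.copy()
    out[cbit == 1] = state[flipped[cbit == 1]]
    return out

# Placeholder workload: a few hundred random gates run essentially instantly.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1.0, np.exp(1j * np.pi / 4)])
rng = np.random.default_rng(0)
for _ in range(300):
    kind = rng.integers(3)
    if kind == 0:
        state = apply_1q(state, H, int(rng.integers(n)))
    elif kind == 1:
        state = apply_1q(state, T, int(rng.integers(n)))
    else:
        c, t = rng.choice(n, size=2, replace=False)
        state = apply_cnot(state, int(c), int(t))
print("norm after 300 gates:", np.linalg.norm(state))   # stays 1.0
```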
Can it be that he assumes you have some device that produces somewhat bad magic states and then you distill them into a better one? That would be the typical situation in practice.
Hi! I am under the impression that you're one of the better-known skeptics of the practicality of QEC. And to my untrained eye, the recent QEC claim is the more interesting one of the two.
(I am inclined to ignore the claims about quantum supremacy, especially when they're based on random circuit sampling, which, as you pointed out, led to assertions that were orders of magnitude off because nobody cares about this problem classically, so there is not much research effort into finding better classical algorithms. And of course, there's a problem with efficient verification, as Aaronson mentions in his recent post.)
I've seen a few comments of yours where you mentioned that this is indeed a nice result (predicated on the assumption that it's true) [0, 1]. I worry a bit that you're moving the goalposts with this blog post, even as I can't fault any of your skepticism.
I work at Google, but not anywhere close to quantum computing, and I don't know any of the authors or anyone who works on this. But I'm in a space where I feel impacts of the push for post-quantum crypto (e.g. bloat in TLS handshakes) and have historically pooh-poohed the "store now, decrypt later" threat model that Google has adopted -- I have assumed that any realistic attacks are at a minimum decades away (if they ever come to fruition), and very little (if any) of the user data we process today will be relevant to a nation-state actor in, say, 30 years.
If I take the Willow announcement at face value (in particular, the QEC claims), should I update my priors? In particular, how much further progress would need to be made for you to abandon your previously stated skepticism about the general ability of QEC to continue to scale exponentially? I see a mention of one-in-a-thousand error rates on distance-7 codes, which seems tantalizingly close to what's claimed by Willow, but I would like to hear your take.
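For concreteness, the picture I have in mind for "scale exponentially" is the standard below-threshold model, sketched here with an assumed suppression factor (the factor of 2 is an illustrative assumption on my part, not a figure from the Willow paper; the anchor is the one-in-a-thousand-at-distance-7 number I mentioned above):

```python
# Standard below-threshold scaling picture: each increase of the code
# distance d by 2 suppresses the logical error rate by a factor LAMBDA.
# LAMBDA = 2.0 is an assumed illustrative value, not Willow's number;
# the anchor is the "one in a thousand at distance 7" figure above.

LAMBDA = 2.0
anchor_d, anchor_rate = 7, 1e-3

def logical_error_rate(d):
    return anchor_rate * LAMBDA ** (-(d - anchor_d) / 2)

for d in (3, 5, 7, 9, 11, 13, 15, 25):
    print(f"d = {d:2d}: ~{logical_error_rate(d):.1e} logical errors per cycle")
```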
> If I take the Willow announcement at face value (in particular, the QEC claims) [...]
Considering that Google's 2019 claim of quantum supremacy was, at the very least, severely overestimated (https://doi.org/10.48550/arXiv.1910.09534) I would wait a little bit before making any decisions based on the Willow announcement.
> very little (if any) of the user data we process today will be relevant to a nation-state actor in, say, 30 years.
30-year-old skeletons in people's closets can be great blackmail to gain leverage with.
edit: As I understand it, this is a popular way for state actors to "flip" people: threaten them with blackmail unless they provide confidential information or take certain actions.
I'm confused by this line of thinking. Are there really actors who a) do have the entire internet traffic since forever stored, but b) lack the resources to just go get whatever targets' sensitive data they want, right now?
The data this actor wants may be in an air gapped secure facility, for example Iran's nuclear facilities. Decrypting old social media messages that show that a scientist at that facility had a homosexual relationship while he was in college in Europe would give you access in a way you didn't have before.
That is an extreme example but high value information is often stored and secured in ways that are very resistant to theft. Using less secure and/or historical data to gain leverage over those with access to that data is exactly how spies have been doing things for centuries.
a) Yes. Plus most all digitally mediated activity (phone calls, texts, ___location, purchasing history, yadda yadda).
b) Astronomy has (had?) the same conundrum: gathering acres of data, most of which will never be examined. (Don't call attention to yourself and hopefully law enforcement will ignore you.) Alas, now we're creating tools for chewing through big data, to spot patterns and anomalies. For better or worse.
They might be interested in all data if they had easy access to it and could filter it as it came in.
On the other hand, if they're tapping even a 100 Gbps link that's run at 50% average utilization, over the course of a year that's roughly 175 PiB of data. This is a frankly stupid amount of storage for such a tiny cross-section of our actual traffic. And I'm supposed to believe that they actually want to do that for years, storing exabytes (or even zettabytes, depending on the scale of such a collection effort) of traffic in aggregate, on the off-chance that they have a quantum computer in 30 years? Tape storage might be cheap (on the order of single-digit dollars per TiB), but even at that price, just 1 ZiB is billions of dollars.
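Rough numbers, in case anyone wants to check that arithmetic (the tape price is an assumed ballpark):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def link_bytes_per_year(gbps, utilization):
    # Convert line rate in Gbit/s to bytes captured over a year.
    return gbps * 1e9 / 8 * utilization * SECONDS_PER_YEAR

captured = link_bytes_per_year(100, 0.5)     # one tapped 100 Gbps link
print(f"one link, one year: {captured / 2**50:.0f} PiB")

dollars_per_tib = 5                          # assumed ballpark tape cost
zib_in_tib = 2 ** 30                         # 1 ZiB = 2^30 TiB
print(f"storing 1 ZiB on tape: ~${zib_in_tib * dollars_per_tib / 1e9:.0f} billion")
```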
Sure, maybe you could reduce those numbers by performing targeted wiretaps, but it's also way easier to just subpoena Google and ask them to provide search history for individuals on certain dates...