> In quantum research, to demonstrate any valuable forward progress you must compute something that is impossible to do with a traditional computer
This is factually wrong. The most interesting problems motivating quantum computing research are hard to solve, but easy to verify on classical computers. The factorization problem is the canonical example.
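To make that concrete, here is a minimal sketch (assuming sympy is available for the primality helpers; the function name is mine): verifying a claimed factorization is one multiplication plus two primality checks, while producing the factors is the part nobody can do at scale.

    # Verifying a claimed factorization is trivial; finding the factors is the
    # exponentially harder direction. Assumes sympy for isprime/randprime.
    from sympy import isprime, randprime

    def verify_factorization(n: int, p: int, q: int) -> bool:
        """Accept the claim only if p*q == n and both factors are prime."""
        return p * q == n and isprime(p) and isprime(q)

    # Toy RSA-style modulus; a real one would be 2048+ bits.
    p = randprime(10**29, 10**30)
    q = randprime(10**29, 10**30)
    n = p * q

    print(verify_factorization(n, p, q))      # True, checked in microseconds
    print(verify_factorization(n, p + 2, q))  # False: a wrong factor is rejected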
The problem is that existing quantum computers are not powerful enough to solve the interesting problems, so researchers have to invent semi-artificial problems to demonstrate "quantum advantage" to keep the funding flowing.
There is a plethora of opportunities for LLMs to show their worth. For example, finding interesting links between different areas of research, or acting as a proof assistant in a math/programming formal verification system. There is a lot of ongoing work in this area, but at the moment the signal-to-noise ratio of such tools is too low for them to be practical.
No, it is factually right, at least if Scott Aaronson is to be believed:
> Having said that, the biggest caveat to the “10^25 years” result is one to which I fear Google drew insufficient attention. Namely, for the exact same reason why (as far as anyone knows) this quantum computation would take ~10^25 years for a classical computer to simulate, it would also take ~10^25 years for a classical computer to directly verify the quantum computer’s results!! (For example, by computing the “Linear Cross-Entropy” score of the outputs.) For this reason, all validation of Google’s new supremacy experiment is indirect, based on extrapolations from smaller circuits, ones for which a classical computer can feasibly check the results. To be clear, I personally see no reason to doubt those extrapolations. But for anyone who wonders why I’ve been obsessing for years about the need to design efficiently verifiable near-term quantum supremacy experiments: well, this is why! We’re now deeply into the unverifiable regime that I warned about.
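For context, the score he mentions is computed roughly as in the sketch below (illustrative only, not Google's code; ideal_probability is a hypothetical placeholder). The point is that the score needs the ideal output probabilities, and obtaining those means classically simulating the very circuit whose simulation is claimed to take ~10^25 years.

    # Sketch of the linear cross-entropy benchmark (XEB) score.
    import numpy as np

    def ideal_probability(circuit, bitstring: str) -> float:
        """|<bitstring|U|0...0>|^2 for the circuit's unitary U.
        For ~70 qubits this needs 2^n amplitudes: the infeasible step."""
        raise NotImplementedError("full state-vector simulation goes here")

    def linear_xeb(circuit, samples: list[str], n_qubits: int) -> float:
        """F_XEB = 2^n * mean(p_ideal(sample)) - 1.
        Close to 1 for a faithful device, close to 0 for random bitstrings."""
        probs = np.array([ideal_probability(circuit, s) for s in samples])
        return (2 ** n_qubits) * probs.mean() - 1.0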
It's a property of the "semi-artificial" problem chosen by Google. If anything, it means that we should heavily discount this claim of "quantum advantage", especially in light of the inherently probabilistic nature of quantum computation.
Note that the OP wrote "you MUST compute something that is impossible to do with a traditional computer". I demonstrated a simple counter-example to this statement: you CAN demonstrate forward progress by factoring big numbers, but the problem is that no one can do it despite billions in investment.
If they can't, then is it really quantum supremacy?
They claimed it last time in 2019 with Sycamore, which could perform in 200 seconds a calculation that Google claimed would take a classical supercomputer 10,000 years.
That was debunked when a team of scientists replicated the same thing on an ordinary computer in 15 hours with a large number of GPUs. Scott Aaronson said that on a supercomputer, the same technique would have solved the problem in seconds.[1]
So if they now come up with another problem which they say cannot even be verified by a classical computer and use it to claim quantum advantage, then it is right to be suspicious of that claim.
> If they can't, then is it really quantum supremacy?
Yes, quantum supremacy on an artificial problem is quantum supremacy (even if it's "this quantum computer can simulate itself faster than a classical computer"). Quantum supremacy on problems that are easy to verify would of course be nicer, but unfortunately not all problems happen to have an easy verification.
That applies specifically to the artificial problem Google created to be hard for classical computers, and in the end it turned out not to be that hard: IBM came up with a method to do in just 2 days what Google said would take 10,000 years on a classical computer. I would not be surprised if a similar reduction happened to their second attempt, if anyone were motivated enough to look at it.
In general we have thousands of optimisation problems that are hard to solve but trivial to verify.
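For example (a toy sketch, not any particular benchmark): checking a proposed 3-SAT assignment takes time linear in the formula size, even though finding one is NP-hard.

    # Verify a 3-SAT assignment: clauses as lists of signed variable indices.
    def satisfies(clauses: list[list[int]], assignment: dict[int, bool]) -> bool:
        """True if every clause has at least one satisfied literal."""
        return all(
            any(assignment[abs(lit)] == (lit > 0) for lit in clause)
            for clause in clauses
        )

    # (x1 or not x2 or x3) and (not x1 or x2) and (x2 or x3)
    clauses = [[1, -2, 3], [-1, 2], [2, 3]]
    assignment = {1: True, 2: True, 3: False}
    print(satisfies(clauses, assignment))  # True, checked in O(total literals)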
What's factually wrong about it? OP said "you must compute something that is impossible to do with a traditional computer", which is true, regardless of the output produced. Verifying an output is very different from verifying the proper execution of a program; it's the difference between testing a program and seeing its code.
What is being computed is fundamentally different from what classical computers do, so verifying proper adherence to the instructions becomes increasingly complex.
They left out the key part, which was incorrect: the sentence right after, "If you can't do it with a traditional computer, it suddenly becomes difficult to verify correctness".
The point stands that for actually interesting problems, verifying the correctness of the results is trivial. I don't know if "adherence to instructions" translates at all to quantum computing.
> This is factually wrong. The most interesting problems motivating the quantum computing research are hard to solve, but easy to verify on classical computers.
Your parent did not talk about quantum computers. I guess he rather had predictions of novel quantum field theories or theories of quantum gravity in the back of his mind.