Hi! I am under the impression that you're one of the better-known skeptics of the practicality of QEC. And to my untrained eye, the recent QEC claim is the more interesting one of the two.
(I am inclined to ignore the quantum supremacy claims, especially when they're based on random circuit sampling: as you've pointed out, earlier claims there turned out to be off by orders of magnitude, because nobody cares about this problem classically and so there is little research effort into finding better classical algorithms. And of course there's the problem of efficient verification, as Aaronson mentions in his recent post.)
I've seen a few comments of yours where you mentioned that this is indeed a nice result (predicated on the assumption that it's true) [0, 1]. I worry a bit that you're moving the goalposts with this blog post, even as I can't fault any of your skepticism.
I work at Google, but not anywhere close to quantum computing, and I don't know any of the authors or anyone who works on this. But I'm in a space where I feel impacts of the push for post-quantum crypto (e.g. bloat in TLS handshakes) and have historically pooh-poohed the "store now, decrypt later" threat model that Google has adopted -- I have assumed that any realistic attacks are at a minimum decades away (if they ever come to fruition), and very little (if any) of the user data we process today will be relevant to a nation-state actor in, say, 30 years.
If I take the Willow announcement at face value (in particular, the QEC claims), should I update my priors? In particular, how much further progress would need to be made for you to abandon your previously stated skepticism about the general ability of QEC to continue to scale exponentially? I see a mention of one-in-a-thousand error rates on distance-7 codes, which seems tantalizingly close to what's claimed for Willow, but I would like to hear your take.
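For concreteness, here is the back-of-the-envelope I have in mind when I say "scale exponentially" (a rough sketch; the Λ ≈ 2 suppression factor per step of 2 in code distance and the 10^-6 target are illustrative assumptions on my part, not figures from the announcement):

    # Sketch: if each +2 in code distance cuts the logical error rate by a factor
    # of Lambda (~2 assumed), how far must d grow from ~1e-3 per cycle at d = 7
    # to reach ~1e-6, and how many physical qubits does a surface code logical
    # qubit then need (~2*d^2 - 1)?
    lam, d, eps = 2.0, 7, 1e-3   # assumed suppression factor, distance, error rate
    while eps > 1e-6:
        d += 2
        eps /= lam
    print(d, eps, 2 * d**2 - 1)  # -> 27, ~9.8e-7, 1457 physical qubits per logical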
> If I take the Willow announcement at face value (in particular, the QEC claims) [...]
Considering that Google's 2019 claim of quantum supremacy was, at the very least, severely overstated (https://doi.org/10.48550/arXiv.1910.09534), I would wait a little before making any decisions based on the Willow announcement.
> very little (if any) of the user data we process today will be relevant to a nation-state actor in, say, 30 years.
Thirty-year-old skeletons in people's closets can be great blackmail material for gaining leverage.
edit: As I understand it, this is a popular way for state actors to "flip" people: threaten them with blackmail unless they hand over confidential information or take certain actions.
I'm confused by this line of thinking. Are there really actors who a) have all internet traffic since forever stored, but b) lack the resources to just go get whatever sensitive data they want from their targets, right now?
The data this actor wants may be in an air-gapped secure facility, for example one of Iran's nuclear facilities. Decrypting old social media messages showing that a scientist at that facility had a homosexual relationship while he was in college in Europe would give you access you didn't have before.
That is an extreme example, but high-value information is often stored and secured in ways that are very resistant to theft. Using less secure and/or historical data to gain leverage over those with access to that information is exactly how spies have operated for centuries.
a) Yes. Plus nearly all digitally mediated activity (phone calls, texts, ___location, purchasing history, yadda yadda).
b) Astronomy has (had?) the same conundrum: gathering acres of data, most of which will never be examined. (Don't call attention to yourself and hopefully law enforcement will ignore you.) Alas, now we're creating tools for chewing through big data, to spot patterns and anomalies. For better or worse.
They might be interested in all data if they had easy access to it and could filter it as it came in.
On the other hand, if they're tapping even a single 100 Gbps link that's run at 50% average utilization, over the course of a year that's roughly 175 PiB of data. This is a frankly stupid amount of storage for such a tiny cross-section of our actual traffic. And I'm supposed to believe that they actually want to do that for years, storing exabytes (or even zettabytes, depending on the scale of such a collection effort) of traffic in aggregate, on the off-chance that they have a quantum computer in 30 years? Tape storage might be cheap (on the order of single-digit dollars per TiB), but even at that price, just 1 ZiB is billions of dollars.
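A rough sketch of the arithmetic, in case I've slipped a unit somewhere (the $5/TiB figure is an assumed mid-range tape price, not a quote):

    # 100 Gbps link at 50% average utilization, captured for one year
    seconds_per_year = 365 * 24 * 3600             # ~3.15e7 s
    bytes_per_year = 50e9 / 8 * seconds_per_year   # ~2.0e17 bytes
    print(bytes_per_year / 2**50)                  # ~175 PiB per tapped link-year
    cost_per_tib_usd = 5                           # assumed tape price, USD/TiB
    print(2**30 * cost_per_tib_usd / 1e9)          # ~$5.4 billion just to store 1 ZiB (2^30 TiB)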
Sure, maybe you could reduce those numbers by performing targeted wiretaps, but it's also way easier to just subpoena Google and ask them to provide search history for individuals on certain dates...
[0] https://gilkalai.wordpress.com/2024/08/21/five-perspectives-...
[1] https://quantumcomputing.stackexchange.com/questions/30197/#...