Basically, if the error rate of an individual gate is low enough, you can combine many copies of it into larger (less efficient) logical gates with arbitrarily low error. 99% fidelity, as you say, is a bit low, and you'd need impractically large circuits to make use of such gates.
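To put a toy number on that: with a simple triplicate-and-majority-vote scheme (not how real quantum codes work, but the arithmetic has the same flavor), the combined gate fails with probability 3p^2(1-p) + p^3, which only beats p when p is small enough to begin with. Concatenate a few levels and the error collapses below the threshold and blows up above it. Rough sketch with made-up error rates:

    # Toy model of the threshold idea: triplicate and majority-vote,
    # ignoring errors in the vote itself (so the numbers are optimistic).
    def majority_error(p):
        # Probability that 2 or 3 of the 3 copies fail.
        return 3 * p**2 * (1 - p) + p**3

    for p in (0.01, 0.3, 0.6):            # illustrative physical error rates
        q = p
        print(f"physical error rate {p}:")
        for level in range(1, 5):         # levels of concatenation
            q = majority_error(q)
            print(f"  after level {level}: ~{q:.2e}")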
You want error correction with classical computers too, it just works differently.
Regular CPUs don't have error correction. The logic gates simply don't make mistakes, even after trillions of operations. The reason is that the gates are digital. Manufacturing imperfections and noise don't matter, as long as the signal levels stay within their bounds.
On the other hand, current quantum computers are analog. Each quantum gate will get the coefficients of its output states a little bit wrong, and after a few tens of operations only noise is left in the qubits. In principle, quantum error correction could be used to measure and compensate for these errors. But none of the technologies demonstrated so far are anywhere near good enough for this.
> The logic gates simply don't make mistakes, even after trillions of operations.
The error rates are not zero. Low, but not zero. Trillions of operations is what, a split-second of runtime on a five-year-old GPU?
"High-reliability" systems invariably use some form of error correction or error detection. You can do this at different levels of abstraction. At a low level, you can build redundant gates. At a company I used to work for, this was a product we sold--it would synthesize ICs from an HDL and incorporate error correction. (This particular feature forced the company to get ITAR export licenses.) At a different company I worked for, we did our error correction at a high level using software. We encountered hardware errors on a regular basis. I'm not even talking about ECC--I'm talking about CPU errors.
The only reason you can think that imperfections and noise don't matter is that there isn't much noise in your environment and you aren't handling enough data to notice any errors.
Deal with a large enough amount of data, enough CPUs, enough RAM, and errors become a certainty.
The "quantum computers are analog" line is at best profoundly misleading. If your definition of "analog" extends to quantum computers, then I'd say that digital computers are also analog. Which is not an incorrect thing to say.
Sure, if your CPU is controlling an airplane, it had better be redundant. Still, the CPUs inside your laptop, smartphone, or even a compute server do their jobs pretty well without any error correction.
The transistors inside a digital CPU are of course analog devices. However, the digital interpretation of the logic levels means that analog errors do not propagate between gates. If the voltage fluctuations are small enough, the probability of triggering a logic gate incorrectly becomes astronomically small. Analog computers (such as all current quantum computers) do not have this feature. Each operation adds a little bit of error to the quantity that is being processed. This could be the voltage in an electronic analog computer, or the state coefficients in a quantum computer.
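If you want to see that restoring property in a toy model (the noise level and chain length below are arbitrary; this is not a circuit simulation):

    import random

    # Push a value through a long chain of noisy stages.  The "digital"
    # stage snaps the level back to 0 or 1 after every step; the "analog"
    # stage just carries the accumulated noise forward.
    random.seed(0)
    N_STAGES = 10_000
    SIGMA = 0.05                  # per-stage noise, in units of the logic swing

    digital, analog = 1.0, 1.0
    for _ in range(N_STAGES):
        noise = random.gauss(0, SIGMA)
        digital = 1.0 if digital + noise > 0.5 else 0.0   # level restored
        analog += noise                                    # noise accumulates

    print("digital output:", digital)   # still exactly 1.0
    print("analog output: ", analog)    # random walk, drift ~ SIGMA * sqrt(N)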
> Each operation adds a little bit of error to the quantity that is being processed. This could be the voltage in an electronic analog computer, or the state coefficients in a quantum computer.
This brings us back to the quantum threshold theorem, which says that it's possible to design a quantum computer so that you can perform as many operations as you like without accumulating additional error, as long as the error rate of each individual gate in the quantum computer stays below a fixed threshold.
This is one of the reasons why thinking of a quantum computer as "analog" will lead you astray. Quantum computers are more like classical digital computers than like analog computers.
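For concreteness, the scaling usually quoted for surface-code-style error correction is p_logical ≈ A * (p/p_th)^((d+1)/2), where d is the code distance. Below threshold, every increase in d buys an exponential suppression. A rough sketch (A and p_th here are placeholders, not measured values):

    # Heuristic logical error rate for a distance-d code:
    #   p_logical ~ A * (p / p_th) ** ((d + 1) / 2)
    # A and P_TH are illustrative placeholders.
    A, P_TH = 0.1, 0.01

    def logical_error(p_physical, d):
        return A * (p_physical / P_TH) ** ((d + 1) / 2)

    for d in (3, 7, 11, 15):
        print(d, logical_error(0.001, d))   # physical error 10x below threshold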
They do. They have physical error correction in the form of a majority vote: whichever way the majority of electrons goes is what the digital signal will be. And the total number of electrons is high enough that you get fewer than a handful of errors in years of operation, if that.
SEUs are more of a problem in space. CPUs used for space missions mitigate the errors in a few different ways. In the past, the feature size on the CPU substrate was large enough to absorb most cosmic radiation without incident. As the feature sizes have shrunk and the old foundries are no longer producing the old chips, many missions have adopted "multiple voting" architectures. Satellites often use "triple voted" processors. The Space Shuttle had a system with five votes.
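The voting itself is the easy part; the hard part is keeping three independent copies running. A sketch of the bitwise majority such a voter computes (hypothetical, not any particular flight computer's implementation):

    def majority3(a: int, b: int, c: int) -> int:
        # A bit is set in the output iff it is set in at least two inputs.
        return (a & b) | (a & c) | (b & c)

    good = 0b1011_0010
    upset = good ^ 0b0000_1000                    # one bit flipped by an SEU
    assert majority3(good, upset, good) == good   # the voter masks the flip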
I'm not a quantum computing expert, but I wonder if an SEU would even matter in some parts of a quantum system. The qubits are in an undefined state during operation anyway.
Not my area, but you hear about bit flips due to cosmic rays, and I've heard of mainframes that run everything on two CPUs as a form of error correction. I think your point stands, just pointing out that there are occasional errors in CPUs.
Those are physical qubits. That's nowhere close to even a single logical qubit that could do anything like that. They are showing us what they can do; now someone has to figure out how to turn it into something practical.
I think the current goal is the factorization of 15. There is no reason to suggest insane levels of challenge like 21.
(Edit: I have a mathematical physics degree; everything I've said here is sarcastic/in jest. The person I'm replying to is right, this is not how Shor's algorithm works, although I'd hope that was obvious from my original tongue-in-cheek comment.)
There are actually no 2-bit RSA keys. The smallest possible key is n=10, e=3.
Conveniently, d=3 too, i.e. encryption and decryption are the same operation.
I guess you can think of it as a ROT-13 of the RSA world.
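Quick sanity check of that toy key, n=10, e=d=3 -- since e and d are equal, applying the map twice gets you back where you started, ROT13-style:

    # Toy RSA key: n = 10 = 2 * 5, e = d = 3 (e*d = 9, which is 1 mod lambda(10) = 4).
    n, e, d = 10, 3, 3
    for m in range(n):
        c = pow(m, e, n)              # "encrypt"
        assert pow(c, d, n) == m      # "decrypt" recovers the message
        print(m, "->", c)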
Third on the front page and no comments about how this is a total misrepresentation and/or a huge leap? C'mon folks - this is where I come for due diligence on these kinds of things.
Good leap forward, but much more than 99% is needed -- at least five or six nines.
The thing is, as you add more qubits, "operation fidelity" (as they call it) goes down exponentially, maybe even superexponentially(?), so it's very hard to keep it within an acceptable range on larger systems.
Sure, in theory. In practice there are all sorts of physical constraints on the hardware that add to and depend on each other, and whose complexity "grows" (I'm abusing the term, I know) much faster than linearly, hence why I used the term exponential.
"Quantum mechanical states are extremely fragile and require near absolute isolation from the environment. Such conditions are hard to create and typically require temperatures near absolute zero and shielding from radiation. Thus, building quantum computers is expensive and difficult. The challenge increases dramatically with increasing size (number of qubits and the length of time they must be coherent) and thus only small to medium scale computers have been built so far.", from [1]. Also, I recommend reading out the whole article as it gives a nice, broad, overview of the challenges in place, it's not as easy as putting 1 qubit + 1 qubit together.
There is an extremely broad range of scaling between linear and exponential…
Even in architectures with nearest neighbor gates, the (multiplicative) overhead stemming from error correction will only need to be logarithmic in the size of the computation. The constants may be unfavorable, but a log is still a log. See for example https://arxiv.org/abs/1208.0928 and https://arxiv.org/abs/1310.2984
Actually, it doesn't increase much faster than linear. It's quasilinear, and so in O(n^(1+\epsilon)) for all \epsilon > 0. There are a variety of such results, so most objections don't carry much weight.
The paper you linked to names challenges, but it only states them with respect to reaching practical scale, which comes down to the constant associated with the growth rate, not the growth rate itself.
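To make the quasilinear claim concrete with the same heuristic as above: to run N logical operations within, say, a 1% total failure budget, each operation needs a logical error rate below 0.01/N, the code distance required for that grows like log N, and a surface-code-style patch costs on the order of d^2 physical qubits per logical qubit. Rough numbers (all constants are placeholders):

    # Required code distance (and physical-qubit overhead) vs. computation
    # size, using p_logical ~ A * (p / p_th) ** ((d + 1) / 2).
    # All constants here are illustrative placeholders.
    A, P_TH, P, EPS = 0.1, 0.01, 0.001, 0.01

    def distance_needed(n_ops):
        target = EPS / n_ops               # per-operation error budget
        d = 3
        while A * (P / P_TH) ** ((d + 1) / 2) > target:
            d += 2                         # code distances are usually odd
        return d

    for n_ops in (10**6, 10**9, 10**12, 10**15):
        d = distance_needed(n_ops)
        print(f"{n_ops:.0e} ops: distance {d}, ~{d*d} physical per logical qubit")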
This raises the age-old question: whose contribution is less valuable -- the person who posts the substanceless comment, or the person who replies to it with an equally substanceless comment pointing out that it's a substanceless comment? Ethicists may never come to a consensus (at least not until they can agree on who's worse: the person who ccs a huge number of people who should have been bcced, or the person who replies all to say they should have been bcced and to ask everyone not to reply all).