Quantum computing is a science in the same way all of mathematics is a science. In math, you create a set of axioms and build logical structures on top of them. In quantum computing, the existence of a qubit and its operations are the axioms. Trying to coin a meaningless phrase like “design science” and only mentioning math indirectly via Arabic numerals seems wrong.
This seems inaccurate. Engineering isn't a branch of mathematics: it relies on empirical results ("observations") and theory from physics (which also is empirical). Mathematics is not science. Engineering is science. Engineering (huge simplification, here) is essentially one experiment after another.
No, the parent is exactly correct. Research in quantum computing has been mostly mathematical, and it predated any practical development of useful physical implementations of quantum computing. Quantum computing as a field does not depend on empirical results any more than study of Turing machines depends on properties of physical tapes.
I thought neither mathematics nor engineering is strictly science. Engineering utilizes scientific knowledge to solve complex problems, but its purpose isn't to make scientific discoveries through hypothesis testing. It's interesting, though, that new artificial scientific disciplines (e.g., theory of how quantum computing works) occasionally spin out of engineering projects.
Agreed, engineering is about building stuff, not making discoveries about how the universe/existence works. Engineering often results in scientific discoveries but this is incidental.
Mathematics isn't a science; it's a separate thing entirely. Science is about trying to understand how the real world works: make a falsifiable hypothesis, then do a repeatable experiment that will either prove or disprove the hypothesis.
Computer science isn’t really a science either. It should be classified as a branch of mathematics.
Mathematics isn't a science because it's deductive: It has absolute truths, and ways to derive new truths from those truths, recursively. Science, on the other hand, is empirical and thus inductive, and thus has no access to truth (problem of induction) and can only improve models based on observation.
Science uses mathematics to work out the implications of precise prediction-generators (often called laws) and check them for logical consistency, but mathematics itself has nothing to say about any world outside of the one generated by its axioms.
Mathematics may be defined deductively, but practically speaking I don't believe it is always or even usually done that way. People came up with axioms for set theory after inventing set theory first.
There is an intuition to mathematics, but it is definitely deductive because proofs follow from definitions, as opposed to definitions being generated from specific proofs.
Definitions do sometimes end up being justified by their consequences though, e.g. 0^0 is just defined to be 1 because it results in nicer equations. For that matter, you can sometimes do mathematics with only rough definitions and then make it well-defined later; umbral calculus "worked" before anyone actually set it on a solid mathematical grounding:
(you can see in that Wikipedia article the umbral calculus was originally invented and used because it seemed to work, long before anyone found a set of definitions to justify it to modern standards)
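A concrete instance of a definition being justified by its consequences: taking 0^0 = 1 is exactly what lets the binomial theorem hold with no special case at x = 0.

```latex
(x+y)^n = \sum_{k=0}^{n} \binom{n}{k} x^k y^{n-k},
\qquad
\text{at } x = 0:\quad y^n = \binom{n}{0}\, 0^0 \, y^n .
```

The identity only comes out right if 0^0 = 1.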
The axiom of choice is accepted because it's useful and required for certain results to hold that mathematicians aren't willing to give up, etc. If ZFC were to be somehow found to be inconsistent it would probably be patched up by altering the axioms rather than wholesale tossing out all the theorems.
I guess what I mean is that mathematics uses deduction in proofs but that's really not the end (or the beginning) of the story.
The article uses Turing machines as an example of computer science so the author clearly has a definition of science that includes deductive mathematics.
Turing machines are part of the prediction-generating framework of CS, like how the mathematical models in the Standard Model are part of the prediction-generating framework of modern physics: They're both an idealized representation of something which we think models something useful in the physical world in a way that allows us to think more clearly about some aspects of that thing without getting caught up in distractions. As alex_smart says, the Church-Turing thesis connects the idealized Turing machine to modern computers in a way somewhat similar to how the Standard Model connects the U(1) group to the study of electromagnetism.
I don't quite agree. The essential piece that transforms the study of Turing machines from mathematics into science is the Church-Turing thesis. And the Church-Turing thesis is not deductive in any sense of the word.
I agree the Church-Turing thesis is why we care about Turing machines, but to say that the study of Turing machines is a science because of the Church-Turing thesis is like saying “it’s a science because it has applications.”
Anyway, bringing this back to QC. It is believed that algorithms exist that can be computed by a QC but cannot feasibly be run on a classical computer (Shor's factoring algorithm is the canonical example), so QC meets the ”deductive mathematical definition with applicable characteristics” bar as well.
The Church-Turing thesis is what makes a result about Turing machines a theory about the empirical world rather than mere facts about a simplistic mathematical model of computation. So, the point is more like - "it's a science because it is an empirical observation about the natural world".
No, computer science is still deductive even with the C-T thesis. CS is definitely not empirical. If I think of an algorithm, I can explain its properties mathematically without experimentation.
I am not sure what it is about my way of stating things that is causing such severe comprehension problems for you (repeatedly). I never said that Computer Science is not deductive. In fact, the reason we even have a question over Computer Science being a science or not is that most of the things computer scientists do in practice are in fact deductive results about mathematical models. To be called a science, a field of study does however need to have a connection to the empirical world - and for computer science, C-T thesis is that connection.
Anyway, let me be the one bringing the discussion back to quantum computing this time. I don't think there is anybody in the world who could argue that the study of entanglement - think Bell's theorem and the CHSH inequality - isn't science. In fact, the 2022 Nobel Prize in Physics was awarded to people who made foundational contributions in that field. I see quantum computing as a direct spiritual successor to that study.
I don't think I'm having trouble understanding you at all. Rather, your definition of science is inconsistent and thus not comprehensible. To understand why, let's assume that the C-T hypothesis was never hypothesized, but Turing still described his mathematical formulation of the Turing machine. Mathematicians then went on to use that definition to do mathematical research not tied to the empirical world. According to you, those mathematicians are not doing science. Now let's assume that someone formulated the C-T hypothesis. Suddenly, all that work that was previously "not science" is suddenly "science." So you can see how your definition, at best, can only classify things as "science" or "maybe science" because ties to the empirical world can always be discovered later.
A proper definition of science would be able to consider a study X and classify it as science or not science using only intrinsic properties of the study, not an extrinsic property like a known connection to the empirical world.
The premise of QC is that QM is essentially correct, to the n'th degree. We know that QM is "wrong" in the sense that it isn't the Grand Unified Theory of Everything, because it does not merge properly with Relativity, but we expect the Correspondence Principle to hold for it in the relevant ___domain.
Ignoring the substantial engineering challenges, if QC doesn't work, we would learn something very interesting about our world in how it breaks, because any break would represent a deviation between the predictions of QM and reality, something we desperately need to get to the next level of theory.
Contrary to what some people might think, physicists would be giddy over a clean, lab-reproducible, engineering-type break in QM. They are starved for data right now. To be able to push a button and reproduce it is an almost impossibly beautiful dream. Almost for that reason, I expect our problems in QC to remain firmly in the engineering ___domain, and that no matter how many qubits we manage to deploy they will obstinately perfectly conform to QM theory.
I'm completely unqualified to hold an opinion, but it seems so intuitive to me that quantum computing is theoretically sound but impossible in practice because error correction will scale exponentially in difficulty.
Experts are clearly more optimistic, especially Scott Aaronson who is often mentioned on HN, but I can't get past a couple of simplistic, non-mathematical objections.
One is that it seems already like quantum computing is on a different trajectory than classical computing. If it was an analogous engineering problem, why haven't we already gotten much farther?
The other is that it just seems like a matter of symmetry. Quantum computing, even if it's not all powerful, seems like a "cheat code" for reality, and an inability to practically exploit it seems to me like balancing it out. I believe in a vague heuristic that reality has no "thread" that can be pulled to unravel everything.
I guess the counter to these vague feelings is that you could've said the same stuff about nuclear fission/fusion and any technology based on quantum theory.
So I'm not that confident in my opinion, but I'm still seeking an understandable reason why one should be optimistic. Nothing I've read by experts translates their feelings to my understanding.
In the sort of situation that seems intuitive to me, I don't see why there would be any earthshattering violation of theory either; we'd just approach an asymptote forever.
P.S. I have a similar attitude towards practical fusion power for somewhat similar reasons, so that's evidence it's an attitude problem more than anything.
QC is already a practical computing system that is solving “practically unsolvable” problems. It's just not highly publicized because much of the work is relevant to state security. The field is a bit farther along than one would infer from public information and published papers.
If you think about it even a little, it is almost axiomatic to the intelligence/security world that a machine that could break much known encryption in human relevant timeframes would obviously be being developed under a shroud of secrecy at the bleeding edge. It is far too important to be able to manage the implications of any significant breakthroughs potentially years before public release.
QC is not magic and there are effective counter-strategies. But standards take time to implement, and the well-being of society demands that the impacts of QC be mitigated through control of the tech and the information around it. Fortunately, the tech required so far keeps access to the computing resource somewhat restricted, and the organizations that control it cooperate with governments.
For most people, the important thing to keep in mind is that everything now shielded by many forms of encryption will be trivially readable in the foreseeable future. Truly sensitive data stored long term should be moved to quantum-resistant algorithms, and old versions should be destroyed where possible (easier said than done in many cases).
>it is almost axiomatic to the intelligence/security world that a machine that could break much known encryption in human relevant timeframes would obviously be being developed under a shroud of secrecy at the bleeding edge
I'm unconvinced of this. Any government agency (say the NSA) is a tiny subset of the whole world, and can't be expected to outdo it.
Especially if they have some disadvantages like, say, not being able to hire people who've ever smoked pot.
>It is far too important to be able to manage the implications of any significant breakthroughs
But that's what keeping breakthrough tech secret would do - prevent managing the implications. Keeping it secret means not sharing it with allies. That means if someone else discovers the same thing or steals it via espionage, now your adversary has it and your allies don't.
The NSA does its work through the same companies that are known leaders in the field. If you are working on something with national security implications, the NSA or its counterpart in other countries will knock on your door and put your people and your organization under the burden of secrecy and cooperation as a condition of your continuing research and, in some cases, existence.
That is how the NSC/MIC works; it's mostly outsourced. The government per se does very little research, with some notable exceptions.
"I'm completely unqualified to hold an opinion, but it seems so intuitive to me that quantum computing is theoretically sound but impossible in practice because error correction will scale exponentially in difficulty."
They have some things that work the difficulty down below exponential, which seems reasonable, but it's still a pretty stiff poly factor (by engineering standards; it's actually pretty impressive mathematically, but right now even having to add a small constant factor of qubits would be a kick in the teeth from an engineering perspective, and of course it's worse than that). I'm fairly skeptical myself. However, it's the sort I consider "true skepticism": it isn't a certainty that it will fail, rhetorically disguised as uncertainty; I'm really not certain. If someone produces a working, useful QC device, then it is working and useful regardless of my skepticism, and I will celebrate them all the more for the fact that I thought it was really hard.
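To put a rough number on "below exponential but still a stiff factor", here's a back-of-envelope sketch using the widely quoted surface-code heuristic p_logical ≈ 0.1 * (p_physical / p_threshold)^((d+1)/2). The error rates, the 1% threshold, and the ~2d^2 physical-qubits-per-logical-qubit count are illustrative assumptions, not claims about any particular device:

```python
# Rough surface-code overhead, assuming the commonly quoted heuristic
#   p_logical ~ 0.1 * (p_physical / p_threshold) ** ((d + 1) / 2)
# Real numbers depend heavily on the architecture and error model.

P_PHYSICAL = 1e-3    # assumed physical error rate per operation
P_THRESHOLD = 1e-2   # assumed surface-code threshold

for d in (3, 7, 11, 15, 21, 25):                  # code distance (odd)
    p_logical = 0.1 * (P_PHYSICAL / P_THRESHOLD) ** ((d + 1) / 2)
    physical_qubits = 2 * d * d - 1               # d^2 data + (d^2 - 1) ancillas
    print(f"d={d:2d}: ~{physical_qubits:4d} physical qubits per logical qubit, "
          f"p_logical ~ {p_logical:.0e}")
```

At these assumed rates you need on the order of a thousand physical qubits per logical qubit to push logical error rates down toward ~1e-12: polynomial overhead on paper, but a serious multiplier in the lab.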
"Quantum computing, even if it's not all powerful, seems like a "cheat code" for reality, and an inability to practically exploit it seems to me like balancing it out"
IMHO, that's more the hype than the reality. The reality is that we don't have that many QC algorithms in general, and we have even fewer that are clearly advantageous over classical computing. If we had a perfect QC device with billions of qubits in hand right now and could mass-produce them cheaply, at the moment I think the evidence is that you'd be looking at something more like the GPU situation, where they get added on to conventional machines for certain problems, than at a complete revolution. And to be honest, a lot of people wouldn't. Graphics cards give you awesome gaming; a QC attachment will solve specific problems that we just don't have everywhere, and using QC to do something a conventional computer can already do is going to be as silly as treating your GPU as a CPU.
The primitives that QC offers us are really weird. It's completely different, but to get a sense of the difficulty, imagine trying to write computation directly in terms of Conway's Game of Life, because we for some reason have some sort of massive accelerator that can run it quadrillions of times faster than we can otherwise simulate it, and trying to get it to compute anything useful in "native" Life; e.g., if you do what we would really do in this sort of circumstance and lay a Turing Machine over it, you lose the "advantages" of Life in the process. QC is actually not quite that bad in my opinion, but I think it at least gives the flavor of the difficulties of QC. It also captures the fact that you can't do what we usually do in the discrete ___domain and just lay a conventional interpreter over whatever the problem is and work in terms of that interpreter, because the algorithm has to "stay quantum" for the QC to be of any use.
- finding and implementing in hardware an "embedding" that solves certain classes of problems efficiently, especially ones for which no "efficient" process is otherwise known,
- finding an elegant way of describing those classes of problems, and
- finding routes for identifying them.
I think Schwartz-Zippel [0] as applied to Polynomial Identity Testing is an instructive example where using randomness as a computational resource improves efficiency.
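For anyone unfamiliar, here is a minimal sketch of that idea in Python (the function and variable names are mine, for illustration): the Schwartz-Zippel lemma says a nonzero polynomial of total degree at most d over a field of size q evaluates to zero at a uniformly random point with probability at most d/q, so a handful of random evaluations distinguishes two polynomials far more cheaply than expanding them symbolically.

```python
import random

def probably_identical(p, q, n_vars, degree_bound, trials=20, prime=2**61 - 1):
    """Schwartz-Zippel identity test.

    If p and q differ as polynomials of total degree <= degree_bound, a single
    random evaluation over Z_prime misses the difference with probability at
    most degree_bound / prime; `trials` independent rounds shrink that further.
    """
    for _ in range(trials):
        point = [random.randrange(prime) for _ in range(n_vars)]
        if p(*point) % prime != q(*point) % prime:
            return False   # found a witness: definitely not the same polynomial
    return True            # identical with very high probability

# (x + y)^2 equals x^2 + 2xy + y^2 but not x^2 + y^2.
lhs = lambda x, y: (x + y) ** 2
print(probably_identical(lhs, lambda x, y: x*x + 2*x*y + y*y, n_vars=2, degree_bound=2))  # True
print(probably_identical(lhs, lambda x, y: x*x + y*y, n_vars=2, degree_bound=2))          # False
```

No deterministic algorithm with comparable efficiency is known for the general problem, which is why this is such a clean example of randomness buying you something.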
QC works by exploiting the statistical properties of an algorithm to infer something about the subject through quantum computation. Writing quantum algorithms is hard because you can rarely write your algorithm in a way that directly deals with the property you want information on. Rather, you need to write an algorithm that yields your desired information as a byproduct of some other computation.
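To make the "byproduct" point concrete, here is a minimal sketch of Deutsch's algorithm, simulated with plain numpy rather than a quantum device (the helper names are mine): a single oracle query reveals whether f is constant or balanced, i.e. the global property f(0) XOR f(1), while telling you nothing about f(0) or f(1) individually.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
I2 = np.eye(2)

def oracle(f):
    # U_f |x>|y> = |x>|y XOR f(x)>, basis ordered |00>, |01>, |10>, |11>
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.kron([1, 0], [0, 1])             # start in |0>|1>
    state = np.kron(H, H) @ state               # Hadamard both qubits
    state = oracle(f) @ state                   # one oracle query
    state = np.kron(H, I2) @ state              # Hadamard the query qubit
    prob_one = state[2] ** 2 + state[3] ** 2    # P(first qubit measures 1)
    return "balanced" if prob_one > 0.5 else "constant"

print(deutsch(lambda x: 0))   # constant f -> "constant"
print(deutsch(lambda x: x))   # balanced f -> "balanced"
```

The interference step (the final Hadamard) is what converts the relative phase picked up from f into a measurable bit; that phase is the "other computation" whose byproduct you actually read out.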