
Honestly, I don't understand the point you are making. If you can explain to me how the Deutsch–Jozsa algorithm works without making reference to the fact that a function evaluation happens on a qubit in superposition (and thus an expanded state space, as the simplest example), sure. I sincerely don't understand your points about classical simulation, or input/output in classical computers.
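For concreteness, here's a minimal state-vector sketch of Deutsch–Jozsa (my own illustration in Python/NumPy, not anything from this thread): the oracle is queried exactly once, on a register in uniform superposition, and the final Hadamard layer tells you globally whether f is constant or balanced.

    import numpy as np

    def hadamard_layer(n):
        """The n-qubit Hadamard transform H^(x)n as a dense matrix."""
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        Hn = np.array([[1.0]])
        for _ in range(n):
            Hn = np.kron(Hn, H)
        return Hn

    def deutsch_jozsa(f, n):
        """f maps {0,...,2^n - 1} -> {0,1}, promised constant or balanced."""
        # Start in |0...0>, put the register into uniform superposition.
        state = np.zeros(2**n); state[0] = 1.0
        state = hadamard_layer(n) @ state
        # Single oracle call: phase kickback flips the sign where f(x) = 1.
        phases = np.array([(-1)**f(x) for x in range(2**n)])
        state = phases * state
        # Final Hadamard layer; the probability of |0...0> is 1 iff f is constant.
        state = hadamard_layer(n) @ state
        return "constant" if abs(state[0])**2 > 0.5 else "balanced"

    print(deutsch_jozsa(lambda x: 0, 3))      # constant
    print(deutsch_jozsa(lambda x: x & 1, 3))  # balanced

The point the sketch makes explicit: the oracle multiplies all 2^n amplitudes at once, which is exactly the "evaluation on a superposition" step.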



As you say, N qubits can be thought of as a distribution over their possible 2^N values, but you can't ignore the starting distribution, which may be very far from the set of classical bits that you want to operate over (and therefore may take an exponential number of quantum operations to realize).

So one very simplified model of quantum computation is this: you can do parallel operations on a distribution over the 2^N possible values of your N classical bits, but you don't get to specify an arbitrary starting distribution; you can only start with relatively 'simple' distributions.

This restriction is what makes the claim 'you can compute over 2^N values simultaneously' at best very incomplete — yes, you can do that, as long as you're fine not being able to start from arbitrary starting values. But this is a big restriction!
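A toy illustration of the 'simple distributions' point (my own sketch in NumPy): a product state over N qubits is specified by only O(N) parameters, yet it lives in the same 2^N-dimensional space where an arbitrary distribution needs 2^N numbers to even write down.

    import numpy as np

    def product_state(thetas):
        """Build an N-qubit product state from one rotation angle per qubit."""
        state = np.array([1.0])
        for t in thetas:
            qubit = np.array([np.cos(t), np.sin(t)])  # cos|0> + sin|1>
            state = np.kron(state, qubit)
        return state

    N = 10
    thetas = np.random.uniform(0, np.pi / 2, N)  # only N input numbers
    psi = product_state(thetas)
    print(psi.shape)       # (1024,): a distribution over 2^N outcomes
    print(np.sum(psi**2))  # ~1.0: normalized
    # An *arbitrary* vector of this shape would need 2^N numbers to specify.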


What do you mean exactly by starting distribution? I'm definitely not saying you get the answer to every problem in a single evaluation step; no one ever said that.


Let's say you have N qubits in a superposition.

A superposition (a bit of an oversimplification, since there are also complex amplitudes and a normalization constraint involved) is in some sense a probability distribution over the possible (classical) values that you get when you measure those N qubits.

E.g. if you measured this N-qubit state many times, which of the 2^N possible classical values would you get? Would it be the uniform distribution? Biased towards particular classical bit patterns?

Note that there are 2^N possible classical bit patterns, so an arbitrary collection of these patterns takes O(2^N) bits to even write down, and correspondingly O(2^N) operations to define.
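To make "the distribution you sample when you measure" concrete, here's a small sketch of my own, assuming a plain state-vector representation: outcomes follow the Born rule, probability |amplitude|^2 per classical bit pattern.

    import numpy as np

    N = 3
    # Some N-qubit state: 2^N complex amplitudes, normalized.
    rng = np.random.default_rng(0)
    amps = rng.normal(size=2**N) + 1j * rng.normal(size=2**N)
    amps /= np.linalg.norm(amps)

    # Measuring all N qubits samples one classical bit pattern
    # with probability |amplitude|^2 (the Born rule).
    probs = np.abs(amps) ** 2
    shots = rng.choice(2**N, size=10, p=probs)
    print([format(s, f"0{N}b") for s in shots])  # e.g. ['101', '010', ...]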

> I'm definitely not saying you get the answer to every problem in a single evaluation step

Yes, but my point is that if your input is an arbitrary collection of classical N-bit values, then in general you will require an exponential number of quantum operations to load it into an N-qubit initial state, which means you get zero speedup.

Quantum computers only see a speedup on inputs that require a relatively small number of quantum operations to set up. 'Small' could mean polynomial. But it's true that the space of 'easy to set up' initial N-qubit states is much smaller than the space of all possible N-qubit states, which is why a quantum computer cannot simply be considered 'a magical computer that computes on 2^N bits at once' without considering how you get those bits into or out of the damn thing.


> Yes, but my point is that if your input is an arbitrary collection of classical N-bit values, then in general you will require an exponential number of quantum operations to load it into an N-qubit initial state, which means you get zero speedup

Can you provide a reference for this? My understanding is that you can efficiently set up qubits using a Hadamard transform.


Yes, but the Hadamard transform is essentially a linear operation.

The set of qubit states reachable from the starting state in a polynomial number of Hadamard transform operations is much smaller than the set of all possible qubit states.

This follows from an information-theoretic counting argument. There are 2^(2^N) possible 'collections' of classical N-bit patterns, so it takes 2^N bits of information to specify an arbitrary one. But a circuit built from polynomially many gates out of a fixed gate set can be described in polynomially many bits, so (in a very handwavy way) only a doubly-exponentially small fraction of the possible states is reachable with a polynomial number of quantum gate operations.
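Rough numbers behind that counting argument (a sketch of my own, just arithmetic):

    N = 5
    bit_patterns = 2 ** N                 # classical N-bit strings: 2^N
    subsets = 2 ** bit_patterns           # 'collections' of those strings: 2^(2^N)
    bits_to_name_a_subset = bit_patterns  # log2(2^(2^N)) = 2^N bits
    print(bit_patterns, subsets, bits_to_name_a_subset)  # 32 4294967296 32

Already at N = 5 there are ~4.3 billion collections; a short circuit description simply doesn't contain enough bits to pick out an arbitrary one.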


This is verbatim from 'Explorations in Quantum Computing' by Colin P. Williams: "The utility of the Hadamard gate derives from the fact that by applying, in parallel, a separate Hadamard gate to each of n qubits, each initially in the state |0⟩,

    H|0⟩ ⊗ H|0⟩ ⊗ ··· ⊗ H|0⟩ = (1/√(2^n)) Σ_{j=0}^{2^n − 1} |j⟩    (2.20)

we can create an n-qubit superposition containing 2^n component eigenstates. These eigenstates represent all the possible bit strings one can write using n bits. The importance of this capability is often overlooked. But, in reality, it is one of the most important tricks of quantum computing as it gives us the ability to load exponentially many indices into a quantum computer using only polynomially many operations. Had Nature been unkind, and had we had to enter the different bit-strings individually, as we do in classical computing, then quantum computing would have had far less potential for breakthroughs in computational complexity."

I'm just talking about loading a quantum register; there are other issues with gate times and decoherence.
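Equation (2.20) is easy to check numerically. A minimal sketch (mine, in NumPy, not from the book): n Hadamard gates on |0...0⟩ give equal amplitude 1/√(2^n) on all 2^n basis states, i.e. exponentially many 'loaded' indices from only n operations.

    import numpy as np

    n = 4
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    # Apply one Hadamard per qubit: H|0> (x) H|0> (x) ... (x) H|0>
    state = np.array([1.0])
    for _ in range(n):
        state = np.kron(state, H @ np.array([1.0, 0.0]))

    # All 2^n amplitudes equal 1/sqrt(2^n): a uniform superposition over
    # every n-bit string, prepared with only n gates.
    print(np.allclose(state, 1 / np.sqrt(2**n)))  # True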


If you have an N-bit classical computer in a black box, you get at most 2^N possible outputs. If you have an N-qubit quantum computer in a black box, you can also get at most 2^N possible outputs, not 2^(2^N). We write the N-qubit register as a vector of 2^N complex amplitudes to make the math work, but measurement means we can still only distinguish between 2^N possible values.



