> We argue that many popular "quantum paradoxes" stem from a confusion between mathematical formalism and physics [...] most "conceptual puzzles" of QM are not much different from the well-known paradoxes from probability theory
This is really, really true. Particularly for any experiment with "delayed choice" in the name, where pop-science explanations follow correlations backwards instead of forwards, confuse correlation for causation, and start talking about time travel (sigh).
> e.g. quantum tunneling is the same as basically measure 0 events.
It is not. Quantum tunneling is just an event with low probability, and when you notice that quantum tunneling doesn't happen (on a macro scale), you're just noticing that events with sufficiently low probability don't happen. The probabilistic analogy is flipping a billion coins and having them all come up heads. Flipping a billion coins and having them all come up heads does not have measure zero. It has a positive measure.
Contrast this to measure zero events. Flipping infinitely many coins and having them all come up heads has measure zero.
You need measure theory to talk usefully about measure zero events. You don't need measure theory to talk about quantum tunneling. Just ordinary probability.
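To make the contrast concrete, here's a minimal sketch (my own illustration, not from the comment above), assuming fair coins:

```python
# The probability of a billion heads is astronomically small but strictly positive,
# whereas "infinitely many heads" is the limit of (1/2)^n as n -> infinity, which is 0.
from math import log10

n = 10**9
log10_p = n * log10(0.5)                       # log10 of (1/2)^n
print(f"P(1e9 heads) is about 10^{log10_p:.0f}")   # ~10^-301029996: tiny, but > 0

# As n grows without bound, (1/2)^n -> 0: the all-heads event on infinitely many
# flips has measure zero, which is a qualitatively different statement.
```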
> The probabilistic analogy is flipping a billion coins and having them all come up heads. Flipping a billion coins and having them all come up heads does not have measure zero. It has a positive measure.
Wut? 1. In each instance it's a billion conditionally independent trials, so I don't see the difference. 2. Any realization of any continuous random variable has measure zero.
>Contrast this to measure zero events. Flipping infinitely many coins and having them all come up heads has measure zero.
You're playing fast and loose with the difference between a counting measure (no measure-zero events) and a continuous/discrete measure.
Anyway, my point was exactly that quantum tunneling is counterintuitive for the same reason that measure-zero events are counterintuitive.
Agreed... although a probabilistic energy distribution (e.g. thermal) could produce an electron/hole current that is exponential in voltage, that would be unlike tunneling across a potential barrier that is too high to cross classically: thermal current would not depend exponentially on the separation gap.
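For reference, a rough sketch of the two exponentials being contrasted, using the standard WKB tunneling estimate and a simple thermal-activation form (my own addition, not from the comment above):

```latex
% Tunneling through a barrier of height V_0 and width d (WKB estimate):
% transmission falls off exponentially in the gap width d.
P_{\text{tunnel}} \;\sim\; \exp\!\Big(-\tfrac{2d}{\hbar}\sqrt{2m\,(V_0 - E)}\Big)
% Thermally activated current over a barrier \Phi at temperature T:
% exponential in the barrier height and temperature, with no dependence on d.
I_{\text{thermal}} \;\sim\; \exp\!\Big(-\tfrac{\Phi}{k_B T}\Big)
```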
Well, for example, the no cloning theorem has a classical analogy. I prepare a coin so that it is heads with probability p. I give you the coin. Your job is to prepare two coins so that they are independently heads with probability p. You aren't told p, you only get the coin to work with. We are going to repeat this many times, and with various values of p, and someone will test whether the coins you are producing follow the right distribution. It's obviously not possible for you to do particularly well at this task. See [1]
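A minimal simulation sketch of why the obvious strategy fails (my own illustration; the `trial` function and the copy-the-coin strategy are hypothetical, not taken from [1]): copying the observed outcome onto both output coins reproduces the marginal p, but not the required independence.

```python
import random

def trial(p):
    observed = random.random() < p      # the single coin you are handed
    return observed, observed           # naive strategy: output two copies of it

p = 0.3
n = 200_000
results = [trial(p) for _ in range(n)]
both_heads = sum(a and b for a, b in results) / n
print(f"empirical P(HH) = {both_heads:.3f}, target p^2 = {p*p:.3f}, p = {p:.3f}")
# empirical P(HH) comes out near 0.3, not 0.09: the two outputs are perfectly
# correlated rather than independent, so the test will catch you.
```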
I'm not aware of a probability analogy for tunneling.
Personally I think the concept of "particle" needs to go away completely.
I think really what "particle" means is that "all measurements are quantized". But the probability of any given measurement happening flows around as a wave.
For example, "there is a quantity of energy `e` at ___location `x`" is an observation you could make. The answer to that question has a discrete yes/no answer. The probability of the answer being "yes" flows around like a wave between all the different possible `x` as time progresses.
>"I think really what "particle" means is that "all measurements are quantized"."
I'm not sure about that; classically the only quantized measurement is particle number. In QM only some eigenvalues are quantized, and even then only in some problems. (The energy of a free particle isn't quantized, nor is the position of a particle in a box.)
No. It goes through both. That's the weird thing the double-slit experiment tells us, especially because it remains true even for "one photon" (i.e. one quantized energy packet arriving somewhere on the detector screen).
Wasn't it that particles go through both slits if you only measure at the screen that shows the pattern, but through just one if you measure at the screen with the two slits?
Particles are a really useful fiction. I'm having a tough time thinking of how to teach and explain chemistry without the concept of valence electrons. Do you have any ideas?
(Related tangent: a super important but occasionally missed nuance of quantum mechanics is that the waves are not waves over 3D real space. They are waves over configuration space, with way more than 3 dimensions.)
One wouldn't teach sorting algorithms or data structures by explaining how the individual bits are flipping. It is totally appropriate to use abstractions to teach concepts.
The electromagnetic field and other fields exist over 3D space. But the probability waves that represent them are in infinite dimensional space, because each point in 3D can have multiple states at once (and these states are correlated/entangled over space).
The paper is way more nuanced than the abstract suggests. There are huge numbers of papers on the arXiv which try to say, "quantum mechanics is normal and boring, these fools are trying to dress it up as something special because they don't understand probabilities". This paper is more interesting. Here's a quote:
> In fact, QM is so weird and surprising that even the very esoteric interpretations, such as the many universes one, can not capture the subtlety normally associated with quantum paradoxes. For example, theorems such as the Kochen-Specter theorem [23] (essentially, a generalization of EPR with three particles) imply that during its quantum evolution no "classical" variables can fully describe the probabilities of a quantum system.
The following is of course a comment on the quote, not on your comment, but no, Kochen-Specker is not a generalization of EPR to three particles. It's easy to come up with a generalization of EPR to three particles: just exhibit a three-party entangled state (e.g. the GHZ state). The K-S theorem exhibits something a lot more subtle than entanglement, known as contextuality. There are examples of systems which have the K-S property but don't have any entanglement.
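For concreteness (my own addition, using the standard definition), the GHZ state referred to above is the three-qubit state:

```latex
% The three-qubit GHZ state: maximally entangled across three parties,
% the natural "three-particle EPR" example.
|\mathrm{GHZ}\rangle \;=\; \tfrac{1}{\sqrt{2}}\,\big(|000\rangle + |111\rangle\big)
```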
I also found the paper to have more depth than I expected. I was going to remark that it gives the EPR paradox short shrift, but I realised that it is not arguing against the actual paradox, but rather the paradox as it is most commonly presented/understood and "mystically" explained.
If anyone is interested, the original EPR paper is readable and profound in that timeless way that characterises much of what Einstein touched: http://www.drchinese.com/David/EPR.pdf
> For example, theorems such as the Kochen-Specter theorem [23] (essentially, a generalization of EPR with three particles) imply that during its quantum evolution no "classical" variables can fully describe the probabilities of a quantum system
[...] while preserving these other properties that we assume must hold, but don't actually know for certain hold.
The Monty Hall comparison is about shifting human knowledge about an underlying system with a definite state. What, then, is the source of the uncertainty in QM in this author's view? It is not our knowledge about the "true state" of things, because of course there is no single true state.
The same could be said about any appeal to "perturbations" from the measurements we make: in order for a "push" from our measurement instruments to be the source of uncertainty, we must explain it in terms of an unknown disturbance making a precise value imprecise (otherwise, the use of "our instruments" as an explanation for "uncertainty" can't be the final tortoise upon which the world stands). Because of the impossibility of a "precise physical value", this explanation doesn't seem satisfactory to me.
It seems the article (like many interpretations) does no better than to say "don't ask" about what lies underneath those probabilities. At best it says "our knowledge is uncertain" but then stops before it says precisely what we are uncertain about.
His points about analogues with classical probability are well taken, but I don't think they suffice to explain or dismiss quantum "weirdness", though it's quite possible I missed the point he was trying to make.
In particular, the article sweeps many-worlds away too quickly, IMO. Many worlds is what we get if we assume that observers are ensembles of particles which can exist in a state of superposition, just like any other ensemble of particles in the universe. "Wavefunction collapse" is what such an observer would expect to see upon becoming entangled with another system, without any additional a priori assumptions. Fundamentally it means that, after measurement, we cannot treat the system and the environment (and the observer therein) as separate-- another way of stating the phenomenon of decoherence.
In other words, I think it is not unparsimonious to assume that "multiple universes" exist, because we already know that ensembles of particles exist in a multitude of states. We simply note that the universe, too, is an ensemble of particles and draw a straightforward conclusion from well-established facts. To me, it seems almost more ontologically bold to reject this.
It is also not unparsimonious to assume that "multiple universes" exist, even after measurement, because the delayed-choice quantum eraser demonstrates that wavefunctions can be "un-collapsed"; that is to say, recombined with the other superpositions that Copenhagen supposes "disappear" after measurement. In fact it seems to me a fairly clear refutation of (common presentations of) Copenhagen, but perhaps the author is on my side about that one.
Either way, this article seems to be a quite thoughtful overview, and I quite agree there's a lot of confusion and woo both within and outside the field.
> It is not our knowledge about the "true state" of things, because of course there is no single true state
That's a biased statement. There is more than one interpretation of QM in which your statement is completely incorrect. The math isn't telling you there is no state, that's a property you've assigned to the math given an interpretation. The paper's analogy to the Monty Hall problem would make perfect sense given another interpretation.
The Monty Hall reference just seems confused to me. The author wrote:
> In the Monty Hall problem, the probability suddenly switches from 1/3 to 2/3 when the player (“observer”?) chooses to switch boxes. [..] For example, assuming x = 1 means a win and x = 0 a loss, the expectation value would change from 1/3 to 2/3...
Usually I think of the Monty Hall problem as a discrepancy between 2/3 and 1/2: the probability that switching boxes will lead to a win is 2/3, but most people's intuition strongly expects it to be 1/2, because the boxes were originally equally likely to contain a prize.
However, I can think of one way to treat it as a change from 1/3 to 2/3. For any given box (call it 'our' box), the Bayesian probability (from the player's perspective) that it contained a prize was originally 1/3. Assuming the player chooses a different box, then if and when the host rules out a third box, the Bayesian probability that 'our' box contains a prize rises from 1/3 to 2/3. (On the other hand, if the host rules out 'our' box, then the probability falls from 1/3 to 0.) Importantly, though, that change occurs at the time the host rules out a box: it gets to the heart of the 'paradox' because it reflects the change in expectations due to new information provided (in a counterintuitive way) by the host.
But that's not what the paper is referencing. It talks about a change occurring later, once the player chooses to switch boxes. It's true that the probability of winning changes from 1/3 to 2/3 at that time, but that has nothing to do with what makes the Monty Hall problem special! The same thing would happen even in a much simpler scenario where the underlying probabilities are fixed: say you're given the choice between opening only box #1, or both boxes #2 and #3, and then the host (always) gives you the opportunity to change your mind, without revealing any additional information. Very boring game. Maybe you get a bonus reward if you choose box #1 and win, or else there would be no reason to ever choose it, but that doesn't matter. The point is, you get to switch from a 1/3 bet to a 2/3 bet, and if you do, the expectation value of winning changes from 1/3 to 2/3. Nothing paradoxical there at all.
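For what it's worth, a quick Monte Carlo sketch (my own illustration, not from the paper) of the standard game confirms the 1/3-vs-2/3 numbers being discussed:

```python
import random

def play(switch):
    prize = random.randrange(3)
    pick = random.randrange(3)
    # Host opens a door that is neither the player's pick nor the prize.
    opened = next(d for d in range(3) if d != pick and d != prize)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == prize

n = 100_000
print("P(win | switch) ~", sum(play(True) for _ in range(n)) / n)    # ~0.667
print("P(win | stay)   ~", sum(play(False) for _ in range(n)) / n)   # ~0.333
```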
The author does correctly describe the Monty Hall paradox in the appendix, but that doesn't explain the relevance to quantum mechanics.
The other invocation of a classical paradox, the boy or girl paradox, is just wrong:
> The probability of “both children being boys if one is a boy” is 2/3 if the children are completely indistinguishable except by gender or 1/2 if the children are somehow distinguishable. For normal life, such a distinction is silly, which is one of the reasons this problem was so confusing, but in quantum mechanics indistinguishable particles are a real possibility, …
If a family is selected randomly from all two-child families with at least one boy, the chance that both children are boys is 2/3. Or in other words, the conditional probability of "both children being boys" given that "one is a boy" is 2/3.
If you know a family has two children, and you see a boy in their yard, the Bayesian probability that both children are boys is 1/2. This is true regardless of whether you have any way of "distinguishing" them. It makes no difference if, say, all boys in the world look exactly the same as each other and the same for girls. For the purpose of mapping out the probabilities, it's fine to 'distinguish' between "the child I saw" and "the child I didn't see".
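A quick simulation sketch (my own illustration, assuming each child's gender is an independent fair coin flip) of the two sampling procedures described above:

```python
import random

n = 200_000
families = [(random.random() < 0.5, random.random() < 0.5) for _ in range(n)]  # True = boy

# Case 1: select among families with at least one boy.
with_boy = [f for f in families if f[0] or f[1]]
print("P(both boys | family has at least one boy) ~",
      sum(a and b for a, b in with_boy) / len(with_boy))            # ~2/3

# Case 2: you see one child, chosen at random, and that child is a boy.
other_children = []
for a, b in families:
    seen, other = (a, b) if random.random() < 0.5 else (b, a)
    if seen:                                  # the child you happened to see is a boy
        other_children.append(other)
print("P(both boys | the child you saw is a boy) ~",
      sum(other_children) / len(other_children))                    # ~1/2
```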
If you walk past their yard frequently, but at such a distance that you can only tell if you see a boy or a girl, without being able to tell whether you saw the same child on different occasions… and on any given occasion, you only ever see either one boy or nobody… and you know that both children spend time in the yard… then the Bayesian probability that both children are boys approaches 1 the more observations you make. At least here it matters whether you can distinguish the children; if you could, then the probability would go to 1 or 0 as soon as you knew you'd observed both of them, and remain 1/2 until then (assuming that 'which child is outside at this time' is uncorrelated with gender). But given that you can't distinguish them, the probability doesn't depend on whether the children are distinguishable in principle (e.g. by getting closer to the yard and seeing their faces), or whether they're not, because e.g. all children of a given gender look identical.
I'm not saying the boy or girl paradox isn't a paradox (i.e. a nonintuitive result, not a literal contradiction), or that it doesn't generally have to do with some ontological concept of "distinguishability". It may well be a good analogy to some aspect of quantum mechanics, similar to the other ways the author explains quantum mechanics in terms of analogies with classical probabilities (while being careful to say they're analogies, not exact interpretations). But I think it's fundamentally incorrect to say the paradox is confusing because people aren't indistinguishable in real life, when that really doesn't affect things much.