User:MRFS/Belfast


The Belfast Equations

What's the point of this page? I'm reading an intriguing book, Quantum Computing for Everyone by Chris Bernhardt, which states the following on page 84.

Most people consider that Einstein has been proved wrong, but that his theory made sense. Bell, in particular, believed that the classical theory was the better of the two theories up until he saw the results of the experiments, saying, "This is so rational that I think that when Einstein saw that, and the others refused to see it, he was the rational man. The other people, although history has justified them, were burying their heads in the sand. So for me it is a pity that Einstein's idea doesn't work. The reasonable thing just doesn't work."
I am in total agreement with Bell. When you first meet these ideas, it seems to me that Einstein's view is the natural one to take. I am surprised that Bohr was so convinced that it was wrong. Bell's result, often called Bell's theorem, resulted in his being nominated for the Nobel Prize in Physics. Many people think that if he hadn't died of a stroke at the relatively young age of sixty-one he would have received it. Interestingly, there is a street in Belfast named after Bell's theorem; this might be the only theorem that you can enter into Google Maps and get a location.
We have to abandon the standard assumption of local reality. When particles are entangled, but perhaps far apart, we should not think of spin as a local property associated with each of the particles separately; it is a global property that has to be considered in terms of the pair of particles.

So the scene is now set. But here's an idea that is easy to explain and suggests that local realism maybe isn't a dead duck after all.

Think of quantum entanglement/collision/interaction (call it what you like) as something similar to a car crash. There are two main types: either a glancing blow, where neither vehicle radically alters its direction of travel, or a head-on smash, which is more violent and significantly alters the courses of the vehicles. Suppose the directions of travel prior to the crash are represented by the qubits q1 and q2. The direction resulting from a glancing blow is given by the equation q' = q1q2, and that resulting from a head-on by q' = –q1q2. Combining them gives q' = ± q1q2, which I call the Belfast Equations after Bell's native city. In each case the two resultant qubits are direct opposites of one another (the second particle emerges along –q'), but what's also significant is that the paths resulting from a head-on collision are at right angles to those resulting from a glancing blow. And here is where indeterminism enters the picture. If two qubits are on a collision course we cannot predict whether it's going to be a head-on or a glancing blow. If two streams of identical qubits smash into one another there will be a mixture of collisions, and to model this accurately we need to know the ratio of head-ons to glancing blows.
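To make this concrete, here's a minimal simulation sketch in Python. The encoding of directions as unit complex numbers and the parameter p_head_on for the head-on fraction are my own illustrative choices, not anything established.

 import cmath, random

 def collide(q1, q2, p_head_on=0.5):
     """One Belfast-equation collision. The outgoing pair is q' = +/- q1*q2,
     the two results being direct opposites; whether the sign is minus
     (head-on) or plus (glancing blow) is unpredictable, with head-on
     fraction p_head_on (an assumed free parameter)."""
     sign = -1 if random.random() < p_head_on else 1
     out = sign * q1 * q2
     return out, -out

 # Two streams of identical qubits encoded as unit complex numbers.
 q1 = cmath.exp(1j * 0.0)            # stream 1 travelling at angle 0
 q2 = cmath.exp(1j * cmath.pi / 3)   # stream 2 travelling at 60 degrees

 # A mixture of collisions; the head-on/glancing ratio is the unknown
 # we would need in order to model the streams accurately.
 for a, b in (collide(q1, q2) for _ in range(5)):
     print(f"{cmath.phase(a):+.3f}  {cmath.phase(b):+.3f}")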

A key derivation from Bell's theorem, called the CHSH Inequality, concerns a vital statistic (the Bell Signal) which is denoted by S. Its absolute value lies between 0 and 2√2 and can be easily calculated whenever an experiment is performed. In many experiments the observed value of S has exceeded 2, and this has caused consternation because the theorem says any |S| > 2 is incompatible with local realism.
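For concreteness, here's a small Python sketch of how S is computed from four runs of paired ±1 outcomes; the outcome arrays are made-up illustrative data, not results from any actual experiment.

 # Sketch: computing the Bell Signal S from four runs of paired
 # +/-1 outcomes. The arrays below are made-up illustrative data.

 def correlator(xs, ys):
     """Mean product of paired +/-1 outcomes: agreements minus
     disagreements, divided by the number of pairs."""
     return sum(x * y for x, y in zip(xs, ys)) / len(xs)

 # One illustrative run for each of the four setting combinations.
 E1 = correlator([1, 1, -1, 1], [1, -1, -1, 1])    # A with B
 E2 = correlator([1, -1, 1, -1], [-1, -1, 1, 1])   # A with B'
 E3 = correlator([1, 1, 1, -1], [1, 1, -1, -1])    # A' with B
 E4 = correlator([-1, 1, -1, 1], [-1, 1, -1, 1])   # A' with B'

 S = E1 - E2 + E3 + E4
 print(S)  # local realism requires |S| <= 2; quantum theory allows up to 2*sqrt(2)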

Corrections

When I started this page I had a computer simulation that gave S > 2.2, but sadly there was a typo in the code and S is now back below 2. However, the angles typically chosen to give S > 2 are 0°, 45°, 22.5° and 67.5°, whereas Belfast predicts two main streams at right angles, for which these angles are hardly ideal. My present suspicion is that the detectors are set up to capture qubits within a fairly narrow cone, in which case a lot of qubits will be missed. But I'm not sure that changes anything. Are poor detectors considered to be one of the loopholes? CB's book uses 3 detectors at angles 0°, 120° and 240°, but I've recalculated his figure of 5/9 for The Classical Answer and it's clearly wrong. The correct value is 1/2, so in this case it's exactly the same as his Answer of Quantum Mechanics.

The Answer?

Only speculation, but I'll try to set up a few tests. The answer is blindingly simple. The theory claims that if we measure a stream of 60° qubits we will obtain N/S 75% of the time and S/N the other 25%. In other words our detector will misdiagnose the qubit 25% of the time. This is surely utter nonsense! It might have been reasonable to assume this many years ago when apparatus was fairly primitive, but this is 2020 and we would certainly not expect our detector to perform so poorly. An ideal detector on a stream of 60° qubits should give N/S 100% of the time, and any real detector worth its salt should surely be able to achieve a misdiagnosis rate of under 1%. The probabilities are quoted on page 50 of CB's book with no justification whatsoever; it's just nice to be able to assume they are cos²(θ/2) and move on. What I plan to do is to redefine the probabilities so that all qubits in the two right-hand quadrants are deemed to be N/S with probability 1 and those in the left-hand quadrants are N/S with probability 0. We can then rerun the Bell test simulations, and I shall be quite surprised if I can't produce an S greater than 2. If so then there may be plenty of mileage left in local realism after all.
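Here's a minimal sketch of the planned rerun, assuming each qubit carries a single hidden direction and a detector at angle α reports N whenever the qubit lies within 90° of α (the right-hand-quadrants rule above); the pairing convention and function names are just my illustration.

 import random

 def detect(qubit_angle, detector_angle):
     """Proposed deterministic detector: report N (+1) when the qubit
     direction lies within 90 degrees of the detector axis, else S (-1)."""
     diff = (qubit_angle - detector_angle) % 360.0
     return 1 if diff < 90.0 or diff > 270.0 else -1

 def simulate_S(alice_angles, bob_angles, trials=100_000):
     """Bell test simulation: each pair shares one hidden direction;
     the partner qubit is assumed to point the opposite way."""
     sums = [[0, 0], [0, 0]]
     counts = [[0, 0], [0, 0]]
     for _ in range(trials):
         hidden = random.uniform(0.0, 360.0)
         i = random.randrange(2)          # Alice's setting choice
         j = random.randrange(2)          # Bob's setting choice
         a = detect(hidden, alice_angles[i])
         b = detect((hidden + 180.0) % 360.0, bob_angles[j])
         sums[i][j] += a * b
         counts[i][j] += 1
     E = [[sums[i][j] / counts[i][j] for j in range(2)] for i in range(2)]
     return E[0][0] - E[0][1] + E[1][0] + E[1][1]   # E1 - E2 + E3 + E4

 # Standard angles; swap in 0, 90 / 135, 225 to test the setup below.
 print(simulate_S([0.0, 45.0], [22.5, 67.5]))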

Prediction

Here is the calculation for the qubit 1, with Alice's axes at 0° and 90° and Bob's axes at 135° and 225°. This seems to be a fairly standard configuration, and allegedly the one that comes closest to the Tsirelson bound 2√2. The measured probabilities along the four axes are respectively 1, 1/2, (2–√2)/4 and (2–√2)/4. Thus E1 = E2 = (2–√2)/4 and E3 = E4 = –√2/4, which gives |S| = √2/2, nowhere near the claimed bound or even 2. So what are these people playing at? Maybe it's more meaningful to express these figures as percentages. Assuming 4 binary strings of length 100, the measured occurrences are 100, 50, 15 and 15. So 100E1 = 100E2 = –70 and 100E3 = 100E4 = 35, giving |S| = 0.71 again.
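Here's a quick check of part of the arithmetic, using the textbook probability cos²(θ/2) for a 0° qubit read along an axis at θ and the conversion E = 2p – 1; which probability feeds which E follows my reading of the figures above.

 import math

 def p_N(axis_deg, qubit_deg=0.0):
     """Textbook probability that a qubit at qubit_deg reads N along a
     detector axis at axis_deg: cos^2 of half the angle between them."""
     theta = math.radians(axis_deg - qubit_deg)
     return math.cos(theta / 2.0) ** 2

 # The four axes in the text: Alice 0 and 90, Bob 135 and 225.
 probs = [p_N(a) for a in (0.0, 90.0, 135.0, 225.0)]
 print([round(p, 4) for p in probs])        # 1.0, 0.5, 0.1464, 0.1464

 # Converting an agreement probability p into a correlation E = 2p - 1;
 # e.g. p = 0.15 gives 100E = -70, matching the percentage figures above.
 print([round(2 * p - 1, 4) for p in probs])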

Idea

The aim is to show that the scope of the Bell test is actually wider than claimed: namely, that for 4 random binary sequences of the same length the Bell Signal S never exceeds 2. Should this be true it would overturn the conclusions of the many experiments which claim to have produced an S > 2 and gone on to argue that this undermines local realism. In my computer simulations I've never seen an S > 2, though my sequences so far have all been 8 bits or fewer. Nevertheless I've a hunch that no sequence has an S > 2, and consequently that the basis for QT is badly flawed. Here's my proof. Of course it needs to be carefully checked!!
The first point to note is that, given 4 such sequences, there's a toggle process that converts every bit of the A sequence to a 1 without altering S. If the ith bit of A is 0 then toggle the ith bits of A, A', B and B'. Since every comparison is between one of A, A' and one of B, B', toggling all four ith bits has no effect on the numbers of agreements and disagreements, so it doesn't change any of the E's and therefore S stays the same. So in seeking a maximum S we may assume that the A sequence consists entirely of 1's. Let x(U) denote the number of 1's in the sequence U and set n = x(A). Clearly nE1 = 2x(B)–n and nE2 = 2x(B')–n.
Second, changing the order of the bit positions doesn't affect S. We reorder so that all x(A') of the 1's in A' are positioned on the left-hand side, the x(A'∩B) 1's of A'∩B are also positioned on the left, and the remaining 1's in B are positioned on the right.
To find E3 we compare A' and B and subtract the disagreements x(B)+x(A')-2x(A'∩B) from the agreements 2x(A'∩B)+n-x(B)-x(A') to give nE3=4x(A'∩B)+n-2x(B)-2x(A').
A similar process comparing A' and B' yields nE4 = 4x(A'∩B')+n–2x(B')–2x(A'), and putting all the Ei together in Bell's formula S = E1–E2+E3+E4 gives nS = 4x(A'∩B)+4x(A'∩B')–4x(B')–4x(A')+2n ≤ 2n, since x(A'∩B) ≤ x(A') and x(A'∩B') ≤ x(B'). The result follows.
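Since the proof needs to be carefully checked, here's a small brute-force sketch that enumerates every 4-tuple of bitstrings up to length 4 and confirms that |S| never exceeds 2; the function names are mine.

 from itertools import product

 def S(A, Ap, B, Bp):
     """Bell Signal S = E1 - E2 + E3 + E4, where each E is
     (agreements - disagreements) / n for the corresponding pair."""
     n = len(A)
     def E(X, Y):
         agree = sum(1 for x, y in zip(X, Y) if x == y)
         return (2 * agree - n) / n
     return E(A, B) - E(A, Bp) + E(Ap, B) + E(Ap, Bp)

 max_S = 0.0
 for n in range(1, 5):   # exhaust all 4-tuples of length-n bitstrings
     for A, Ap, B, Bp in product(product((0, 1), repeat=n), repeat=4):
         max_S = max(max_S, abs(S(A, Ap, B, Bp)))
 print(max_S)   # never exceeds 2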


I now have a much better proof that shows conclusively that |S| ≤ 2.
Theorem. Let A, A', B, B' be four bitstreams of equal length n, where every entry is either 0 or 1. Use A(i), A'(i), B(i), B'(i) to denote the ith entry in each stream. Now construct four new streams F1, F2, F3, F4 of length n as follows:
F1(i) = 1 if A(i) = B(i); otherwise F1(i) = –1.
F2(i) = 1 if A(i) ≠ B'(i); otherwise F2(i) = –1.
F3(i) = 1 if A'(i) = B(i); otherwise F3(i) = –1.
F4(i) = 1 if A'(i) = B'(i); otherwise F4(i) = –1.
Then for each i there are 16 possible combinations for A(i), A'(i), B(i), B'(i). Setting F(i) = F1(i) + F2(i) + F3(i) + F4(i) for 1 ≤ i ≤ n, each F(i) has only 2 possible values, namely 2 and –2. (Encoding each 0 entry as –1, so that a, a', b, b' = ±1, we have F(i) = ab – ab' + a'b + a'b' = a(b – b') + a'(b + b'); one bracket is always 0 and the other is ±2.) Therefore |∑F(i)| ≤ 2n. Let S be the mean of the F(i). Then clearly |S| ≤ 2.
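And here's a tiny sketch that verifies the 16 cases mechanically:

 from itertools import product

 # Check all 16 combinations of A(i), A'(i), B(i), B'(i):
 # F(i) = F1 + F2 + F3 + F4 should always be +2 or -2.
 for a, ap, b, bp in product((0, 1), repeat=4):
     F1 = 1 if a == b else -1
     F2 = 1 if a != bp else -1
     F3 = 1 if ap == b else -1
     F4 = 1 if ap == bp else -1
     F = F1 + F2 + F3 + F4
     assert F in (2, -2), (a, ap, b, bp, F)
 print("All 16 cases give F(i) = +/-2")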

Now compare this scenario with the one considered by Bell in 1964 (strictly speaking, the CHSH refinement of his argument). Our four bitstreams are the same as his. He performs 4 comparisons, namely A with B, A with B', A' with B, and A' with B', counting the agreements and disagreements in each case. This leads to 4 numbers E1, E2, E3, E4, calculated by subtracting the disagreements from the agreements and dividing the result by n. Finally he sets S = E1–E2+E3+E4 and concludes that |S| ≤ 2.

Evidently there's a close relationship between Bell's E's and our F's. In fact ∑F1(i) = nE1, ∑F2(i) = –nE2, ∑F3(i) = nE3 and ∑F4(i) = nE4. This demonstrates that our S = (∑F1(i) + ∑F2(i) + ∑F3(i) + ∑F4(i))/n = E1–E2+E3+E4 is exactly the same as Bell's. And both of us have arrived at exactly the same conclusion, namely |S| ≤ 2, so we have merely verified something that Bell proved over 50 years ago. What's the big deal?
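As a final sanity check, here's a short sketch computing S both ways, via the E's and via the F's, on random bitstrings; it confirms the two agree and that |S| ≤ 2 in every trial.

 import random

 def bell_S_via_E(A, Ap, B, Bp):
     """Bell's route: E = (agreements - disagreements)/n per pair,
     then S = E1 - E2 + E3 + E4."""
     n = len(A)
     E = lambda X, Y: (2 * sum(x == y for x, y in zip(X, Y)) - n) / n
     return E(A, B) - E(A, Bp) + E(Ap, B) + E(Ap, Bp)

 def bell_S_via_F(A, Ap, B, Bp):
     """Our route: F(i) = F1 + F2 + F3 + F4 per position, S = mean F(i)."""
     def F(i):
         f1 = 1 if A[i] == B[i] else -1
         f2 = 1 if A[i] != Bp[i] else -1
         f3 = 1 if Ap[i] == B[i] else -1
         f4 = 1 if Ap[i] == Bp[i] else -1
         return f1 + f2 + f3 + f4
     return sum(F(i) for i in range(len(A))) / len(A)

 for _ in range(1000):
     n = random.randint(1, 50)
     A, Ap, B, Bp = ([random.randint(0, 1) for _ in range(n)] for _ in range(4))
     s1, s2 = bell_S_via_E(A, Ap, B, Bp), bell_S_via_F(A, Ap, B, Bp)
     assert abs(s1 - s2) < 1e-9 and abs(s1) <= 2.0
 print("Both computations agree and |S| <= 2 in every trial")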