Quantum Reality and Cosmology
Complementarity and Spooky Paradoxes

Contents

Quantum Cosmology

  1. The Quantum Universe
  2. Origin of Time and Space
  3. The Holographic principle, Entanglement, Space-Time and Gravity
  4. Inflation, Dark Matter and Dark Energy
  5. Cosmic Symmetry-Breaking, Inflation and Grand Unification
  6. String Theory, Quantum Gravity and Space-time Structure
  7. Exotic Cosmologies

Quantum Theory and Relativity

  1. The Wave, the Particle and the Quantum
  2. Two-slit Interference and Complementarity
  3. The Cat Paradox and the Role of Consciousness
  4. The Two-timing Nature of Special Relativity
  5. Reality and Virtuality: Quantum fields and Seething Uncertainty

Quantum Reality

  1. The Spooky Nature of Quantum Entanglement
  2. Delayed Choice, Quantum Erasure, Entanglement Swapping and Procrastination
  3. Quantum Teleportation, Computing and Cryptography
  4. Weak Quantum Measurement, Surreal Trajectories and Many Interacting Worlds
  5. Quantum Decoherence, Darwinism, Discord and Recoherence
  6. Quantum Chaos and Entanglement Coupling
  7. Time Crystals, Reversing Time's Arrow
  8. Quantum Match-making: Transactional Supercausality and Reality
  9. Quantum Paradoxes of Time and Causality
  10. The Sexually-Complex Quantum World
Appendix: Complementary Views of Quantum Mechanics and Field Theory
References

Introduction

This article is designed to give an overview of the developments in quantum reality and cosmology, from the theory of everything to the spooky properties of quantum reality that may lie at the root of the conscious mind. Along the way, it takes a look at just about every kind of weird quantum effect so far discovered, while managing a description which the general reader can follow without a great deal of prior knowledge of the area.

The Quantum Universe

The universe appears to have had an explosive beginning, sometimes called the big bang, in which space and time as well as the material leading to the galaxies were created. The evidence is pervasive, from the increasing red-shift of recession of the galaxy clusters, like the deepening sound of a train horn as the train recedes, to the existence of cosmic background radiation, the phenomenally stretched and cooled remnants of the original fireball. The cosmic background shows irregularities of the early universe at the time radiation separated from matter, when the first atoms formed from the flux of charged particles. From a very regular symmetrical 'isotropic' beginning for such an explosion, these fluctuations, which may be of a quantum nature, have become phenomenally expanded and smoothed to the scale of galaxies, consistent with a theory called inflation. The large-scale structure of the universe in our vicinity, out to a billion light years surrounding the Milky Way, our super-cluster Laniakea, and even larger structures, including the Shapley Attractor and Dipole Repeller, shaped by variations in dark matter, as in the Millennium simulation, are shown in Fig 1.

Fig 2: (a) The cosmic background - a red-shifted primal fireball (WMAP). This radiation separated from matter as charged plasma condensed to atoms. The fluctuations are smoothed in a manner consistent with subsequent inflation. (b) Eternal inflation and big bounce models. The fractal inflation model leaves behind mature universes while inflation continues endlessly. A big crunch leads to a new big(ger) bang. (c) Darwin in Eden: "Paradise on the cosmic equator" - life is an interactive complexity catastrophe consummating in intelligent organisms, resulting ultimately from force differentiation. This summative Σ interactive state is thus cosmological and as significant as the α of its origin and the Ω of the big crunch or heat death in endless expansion.

Origin of Time and Space: In special relativity, the space-time interval (3) can be expressed either (left) in terms of real time in Minkowski space, in which the interval is independent of the inertial frame of reference under the Lorentz transformations of special relativity, or equivalently (right) in terms of imaginary time in ordinary Euclidean 4-D space. These two are generalized in higher spatial dimensions into anti-de Sitter and de Sitter space respectively (see fig 3(b)). Hartle and Hawking suggest that if we could travel backward in time toward the beginning of the Universe, we would note that quite near what might have otherwise been the beginning, time gives way to space such that at first there is only space and no time. Beginnings are entities that have to do with time; because time did not exist before the Big Bang, the concept of a beginning of the Universe is meaningless. According to the Hartle-Hawking proposal, the Universe has no origin as we would understand it: the Universe was a singularity in both space and time, pre-Big Bang. Thus, the Hartle-Hawking state Universe, or its wave function, has no beginning - it simply has no initial boundaries in time nor space, rather like the south pole of the Earth in Euclidean space with imaginary time, but becomes a singularity in Minkowski space in real time. According to the theory, time diverged from the three spatial dimensions only after the Universe reached the Planck time, the time required for light to travel in a vacuum a distance of 1 Planck length, or approximately 5.39 × 10⁻⁴⁴ s. Because the Planck time comes from dimensional analysis, which produces a factor with the dimensionality of time from fundamental units while ignoring constant factors, the Planck length and time represent a rough scale at which quantum gravitational effects are likely to become important. Also, since the universe was finite and without boundary in its beginning, according to Hawking, it should ultimately contract again.
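
The dimensional-analysis origin of the Planck scale mentioned above is easy to reproduce explicitly (a minimal sketch that simply combines the fundamental constants in the only way that yields a time; the numerical constants are standard values, not taken from the article):

```python
import math

G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34    # reduced Planck constant, J s
C    = 2.998e8      # speed of light, m/s

planck_time   = math.sqrt(HBAR * G / C**5)   # ~5.39e-44 s
planck_length = C * planck_time              # ~1.6e-35 m, distance light travels in one Planck time

print(f"{planck_time:.3e} s, {planck_length:.3e} m")
```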

The Holographic Principle, Entanglement, Space-Time and Gravity

Two forms of evidence link quantum entanglement to cosmological processes that may involve gravity and the structure of space-time. The holographic principle asserts that in a variety of unified theories, an n-D theory can be holographically represented by the physics of a corresponding (n-1)-D theory on a surface enclosing the region.

Fig 3: (a) An illustration of the holographic principle in which physics on the 3D interior of a region, involving gravitational forces represented as strings, is determined by a 2D holographic representation on the boundary in terms of the physics of particle interactions. This correspondence has been successfully used in condensed matter physics to represent the transition to superconductivity, as the dual of a cooling black hole's "halo" (Merali 2011; Sachdev arXiv:1108.1197). (b) Holographic principle explained. Einstein's field equations can be represented on anti-de Sitter space, a space similar to hyperbolic geometry, where there is an infinite distance from any point to the boundary. This 'bulk' space can also be thought of as a tensor network as in (c). In 1998 Juan Maldacena discovered a 1-1 correspondence between the gravitational tensor geometry in this space and a conformal quantum field theory, like standard particle field theories, on the boundary. A particle interaction in the volume would be represented as a more complex field interaction on the boundary, just as a hologram can generate a complex 3D image from wavefront information on a 2D photographic plate (Cowen 2015). The holographic principle can be used to generate dualities between higher dimensional string theories and more tractable theories that avoid the infinities that can arise when the analogue of Feynman diagrams is used for perturbation theory calculations in string theory. (c) Entanglement plays a pivotal role because when the entanglement between two regions on the boundary is reduced to zero, the bulk space pinches off and separates into two regions. (d) In an application to cosmology, entanglement on the horizon of black holes may occur if and only if a wormhole in space-time connects their interiors. Einstein and Rosen addressed both worm-holes and the pair-splitting EPR experiment. Juan Maldacena sent colleague Leonard Susskind the cryptic message ER=EPR outlining the root idea that entanglement and worm-holes were different views of the same phenomenon (Maldacena and Susskind 2013, Ananthaswamy 2015). (e) Time may itself be an emergent property of quantum entanglement (Moreva et al. 2013). An external observer (1) sees a fixed correlated state, while an internal observer using one particle of a correlated pair as a clock (2) sees the quantum state evolving through two time measurements using polarization-rotating quartz plates and two beam splitters PBS1 and PBS2.

The Holographic Principle: A collaboration between physicists and mathematicians has made a significant step toward unifying general relativity and quantum mechanics by explaining how spacetime emerges from quantum entanglement in a more fundamental theory. The holographic principle states that gravity in, say, a three-dimensional volume can be described by quantum mechanics on a two-dimensional surface surrounding the volume. The correspondence applies generally to anti-de Sitter spaces modelling gravitation in n dimensions and conformal field theories in (n-1) dimensions, and plays a central role in decoding string and M-theories. Juan Maldacena's (1998) paper has become the most cited in theoretical physics, with over 7000 citations. The researchers have now found that quantum entanglement is the key to how spacetime emerges. Using a quantum theory (that does not include gravity), they showed how to compute the energy density, which is a source of gravitational interactions in three dimensions, using quantum entanglement data on the surface. This allowed them to interpret universal properties of quantum entanglement as conditions on the energy density that should be satisfied by any consistent quantum theory of gravity, without actually explicitly including gravity in the theory (Lin et al. 2015).

In a second, experimental investigation, working directly with quantum entangled states (Moreva et al. 2013), time itself was found to be an emergent property of quantum entanglement. In the experiment, an external observer sees time as fixed throughout, while an observer using one particle of an entangled pair as a clock perceives time as evolving (fig 3(e)).

The holographic principle, otherwise known as the anti-de Sitter/conformal field theory (AdS/CFT) correspondence, has since been found to imply several conjectures when applied to combine gravity and quantum mechanics - that no global symmetries are possible, that internal gauge symmetries must come with dynamical objects that transform in all irreducible representations, and that internal gauge groups must be compact (Harlow & Ooguri 2019). Their previous work had found a precise mathematical analogy between the holographic principle and quantum error correcting codes, which protect information in a quantum computer. In the new paper, they showed such quantum error correcting codes are not compatible with any global symmetry, meaning that global symmetries would not be possible in quantum gravity. This result has several important consequences. It predicts, for example, that protons are not absolutely stable and must eventually decay into other elementary particles, and that magnetic monopoles exist.

Fig 4: Above: Sketch of the timeline of the holographic Universe. Time runs from left to right. The far left denotes the holographic phase and the image is blurry because space and time are not yet well defined. At the end of this phase (denoted by the black fluctuating ellipse) the Universe enters a geometric phase, which can now be described by Einstein's equations. The cosmic microwave background was emitted about 375,000 years later. Patterns imprinted in it carry information about the very early Universe and seed the development of structures of stars and galaxies in the late time Universe (far right). Below: Angular power spectrum of CMB anisotropies, comparing Planck 2015 data with best fit ΛCDM (dotted blue curve) and holographic cosmology (solid red curve) models, for l ≥ 30.

Holographic Origin: A class of holographic models for the very early Universe (Afshordi et al. 2017) based on three-dimensional perturbative super-renormalizable quantum field theory (QFT) has been tested against cosmic microwave background observations and found to be competitive with the inflationary standard cold dark matter model with a cosmological constant (ΛCDM) of cosmology.

Inflation, Dark Matter and Dark Energy

Cosmic Inflation: Alan Guth and Alexei Starobinsky proposed in 1980 that a negative pressure field, similar in concept to dark energy, could drive cosmic inflation in the very early universe - a repulsive force resulting in an enormous, exponential expansion of the universe just after the Big Bang, but at a much higher energy density than the dark energy we observe today, and thought to have completely ended when the universe was just a fraction of a second old. It is unclear what relation, if any, exists between dark energy and inflation. Nearly all inflation models predict that the total (matter + energy) density of the universe should be very close to the critical density. The evidence from the early universe indicates that there was not simply an explosive beginning in a big bang but an extremely rapid exponential inflation of the universe in the first 10⁻³⁵ s into essentially the huge expanding universe we see today.

Since dark energy and dark matter dominate the cosmological mass-energy equation, interest is now focusing on dark inflation as better capable of modelling the earliest phases of the universe, such as the inflationary period, whose energy parameters and precise dynamics remain highly uncertain. The range of energies at which inflation could have occurred is vast, stretching over 70 orders of magnitude. In order to recreate the observed dominance of radiation in the Universe, inflatons should lose energy rapidly. The researchers propose two physical mechanisms which could be responsible for the process, and find that the new model reproduces the course of the Universe's thermal history with far greater accuracy than previously. If inflation involved the dark sector, the contribution of gravitational waves increases proportionally. This means that traces of the primordial gravitational waves are not as weak as originally thought. Data suggests that primordial gravitational waves could be detected by observatories currently at the design stage or under construction (Artymowski M et al. 2018 doi:10.1088/1475-7516/2018/04/046). The stability of the earliest phase against immediate gravitational collapse also appears to depend on the relation between gravity and the Higgs field in the inflationary epoch (Herranen M et al. 2014 Phys. Rev. Lett. 113, 211102).


Fig 4b: Dark inflation gives a more precise description of the inflationary period based on the dominant mass-energy components of the universe, and predicts gravitational waves detectable in existing pulsar timing data (circled far right), which may soon be within reach of a new generation of instruments. Data suggests that primordial gravitational waves could be detected by observatories currently at the design stage or under construction, such as the Deci-Hertz Interferometer Gravitational Wave Observatory (DECIGO), Laser Interferometer Space Antenna (LISA), European Pulsar Timing Array (EPTA) and Square Kilometre Array (SKA). The first events could be detected in the coming decade.

In some 'eternal inflation' models the inflation is fractal, leaving behind mature 'bubble' universes while inflation continues unabated (fig 2(b)). The inflationary model explains the big bang neatly in terms of the same process of symmetry-breaking which caused the four forces of nature - gravity, electromagnetism and the weak and strong nuclear forces - to become so different from one another. The large-scale cosmic structure is thus related to the quantum scale in one logical puzzle. In this symmetry-breaking the universe adopted its very complex 'twisted' form which made the hierarchical interaction of the particles to form protons and neutrons, then atoms and finally molecules and complex molecular life possible. We can see this twisted nature in the fact that all the charges in the nucleus are positive or neutral protons and neutrons, while the electrons orbiting an atom are all negatively charged. Some theories model inflation on the idea of a scalar field and some of these consider that the Higgs particle may itself be the source of the hypothetical inflaton generating this field (arXiv:1011.4179).

Symmetry-breaking is a classic example of engendering at work. Cosmic inflation explains why the universe seems to have just about enough energy to fly apart into space and no more, and why disparate regions of the universe, which seemingly couldn't have communicated since the big bang at the speed of light, seem to be so regular. Inflation ties together the differentiation of the fundamental forces and an exponential expansion of the universe based on a form of anti-gravity which exists only until the forces break their symmetry. Inflation explains galactic clusters as phenomenally inflated quantum fluctuations and suggests that our entire universe may have emerged from its own wave function in a quantum fluctuation. However more recent modelling suggests that, due to these quantum effects, inflation can lead to a multiverse where the universe breaks up into an infinite number of patches, which explore all conceivable properties as you go from patch to patch. Hence we shall investigate other models such as the ekpyrotic scenario which also predict a smoothed-out universe. On the other hand the latest data from the Planck satellite does favour the simplest models of inflation, in which the size of temperature fluctuations is, on average, the same on all distance scales (doi:10.1038/nature.2014.16462).

Our view of the distant parts of the universe, which we see long ago because of the time light has taken to reach us, likewise confirms a different, more energetic galactic early life. We can look out to the limits of the observable universe and, because of the long delay which light takes to cross such a vast region, witness quasars and early energetic galaxies, which are quite different from mature galaxies such as our own Milky Way.


Fig 5: Researchers used instruments at the Atacama Large Millimeter/submillimeter Array observatory in Chile to observe light emitted in a galaxy called MACS1149-JD1, one of the farthest light sources visible from Earth. The emissions are a clue to the galaxy's redshift, detected in an emission line of doubly ionized oxygen at a redshift of 9.1096 ± 0.0006. The galaxy's redshift suggests that the starlight was emitted when the universe was about 550 million years old, but many of those stars were already about 300 million years old, further calculations indicate. That finding suggests that the stars would have blinked into existence some 250 million years after the universe's birth (Hashimoto doi:10.1038/s41586-018-0117-z).
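
The ages quoted in the caption above can be illustratively cross-checked with standard cosmological parameters (a minimal sketch using the publicly available astropy library and its built-in Planck15 model, which is an assumption here rather than the cosmology actually adopted by Hashimoto et al.):

```python
from astropy import units as u
from astropy.cosmology import Planck15  # flat LambdaCDM with Planck 2015 parameters (assumed)

z = 9.1096                              # [O III] redshift of MACS1149-JD1
age_at_z = Planck15.age(z).to(u.Myr)    # cosmic age when the observed light was emitted
print(age_at_z)                         # roughly 550 Myr after the Big Bang

# If those stars were already ~300 Myr old when observed, they formed at about:
print(age_at_z - 300 * u.Myr)           # roughly 250 Myr after the Big Bang
```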

Early Evolution: As shown in fig 7, the evolution of the early universe from the end of the hypothesized inflationary period begins with the cosmic background radiation, emitted when radiation decoupled from matter as the charged plasma condensed to form neutral atoms. This led initially to a dark age of a few hundred million years before the first galaxies formed and stars began to shine. The evidence from fig 5 implies that the first stars were radiating from as early as 250 million years after the cosmic origin. Other research from radio frequencies (SN: 3/31/18 p6) suggests star formation began as early as 180 million years after the Big Bang.

Reheating: The link between the inflationary phase and the view we have of the hot particulate origin of the expanding universe in the Big Bang is called "reheating". The earliest phases of reheating should be marked by resonances: one form of high-energy matter dominates and shakes back and forth in sync with itself across large expanses of space, leading to explosive production of new particles, until the resonant effect breaks up and the produced particles scatter off each other and come to some sort of thermal equilibrium, reminiscent of Big Bang conditions. The scientists chose a model of inflation whose predictions closely match high-precision measurements of the cosmic microwave background emitted 380,000 years after the Big Bang, which is thought to contain traces of the inflationary period. The simulation tracked the behaviour of two types of matter that may have been dominant during inflation, very similar to a type of particle, the Higgs boson, that was recently observed in other experiments. Matter at very high energies was modelled as interacting with gravity in ways that are modified by quantum mechanics at the atomic scale. Quantum-mechanical effects predict that the strength of gravity can vary in space and time when interacting with ultra-high-energy matter - non-minimal coupling. They found that the stronger this quantum-modified gravitational effect was in affecting matter, the faster the universe transitioned from the cold, homogeneous matter of inflation to the much hotter, diverse forms of matter characteristic of the Big Bang. By tuning this quantum effect, they could make this crucial transition take place over 2 to 3 "e-folds" - the time it takes the universe to expand by a factor e ≈ 2.718, roughly a tripling in size. By comparison, inflation itself took place over about 60 e-folds (Nguyen et al. 2019).
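
Since the e-fold bookkeeping above is easy to misread, here is a one-line check of the expansion factors involved (a minimal sketch; the ~60 e-folds for inflation is the round number quoted in the text, not a derived value):

```python
import math

# One e-fold expands the universe by a factor e ~ 2.718 (roughly a tripling).
print(math.e)                        # 2.718... expansion per e-fold
print(math.exp(2), math.exp(3))      # over the 2-3 e-folds of reheating: a factor of ~7-20
print(f"{math.exp(60):.1e}")         # over ~60 e-folds of inflation: a factor of ~1e26
```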

Ultimate fate: The eventual fate of the universe is less certain, because its rate of expansion brings it very close to the limiting condition between the gravitational attraction of the mass-energy it contains ultimately reversing the expansion, causing an eventual collapse, and continued expansion forever. The evidence now favours a perpetual and possibly accelerating expansion, and astronomers are seeking to explain the apparently missing mass with dark matter, and the accelerating expansion with a dark energy - called 'quintessence' in its more time-varying forms - which may itself vary over time.

The missing mass is clearly evident in nearby galaxies, which spin so rapidly they would fly apart if the only matter present were the luminous matter of stars, black holes and gaseous nebulae. WMAP and Planck data now suggest the universe's rate of expansion has increased part way through its lifetime and that its large-scale dynamics are governed mostly by dark energy (68.3%), with successively smaller contributions from dark matter (26.8%) and ordinary galactic matter and radiation (4.9%). At the time of the cosmic microwave background radiation (CMB), dark matter comprised 63% of the total density, photons 15%, atoms 12% and neutrinos 10%, but because photons have zero rest mass, and the CMB is full of low-energy photons, the particle ratio is about 10⁹ photons for each proton or neutron.
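
The quoted photon-to-baryon ratio follows directly from the measured baryon-to-photon ratio η (a minimal sketch; η ≈ 6 × 10⁻¹⁰ is the standard nucleosynthesis/CMB value, assumed here rather than quoted in the article):

```python
# Photons per baryon in the universe, from the baryon-to-photon ratio eta.
eta = 6.1e-10                        # baryons per photon (assumed standard value)
photons_per_baryon = 1 / eta
print(f"{photons_per_baryon:.1e}")   # ~1.6e9 photons for each proton or neutron
```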


Fig 6: (left) SN 2011fe, a type 1a supernova 21 million light-years away in galaxy M101 discovered in 2011, shown in before-and-after images of the galaxy. Theoretical models for the current expansion rate taking into account normal and dark matter and dark energy using cosmic microwave background data infer a Hubble constant of 67, but current measurements of supernovae put the figure at 73 or 74, based on actually measuring the expansion by analyzing how the light from distant supernova explosions has dimmed over time. Explanations vary from quintessence models of an actual field rather than a cosmological constant, through additional neutrino types, to relativistic particles moving close to light speed, or interactions between dark matter and radiation in the early universe. (right) Discrepancies between early and late measurements of the Hubble constant.

Dark energy: A type 1a supernova occurs in binary systems in which one of the stars is a white dwarf, which gradually accretes mass from its companion, which can be anything from a giant star to an even smaller white dwarf, until its core reaches the ignition temperature for carbon fusion. Within a few seconds of the initiation of nuclear fusion, a substantial fraction of the matter in the white dwarf undergoes a runaway reaction, releasing enough energy to unbind the star in a supernova explosion. This process produces a consistent peak luminosity because of the uniform mass of the white dwarfs that explode via the accretion mechanism. The stability of this value allows these explosions to be used as standard candles to measure the distance to their host galaxies, because the visual magnitude of the supernovae depends primarily on the distance. In 1998 two separate teams noted that distant supernovae were much dimmer than they should be. The simplest and most logical explanation is that the expansion of the universe is now accelerating by comparison with measures of the earlier universe such as the cosmic microwave background (arXiv:astro-ph/9805201, arXiv:astro-ph/9812133).
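
To make the standard-candle logic concrete, here is a minimal sketch of the distance-modulus relation, using the SN 2011fe distance quoted in fig 6 and a typical type 1a peak absolute magnitude of about -19.3 (an assumed textbook value, not a number from the article). A supernova that appears dimmer than this relation predicts is, in effect, further away than expected, which is what signalled the acceleration.

```python
import math

M_PEAK = -19.3                      # assumed typical peak absolute magnitude of a type 1a supernova
LY_PER_PC = 3.2616                  # light-years per parsec

def apparent_magnitude(distance_ly: float, absolute_mag: float = M_PEAK) -> float:
    """Apparent peak magnitude from the distance modulus m - M = 5*log10(d / 10 pc)."""
    d_pc = distance_ly / LY_PER_PC
    return absolute_mag + 5 * math.log10(d_pc / 10.0)

# SN 2011fe in M101, about 21 million light-years away (fig 6):
print(apparent_magnitude(21e6))     # ~ +9.7, close to its observed peak of roughly 10
```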

Fig 7: Left: Cosmic history including inflation and dark energy. Right: After stars formed in the early Universe, their ultraviolet light is expected to have penetrated the primordial hydrogen gas and altered the excitation state of its 21-centimetre hyperfine line, causing the gas to absorb photons from the cosmic microwave background, producing a distortion at radio frequencies of less than 200 MHz. The onset of the cosmic dawn is estimated to be no later than 180 million years after the Big Bang. The signal's disappearance gives away a second milestone – when more-energetic X-rays from the deaths of the first stars raised the temperature of the gas and turned off the signal – around 250 million years after the Big Bang. The strength suggests that either there was more radiation than expected in the cosmic dawn, or the gas was cooler than predicted. That points to dark matter, which theories suggest should have been cold in the cosmic dawn. The results suggest dark matter should be lighter than current theory indicates. This could help to explain why physicists have failed to observe dark matter directly (doi:10.1038/nature25792).

Soon after, dark energy was supported by independent observations: in 2000, the BOOMERanG and Maxima cosmic microwave background experiments observed the first acoustic peak in the CMB, showing that the total (matter + energy) density is close to 100% of critical density. Then in 2001, the 2dF Galaxy Redshift Survey gave strong evidence that the matter density is around 30% of critical. The large difference between these two supports a smooth component of dark energy making up the difference of some 70%.

Dark energy is poorly understood at a fundamental level; the main required properties are that it functions as a type of anti-gravity, that it dilutes much more slowly than matter as the universe expands, and that it clusters much more weakly than matter, or perhaps not at all.

The cosmological constant Λ (see equation 5), is the simplest possible form of dark energy since it is constant in both space and time, and this leads to the current standard ΛCDM model of cosmology, involving the cosmological constant Λ and cold dark matter. It is frequently referred to as the standard model of Big Bang cosmology, because it is the simplest model that provides a reasonably good account of (1) the cosmic microwave background, (2) the large-scale structure of the galaxies, (3) the abundances of hydrogen (including deuterium), helium, and lithium and (4) the accelerating expansion of the universe.

Antimatter gravity could also provide an explanation for the Universe's expansion if significant amounts of anti-matter can be found. The current formulation of general relativity predicts that matter and antimatter are both self-attractive, yet matter and antimatter mutually repel each other. CPT symmetry means that, in order to transform a physical system of matter into an equivalent antimatter system (or vice versa) described by the same physical laws, not only must particles be replaced with corresponding antiparticles (C operation), but an additional PT transformation is also needed. From this perspective, antimatter can be viewed as normal matter that has undergone a complete CPT transformation, in which its charge, parity and time are all reversed. Even though the charge component does not affect gravity, parity and time affect gravity by reversing its sign. So although antimatter has positive mass, it can be thought of as having negative gravitational mass, since the gravitational charge in the equation of motion of general relativity is not simply the mass, but includes a factor that is PT-sensitive and yields the change of sign. CPT symmetry means that antimatter basically exists in an inverted spacetime – the P operation inverts space, and the T operation inverts time (Villata 2011).

Quintessence is a model of dark energy in the form of a scalar field forming a fifth force of nature that changes over time, unlike the cosmological constant, which always stays fixed. It could be either attractive or repulsive, depending on the ratio of its kinetic and potential energy. Quintessence theories, which combine standard model constraints with a dark energy field, may help to provide a real constraint which might enable string theories to be physically tested and to reduce their vast number of possible universes with differing laws of nature to the ones we experience (arXiv:1806.08362). The scalar field of the Higgs boson would appear to create a contradiction with quintessence constraints forbidding scalar field critical points if the two interacted, unless they do so in a particular way which would be physically identifiable (doi:10.1103/PhysRevD.98.086004).

Fig 7b: Claudia de Rham. In the latest acknowledgement of her breakthrough, she received the Blavatnik Award for Young Scientists, two years after winning the Adams prize, one of the University of Cambridge's oldest and most prestigious awards.

Chameleon Particle is a hypothetical scalar particle that couples to matter more weakly than gravity, postulated as a dark energy candidate. Due to a non-linear self-interaction, it has a variable effective mass which is an increasing function of the ambient energy density - as a result, the range of the force mediated by the particle is predicted to be very small in regions of high density (for example on Earth, where it is less than 1 mm) but much larger in low-density intergalactic regions: out in the cosmos chameleon models permit a range of up to several thousand parsecs. As a result of this variable mass, the hypothetical fifth force mediated by the chameleon is able to evade current constraints on equivalence principle violation derived from terrestrial experiments, even if it couples to matter with a strength equal to or greater than that of gravity. Although this property would allow the chameleon to drive the currently observed acceleration of the universe's expansion, it also makes it very difficult to test for experimentally. It has been proposed in realistic models of galaxy formation (Realistic simulations of galaxy formation in f(R) modified gravity, Nature Astronomy doi:10.1038/s41550-019-0823-y), could be produced in the tachocline layer of the Sun through magnetic fields interacting with photons, and has been putatively detected in the XENON1T dark matter experiment (arXiv:2103.15834).

Fig 7c: Above: galaxy formation models. Below: XENON1T detections and chameleon model of solar emission.

Massive Gravity: If gravitons have a mass, then gravity is expected to have a weaker influence on very large distance scales, which could explain dark energy and why the expansion of the universe has not been reined in. Claudia de Rham's work (de Rham, Gabadadze & Tolley 2011, de Rham 2014, de Rham et al. 2017) marks a breakthrough in a century-long quest to build a working theory of massive gravity. Despite successive efforts, previous versions of the theory had the unfortunate feature of predicting the instantaneous decay of every particle in the universe - an intractable issue that mathematicians refer to as a "ghost". When de Rham and her collaborators published their landmark paper on massive gravity in 2011, the response was initially swift and hostile, due to the possible presence of ghosts in their theory, but after 8 years the theory has stood up and is gaining traction. A discussion of these issues can be found in Merali (2013).

Particle physics predicts the existence of vacuum energy which could explain dark energy, but also asserts that it should be 10¹²⁰ times larger than what is needed to explain the accelerating expansion observed by astronomers. Gravity is long-range, because we feel gravity from the Sun. However, if the graviton had a tiny mass of less than 10⁻³³ eV, it would still fit with all astronomical observations. (Neutrinos have masses of the order of 1 eV, and the electron has a mass of about 511,000 eV.) A mass-carrying graviton would swallow up almost all of the vacuum's energy, leaving behind just a small fraction as dark energy to cause the Universe to accelerate outwards. Such experiments could soon be carried out within the Solar System, because massive-gravity models predict a gravitational field between Earth and the Moon that is slightly different to that of the Sun. This would create a detectable difference of one part in 10¹² in the precession of the Moon's orbit around Earth. Experiments that fire lasers back and forth between Earth and mirrors left on the Moon currently measure the distance between the two bodies, and that angle, with an accuracy of one part in 10¹¹.
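
A rough way to see why a graviton mass of order 10⁻³³ eV is still compatible with observation is to compare its Compton wavelength with the Hubble radius (a minimal order-of-magnitude sketch; the constants used are standard values, not from the article):

```python
# Compare the reduced Compton wavelength of a 1e-33 eV graviton with the Hubble radius.
HBAR_C_EV_M = 1.973e-7   # hbar*c in eV*m
C = 2.998e8               # speed of light, m/s
H0 = 2.2e-18              # Hubble constant in 1/s (~68 km/s/Mpc, assumed)

m_graviton_eV = 1e-33
compton_wavelength = HBAR_C_EV_M / m_graviton_eV   # ~2e26 m
hubble_radius = C / H0                              # ~1.4e26 m

print(f"{compton_wavelength:.1e} m vs Hubble radius {hubble_radius:.1e} m")
# The two are comparable, so deviations from standard gravity would only
# show up on cosmological scales, below current astronomical sensitivity.
```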

Dark matter/energy: A model of dark energy emerging as a repulsive magnetic effect of dark matter has also been proposed (Loeve, Nielsen & Hansen 2021).


Fig 8: (Left) accelerating expansion with supernova and cosmic background measures. (Right) Phantom dark energy picture of the Big Rip. Cosmologists estimate that the acceleration began roughly 5 billion years ago. Before that, it is thought that the expansion was decelerating, due to the attractive influence of dark matter and baryons. The density of dark matter in an expanding universe decreases and eventually the dark energy dominates. When the volume of the universe doubles, the density of dark matter is halved, but the density of dark energy is constant for a cosmological constant and changes only slowly otherwise.

The changing rate of expansion of the universe is described by the equation of state parameter w = p/ρ, where p is the pressure and ρ is the energy density. Einstein's field equations (5) have an exact solution in the form of the Friedmann-Lemaitre-Robertson-Walker (FLRW) metric describing the scale factor a(t) of a homogeneous, isotropic expanding or contracting universe that is path connected, but not necessarily simply connected. If we examine the "effective" pressure and energy density, p_eff = p − Λ/8πG and ρ_eff = ρ + Λ/8πG (in units with c = 1), we can see that ä/a = −(4πG/3)(ρ_eff + 3p_eff) (6), where w_eff = p_eff/ρ_eff. This shows us the underlying relationship between the cosmological constant Λ and w.

From the above equation (6) we can see that for w < -1/3, the expansion of the universe will continue to accelerate. A fixed cosmological constant corresponds to w = -1, which we can see as follows: the cosmological constant has negative pressure equal to its energy density and so causes the expansion of the universe to accelerate if it is already expanding, and vice versa. This is because energy must be lost from inside a container (the container must do work on its environment) in order for the volume to increase. Specifically, a change in volume dV requires work done equal to a change of energy −p.dV, where p is the pressure. But the amount of energy in a container full of vacuum actually increases when the volume increases, because the energy is equal to ρV, where ρ is the energy density of the cosmological constant. Therefore, p is negative and p = −ρ.
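
The sign argument in the last three sentences can be condensed into one line (a minimal restatement of the reasoning above, with ρ the constant energy density of the vacuum):

```latex
dE = -p\,dV \quad\text{and}\quad E = \rho V \;\Rightarrow\; dE = \rho\,dV
\;\Rightarrow\; p = -\rho \;\Rightarrow\; w = \frac{p}{\rho} = -1 < -\tfrac{1}{3}
```

so a pure cosmological constant sits exactly at w = -1 and automatically satisfies the acceleration condition from equation (6).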

Phantom Dark Energy: Hypothetical phantom energy would have an equation of state w < -1. It possesses negative kinetic energy, and predicts expansion of the universe in excess of that predicted by a cosmological constant, which leads to a Big Rip. The expansion rate becomes infinite in finite time, causing the expansion to accelerate without bounds, passing the speed of light (since it involves expansion of the universe itself, not particles moving within it), causing more and more objects to leave our observable universe faster than its expansion, as light and information emitted from distant stars and other cosmic sources cannot "catch up" with the expansion. As the observable universe expands, objects will be unable to interact with each other via fundamental forces, and eventually the expansion will prevent any action of forces between any particles, even within atoms, "ripping apart" the universe. One estimate puts the destruction of the Milky Way at about 60 million years before the Big Rip, the solar system at 3 months before, the Earth at 30 minutes before, and atoms at 10⁻¹⁹ s before, as smaller and smaller scales become overwhelmed (fig 8). The value of w has been constrained by Planck in 2013 and 2015 to be w = -1.13 ± 0.13 and w = -1.006 ± 0.045 respectively, and the value from WMAP9 is w = -1.084 ± 0.063, leaving open the possibility of a big rip, although all remain consistent with w = -1 within their uncertainties (arXiv:1708.06981, arXiv:1502.01589, arXiv:1212.5226, arXiv:1409.4918).
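
As a quick check on how strongly these measurements actually favour w < -1, one can compute how far each central value sits below the cosmological-constant value in units of its quoted uncertainty (a minimal sketch using only the numbers quoted above):

```python
# Deviation of each measured w from the cosmological-constant value w = -1, in sigma.
measurements = {
    "Planck 2013": (-1.13, 0.13),
    "Planck 2015": (-1.006, 0.045),
    "WMAP9":       (-1.084, 0.063),
}
for name, (w, sigma) in measurements.items():
    print(f"{name}: {(-1 - w) / sigma:.1f} sigma below w = -1")
# All central values lie on the phantom side of -1, but only by ~0.1-1.3 sigma,
# so they remain statistically consistent with a cosmological constant.
```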

One application of phantom energy involves a cyclic model of the universe (arXiv:hep-th/0610213), in which dark energy with a w < -1 equation of state leads to a turnaround extremely shortly before the would-be Big Rip, at which both the volume and entropy of our universe decrease by a gigantic factor, while very many independent, similarly small contracting universes are spawned.

One theory attaches the turning on of dark energy part way through the expansion to new types of string-theory related axions (arXiv:1409.0549). Another attributes it to an additional scalar field that operates in a see-saw mechanism with the grand unification energy of the Higgs particle (doi:10.1103/PhysRevLett.111.061802), gaining a very small energy in inverse relation to the Higgs energy; the see-saw mechanism is also used to model small neutrino masses. Yet another ascribes it to 'dark magnetism' - primordial photons with wavelength greater than the universe (arxiv.org/abs/1112.1106).

Unimodular Gravity and Mass-Energy Leakage: Dark energy could come about because the total amount of energy in the universe isn't conserved, but may gradually disappear. Dark energy could be a new field, a bit like an electric field, that fills space. Or it could be part of space itself - a pressure inherent in the vacuum - called a cosmological constant. Quantum mechanics suggests the vacuum itself should fluctuate imperceptibly. In general relativity, those tiny quantum fluctuations produce an energy that would serve as the cosmological constant. Yet it should be 120 orders of magnitude too big - big enough to obliterate the universe. General relativity assumes a mathematical symmetry called general covariance, which says that no matter how you label or map spacetime coordinates - i.e. positions and times of events - the predictions of the theory must be the same. That symmetry immediately requires that energy and momentum are conserved. Unimodular gravity possesses a more limited version of that mathematical symmetry, and quantum fluctuations of the vacuum do not produce gravity or add to the cosmological constant; instead, allowing a violation of the conservation of energy and momentum sets the value of the cosmological constant. Dark energy thus keeps track of how much energy and momentum has been lost over the history of the universe (arXiv:1604.04183, doi:10.1126/science.aal0603).

A thermodynamic interpretation of dark energy has also been proposed. Carroll and Chatwin-Davies (arxiv.org/abs/1703.09241) take a definition of entropy which uses a quantum mechanical description of space-time to calculate what happens to the geometry of space-time as it evolves. Once a universe has reached peak entropy it is effectively one described by de Sitter geometry. In the 1980s, Robert Wald showed that a universe with a positive cosmological constant will end up as a flat, empty, featureless void known as de Sitter space, and Tom Banks suggested then that the value of dark energy could be related to the entropy of space-time. This thermodynamic way of thinking turns the standard view of dark energy on its head: dark energy emerges from the quantum structure of space-time and then drives the accelerated expansion. Solving the mystery of dark energy's value then becomes a case of justifying the choice of a particular quantum mechanical description of space-time.

Dark matter is likewise poorly understood. There are four basic candidates: axions; MACHOs (non-luminous small stars, black holes etc.); WIMPs (weakly interacting massive particles, which might emerge from extensions of the standard model); and complex dark matter experiencing strong self-interactions while interacting with normal matter only through gravity.

In terms of WIMPs, the most sensitive dark matter detector in the world is Gran Sasso's XENON1T, which looks for flashes of light created when dark matter interacts with atoms in its 3.5-tonne tank of extremely pure liquid xenon. But the team reported no dark matter from its first run. As of May 2018 the larger second run reported likewise. Neither was there any signal in data collected over two years during the second iteration of China's PandaX experiment, based in Jinping in Sichuan province. Hunts in space have also failed to find WIMPs, and hopes are fading that a once-promising γ-ray signal detected by NASA's Fermi telescope from the centre of the Milky Way (see fig 11) was due to dark matter - more-conventional sources seem to explain the observation. There has been only one major report of a dark-matter detection, made by the DAMA collaboration at Gran Sasso, but no group has succeeded in replicating that highly controversial result, although renewed attempts to match it are under way (Gibney 2017). In 2018 DAMA announced new results confirming the effect with new detectors. However, the upgrade has made it sensitive to lower-energy collisions. For typical dark-matter models, the timing of the fluctuations, as seen from Earth, should reverse below certain energies. The latest results don't show that. Furthermore, the COSINE-100 experiment, which like DAMA uses sodium iodide crystals, has seen no effect. LUX, the Large Underground Xenon experiment in South Dakota, also reported no sign (Aron 2016). The latest round of results seems to rule out the simplest and most elegant supersymmetry-based WIMP theories, leaving open the possibility of axions, or a hidden sector of particles interacting more feebly, or not at all, with normal matter.

Interest has also converged on a link between dark matter and anti-matter, in which axions interact differently with anti-matter, leading to a possible explanation of both dark matter and the preponderance of matter over anti-matter (Carosi G 2019 Nature 575, 293-4 doi:10.1038/d41586-019-03431-5). Significantly, because axions are bosons and can cohabit, the constraints on their possible masses are much less confining than for other dark matter candidates. Such approaches include the notion of "quark nuggets" - massive collections, e.g. of anti-quarks, surrounded by an envelope of axions which maintain their stability and shield their interaction with external ordinary matter, so that almost no interaction occurs (Quark nuggets of wisdom; Cosmic-ray detector might have spotted nuggets of dark matter).

The simplest model of dark matter portrays it as a single particle - one that happens to interact with others of its kind and normal matter very little or not at all. Physicists favor the most basic explanations that fit the bill and add extra complications only when necessary, so this scenario tends to be the most popular. For dark matter to interact with itself requires not only dark matter particles but also a dark force to govern their interactions and dark boson particles to carry this force. This more complex picture mirrors our understanding of normal matter particles, which interact through force-carrying particles. Self-interacting dark matter with dark forces and dark photons may not be as simple as the single-particle explanation but it is just as reasonable an idea.

A 2018 study of four colliding galaxies in the galaxy cluster Abell 3827 for the first time suggests that the dark matter in them may be interacting with itself through some unknown force other than gravity that has no effect on ordinary matter. The dark matter in Abell 3827 is plentiful, so it warps the space around it significantly. The scientists found that in at least one of the colliding galaxies the dark matter in the galaxy had become separated from its stars and other visible matter by about 5,000 light-years. One explanation is that the dark matter from this galaxy interacted with dark matter from one of the other galaxies flying by it, and these interactions slowed it down, causing it to separate and lag behind the normal matter (doi:10.1038/nature.2015.17350).

D-star hexaquark Bose-Einstein condensate: When six quarks combine, this creates a type of particle called a dibaryon, or hexaquark. The d-star hexaquark d*(2380), described in 2014 and made of six light quarks (ududud - three u-quarks and three d-quarks), was the first non-trivial hexaquark detection. Because they are bosons, at close to absolute zero they could form Bose-Einstein condensates which might remain stable and form dark matter without having to extend the standard model (Bashkanov & Watts 2020). During the earliest moments after the Big Bang, as the cosmos slowly cooled, stable d*(2380) hexaquarks could have formed alongside baryonic matter, and the production rate of this particle would have been sufficient to account for the 85% of the Universe's mass that is believed to be dark matter. Calculations have shown that the H dibaryon udsuds, which could result from the combination of two uds hyperons, is light and (meta)stable and takes more than twice the age of the universe to decay.

Dark Negative Energy: A new theory unifies dark matter and dark energy into a single phenomenon: a fluid which possesses negative mass accompanied by negative gravity. To avoid this fluid rapidly diluting itself, the model introduces a 'creation tensor', which allows negative masses to be continuously created. It demonstrates that under continuous creation this negative mass fluid does not dilute during the expansion of the cosmos and appears to be identical to dark energy. It also provides the first correct predictions of the behaviour of dark matter halos: the computer simulation predicts the formation of dark matter halos just like the ones inferred by observations using modern radio telescopes (Farnes J 2018 Astronomy & Astrophysics arXiv:1712.07962).

Dark Sector Theories: There are further searches under way for lighter dark matter candidates such as the dark photon, using very intense beams of lower energy particles. Complex dark matter, or the dark sector, was first suggested in 1986 (Holdom B 1986 Phys. Lett. B 166, 196-198), but remained largely unexplored until a group of theorists resurrected the theory (Arkani-Hamed N et al. 2009 Phys. Rev. D 79, 015014), in light of results from a 2006 satellite mission called PAMELA (Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics), which had observed a puzzling excess of positrons in space. Two nearby pulsars, Geminga and PSR B0656+14, were identified as possible sources; however an international team using the High-Altitude Water Cherenkov Gamma-ray Observatory measured the positrons emanating from these and found they couldn't account for the surplus reaching Earth. Theorists suggested that the positrons might be spawned by dark-matter particles annihilating each other, but the weakly interacting massive particles (WIMPs) most often suggested would have also decayed into protons and antiprotons, which weren't seen by PAMELA. Another motivation came from a result reported in 2004 that found that the magnetic moment created by the spin and charge of the muon did not match the predictions of the standard model, suggesting a supersymmetry explanation. This experimental anomaly, called the muon g-2, could also be rectified by a dark-sector force. However a theoretical paper (arXiv:1801.10244) now suggests the effect may be purely due to gravitational space-time effects of the Earth on the relativistic high-energy muons. Dark matter collapse to form additional hidden galactic structures (doi:10.1103/PhysRevLett.120.051102) and dark fusion (doi:10.1103/PhysRevLett.120.221806) have also been proposed.

The most recent cosmological data, including the cosmic microwave background radiation anisotropies from Planck 2015, Type Ia supernovae, baryon acoustic oscillations, the Hubble constant and redshift-space distortions, show that an interaction in the dark sector parameterized as an energy transfer from dark matter to dark energy is strongly suppressed by the whole updated cosmological data set. On the other hand, an interaction between the dark sectors with energy flowing from dark energy to dark matter proves to be in better agreement with the available cosmological observations (arXiv:1605.04138).


Fig 9: A variety of searches are underway for lighter dark matter candidates. Log-log plots: vertical axis, relative interaction strength; horizontal axis, GeV from 0.01 to 1.

High precision measurements of the fine structure constant (fig 9, right; Science 360/6385 191-195 doi:10.1126/science.aap7706) have added further constraints that all but eliminate a dark photon, but do favour a dark axial vector boson, including the relaxion mass and reaction strength as in the central green triangle (arXiv:1708.00010).

There are two qualitatively different types of neutron life-time measurements. In the bottle method, ultracold neutrons are stored in a container for a time comparable to the neutron lifetime. The remaining neutrons that did not decay are counted and fit to a decaying exponential, exp(-t/τ_n). The average from the five bottle experiments included in the Particle Data Group (PDG) is 879.6 ± 0.6 s. In the beam method, both the number of neutrons N in a beam and the protons resulting from β decays are counted, and the lifetime is obtained from the decay rate, dN/dt = -N/τ_n. This yields a considerably longer neutron lifetime; the average from the two beam experiments included in the PDG average is 888.0 ± 2.0 s. The discrepancy between the two results is 4.0 σ. A possible explanation arises from the neutron having a second decay path into the dark sector. This path violates baryon number, and the proton decay it would generically give rise to (via a virtual neutron followed by its alternate decay) can be eliminated from the theory if the sum of masses of particles in the minimal final state of the neutron decay process is larger than m_p − m_e. On the other hand, for the neutron to decay, that sum must be smaller than the neutron mass, setting prospective bounds on the dark particle mass (Fornal & Grinstein 2017). However recent evidence from the UCNtau team claims to have ruled out the presence of the telltale gamma rays with 99 percent certainty (arXiv:1802.01595).
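
The 4.0 σ figure quoted above follows directly from combining the two quoted uncertainties in quadrature (a minimal sketch using only the lifetimes given in the paragraph):

```python
import math

bottle = (879.6, 0.6)   # neutron lifetime from bottle experiments, seconds (PDG average)
beam   = (888.0, 2.0)   # neutron lifetime from beam experiments, seconds (PDG average)

difference = beam[0] - bottle[0]                  # 8.4 s
combined_sigma = math.hypot(bottle[1], beam[1])   # ~2.09 s, uncertainties added in quadrature
print(f"discrepancy = {difference:.1f} s = {difference / combined_sigma:.1f} sigma")  # ~4.0 sigma
```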

Modified Newtonian Dynamics (MOND) attempts to avoid the need for dark matter by modifying gravity to account for the observed high velocities of stars around the galaxy, amending Newton's second law so that, at extremely small accelerations characteristic of galaxies yet far below anything typically encountered in the Solar System or on Earth, gravity is proportional to the square of the acceleration instead of the first power, so that it varies inversely with galactic radius (as opposed to the inverse square). However, MOND and its relativistic generalisations such as TeVeS do not adequately account for observed properties of galaxy clusters, and no satisfactory cosmological model has been constructed. Furthermore, TeVeS and another class of so-called Galileon theories, which introduce scalar and/or vector fields which decay more slowly, have been decisively disproved by the LIGO neutron star collision, because this proved gravitational waves travel at the speed of light, inconsistent with these theories.
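
To see how this modified force law produces flat rotation curves, here is a minimal sketch of the deep-MOND limit, in which m a²/a₀ = GMm/r² with a = v²/r gives v⁴ = G M a₀ (the acceleration scale a₀ ≈ 1.2 × 10⁻¹⁰ m/s² is the standard MOND fit value, assumed here rather than quoted in the article):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10         # assumed MOND acceleration scale, m/s^2
M_SUN = 1.989e30     # solar mass, kg

def flat_rotation_velocity(baryonic_mass_kg: float) -> float:
    """Asymptotic circular velocity in the deep-MOND regime, v = (G * M * a0)**0.25."""
    return (G * baryonic_mass_kg * A0) ** 0.25

# A Milky-Way-like baryonic mass of ~1e11 solar masses gives roughly 200 km/s,
# of the order of observed flat rotation speeds, without invoking dark matter.
print(flat_rotation_velocity(1e11 * M_SUN) / 1e3, "km/s")
```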

Fig 10: EG and its experimental test: (a) Two forms of long range entanglement connecting bulk excitations that carry the positive dark energy either with the states on the horizon or with each other. (b) In anti-de-Sitter space (left) the entanglement entropy obeys a strict area law and all information is stored on the boundary. In de-Sitter space (right) the information delocalizes into the bulk volume and creates a memory effect in the dark energy medium by removing the entropy from an inclusion region. (c) The ESD profile predicted by EG for isolated central galaxies, both in the case of the point mass approximation (dark red, solid) and the extended galaxy model (dark blue, solid), compared with observed values. The difference between the predictions of the two models is comparable to the median 1σ uncertainty on our lensing measurements (grey band).

Emergent Gravity (EG) as a Comprehensive Solution: EG is a radical new theory of gravitation developed by Erik Verlinde in 2011 (arXiv:1001.0785), in which he developed from scratch a fundamental theory of how Newtonian gravitation can be shown to arise naturally in a theory in which space is emergent through a holographic scenario similar to the one discussed above in the context of black holes. Gravity is explained as an entropic force caused by changes in the information associated with the positions of material bodies. A relativistic generalization of the presented arguments directly leads to Einstein's equations. The way in which gravity arises from entropy can most easily be visualized in the context of polymer elasticity, where a linear polymer which randomly wriggles into a disordered arrangement thermodynamically is pulled out straight, resulting in an elastic force tending to take it back into a disordered configuration. Space is then an emergent property of the holographic boundary, and gravitation a consequence of entropy following an area law at the boundary surface, as in black hole entropy (Bekenstein J 1973 Black holes and entropy Phys. Rev. D 7, 2333).
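
The black-hole area law that this picture builds on can be made concrete with the Bekenstein-Hawking formula S = k_B c³ A / (4Għ) (a minimal sketch; the solar-mass example is illustrative and not taken from the article):

```python
import math

# Bekenstein-Hawking entropy S = kB * c^3 * A / (4 * G * hbar) for a Schwarzschild black hole.
G, C, HBAR = 6.674e-11, 2.998e8, 1.055e-34   # SI units
M_SUN = 1.989e30                              # solar mass, kg

def bh_entropy_in_kB(mass_kg: float) -> float:
    r_s = 2 * G * mass_kg / C**2              # Schwarzschild radius
    area = 4 * math.pi * r_s**2               # horizon area - entropy scales with area, not volume
    return C**3 * area / (4 * G * HBAR)       # entropy in units of Boltzmann's constant

print(f"{bh_entropy_in_kB(M_SUN):.1e} kB")    # ~1e77 kB for a solar-mass black hole
```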

In November 2016 Verlinde (arXiv:1611.02269) extended the theory to make predictions that can explain both dark energy and dark matter as manifestations of entanglement under the holographic scenario. The entanglement is long range and connects bulk excitations that carry the positive dark energy either with the states on the horizon or with each other. Both situations lead to a thermal volume law contribution to the entanglement entropy that overtakes the area law at the cosmological horizon. Due to the competition between area and volume law entanglement the microscopic states do not thermalise at sub-Hubble scales, but exhibit memory effects in the form of an entropy displacement caused by (baryonic) matter. The emergent laws of gravity thus contain an additional 'dark' gravitational force describing the 'elastic' response due to the entropy displacement, which in turn explains the observed phenomena in galaxies and clusters currently attributed to dark matter.

A month later, in December 2016 (arXiv:1612.03034), a group of astronomers made a test of the theory using weak gravitational lensing measurements. As noted, in EG the standard gravitational laws are modified on galactic and larger scales due to the displacement of dark energy by baryonic matter. EG thus gives an estimate of the excess gravity (as an apparent dark matter density) in terms of the baryonic mass distribution and the Hubble parameter. The group measured the apparent average surface mass density profiles of 33,613 isolated central galaxies, compared them to those predicted by EG based on the galaxies' baryonic masses, and found that the prediction from EG, despite requiring no free parameters, is in good agreement with the observed galaxy-galaxy lensing profiles. This suggests that a radical revision of the relationship between gravity and cosmology could be under way which will transform current attempts at unifying gravitation and quantum cosmology.

Fig 10b: Reaction profile of the protophobic-X

Protophobic X as a Possible Fifth Force: In 2016 a Hungarian team fired protons at thin targets of lithium-7, which created unstable beryllium-8 nuclei that then decayed and spat out pairs of electrons and positrons (arXiv:1504.01527). According to the standard model, physicists should see the number of observed pairs drop as the angle separating the trajectory of the electron and positron increases. But the team reported that at about 140° the number of such emissions jumps - creating a 'bump' when the number of pairs is plotted against the angle - before dropping off again at higher angles, with 6.8σ significance. This suggests that a minute fraction of the unstable beryllium-8 nuclei shed their excess energy in the form of a new particle with a mass of about 17 MeV, which then decays into an electron-positron pair. They were searching for a dark photon candidate, but subsequent papers suggest a "protophobic X boson". The theorists showed that the data didn't conflict with any previous experiments and concluded that it could be evidence for a fifth fundamental force (arXiv:1604.07411). Such a particle would carry an extremely short-range force that acts over distances only several times the width of an atomic nucleus. And where a dark photon (like a conventional photon) would couple to electrons and protons, the new boson would couple to electrons and neutrons. Experimental resolution of this anomaly should be forthcoming within a year. The DarkLight experiment at the Jefferson Laboratory is designed to search for dark photons with masses of 10–100 MeV by firing electrons at a hydrogen gas target. It will now target the 17-MeV region as a priority, and could either find the proposed particle or set stringent limits on its coupling with normal matter (doi:10.1038/nature.2016.19957). Indeed, a similar anomalous transition has now been found in helium-4 (arXiv:1910.10459) with 7.2σ significance.
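
The 17 MeV figure follows from elementary pair kinematics: neglecting the electron mass, the invariant mass of the e+e- pair is m ≈ sqrt(2 E+ E- (1 - cos θ)), where θ is the opening angle. The sketch below uses assumed, illustrative energies that roughly share the ~18 MeV beryllium-8 transition energy.

    # Illustrative reconstruction of the invariant mass of an e+e- pair from its
    # opening angle, neglecting the electron mass (m_e << E).  The energies are
    # assumed values roughly sharing the ~18 MeV beryllium-8 transition.
    import math

    def pair_invariant_mass(E_plus, E_minus, theta_deg):
        """m c^2 ~ sqrt(2 * E+ * E- * (1 - cos(theta))) for a relativistic pair (MeV)."""
        return math.sqrt(2 * E_plus * E_minus * (1 - math.cos(math.radians(theta_deg))))

    print(pair_invariant_mass(9.1, 9.1, 140))   # ~17 MeV, the reported X-boson mass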

Extended SM with Dark Matter Inducing Lepton Flavour Violation: A further theory, in principle testable at the LHC (Arcadi G et al. 2018 Lepton flavor violation induced by dark matter doi:10.1103/PhysRevD.97.075022), is one in which the standard model is extended by including dark matter leptons, enlarging the SU(2) of the SM electroweak flavours to a second SU(3) symmetry comprising an electron, an electron neutrino, and a neutral fermion later identified as dark matter. This multiplet structure is replicated among the three generations to embed the SM fermionic content. There are therefore three neutral Dirac fermions, the lightest one being a dark matter candidate, which might induce lepton flavour violation (LFV) decays μ → eγ and μ → eee as well as μ − e conversion. This arrangement may be able to show that one can have a viable dark matter candidate yielding flavour violation signatures that can be probed in upcoming experiments. Keeping the dark matter mass at the TeV scale, a sizable LFV signal is possible, while reproducing the correct dark matter relic density and meeting limits from direct-detection experiments.

Fig 11: Waxing and waning of prospects of a galactic-centre gamma-ray source from dark matter. Lower left: Dark matter illuminated: the bullet cluster, two colliding galaxy clusters 3.4 billion light-years away, whose galaxies have total mass far less than the mass of the clusters' two clouds of hot x-ray emitting gas (red). The blue hues show the distribution of dark matter in the cluster, with far more mass than the gas. Otherwise invisible, the dark matter was mapped by observations of gravitational lensing of background galaxies. Unlike the gas, the dark matter seems to have passed right through, indicating little interaction with itself or other matter, although observations of galaxy cluster Abell 3827 suggest a possible dark force interaction consistent with complex dark matter. Top and right: False colour view of excess gamma-ray emissions from the centre of our galaxy suggesting a dark matter particle with mass ranging from around 10 GeV at possible LHC energies upwards. The 1-3 GeV signal fits well with a 36-51 GeV dark matter particle annihilating to bb. However in 2018 a group of astronomers concluded that this radiation comes from 10-billion-year-old stars in the galactic bulge close to the black hole (doi:10.1038/s41550-018-0414-3). The angular distribution of the excess is approximately spherically symmetric and centered around the dynamical center of the Milky Way (within 0.05° of Sagittarius A*, the central black hole), showing no sign of elongation along the Galactic Plane, which would be expected with a pulsar distribution. The signal is observed to extend to at least 10° from the Galactic Center, disfavoring the possibility that this emission originates from millisecond pulsars. The shape of the gamma-ray spectrum from millisecond pulsars appears to be significantly softer than that of the gamma-ray excess observed from the Inner Galaxy (arXiv:1402.6703). Earthbound dark matter detectors have also caught events consistent with this mass range (doi:10.1038/521017a). However a survey of wider regions of the Milky Way also shows similar peaks, implying this is not sourced only in the galactic center (arXiv:1704.03910), shedding doubt on the notion of a central source of dark matter decay. The Alpha Magnetic Spectrometer on board the International Space Station has also detected more positrons than expected, which could be the result of dark matter being annihilated, but might also be caused by nearby pulsars. However the idea of a disk of dark matter coplanar with the disc of baryonic matter has received a blow from the lack of experimental verification in a search for the stellar kinematics of a thin disk using the Gaia satellite (arXiv:1711.03103). More recently, estimates of the gamma-ray distribution find it is closer to galactic stellar distributions than that of assumed dark matter (doi:10.1038/s41550-018-0531-z).

Gravitational Waves: Confirmation of the existence of gravitational waves came in 2016 with the detection of the 'chirp' at two widely spaced detectors (fig 12) in the ground-breaking LIGO experiment, which can detect minute gravitational fluctuations as a change in the 4 km laser mirror spacing of less than a ten-thousandth the charge diameter of a proton, equivalent to measuring the distance to Proxima Centauri with an accuracy smaller than the width of a human hair. This characteristic "chirp" signal is believed to be due to the last throes of two colliding black holes in a death spiral. An alternative explanation to a coalescing black hole is a gravastar.
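
The quoted sensitivity is easy to check as an order of magnitude, taking the proton's charge diameter as roughly 1.7 x 10^-15 m (illustrative arithmetic only).

    # Rough arithmetic behind the quoted LIGO sensitivity (illustrative numbers).
    proton_diameter = 1.7e-15          # m, approximate charge diameter of a proton
    dL = proton_diameter / 1e4         # a ten-thousandth of that
    L  = 4e3                           # m, LIGO arm length
    print(dL / L)                      # dimensionless strain h ~ 4e-23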

Since then, gravitational waves have also been detected, along with a burst of gamma rays, from a pair of colliding neutron stars (fig 12), demonstrating that both gravity and electromagnetism are transmitted at the speed of light and eliminating theories where the speed of gravity is modified to be slower or faster than light. These coincident signals have enabled a much more detailed picture of neutron stars to emerge. The collision generated gravitational waves, picked up by LIGO and Virgo, that lasted an astounding 100 seconds. Less than two seconds later, a NASA satellite recorded a burst of gamma rays. In the wake of the collision, the churning residue forged gold, silver, platinum and a smattering of other heavy elements such as uranium. Such elements' birthplaces were previously unknown, but their origins were revealed by the cataclysm's afterglow. As the collision spurted neutron-rich material into space, a bevy of heavy elements formed through a chain of reactions called the r-process, which requires an environment crammed with neutrons. Atomic nuclei rapidly gobble up neutrons and decay radioactively, thereby transforming into new elements, before resuming their neutron fest. The r-process is thought to produce about half of the elements heavier than iron (Strickland A (2017) First-seen neutron star collision creates light, gravitational waves and gold CNN; Conover E (2017) Neutron star collision showers the universe with a wealth of discoveries Science News).

Fig 12: Left: Signal of gravitational waves from LIGO believed to be from two colliding black holes in a binary system, the "chirp" coming from their increasing orbital frequency as they merge. However evidence for the significance of the gamma-ray signal at the galactic center has since diminished (arXiv:1704.03910). In October 2017, two neutron stars in a neighbouring galaxy were likewise detected to be colliding from gravitational waves detected by both LIGO and Virgo, but this time, because they were not black holes and light could escape, there was a coincident burst of gamma radiation. Such collisions are also believed to provide nearly half the heavy elements such as gold, later swept into planetary systems. Right: Light images of the radiation burst of the colliding neutron stars, coincident with a similar gravitational wave chirp, showing its change in radiation over time.

More than a week later, as those wavelengths faded away, X-rays crescendoed, followed by radio waves. That detailed picture revealed the inner workings of neutron star collisions and the source of brief blasts of high-energy light called short gamma-ray bursts. Researchers also tested the properties of the odd material within neutron stars. The neutron stars' union also gave researchers the opportunity to gauge the universe's expansion rate, by measuring the distance of the collision using gravitational waves and comparing that to how much the wavelength of light from the galaxy was stretched by the expansion. The result falls squarely between the two previous estimates of 67 and 73 km/s per megaparsec. The neutron stars, whose masses were between 1.17 and 1.60 times that of the sun, probably collapsed into a black hole, although LIGO scientists were unable to determine the stars' fate for certain. By studying how the neutron stars spiraled inward, astrophysicists also tested the rigidity of neutron star material for the first time, ruling out ultrasquishy neutron stars. The outer crust of neutron stars has thus been proposed to consist of ultra-dense, ultra-hard mountains, themselves radiating gravitational waves (doi:10.1103/PhysRevLett.102.191102), and the inner crust of ultra-strong structures having the topologies of gnocchi, spaghetti and lasagna (Science News 9-14-18). The relative strengths of the gravitational waves and gamma radiation also suggest there is no leakage of gravity into other dimensions, which has been cited as an explanation for its relative weakness (doi:10.1088/1475-7516/2018/07/048).
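
The expansion-rate estimate is simply the ratio of the host galaxy's recession velocity to the distance inferred from the gravitational waveform. The sketch below uses round numbers close to those reported for the 2017 neutron-star merger (assumed values, for illustration only).

    # Standard-siren estimate of the Hubble constant (illustrative values close to
    # those reported for the 2017 neutron-star merger): H0 ~ velocity / distance.
    v_hubble_flow = 3.0e3      # km/s, approximate recession velocity of the host galaxy
    d_luminosity  = 43.0       # Mpc, distance inferred from the gravitational waveform
    print(v_hubble_flow / d_luminosity)    # ~70 km/s per megaparsec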

The confirmation of the existence of gravitational waves has led to a surge of interest in unified theories in which gravitational waves play a formative role. In one theory, primordial chiral gravitational waves are conceived as a generator of both visible baryons and strongly interacting dark baryons, dark matter candidates whose masses would be lower than WIMPs and would not be directly detectable, but whose existence might leave evidence in the cosmic background (arXiv:1801.07255). In a second gravitational wave theory called bigravity, gravitational waves have two modes (f and g), with g being the usual mode and f a sterile massive non-interactive mode, with oscillations between them in the same manner as neutrinos (arXiv:1703.07785).

Singularities, Cosmic Censorship and Classical Predictivity.

Fig 12b: Many potential futures in a collapsing black hole as the field equations cross the Cauchy horizon (light ring). (Inset) Wormholes, in which distinct regions of space-time become connected, form another hypothetical alternative to a black hole in which there is no actual event horizon. Alternative entities to black holes lacking event horizons include boson stars (possibly consisting of axions), gravastars, fuzzballs and wormholes, which were originally theorized by Einstein and Rosen. Neutron stars could also collapse further to quark stars, strange stars, electroweak stars and Planck stars - the latter arising when the energy density of a collapsing star reaches the Planck energy density: assuming gravity and spacetime are quantized, there arises a repulsive 'force' derived from Heisenberg's uncertainty principle. The accumulation of mass-energy inside the Planck star cannot collapse beyond this limit because it would violate the uncertainty principle for spacetime.

The idea of black hole gravitational collapse leads to the notion of a singularity in space-time, which can come in two forms: space-like, where particles are crushed together, and time-like, in rotating black holes, where light rays pass through a point of infinite curvature. In the 1960s mathematicians found a physical scenario in which Einstein's field equations - which form the core of his theory of general relativity - cease to describe a predictable universe when we deal with the evolution of space-time inside a rotating black hole. Beyond the event horizon where light becomes trapped, the field equations still hold predictively, but when one passes a second threshold, the Cauchy horizon, Einstein's equations start to report that many different configurations of space-time could unfold, all of which satisfy the equations. The theory cannot tell us which option is true. Roger Penrose suggested a principle called cosmic censorship, which would mean that space-time, and with it the laws of motion, cease at the Cauchy horizon, but in 2018 (arXiv:1710.01722) it was discovered that this is not the case; rather, space-time ceases to be smooth enough to use Einstein's differential equations, so the multiple solutions do not apply. The basis of this is that the singularity is milder than Penrose's - a weak 'light-like' singularity rather than a strong 'space-like' singularity. This gives a classical view of the boundary between general relativity and the quantum world discussed in quantum gravity theories.

Gravastars: A gravastar is an object originally hypothesized as an alternative to black holes by Mazur and Mottola, resulting from assuming real, physical limitations on the formation of black holes, such as discrete length and time quanta, which were not known to exist when black holes were originally theorized. Gravastars are how a black hole becomes transformed if we define space-time as quantized, based on the Planck length. Matter does not collapse inside because quantization makes this impossible, in a manner consistent with dark energy preventing collapse. Instead of an event horizon, we have light in orbit. The notion builds on general relativity imposing a universal "smallest size", known to exist according to well-accepted quantum theory in the form of the Planck length. Quantum theory says that any scale smaller than the Planck length is unobservable and meaningless. This limit can be imposed on the wavelength of a beam of light so as to obtain a limit on the blue shift that the light can undergo. A gravitational well blue-shifts incoming light, so around the extremely large mass of a gravastar there is a region of "immeasurability" to the outside universe as the wavelength of the light crosses the Planck length. This region is a "gravitational vacuum" - a void in the fabric of space and time. The researchers suggest that the violent creation of a gravastar might be an explanation for the origin of our universe and many other universes, because all the matter from a collapsing star would implode "through" the central hole and explode into a new dimension and expand forever, which would be consistent with current theories regarding the Big Bang. This "new dimension" exerts an outward pressure on the Bose-Einstein condensate layer and prevents it from collapsing further.
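
For orientation, the Planck length invoked here is l_P = sqrt(ħG/c³) ≈ 1.6 x 10^-35 m. The sketch below computes it, and also the (purely illustrative) blueshift factor needed to squeeze a 500 nm optical wavelength below it.

    # The Planck length quoted in the gravastar argument, and the enormous,
    # purely illustrative blueshift factor needed to take visible light below it.
    import math

    hbar = 1.054571817e-34   # J s
    G    = 6.674e-11         # m^3 kg^-1 s^-2
    c    = 2.998e8           # m/s

    l_planck = math.sqrt(hbar * G / c**3)
    print(l_planck)                      # ~1.6e-35 m
    print(500e-9 / l_planck)             # blueshift factor ~3e28 for 500 nm light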

If one half of an entangled pair of particles were to cross the event horizon and disappear into the singularity while the other did not, then this entanglement would be destroyed, and that is forbidden by quantum theory. Since quantum theory is generally considered the more fundamental theory, general relativity cannot provide a true description of gravity close to a black hole, and horizons do not form. Instead, space-time undergoes a shift in its fundamental properties. Mazur's alternative arises from the fact that a superfluid can exist in a number of "phases". As the star collapses in on itself, the particles within it reach a density that matches the density of the particles that make up the condensate of superfluid space-time. At this point, the material of the star can interact with the material that makes up space-time, and the result is that the two materials undergo a phase change. Inside a spherical boundary, where conditions "go critical", the stellar matter is converted to energy, and the superfluid changes its phase, just like water turning to steam. According to Mazur and Chapline's calculations, the energy associated with this phase of the superfluid space-time has a negative pressure, which manifests as repulsive gravity.

The internal pressure might also become manifest in a big bang origin of a daughter universe (Cardoso V et al. 2016 Is the Gravitational-Wave Ringdown a Probe of the Event Horizon? Phys Rev Lett 116, 171101). Subsequent investigations of echoes of the chirp following the main pulse of the colliding black hole event originally detected by LIGO have been found to be consistent with a quantum-mechanical boundary, such as a firewall - a quantum-mechanically high-energy interface at the event horizon - or a gravastar. Some versions of string theory also suggest that black holes are 'fuzzballs' - tangled threads of energy with a fuzzy surface, in place of a sharply-defined event horizon (Merali Z 2016 doi:10.1038/nature.2016.21135; Cardoso V et al. 2016 Gravitational-wave signatures of exotic compact objects and of quantum corrections at the horizon scale Phys. Rev. D 94, 084031 doi:10.1103/PhysRevD.94.084031). Any of these quantum theories would contradict the universality of general relativity, but the lack of such structures would likewise contradict quantum theory, so this is a potential acid test of their relationship.

Bose-Einstein cosmological simulations involving ultra-cold atoms are also being used to explore cosmological questions. A Bose-Einstein condensate is formed when integer-spin atoms, which behave as bosons and thus can superimpose in the same manner as lasers do with photons, are brought together into a single superimposed quantum state.

Fig 12c: Bose-Einstein condensate mimicking the early universe.

By rapidly increasing the size of a ring-shaped cloud of atoms (doi:10.1038/d41586-018-04972-x), experimenters induced behaviour in the system that mimicked how light waves were stretched and damped as space expanded in the early Universe. Sound waves (phonons) travelling through a Bose-Einstein condensate obey the same equations that describe how light would have moved through empty space at the dawn of the Universe. The wavelength of the waves increases as the ring grows, mimicking a redshift, in which the expansion of space gradually stretches light. The intensity of the waves decreases during expansion, mirroring Hubble friction, which describes how the amplitude of light waves fell as they lost energy to the expanding space. Finally, they observed preheating at the end of inflation, when energy involved in the initial rapid expansion dissipated to create the range of particles we see today. In the ultra-cooled atoms, when expansion stopped, the waves sloshed back and forth before dissipating through a series of whirlpools into waves that travelled around the ring.

In a second experiment (doi:10.1038/536258a), simulated Hawking radiation was observed. Steinhauer created an event horizon by accelerating atoms in a Bose-Einstein condensate until some were travelling at more than 1 mm/s - a supersonic speed for the condensate. At its ultra-cold temperature, the condensate undergoes only weak quantum fluctuations that are similar to those in the vacuum of space. These should produce pairs of phonons, just as the vacuum produces photons, and the partners should separate from each other, with one partner on the supersonic side of the horizon and the other forming Hawking radiation. On one side of his acoustical event horizon, where the atoms move at supersonic speeds, phonons became trapped. And when Steinhauer took pictures of the condensate, he found correlations between the densities of atoms that were an equal distance from the event horizon but on opposite sides. This demonstrates that pairs of phonons were entangled - a sign that they originated spontaneously from the same quantum fluctuation, he says, and that the condensate was producing Hawking radiation.

Engendering Nature: Cosmic Symmetry-Breaking, Inflation and Grand Unification

At the core of the cosmic inflation concept is cosmological symmetry-breaking, in which the fundamental forces of nature, which make up the matter and radiation we relate to in the everyday world, gained the very different properties they have today from a single super-force. There are four quite different forces. The first two are well known - electromagnetism and gravity - both long-range forces we can witness as we look out at distant galaxies. The others are two short-range nuclear forces. The colour force holds together the three quarks in any neutron or proton and indirectly binds the nucleus together by the strong force, generating the energy of stars and atom bombs. The weak radioactive force is responsible for balancing the protons and neutrons in the nucleus by interconverting the flavours of quarks and leptons (p 311).

There is a fundamental 'sexual' division among the wave-particles based on their quantum spin. All particles come in one of two types:

Fermions, of half-integral spin, can only clump in complementary pairs in a single wave function and thus, being incompressible, make up matter.

Bosons, of integral spin, can become coherent and enter the same wave function in unlimited numbers, as in a laser, and hence form radiation and, as virtual particles appearing and disappearing through quantum uncertainty, the force fields which act between the particles.

Fig 13: Scalar and vector fields illustrate the classical behaviour of potential functions, electrostatic fields and fluid flows. A scalar field is a single quantity at each point, whereas a vector field in 3 dimensions has 3-dimensional vectors. Quantum fields likewise can have differing dimensions depending on their spin. Spin-0 fields have one degree of freedom and are scalar. Spin-1 fields have three degrees of freedom and are vectors. Photons, because they are massless, have lost the longitudinal mode and have only two degrees of freedom (polarisation). The one additional degree of freedom contributed by the Higgs boson gives back to the weak bosons the degree of freedom they need to be massive and have a varying velocity. Spin-1/2 fermions have two-component wave functions which turn into their negatives upon a 360 degree revolution, leading to the Pauli exclusion principle.

Spin-1 bosons, such as the photon, behave like 3-D vector fields and form the well-known fields of electricity and magnetism. Electric charge is essentially the capacity to emit and absorb virtual photons and comes in +/- attractive-repulsive forms. The photon's longitudinal field, however, is lost because it is massless, leaving only the two transverse fields defined by the polarization. Combining with a Higgs to form a heavy photon such as the Z0 particle adds this missing field.

Fig 14: Symmetry and local symmetries are believed to underlie the fundamental forces. Top left: the 60 degree rotational geometric symmetry of a snowflake, the charge symmetry of electromagnetism and the isotopic spin symmetry between a neutron and a proton illustrate symmetries in nature. Right: The electromagnetic force can be conceived of as an effect required to make the global symmetry of phase change local. A global phase shift does not alter the two-slit interference of electron waves (which usually have one light band in the centre), but a phase filter which locally shifts the phase through one slit has precisely the same effect as applying a magnetic field between the slits. The local phase shift causes the centre peak to become split in both cases. Gravitation can likewise be conceived as a symmetry of the Lorentz transformations of relativity, usually referred to as Poincaré invariance.

Spin-1/2 fermions behave very differently. They have fields with only two degrees of freedom and, unlike the photon, whose wave function becomes itself when turned through 360°, when the fermionic fields are rotated by 360° their wave function becomes its negative. Hence two particles in the same wave function, such as electrons in an atomic or molecular orbital, have to have opposite spins to remain paired, or they will fly apart. Hence fermions resist compression and form matter. Gravity behaves as stress tensors in space-time, and it is universally attractive, so its quantum fields behave as spin-2 gravitons.

We thus have another fundamental sexual complementarity manifesting as the relationship between matter and radiation. The half-integral spin of electrons was first discovered in the splitting of the spectral lines of electrons in atomic orbitals into pairs whose spin angular momentum corresponded to +/-1/2 rather than the 0, 1, 2 etc. of atomic s, p, d and f orbitals (p 318). As spin states have to differ by a multiple of Planck's constant h, a particle of spin s has 2s+1 components. A glance at the known wave-particles (p 311) indicates that the bosons and fermions we know are very different from one another in their properties and patterns of arrangement. There is no obvious way to pair off the known bosons and fermions; however, there are reasons why there may be a hidden underlying symmetry which pairs each boson with a fermion of one-half less spin, called super-symmetry, because in super-symmetric theories the infinities that plague quantum field theories cancel and vanish, the negative contributions of the fermions exactly balancing the positive contributions of the bosons. This would mean that there must be undiscovered particles. For example, corresponding to the spin-2 graviton would be a spin-3/2 gravitino, a spin-1 graviphoton, a spin-1/2 gravifermion and a spin-0 graviscalar.

Fig 15: (a) The standard model of the four fundamental forces is based on the combined SU(3) x SU(2) x U(1) symmetries of the RGB color force, the +/- electroweak force (charge and flavor) and weak hypercharge. The wave-particles are divided into two disparate groups - bosons and fermions. The fermions, which make matter, are divided between quarks, which experience all the forces including colour, and leptons, which experience only the electroweak force and gravity. The bosons, which mediate the forces, have integer spin and freely superimpose, as in lasers, and hence also make radiation. Half-integer spin fermions only superimpose in pairs of opposite spin and hence resist compression into one space, thus making solid matter. Each quark comes in three colours (RGB) and pairs of flavours (e.g. up and down with charges 2/3 and -1/3), with anti-quarks having anticolors (CMY) and anti-flavors with opposite charges. Quarks associate (a) in pairs to form mesons (e.g. π+ = u anti-d in any of the colour-anticolour combinations, π0 = u anti-u or d anti-d, π- = d anti-u, with higher mass mesons involving the heavier quarks), (b) in triplets to form baryons (e.g. p+ = uud, n = udd), (c) transient tetraquarks (e.g. udsb) and (d) transient pentaquarks (uudc anti-c). The fermions also come in three series of increasing mass. The gluons have a color-anti-color charge. (b) The forces converge at high energies. Electromagnetism is first united with the weak force, ostensibly through the spin-0 Higgs boson, then with the colour force gluons and finally with gravity. (c) Force differentiation tree, in which the four forces differentiate from a single super-force, with gravity displaying a more fundamental divergence. (d) The scalar Higgs field has lowest energy in the polarized state, resulting in electro-weak symmetry-breaking. The SU(2) x U(1) symmetry corresponds to three W bosons and one B boson, all massless. Under symmetry-breaking the last W and the B coalesce into Z0 and γ, leaving W±. (e) The stable atomic nuclei, with their increasing preponderance of neutrons, are equilibrated by the weak force. This force is chiral, engaging left-handed interactions, for example in neutron decay, as shown. Weak interactions may explain the chirality of RNA and proteins (King R372, R374). Right: Electron-positron creation illustrating the trajectories of identical mass but oppositely charged particles in a magnetic field.

Every physicist knows the approximate value (α = e²/ħc ≈ 1/137) of a fundamental constant called the fine-structure constant. This constant describes the strength of the electromagnetic force between elementary particles in the standard model of particle physics and is therefore central to the foundations of physics. For example, the binding energy of a hydrogen atom - the energy required to break apart the atom's electron and proton - is about α²/2 times the energy associated with an electron's mass. Likewise, the terms in Feynman diagrams of quantum electrodynamics decline by factors of α. Moreover, the magnetic moment of an electron is subtly larger than that expected for a charged, point-like particle, by a factor of roughly 1 + α/(2π), caused by the emission and reabsorption of virtual photons. This 'anomaly' of the magnetic moment has been verified to ever-increasing accuracy, becoming "the standard model's greatest triumph" (Müller H 2020). However a hint of further particles in the virtual milieu comes from measurements of the heavier muon, where the magnetic moment anomaly muon g-2 may indicate further virtual particles increasing the moment, including a possibly composite Higgs consisting of varying arrangements of subparticles, or a supersymmetric Higgs quartet (Castelvecchi D 2021 doi:10.1038/d41586-021-00833-2).
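
These statements are easy to check numerically (rounded CODATA-style values, for illustration only).

    # Quick checks of the fine-structure-constant quantities mentioned above.
    import math

    alpha = 1 / 137.035999      # fine-structure constant
    me_c2 = 0.51099895e6        # electron rest energy in eV

    print(alpha**2 / 2 * me_c2)      # ~13.6 eV, the hydrogen binding energy
    print(alpha / (2 * math.pi))     # ~0.00116, leading correction to the electron g-factor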

The four fundamental forces appear to converge at very great energies and to have been in a state of symmetry at the cosmic origin as a common super-force. A key process mediating the differentiation of the fundamental forces is cosmic symmetry-breaking. The short-range weak force behaves in many ways as if it is the same as electromagnetism, except that the charged W+, W- and neutral Z0 carrier particles corresponding to the electromagnetic photon are very massive. One can of course consider this division of a common super-force into distinct complementary forces as a kind of sexual division, just as the division into male and female is a primary division. In this respect gravity stands apart from the other three forces, which share a common medium of spin-1 bosons; it broke symmetry first.


Fig 16: Colour force: Top left: Mesons (quark-antiquark) and baryons (three quarks) mediate their color by exchanging gluons of appropriate color-anticolour combinations. Top centre: The electromagnetic field reduces effective charge by forming virtual electron-antielectron pairs. Top right: The colour force also does this by forming quark-antiquark pairs, but in addition the gluons have a colour charge (unlike the uncharged photon) which increases the effective charge towards infinity at great distances, while remaining relaxed at short distances (asymptotic freedom), allowing the quarks to move freely within a confined space. This phenomenon, also known as camouflage, is illustrated in the lower series of diagrams, where electromagnetism has only shielding while colour has shielding and camouflage. The effect of quark and gluon confinement is that individual particles cannot be isolated. When they are driven apart in a very energetic collision, a shower of particles results which eventually neutralizes the colour charge. In another form of asymptotic freedom, quantum electrodynamics and in particular electric charge (the gauge coupling constant) diminishes at very high energies in the presence of gravity, with the same trends expected for the weak and color forces (doi:10.1038/nature09506). Right: Internal dynamics of the proton, in which the preponderance of up quarks is compensated for by an increased incidence of down antiquarks by a factor of 1.4. This may be explained by the virtual emission and reabsorption of a meson, leading to a transient state in which the proton is a neutron (Quanta 2021).

Every proton and neutron is itself believed to consist of three subparticles called quarks as follows: n=udd, p+=uud. Neutron decay is thus actually the transformation of a down quark into an up (see figs 15, 16). The three quarks are bound together by a force, called the colour force because each quark comes in one of three colours, just as electric charges come in two types, positive and negative. Each neutron has one up and two down quarks and each proton two up and one down. To balance the charges each up must have charge 2/3 and each down -1/3. However, regardless of their up or down flavour, there is always one of each colour, so that the proton and neutron are colourless.
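
The charge bookkeeping can be written out explicitly (a trivial sketch, in units of the electron charge e).

    # Charge bookkeeping for the proton (uud) and neutron (udd), in units of e.
    from fractions import Fraction

    charge = {'u': Fraction(2, 3), 'd': Fraction(-1, 3)}

    proton  = ['u', 'u', 'd']
    neutron = ['u', 'd', 'd']
    print(sum(charge[q] for q in proton))    # 1
    print(sum(charge[q] for q in neutron))   # 0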

Recent investigations of the weak charge of the proton by firing electrons of both spins at protons suspended in liquid hydrogen, which separates the chiral effect of the weak force affecting only one of the spins, shows that the proton's weak charge of 0.0719 ± 0.0045 is in excellent agreement with the standard model and sets multi-teraelectronvolt-scale constraints on any semi-leptonic parity-violating physics not described within the standard model, and rules out leptoquark masses below 2.3 TeV (doi:10.1038/s41586-018-0096-0), illustrating how a low energy experiment can indirectly provide evidence of physics at higher energies than current particle accelerators.

lepton            | mass      | symbol | charge | quark   | mass     | symbol | charge
electron neutrino | < 16 eV   | νe     | 0      | up      | 2.3 MeV  | u u u  | 2/3
electron          | 0.5 MeV   | e      | -1     | down    | 4.8 MeV  | d d d  | -1/3
muon neutrino     | < 65 eV   | νμ     | 0      | charm   | 1275 MeV | c c c  | 2/3
muon              | 106.6 MeV | μ      | -1     | strange | 95 MeV   | s s s  | -1/3
tau neutrino      | < 65 eV   | ντ     | 0      | truth   | 173 GeV  | t t t  | 2/3
tau               | 1784 MeV  | τ      | -1     | beauty  | 4180 MeV | b b b  | -1/3
Table 1: Fermion menagerie

A second, quite different force, the weak nuclear force, is responsible for radioactive decay. If a nucleus has too many neutrons, one neutron can decay into a proton, an electron and an antineutrino (fig 15e). This reaction and its reverse act to keep the balance of protons and neutrons, which is roughly 50:50 to keep each nuclear particle in the lowest possible energy state under the strong force, but becomes biased toward neutrons in heavier elements (fig 15e) because of the instability caused by the accumulated repulsive positive charges of the protons. Significantly, the reaction does not preserve mirror symmetry, as it gives rise only to left-handed electrons, the anti-neutrino involved in beta decay having right-handed helicity.

Fig 17: CP Violation: (a) Decay of the K0 meson is a parallel to photon polarization. The decay of the K1 component (see below) is similar to vertical polarization removing the horizontal component from circularly polarized light. However there is a small amplitude for the K2 to go into resonance back into the K1 form, just as dextrose rotates the polarization of light, allowing it to subsequently decay again, similarly to detecting horizontal polarization in the rotated light. Lower left: Feynman diagram for quark flavour mixing. Just as classical chirality requires three dimensions, CP-violation of the K0 requires at least three families of fermions. Investigations of the B meson containing a b (beauty) quark indicate flavour mixing, suggesting a fourth family is possible. There can be no more than four or the extra neutrino types would cause an unrealistic expansion rate of the universe. (b) T-reversal violation experiment on the B meson. When one meson decays at time t1, the identity of the other is "tagged" but not measured specifically. In the top panel, the tagged meson is a B0-bar (the antiparticle). This surviving meson decays later at t2, encapsulating a time-ordered event, which in this case corresponds to B0-bar -> B-. To study time reversal, the BaBar collaboration compared the rates of decay in one set of events to the rates in the time-reversed pair. In the present case, these would be the B- -> B0-bar events, shown in the bottom panel (Zeller 2012). (c) Interactions in the decay of the Λ0b baryon.

The weak force is known to be chiral, but the asymmetry of nature runs even deeper. In 1964 the principle of CP (charge-parity) conservation was overthrown by the neutral K0 meson. CP violation is accommodated in the standard model (SM) of particle physics by the Cabibbo-Kobayashi-Maskawa (CKM) mechanism that describes the transitions between up- and down-type quarks, in which quark decays proceed by the emission of a virtual W boson and where the phases of the couplings change sign between quarks and antiquarks. The neutral K0 usually decays into 3 π-mesons, but once in 500 times is found, after a strange delay, to decay into only two. The neutral K0 meson and its antiparticle are both superpositions of a pair of CP eigenstates, K1 and K2. The rapid decay of the K1 component into two π-mesons subsequently leaves the remaining K2 component, which does not follow the same decay. However there is subsequently a small amplitude for conversion of some of the K2 back to K1, resulting in a KL which is not matter-antimatter symmetric, since it contains differing components of K0 and its anti-particle. Thus the reaction is preferred over its mirror-image. Since the K0 has quark constituents (d, anti-s) and its anti-particle (anti-d, s), this implies that the reaction should be directed in time. Similar considerations are used to explain the preponderance of matter over anti-matter. It is suggested that the one part in 10^8 of matter to radiation could have come from a similar process resulting in a slight differential in the stability of matter and anti-matter with respect to time. Potential confirmation of baryonic CP violation, which would be pivotal for matter - anti-matter asymmetry, has also been found in LHC studies of Λ0b baryons decaying to pπ-π+π- and pπ-K+K- final states, with the former having a 3.3 sigma significance of around 1 in 1000 due to chance alone (Nature Physics 2017 doi:10.1038/nphys4021).

An even more glaring symmetry violation has been discovered in the B meson, which indicates a direct violation of time reversal, as shown in fig 17(b). T-violation can be inferred from CP violation by applying the CPT theorem, which states that all local Lorentz-invariant quantum field theories are invariant under the simultaneous operation of charge conjugation, parity reversal, and time reversal, but in the B-meson experiment (Lees et al. 2012), T-violation was detected directly. The experiment takes advantage of entangled B0 and B0-bar mesons in the Y(4s) resonance produced in positron-electron collisions at SLAC. This allows measurement of an asymmetry that can only come about through a T inversion, and not by a CP transformation. Each of the entangled B0 and B0-bar mesons resulting from the Y(4s) can decay into either a CP eigenstate, or a state that identifies the flavour of the meson. To study T inversion, the experimenters selected events where one meson decayed into a flavour state and the other decayed into a CP eigenstate. The time between these two decays was measured, and the rate of decay of the second with respect to the first was determined. After detecting and identifying the mesons, the experimenters determined the proper time difference between the decay of the two B states by determining the energy of each meson and measuring the separation of the two meson decay vertices along the e+ - e- beam axis. When time-reversed pairs were compared, the BaBar collaboration found discrepancies in the decay rates. The asymmetry, which could only come from a T transformation and not a CP violation, was significant, being fourteen standard deviations away from time invariance (Zeller 2012).

CP-violation has also been detected at the LHC in the D0 meson, which consists of a charm and anti-up quark pair. The D0 and the anti-D0 do not decay at the same rate: the ratios of decay differed by a tenth of a percent (https://cds.cern.ch/record/2668357/files/LHCb-PAPER-2019-006.pdf).

Although all three of these types of CP violation are too tiny to account for our matter-dominated universe, scientists are holding out hope of finding much larger matter-antimatter differences elsewhere, such as in neutrinos or reactions involving the Higgs boson.

This has since led to a theory of the arrow of time based on the existence of T-violating quantum interactions. Despite the Lorentz transformations of special relativity connecting space and time, in conventional quantum theory states are presumed to undergo continuous translation over time. There is thus a fundamental difference between space and time, in that quanta can be confined in space but have to persist in time to avoid non-conservation of mass-energy. In the theory, separate wave equations are established which would allow quanta to be located in time in the way they are in space. In the words of the researcher Joan Vaccaro (2016): "If T symmetry is obeyed, then the formalism treats time and space symmetrically such that states of matter are localized both in space and in time. In this case, equations of motion and conservation laws are undefined or inapplicable. However, if T symmetry is violated, then the same sum over paths formalism yields states that are localized in space and distributed without bound over time, creating an asymmetry between time and space. Moreover, the states satisfy an equation of motion (the Schrodinger equation) and conservation laws apply. The Schrodinger equation of conventional quantum mechanics, where time is reduced to a classical parameter, emerges as a result of coarse graining over time".

Fig 18: (a) Oscillation of an electron neutrino into the other two known types, muon and tauon, over distance. The key to understanding the oscillation phenomenon is that electron neutrinos do not have a definite mass: they are a superposition of the three neutrino mass eigenstates. The neutrino mass matrix is not diagonal in the flavor (electron-muon-tau) basis. Therefore, the wave equation that describes how they move through space will mix them up and therefore they 'oscillate'. (b) An experiment using neutrino oscillations to verify quantum 'entanglement' in terms of the violation of the constraints imposed by local causality over the longest distance ever - 735 km - using the Leggett-Garg inequality, a variant of Bell's inequalities that works over distinct times, or energies in the case of this neutrino experiment. Quantum and classical theoretical predictions are in blue and red and the experimental result is in black (arXiv:1602.00041). Instead of a single evolving system at different times, we can use an ensemble with 'stationarity', the correlations depending only on the time differences. One can then perform measurements on distinct members of an identically prepared ensemble, each of which begins in some known initial state. The combination of the prepared and stationarity conditions acts as a substitute for non-invasive (weak) quantum measurements, because wave function collapse and classical disturbance in a given system do not influence previous or subsequent measurements on distinct members of the ensemble. Energy can be used as a proxy for time because the energy of a neutrino determines its unitary time evolution. (c) Results from the T2K and NOvA experiments suggest the rate of oscillation may differ between neutrinos and anti-neutrinos, resulting in a symmetry violation. Heavy neutrinos and their anti-particles could also have had different decay profiles, potentially explaining the preponderance of matter over anti-matter in the universe.

Although the standard model gives zero rest mass for the neutrino, neutrinos are now known to have a small mass (currently less than 0.8 eV from the KATRIN experiment), making them current focal candidates for exploring beyond the standard model. This is consistent with the idea that the neutrino types are able to interconvert by a resonance, or oscillation, similar to that of the K0 meson. This explains the small observed flux of electron neutrinos from the sun, which is only about 1/3 what it should be for the nuclear energy required to keep it at its current luminosity.

Weak interactions create neutrinos in one of three leptonic flavours: electron neutrinos (νe), muon neutrinos (νμ), or tau neutrinos (ντ), in association with the corresponding electron, muon, and tau charged leptons respectively. It is now known that there are three discrete neutrino masses, and each neutrino flavour state is a linear combination of the three discrete mass eigenstates. From cosmological measurements, it has been calculated that the sum of the three neutrino masses must be less than one millionth that of the electron. A neutrino created in a specific flavour eigenstate is in an associated specific quantum superposition of all three mass eigenstates. Researchers (doi:10.1103/PhysRevLett.123.081301) combined data from 1.1 million galaxies in the Baryon Oscillation Spectroscopic Survey (BOSS), used to measure the rate of expansion of the Universe, with constraints from particle accelerator experiments, nuclear reactors and a variety of other sources, including space- and ground-based telescopes observing the first light of the Universe (the CMB radiation), exploding stars and the largest 3D map of galaxies in the Universe. They found the maximum possible mass of the lightest neutrino to be 0.086 eV, and that the three neutrino flavours together have an upper bound of 0.26 eV, compared with the mass of the electron of 0.511 MeV.

In the early universe there was a sea of protons and neutrons constantly interacting with electrons, neutrinos of every type and their anti-particles through weak interactions. Because neutrons are slightly more massive (939.5 MeV) than the proton (938.2 MeV), there are fewer of them. As the expansion dilutes and cools these particles, the weak interactions cease, leaving about a 1:5 n:p ratio at 1 second. The neutrons then begin to decay with a mean lifetime of about 15 minutes (see fig 15). After 3 minutes, deuterium (n + p+ + e-) becomes stable and is rapidly converted to helium. At this point neutron decay has reduced the n:p ratio to 1:8. The surviving neutrons bind an equal number of protons into helium, leaving roughly a 1:4 ratio of helium to hydrogen by mass. More families of neutrinos than four would cause a faster expansion rate, and the faster reaction would produce more helium than observed. Experiments on supernovae limit the electron neutrino mass to less than 16 eV. All neutrinos must have a mass less than 65 eV or the universe would be closed and collapse, and moreover the expansion rate would be slower than observed. Recent evidence from the Planck survey indicates that the summed masses of the three neutrinos must be less than 0.21 eV. There are even more neutrinos than photons, several billion for every proton, electron and neutron.
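
Assuming essentially all surviving neutrons end up bound in helium-4, the helium mass fraction follows directly from the neutron-to-proton ratio, Y = 2(n/p)/(1 + n/p), a simple check of the figures above.

    # Helium mass fraction implied by a given neutron-to-proton ratio, assuming all
    # surviving neutrons end up bound into helium-4: Y = 2(n/p) / (1 + n/p).
    def helium_mass_fraction(n_to_p):
        return 2 * n_to_p / (1 + n_to_p)

    print(helium_mass_fraction(1/8))    # ~0.22, close to the observed ~24% helium by mass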

It is also unknown whether neutrinos are their own anti-particle and are thus Majorana fermions, which would have real wave functions and behave differently from other (Dirac) fermions which have distinct anti-particles. The concept goes back to Majorana's suggestion in 1937 that neutral spin-1/2 particles can be described by a real wave equation, in contrast to the complex wave functions of our known Dirac fermions such as the electron. Majorana fermions would therefore be identical to their antiparticle (because the wave functions of particle and antiparticle are related by complex conjugation).

The only CP violation observed so far is in the weak interactions of quarks, and it is too small to explain the matter-antimatter imbalance of the universe. It has been shown that CP violation in the lepton sector could generate the matter-antimatter disparity through leptogenesis. Collected over nearly a decade, the data suggest that neutrinos oscillated more than expected, while antineutrinos oscillated less than expected - a sign of CP violation. The T2K (Tokai to Kamioka, see t2k-experiment.org) and NOvA experiments suggest that there is a difference in the oscillation rates of muon and anti-muon neutrinos. To test for CP violation in neutrinos, T2K researchers sent beams composed of neutrinos or antineutrinos on a 300-km trek across Japan. The beams initially consist of muon neutrinos or muon antineutrinos, and the researchers counted how often the particles converted into electron neutrinos or electron antineutrinos. In 2016, 32 muon neutrinos changed to electron neutrinos on their way to Super-K, while of the muon antineutrinos sent, only four became electron antineutrinos. In the 2017 T2K results, more electron neutrinos were produced than expected (89 rather than 67, a ratio of 1.33) and fewer electron anti-neutrinos than expected (7 rather than 9, a ratio of 0.78). A preliminary analysis of T2K's data rejects the hypothesis that neutrinos and antineutrinos oscillate with the same probability at the 95% confidence (2σ) level. This could help explain the preponderance of matter over anti-matter in the universe, which the kaon and B meson CP violations are insufficient to account for.

For the first time, the researchers are beginning to narrow down the potential values of the complex phase δCP, concluding with a significance at the 99.7% confidence (3σ) level that the results show an indication of CP violation in the lepton sector. Future measurements with larger data samples will determine whether the leptonic CP violation is larger than the quark-sector CP violation (K. Abe et al. Constraint on the matter-antimatter symmetry-violating phase in neutrino oscillations. arXiv:1910.03887, Nature doi:10.1038/s41586-020-2177-0).

The three neutrino states that interact with the charged leptons in weak interactions are each a different superposition of the three neutrino states of definite mass. Neutrinos are created in weak processes in one of the three flavours. As a neutrino propagates through space, the quantum mechanical phases of the three mass states advance at slightly different rates due to the slight differences in the neutrino masses. This results in a changing mixture of mass states as the neutrino travels, but a different mixture of mass states corresponds to a different mixture of flavour states. So a neutrino born as, say, an electron neutrino will be some mixture of electron, mu, and tau neutrino after traveling some distance. This shape-shifting ability is measured by three mixing angles, θ12, θ23 and θ13, which determine the periodicity of each shape shift.
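
In the commonly used two-flavour approximation, the oscillation probability is P = sin²(2θ) sin²(1.267 Δm²[eV²] L[km] / E[GeV]). The sketch below plugs in illustrative numbers near the T2K baseline and beam energy (assumed values, not the experiment's full three-flavour fit).

    # Two-flavour approximation to the neutrino oscillation probability:
    # P(nu_a -> nu_b) = sin^2(2*theta) * sin^2(1.267 * dm2[eV^2] * L[km] / E[GeV]).
    import math

    def osc_prob(theta, dm2_eV2, L_km, E_GeV):
        return math.sin(2 * theta)**2 * math.sin(1.267 * dm2_eV2 * L_km / E_GeV)**2

    theta23 = math.radians(45)   # assumed near-maximal atmospheric mixing angle
    dm2     = 2.5e-3             # eV^2, approximate atmospheric mass-squared splitting
    print(osc_prob(theta23, dm2, L_km=295, E_GeV=0.6))   # ~1: near-maximal muon-neutrino disappearance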

In the Standard Model of particle physics, fermions have mass only because of interactions with the Higgs field. These interactions involve both left- and right-handed versions of the fermion. However, only left-handed neutrinos (with left helicities - spins antiparallel to momenta) have been observed so far, with anti-neutrinos being right-handed. Attempts to verify whether neutrinos are Majorana particles depend on investigating the existence of neutrinoless double beta decay (right), where a nucleus, e.g. of germanium, simultaneously converts two neutrons to protons, releasing a pair of electrons carrying the nuclear energy difference but no neutrinos, because the neutrino emitted by one neutron is absorbed as an anti-neutrino by the other rather than being emitted as a second neutrino. Limits on the half-life, which constrain the production rate (e.g. from GERDA), are now over 2x10^25 years.

Neutrinos may have another source of mass through the Majorana mass term. The see-saw mechanism involves a model of physics in which right-handed neutrinos with very large Majorana masses are included. If the right-handed neutrinos are very heavy, they induce a very small mass for the left-handed neutrinos, which is proportional to the inverse of the heavy mass. The actual mass of the right-handed neutrinos is unknown and could have any value between 10^15 GeV and less than 1 eV. These neutrinos are called "sterile" as they would interact only with other neutrinos or via gravity and thus could be candidates for dark matter or the "dark radiation" connecting other dark matter particles, possibly of right-handed chirality as noted. The number of sterile neutrino types is undetermined, in contrast to the number of active neutrino types, which has to equal that of the charged leptons and quark generations to ensure the anomaly freedom of the electroweak interaction.
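
The see-saw scaling m_light ≈ m_D²/M_R can be illustrated with assumed values: a Dirac mass near the electroweak scale and a right-handed mass near the grand-unification scale give a light neutrino mass of order 10^-2 eV.

    # See-saw estimate: a Dirac mass at the electroweak scale and a very heavy
    # right-handed Majorana mass give m_light ~ m_D^2 / M_R (all masses in eV).
    def seesaw_light_mass(m_dirac_eV, M_heavy_eV):
        return m_dirac_eV**2 / M_heavy_eV

    m_D = 100e9        # eV, an assumed Dirac mass near the electroweak scale (~100 GeV)
    M_R = 1e15 * 1e9   # eV, an assumed right-handed mass of 10^15 GeV
    print(seesaw_light_mass(m_D, M_R))   # ~0.01 eV, in the range suggested by oscillation data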

Sterile neutrinos have gained interest as a possible explanation for the excess of matter in the universe - that in the first microseconds after the big bang, the young, hot universe contained extremely heavy, unstable sterile neutrinos that soon decayed, some into leptons and the remainder into their antimatter counterparts, but at unequal rates. They would become heavy and the other neutrinos very light by the see-saw mechanism. The slight excess would then become the matter after mutual annihilation of the majority. This would require sterile neutrinos to be Majorana fermions.

During the early universe when particle concentrations and temperatures were high, neutrino oscillations can behave differently. Depending on neutrino mixing-angle parameters and masses, a broad spectrum of behavior may arise including vacuum-like neutrino oscillations, smooth evolution, or self-maintained coherence. The physics for this system is non-trivial and involves neutrino oscillations in a dense neutrino gas.

Evidence of the degree of cosmic clumping and of the combined neutrino masses (above) from Planck may make their existence less likely (doi:10.1038/nature.2014.16462). On the other hand, best estimates of the neutrino number from Planck and WMAP are around 3.3 (Olive et al., Chin. Phys. C, 38, 090001), leaving some room for a neutrino contribution to dark radiation (arXiv:1109.2767). Although some experiments have found no evidence for sterile neutrinos in particle decays, several experiments, including the Liquid Scintillator Neutrino Detector (LSND) at Los Alamos in the 1990s and its successor MiniBooNE, running over 15 years to 2018, which fire muon neutrinos at an oil-filled detector and count the electron neutrinos resulting from neutrino oscillations, have both noted anomalies of a few hundred extra electron neutrinos - more electron neutrinos than are consistent with direct oscillation from the beam's muon neutrinos alone - suggesting a sterile intermediate upping the conversion rate.

There is also a deficit of neutrinos from nuclear reactors, where fission products of atoms such as U235 emit anti-neutrinos, for example in the repeated beta-decay of krypton-89 through rubidium-89, strontium-89 and yttrium-89, suggesting a route to an undetectable form. However this discrepancy was found in 2017 to fluctuate with the U235 content of a nuclear reactor, as opposed to other nuclei such as U238 and plutonium (arXiv:1704.01082). A shortfall fluctuating with U235 content is not consistent with a resonance mechanism converting neutrino types, which should depend only on the neutrino flux.

The discovery in 2016 that the current local rate of cosmic expansion is 9% faster than previously thought tests the consistency of dark energy models and could be explained by the existence of a sterile neutrino (Sokol 2016). However, at about 1 eV, the hypothetical mass of the sterile neutrino from MiniBooNE would be too small to function as a dark matter candidate. Alternatively, an axion-like light particle that interacted with quarks in the very early universe could also explain why the cosmic abundance of lithium is lower than predicted levels (Goudelis et al. 2016).

However, the failure to detect the sterile neutrino has now led most researchers to abandon the idea as it stands, and to the suggestion that there could be a more complex set of dark-sector neutrinos (Lewton 2021). An analysis in which sterile neutrinos can decay into other, invisible particles actually favours the sterile neutrino's existence (Moulai 2021). Analyses that consider all neutrino oscillation experiments together also find support for decaying sterile neutrinos (Diaz et al. 2019). A dark sector model has been proposed (Vergani et al. 2021) that includes three heavy neutrinos of different masses. Their model accounts for the LSND and MiniBooNE data through a concoction of both a heavy neutrino decaying and lightweight ones oscillating; it also leaves room to explain the origin of neutrino mass, the universe's matter-antimatter asymmetry through the seesaw mechanism, and dark matter.

Fig 19: LHC and Higgs manifestations. (a): Generation of the Higgs by four pathways; q and g are the quarks and gluons making up the colliding protons. Bremsstrahlung is "braking radiation" caused by particles glancing off one another. (b) Two pathways of Higgs decay into photons, or Z0 bosons. L are leptons. (c) Anomalies in the decays of the Higgs hint at effects beyond the standard model. (d) LHC Atlas Higgs decay. The observed mass of the Higgs at 125 GeV has been claimed to be consistent with technicolour, an older theory extending the standard model with a fifth force, implying the Higgs could be a composite of 'techniquarks' (doi:10.1103/PhysRevD.90.035012). Both the Higgs and the B-meson (ArXiv:1506.08614) have shown some anomalies in the way they decay, with biases in how often they decay into taus, muons or electrons, which may indicate a second Higgs or other force or particles appearing, but so far no evidence of supersymmetry. B-meson decay could be modified by virtual particles of greater masses than attainable in the LHC, including either a Z' some 30 times heavier or a technicolor leptoquark (doi:10.1038/nature21721).

Technicolor is a force field invoked to explain the hidden mechanism of electroweak symmetry-breaking. The mechanism for the breaking of electroweak gauge symmetry remains unknown. The breaking must be spontaneous, meaning that the underlying theory manifests the symmetry exactly, but the solutions (ground and excited states) do not, and the W and Z bosons become massive and also acquire an extra polarization state. Despite the agreement of the electroweak theory with experiment at energies accessible so far, the process causing the symmetry breaking remains hidden. The simplest mechanism of electroweak symmetry breaking introduces a single complex field and predicts the existence of the Higgs boson. Typically, the Higgs boson is "unnatural" in the sense that quantum mechanical fluctuations produce corrections to its mass that lift it to such high values that it cannot play the role for which it was introduced. Unless the Standard Model breaks down at energies less than a few TeV, the Higgs mass can be kept small only by a delicate fine-tuning of parameters. Technicolor avoids this problem by hypothesizing a new interaction coupled to new massless fermions. This interaction is asymptotically free at very high energies and becomes strong and confining as the energy decreases to the electroweak scale of 246 GeV. These strong forces spontaneously break the massless fermions' chiral symmetries, some of which are weakly gauged as part of the Standard Model. This is the dynamical version of the Higgs mechanism. The electroweak gauge symmetry is thus broken, producing masses for the W and Z bosons. The new strong interaction leads to a host of new composite, short-lived particles at energies accessible at the Large Hadron Collider (LHC). This framework is natural because there are no elementary Higgs bosons and, hence, no fine-tuning of parameters.

The Higgs Particle, Symmetry-breaking and Cosmic Inflation: A key explanation for symmetry-breaking is that originally all the particles had zero rest mass like the photon, but some of the boson force carriers like the W changed to mediate a short-range force by becoming massive and gaining an extra degree of freedom (the freedom to change speed) by picking up an additional spin-0 particle called a Higgs boson. The elusive Higgs, which has now been discovered at the LHC, may also explain why the universe flew apart. The universe begins at a temperature a little below the unification temperature - slightly supercooled, possibly even as a result of a quantum fluctuation. In the early symmetric universe, empty space is forced into a higher-energy arrangement than its temperature can support, called the false vacuum.

The result is a tremendous energy of the Higgs field, or rather the 'inflaton' field, as the energy is ascribed to another elusive force. This behaves as a super anti-gravity, exponentially decreasing the universe's curvature, inflating the universe in 10^-35 of a second to something already comparable to its present size. This inflationary phase is broken once the Higgs field collapses, breaking symmetry to a lower-energy polarized state, rather like a ferromagnet does, to create the asymmetric force arrangement we experience, forming the true vacuum. In this process the Higgs particles, which are zero spin and have one wave function component, unite with some of the particles, such as W+/- and Z0, to give them non-zero rest mass by adding their extra component, allowing the additional longitudinal component of the wave function associated with a varying velocity.

Because the true vacuum is at a lower energy than the false one, it grows to engulf it, releasing the latent heat of this energy difference as a shower of hot particles - the hot fireball we associate with the big bang. Normal gravity has now become the attractive force we are familiar with. The reversal of the sign of gravity means that the potential energy is now reversed, so that it adds to the large kinetic energy of the universe flying apart. Two energies which cancelled now become two which add - an insignificant universe, almost nothing, becomes one of almost incalculable proportions. The end result is a universe flying apart at almost exactly its own escape velocity, whose kinetic energy almost balances the potential energy of gravitation. Symmetry-breaking can leave behind defects if the true vacuum emerges in a series of local bubbles which join. Depending on whether the symmetries which are broken are discrete, circular, or spherical, corresponding anomalies in the form of domain walls, cosmic strings or magnetic monopoles may form. In some models, cosmic inflation has a fractal branched structure, like a snowflake, which is perpetually leaving behind mature universes like ours.
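The statement that the universe flies apart at almost exactly its own escape velocity can be made quantitative: an escape-velocity universe has the critical density ρc = 3H^2/8πG. A minimal sketch, assuming an illustrative round value of 70 km/s/Mpc for the present Hubble constant:

```python
import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.086e22        # metres per megaparsec
m_proton = 1.673e-27  # kg

H0 = 70e3 / Mpc       # assumed Hubble constant, 70 km/s/Mpc, in 1/s

# Critical density: the density at which the expansion is exactly at
# escape velocity, i.e. kinetic and gravitational potential energy balance.
rho_crit = 3 * H0 ** 2 / (8 * math.pi * G)

print(f"critical density ~ {rho_crit:.1e} kg/m^3")
print(f"  ~ {rho_crit / m_proton:.1f} proton masses per cubic metre")
```

With this input the result is of order 10^-26 kg/m^3, roughly five or six proton masses per cubic metre; the observed total density of the universe lies very close to this critical value, which is the sense in which it expands at almost exactly escape velocity.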

Fig 19b: Left and centre: LHC collision illustrating the production of a Higgs particle and a top quark-antiquark pair, again confirming the standard model predictions despite the very high masses of the particles involved (doi:10.1103/PhysRevLett.120.231801). This comes on top of the discovery of the Higgs decaying into a pair of W bosons (inset top right), leaving very little room for any deviations from the standard model to give rise to particles explaining dark matter. Right: Decay of the Higgs into a pair of bottom quarks (CMS/CERN).

Leptoquarks are hypothetical bosons that carry information between quarks and leptons of a given generation, allowing quarks and leptons to interact. They are color-triplet bosons that carry both lepton and baryon numbers. They are encountered in various extensions of the Standard Model, such as technicolor theories or GUTs based on the Pati-Salam model, SU(5), E6, etc. Their quantum numbers, such as spin, (fractional) electric charge and weak isospin, vary among theories. According to the Standard Model, a B+ meson should decay to a kaon, electron and positron as often as it decays to a kaon, muon and antimuon - a situation known as lepton universality. If a measurement of both rates shows a difference between them, it could be the first sign of something new beyond the standard model. According to the LHCb collisions, B+ mesons decay to muons about 25% less often than they decay to electrons. The observed difference has a significance of 2.6 standard deviations, corresponding to a chance of about one in a hundred that it is due to a statistical fluctuation. Flavour-changing neutral current decays, whereby a quark changes its flavour without altering its electric charge, are also a key way to test departures from the standard model. One example of such a transition is the decay of a beauty quark into a strange quark.
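A back-of-envelope sketch of the lepton-universality test: the ratio RK of muonic to electronic decay rates is predicted to be essentially 1 in the Standard Model, and a roughly 25% deficit with uncertainties of the size quoted corresponds to about 2.6 standard deviations. The numbers below are illustrative values consistent with the figures in the text, not the official measurement table.

```python
import math

R_K_SM = 1.0       # Standard Model expectation for the muon/electron ratio

# Illustrative measurement: ~25% deficit, with statistical and systematic
# uncertainties of the size that yields the ~2.6 sigma quoted in the text.
R_K_meas = 0.75
sigma_stat = 0.090
sigma_syst = 0.036
sigma_tot = math.hypot(sigma_stat, sigma_syst)   # combined in quadrature

deviation = (R_K_SM - R_K_meas) / sigma_tot
print(f"R_K = {R_K_meas} +/- {sigma_tot:.3f}")
print(f"deviation from the Standard Model: {deviation:.1f} sigma")
```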

Fig 20: (a) Hypothetical leptoquark interaction complementing the weak force. (b) Hypothetical leptoquark in proton decay. (c) Candidate event at CMS. Leptoquarks could be produced in pairs, and each would decay into a lepton (such as an electron) and a quark (which becomes a jet). (d) Feynman diagrams for hypothetical gluon-leptoquark interactions. (e) Leptoquark pair formation. These interactions are distinct from those of the hypothetical hyper-weak force previously conceived to transform quarks into leptons.

The deviations discovered with respect to the standard model predictions could be explained by the existence of a new particle, not predicted by the standard model, whose contribution to the decay amplitude destructively interferes with the standard model diagram. One option is an ultra-heavy Z' version of the weak Z0 boson; another is a leptoquark. Here, typical solutions require a leptoquark that couples more strongly to the second and third generations than to the first. This hierarchical flavour structure can naturally explain the lepton non-universality ratio Rk, and could be related to other hints of lepton non-universality seen in other b-hadron decays. Some models predict a leptoquark mass of around 1 TeV, which with some luck could be directly observed at the LHC. The LHeC proposal to add an electron ring colliding bunches with the existing LHC proton ring is one project to look for higher-generation leptoquarks. The beauty meson decay anomaly as of October 2018 had a significance of 3.4 sigma, or 99.97 percent. When the modelling includes long-distance interactions of the disintegration products, according to the Standard Model, the confidence goes up to 6.1 sigma.

Virtual particle methods: At the opposite extreme to the LHC, virtual particles emitted by some of our most familiar particles could leave an impression of force unification in their virtual exchanges. This alternative to using particle accelerators to prise out new exotic particles extending the standard model is to look for subtle effects in some of the most fundamental virtual particle interactions, such as those emitted and absorbed by the electron as a function of its charge and the electric field this generates. The standard model predicts an almost vanishing electric dipole moment for the electron, but other models suggest a larger value supported by the creation and absorption of exotic virtual particles outside the standard model, all of which should be allowed by the uncertainty principle. Experiments are underway to try to gain an accurate estimate of this value. The standard model predicts a value less than 10^-38 e-cm, but if the neutrino is a Majorana particle (its own anti-particle) it could rise to 10^-33. Various technicolor models predict a value between 10^-29 and 10^-27, and supersymmetric models a value greater than 10^-26. Current experimental limits are below 8.7 x 10^-29, with several groups working on refining the value, the latest of which is down to (4.3 ±3.1stat ±2.6syst) x 10^-30 e-cm (doi:10.1038/s41586-018-0599-8). This result implies that a broad class of conjectured particles, if they exist and time-reversal symmetry is maximally violated, have masses that greatly exceed what can be measured directly at the Large Hadron Collider.
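Because the competing models differ by many orders of magnitude in their predicted electron electric dipole moment, simply comparing them against the experimental bound is already informative; a minimal comparison sketch using the values cited in the text (whether a model is genuinely excluded also depends on its unknown phases and masses):

```python
# Predicted electron EDM scales (e-cm) quoted in the text, versus the
# experimental upper bound of ~8.7e-29 e-cm.
predictions = {
    "Standard Model":             1e-38,
    "Majorana-neutrino scenario": 1e-33,
    "Technicolor (upper range)":  1e-27,
    "Supersymmetric models":      1e-26,
}
experimental_limit = 8.7e-29   # e-cm

for model, edm in sorted(predictions.items(), key=lambda kv: kv[1]):
    status = ("well below the current bound" if edm < experimental_limit
              else "in tension with the bound at face value")
    print(f"{model:28s} ~{edm:.0e} e-cm -> {status}")
```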

A second series of experiments involves magnetic precession of the muon in a field perpendicular to the orientation of its magnetization. Standard theory predicts that in a magnetic field a muon's magnetism should precess at the same rate as the particle itself circulates, so that if it starts out polarized in the direction it's flying, it will remain locked that way throughout its orbit. Thanks to quantum uncertainty, however, the muon continually emits and reabsorbs other particles. That haze of particles popping in and out of existence increases the muon's magnetism and makes it precess slightly faster than it circulates. Because the muon can emit and reabsorb any virtual particle within uncertainty, its magnetism, like the electron experiments above, tallies all possible particles - even new ones too massive for the LHC to make. Researchers in the g-2 experiment at Brookhaven National Laboratory tested this by injecting muons into a ring-shaped vacuum chamber sandwiched between superconducting magnets. Over hundreds of microseconds, the positively charged muons decay into positrons, which tend to be emitted in the direction of the muons' polarization. Physicists can track the muons' precession by detecting the positrons. The experiment, which detected an anomaly at 3.5σ, has since been moved to Fermilab, where an expanded, more sensitive program is underway.
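The quantity tracked in g-2 is the anomalous precession frequency ωa = aμeB/mμ, the small difference between the spin precession and the cyclotron rotation. A minimal sketch, assuming a storage-ring field of roughly 1.45 T and the approximate value aμ ≈ 1.166 x 10^-3 (illustrative round figures):

```python
import math

e = 1.602e-19      # elementary charge, C
m_mu = 1.883e-28   # muon mass, kg
B = 1.45           # approximate storage-ring field, T (assumed here)
a_mu = 1.166e-3    # muon anomaly (g-2)/2, approximate
gamma = 29.3       # 'magic' Lorentz factor used in the experiment

# Anomalous precession: how much faster the spin turns than the momentum.
# It is independent of gamma; the electric focusing field's extra term
# cancels at the magic momentum, which is why that gamma is chosen.
omega_a = a_mu * e * B / m_mu            # rad/s
f_a = omega_a / (2 * math.pi)

# Cyclotron (orbital) frequency of the relativistic muons, for comparison.
f_c = e * B / (gamma * m_mu) / (2 * math.pi)

print(f"anomalous precession f_a ~ {f_a / 1e3:.0f} kHz")
print(f"cyclotron frequency  f_c ~ {f_c / 1e6:.1f} MHz")
```

The anomalous precession thus runs at a few hundred kilohertz against an orbital frequency of a few megahertz, which is why the spin direction slowly slips ahead of the momentum over hundreds of microseconds.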


Fig 21: Left (a) A photon may undergo mixing to a pseudoscalar particle - an axion - in an external magnetic field. (b) The magnetic field is a pseudovector field because, when one axis is reflected, as shown, reversing parity, the magnetic field is not reflected but reversed, because the currents are reversed. The position of the wire and its current are vectors, but the magnetic field B is a pseudovector, as is any vector cross product p = a x b. Any scalar product between a pseudovector and an ordinary vector is a pseudoscalar. A pseudoscalar particle corresponds to a scalar field which is likewise inverted under a change of parity. (c) When photons in an initially unpolarized light beam (consisting of both parallel and perpendicular components) enter an external magnetic field, axion-photon mixing depletes only the parallel electric field components (dichroism), leading to polarization. This could be used to detect axions in distant quasars. Right: Axion mass ranges ruled out so far by ADMX.

Axions: In addition, other weakly-interacting particles may emerge, such as the axions which some researchers associate with cold dark matter. Axions were originally envisaged to explain why CP (charge-parity) violation does not happen with the color force as it does with the weak force. The strong nuclear force arranges quarks inside the neutron so that their overall charge seemingly never grows lopsided - charge-parity (CP) symmetry: inverting each quark's charge and reflecting them all in a mirror doesn't affect the neutron's behaviour. However the weak nuclear force doesn't share this symmetry: neutral kaons decay in ways CP symmetry forbids. Since quarks are involved in both cases, experts would have expected the weak-force symmetry-breaking to extend to the strong force as well. Why the neutron's charge distribution nevertheless stays so even is the strong CP problem, and the axion represents the leading solution.

One of the terms in the Lagrangian for chromodynamics is chiral, breaking CP symmetry, yet the color force does not break the symmetry. The strong CP problem boils down to the unexpected value of an angle theta in the equations that describe the strong force. Its value seems to be zero, which makes the neutron's charges stay in line. After some fudging, Peccei and Quinn promoted θ from a constant to a field that permeates space, with a value that could naturally settle down to zero everywhere. Weinberg and Wilczek observed that the Peccei-Quinn field requires a particle excitation of the field, and the axion was born. The most elegant solution is thus a new continuous U(1) symmetry whose spontaneous symmetry breaking relaxes the chiral term to zero, leading to a new spin-0 pseudoscalar particle - the axion. If axions inherit a mass they become natural cold dark matter candidates. One theory of axions relevant to cosmology predicts that they would have no electric charge, a very small mass in the range from 10^-6 to 1 eV/c^2, and very low interaction cross-sections for strong and weak forces.

Because of their properties, axions would interact only minimally with ordinary matter, but could change to and from photons in magnetic fields, as a result of mixing due to the pseudovector nature of the magnetic field as a cross product (fig 21). Pierre Sikivie noted that the axion would be something of a spiritual cousin to the photon, but with just a hint of mass. He tweaked classical electromagnetic theory to incorporate the axion and found that axions just might pack the universe tightly enough to add up to the missing dark matter, and that now and then they would transform into two photons. Axions' minuscule mass makes them extremely low-energy waves, with wavelengths somewhere between a building and a football field in length. Sikivie realised that the key to coaxing these low-energy axions to turn into photons would be a device that could be tuned to resonate at precisely the same wavelength as the axions - the principle that drives the ADMX experiment, which has scanned from 0.65 to nearly 0.68 gigahertz looking for excess power from axion-spawned photons; the collaboration has since continued on to 0.8 gigahertz (arXiv:1910.08638). These frequencies mean that the experiment has ruled out axions weighing between 187 billion and 151 billion times less than the electron, with wider ranges to come.
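The ADMX frequency range translates directly into an axion mass window through E = hf; a minimal conversion sketch that reproduces the 'billions of times lighter than the electron' figures quoted above:

```python
h_eV = 4.1357e-15   # Planck constant, eV*s
m_e_eV = 0.511e6    # electron rest energy, eV

def axion_mass_eV(freq_Hz):
    """Axion mass (eV/c^2) whose rest energy equals a photon of the given
    haloscope resonance frequency, m_a c^2 = h f."""
    return h_eV * freq_Hz

for f_GHz in (0.65, 0.80):
    m_a = axion_mass_eV(f_GHz * 1e9)
    print(f"{f_GHz:.2f} GHz -> m_a ~ {m_a:.2e} eV/c^2 "
          f"(~{m_e_eV / m_a:.2e} times lighter than the electron)")
```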

Fig 22: The 95% CL upper limits on the gluino (left) and squark (right) pair production cross sections as a function of neutralino versus gluino (squark) mass ("Search for supersymmetry in events with photons and missing transverse energy in pp collisions at 13 TeV" CMS Collaboration).

The SU(5) theory (fig 23) extending the standard model, sometimes also referred to as hyperweak, is an attempt to make an immediate extension of the idea of electroweak unification to unification with the colour force, through which a quark could decay into leptons. However its prediction that the proton should also be unstable, like the neutron, and decay e.g. as in (b), has not been validated in any experiment. Fornal & Grinstein (2017) have developed a form of SU(5) where the proton remains stable under suitable constraints on the parameters, by placing the quarks and leptons in distinct irreducible representations of SU(5), with other heavy forces linking these in yet other irreducible representations, reigniting the potential viability of the theory. Representations "represent" the elements of a group as linear transformations of vector spaces, including Hilbert space, so in the case of SU(5) they represent the forces induced by the (broken) symmetries. The representations linking quarks and leptons correspond to 'heavy' forces with short range and low interaction probability. Stability of the proton requires three relations between the parameters of the model to hold. However, abandoning the requirement of absolute proton stability, the model fulfills current experimental constraints limiting the proton decay rate without fine-tuning.

Supersymmetry, also illustrated in fig 23, in which each boson has a fermion partner and vice versa, has been a favourite extension of the Standard Model because it balances the positive and negative vacuum energy contributions of the bosons and fermions, but it has failed to show any evidence of its existence in the latest rounds of LHC experiments, leading up to the end of 2016, using energies up to 13 TeV.

Supersymmetry is a pairing between bosons and fermions of adjacent spin. The idea behind this is based on ground state zero-point fluctuations - the energies that arise through uncertainty when a quantum is considered in its lowest (ground) energy state. Only a perfect balancing of the negative zero-point energies of the fermions against the corresponding positive zero-point energies of the bosons implied by supersymmetry would cancel the potential infinities arising from the arbitrarily short wavelengths that result from the electromagnetic field when quantum gravitation is included in the unification scheme. These would effectively curl space-time to a point (Hawking R303 46, 50). It is possible however that it is the collective contribution of the two groups which balance so that there is not an individual set of boson-fermion pairings but two symmetry-broken groups - bosons and fermions which collectively balance one another - reflecting the standard model. Garrett Lisi's "exceptionally simple" theory is like this.

Fig 22 shows the production limits for pair production of two supersymmetric candidates, with no experimental evidence of their existence up to the high range of energies provided by the LHC. This means that no evidence for any extension of the Standard Model in terms of fundamental particle creation is likely to occur in the current round, leaving physics with only the single Higgs as a trophy and no immediate prospect of a resolution.

Fig 23: Unproven symmetries: (a) SU(5) theory extending the standard model. Since a quark could decay into leptons, its prediction is that the proton should also be unstable, like the neutron, and decay e.g. as in (b). (c-e) Supersymmetry, a hypothetical symmetry between fermions and bosons, identifies each with a supersymmetric partner of one half spin less. The hierarchy problem deals with why the weak force, for example, is 10^32 times stronger than gravity - in other words, how the electroweak scale (~246 GeV, the Higgs field vacuum expectation value) relates to the Planck scale (10^-35 m or 10^19 GeV), where black holes could spontaneously form from uncertainty applied to gravity (lower chart). Supersymmetry would provide a solution because the negative vacuum contribution of the fermions would then balance the positive contribution of the bosons. This would also see the strengths of the three vector forces coming together neatly at high energies. 1 eV is the energy to move one electron through 1 volt. 1 GeV = 10^9 eV. A proton has a mass of 0.9 GeV and an electron 0.511 MeV. The Z0 and Higgs have masses of 91 and ~126 GeV. Because E=mc^2, strictly the units are GeV/c^2. The LHC is running at up to 14 TeV = 1.4 x 10^4 GeV. The simplest such theory, the minimal supersymmetric standard model (MSSM), is shown in (e). Symmetry-breaking would give the supersymmetric partners a large mass, but again, the highest energy LHC results have so far shown no evidence for supersymmetric partners appearing.

SMASH Physics: A minimal extension of the Standard Model (SM) called SMASH (Standard Model Axion Seesaw Higgs portal inflation) provides a potentially complete and consistent picture of particle physics and cosmology up to the Planck scale. According to Ballesteros et al. (2016) the model adds to the SM three right-handed SM-singlet neutrinos, a new vector-like color triplet fermion and a complex SM singlet scalar σ whose vacuum expectation value at ~10^11 GeV breaks lepton number and a Peccei-Quinn symmetry simultaneously. Primordial inflation is produced by a combination of σ and the SM Higgs. Baryogenesis proceeds via thermal leptogenesis. At low energies, the model reduces to the SM, augmented by seesaw-generated neutrino masses, plus the axion, which solves the strong CP problem and accounts for the dark matter in the Universe. The model can be probed decisively by the next generation of cosmic microwave background and axion dark matter experiments. It builds on Shaposhnikov's (2005) model, which added three neutrinos to the three already known in order to solve four fundamental problems in physics: dark matter, inflation, some questions about the nature of neutrinos, and the origins of matter. SMASH adds a new field to explain some of those problems a little differently. This field includes two particles: the axion, a dark horse candidate for dark matter, and the inflaton, the particle behind inflation. As a final flourish, SMASH uses the field to introduce the solution to a fifth puzzle: the strong CP problem - why the strong force, unlike the weak force, appears to respect CP symmetry.

Fig 24: Scale symmetry-breaking model

Scale Symmetry theories and their resulting scale symmetry-breaking suggest the fundamental description of the universe does not include mass and length, implying that, at its core, nature lacks a sense of scale and thus may not differentiate between scales. The physics starts with a basic equation that sets forth a massless collection of particles, each with unique characteristics, such as whether it is matter or antimatter and has positive or negative charge. As these particles attract and repel one another and the effects of their interactions cascade, scale symmetry breaks, and masses and lengths spontaneously arise. Similar dynamical effects generate 99% of the mass in the visible universe. Protons and neutrons are each a trio of lightweight quarks, and the energy used to hold these quarks together gives them a combined mass that is around 100 times more than the sum of the parts. In the Standard Model, the Higgs boson comes equipped with mass. It provides mass to other elementary particles through its interactions with them, adding an extra scalar field to their wave function. The masses of electrons, W and Z bosons, individual quarks and so on derive from the Higgs boson and, in a feedback effect, they simultaneously affect the Higgs mass. The scale symmetry approach traces back to 1995, when William Bardeen showed that the mass of the Higgs boson and the other Standard Model particles could be calculated as consequences of spontaneous scale-symmetry breaking. But the delicate balance of his calculations seemed easy to spoil when researchers attempted to incorporate new, undiscovered particles, like those of dark matter and gravity. Instead, researchers gravitated toward supersymmetry, which naturally predicted dozens of new particles, some of which could account for dark matter.

In the standard approach, the Higgs boson's interactions with other particles tend to elevate its mass toward the highest mass scales present in the equations, since quantum mechanical effects mix it with every particle it couples to, dragging the other particle masses up with it. But physicists propose that far beyond the Standard Model, at a scale about 10^18 times heavier - the Planck mass - there exist unknown giants associated with gravity. These heavyweights would be expected to fatten up the Higgs boson and pull the mass of every other elementary particle up to the Planck scale. Instead, an unnatural hierarchy seems to separate the lightweight Standard Model particles and the Planck mass. Supersymmetry posits the existence of a heavier twin for every particle found in nature with spin 1/2 more, thus switching from boson to fermion and vice versa. If for each particle the Higgs boson encounters (such as an electron) it also meets that particle's slightly heavier twin (the selectron), the combined effects would nearly cancel, preventing the Higgs mass from ballooning toward the highest scales. Yet decades after their prediction, none of the supersymmetric particles have been found. But without supersymmetry, the Higgs boson mass seems as if it is reduced not by mirror-image effects but by random, improbable cancellations between unrelated numbers. Essentially, the initial mass of the Higgs seems to exactly counterbalance the huge contributions to its mass from gluons, quarks, and gravitational states. And if the universe is improbable, then it must just be one universe of many - a rare bubble in a foaming multiverse - an unsatisfactory conclusion.

For a scale-symmetric theory to work, it must account for both the small masses of the Standard Model and the gargantuan masses associated with gravity. Both scales must arise dynamically - and separately - starting from nothing. In agravity, or adimensional gravity (arXiv:1403.4226), the Higgs mass and the Planck mass both arise through separate dynamical effects, neatly identifying the inflaton of the inflationary scenario with the Higgs particle of gravity. However, the theory requires the existence of particle-like ghosts, which either have negative energies or negative probabilities of existing, effectively wreaking havoc on the probabilistic interpretation of quantum mechanics. However, like the 'holes' of antimatter, these may gain a satisfactory explanation over time. In an alternative scale-symmetric theory (arXiv:1408.3429), Bardeen and others posit that the scales of the Standard Model and gravity are separated as if by a phase transition. The researchers have identified a mass scale where the Higgs boson stops interacting with other particles, causing their masses to drop to zero. It is at this scale-free point that a phase-change-like crossover occurs.


Monopoles: Although Maxwell's equations have symmetry between the electric and magnetic fields E and B and do not prohibit magnetic monopoles, their absence is expressed in Gauss's law for magnetism: ∇·B = 0. However Dirac discovered that the existence of a single magnetic monopole in the universe would explain the quantization of charge. Consider a system consisting of a single stationary electric charge (e.g. an electron) and a single stationary magnetic monopole. Classically, the electromagnetic field surrounding them has a total angular momentum proportional to the product qeqm, and independent of the distance between them. Quantum mechanics dictates that angular momentum is quantized in half-integer units of ħ, so the product qeqm must also be quantized. This means that if even a single magnetic monopole existed in the universe, and the form of Maxwell's equations is valid, all electric charges would then be quantized: in Gaussian units qeqm = nħc/2, so the minimum magnetic charge is gD = ħc/2e.
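A minimal numerical illustration of the Dirac condition, assuming the Gaussian-units form qeqm = nħc/2: the smallest allowed magnetic charge is gD = ħc/2e = e/2α, about 68.5 electron charges, and the corresponding magnetic coupling is far stronger than electromagnetism:

```python
alpha = 1 / 137.036   # fine-structure constant e^2/(hbar c) in Gaussian units

# Dirac condition (Gaussian units): e*g = n*hbar*c/2, so the smallest
# magnetic charge is g_D = hbar*c/(2e) = e/(2*alpha).
g_min_over_e = 1 / (2 * alpha)
print(f"minimum magnetic charge g_D ~ {g_min_over_e:.1f} e")

# The equivalent 'magnetic fine-structure constant' g_D^2/(hbar c) = 1/(4*alpha),
# showing that monopoles, if they exist, would be very strongly coupled.
alpha_magnetic = g_min_over_e ** 2 * alpha
print(f"magnetic coupling g_D^2/(hbar c) ~ {alpha_magnetic:.1f}")
```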

Fig 25: GUT structure of a magnetic monopole. Near the center (about 10^-29 cm) there is a GUT symmetric vacuum. At about 10^-16 cm, its content is the electroweak gauge fields of the standard model. At 10^-15 cm, it is made up of photons and gluons. At the edge, out to a distance of 10^-13 cm, there are fermion-antifermion pairs. Far beyond nuclear distances it behaves as a magnetically-charged pole of the Dirac type. In effect, the sequence of events during the earliest moment of the universe has been fossilized inside the magnetic monopole.

In a U(1) gauge group with quantized charge, the group is a circle of radius 2π/e. Such a U(1) gauge group is called compact. Grand unified theories (GUTs) uniting the electroweak and strong forces lead to compact U(1) gauge groups, so they explain charge quantization in a way that seems to be logically independent from magnetic monopoles. However, the explanation is essentially the same, because in any GUT which breaks down into a U(1) gauge group at long distances there are magnetic monopoles. In the early universe, if the symmetrical unified state of the GUT froze out in different regions, the symmetry-breaking can form various topological defects - 2D domain walls, 1D cosmic strings, or monopoles - depending on the type of symmetry which is broken. Unlike Dirac monopoles, which would be point singularities of infinite self-energy, such monopoles would have unified force interactions at very short radii and would thus have a finite but very large mass. If the horizon of the domains were very large, due to inflation, these would be vanishingly infrequent but would still be integral to the cosmic description. Although such cosmic monopoles have never been detected experimentally, there is good evidence for Dirac magnetic monopoles as lattice quanta in condensed matter spin ices (doi:10.1126/science.1177582, doi:10.1126/science.1178868). Angulons in molecules spinning in superfluid 4He (doi:10.1103/PhysRevLett.118.095301) have also been demonstrated to behave as monopoles (doi:10.1103/PhysRevLett.119.235301).

Rehabilitating Duality: Quantum Gravity, String Theory, and Space-time Structure

Quantum theory is formulated within space-time, but mass-energy, through gravitation in general relativity alters the structure of space-time by curving it. This has made a comprehensive integration of gravity with the other forces of nature difficult to achieve and may indicate a fundamental complementarity between the theories. Something of this paradox can be understood in graphic terms if we consider the implications of quantum uncertainty over very small time intervals, small enough to allow a virtual black hole to form. In this case a quantum fluctuation could give rise to a wormhole in the very space-time in which it is conceived raising all manner of paradoxes of connected universes and time loops into the bargain. This leads to a fundamental conceptual paradox in which space-time is flat or slightly curved on large scales but a seething topological foam of worm-holes on very small scales. These problems lead to fundamental difficulties in describing any form of quantum field in the presence of gravity.

The unification of gravity with the other forces brings new and deeper mysteries into play. Theories which treat particles as points are plagued with the infinities the points themselves imply as infinite concentrations of energy. Point particles may thus on very small scales become string, loop or membrane excitations. The theories broadly called 'superstring' explain the infinite self-energies associated with a point particle, and the different particles themselves, as different excitations of a closed or open loop or string. However none have been found so far which correspond to our own peculiar asymmetric set of particles.

Supergravity adds a number of particles of higher integer and half-integer spin which also give a good example of how supersymmetry might work. Like any field theory of gravity, where gravitational space-time stress tensors convert to spin-2 particles, a supergravity theory contains a spin-2 field whose quantum is the graviton. Supersymmetry requires the graviton field to have a superpartner. This field has spin 3/2 and its quantum is the gravitino. The number of gravitino fields is equal to the number of supersymmetries. There are 8 extended supergravity theories, and each of them has a characteristic number of distinct supersymmetries ranging from n = 1 to 8. These generate successive generations of particles of lower spin as in fig 23(c). In each theory there is one spin-2 graviton and there are n spin-3/2 gravitinos. The number of particles with lower spins is also completely determined. If n is equal to 1 the theory is simply supergravity with one graviton and one gravitino. If n is 2, the theory includes 1 graviton, 2 gravitinos and 1 spin-1 particle (graviphoton). Perhaps the most realistic model of this kind is given when n = 8. The complement of elementary particles then consists of 1 graviton, 8 gravitinos, 28 graviphotons, variously 48 to 56 spin-1/2 particles (gravifermions) and 70 spin-0 particles (graviscalars). An intriguing property of the extended supergravity theories is their extreme degree of symmetry. Each particle is related to particles with adjacent values of spin by supersymmetry transformations, and these supersymmetries are of local form. Thus a graviton can be transformed into a gravitino and a gravitino into a graviphoton. Within each family of particles that have the same spin, all the particles are related by a global internal symmetry, much like the internal symmetry that relates proton and neutron.
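The particle counts of the extended supergravities follow a simple combinatorial pattern: acting with k of the n supersymmetry charges on the helicity-2 graviton lowers the helicity by k/2 and gives a multiplicity of C(n, k), so that, combining ±helicity pairs, n = 8 reproduces the 1, 8, 28, 56, 70 states quoted above. A minimal combinatorial sketch:

```python
from math import comb

def supergravity_multiplet(n):
    """Helicity content obtained by acting with k = 0..n of the n
    supersymmetry charges on the helicity +2 graviton; each charge
    lowers the helicity by 1/2 and the multiplicity is C(n, k).
    (For n < 8 the CPT-conjugate negative helicities must be added;
    the n = 8 multiplet is already self-conjugate.)"""
    return {2 - k / 2: comb(n, k) for k in range(n + 1)}

for n in (2, 8):
    print(f"n = {n} supergravity multiplet:")
    for helicity, count in supergravity_multiplet(n).items():
        print(f"  helicity {helicity:+4.1f}: {count}")
```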

In 2018 a version of n = 8 supergravity involving E10, an infinite-dimensional Lie algebra extending the exceptional simple symmetry group E8, was found to provide an extension of all the forces in the standard model to a unification with gravity (Meissner & Nicolai 2018). In supergravity theories in four spatiotemporal dimensions, there cannot be more than eight different supersymmetric rotations. In this version of n = 8 supergravity, there are 48 fermions (with spin 1/2), which is precisely the number of degrees of freedom required to account for the six types of quarks and six types of leptons observed in nature. After making an adjustment for charge anomalies (the electron had a charge of -5/6 instead of -1, the neutrino had 1/6 instead of 0, etc. in Gell-Mann's original model 30 years previously), in 2015 Meissner and Nicolai obtained a structure with the symmetries U(1) and SU(3) known from the Standard Model. The motivation was strengthened by the fact that the LHC accelerator failed to produce anything beyond the Standard Model, and the n = 8 supergravity fermion content is compatible with this observation. What was missing was to add the SU(2) group, responsible for the weak nuclear force. In 2018 Meissner and Nicolai showed that the weak force SU(2) symmetry can also be accommodated. Unlike the symmetry groups previously used in unification theories, E10 is an infinite group, very poorly studied even in the purely mathematical sense. It keeps the number of spin-1/2 fermions as in the Standard Model but on the other hand suggests the existence of new particles with very unusual properties. Importantly, at least some of them could be present in our immediate surroundings, and their detection should be within the possibilities of modern detection equipment.

Superstrings attempt to address the singularity of the infinite self-energy of point particles by assuming that on the Planck scale these turn into vibrating quantum strings, which thus also do not have point vertices, but continuous interactions, as illustrated in fig 26.

Fig 26: (Above) Point particles (a), such as the charged electron, have infinite self-energies because their fields tend to infinity at the vertex and they have precise vertices of interaction. Strings (b) turn the infinities into harmonic quantum excitations at a fundamental scale such as the Planck scale, smoothing both the infinities and turning the vertices into smooth manifold transitions (Wolfson R760, Sci. Am. Jan 96). They can be regarded either as open strings or loops. The different excitations (c) correspond to different particles, e.g. of higher mass-energy. (Below) Compactification of the 12 or so unseen dimensions leaves only our 4 of space-time on large scales (Sci. Am. Jan 96). Compactification of one dimension to form a tube is a way 11-D M-theory can be linked to 10-D superstrings, which are, on smaller scales, string-like tubes.

Such theories also generally require over 10 dimensions to converge, all but four of which are 'compactified' - curled up on sub-particulate scales, leaving only our four dimensions of space-time as global dimensions. Such 'theories of everything' or TOEs have not yet fully explained how the particular arrangements of particles and forces in our universe are chosen out of the millions of possibilities for compactification these higher dimensional theories permit when supersymmetry is broken to produce the particles and forces we experience at low energies.

The internal symmetry dimensions of existing particles come close to the additional number of hidden dimensions required, suggesting the key can be found in the known particles. If we take 1 for the Higgs, 1 for the neutrino, 2 for the electroweak, 3 for colour, and 4 for space-time we have 11. However in string theory the compactifications occur on a huge variety of spaces called Calabi-Yau manifolds, presenting up to 10^500 possible configurations. Four-dimensional space-time is optimal mathematically for complexity. In some unification theories, one of the compactified dimensions might be much larger (fig 26). Duality, in which fundamental particles in one description may become composite in another and vice versa, may also enable apparently divergent theories to be understood through a convergent dual.

Fig 27: Relation between M-theory and dualities between string theories (ex Hawking R303, Duff R170). Originally string theories were entirely bosonic and formulated in 26 dimensions for internal consistency until the advent of consistent 10 dimensional theories. Type I has one supersymmetry in 10 dimensions. It is based on unoriented open and closed strings, while the rest are based on oriented closed strings. Type II have two supersymmetries. IIA is non-chiral (parity conserving) while the IIB is chiral (parity violating). The heterotic string theories are based on a hybrid of a type I superstring and a bosonic string. There are two kinds of heterotic strings differing in their ten-dimensional gauge groups: the heterotic E8xE8 string and the heterotic SO(32) string. There are two types of duality. S-duality says that a collection of strongly interacting particles in one theory can be viewed as a collection of weakly interacting particles in another thus avoiding infinities. T-duality states that a string propagating around a circle of radius R is equivalent to a string in the dual propagating around a circle of radius 1/R. If a string has momentum p and winding number n around the circle in one description, it will have momentum n and winding number p in the dual description (see fig 28).

M-Theory: A form of unification of several theories, including the 10-dimensional superstring theories and 11-dimensional supergravity, has been proposed in the form of M-theory - M for membrane, or, according to its proponents, magic. The essential idea is that 11-dimensional membrane theory looks like 10-dimensional string theory if one of the two membrane dimensions is rolled up into a tiny tube along with one of the 11 dimensions. In this point of view several of these theories are actually complementary mathematical formulations of the same object. This brings in the 'holographic principle' (fig 3), in which a theory in a multidimensional region can be equivalent to a theory on the boundary of the region, one dimension lower (Cowen 2015, Duff R175).

Particles can come in two types: one the vibrational states of strings (vibrating particles), the other topological - how many times a string wraps around the compactified dimension (winding particles). The winding particles on a tube of radius R are identical to the vibrational particles on a tube of radius 1/R. Duality is a paradoxical concept in which a theory whose interactions are strong, so that perturbation theory fails, has a natural relationship to a dual theory whose interaction strengths are the reciprocals of the original and which therefore converges nicely. The nemesis comes if we end up having to deal with a TOE whose interactions are mid-range, so that neither the original nor the dual can be unraveled.
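T-duality can be seen directly in the closed-string spectrum on a circle: with momentum number n and winding number w, the mass-squared contains (n/R)^2 + (wR/α')^2, which is unchanged under R → α'/R with n and w exchanged. A minimal sketch, omitting the oscillator and zero-point terms and setting α' = 1 for illustration:

```python
def mass_squared(n, w, R, alpha_prime=1.0):
    """Momentum + winding contribution to the closed-string mass^2 on a
    circle of radius R (oscillator and zero-point terms omitted)."""
    return (n / R) ** 2 + (w * R / alpha_prime) ** 2

R = 3.0
R_dual = 1.0 / R   # with alpha' = 1, the T-dual radius is 1/R

for n, w in [(1, 0), (0, 1), (2, 3)]:
    original = mass_squared(n, w, R)
    dual = mass_squared(w, n, R_dual)   # swap momentum <-> winding numbers
    print(f"n={n}, w={w}:  M^2(R) = {original:.4f}   M^2(1/R, n<->w) = {dual:.4f}")
```

The two columns agree for every choice of n and w, which is the sense in which momentum states at radius R and winding states at radius 1/R describe the same physics.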

Fig 28: Duality between string theories. Winding particles in one have the same energetics as vibrational particles in the other and vice versa (Duff R175). The concept of duality may solve intractable infinities by finding a dual theory which is convergent. In the dual theory, particles like magnetic monopoles, which are a composite of quarks and other particles, become fundamental and electrons and quarks become composites of these. No particle is thus truly fundamental, each locked in sexual paradox with its dual.

Another branch of string theory, F-theory (arXiv: hep-th 9602114) has allowed physicists to work with strongly interacting, or strongly coupled, strings. This means that string theorists can use algebraic geometry - which uses algebraic techniques to tackle geometric problems - to analyze the various ways of compactifying extra dimensions in F-theory and to find solutions. More recently, researchers (arXiv: hep-th 1903.00009) have identified a class of solutions with string vibrational modes that lead to a similar spectrum of fermions to the standard model - including the property that all fermions come in three generations. The F-theory solutions found have particles that also exhibit the handedness, or chirality, of the standard model particles - reproducing the exact 'chiral spectrum' of standard model particles. For example, the quarks and leptons in these solutions come in left and right-handed versions, as they do in our universe. There are at least a quadrillion solutions in which particles have the same chiral spectrum as the standard model, which is 10 orders of magnitude more solutions than had been found within string theory until now.

Supersymmetry provides at least one candidate for dark matter. There are four neutralinos that are their own antiparticles (Majorana fermions) and are electrically neutral, the lightest of which is typically stable. Because these particles only interact with weak vector bosons, they are not directly produced at hadron colliders in copious numbers, and they are favoured dark matter candidates. In supersymmetry models, all Standard Model particles have partners with the same quantum numbers except for spin, which differs by 1/2 from its partner. Since the superpartners of the Z0 boson (zino), the photon (photino) and the neutral higgs (higgsino) have the same quantum numbers, they can mix to form four eigenstates of the mass operator called "neutralinos". Alternatively these four states can be considered mixtures of the bino and the neutral wino (superpartners of the U(1) gauge field corresponding to weak hypercharge and of the W bosons), and the neutral higgsino. In many models the lightest of the four neutralinos turns out to be the lightest supersymmetric particle (LSP).

Fig 29: Superstring theories suffer from having many different forms of compactification during symmetry breaking - up to 10^500. Two recent papers (arXiv:1806.08362, 1806.09718) suggest the overwhelming majority of multiverses in this landscape are assigned to the swampland of unviable universes, where dark energy is unstable, also reviving the popularity of time-varying dark energy models such as quintessence. Right: An attempt is made to find why our universe has an optimal configuration among the millions of possibilities. The Calabi-Yau manifold illustrated is just one compactification, which shows a local 2D cross-section of the real 6D manifold known in string theory as the Calabi-Yau quintic. This satisfies the Einstein field equations and is a popular candidate for the wrapped-up 6 hidden dimensions of 10-dimensional string theory at the scale of the Planck length (1.6 x 10^-35 m or about 10^-20 times the size of a proton). The 5 rings that form the outer boundaries shrink to points at infinity, so that a proper global embedding would be seen to have genus 6 (6 handles on a sphere, Euler characteristic -10). The underlying real 6D manifold (3D complex manifold) has Euler characteristic -200, is embedded in the 4D complex projective plane, and is described by the equation z0^5 + z1^5 + z2^5 + z3^5 + z4^5 = 0 in five complex variables. The displayed surface is computed by assuming that some pair of inhomogeneous complex variables, say z3/z0 and z4/z0, are constant (thus defining a 2-manifold slice of the 6-manifold), renormalizing the resulting equations, and plotting the local Euclidean space solutions to the complex equation z1^5 + z2^5 = 1.

An alternative dark matter candidate has emerged from extending the SU(3) symmetry of the color force with one extra force field to conserve baryon number, resulting in an SU(4) x SU(2) x U(1) extension of the standard model explaining why the proton never decays as the lightest baryon. The model requires quarks to have heavier partners, and the lightest of these has the right properties to be dark matter (ArXiv:1511.07380).

In one form of the holographic principle, discussed above and illustrated in fig 3, quantum entanglement on the boundary gives rise to gravitation-like forces on the interior, possibly explaining gravity and relativity in terms of holographic entanglement.

A possible key to the higher-dimensional theories is the 8-dimensional number system called the octonians. Just as complex numbers form a two-dimensional plane, for which the second component is a multiple of i, the square root of -1, octonians form a system of 8 components. Associated with the octonians are the exceptional symmetry groups such as G2 and E8. Internal symmetries such as those of colour and of charge, as well as the well-known Lorentz transformations of special relativity, are already the basis for explaining the standard model.

Another key to a possible unraveling of the Gordian knot of the theory of everything comes from dualities. Electromagnetism is renormalizable because, by adjusting for the infinite self-energy of a charge, we arrive at a theory like quantum electrodynamics in which each more complicated diagram with more vertices makes a contribution 137 times smaller to the interaction, and it is then possible to correctly deduce the combined effects without infinities creeping in.

Another dimensional issue is that the only spheres which admit a complete set of non-vanishing tangent vector fields - avoiding the singularities of the 'hairy ball' theorem - are S1 the circle, and S3 and S7, the 3-D and 7-D spheres. Our two-sphere S2 always gets places where one hair stands on end, like the crown of your head. Thus the status of the unit octonians gives a dual 7-D coincidence between algebra and topology, which may be essential in establishing, for example, a uniform time flow.

The three-dimensional nature of space has also been linked to quantum reality. In the 'emergent' picture the three dimensions of space and one of time would arise from quantum gravity and the differentiation of the forces of nature in the cosmic origin. But a more fundamental basis has been suggested: that, given a single dimension for time, quantum theory is the only theory that can supply the degree of randomness and correlation seen in nature - and it can only do so if space is 3D (ArXiv:1206.0630, ArXiv:1212.2115). A subsequent paper shows this constraint applies if microscopic objects interact "pairwise" with each other, as they appear to in our universe (ArXiv:1307.3984), but the dimension could be higher if pairwise interactions became three-way or more. If our conventional complex-number quantum theories are replaced by quaternions or octonians, the dimensionality could rise to five or nine (Phys. Rev. D 84 125016).

Stephen Hawking, who has been a consistent champion of the TOE quest, has lamented that although the connections implied by M-theory dualities are so convincing that to not think they are on the right track "would be a bit like believing that God put fossils into the rocks in order to mislead Darwin about the evolution of life" (Hawking R303 57), he now worries (R304) that the search for a consistent theory may remain beyond reach in a single theory because of the implications of Gödel's theorem, which proves that any logical system containing finite arithmetic admits formally undecidable propositions. If the search for a TOE runs up against this nemesis, the description of the universe may become undecidable.

E8 and Octonionic Foundation Theories: Two theories attempting to unify the foundations of physics leading to a TOE have been generated by creative mavericks coming out of left field with ingenious unifying concepts based on a deep unity between mathematics and physics, depending on number systems from real through complex to quaternions and octonians, and on exceptional simple groups based on the octonians, such as G2 and E8, which has 248 dimensions and is generated by a 240-vector root system in R^8.

Fig 30: Left: Octonians and the Fano plane. Just as complex numbers have two components a + bi with i^2 = -1, so the octonians have eight components 1, e1, ..., e7 such that ei^2 = -1. Multiplication of the coordinate vectors is determined by the 'Fano plane'. Any ei, ej, ek connected by arrows multiply in the manner ei x ej = ek. Like the quaternions, the octonians are non-commutative: those connected in the reverse direction inherit a minus sign. Each line also loops back to its first coordinate in a cyclic manner. Octonian multiplication is also non-associative, with bracketing rearrangements likewise invoking a minus sign. Lower left: Dynkin diagrams for E8 and its infinite-dimensional extension E10. E10 has been proposed to be a symmetry group in a realizable supergravity model representing the standard model, and its further extension E11 has been conjectured to be the underlying symmetry group of M-theory. Right: E8's 240 root vectors in R^8.
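The Fano-plane rule is easy to make concrete. The sketch below uses one common orientation of the seven lines (the convention ei·e(i+1) = e(i+3) with indices taken mod 7; other authors label the plane differently), builds octonian multiplication from it, and checks the non-commutativity and non-associativity described in the caption:

```python
# One common orientation of the Fano plane: e_i * e_{i+1} = e_{i+3}, indices mod 7.
TRIPLES = [(1, 2, 4), (2, 3, 5), (3, 4, 6), (4, 5, 7), (5, 6, 1), (6, 7, 2), (7, 1, 3)]

# Sign/index table for products of distinct imaginary units e_i * e_j.
table = {}
for a, b, c in TRIPLES:
    for x, y, z in ((a, b, c), (b, c, a), (c, a, b)):  # cyclic order: +e_z
        table[(x, y)] = (1, z)
        table[(y, x)] = (-1, z)                        # reversed order: -e_z

def mult(p, q):
    """Multiply two octonians given as 8-component lists (1, e1, ..., e7)."""
    r = [0.0] * 8
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if pi == 0 or qj == 0:
                continue
            if i == 0:
                r[j] += pi * qj            # 1 * e_j
            elif j == 0:
                r[i] += pi * qj            # e_i * 1
            elif i == j:
                r[0] -= pi * qj            # e_i * e_i = -1
            else:
                s, k = table[(i, j)]       # Fano plane rule
                r[k] += s * pi * qj
    return r

def unit(i):
    v = [0.0] * 8
    v[i] = 1.0
    return v

def show(v):
    """Pretty-print a result that is +/- a single basis unit."""
    for i, x in enumerate(v):
        if x:
            return ("+" if x > 0 else "-") + ("1" if i == 0 else f"e{i}")
    return "0"

e1, e2, e3 = unit(1), unit(2), unit(3)
print("e1*e2 =", show(mult(e1, e2)))                 # +e4: follows the Fano line (1,2,4)
print("e2*e1 =", show(mult(e2, e1)))                 # -e4: non-commutative
print("(e1*e2)*e3 =", show(mult(mult(e1, e2), e3)))  # -e6
print("e1*(e2*e3) =", show(mult(e1, mult(e2, e3))))  # +e6: non-associative
```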

The first of these is Garrett Lisi's (2007) "An Exceptionally Simple Theory of Everything", which caused a convulsion of debate throughout the physics community when it was first proposed. After getting his Ph.D., Lisi left academia and moved to Maui - expressing his dissatisfaction with the state of theoretical physics. On Maui, Lisi volunteered as a staff member at a local school, and split his time between working on his own physics research and surfing. On July 31, 2006, Lisi was awarded an FQXi grant to develop his research in quantum mechanics and unification. On June 9, 2007, Lisi realized that the algebraic structure he had constructed to unify the standard model of particle physics with general relativity partially matched part of the algebraic structure of the E8 Lie group. On July 8, 2009, at a FQXi conference in the Azores, Lisi made a public bet with Frank Wilczek that superparticles would not be detected by July 8, 2015. After a one-year extension to allow for more data collection from the Large Hadron Collider, Frank Wilczek conceded the superparticle bet to Lisi in 2016.

Lisi's theory sets out a possible scheme for a theory uniting gravity with the other forces based on root vector systems generating E8 "via a superconnection described by the curvature and action over a four dimensional base manifold". Although this theory remains speculative, it brings together an ingenious utilization of the internal symmetries of E8 with the dynamical topology of the underlying manifold, retaining an intrinsic complementarity between discrete and continuous aspects, despite its manifestly algebraic basis.


Fig 31: Left: A depiction in Garrett Lisi's Exceptionally Simple Theory of Everything. Centre: Garrett Lisi (TED Talk).

However a later paper (Distler and Garibaldi 2009) claims any "Theory of Everything" obtained by embedding the gauge groups of gravity and the Standard Model into a real or complex form of E8 lacks certain representation theoretic properties required by physical reality. Lisi (2011) has commented on critiques of his work. A 2015 discussion of this can be found at (www.physicsforums.com/threads/is-there-any-news-to-garrett-lisi-theory.790057/).

While the existence of the Higgs particle has now been confirmed in the first round of the LHC runs, completing the standard model of physics, there is still no experimental support for Supersymmetry.

One particularly significant prediction of this model is that the universe may be algebraically symmetry-broken so that the bosons and fermions give a balanced positive and negative contribution to the mass-energy of the universe, but collectively rather than in supersymmetric pairs, while each has different numbers and arrangements of particles, as is the case in the standard model. In the 240-vector root system of E8, there are 2^2 x 8C2 = 112 'bosonic' root vectors with integer coordinates and 128 = 2^8/2 'fermionic' ones with half-integer coordinates. Both types are 8-D vectors with Pythagorean length 2^(1/2) and coordinates adding to an even number, hence the 128 rather than 256. Stephen Adler in a (2014) paper has invoked a process involving both SU(8) unification and supergravity in a non-supersymmetric model based on such a complementation.
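The 112 + 128 = 240 split can be checked directly by enumerating the E8 root system: the 'bosonic' roots have two coordinates ±1 and the rest 0, while the 'fermionic' roots have all coordinates ±1/2 with an even number of minus signs (equivalently an even coordinate sum), and all have squared length 2. A minimal enumeration sketch:

```python
from itertools import combinations, product

# 'Bosonic' roots: two coordinates +/-1, the rest 0.
integer_roots = []
for i, j in combinations(range(8), 2):
    for si, sj in product((1, -1), repeat=2):
        v = [0] * 8
        v[i], v[j] = si, sj
        integer_roots.append(tuple(v))

# 'Fermionic' roots: all coordinates +/-1/2 with an even number of minus
# signs, which is the same as requiring an even coordinate sum.
half_integer_roots = [v for v in product((0.5, -0.5), repeat=8)
                      if sum(1 for x in v if x < 0) % 2 == 0]

roots = integer_roots + half_integer_roots
assert all(abs(sum(x * x for x in v) - 2.0) < 1e-12 for v in roots)  # length sqrt(2)

print(len(integer_roots), "integer ('bosonic') roots")               # 112
print(len(half_integer_roots), "half-integer ('fermionic') roots")   # 128
print(len(roots), "E8 roots in total")                               # 240
```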


Fig 31b: Cohl Furey describing how C⊗O, through the ideals of the Clifford algebra Cl6, can be shown to be composed into singlets, doublets and so on, forming a system equivalent to a set of quarks and leptons with quantized electric charge, as in the standard model. (Videos)

The second is Cohl Furey (2014-2018), a Canadian physicist now at Cambridge, who has developed ways of generating key features of the standard model together with special relativity using the tensor product R⊗C⊗H⊗O of the four unique division algebras: the reals, complex numbers, quaternions and octonians. This entity is sometimes called the Dixon algebra, after Geoffrey Dixon, a physicist who first took this tack in the 1970s and '80s before failing to get a faculty job and leaving the field. (Dixon forwarded me a passage from his memoirs: "What I had was an out-of-control intuition that these algebras were key to understanding particle physics, and I was willing to follow this intuition off a cliff if need be. Some might say I did.") Unlike Dixon, who attached other features of physics to the algebra, Furey works essentially with this entity operating upon itself.

In her (2015) thesis and more recent publications, she has shown that when this is cleanly split into the two factors C⊗H and C⊗O, the former can be transformed, through the Clifford algebra Cl2, to form twistors and replicate the Lorentz transformations of special relativity, while the latter can be shown to factor into a single generation of SU(3)xSU(2)xU(1) components, transformed through the Clifford algebra Cl6, identifying quarks and leptons and their discrete electric charges characteristic of the standard model of physics as ideals. One can see some features of this immediately. If one of the octonionic unit vectors, say e7, is fixed, the transformations respecting this constraint are those of SU(3). An ideal is a subalgebra in which every element of the algebra is mapped into the ideal by multiplication with any ideal element. This in turn means that under the operation of the other transformations, the fundamental ideal sub-structure is preserved, giving the nascent particle stability under the wider transformations, thus producing a similar effect to that of the Kaluza-Klein theories above, but here more cleanly in the context of the standard model.

These relations are gradually being extended by Furey (2018) to include systems having features of the SU(5) extension of the standard model, without proton decay, and the development of electroweak parity violation. Multiplicative chains of elements of the R⊗C⊗H⊗O algebra can be shown to have 10 generators. Nine of the generators act like spatial dimensions, and the 10th, which has the opposite sign, behaves like time, in a manner similar to the 10 dimensions of some string theories.

These theories fit neatly into the situation that has resulted from the failure of supersymmetric and other exotic supertheory particles to be detected, even at the high energies of the LHC experiments. The failure of these supertheories makes a succinct case for the alternative approach, which is to define the symmetries of the standard model from "inside", on the basis of the interactive symmetries of the four division algebras R, C, H and O. Furey's goal is to find the model that, "in hindsight, feels inevitable and that includes mass, the Higgs mechanism, gravity and space-time". Extending the Lorentz component to general relativity and gravity would effectively turn the standard model, or its further extension on the basis of fundamental algebraic symmetries, directly into the TOE everyone has been looking for.

Like Dixon and Lisi, Furey knows this path is perilous, noting that if a faculty position isn't forthcoming after her fellowship ends, there's always mixed martial arts, the ski slopes or busking her accordion, as she has done in the past to make ends meet. "Accordions are the octonions of the music world," she said - "tragically misunderstood. Even if I pursued that, I would always be working on this project."

Superfluidity and Quasiparticles: An indication of the possible complexity of a TOE uniting gravity and quantum field theories comes from superfluid helium-3, which forms a superfluid at lower temperatures than helium-4 because 3He atoms are fermionic and, unlike bosonic 4He atoms, have to condense into bosonic pairs before superfluidity ensues. At close to absolute zero, helium-3 is superfluid, and as the temperature rises fractionally a number of bound quantum quasi-particles form in the medium (Dobbs). Several of the known properties of unified field theories can be modeled using superfluidity on the one hand and these bound structures on the other, as equivalents of the gravitational and the other quantum fields. This indicates that the theory sought may not just be a limit of gravitation and quantum fields, but a deeper theory in which both of these are merely stability states. A simulation in helium-3 modelling a collision of branes using the A and B phases of the superfluid (Bradley et al. 2008) illustrates these ideas, which have also been applied to the gravastar alternative to black holes.

Exotic Cosmologies

Brane cosmology forms an alternative explanation to supersymmetry for the hierarchy problem - why gravity is so much weaker than the other forces. The central idea is that the visible, four-dimensional universe is restricted to a brane inside a higher-dimensional space, called the "bulk" or "hyperspace". If the additional dimensions are compact, as in compactified superstring theories, then the observed universe contains the extra dimensions, and no reference to the bulk is needed. Some versions of brane cosmology, based on the large extra dimension idea, can explain the weakness of gravity relative to the other fundamental forces of nature, thus solving the hierarchy problem. In the brane picture, the other three forces (electromagnetism and the weak and strong nuclear forces) are localized on the brane, i.e. 4D space-time, but gravity has no such constraint and propagates through the full, e.g. 5D, spacetime. Much of the gravitational attractive power "leaks" into the bulk. As a consequence, the force of gravity should appear significantly stronger on small (subatomic or at least sub-millimetre) scales, where less gravitational force has "leaked". Various experiments are currently under way to test this. Extensions of the large extra dimension idea with supersymmetry in the bulk appear promising in addressing the so-called cosmological constant problem.
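A toy numerical illustration of the 'leaking gravity' argument (assumed numbers in the spirit of large-extra-dimension models, not a specific published model): below an assumed compactification radius R the force law steepens from 1/r^2 to 1/r^(2+n), so gravity looks enormously stronger at short range.

def gravity_boost(r, R=1e-4, n=2):
    """Ratio of the extra-dimensional force to the naive Newtonian 1/r^2 force at separation r.
    R is an assumed compactification radius (here 0.1 mm) and n the assumed number of large extra dimensions."""
    return (R / r) ** n if r < R else 1.0

for r in (1e-3, 1e-4, 1e-6, 1e-9):      # metres
    print("r = %.0e m  ->  enhancement x %.0e" % (r, gravity_boost(r)))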

The Big Bounce and Loop Quantum Gravity (LQG): LQG is a theory that attempts to quantize general relativity. The quantum states in the theory do not live inside space-time. Rather they themselves define spacetime. The solutions describe different possible spacetimes. Space becomes granular as a result of quantization. Space can be viewed as an extremely fine fabric or network "woven" of finite loops. These networks of loops are called spin networks, whose evolution over time is called a spin foam. When the spin network is tied in a braid, it forms something like a particle. This entity is stable, and it can have electric charge and handedness. Some of the different braids match known particles as shown in fig 32, where a complete twist corresponds to +1/3 or -1/3 unit of electric charge depending on the direction of the twist. Heavier particles are conceived as more complex braids in space-time. The configuration can be stabilized against space-time quantum fluctuations by considering each quantum of space as a bit of quantum information, resulting in a kind of quantum computation. The predicted size of this structure is the Planck length, ~10^-35 m. There is no meaning to distance at scales smaller than this. LQG predicts that not just matter, but space itself, has an atomic structure. There are fundamental issues reconciling LQG with special relativity. LQG has also been invoked in theories seeking to integrate it with string theory, or as a holographic duality to it, which might also help resolve the inconsistency at its core with special relativity.
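The Planck length quoted above follows directly from the fundamental constants, l_P = sqrt(ħG/c^3); a one-line check:

hbar = 1.055e-34   # J s
G    = 6.674e-11   # m^3 kg^-1 s^-2
c    = 2.998e8     # m s^-1
print("Planck length ~ %.1e m" % ((hbar * G / c**3) ** 0.5))   # ~1.6e-35 m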

Fig 32: Left: Loop quantum gravity is an alternative to superstring theory. Right: Braided space-time gives an underlying basis for unifying the fundamental particles. It is similar to the preonic Rishon model, where TTT = antielectron; VVV = electron neutrino; TTV, TVT and VTT = three colours of up quarks; TVV, VTV and VVT = three colours of down antiquarks; with the other particles appearing from the anti-rishons (Nuclear Physics B 204 1982 141-167).

The most spectacular consequence of loop quantum cosmology is that the evolution of the universe can be continued beyond the Big Bang, which becomes a sort of cosmic Big Bounce, in which a previously existing universe collapsed, not to the point of singularity, but to a point before that where the quantum effects of gravity become so strongly repulsive that the universe rebounds back out, forming a new branch. Successive universes might thus be able to evolve their laws of nature. The big bounce has also been calculated to invoke a form of cosmic inflation ( doi: 10.1016/j.physletb.2010.09.058). Hints of an experimental result that might confirm the existence of space-time foam come from extreme high energy gamma ray bursts from quasar black holes, where the extremely high energy rays appear to arrive later than lower energies consistent with being slowed by space-time quantization (arXiv:1305.2626).

In 2018, two new models of big bounce cosmology were proposed (arXiv:1710.05990, arXiv:1709.01999). Both of these get around the problem of the universe collapsing into a singularity, one by introducing a scalar field and the other by invoking a rotational energy confined in six compactified dimensions.

Fig 33: The big bounce in loop quantum gravity.

In general relativity space-time ceases to be a "container" over which physics takes place and has no objective physical meaning. Instead the gravitational interaction is represented as just one of the fields forming the world. Einstein's comment was "Beyond my wildest expectations". In quantum gravity, the problem of time remains an unsolved conceptual conflict between general relativity and quantum mechanics. Roughly speaking, the problem of time is that there is none in general relativity. This is because the Hamiltonian is a constraint that must vanish. However, in quantum mechanics, the Hamiltonian generates the time evolution of quantum states. Therefore, we arrive at the conclusion that "nothing moves" ("there is no time") in general relativity. Since "there is no time", the usual interpretation of quantum mechanics measurements at given moments of time breaks down.

The ekpyrotic scenario, the term meaning 'conflagrationary', is a cosmological model of the early universe that explains the origin of the large-scale structure of the cosmos and also has a big bounce. The original ekpyrotic models relied on string theory, branes and extra dimensions, but most contemporary ekpyrotic and cyclic models use the same physical ingredients as inflationary models (quantum fields evolving in ordinary space-time). The model has also been incorporated in the cyclic universe theory (or ekpyrotic cyclic universe theory), which proposes a complete cosmological history, both past and future. The name is well-suited to the theory, which addresses the fundamental question that remains unanswered by the big bang inflationary model: what happened before the big bang?

The explanation is that the big bang was a transition from a previous epoch of contraction to the present epoch of expansion. The key events that shaped our universe occurred before the bounce, and, in a cyclic version, the universe bounces at regular intervals. It predicts a uniform, flat universe with patterns of hot spots and cold spots now visible in the cosmic microwave background (CMB), which has been confirmed by the WMAP and Planck satellite experiments. Discovery of the CMB was originally considered a landmark test of the big bang, but proponents of the ekpyrotic and cyclic theories have shown that the CMB is also consistent with a big bounce.

Fig 34: A cyclic ekpyrotic universe based on colliding branes, which are periodically attracted to one another, causing a big bang (inset left), resulting in a cycle, two periods of which are illustrated. Mutual forces between the branes may provide a feedback driving expansion and contraction.

The search for primordial gravitational waves in the CMB (which produce patterns of polarized light known as B-modes) may eventually help scientists distinguish between the rival theories, since the ekpyrotic and cyclic models predict that no B-mode patterns should be observed.

A key advantage of ekpyrotic and cyclic models is that they do not produce a multiverse. When the effects of quantum fluctuations are properly included in the big bang inflationary model, they prevent the universe from achieving the uniformity and flatness that the cosmologists are trying to explain. Instead, inflated quantum fluctuations cause the universe to break up into patches with every conceivable combination of physical properties. Instead of making clear predictions, inflationary theory allows any outcome. The idea that the properties of our universe are an accident and come from a theory that allows a multiverse of other possibilities is hard to reconcile with the fact that the universe is extraordinarily simple (uniform and flat) on large scales and that elementary particles appear to be described by fundamental symmetries.

There are two types of polarization, called E-modes and B-modes. This is in analogy to electrostatics, in which the electric field (E-field) has a vanishing curl and the magnetic field (B-field) has a vanishing divergence. The E-modes arise naturally from scattering in a heterogeneous plasma. The B-modes are not sourced by standard scalar perturbations. Instead they can be created either by gravitational lensing of E-modes, which has been measured by the South Pole Telescope in 2013, or from gravitational waves arising from cosmic inflation. Detecting the B-modes is extremely difficult, as the degree of foreground contamination is unknown, and the weak gravitational lensing signal mixes the relatively strong E-mode signal with the B-mode signal. In 2014, astrophysicists of the BICEP2 collaboration announced the detection of inflationary gravitational waves in the B-mode power spectrum, which if confirmed, would provide clear experimental evidence for the theory of inflation. However, based on the combined data of BICEP2 and Planck, the European Space Agency announced that the signal can be entirely attributed to dust in the Milky Way.

The possibilities remain open between our universe having unique laws derived from fundamental symmetries or being one of many types of universe whose laws happen to support complexity and life - a 'many-universes' perspective. Some theories (Smolin R649) even suggest the laws of nature might be capable of evolution from universe to universe, resulting in one containing observers. The anthropic principle asserts that the existence of (conscious) observers is a constraint delimiting what laws of nature are possible. Anthropic arguments (Barrow and Tipler R45) may enable a form of self-selection, in the sense that a simple universe which could not sustain life or observers would never be observed, guaranteeing that our universe has dimensionalities and symmetry-breakings giving rise to fundamental constants consistent with interactive fractal complexity (p 317). Regardless of these uncertainties in the final TOE, the general features of force unification, symmetry-breaking and inflation are likely to remain part of our understanding of the cosmic origin.

The Wave, the Particle and the Quantum

We all exist in a quantum universe, and the classical one we assume and link to our experience of the everyday world is just an extrapolation. To understand both the foundations of cosmology and the spooky world of quantum reality, we need to set aside the classical ideas of mechanism, determinism and the mathematical notions of sets made out of discrete points and come to terms with ultimate paradoxes of space-time and complementarity. To fully understand the implications we need to examine all aspects of the universe in detail, from the smallest particles to the universe as a whole and only then come to a synthesis of the role complementarity and 'sexual paradox' may play at the cosmological level.

Our quantum world is very subtle and much more mysterious than a mechanical 'building blocks' view of the universe with simple separate classical particles interacting in empty space. Many people lead their lives at the macroscopic level as if quantum reality didn't exist, but quantum reality runs from the very foundations of physics to the ways we perceive. Our senses of sight, hearing, touch and taste/smell are all distinct quantum modes of interaction with the environment. Senses aren't just biological adaptations but fundamental quantum modes of information transfer, by photons, phonons, solitons and orbital interactions. Quantum processes such as tunneling are central to the function of our enzymes and to the ion channels and synapses that support our excitable neurons (Walker R724).

Fig 35: Bohr and Einstein - their debate, which sparked the Copenhagen interpretation (that quantum mechanics describes only our knowledge of a system, not its actual state), eventually led to the discovery of quantum non-locality.

The 'correspondence principle', by which the quantum world is supposed to fade into classical 'reality', is never fully realized. Many phenomena in the everyday world involve chance events which are themselves often sensitively related to uncertainties at the quantum level. Chaotic, self-critical and certain other processes may 'inflate' quantum effects into global fluctuations. Conscious interaction with the physical world may likewise depend both on quantum excitations and on the loophole of uncertainty in expressing 'free-will'. We need to understand how quantum reality interacts with conscious experience; however, in doing so we immediately find the most challenging example of sexual paradox, which lies at the core of the cosmological puzzle - wave-particle complementarity. A quantum manifests in two complementary ways: as a non-local flowing 'wave', which has a frequency and spatial extension, and as a localized 'particle', which is created or destroyed in a single step. It can manifest as either but not both at the same time. All the weird quantum paradoxes of non-locality, entanglement and wave function collapse emerge from this complementary relationship. To understand the full dimensions of this mystery we need to see how this strange reality was discovered and do a little fairly simple maths.

In the late 19th century, classical physics seemed to have captured all the phenomena of reality, including Clerk Maxwell's wave equation for the electromagnetic transmission of light: ∇²E = μ₀ε₀ ∂²E/∂t², where the velocity of light is c = 1/√(μ₀ε₀).

However Lord Kelvin noticed what he called 'two small dark clouds on the horizon', which together plunged classical physics into the quantum-theoretic age.

Why we don't burn to a crisp: The first of these was black-body radiation, named after the thermal radiation from a dark cavity, and also seen in bright thermal objects like the sun. We know the sun emits some ultra-violet and can burn us, but not as much as at the peak of visible light. If classical physics were true, it should emit ever more ultra-violet and even more X- and gamma rays - a situation called the ultra-violet catastrophe.

Fig 36: The solar spectrum (Fraunhofer 1814) and Planck's radiation law both peak at about 5,000 °C.

Planck eventually solved the problem by quantizing the radiation into little packets, with energy proportional to frequency by a constant h, called quanta. The particles responsible for this packeting are now identified as photons. The answer to the problem is this: in the classical view the energy distribution should increase endlessly into the high frequencies, but in the quantum view, to release a particulate photon of a given frequency, there has to be an atom somewhere with an electron energized enough to radiate that photon, so the energy is limited by the temperature of the thermal body. Thus, because the photons come in quanta, or packets, the radiation cannot go endlessly up into the ultra-violet. Planck's equation is displayed in fig 36. It starts out growing for small energies but falls off exponentially after the peak, corresponding to the exponentially rarer thermodynamic excitations at a given temperature.
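A minimal sketch of this contrast (using the standard Planck and Rayleigh-Jeans formulas, which are not written out in the text above, and an assumed solar-like temperature): the classical spectrum grows without bound with frequency, while the quantum one peaks and dies off.

import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23    # SI: Planck constant, speed of light, Boltzmann constant
T = 5772.0                                  # K, roughly the Sun's surface temperature (assumed)

nu = np.linspace(1e13, 3e15, 1000)          # Hz, infrared through ultraviolet
planck   = (2*h*nu**3 / c**2) / (np.exp(h*nu/(k*T)) - 1.0)
rayleigh = 2*nu**2 * k*T / c**2             # classical limit: diverges as nu^2

print("peak frequency ~ %.2e Hz" % nu[np.argmax(planck)])                  # in the visible range, as in Fig 36
print("classical/quantum ratio at the UV end: %.1e" % (rayleigh[-1] / planck[-1]))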

The Photoelectric Effect and Einstein's Law: Einstein made the next breakthrough, addressing the other dark cloud - the photoelectric effect. If you shine light on a plate in a vacuum valve and vary the voltage required to stop the resulting current flow, you find that the more light, the more current, but no more voltage. The stopping voltage turns out to depend only on the frequency. That is, the energy of each electron doesn't change, just the flow rate. This makes no sense with a classical wave, because a bigger wave has both more flow and more energy.

Fig 37: Photoelectric effect apparatus

The answer is that light of a given frequency contains particles called photons. The more photons, the more excited electrons cross the vacuum by gaining this energy, but there is no change in the energy of each electron, because each photon has the same energy for a given colour (frequency), regardless of how bright the light is.

Einstein solved this problem by realizing the energy of any particle is proportional to its frequency as a wave by the same factor h - Planck's constant - the fundamental unit of quantumness. Energy is thus intimately related to frequency - in a sense it IS frequency. Measuring one is necessarily measuring the other. We can thus write E = hν (1).
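A small illustrative sketch of the photoelectric relation that follows from (1), e·V_stop = hν - W, with an assumed work function W for the plate (a hypothetical value, purely for illustration): the stopping voltage tracks frequency and ignores intensity.

h = 6.626e-34      # J s
e = 1.602e-19      # C
W = 2.3 * e        # J, assumed work function of a sodium-like plate (hypothetical value)

def stopping_voltage(nu):
    return max(h * nu - W, 0.0) / e        # volts; zero below the threshold frequency

for nu in (4e14, 6e14, 8e14, 1e15):        # red light through ultraviolet
    print("nu = %.0e Hz  ->  V_stop = %.2f V" % (nu, stopping_voltage(nu)))
# Doubling the intensity doubles the number of photons (the current) but leaves V_stop unchanged.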

Quantum Uncertainty: Suppose we try to imagine how we would calculate the frequency of a wave if we had no means to examine it except by using another similar wave and counting the number of beats that the 'strange wave' makes against the standard wave we have generated. This is exactly the situation we face in quantum physics, because all our tools are ultimately made up of the same kinds of wave-particle quanta we are trying to investigate. If we can't measure the amplitude of the wave at a given time, but only how many beats occur in a given period, we can only determine the frequency with any accuracy by letting several beats pass. We have then, however, let a considerable time elapse, so we don't know exactly when the frequency was at this value.

The closer we choose our frequency to get a given accuracy, the longer the beats take to occur. We thus cannot know the time and the frequency simultaneously. The more precisely we try to define the frequency, the more the time is smeared out. Measuring a wave frequency with beats has intrinsic uncertainty as to the time, which becomes a smeared-out interval. The relationship between the frequencies and the beats is: Δν·Δt ≈ 1 (2).

Fig 38: Waves and beats.
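This beat argument can be checked numerically. The sketch below (illustrative parameters only, not from the article) takes a pure tone observed for different lengths of time and measures the width of its frequency spectrum: the product Δν·Δt stays of order one however long we watch, as equation (2) asserts.

import numpy as np

f0, fs = 103.3, 10000.0                    # true frequency (Hz) and sampling rate (assumed values)
for T in (0.05, 0.2, 1.0):                 # observation times in seconds
    t = np.arange(0, T, 1/fs)
    spectrum = np.abs(np.fft.rfft(np.sin(2*np.pi*f0*t)))**2
    freqs = np.fft.rfftfreq(len(t), 1/fs)
    # width of the spectral peak above half maximum, as a crude Delta_nu
    peak = freqs[spectrum > spectrum.max()/2]
    d_nu = peak.max() - peak.min() + freqs[1]
    print("Delta_t = %.2f s   Delta_nu ~ %.1f Hz   product ~ %.2f" % (T, d_nu, d_nu*T))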

Despite gaining his fame for discovering relativity, and the doom equation E = mc² which made the atom bomb possible, Einstein, possibly in cooperation with his wife, also made a critical discovery about the quantum. Einstein's law connects to every energetic particle a frequency ν = E/h.

If we apply equations (1) & (2) together, we immediately get the famous Heisenberg uncertainty relation ΔE·Δt ≈ h, or more exactly ΔE·Δt ≥ ħ/2. It tells us something is happening which is impossible in the classical world. We can't know the energy of a quantum interaction and the time it happened simultaneously. Energy and time have entered into a primal type of prisoners' dilemma catch-22. The closer we try to tie down the energy, the less precisely we know the time. This peculiar relationship places a specific taboo on knowing all the features of a situation and means we cannot predict precise outcomes, only probabilities. The same goes for momentum and position in each of the three spatial dimensions. Notice also that this links energy and momentum, time and space, and frequency and wavelength as three manifestations of one another. The way in which this happens is illuminating. Each quantum can be conceived as a particle or as a wave, but not both at the same time. Depending on how we are interacting with it or describing it, it may appear as either.

Quantum Chemistry: All particles, such as the electrons, protons and neutrons which make up the atoms of our chemical elements and molecules, exist as both particles and waves. The orbitals of the electrons around atoms and those linking each molecule together occur only at the energies and sizes which correspond to a perfect standing wave, forming a set of discrete levels like the layers of an onion. These in turn determine the chemical properties of each substance. Because the molecular orbitals formed between a pair of atoms have lower energy than their individual atomic counterparts, the atoms react to form a molecule, releasing the spare energy as heat. The characteristic energy differences between the levels of a given atom can be seen, both on earth and in the universe at large, as emission or absorption lines in the electromagnetic spectrum.

Fig 39: Quantum chemistry. (a) s, p, d, f orbitals have orbital angular momentum quantum numbers 0, 1, 2 and 3 respectively. Each occurs in a series of levels, forming the shells or orbitals of the atom. The first level 1s can contain 2 electrons of opposite spin. The second, with 2s and the three p orbitals 2px, 2py and 2pz, can hold 8. These can form energy-balancing linear combinations, resulting in hybrid sp orbitals. (b) Two s orbitals form a lower energy σ molecular orbital, as well as a higher energy σ* repelling anti-bonding orbital if the electron spins are not complementary (see fermions below). (c) Bonding p orbitals can also form π orbitals. Six p orbitals can combine to form a single delocalized π molecular orbital, as in the benzene ring. Hybrid atomic orbitals sp, sp2 and sp3 lead to the linear, planar and tetrahedral bonding arrangements seen in many molecules, due to energy minimization. (d) Absorbed or emitted photons cause electron transitions between orbitals in hydrogen, giving rise to the signature of the hydrogen spectrum (e). This signature in space, red-shifted far into the low frequencies, revealed the expanding universe.
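The hydrogen signature in (d) and (e) follows from the standard Rydberg formula, 1/λ = R(1/m² - 1/n²); a short sketch (not from the article) reproducing the visible Balmer lines:

R = 1.097373e7          # Rydberg constant, m^-1

def line_nm(n, m):
    return 1e9 / (R * (1/m**2 - 1/n**2))   # wavelength in nm for the transition n -> m

# Balmer series (transitions down to m = 2) gives the visible lines of Fig 39 (e)
for n in (3, 4, 5, 6):
    print("n = %d -> 2 : %.1f nm" % (n, line_nm(n, 2)))
# ~656 (red), 486, 434 and 410 nm; the red-shifted versions of these lines in distant
# galaxies revealed the expanding universe.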

Two-slit interference and Complementarity

We are all familiar with the fact that CDs have a rainbow appearance on their underside. This comes from the circular tracks, spaced a distance similar to the wavelength of visible light. If we used light of a single wavelength we would see light and dark bands. We can visualize this process more simply with just two slits, as in fig 40. When many photons pass through, their waves interfere as shown, and the photographic plate gets dark and light interference bands where the waves from the two slits reinforce or cancel, because the photons are more likely to end up where their superimposed wave amplitude is large. The experiment confirms the wave nature of light, since the size of the bands is determined by the distance d between the slits in relation to the wavelength λ = c/ν, where c is the velocity of light: bright bands occur where the path difference d·sinθ = nλ.

We know each photon passes through both slits, because we can slow the experiment down so much that only one photon is released at a time and we still eventually get the interference pattern over time. Each photon released from the light bulb is emitted as a particle from a single hot atom, whose excited electron is jumping down from a high energy orbit to a lower one. It is thus released locally and as a single 'particle ' created by a single transition between two stable electron orbitals, but it spreads and passes through both slits as a wave. After this the two sets of waves interfere as shown in fig 40 to make light and dark bands on the photographic plate when the light is of a single frequency, and the rainbows we see on a CD or DVD when white light of many frequencies is reflected off the shiny rings between the grooves in the manner of a multi-slit apparatus.
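A minimal sketch of the wave picture just described (illustrative slit separation, wavelength and screen distance, not tied to any particular apparatus): adding the two slit amplitudes reproduces bands spaced by λL/d.

import numpy as np

lam = 500e-9        # m, green light (assumed)
d   = 50e-6         # m, slit separation (assumed)
L   = 1.0           # m, distance to the screen (assumed)

x = np.linspace(-0.05, 0.05, 2001)                 # positions on the screen (m)
r1 = np.sqrt(L**2 + (x - d/2)**2)                  # path from slit 1
r2 = np.sqrt(L**2 + (x + d/2)**2)                  # path from slit 2
intensity = np.abs(np.exp(2j*np.pi*r1/lam) + np.exp(2j*np.pi*r2/lam))**2

peaks = x[1:-1][(intensity[1:-1] > intensity[:-2]) & (intensity[1:-1] > intensity[2:])]
print("measured band spacing ~ %.2f mm" % (1e3*np.diff(peaks).mean()))
print("lambda*L/d            = %.2f mm" % (1e3*lam*L/d))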

The evolution of the wave is described by an equation involving rates of change of a wave function φ with respect to space and time. For example, for a massive particle in free space we have a 1-D differential equation (the Klein-Gordon equation): ∂²φ/∂t² = c²∂²φ/∂x² - (mc²/ħ)²φ. For Schrodinger's and Dirac's wave equations see the appendix.

This equation emphasizes the relationship between space and time we see emerging in special relativity below. The complementary relationship between Schrodinger's continuous wave equation and Heisenberg's discrete matrix mechanics (see appendix), which in a sense mirrors the wave and particle aspects of the quantum, highlights a deeper complementarity in mathematics between the discrete operations of algebra and the continuous properties of calculus, which may also be expressed in the brain (p 367).

Fig 40: Two-slit interference experiment (Sci. Am. Jul 92)

For the bands to appear in the interference experiment, each single photon has to travel through both slits as a wave. If you try to put any form of transparent detector in the slits to tell if it went through one or both you will always find only one particle but now the interference pattern will be destroyed. This happens even if you use the gentlest forms of detection possible such as an empty resonant maser chamber (a maser is a microwave laser). Any measurement sensitive enough to detect a particle alters its momentum enough to smear the interference pattern into the same picture you would get if the particle just went through one slit. Knowing one aspect destroys the other.

Now another confounding twist to the catch-22. The photon has to be absorbed again as a particle by an atom on the photographic plate, or somewhere else, before or after, if it doesn't career forever through empty space - something we shall deal with shortly. Where exactly does it go? The rules of quantum mechanics are only statistical. They tell us only that the particle is more likely to end up where the amplitude of the wave is large, not where it will actually go on any one occasion. The probability is precisely the complex square of the wave's amplitude at any point (the Born rule): P = |ψ|² = ψ*ψ.

Hence the probability is spread throughout the extent of the wave function, extending throughout the universe at very low probabilities. Quantum theory thus describes all future (and past) states as probabilities. Unlike classical probabilities, we cannot find out more about the situation and reduce the probability to a certainty by deeper investigation, because of the limits imposed by quantum uncertainty. The photon could end up anywhere the wave is non-zero. Nobody can tell exactly where, for a single photon. Each individual photon really does seem to end up being absorbed as a particle somewhere, because we will get a scattered pattern of individual dark crystals on the film at very low light intensities, which slowly build up to make the bands again. This is the mysterious phenomenon called 'reduction, or collapse, of the wave packet'. Effectively the photon was in a superposition of states represented by all the possible locations within the wave, but suddenly became one of those possible states, now absorbed into a single localized atom where we can see its evidence as a silver mark on the film. Only when there are many photons does the behaviour average out to the wave distribution. Thus each photon seems to make its own mind up about where it is going to end up, with the proviso that on average many do this according to the wave amplitude's probability distribution. So is this quantum free-will? It may be.
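The statistical character of collapse is easy to mimic numerically. The sketch below (a toy wave function, not a model of any specific experiment) draws each photon's landing point at random from the Born probability |ψ|²: single hits look structureless, and the bands only emerge as the counts accumulate.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 1000)                       # screen position in arbitrary units
psi = np.exp(-x**2/8) * np.cos(3*x)                # toy two-slit wave amplitude (envelope times fringes)
p = np.abs(psi)**2
p /= p.sum()                                       # normalised Born probabilities

for n_photons in (10, 100, 100000):
    hits = rng.choice(x, size=n_photons, p=p)      # each photon 'collapses' to one point
    counts, _ = np.histogram(hits, bins=20, range=(-5, 5))
    print(n_photons, counts)                       # the bands only emerge statistically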

Fig 40a: Interference demonstrated in 2019 for large molecules (Fein et al. 2019).

Experiments can also be done using electrons, but in 2019 a team (Fein et al.) reported extending the domain of interference experiments to large molecules. They report interference of a molecular library of functionalized oligo-porphyrins with masses beyond 25,000 Da (the atomic mass unit is 1/12 the mass of carbon-12), consisting of up to 2,000 atoms - by far the heaviest objects shown to exhibit matter-wave interference to date. Porphyrins, such as chlorophyll and heme, are polycyclic molecules which also appear in prebiotic syntheses. This shows that molecules, including those in biological organisms, spread as quantum waves.

The Cat Paradox and the Role of Consciousness

This situation is the subject of a famous thought experiment by Schrodinger, who invented the wave equation. In Schrodinger's cat paradox, we use an interference experiment with about one photon a second and we detect whether the photon hits one of the bright bands to the left (we can do the same thing measuring electron spin using an asymmetric magnetic field). If it does then a cat is killed by smashing a cyanide flask. Now when the experimenter opens the box, they find the cat is either alive or dead, but quantum theory simply tells us that the cat is both alive and dead, each with differing probabilities - superimposed alive and dead states. This is counterintuitive, but fundamental to quantum reality. The cat paradox can also apply to other variables such as temperature. In a classical situation, temperature is measured by establishing equilibrium with the process being measured, but in a quantum context for example in sampling the temperature of a quantum dot, uncertainty will reduce on measurement to a cat paradox situation (Miller & Anders 2018).

Fig 41: Cat paradox experiment variations (King)

In the cat paradox experiment, the wave function remains uncollapsed at least until the experimenter I opens the box. Heisenberg suggested representing the collapse as occurring when the system enters the domain of thermodynamic irreversibility, i.e. at C. Schrodinger suggested the formation of a permanent record, e.g. classical physical events D, E or computer data G, and Wigner (see below) pointed to a paradox with a second observer H. However, even these classical outcomes could be superpositions, at least until a conscious observer experiences them, as the many-worlds theory below suggests. Schrodinger in "What is Life?" also viewed consciousness as primary: "The observer is never entirely replaced by instruments; for if he were, he could obviously obtain no knowledge whatsoever. ...They must be read! The observer's senses have to step in eventually. The most careful record, when not inspected, tells us nothing". This process is called quantum measurement.

John von Neumann went further and proposed that quantum observation is the action of a conscious mind and that everything in the universe that is subject to the laws of quantum physics creates one vast quantum superposition. But the conscious mind is different, being able to select out one of the quantum possibilities on offer, making it real - to that mind. Max Planck, the founder of quantum theory, said in 1931, "I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness".

Indeed there seems to be no way any purely physical non-conscious interaction can result in a quantum measurement, because when two inanimate objects interact they simply become quantum-mechanically entangled with one another, and no actual quantum measurement is performed. We thus simply have a larger physical system, again in a superposition of states. The claim that entanglement is observed only in microscopic systems, and that its peculiarities are therefore allegedly irrelevant to the world of tables and chairs, is patently untrue, as macroscopic systems have now also become entangled.

Collapse of the wave function projects the linear combination onto one of its basis states. Originally, it was thought by von Neumann and others that a measurement on a quantum system would inevitably destroy all quantum superpositions. Later, Lüders pointed out that certain superpositions should survive, so that a sequence of ideal measurements would preserve quantum coherence. In theory, an ideal measurement projects a quantum state onto the eigenbasis of the measurement observable, while preserving coherences between eigenstates that have the same eigenvalue, for example representing the same energy in different configurations. Experiments have now demonstrated this, using the coupling of a trapped-ion qutrit (a unit of quantum information realized by a quantum system described by a superposition of three mutually orthogonal quantum states) to the photon environment, by taking tomographic snapshots during the detection process (Pokorny et al. 2020, Tracking the Dynamics of an Ideal Quantum Measurement, Phys. Rev. Lett., doi: 10.1103/PhysRevLett.124.080401).
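The Lüders rule itself is simple to state in matrix form, ρ → Σ_k P_k ρ P_k. The generic qutrit sketch below (not the trapped-ion experiment itself) shows coherence surviving between the two basis states that share an eigenvalue, while coherence with the third state is destroyed.

import numpy as np

psi = np.ones(3) / np.sqrt(3)                  # equal superposition of |0>, |1>, |2>
rho = np.outer(psi, psi.conj())

# observable with a degenerate eigenvalue: |0> and |1> share outcome 'a', |2> gives 'b'
P_a = np.diag([1.0, 1.0, 0.0])
P_b = np.diag([0.0, 0.0, 1.0])
rho_after = P_a @ rho @ P_a + P_b @ rho @ P_b  # non-selective Lueders update

print(np.round(rho_after, 3))
# the 0-1 off-diagonal term (1/3) survives the measurement; the 0-2 and 1-2 terms vanish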

Some claim that quantum decoherence rules out consciousness as the agency of measurement. According to this claim, when a quantum system in a superposition state is probed, information about the overlapping possibilities in the superposition "leaks out" and becomes dispersed in the surrounding environment. This allegedly explains in a fairly mechanical manner why the superposition becomes indiscernible after measurement. But decoherence cannot explain how the state of the surrounding environment becomes definite to begin with, so it doesn't solve the measurement problem, or rule out the role of consciousness. Indeed Wojciech Zurek, who proposed the idea of decoherence, noted "an exhaustive answer to [the question of why we perceive a definite world] would undoubtedly have to involve a model of 'consciousness', since what we are really asking concerns our [observers'] impression that 'we are conscious' of just one of the alternatives".

Contrary to the assumed vulnerability of all quanta to decoherence, scientists have found that quasiparticles in quantum systems could be effectively immortal (Verresen et al. 2019, Avoided quasiparticle decay from strong quantum interactions, Nature Physics, doi:10.1038/s41567-019-0535-3). This doesn't mean they don't decay, but once they have, they are able to reorganise themselves back into existence, possibly ad infinitum. Although we know that quantum interactions are in principle time-reversible, the assumption was that quasiparticles in interacting quantum systems would ultimately decay, consistent with the second law of thermodynamics. However, using detailed computer simulations, the researchers found that if this decay proceeds very quickly, an inverse reaction will occur after a certain time and the debris will converge again. This process can recur endlessly, and a sustained oscillation between decay and rebirth emerges. Because the oscillation is a wave that is transformed into matter, covered by wave-particle duality, the entropy is not decreasing but remaining constant. Examples of quasiparticles are the phonons of harmonic chemical bond excitation, magnons in exotic magnetic materials that are paradoxically stable, and rotons in superfluid helium.

As noted by Kastrup, Stapp & Kafatos (2018), the Bell theorem experiments indicate that the everyday world we perceive does not exist until observed, which in turn suggests a primary role for mind in nature. The mind that underlies the world is a transpersonal mind behaving according to natural laws. It comprises but far transcends any individual psyche. The dynamics of all inanimate matter in the universe correspond to transpersonal mentation, just as an individual's brain activity - which is also made of matter - corresponds to personal mentation. This notion eliminates arbitrary discontinuities and provides the missing inner essence of the physical world: all matter - not only that in living brains - is the outer appearance of inner experience, different configurations of matter reflecting different patterns or modes of mental activity.

What philosophers of science such as Philip Goff have realized (Cook G 2020 Does Consciousness Pervade the Universe? Scientific American https://www.scientificamerican.com/article/does-consciousness-pervade-the-universe/.) is that physical science, for all its richness, is confined to telling us about the behavior of matter, what it does. Physics tells us, for example, that matter has mass and charge. These properties are completely defined in terms of behavior, things like attraction, repulsion, resistance to acceleration. Physics tells us absolutely nothing about what philosophers like to call the intrinsic nature of matter: what matter is, in and of itself. So it turns out that there is a huge hole in our scientific story. The proposal of the panpsychist is to put consciousness in that hole. Consciousness, for the panpsychist, is the intrinsic nature of matter. There's just matter, on this view, nothing supernatural or spiritual. But matter can be described from two perspectives. Physical science describes matter "from the outside," in terms of its behavior. But matter "from the inside" - i.e., in terms of its intrinsic nature - is constituted of forms of consciousness, just as our subjective experiences complement molecular brain function. We will see in the context of quantum entanglement that this interpretation extends to entangled particles and that there is no calculable upper limit on how complex such entanglements could become.

Nearly 60 years ago, the Nobel Prize–winning physicist Eugene Wigner captured one of the many oddities of quantum mechanics in a thought experiment. He imagined a friend of his, sealed in a lab, measuring a particle such as an atom while Wigner stood outside. Quantum mechanics famously allows particles to occupy many locations at once—a so-called superposition—but the friend's observation "collapses" the particle to just one spot. Yet for Wigner, the superposition remains: The collapse occurs only when he makes a measurement sometime later. Worse, Wigner also sees the friend in a superposition. Their experiences directly conflict.

Wigner's friend is a version of the cat paradox, in which a human or AI assistant G or H, possibly sensing directly inside the box or even being in the box (hence the cat-like features in fig 41), reports on the result, establishing that unless the first conscious observer collapses the wave function, there could be a conscious observer in a multiplicity of alternative states, which is an omnipresent drawback of the many-worlds view. In a macabre version the conscious assistant is of course the cat. According to the Copenhagen interpretation, it is not the system which collapses, but only our knowledge of its behavior. The superimposed state within the wave function is then not regarded as a real physical entity at all, but only a means of describing our knowledge of the quantum system and calculating probabilities.


Fig 41a: Gedanken experiment of Frauchiger and Renner (2018) leading to possible internal inconsistency of the Copenhagen interpretation (Ananthaswamy 2018). The key assumption that appears to have been glossed over in the analysis is the role of the conscious observer. By putting conscious observers in the box, the assumption that an external quantum measurement will still find a superposition of states when the conscious observer inside has seen a tail rather than a head, and has furthermore acted upon it by preparing a particle in a certain state, goes to the heart of the question of consciousness being key to a quantum measurement collapsing the wave function. However, subjective consciousness is not an objective physical system either, so grouping this into the assumption that quantum theory is universal is facile: quantum theory may indeed be universal to physical reality, but subjective consciousness is not an objective physical phenomenon as such. Thus it is not quantum theory that is at stake but the physical nature of subjective consciousness, which raises another deep question - how the physical brain actually generates subjective consciousness - one that may be much more difficult to rationalize than quantum theory itself. Substitution of a quantum computer would appear to render the consistency assumption invalid, as the inconsistency is contained in the superposition of states, and Alice's friend and Bob's friend now don't receive one or other prepared particle type but a superposition. This would mean that both Alice and Bob are now observing mixed states and would thus disagree with a certain probability arising from the superpositions. This is effectively similar to the conclusion that the many-worlds interpretation renders.

In an elaboration of the Wigner's friend idea, Frauchiger and Renner (2018) describe a gedanken experiment in which they have two Wigners, Alice and Bob, each doing an experiment on one of a pair of physicist friends, Alice F and Bob F, whom they each keep in a box containing a laboratory. They investigate the question of whether quantum theory can, in principle, have universal validity. The idea is that, if the answer is yes, it must be possible to employ quantum theory to model complex systems that include agents who are themselves using quantum theory. Analysing the experiment under this presumption, they find that one agent, upon observing a particular measurement outcome, must conclude that another agent has predicted the opposite outcome with certainty. The agents' conclusions, although all derived within quantum theory, are thus claimed to be inconsistent.

One of the two friends, Alice F, can toss a coin and - using her knowledge of quantum physics - prepare a quantum message to send to Bob F. Using his knowledge of quantum theory, Bob F can detect Alice's message and guess the result of her coin toss. When the two Wigners effectively open their boxes, by making a quantum measurement on them as a complex quantum system in a superposition of states, in some situations they can conclude with certainty which side the coin landed on, but occasionally - about 1/12 of the time, given the design of the preparation - their conclusions are inconsistent.

This raises fundamental questions about the consistency of the Copenhagen interpretation, but the experiment requires knowing all the quantum variables of Alice F and Bob F, which is currently unfeasible, although a quantum computing version might achieve it. Other theories, from Bohmian and other hidden-variable theories to gravitational collapse, violate one or more of the assumed conditions of Q (the Born rule), C (consistent reasoning) and S (no multiple measurement values) and are thus not shown to be inconsistent.

The contradiction between the superposition of the wave function and the observer's experience of the wave function having collapsed into one of its many superimposed possibilities has led to various spontaneous collapse theories, such as the GRW theory (Ghirardi, Rimini, Weber 1986), in which collapse is spontaneous and random, but because of the large number of particles with which the quantum can interact (and thus become entangled) during a measurement by a macroscopic device and/or conscious observer, the probability of collapse approaches unity. However neither this, nor the CSL (continuous spontaneous localization) theory (Ghirardi, Pearle, and Rimini 1990), which successfully models systems of identical particles, is free from physical contradictions. To avoid violating the principle of the conservation of energy, any collapse must be incomplete. Almost all of the wave function is contained at the one value, but there are one or more small tails where the function should intuitively equal zero but mathematically does not. Under the probability interpretation, this would mean that some matter has collapsed elsewhere than the measurement indicates, or that (with low probability) an object might jump from one collapsed state to another. These options are counterintuitive and physically unlikely.
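The 'large numbers' argument for GRW can be made quantitative with the commonly quoted (and here assumed) GRW localisation rate of about one spontaneous hit per nucleon per 10^16 s:

lam = 1e-16                 # s^-1, assumed GRW hit rate per nucleon
for n in (1, 1e6, 1e23):    # one particle, a dust grain, a macroscopic device
    print("N = %.0e  ->  mean time between collapses ~ %.1e s" % (n, 1 / (lam * n)))
# roughly 1e16 s (hundreds of millions of years) for a single particle, but ~1e-7 s
# once ~1e23 entangled nucleons each give the superposition a chance to collapse.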

The Quantum Measurement Problem May Contradict Objective Reality


In quantum theory, before collapse, the system is said to be in a superposition of two states, and this quantum state is described by the wave function, which evolves in time and space. This evolution is both deterministic and reversible: given an initial wave function, one can predict what it'll be at some future time, and one can in principle run the evolution backward to recover the prior state. Measuring the wave function, however, causes it to collapse, mathematically speaking, such that the system in our example shows up as either heads or tails. It's an irreversible, one-time-only event, and no one knows what defines the process or boundaries of measurement.


One model that preserves the absoluteness of the observed event - either heads or tails for all observers - is the GRW theory, where quantum systems exist in a superposition of states until the superposition spontaneously and randomly collapses, independent of an observer. Whatever the outcome - heads or tails in our example - it holds for all observers. But GRW, and the broader class of "spontaneous collapse" theories, run foul of a long-cherished physical principle: the preservation of information. By contrast, the "many worlds" interpretation of quantum mechanics allows for non-absoluteness of observed events, because the wave function branches into multiple contemporaneous realities, such that in one "world" the system will come up heads, while in another it'll be tails.


Ormrod, Venkatesh and Barrett (2023, Ananthaswamy 2023) focus on perspectival theories that obey three properties:


(1) Bell nonlocality (B). Alice chooses her type of measurement freely and independently of Bob, and vice versa –  of their own free will – an important assumption. Then, when they eventually compare notes, the duo will find that their measurement outcomes are correlated in a manner that implies the states of the two particles are inseparable: knowing the state of one tells you about the state of the other (see the numerical sketch after this list).

(2) The preservation of information (I). Quantum systems that show deterministic and reversible evolution satisfy this condition.  If you are wearing a green sweater today, in an information-preserving theory, it should still be possible, in principle, 10 years hence to retrieve the colour of your sweater even if no one saw you wearing it.

(3) Local dynamics (L). If there exists a frame of reference in which two events appear simultaneous, then the regions of space are said to be “space-like separated.” Local dynamics implies that the transformation of a system that takes a set of input states and produces a set of output states in one of these regions cannot causally affect the transformation of a system in the other region any faster than the speed of light, and vice versa. Each subsystem undergoes its own transformation, and so does the entire system as a whole. If the dynamics are local, the transformation of the full system can be decomposed into transformations of its individual parts: the dynamics are said to be separable.   In contrast, when two particles share a state that’s Bell nonlocal (that is, when two particles are entangled, per quantum theory), the state is said to be inseparable into the individual states of the two particles. If transformations behaved similarly, in that the global transformation could not be described in terms of the transformations of individual subsystems, then the whole system would be dynamically inseparable.
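For property (1), the canonical quantitative statement is the CHSH test: any locally separable account keeps |S| ≤ 2, whereas the entangled singlet state reaches 2√2. A short sketch of the standard textbook setup (not from the papers discussed here):

import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)

def spin(theta):                        # spin measurement along an angle in the X-Z plane
    return np.cos(theta)*Z + np.sin(theta)*X

def E(a, b):                            # correlation <A(a) x B(b)> in the singlet state
    return np.real(singlet.conj() @ np.kron(spin(a), spin(b)) @ singlet)

a, a2, b, b2 = 0, np.pi/2, np.pi/4, 3*np.pi/4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S), "vs classical limit 2")   # 2.828...: Bell nonlocal correlations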


Fig 76b: A graphical summary of the theorems. Possibilistic Bell Nonlocality is Bell Nonlocality that arises not only at the level of probabilities, but at the level of possibilities.


Their work analyses how perspectival quantum theories are BINSC, and that NSC implies L, so BINSC is BIL. Such BIL theories are then required to handle a deceptively simple thought experiment. Imagine that Alice and Bob, each in their own lab, make a measurement on one of a pair of particles. Both Alice and Bob make one measurement each, and both do the exact same measurement. For example, they might both measure the spin of their particle in the up-down direction. Viewing Alice and Bob and their labs from the outside are Charlie and Daniela, respectively. In principle, Charlie and Daniela should be able to measure the spin of the same particles, say, in the left-right direction. In an information-preserving theory, this should be possible. Using this scenario, the team proved that the predictions of any BIL theory for the measurement outcomes of the four observers contradict the absoluteness of observed events. This leaves physicists at an unpalatable impasse: either accept the non-absoluteness of observed events or give up one of the assumptions of a BIL theory.


Ormrod says dynamical separability is “kind of an assumption of reductionism – you can explain the big stuff in terms of these little pieces.” Just like a Bell nonlocal state cannot be reduced to some constituent states, it may be that the dynamics of a system are similarly holistic, adding another kind of nonlocality to the universe.  Importantly, giving it up doesn’t cause a theory to fall afoul of Einstein’s theories of relativity, much like physicists have argued that Bell nonlocality doesn’t require superluminal or nonlocal causal influences but merely nonseparable states. Ormrod, Venkatesh and Barrett note: “Perhaps the lesson of Bell is that the states of distant particles are inextricably linked, and the lesson of the new ... theorems is that their dynamics are too.” The assumptions used to prove the theorem don’t explicitly include an assumption about freedom of choice because no one is exercising such a choice. But if a theory is Bell nonlocal, it implicitly acknowledges the free will of the experimenters.


Fig 76c: Above: An experimental realisation of the Wigner's friend setup showing there is no such thing as objective reality - quantum mechanics allows two observers to experience different, conflicting realities. Below: the proof-of-principle experiment of Bong et al. (2020), demonstrating the mutual inconsistency of 'No-Superdeterminism', 'Locality' and 'Absoluteness of Observed Events'.


An experimental realisation of non-absoluteness of observation has been devised (Proietti et al. 2019), as shown in fig 76c, using quantum entanglement. The experiment involves two people observing a single photon that can exist in one of two alignments, but until the moment someone actually measures it to determine which, the photon is in a superposition. A scientist analyses the photon and determines its alignment. Another scientist, unaware of the first's measurement, is able to confirm that the photon - and thus the first scientist's measurement - still exists in a quantum superposition of possible outcomes. As a result, each scientist experiences a different reality - both "true" even though they disagree with each other. In a subsequent experiment, Bong et al. (2020) transform the thought experiment into a mathematical theorem that confirms the irreconcilable contradiction at the heart of the Wigner scenario. The team also tests the theorem with an experiment, using photons as proxies for the humans, accompanied by new forms of Bell's inequalities, by building on a scenario with two separated but entangled friends. The researchers prove that if quantum evolution is controllable on the scale of an observer, then one of (1) No-Superdeterminism - the assumption of 'freedom of choice' used in derivations of Bell inequalities, that the experimental settings can be chosen freely, uncorrelated with any relevant variables prior to that choice - (2) Locality, or (3) Absoluteness of Observed Events - that every observed event exists absolutely, not relatively - must be false. Although the violation of Bell-type inequalities in such scenarios is not in general sufficient to demonstrate the contradiction between those three assumptions, new inequalities can be derived, in a theory-independent manner, that are violated by quantum correlations. This is demonstrated in a proof-of-principle experiment where a photon's path is deemed an observer. This new theorem places strictly stronger constraints on physical reality than Bell's theorem.

Penrose, in objective reduction, singles out gravity as the key unifying force and suggests that interaction with gravitons splits the wave function, causing reduction. Others try to discover hidden laws which might provide the sub-quantum process, for example the pilot wave theory, in which a well-defined particle is piloted by a non-local wave, as developed by David Bohm (1966). This can produce results comparable with quantum mechanics and provides an example of a plausible theory underlying quantum reality, but it has difficulties defining positions when new particles with new quantum degrees of freedom are created.

It has also met with a failure to replicate its results in analogous macroscopic systems using small oil droplets suspended on acoustically excited water waves, including work by a team led by Bohr's grandson Tomas (Andersen et al. 2015), which has not reproduced the two-slit interference fringes that should appear, because the wave becomes separated by obstructions, and one component decays due to the particle introducing a term in the Hamiltonian which also influences the wave.

Another approach we will explore is the transactional interpretation, which has features of all these ideas and seeks to explain this process in terms of a hand-shaking relationship between the past and the future, in which space-time itself becomes sexual. Key here is the fact that reduction is not like any other physical process. One cannot tell when or where it happens, again suggesting it is part of the 'spooky' interface between quantum and consciousness.

Whatever model of quantum mechanics is used, the Born rule assigning a particulate probability on the basis of the squared amplitude of the wave function applies. In the dynamical collapse theories such as GRW, collapse is assumed to be a random physical event. In the pilot wave theory it becomes the probability estimate of our incomplete knowledge of the presumed deterministic hidden variables, which remain inaccessible to direct measurement. In the Everett many-worlds interpretation, due to self locating uncertainty on a given branch, the credence you should attach to being on any particular branch of the wave function is just the amplitude squared for that branch, just as in ordinary quantum mechanics.

In many situations people try to explain away the intrinsic problems of uncertainty on the basis that, in the large real processes we witness, individual quantum uncertainties cancel in the law of averages of large numbers of particles. They will suggest, for example, that neurons are huge in terms of quantum phenomena and that the 'law of mass action' engulfs quantum effects. However, brain processes are notoriously sensitive to external and internal perturbations. Moreover, history itself is a unique process emerging out of a sequence of such unrepeated events at each stage of the process. Critical decisions we make become watersheds. History and evolution are both processes littered with unique idiosyncratic acts in a counterpoint to the major forces shaping the environment and landscape. It is therefore unclear whether these processes adhere to the Born rule's probabilities, which become established only on multiple repetitions of a given detection event, any more than does the position of a single quantum detection within the entire wave function, as in the cat paradox, or in the interference experiment, where we cannot ask for the position of a single photon because it is entirely uncertain. Chaotic processes are potentially able to inflate arbitrarily small fluctuations, so molecular chaos may 'inflate' the fluctuations associated with quantum uncertainty into macroscopic uncertainties.

As a final epitaph to the cat paradox, physicists have discovered how to catch and even reverse a quantum jump mid-flight (Minev et al. 2019 Nature doi: 10.1038/s41586-019-1287-z). An atom releasing a photon, as in fig 41, is making what is apparently a discrete transition from an excited state to a lower state, thereby releasing a single photon as a quantum; however the wave aspect of the photon has to be released over time, effectively as a continuous radiative transition. Quantum jumps were first observed in an atomic ion driven by a weak deterministic force while under strong continuous energy measurement. The times at which the discontinuous jump transitions occur are reputed to be fundamentally unpredictable. The experiment demonstrates that the jump from the ground state to an excited state of a superconducting artificial three-level atom can be tracked as it follows a predictable 'flight', by monitoring the population of an auxiliary energy level coupled to the ground state. The evolution of each completed jump is continuous, coherent and deterministic. Using real-time monitoring and feedback, the experimenters were also able to catch and reverse quantum jumps mid-flight - thus deterministically preventing their completion. The findings support quantum trajectory theory (Gardiner et al. 1992 Phys. Rev. A 46/7 4363).


Fig 41b: Left: Three-level atom possessing a hidden transition (shaded region) between its ground and dark state, driven by the Rabi drive. Quantum jumps between ground and dark are indirectly monitored by a stronger Rabi drive between the ground and the bright state, whose occupancy is continuously monitored by an auxiliary oscillator (LC circuit on the right), itself measured in reflection by continuous-wave microwave light (depicted in light blue). When the atom is in the bright state, the resonance frequency of the LC circuit shifts to a lower frequency than when the atom is in ground or dark. Right: Catching the quantum jump mid-flight. (a) The atom is initially prepared in the bright state. The readout tone and atom Rabi drive are turned on until the catch condition is fulfilled, consisting of the detection of a click followed by the absence of click detections for a total threshold catch time. The Rabi drive can be shut off prematurely, before the end of the catch. A tomography measurement is performed after the catch time. (b) Conditional tomography revealing the continuous, coherent and, surprisingly, deterministic flight (when completed) of the quantum jump from ground to dark. Data obtained from 6.8 × 10⁶ experimental realizations. Solid lines represent theoretical predictions.

The quantum jump method is an approach which operates by evolving the system's wave function in time with a pseudo-Hamiltonian, where at each time step a random quantum jump (discontinuous change) may take place with some probability. The calculated system state as a function of time is known as a quantum trajectory, and the desired density matrix as a function of time can be calculated by averaging over many such trajectories. In this case a three-level atom is excited by a harmonic stimulation (Rabi drive) close to the atom's resonant frequency.
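A minimal sketch of such a quantum trajectory calculation, for a driven, decaying two-level atom with ħ = 1 (the Rabi frequency, decay rate and time step are arbitrary illustrative values, and this toy model is not the three-level superconducting circuit of the experiment):

import numpy as np
# Quantum-jump (Monte Carlo wave function) trajectory for a driven, decaying two-level atom.
# Basis |g> = [1,0], |e> = [0,1]; hbar = 1; all parameters are illustrative only.
omega, gamma, dt, steps = 1.0, 0.2, 0.01, 5000
H = 0.5 * omega * np.array([[0, 1], [1, 0]], dtype=complex)        # Rabi drive
c_op = np.sqrt(gamma) * np.array([[0, 1], [0, 0]], dtype=complex)  # jump operator |g><e|
H_eff = H - 0.5j * (c_op.conj().T @ c_op)                          # non-Hermitian pseudo-Hamiltonian
rng = np.random.default_rng(0)
psi = np.array([1, 0], dtype=complex)                              # start in the ground state
jumps = 0
for _ in range(steps):
    p_jump = gamma * dt * abs(psi[1]) ** 2        # probability of a jump in this time step
    if rng.random() < p_jump:
        psi = c_op @ psi                          # discontinuous jump: photon emitted, atom resets to |g>
        jumps += 1
    else:
        psi = psi - 1j * dt * (H_eff @ psi)       # smooth non-Hermitian evolution (first order in dt)
    psi = psi / np.linalg.norm(psi)               # renormalize after either branch
print(f"{jumps} jumps recorded along this single quantum trajectory")
# Averaging many such trajectories recovers the density-matrix (master equation) evolution.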

The Two-timing Nature of Special Relativity

We also live in a paradoxical relationship with space and time. While space is to all purposes symmetric and multidimensional, and not polarized in any particular direction, time is singular in the present and polarized between past and future. We talk about the arrow of time as a mystery related to the increasing disorder or entropy of the universe. We imagine space-time as a four dimensional manifold but we live out a strange sequential reality in which the present is evanescent. In the words of the song Fly Like an Eagle - "time keeps slipping, slipping, slipping ... into the future ". There is also a polarized gulf between a past we can remember, the living present and a shadowy future of nascent potentialities and foreboding uncertainty. In a sense, space and time are complementary dimensionalities, which behave rather like real and imaginary complex variables, as we shall see below.

A second fundamentally important discovery in twentieth-century physics, complementing quantum theory, which transformed our notions of time and space, was the special theory of relativity. In Maxwell's classical equations for the transmission of light, light always has the same velocity c regardless of the movement of the observer or the source. Einstein realized that Maxwell's equations and the laws of physics could be preserved in all inertial frames - the principle of special relativity - only if the properties of space and time changed according to the Lorentz transformations as a particle approaches the velocity of light c:

x′ = γ(x − vt),   t′ = γ(t − vx/c²),   where γ = 1/√(1 − v²/c²)

Space becomes shortened along the line of movement and time becomes dilated. Effectively space and time are each being rotated towards one another like a pair of closing scissors. Consequently the mass, and hence the energy, of any particle with non-zero rest mass tends to infinity at the velocity of light:

m = γm₀ = m₀/√(1 − v²/c²) → ∞ as v → c

By integrating this relation, Einstein was able to deduce that the rest mass must also correspond to a huge energy E₀ = m₀c², which could be released for example in a nuclear explosion, since the mass of the radioactive products is less than the mass of the uranium that produces them - thus becoming the doom equation of the atom bomb.
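As a quick worked example of these relations (the 0.8c speed and the 1 g rest mass are arbitrary illustrative inputs):

from math import sqrt
c = 2.998e8                        # speed of light (m/s)
v = 0.8 * c                        # arbitrary illustrative speed
gamma = 1 / sqrt(1 - (v / c) ** 2) # Lorentz factor
print(f"gamma = {gamma:.3f}")                                   # ~1.667
print(f"1 s of proper time is observed as {gamma:.3f} s")       # time dilation
print(f"1 m along the motion contracts to {1 / gamma:.3f} m")   # length contraction
m0 = 1e-3                          # 1 gram of rest mass (kg)
print(f"rest energy E0 = m0*c^2 = {m0 * c ** 2:.2e} J")         # ~9e13 J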

In special relativity, space and time become related entities, which form a composite four-dimensional space-time, in which points are related by light cones - signals travelling at the speed of light from a given origin. In space-time, time behaves differently from space: when time is squared it carries a negative sign, just as the square of an imaginary complex number does.

Hence the negative sign in the formula for space-time distance (3), s² = x² + y² + z² − c²t², and the scissor-like reversed rotations of time and space into one another expressed in the Lorentz transformations. Stephen Hawking has noted that, if we treat time as an imaginary variable, the space-time universe could become a closed 'manifold' rather like a 4-D sphere, in which the cosmic origin is rather like the north pole of Earth, because imaginary time reverses the negative sign in (3) and gives us the usual Pythagorean distance formula in 4D.

Fig 42: Space-time light cone permits linkage of 'time-like' points connected by slower-than-light communication. In the 'space-like' region, the temporal order of events, and hence causality, depends on the observer.

A significant feature of special relativity is the fact that the relativistic energy-momentum equation E² = p²c² + m²c⁴ has dual energy solutions:

E = ±√(p²c² + m²c⁴)    (4)

The negative energy solution has reversed temporal direction. Effectively a negative energy anti-particle travelling backwards in time is exactly the same as a positive energy particle travelling forwards in time in the usual manner. The solution which travels in the normal direction (subsequent points are reached later) is called the retarded solution. The one which travels backwards in time is called the advanced solution. A photon is its own anti-particle so in this case we just have an advanced or retarded photon.

General relativity goes beyond this to associate gravity with the curvature of space-time caused by mass-energy. The Einstein field equations are governed by the following relationship:

R_μν − ½ R g_μν + Λ g_μν = (8πG/c⁴) T_μν    (5)

where R_μν is the Ricci tensor representing curvature, R is the scalar curvature, g_μν is the metric tensor representing the gravitational potential, Λ is the cosmological constant, G is Newton's gravitational constant, c is the speed of light, and T_μν is the stress-energy tensor representing the gravitational mass-energy field. Hence the equation explains gravitation as the curvature of space-time caused by mass-energy. Einstein introduced the cosmological constant as a mechanism to obtain a solution of the gravitational field equation that would lead to a static universe, but it was later realized that this would not be stable: local inhomogeneities would ultimately lead to either runaway expansion or contraction. The cosmological constant is equivalent to the vacuum energy - the energy density of empty space, which can be positive or negative. The stress-energy tensor responds to both pressure and mass-energy, as we shall see in dark energy models (equation 6).

Reality and Virtuality: Quantum fields and Seething Uncertainty

We have learned about waves and particles, but what about fields? What about the strange action-at-a-distance of electromagnetism and gravity? Special relativity and quantum theory combine to provide a succinct explanation of electromagnetism - in fact the most accurate theory ever devised by the human mind, correct to at least seven decimal places in describing the magnetic moment of the electron in terms of the hidden virtual photons which the electron emits and then almost immediately absorbs again.

Richard Feynman and others discovered the answer to this riddle by using uncertainty itself to do the job. The field is generated by particles propagated by a rule based on wave spreading. These particles are called virtual because they have no net positive energy and appear and disappear entirely within the window of quantum uncertainty, so we never see them except as expressed in the force itself. This seething tumult of virtual particles exactly produces the familiar effects of the electromagnetic field, and other fields as well. We can find the force between two electrons by integrating the effects of every virtual photon which could be exchanged within the limits of uncertainty and of every other possible virtual particle system, including pairs of electrons and positrons coming into a fleeting existence. However, note that we can't really eliminate the wave description because the amplitudes with which the particles are propagated from point to point are the hidden wave amplitudes. Uncertainty not only can create indefiniteness but it can actively create every conceivable particle out of the vacuum, and does so sine qua non. Special relativity and the advanced and retarded solutions that arise are also essential to enable the interactions that make the fabric of the quantum field. The advanced solutions are required to have negative energy and retarded solutions positive energy thus giving the correct results for both scattering and electron-positron interactions within the field so that electron scattering is the same as electron positron creation and annihilation.

Fig 43: Quantum electrodynamics: (a,b) Two Feynman diagrams in the electromagnetic repulsion of two electrons. In the first a single virtual photon is exchanged between two electrons, in the second the photon becomes a virtual electron-positron pair during its transit. All such diagrams are integrated together to calculate the strength of the electromagnetic force. (c) A homologous weak force diagram shows how neutron decay occurs via the W-particle of the weak nuclear force, which itself is a heavy charged photon, as a result of symmetry-breaking. A down quark becoming up changes a neutron (ddu) into a proton (duu). (d) Time-reversed electron scattering is the same as positron creation and annihilation.

Each more complex interaction involving one more particle vertex is smaller by a factor α = e²/ħc ≈ 1/137, where e is the electron charge and ħ and c are as above, called the 'fine structure constant'. This allows the contributions of all the diagrams to sum to a finite interaction, unlike many unified theories, which are plagued by infinities, as we shall see. The electromagnetic force is generated by virtual photons exchanged between charged particles existing only for a time and energy permitted by the uncertainty relation. The closer the two electrons, the larger the energy fluctuation possible over the shorter time taken to travel between them and hence the greater the force upon them. Even in the vacuum, where we think there is nothing at all, there is actually a sea of all possible particles being created and destroyed by the rules of uncertainty.
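In SI units the same dimensionless combination reads α = e²/(4πε₀ħc); a one-line check with standard constants (a sketch using the scipy constants module):

from math import pi
from scipy.constants import e, hbar, c, epsilon_0
# Fine-structure constant: the suppression factor for each extra vertex in a Feynman diagram.
alpha = e ** 2 / (4 * pi * epsilon_0 * hbar * c)
print(alpha, 1 / alpha)            # ~0.0072974, ~137.04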

The virtual particles of a force field and the real particles we experience as radiation such as light are one and the same. If we pump energy into the field, for example by oscillating it in a radio transmitter, the virtual photons composing the electromagnetic field become the real positive energy photons in radio waves entering the receiver as a coherent stream of real photons, encoding the music we hear. Relativistic quantum field theories always have both advanced and retarded solutions, one with positive and the other with negative energy, because of the two square roots of special relativity (4). They are often described by Feynman space-time diagrams. When the Feynman diagram for electron scattering becomes time-reversed, it then becomes precisely the diagram for creation and annihilation of the electron's anti-particle, the positron, as shown in fig 43. This hints at a fundamental role for the exotic time-reversed advanced solutions.

As a simple example, the wave equation for a zero-spin particle with mass m (the Klein-Gordon equation) has two plane-wave solutions φ± = e^(∓i(Et − p·x)/ħ), where E = √(p²c² + m²c⁴) - the retarded positive-energy and advanced negative-energy solutions.

The weak and strong nuclear forces can be explained as quantum particle fields in a similar way, but gravity holds out further serious catch-22s. Gravity is associated with the curvature of space-time, but this introduces fundamental contradictions with quantum field theory. To date there remains no fully consistent way to reconcile quantum field theory and gravitation as we shall see.

The Spooky Nature of Quantum Entanglement

We have already seen how the photon wave passing through two slits ends up being absorbed by a single atom. But how does the wave avoid two particles accidentally being absorbed in far flung parts of its wave function out of direct communication?

Because we can't sample two different points of a single-particle wave, it is impossible to devise an experiment which can test how a wave might collapse. One way to learn more about this situation is to try to find situations in which two or more correlated particles will be released coherently in a single wave. This happens with many particles in a laser, in the holograms made by coherent laser light, and in Bose-Einstein condensates. It also happens in other situations where two particles of opposite spin or complementary polarization are created together. Many years ago Einstein, Podolsky and Rosen (EPR) suggested we might be able to break through the veil of quantum uncertainty this way, indirectly finding out more about a single particle than it is usually prepared to let on. Einstein commented "I do not believe God is playing dice with the universe", but that is precisely what quantum entanglement seems to entail.


Fig 44: (a) Pair-splitting experiment for photons using polarization. The first experiments were done on electron spins, using a Stern-Gerlach magnet's non-uniform field to separate spin up and down particles. (b) A variant of the experiment in (a), in which a polarizing beam splitter leads to two detectors on each side, ensuring both polarization states are detected separately, avoiding errors from non-detection. (c) The results are consistent with quantum mechanics but inconsistent with Bell's inequalities for a locally causal system. Below is shown the CHSH (Clauser, Horne, Shimony, and Holt) inequality, an easier-to-use version of Bell's inequalities applicable to configuration (b), where N+- is the number of coincidences detected between Da+ and Db- etc., and a, b are the analyser angles. Using Bell's proof, the combined expectancies on the left are bounded above by 2, but the sinusoidal angular projection of quantum theory allows values up to 2√2. (d) Time-varying analyzers are added, driven by an optical switch too fast for light to cross the apparatus, showing the effect persists even when light doesn't have time to cross the apparatus. (e) The calcium transition (Aspect R25). (f) An experiment using the GHZ (Greenberger, Horne, and Zeilinger) arrangement involving three entangled photons generated by a pulse passed through a down converter (BBO) to create an entangled pair, beam-splitters (BS and PBS) as well as quarter- and half-wave plates, detected at D1, D2, D3, with T used as a trigger, according to the GHZ equation below, where the three photons are collectively either horizontally or vertically polarized (Nature 403 515-9). GHZ can return a violation of local causality directly without having to build a statistical distribution. A third relation, called the Leggett-Garg inequality (Arxiv:1304.5133), applies instead to the varying time of observations and has been tested on systems from qubits through to neutrino oscillations (Arxiv:1602.0004, fig 18).

A calcium atom's electron excited into a higher spin-0 s-orbital cannot fall back to its original s-orbital in one step, because a photon has spin 1 and a single spin-1 photon cannot be radiated in a transition between two spin-0 orbitals - the summed spins would not tally. The atom can however radiate two photons together as one quantum event, cancelling one another's spins, to transit to its ground state via an intermediate spin-1 p-orbital. This releases a blue and a yellow photon, which travel off in opposite directions with complementary polarizations.

When we perform the experiment, it turns out that the polarization of neither photon is defined until we measure one of them. When we measure the polarization of one photon, the other immediately - instantaneously - has complementary polarization. The nature of the angular correlations between the detectors is inconsistent with any locally-causal theory - that is, no theory based on information exchanged between the detectors by particles at the speed of light can do the trick, as proved in a famous result by John Bell (1966) and subsequent experiments. The correlation persists even if the detectors' configurations are changed so fast that there is no time for information to be exchanged between them at the speed of light, as demonstrated by Alain Aspect (1982). This phenomenon has been called quantum non-locality and in its various forms quantum 'entanglement', a name coined by Schrodinger, which is itself very suggestive of the throes of a sexual 'tryst'. The situation is subtly different from any kind of classical causality we can imagine. The information at either detector looks random until we compare the two. When we do, we find the two seemingly random lists are precisely correlated in a way which implies instantaneous correlation, but there is no way we can use the situation to send classically precise information faster than the speed of light by this means. We can see however in the correlations just how the ordinary one-particle wave function can be instantaneously auto-correlated and hence not slip up in its accounting during collapse.
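A minimal sketch of why these correlations breach the CHSH bound quoted in fig 44: for polarization-entangled photons quantum mechanics predicts a correlation E(a,b) = cos 2(a − b) between analysers at angles a and b, and at the conventional settings the CHSH combination reaches 2√2 rather than the locally-causal limit of 2 (the specific angles below are the standard textbook choice, assumed for illustration):

import numpy as np
def E(a_deg, b_deg):
    # Quantum prediction for the polarization correlation of an entangled photon pair.
    return np.cos(2 * np.radians(a_deg - b_deg))
# Conventional CHSH settings in degrees: a, a' for one detector; b, b' for the other.
a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S, 2 * np.sqrt(2))           # ~2.828 in both cases, exceeding the locally-causal bound of 2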

Entanglement has also been verified to apply not just to real particles, but to the virtual particles appearing and disappearing in the quantum vacuum (Benea-Chelmus et al. 2019). Although indirect effects of virtual particles are well known, it is only by probing a vacuum on very short timescales that the particles temporarily become real and can be directly observed. But do these particles appear completely randomly, or are they also correlated in space and time? The researchers have now provided an answer to this question, by finding evidence for correlations between fluctuations in the electric field of a vacuum.

Entanglement raises a fundamental issue of non-locality which poses a potential threat to special relativity, because strict locality in time and space is compromised by non-local interactions. Tumulka (2006) showed how all the empirical predictions of quantum mechanics for entangled pairs of particles could be reproduced by a modification of the GRW theory of spontaneous collapse, which is nonlocal, and yet it is fully compatible with the spacetime geometry of special relativity.

However Albert and Galchen (2009) have discovered further circumstances in which quantum reality becomes too rich to be described through any time-directed narrative, because special relativity tends to mix up space and time in a way that transforms quantum-mechanical entanglement among distinct physical systems into an entanglement among physical situations at different times, due to the fact that observers in motion with respect to one another can perceive the time order of events differently. Entangled histories in fig 47 give another illustration of this problem about temporality.

In 2019 a laser experiment for the first time visualized the entangled photons in a Bell-type experiment, as shown in fig 44b.


Fig 44b: Visualizing entangled photons in a Bell theorem test (Moreau P et al. 2019 Imaging Bell-type nonlocal behavior. Sci. Adv. 5 eaaw2563). Right: The apparatus used: a β-barium borate crystal pumped by an ultraviolet laser is used as a source of entangled photon pairs. The two photons are separated on a beam splitter (BS). An intensified camera triggered by a single-photon avalanche diode (SPAD) is used to acquire ghost images of a phase object placed on the path of the first photon and nonlocally filtered by four different spatial filters that can be displayed on a spatial light modulator (SLM 2) placed in the other arm. By being triggered by the SPAD, the camera acquires coincidence images that can be used to perform a Bell test. Left: The four coincidence counting images are presented, which correspond to images of the phase circle acquired with the four phase filters with different orientations, θ₂ = {0°, 45°, 90°, 135°}, necessary to perform the Bell test.

Other attempts to find hidden variable theories that could explain entanglement tend to invoke retrocausality. Costa de Beauregard suggested that non-local interaction could be avoided if a retrocausal influence was traced back in time from the spatially-separated detectors to the source where the pair of entangled particles was created. His supervisor Louis de Broglie forbade him to publish the idea for several years, until Richard Feynman showed the positron to be a time-reversed electron in quantum electrodynamics (fig 43). Feynman also showed that, using the time-reversed solutions of special relativity, an absorber theory based on reversed causality could provide a valid description of physics. Price and Wharton (2015) have extended this idea into a full explanation of the Bell's theorem results. Likewise Sutherland (2017) invokes a Lagrangian theory with future boundary conditions to provide a description consistent with special relativity. The transactional interpretation discussed below and the two-state formalism of Aharonov & Vaidman (2014) provide further extensions of this approach.

Antony Valentini (2001, 2002, Valentini & Westman 2005) has developed an extension of the pilot wave theory which proposes that the wave function is genuinely non-local as Bohm's potential describes, but the probability interpretation is an expression of the fact that the hidden-variable realm has reached a thermodynamic equilibrium in the universe as we find it today. This notion was also recognised by Bohm. In this description, the early universe would have been far from equilibrium and non-local effects might have been able to pass direct signals in a way which is now forbidden. He envisages this equilibrium arose through a dynamical process which resulted in an exponentially increasing sub-quantum entropy converging to the probability interpretation. He suggests that some undetected non-equilibrium matter particles might still have interactions stemming from this early stage, which could be used to send instantaneous signals, to violate the uncertainty principle, to distinguish non-orthogonal quantum states without disturbing them, to eavesdrop on quantum key distribution, and to outpace quantum computation.

The issue of probabilities appears in all models of quantum mechanics. In the Copenhagen interpretation, it applies to our knowledge of the system.

Researchers report that, by beaming photons between the quantum satellite Micius and two distant ground stations, they have demonstrated Bell's theorem over a distance of more than 1,200 kilometres (Yin, J. et al. 2017 Science 356 1140-4) and are exploring the use of entanglement, via quantum teleportation, for encryption which cannot be read without disturbing the entanglement. Entanglement has also been demonstrated between two spatially separated components of a spin-squeezed Bose-Einstein condensate consisting of cold atoms in the same quantum state (doi:10.1126/science.aao1850).

There are several loopholes that might undermine the conclusions of the Bell's theorem quantum entanglement tests. The detection loophole is that not all photons produced in the experiment are detected. Then there is the communication loophole: if the entangled particles are too close together, then, in principle, measurements made on one could affect the other without violating the speed-of-light limit. Fig 45(a) shows an experiment (arxiv:1608.01683) in which both electrons and photons are used, closing these two loopholes simultaneously.

There is thus no easy way out for a locally realistic theory to circumvent the limits imposed by Bell's theorem, without undermining the principle that the observer has the free will to choose the orientations of the detectors. In order for the argument for Bell's inequality to follow, it is necessary to be able to speak meaningfully of what the result of the experiment would have been, had different choices been made. This assumption is called counterfactual definiteness.

But this means Bell tests still have a freedom-of-choice loophole, because they assume experimenters have free choice over which measurements they perform on each of the pair. But some unknown effect could be influencing both the particles and what tests are performed (either by affecting choice of measurement directly, or by restricting the available options), to produce correlations that give the illusion of entanglement. Superdeterminism asserts this can never happen because the entire state of the universe, including the observer, is deterministic, so the experimenter can choose only those configurations already stipulated. Gerard 't Hooft has some papers exploring this idea (arXiv:0908.3408, arXiv:0701097, arXiv:hep-th/0104219).

However, as shown in fig 45(b), experiments have now been performed which significantly close the time frame on this loophole as well. To narrow the freedom-of-choice loophole, researchers have previously put 144 kilometres between the source of entangled particles and the random-number generator that they use to pick experimental settings. The distance between them means that if any unknown process influenced both set-ups, it would have to have done so at a point in time before the experiment. But this only rules out any influences in the microseconds before. The latest paper (arXiv:1611.06985) has sought to push this time back dramatically, by using light from two distant stars to determine the experimental settings for each photon. The team picked which properties of the entangled photons to observe depending on whether its two telescopes detected incoming light as blue or red. The colour is decided when the light is emitted, and does not change during travel. This means that if some unknown effect, rather than quantum entanglement, explains the correlation, it would have to have been set in motion at least around 600 years ago, because the closest star is 575 light-years away. The approach has since pushed this limit back to billions of years by doing the experiment with light from distant high-redshift quasars (Rauch et al. 2018).


Fig 45: 2015 Experiment (a) closes two loopholes: Experiments that use entangled photons are prone to the detection loophole: not all photons produced in the experiment are detected, and sometimes as many as 80% are lost. Experimenters therefore have to assume that the properties of the photons they capture are representative of the entire set. To get around the detection loophole, physicists often use particles that are easier to keep track of than photons, such as atoms. But it is tough to separate entangled atoms by large distances without destroying their entanglement. This opens the communication loophole: if the entangled atoms are too close together, then, in principle, measurements made on one could affect the other without violating the speed-of-light limit. The team used a cunning technique called entanglement swapping to combine the benefits of using both light and matter. The researchers started with two unentangled electrons sitting in diamond crystals held in different labs 1.3 km apart. Each electron was individually entangled with a photon, and both of those photons were then zipped to a third location. There, the two photons were entangled with each other - and this caused both their partner electrons to become entangled, too. This did not work every time. In total, the team managed to generate 245 entangled pairs of electrons over the course of nine days. The team's measurements exceeded Bell's bound, once again supporting the standard quantum view. Moreover, the experiment closed both loopholes at once: because the electrons were easy to monitor, the detection loophole was not an issue, and they were separated far enough apart to close the communication loophole, too (Hensen et al. 2015). Experiment (b) substantially closes a third loophole, the freedom-of-choice loophole (Handsteiner et al. 2017). Light from two telescopes trained on distant stars up to 600 light years away is used to determine the choice of orientations, eliminating the gray light cone in the lower image, extending back up to 600 years. A third experiment (c) shows that two histories, in which the order of events and the accompanying changes induced by Anne and Bob are inverted in one of the two histories, can become entangled, so that the usual idea of causality does not apply (Rubino et al. 2016).

The BIG Bell Test asked volunteers to choose the measurements, in order to close the so-called 'freedom-of-choice loophole' noted above - the possibility that the particles themselves influence the choice of measurement. Such influence, if it existed, would invalidate the test; it would be like allowing students to write their own exam questions. This loophole cannot be closed by choosing with dice or random number generators, because there is always the possibility that these physical systems are coordinated with the entangled particles. Human choices introduce the element of free will, by which people can choose independently of whatever the particles might be doing. The BIG Bell Test participants contributed unpredictable sequences of zeros and ones (bits) through an online video game. The bits were routed to state-of-the-art experiments in Brisbane, Shanghai, Vienna, Rome, Munich, Zurich, Nice, Barcelona, Buenos Aires, Concepcion (Chile) and Boulder (Colorado), where they were used to set the angles of polarizers and other laboratory elements to determine how entangled particles were measured. The participants contributed 97,347,490 bits, making possible a strong test of local realism, as well as other experiments on realism in quantum mechanics (Abellan 2018).

It has also been proposed to test whether consciousness can alter the nature of Bell's theorem entanglement by performing a large number of measurements at A and B and extracting the small fraction in which the EEG signals caused changes to the settings at A and B after the particles departed their original position but before they arrived and were measured. If the amount of correlation between these measurements doesn't tally with previous Bell tests, it implies a violation of quantum theory, hinting that the measurements at A and B are being influenced by processes outside the purview of standard physics (arXiv:1705.04620v1). It has also been suggested to perform a more challenging experiment where the conscious intent of humans is used to perform the switching.

Entanglement also raises intriguing questions about the nature of randomness and may provide a way to make uncrackable forms of randomness for security protection. All randomness in the universe essentially derives from quantum uncertainty, so it is natural that quantum entanglement could become the basis of ultra-secure randomness. Encryption schemes used in modern cryptography make extensive use of random, unpredictable numbers to ensure that an adversary cannot decipher encrypted data or messages.

Fig 46: Experimental arrangement.

Security can be established only if the random-number generator satisfies two conditions. First, the user must know how the numbers have been generated, to verify that a valid procedure is being implemented. And second, the system must be a black box from the adversary's perspective, to prevent them from exploiting knowledge about its internal mechanism. Thanks to the laws of quantum physics, it is possible to create a provably secure random-number generator for which the user has no knowledge about the internal generation mechanism, whereas the adversary has a fully detailed description of it, but still can't crack it. Bierhorst et al. (2018) prepared two photons in an entangled state. They then sent each photon to a different remote measurement station, where the photons' polarizations were recorded. During measurement, the photons were unable to interact with each other – the stations were so distant that this would require signals travelling faster than the speed of light. A key difficulty has been that most previous experiments suffer from the Bell inequality loopholes, meaning that they cannot be considered black-box demonstrations. For instance, the constraint that the two photons cannot exchange signals at subluminal speeds was not strictly enforced in two previous demonstrations of randomness generation based on Bell inequalities. In the loophole-free experiments to date, the magnitude of the Bell-inequality violations observed, although sufficient to confirm the correlated behaviour of the photons, was too low to verify the presence of randomness of sufficient quality for cryptographic purposes. The current experiment improves existing loophole-free experimental set-ups to the point at which the realization of such randomness becomes possible. However, this threshold is barely reached. Every time a photon is measured in the authors' experiment, the randomness that is generated is equivalent to tossing a coin that has a 99.98% probability of landing on heads; however a powerful statistical technique ultimately enabled the authors to generate 1,024 random bits in about 10 minutes of data acquisition – corresponding to the measurement of 55 million photon pairs.
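As a back-of-envelope sketch (illustrative arithmetic only, not the authors' certified extraction protocol), each measured pair contributes a min-entropy of −log₂(0.9998) bits, and these add up over the 55 million pairs:

from math import log2
p_heads = 0.9998                   # probability of the dominant outcome per measured pair
pairs = 55e6                       # photon pairs measured in about 10 minutes
h_min_per_trial = -log2(p_heads)   # min-entropy per trial, ~0.00029 bits
total_min_entropy = h_min_per_trial * pairs
print(f"min-entropy per trial: {h_min_per_trial:.6f} bits")
print(f"total raw min-entropy: {total_min_entropy:.0f} bits")   # ~16,000 bits
print(f"enough raw entropy for 1,024 extracted bits: {total_min_entropy > 1024}")
# The real protocol also spends entropy on statistical confidence, hence the large margin needed.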

Entanglement can also apply not just to quantum states but to quantum histories, so that a photon which has specific states at two points in time cannot be assigned a state at intermediate points. Its history thus becomes a superposition of inconsistent histories which separate and come together again at the final point, illustrating many-worlds features of quantum superposition. An experimental realization has been performed (Cotler et al. 2016) which is a temporal version of the three-degrees-of-freedom Greenberger-Horne-Zeilinger (GHZ) form of the Bell experiment, where instead of three photons at the same time, the experiment explored a single photon at three different time points. In this case the bounds of the Bell's theorem equivalent are again violated, showing that no single definite history can be assigned, indicating entangled histories.


Fig 47: Quantum entangled causality - a particle can be switched by a qubit so that instead of traversing from A to B it makes the reverse transition. When the qubit is put in a superposition of 0 and 1 states, causality becomes a superposition of causal and causally reversed trajectories. In a theoretical experiment using Bell theorem principles, researchers have found that if a massive object subject to relativity, such as a planet, can exist in a superposition of states, time itself would become entangled in a similar way to the causality above (doi:10.1038/s41467-019-11579-x).

Causality can also become entangled, so that intermediate states can become a superposition of inconsistent causalities, as shown in fig 47 (Ball 2017). This could enlarge the scope of quantum computation: a causal superposition in the order of signals travelling through two gates means that each can be considered to send information to the other simultaneously, effectively doubling information-processing efficiency. It could also help unravel the relationship between quantum reality and relativity, since causality in space-time is at the root of the difficulty of unifying these two theories, consistent with some novel axiomatic approaches to the foundations of quantum theory (arXiv:quant-ph/0101012, arXiv:quant-ph/0212084).

In an experimental realisation, a series of 'waveplates' (crystals that change a photon's polarization) and partial mirrors that reflect light while letting some pass through act as the logic gates A and B to manipulate the polarization of a test photon. A control qubit determines whether the photon experiences AB or BA - or a causal superposition of both (Rubino, G. et al. 2017 Sci. Adv. 3, e1602589). But any attempt to find out whether the photon goes through gate A or gate B first will destroy the superposition of gate ordering. Another experiment (MacLean, J et al. 2017 Nature Commun. 8, 15149) utilizes quantum circuits that manipulate photon states causally in other ways. A photon passes through gates A and B in that order, but its state is determined by a mixture of two causal procedures: either the effect of B is determined by the effect of A, or the effects of A and B are individually determined by some other event acting on them both. As with the previous experiment, it is not possible to assign a single causal 'story' to the state the photons acquire.

Fig 47b: Left: Quantum switch - (a,b) H and V polarized paths, (c) a diagonal path is a superposition, (d) degrees of freedom. Right: Causal asymmetry is abolished in the quantum equivalents of classically causally asymmetric processes (arXiv 1712.02368 [quant-ph]).

A robust form of indefinite causal order eliminating previous loopholes can be observed in a quantum switch, where two operations act in a quantum superposition of the two possible orders (arXiv:1803.04302 2018). The operations cannot be distinguished by spatial or temporal position, and the experimenters show this quantum switch has no definite causal order, by constructing a causal witness and measuring its value to be 18 standard deviations beyond the definite-order bound.

Many classical systems possess causal asymmetry. For example, a hammer smashing a window pane has efficient forwards causality, resulting in the glass shattering stochastically into many pieces, while the retrocausal inference is exceedingly complex, requiring massive memory and computation to resolve how all the pieces would end up back together. Various other stochastic systems demonstrate causal asymmetry precisely. For example, if we take a random sequence of 0s and 1s and then assign a 2 to every case where a 1 transitions to a 0 - viz 11000101001 becomes 11200121201 - then in the forward direction a 1 is followed by a 1 or a 2, while a 0 and a 2 are each followed by a 0 or a 1. But in the reverse process, a 2 is always preceded by a 1, a 0 is preceded by a 0 or a 2, and a 1 can be preceded by 0, 1 or 2, requiring more memory and computation. Researchers studying the quantum versions of such systems (arXiv 1712.02368 [quant-ph]) have found that the quantum versions retain causal symmetry, raising unsolved questions about both the causal arrow of time and increasing entropy.
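The asymmetry can be checked directly with a little classical bookkeeping (a sketch of the 0/1/2 tagging described above, with an arbitrary random seed):

import random
from collections import defaultdict
random.seed(1)
bits = [random.randint(0, 1) for _ in range(100000)]
# Tag every 0 that directly follows a 1 as a 2 (the 1 -> 0 transitions).
tagged = [bits[0]] + [2 if (bits[i - 1] == 1 and bits[i] == 0) else bits[i]
                      for i in range(1, len(bits))]
followers, predecessors = defaultdict(set), defaultdict(set)
for prev, nxt in zip(tagged, tagged[1:]):
    followers[prev].add(nxt)
    predecessors[nxt].add(prev)
print("forward  (symbol -> possible successors): ", dict(followers))
print("backward (symbol -> possible predecessors):", dict(predecessors))
# Forward: 1 -> {1, 2}, while 0 and 2 -> {0, 1}.  Backward: 2 <- {1}, 0 <- {0, 2}, 1 <- {0, 1, 2},
# so retrodicting the predecessor of a 1 needs more memory than predicting forwards.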

In a simulation using IBM's public quantum computer, a team has demonstrated that it is possible for the arrow of time to be reversed in a quantum system (Lesovik et al. 2019). To achieve the time reversal, the research team developed an algorithm for IBM's public quantum computer that simulates the scattering of a particle. Reversing its quantum evolution is like reversing the rings created when a stone is thrown into a pond. In nature, restoring this particle back to its original state - in essence, putting the broken teacup back together - is impossible, because one would need a "supersystem" to manipulate the particle's quantum waves at every point. Their algorithm simulated an electron scattering off a two-level quantum system, "impersonated" by a quantum computer qubit, and its related evolution in time. The electron goes from a localized, or "seen", state to a scattered one. Then the algorithm throws the process in reverse, and the particle returns to its initial state - in other words, it moves back in time, if only by a tiny fraction of a second.

An intriguing illustration of how different the quantum world can be is given by a quantum game of NIM (Arxiv:1304.5133) utilizing the Leggett-Garg (1985) Bell-type inequality (see figs 18, 44). This places a bound on measurements Q = ±1 at three times t1, t2, t3, where for any macrorealistic system K = C12 + C23 − C13 ≤ 1, with Cij the correlation between the measurements at ti and tj. Consider a quantum version of the three-box game, played by Alice and Bob, who manipulate the same three-level system. Alice first prepares the system in state |3> and then evolves it with a unitary operator that takes |3> to the equal superposition (|1> + |2> + |3>)/√3. Bob then has a choice of measurement: with probability p1B he decides to test whether the system is in state |1> or not (classically, he opens box 1), and with probability p2B he tests whether the system is in state |2> or not. Alice then applies a second unitary to the system, which takes (|1> + |2> − |3>)/√3 to |3>, before she makes her final measurement to check the occupation of state |3>. If both Alice and Bob find the system in the state that they check (e.g., Bob measures level 1 and finds the system there and Alice the same for state 3), then Alice wins. If Alice finds the system in state 3, but Bob's measurement fails, then Bob wins. Finally, if Alice doesn't find the system in state 3, the game is drawn. In a realistic description of this game in which Bob's measurements are non-invasive, Alice's chance of winning can be no better than 50/50 as long as Bob chooses his measurements at random. In the quantum version, however, interference between the various paths means that Alice wins every time the game is decided. Alice's quantum strategy therefore outstrips all classical (i.e. realistic NIM) ones.
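A short numerical check of this claim (a sketch using the standard three-box pre- and post-selected states assumed above, not the published experimental protocol) confirms that whenever Alice's final check of state |3> succeeds, Bob's intermediate check must also have succeeded, so Bob can never win:

import numpy as np
e1, e2, e3 = np.eye(3, dtype=complex)          # basis states |1>, |2>, |3>
psi = (e1 + e2 + e3) / np.sqrt(3)              # Alice's state after her first unitary
phi = (e1 + e2 - e3) / np.sqrt(3)              # state her second unitary maps onto |3> (post-selection)
P1 = np.outer(e1, e1.conj())                   # Bob finds box 1 occupied
Q1 = np.eye(3) - P1                            # Bob finds box 1 empty
def branch(projector):
    # Return (probability of Bob's outcome, probability Alice then finds |3>).
    unnormalized = projector @ psi
    p_bob = np.vdot(unnormalized, unnormalized).real
    p_alice = abs(np.vdot(phi, unnormalized / np.sqrt(p_bob))) ** 2
    return p_bob, p_alice
(p_found, p_win), (p_empty, p_lose) = branch(P1), branch(Q1)
print(f"P(Alice finds |3> and Bob found box 1) = {p_found * p_win:.3f}")   # 1/9
print(f"P(Alice finds |3> and Bob found empty) = {p_empty * p_lose:.3f}")  # 0 - Bob can never win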

Various quantum games exploiting the additional knowledge of a system provided by entanglement and Bell's inequalities, fig 48 (right), have been shown to provide an indication of whether the universe is finitely or infinitely complex. In games like NIM, where the two players, Alice and Bob, do not have knowledge of one another's choices, the odds of both winning can be improved by using quantum entanglement and Bell's inequalities. Recent papers (arXiv: 1606.03140, 1703.08618, 1709.05032 [quant-ph]) have shown that by increasing the number of entangled particles accessed, the chances of winning can be improved without bound, suggesting a way to actually test this question.


Fig 48: Left: Schrodinger's cat split into two entangled boxes. (Right): Quantum NIM game using Bell's inequalities to improve success.

In 2016 scientists split Schrodinger's cat between two entangled boxes (Wang et al. 2016). Microwaves inside a superconducting aluminum cavity take the place of the cat. The microwaves' electric fields can be pointing in two opposing directions at the same time, just as Schrodinger's cat can be simultaneously alive and dead. Because the states of the two boxes are entangled, if the cat turns out to be alive in one box, it's also alive in the other. Measurements from the two boxes will agree on the cat's status. For microwaves, this means the electric field will be in sync in both cavities. The scientists measured the cat states produced and found a fidelity of 81 percent. The result is a step toward quantum computing with such devices. The two cavities could serve the purpose of two quantum bits, or qubits. The cat states are more resistant to errors than other types of qubits, so the system could eventually lead to more fault-tolerant quantum computers.

This clash between subjective experience and quantum theory has led to much soul-searching. The Copenhagen interpretation says quantum theory just describes our state of knowledge of the system and is essentially incomplete. This effectively passes the problem back from physics to the observer.

Some physicists think all the possibilities happen and there is a probability universe for each case. This is the many-worlds interpretation of Hugh Everett III. The universe becomes a superabundant superimposed set of all possible probability futures, and indeed all pasts as well, in a smeared-out 'holographic' multiverse in which everything happens. It suffers from a key difficulty: all the experience we have suggests just one possibility is chosen in each situation - the one we actually experience. Some scientists thus think collapse depends on a conscious observer. Many-worlds defenders claim an observer wouldn't see the probability branching because they too would be split, but this leaves us either with infinitely split consciousness, or we lose all forms of decision-making, all forms of historicity in which there is a distinct line of history in which watershed events do actually occur, and the role of memory in representing it.


Fig 49: (Left) Basic intuition and experimental setup for entangled atoms. (a) When atoms spontaneously emit photons, phase coherence between the atoms leads to constructive interference and enhanced emission probability in a certain direction, measured by a single photon detector (SPD). Emission in any other direction is incoherent and hence not enhanced. If this phase coherence is generated by absorbing a single photon, the atoms are necessarily entangled. (Right) The experimental design used in the first experiment below.

Just as laser light is coherent, consisting of many photons in a single wave function, so it is possible for large populations of wave-particles to form entangled states. Researchers (arXiv:1703.04704) have demonstrated quantum entanglement of 16 million atoms, smashing the previous record of about 3,000 entangled atoms (fig 49). Meanwhile, another team (arXiv:1703.04709) used a similar technique to entangle over 200 groups of a billion atoms each. Both teams demonstrated entanglement using "quantum memories" consisting of a crystal interspersed with rare-earth ions designed to absorb a single photon and reemit it after a short delay. The single photon is collectively absorbed by many rare-earth ions at once, entangling them. After tens of nanoseconds, the quantum memory emits an echo of the original photon: another photon continuing in the same direction as the photon that entered the crystal. By studying the echoes from single photons, the scientists quantified how much entanglement occurred in the crystals. The more reliable the timing and direction of the echo, the more extensive the entanglement was. While the second team based its measurement on the timing of the emitted photon, the first team focused on the direction of the photon.

Whole tardigrades have also been entangled in quantum dots (arXiv:2112.07978v2): "We observe coupling between the animal in cryptobiosis and a superconducting quantum bit and prepare a highly entangled state between this combined system and another qubit. The tardigrade itself is shown to be entangled with the remaining subsystems. The animal is then observed to return to its active form after 420 hours at sub 10 mK temperatures and pressure of 6 × 10⁻⁶ mbar, setting a new record for the conditions that a complex form of life can survive".

Randomness arises ultimately from Quantum Uncertainty

Randomness has been described as having two sources: one epistemic, about our state of knowledge of a system, and the other ontic, about the actual physics of reality - the ontic form perhaps consisting of two sub-forms, quantum uncertainty and the supposed randomness of molecular kinetic processes - but these are actually all sourced in quantum uncertainty.

If we think of a chamber filled with helium atoms and consider one atom in the chamber, then viewed classically this is 3-D billiards, and we know multi-body billiards is chaotic because small differences in the position of any ball colliding with another cause larger deviations in the outgoing trajectories. If we view this quantum mechanically, it is a 3-D interference experiment in which the apparatus is all the other atoms and the chamber itself. Suppose we release a single helium atom through a very small aperture at time zero. As it proceeds through the chamber, its position becomes indeterminate through wave spreading, in the same way a photon's does.
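As a rough sketch of the scale involved, the long-time spreading of a free wave packet is Δx(t) ≈ ħt/(2mΔx₀); taking an assumed initial localization of 1 nm and an assumed nanosecond of free flight between collisions (both purely illustrative figures) already gives spreading larger than the starting localization:

from scipy.constants import hbar, atomic_mass
m_He = 4.0 * atomic_mass   # mass of a helium atom (kg)
dx0 = 1e-9                 # assumed initial localization, 1 nm
t = 1e-9                   # assumed free flight between collisions, 1 ns
dx_t = hbar * t / (2 * m_He * dx0)   # long-time spreading of a free Gaussian wave packet
print(f"position uncertainty grows from {dx0:.0e} m to ~{dx_t:.1e} m in {t:.0e} s")   # ~8 nm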

Fig: Molecular interference demonstrated for three large molecules (Gerlich et al. 2011).

This is interesting and requires further thought, because it is a complex system with a lot of bound particles, but the experiments on large molecules show clearly that this is taking place. It means that successive collisions result in exponentially growing uncertainty of position and indeterminacy of the trajectories, so the entire concept of the atoms as "particles" having some other kind of randomness is derived from chaotic amplification of the uncertainty of position of each of the atoms in the chamber. This effectively means that all the perceived randomness in the kinetic process was derived originally from positional quantum uncertainty, amplified by the chaotic boundary conditions of the interacting atoms.

As far as I can see this process extrapolates all the way into real life, where we walk around the corner and nearly get run over on the way to the supermarket, because all these uncertainties in life, although we think the universe looks "classical", are just larger, more complicated instances of unmitigated amplified quantum uncertainty - obviously including mutation, ageing, cancer and mortality itself, as well as winning the lottery! So I think we have just two sources of "randomness": the epistemic form, which is just partial observation through sampling subpopulations and observational uncertainty to do with the vagaries of the ontic form, and quantum uncertainty itself as the ontic form. Notice also that this means butterfly-effect systems are really using quantum uncertainty to generate their ergodicity, so tornadoes are inflated quantum systems that might also be "conscious" if subjectivity also occurs at the quantum level through wave function sensitivity throughout space-time and wave function collapse.

To assume this is all random is an extremely dangerous assumption if we don't actually know. Lots of classical processes can appear quasi-random and are used as random number generators. Quantum measurement also has features of ergodicity that make us use the notion of randomness, normalised by the wave function amplitude, to explain the probability interpretation, but that's a gross simplification. Schrödinger's cat is either very much alive or stone dead when we view it. It's not an alive/dead superposition, nor is it in a random state. Entanglement gives us a hint that more is going on, because the detector statistics at either end show apparently random polarisations, but when we do Bell theorem sampling we find they are powerfully correlated, even 'deterministically' complementary, when we sample at arbitrary relative orientations. Clearly uncertainty can and does handle multi-quantum entanglement, which is called decoherence, and it probably pervades the entire universe and its compound wave function.

So then we come to karma. It’s not moral, but the ultimate manifestation of quantum uncertainty in why things just happen to us.  Until it is proved otherwise, I take the position as an 'avatar' that the living present is a unique karmic instance and we are treading on thin ice every step of the way. So the answer about enlightenment is not to vacate your karma, but to look very carefully long and hard into it, because the bell tolls for us.

We can't SEE Schrodinger Cats, but we can FEEL them!

Fig 102c: The BEC juggling and Sapphire jiggling experimental results.

Here is a possible denouement to the cat paradox … a discovery process that completes itself in the telling.

A BEC is an unlimited number of bosons (or integer-spin atoms) in the same wave function, like a laser, and as we know, giving lectures with a laser pointer can last as long as we push the button. But we can prepare a system in a superposition of states, e.g. a quantum dot or BEC, and hold it as long as we want in this entangled state until we choose to measure it in some way through a particle absorption. So what IS an experiment using temporal separations falling within Heisenberg uncertainty limits? We'll see in a paragraph or two, as the cat collapses and the penny drops!

Kovachy et al. (2015) throw a Bose-Einstein condensate up like a juggling ball so that two cusps end up in a spatially separated superimposed state:

We achieve long free-fall times by launching a Bose–Einstein condensed cloud of ~10⁵ ultracold ⁸⁷Rb atoms into a 10 m atomic fountain using a chirped optical lattice. After the lattice launch, we use a sequence of optical pulses to apply a beam splitter that places each atom into a superposition of two wave packets with different momenta.

The sapphire experiment (Bild M et al. 2023) poses yet another situation, in which an acoustically oscillated atomic lattice can end up superimposed between two oscillatory states, with 10¹⁷ atoms involved.

Schrodinger's original thought experiment was designed not just so that two macroscopic states were superimposed but, diabolically, so that the two states were biologically impossible, since it is "unlawful" for the cat to exist in a superimposed live cat / dead cat state. Contrast that with the sapphire, where the two states are two very slightly different arrangements of a molecular lattice, with no more inconsistency than in an ordinary interference experiment.

Now in a cat paradox experiment, we simply have a Geiger counter and a weak source emitting an average of 1 particle per second, and we leave the cat in the box for 0.5 seconds and open it. But the cat is also a conscious organism, so what has actually happened is that the cat starts to smell the cyanide, and at that point its fate is sealed. The cat made the first conscious observation, by detecting HCN molecules by smell, and it has nothing to do with the experimenter opening the box.

John Kinemann suggested that a form of autopoiesis might prevent an organism becoming superimposed:

But it says, "That means a true Schrödinger’s cat is still far from being realised."  Meaning it has not been demonstrated for a living organism.  Nor do I think it will be because life preserves quantum superposition for decision making, but also closes the causal loop to preserve life and identity. That decision has already been made by the organism. If every aspect of the organism were a real time choice life could not be sustained. So an actual cat is much more complex than any of these lab experiments that demonstrate only one necessary principle.

I replied that free will has to collapse the wave function because it’s the only way to affect the universe without causal conflict with brain function. It doesn’t need further causal loops except as an indirect effect of this constraint.

Now, on reflection, I can effectively take John's argument and turn it inside out. We are witnessing superpositions of states all the time when our brains reach an unstable butterfly-effect-sensitive dynamical state through just the uncertainty window CN is pointing to. This is what we call making an uncertain intuitive decision, where the unconsciously competing factors interfere. Pair-splitting EPR experiments are a distraction, because they are designed to demonstrate entanglement Bell violations, but superposition is manifest in all uncertain situations. So when we have an “Aha!” moment, or when we simply make an arbitrary, idiosyncratic or creative choice, we are collapsing a superposition we have actually perceived internally through our very sense of mounting “uncertainty”.

It is this transition that we are experiencing all the time, each of which IS a Schrödinger cat before, during and after collapse. We can't see these Schrödinger cats because they are hiding in "plain sight" in our sense of decision making, just as I came to this conclusion, formulating to myself what the hell is going on with these goddamn cats and why we can't see them! So the answer is that we can't SEE a Schrödinger cat, but we FEEL them all the time, and that's what intuition IS! So the real world is not as classical as it appears, but as quantum-uncertain as our inner feelings indicate.

The amount of 'spooky action at a distance' that could be involved in the workings of the universe may prove to be incalculable, as potentially demonstrated in the proof of a theorem (Ji Z et al. 2020 "MIP∗ = RE" arXiv 2001.04383 [quant-ph]) combining complexity theory and quantum entanglement. The proof analyses a team of two players who can coordinate their actions through quantum entanglement even though they are not allowed to talk to each other. This enables both players to 'win' more often than they could without quantum entanglement. But the authors show that it is intrinsically impossible for the two players to calculate an optimal strategy, which implies that it is impossible to calculate how much coordination they could theoretically reach: there is no algorithm that will tell you the maximal violation you can get in quantum mechanics through entanglement. The result also settles Connes' embedding conjecture in the theory of operators used to provide the foundations of quantum mechanics. Operators are matrices of numbers that can have either a finite or an infinite number of rows and columns, and they play a crucial role in quantum theory, where each operator encodes an observable property of a physical object. Connes asked whether quantum systems with infinitely many measurable variables could always be approximated by simpler systems that have a finite number. The paper shows that the answer is no: there are, in principle, quantum systems that cannot be approximated by finite ones. This also means that it is impossible to calculate the amount of correlation that two such systems can display across space when entangled.
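To make the flavour of such entanglement-assisted coordination concrete, the Python sketch below compares strategies in the much simpler CHSH game (an illustrative stand-in, not the construction used in the MIP∗ = RE proof): a brute-force search over deterministic classical strategies tops out at a 75% win rate, while sharing a single entangled pair allows cos²(π/8) ≈ 85%. The theorem above concerns how far this kind of quantum advantage can be pushed in far more elaborate games.

```python
# Sketch: classical vs quantum winning probability in the CHSH game.
# The referee sends bits x, y to Alice and Bob; they reply a, b and win if a XOR b == x AND y.
from itertools import product
from math import cos, pi

def classical_optimum():
    best = 0.0
    # A deterministic strategy is a pair of answer tables, one per player.
    for a0, a1, b0, b1 in product([0, 1], repeat=4):
        wins = sum((([a0, a1][x] ^ [b0, b1][y]) == (x & y))
                   for x, y in product([0, 1], repeat=2))
        best = max(best, wins / 4)
    return best

print("best classical strategy wins", classical_optimum(), "of the time")   # 0.75
print("quantum strategy with one Bell pair wins", cos(pi / 8) ** 2)         # ~0.8536
```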

Entanglement and Quantum Gravity: Two papers have proposed investigating whether two systems can become entangled by gravitational attraction (doi:10.1103/PhysRevLett.119.240401, doi:10.1103/PhysRevLett.119.240402). Gravity is assumed to be mediated by the graviton, but the gravitational force is too weak at the particle level to have any hope of measuring single graviton interactions. The classical nature of general relativity has also caused some scientists to question whether gravity is a quantum force at all, exemplified by Penrose's idea of objective reduction, where gravitational interaction collapses the wave function, returning quantum superpositions to a classical end result. While we cannot measure a graviton directly, we might be able to detect whether gravitation can quantum entangle two systems, thus demonstrating that gravitation is also a quantum force and that quantum gravity is a reality. The basic form of the experiment is to prepare two quantum systems in a superposition of states and see if they become entangled by gravitational attraction while falling through a vacuum. The current suggestion is to take two micro-diamonds and embed a nitrogen atom in each, which can be put in a superposition of spin up and spin down states by zapping it with a microwave pulse. A magnetic field is then applied so that the spin up component moves left and the spin down component moves right, putting each diamond in a superposition of two spatially separated states. The diamonds are then allowed to fall in a vacuum and tested for signs of entanglement between them caused by gravitational interaction as they fall.

Quantum Tunneling: In a version of the pair-splitting experiment (Chiao and Kwiat 1993), which illustrates the difficulty of using super-luminal correlations to violate classical causality, one photon of a pair goes directly to a detector, while the other has to quantum tunnel through a partially reflecting mirror's energy barrier, designed so that it succeeds 1% of the time. When the tunneling photon's arrival time is compared with the other's, it is detected sooner more than 50% of the time, indicating it was traveling at up to 1.7 times the speed of light. But the effect results from reshaping of the wave: the leading edges of the two photons' waves arrive together, while the peak of the tunneling photon arrives sooner because its wave packet has been shortened, so its peak, where detection is most probable, arrives earlier. This does not mean the effect can be used to convey information faster than the speed of light, because it lies within the uncertainty of position of the photon that determined the tunneling in the first place.

Fig 50: In radioactivity, particles can escape the nucleus by quantum tunneling out, even though an energy barrier greater than their own energy holds them in. They can do this because the wave function extends through the barrier, declining exponentially, so there is a non-zero probability of finding the particle outside, since its wave function continues there. The energy (frequency) is unchanged, but the amplitude (probability or intensity) is reduced. In the same way we can think of tunneling as a fluctuation of energy, for a short enough time to jump over the barrier, which must be returned within the Heisenberg time limit. In the game of looking-glass croquet above, Alice hits rolled-up hedgehogs, each bearing an uncanny resemblance to Werner Heisenberg, towards a wall, overlooked by Einstein. Classically the hedgehogs always bounce off. Quantum-mechanically, however, a small probability exists that a hedgehog will appear on the far side. The puzzle facing quantum mechanics is: how long does it take to go through the wall? Does the traversal time violate Einstein's light-speed limit? In the pair-splitting experiment, when one photon is required to quantum tunnel, the tunneling photon seems to jump the barrier faster than light, so that it is likely to arrive sooner, but it cannot do this in a way which violates Einsteinian causality by sending usable information faster than light.
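As a rough numerical companion to the tunneling picture, the sketch below estimates the transmission probability through a rectangular barrier using the standard exponential suppression factor T ≈ e^(−2κL), κ = √(2m(V−E))/ħ, valid for an opaque barrier; the electron energy, barrier height and width are illustrative values only.

```python
# Sketch: order-of-magnitude tunneling probability through a rectangular barrier,
# using the exponential suppression factor T ~ exp(-2*kappa*L).
from math import sqrt, exp

hbar = 1.054571817e-34      # J s
m_e  = 9.1093837015e-31     # electron mass, kg
eV   = 1.602176634e-19      # J

def transmission(E_eV, V_eV, width_nm, mass=m_e):
    """Approximate transmission probability for a particle with E < V."""
    E, V, L = E_eV * eV, V_eV * eV, width_nm * 1e-9
    kappa = sqrt(2 * mass * (V - E)) / hbar
    return exp(-2 * kappa * L)

# An electron with 1 eV of energy meeting a 2 eV barrier 0.5 nm wide:
print(transmission(1.0, 2.0, 0.5))   # ~0.006: small, but far from zero
```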

Since the first pair-splitting results in the 1980s there has been a veritable conjurer's collection of experiments, all of which verify the predictions of quantum mechanics and confirm the general principles of the pair-splitting experiment. Even if we clone photons to form quartets of correlated particles, any attempt to gain information about one of such a multiple collection collapses the correlations between the related twins. Furthermore these effects can be retrospective, allowing photons to exist in superpositions of states which were created at different times.

Superconductivity: Entanglement is also involved in superconductivity. Electrons in the material form orbiting pairs: because the positively charged atomic ions are attracted to the negative electrons, a small peak of atomic density forms in the neighbourhood of two electrons, which can cause them to form bound orbits even though the negatively charged electrons would naturally repel. The pairs cannot then collide with the atoms in the material, because the activation energy required for either electron to escape the attractive pair, which exchanges phonons in its minimum energy configuration, is greater than the thermal energy of the material. Hence the electric current flows unobstructed.

Entanglement can also explain the Meissner effect, in which a magnet levitates above a superconducting material. The magnetic field induces a current in the surface of the superconductor, and this current effectively excludes the magnetic field from the interior of the material, causing the magnet to hover. The current halts the photons of the magnetic field after they have travelled only a short distance into the superconductor. For the normally massless photons it is as if they have suddenly entered treacle, effectively giving them a mass. A similar mechanism may be behind the mass of all particles. The source of this mass is believed to be the Higgs field, mediated by the Higgs boson, existing in a "condensed" state that excludes mediator particles such as the W and Z bosons in the same way that a superconductor's entangled electrons exclude the photons of a magnetic field (Quantum quirk may give objects mass New Scientist 24 October 2004).

Delayed Choice, Quantum Erasure, Entanglement Swapping and Procrastination

A counter-intuitive aspect of quantum reality is that it is possible to change the way a quantum is detected after it has traversed its route in such a way as to retrospectively determine whether it traversed both paths as a wave or just one as a particle, or in the case of erasure merge it back into the entangled state.

In a Wheeler delayed choice experiment, illustrated in fig 51, we can sample photons either with an interference pattern, verifying that they went along both paths (e.g. both sides of the galaxy in fig 14), or place separate directional detectors, which will detect that they went one way around only, as particles (destroying the interference pattern). Moreover, we can decide which measurement to perform after the photon has passed the galaxy, at the end of its path. Thus the configuration of the latter parts of the wave appears to be able to alter the earlier history. The delayed choice experiment has a deep link with Schrödinger's cat, because opening the cat's box is exactly like using particle detectors, since it determines whether or not a scintillation particle was emitted, while leaving the box closed retains the superposition of the wave. Furthermore, Wheeler's purpose in proposing the experiment was to establish that anti-realism - the idea that the photon doesn't have a state until it is measured - is inconsistent with a causal determinism that avoids retrocausality, i.e. the eventual configuration of the detectors altering the assumed path or paths the photon has taken.

Fig 51: (a) Wheeler delayed choice experiment on a cosmic scale. A very distant quasar is gravitationally lensed by an intervening galaxy. (b) An experimental implementation of Wheeler's idea along a satellite-ground interferometer that extends for thousands of kilometers in space (Vedovato et al. 2017), using shutters on an orbiting satellite (c, d).

Just how large such waves can become can be appreciated if we glance out at a distant galaxy, whose light has had to traverse the universe to reach us, perhaps taking as long as the history of Earth to get here. The ultimate size is as big as the universe. Only one photon is ever absorbed for each such wave, so once we detect it, the probability of finding the photon anywhere else, and hence the amplitude of the wave, must immediately become zero everywhere. How can this happen, if information cannot travel faster than the speed of light? For a large wave, such as light from a galaxy, (and in principle for any wave) this collapse process has to cover the universe. When I shine my torch against the window, the amplitude of each photon is both reflected, so I can see it, and transmitted, escaping into the night sky. Although the wave may spread far and wide, if the particle is absorbed anywhere, the probability across vast tracts of space has to suddenly become zero. Moreover collapse may involve the situation at the end of the path influencing the earlier history, as in the Wheeler delayed choice experiment.

In Quantum Erasure, it is also possible to 'uncollapse' or erase such losses of correlation by re-interfering the wave functions so that we can no longer tell the difference. The superposition choices of the delayed choice experiment do this. We can gain information about one of the particles and then erase it again by re-interfering it back into the wave function, provided we use none of that information - the quantum eraser. This successfully recreates the lost correlations: the interference, which would have been destroyed had we looked at the information, is reintegrated undiminished.

Fig 52: Quantum erasure (Scientific American)

Erasing information about the path of a photon restores wavelike correlated behavior. Pairs of identically polarized correlated photons produced by a 'down-converter' bounce off mirrors, converge again at a beam splitter and pass into two detectors. A coincidence counter observes an interference pattern in the rate of simultaneous detections by the two detectors, indicating that each photon has gone both ways at the beam splitter, as a wave. Adding a polarization shifter to one path destroys the pattern, by making it possible to distinguish the photons' paths. Placing two polarizing filters in front of the detectors makes the photons identical again, erasing the distinction and restoring the interference pattern.

Fig 53: Delayed choice quantum eraser configuration (en.wikipedia.org/wiki/Delayed_choice_quantum_eraser, doi:10.1103/PhysRevLett.84.1).

Use of entangled photons enables the design and implementation of versions of the quantum eraser that are impossible to achieve with single-photon interference. What makes Wheeler's delayed choice quantum eraser astonishing is that, unlike in the classic double-slit experiment, the choice of whether to preserve or erase the which-path information of the idler is not made until 8 ns after the position of the signal photon has already been measured.

An individual photon goes through one (or both) of the two slits. In the illustration, the photon paths are color-coded (red A, blue B). One of the photons - the "signal" photon (red and blue lines going upwards from the prism at BBO) continues to the target detector D0. Detector D0 is scanned in steps along its x-axis. A plot of "signal" photon counts detected by D0 versus x can be examined to discover whether the cumulative signal forms an interference pattern. The other entangled photon - the "idler" photon (red and blue lines going downwards from the prism), is deflected by prism PS that sends it along divergent paths depending on whether it came from slit A or slit B. Beyond the path split, the idler photons encounter beam splitters BSa, BSb, and BSc that each have a 50% chance of allowing the idler photon to pass through and a 50% chance of causing it to be reflected. The beam splitters and mirrors direct the idler photons towards detectors labeled D1, D2, D3 and D4.

Note that:

Detection of the idler photon by D3 or D4 provides delayed "which-path information" indicating whether the signal photon with which it is entangled had gone through slit A or B. On the other hand, detection of the idler photon by D1 or D2 provides a delayed indication that such information is not available for its entangled signal photon. Insofar as which-path information had earlier potentially been available from the idler photon, it is said that the information has been subjected to a "delayed erasure".

Fig 54: (Above) Delayed choice entanglement swapping in which Victor is able to decide whether Alice's and Bob's photons are entangled or not after they have already been measured. (Below) A photon is entangled with a photon that has already died (sampled) even though they never coexisted at any point in time.

In a second experiment, fig 54(below), two photons can become entangled even though they have never coexisted at any point in time. Photons 1 & 2 are entangled and 1 is detected killing it. A second entangled pair 3 & 4 are later created and 3 is then entangled with 2 disrupting the original entanglement with 4. But when 4 is measured, we then find it is entangled with the dead photon 1 (ArXiv: 1209.4191).

Closing Wheeler loopholes: In 2018 a group of physicists (Chaves R, Barreto Lemos G, Pienaar J 2018 Causal Modeling the Delayed-Choice Experiment doi:10.1103/PhysRevLett.120.190401) used the emerging field of causal modeling to find a loophole in Wheeler's delayed-choice experiment. They showed that, in an experiment in which two phase shifts change the interference pattern, and with it the presumed "wavelike" or "particle-like" behavior of the photon, it is possible to use a hidden variable to write down rules that use the variable's value and the presence or absence of a change in the detectors to guide the photon to one detector or another in a manner that mimics the predictions of quantum mechanics. Causal modeling involves establishing cause-and-effect relationships between the various elements of an experiment. Often, when studying correlated events, if one cannot conclusively say that A causes B, or that B causes A, there remains a possibility that a previously unsuspected or "hidden" third event, C, causes both. In such cases, causal modeling can help uncover C.

They then constructed a formula that takes as its input probabilities calculated from the number of times that photons land on particular detectors (based on the setting of the two phase shifts). If the formula equals zero, a classical causal model can explain the statistics, but if the equation spits out a number greater than zero, then, subject to some constraints on the hidden variable, there is no classical explanation for the experiment's outcome. Subsequent experiments (arXiv:1806.00156, 1806.00211) have shown that in every configuration tested the value is non-zero, and the loophole has been closed. However, more complex hidden variable theories carrying more than one bit of information, and the Bohm pilot wave theory (because the wave and particle co-exist and the wave can guide the particle), could still explain the phenomenon.

Fig 54b: Time reversing entanglement can also occur (arXiv 2210.17046, arXiv 2211.01283).

Entanglement Swapping: A third intriguing phenomenon, called entanglement swapping, can also be made into a Wheeler delayed choice version. Here there is a mediator, Victor. In the entanglement swapping procedure, fig 54 (above), two pairs of entangled photons are produced, and one photon from each pair is sent to Victor. The two other photons from each pair are sent to Alice and Bob, respectively. If Victor projects his two photons onto an entangled state, Alice's and Bob's photons are entangled, although they have never interacted or shared any common past. What might be considered even more puzzling is the idea of delayed-choice entanglement swapping. Victor is free to choose either to project his two photons onto an entangled state, and thus project Alice's and Bob's photons onto an entangled state, or to measure them individually and then project Alice's and Bob's photons onto a separable state. If Alice and Bob measure their photons' polarization states before Victor makes his choice and projects his two photons either onto an entangled state or onto a separable state, it implies that whether their two photons are entangled (showing quantum correlations) or separable (showing classical correlations) can be defined after they have been measured (ArXiv: 1203.4384).
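A bare-bones state-vector sketch of the swapping step, assuming ideal Bell pairs and an ideal Bell-state projection by Victor (qubit labels follow the description above; this is an illustration, not the laboratory protocol of the cited experiment): projecting the two middle photons onto a Bell state leaves the outer two, which have never interacted, in a Bell state of their own.

```python
# Sketch: entanglement swapping with bare state vectors. Pairs (1,2) and (3,4) start as
# Bell pairs; when Victor projects qubits 2 and 3 onto a Bell state, qubits 1 and 4 end up
# entangled even though they have never interacted.
import numpy as np

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)     # |Phi+> = (|00> + |11>)/sqrt(2)
state = np.kron(bell, bell)                                   # qubit order: 1, 2, 3, 4

amps = state.reshape(2, 4, 2)                                 # indices: q1, (q2 q3), q4
psi_14 = np.einsum('j,ijk->ik', bell.conj(), amps)            # Victor's <Phi+| projection on 2, 3
prob = np.sum(np.abs(psi_14) ** 2)                            # this Bell outcome occurs 1/4 of the time
psi_14 = psi_14.flatten() / np.sqrt(prob)

print("outcome probability:", prob)                           # 0.25
print("qubits 1 and 4:", np.round(psi_14, 3))                 # (|00> + |11>)/sqrt(2): maximally entangled
```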

Quantum Procrastination and Delayed Choice Entanglement: An ingenious version of the delayed choice experiment has also been applied to the idea of morphing the wave aspect into the particle aspect through a superposition of the two. A simple version of the delayed choice experiment involves an interferometer that contains two beam splitters. The first splits the incoming beam of light, and the second recombines them, producing an interference pattern. Such a device demonstrates wave-particle duality in the following way. If light is sent into the interferometer a single photon at a time, the result is still an interference pattern - even though a single photon cannot be split, but is passing through both routes as a wave. If you remove the device that recombines the two beams, interference is no longer possible, and the photon emerges from one or other route as a particle, which can be detected as before, even when this decision is made after the photon entered the splitter.
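The interferometer logic just described is easy to see numerically if the beam splitter is idealised as a Hadamard-like transformation on the two path modes (a common simplification): with the second splitter in place the detector probabilities oscillate with the relative phase between the arms, and with it removed they lock at 50/50.

```python
# Sketch: a single photon in a Mach-Zehnder interferometer, path modes as a 2-vector.
import numpy as np

BS = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # idealised 50/50 beam splitter

def detector_probabilities(phase, second_splitter=True):
    state = np.array([1, 0], dtype=complex)                    # photon enters in path 0
    state = BS @ state                                         # first splitter: both paths at once
    state = np.diag([1, np.exp(1j * phase)]) @ state           # relative phase between the two arms
    if second_splitter:
        state = BS @ state                                     # recombine: wave-like interference
    return np.abs(state) ** 2

print(detector_probabilities(0.0))                             # [1. 0.] - full interference
print(detector_probabilities(np.pi))                           # [0. 1.]
print(detector_probabilities(0.0, second_splitter=False))      # [0.5 0.5] - particle-like 50/50
```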

Fig 55: Quantum procrastination. Morphing the probability statistics of a wave into those of a particle by altering the detection angle of the external photon in delayed choice entanglement. Inset: black-and-white and artist's impressions of the transition.

Now two groups of researchers have taken this a step further by replacing the second beam splitter with a quantum version that is simultaneously operational and non-operational, because it is entangled with a second photon outside the interferometer. Hence whether it is operational or not can be determined only by measuring the state of the second photon. The researchers found that this allowed them to delay the photon's wave or particle quality until after it had passed through all the experimental equipment, including the second beam splitter tasked with determining that very thing, and, by varying the detection angle of the second entangled photon according to Bell's theorem, to morph the result between the wave and particle aspects of the transmitted photon (doi:10.1126/science.1226719, doi:10.1126/science.1226755). The ability to delay the measurement which determines the degree of wave-like or particle-like behavior to any desired degree has deservedly been termed 'quantum procrastination'.

Quantum Teleportation, Computing and Cryptography

Quantum teleportation, in which the information defining a quantum particle in a given state is 'teleported' to another particle, has also become an experimental reality. These experiments give us a broad intuition of quantum reality. In quantum teleportation, one of a pair of entangled particles is interacted with by a third distinct particle to produce a signal which is 'teleported' as classical information, e.g. as part of the state of a transmitted particle, such as its polarization, although this particle must still be an isolated non-interacting quantum. This later interacts with the second entangled particle, resulting in the generation of a particle with identical properties to the third particle. The illustrations below show the theoretical process and two experimental realizations.


Fig 56: (a) In quantum teleportation, a quantum (blue left) is combined in an interference measurement with one of an entangled pair (pink left) by experimenter 1, who then sends the result of the measurement as classical information to experimenter 2, who applies it to transform the other entangled particle, causing it to enter the same quantum state as the original blue one. (b) Teleporting a grin - the magnetic moment (the grin) of a neutron (Cheshire cat) traversed a different path from the particle (doi:10.1038/ncomms5492). (c) Quantum teleportation has been achieved over distances greater than 100 km, and more recently from Earth 1400 km to a satellite (arXiv:1707.00934). Right: A minimal model of quantum energy transfer in which observers can induce an entangled state in which a positive energy measurement at A can induce a negative energy measurement of the vacuum state at B (arXiv:2203.16269, arXiv:2301.02666).
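A minimal state-vector sketch of the protocol in fig 56(a), assuming ideal gates, measurements and classical communication (the amplitudes 0.6 and 0.8i are arbitrary): Alice's Bell measurement on the unknown qubit and her half of the pair yields two classical bits, and Bob's conditional Pauli correction leaves his qubit in the original unknown state.

```python
# Sketch: ideal quantum teleportation of one qubit using a shared Bell pair.
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

psi = np.array([0.6, 0.8j])                                  # unknown state on Alice's qubit 0
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)    # qubits 1 (Alice) and 2 (Bob)
state = np.kron(psi, bell)

# Alice rotates her two qubits into the Bell basis: CNOT(0 -> 1), then H on qubit 0.
state = np.kron(CNOT, I) @ state
state = np.kron(np.kron(H, I), I) @ state

# Alice measures qubits 0 and 1; sample one of the four equally likely outcomes.
rng = np.random.default_rng(0)
amps = state.reshape(2, 2, 2)
p_outcomes = (np.abs(amps) ** 2).sum(axis=2).flatten()       # probabilities of (m0, m1)
m0, m1 = divmod(rng.choice(4, p=p_outcomes), 2)

bob = amps[m0, m1] / np.linalg.norm(amps[m0, m1])            # Bob's post-measurement qubit
# Bob applies the correction Z^m0 X^m1 dictated by Alice's two classical bits.
bob = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ bob
print(np.allclose(bob, psi))                                 # True: Bob holds the original state
```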

In continuous-variable quantum teleportation, entangled particles help to transmit a stream of information comprising numerical values that can range widely, such as the amplitudes of a laser's light waves. But until now, this form of teleportation has been achieved only over very short distances in the lab. Xiaojun Jia and colleagues have now used optical fibre to carry out continuous-variable teleportation of laser-light values across a distance of 6 kilometres. A fidelity of 0.62 ± 0.03 was achieved for the retrieved quantum state, which breaks through the classical limit of 1/2. A fidelity of 0.69 ± 0.03 breaking through the no-cloning limit of 2/3 has also been achieved when the transmission distance is 2.0 km. This approach could allow optical fibre to be used for powerful forms of quantum computing (Huo et al. Sci. Adv. 2018 4 eaas9401 doi:10.1126/sciadv.aas9401).

Quantum Computing: Classical computation suffers from the potentially unlimited time it takes to check out every one of the possibilities. To crack a code we need to check all the combinations, whose number can increase more than exponentially with the size of the code numbers, possibly taking as long as the history of the universe to compute. Factorizing a large number composed of two primes is known to be computationally intractable enough to provide the basis for the public key encryption by which bank records and passwords are kept safe. Although the brain ingeniously uses massively parallel computation, there is as yet no systematic way to bootstrap an arbitrary number of parallel computations together in a coherent manner.

However, quantum reality is a superposition of all the possible states in a single wave function, so if we can arrange a wave function to represent all the possibilities in such a computation, superposition might give us the answer by a form of parallel quantum computation. A large number could in principle be factorized in a few superposed steps, which would otherwise require vast time-consuming classical computer power to check all the possible factors one by one. Suppose we know an atom is excited by a certain quantum of energy, but only provide it with part of the energy required. The atom then enters a superposition of the ground state and the excited state, suspended between the two like Schrödinger's cat. If we then collapse the wave function, squaring it to its probability, it will be found to be in either the ground state or the excited state with equal probability. This superposed state is sometimes called the 'square root of NOT' when it is used to partially excite a system which flips between 0 and 1, corresponding to a logical negation.
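The 'square root of NOT' is easy to exhibit explicitly; the matrix below is one standard choice, and the sketch shows that a single application leaves a qubit in an equal superposition while two applications give the ordinary logical NOT.

```python
# Sketch: the 'square root of NOT' gate - applying it twice flips |0> to |1>.
import numpy as np

SQRT_NOT = 0.5 * np.array([[1 + 1j, 1 - 1j],
                           [1 - 1j, 1 + 1j]])

ket0 = np.array([1, 0], dtype=complex)

half = SQRT_NOT @ ket0                 # equal-weight superposition of |0> and |1>
full = SQRT_NOT @ SQRT_NOT @ ket0      # the ordinary NOT: |0> -> |1>

print("after one application, probabilities:", np.abs(half) ** 2)   # [0.5 0.5]
print("after two applications, probabilities:", np.abs(full) ** 2)  # [0. 1.]
```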

To factorize a large number, we could devise a quantum system in two parts. The left part is excited to a superposition. Suppose we have a small array of atoms which effectively form the 0s and 1s of a binary number - 0 in the ground state and 1 in the excited state. If we then partially excite them all, they represent a superposition of all the binary numbers - e.g. 00, 01, 10 and 11. The right half of the system is designed to give the remainder, on division by the number being factorized, of a test number raised to the power of each of the possible numbers on the left. These remainders turn out to be periodic, so if we measure the right we get one of the values. This in turn collapses the left side into a superposition of only those numbers giving this particular value on the right. We can then recombine the reduced state on the left to find its frequency spectrum and decode the answer. As a simple example, suppose you are trying to factorize n = 15. Take the test number x = 2. The powers of 2 give you 2, 4, 8, 16, 32, 64, 128, 256 ... Now divide by 15, and if the number won't go, keep the remainder. That produces a repeating sequence 2, 4, 8, 1, 2, 4, 8, 1 ... with period 4, and we can use this to figure out that 2^(4/2) - 1 = 3 is a factor of 15. The quantum parallelism solves all the computations simultaneously - this is known as Shor's algorithm, after Peter Shor.

Stage three is the most complex and depends on the fact that the frequency of these repeats can be made to pop out of a calculation by getting the different universes to interfere with one another. A complex series of quantum logic operations has to be performed and interference then brought about by looking at the final answer. The final observed value, the frequency f, has a good chance of revealing the factors of n from the expression x^(f/2) - 1. In the simple example above, the repeat sequence is the four values 2, 4, 8, 1, so the repeat frequency is 4. Thus Shor's algorithm produces the number 2^(4/2) - 1 = 3, which is a factor of 15.
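The classical scaffolding around Shor's algorithm can be spelled out in a few lines of Python; the brute-force period search below is precisely the exponential step that the quantum interference stage replaces, and the values n = 15, x = 2 follow the example in the text.

```python
# Sketch: the classical scaffolding of Shor's algorithm.
# A quantum computer finds the period of x^k mod n quickly; the factors follow classically.
from math import gcd

def find_period(x, n):
    """Brute-force period of x^k mod n (the step a quantum computer would accelerate)."""
    k, value = 1, x % n
    while value != 1:
        k += 1
        value = (value * x) % n
    return k

def shor_classical(n, x):
    if gcd(x, n) != 1:
        return gcd(x, n)                 # lucky guess: x already shares a factor with n
    r = find_period(x, n)                # e.g. 2, 4, 8, 1, ... has period 4 for n = 15, x = 2
    if r % 2 == 1:
        return None                      # odd period: try another x
    candidate = gcd(pow(x, r // 2) - 1, n)
    return candidate if 1 < candidate < n else None

print(shor_classical(15, 2))   # 3, since 2^(4/2) - 1 = 3 divides 15
```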

Fig 57: Above: Two qubit logic gate performance (doi:10.1038/nature15263). Below: Adiabatic quantum computing on the spin-chain problem - one-dimensional spin problems with variable local fields and couplings between adjacent spins, an example of a stoquastic problem - with the evolution of the system for 9 qubits shown at right (doi:10.1038/nature17658).

Such quantum computers require isolation from the environment to avoid quantum superpositions collapsing in decoherence. A two qubit quantum logic gate has been recently constructed using silicon transistor technology, promising a proof-of-principle breakthrough in the construction of quantum computers (Veldhorst et al. 2015).

In an ingenious strategy, a team have used a well-tested four qubit quantum computer to simulate the creation of pairs of particles and antiparticles in a proof-of-concept simulation in which energy is converted into matter, creating an electron and a positron. Quantum electrodynamics makes the most accurate predictions of any physical theory, but interactions involving the strong nuclear and colour forces become too complex, requiring simulations which are prone to exponential runaway in classical computing because it lacks quantum superposition. The team used a quantum computer in which an electromagnetic field traps four ions in a row, each one encoding a qubit, in a vacuum. They manipulated the ions' spins (magnetic orientations) using laser beams, coaxing the ions to perform logic operations. The team's quantum calculations confirmed the predictions of a simplified version of quantum electrodynamics: "The stronger the field, the faster we can create particles and antiparticles" (Martinez et al. 2016).


Fig 58: Left: (a) An experiment to simulate the coherent real-time dynamics of particle-antiparticle creation by realizing the Schwinger model (one-dimensional quantum electrodynamics) on a lattice. (b) The four qubit arrangement. (c, d) Experimental and theoretical data showing the evolution of the particle number density as a function of time wt and particle mass m/w. Right: Quantum tomography, a technique akin to weak quantum measurement. In a functional quantum computer this could be used to inform error-correction measures on connected qubits in the same device. A qubit is created using a circuit with two superconducting metals separated by an insulating barrier. Passing a current produces a qubit with two superposed energy levels simultaneously. Reducing the energy barrier maintaining the superposition collapses its wavefunction into one of the two energy levels. But if it is set just above the higher of the two energy levels, it only partially collapses the waveform - a "partial measurement". Scanning the qubit using microwave radiation, and then fully removing the energy barrier, can then reveal its state of superposition and document its collapse (Science 312 1498). Right: (a) The Google-designed Sycamore processor involving 53 qubits (Arute et al. 2019). (b) The controllable coupling array of the qubits. (c) The control pathway.

The D-Wave computer works on an entirely different principle, adiabatic quantum computing - quantum annealing of a potential energy landscape with multiple local minima. Classical annealing works to find a near-optimal local minimum by starting at a high thermodynamic temperature of random excitations, effectively throwing a marble around the landscape so it avoids getting caught in a high-altitude lake, before gradually lowering the temperature to assist in finding a local minimum not too far in value from the global minimum. Quantum annealing replaces kinetic excitation with graduated quantum tunneling to achieve the same effect. This approach works only on problems encodable into an energy landscape based on array computing. Whether it achieves better performance than classical computing remains unproven, according to Wikipedia (en.wikipedia.org/wiki/Adiabatic_quantum_computation).
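For contrast with quantum annealing, here is a minimal classical simulated-annealing sketch on an invented bumpy 1-D landscape (the energy function and cooling schedule are purely illustrative): thermal kicks let the marble hop out of high-altitude lakes early on and shrink as the temperature falls, which is the role graduated quantum tunneling takes over in the adiabatic machine.

```python
# Sketch: classical simulated annealing on a toy 1-D landscape with several local minima.
import math, random

def energy(x):
    # Invented multi-well landscape: global minimum near x ~ 1.6.
    return 0.1 * (x - 2.0) ** 2 + math.sin(3.0 * x)

def anneal(steps=20000, t_start=5.0, t_end=0.01):
    x = random.uniform(-10, 10)
    for i in range(steps):
        t = t_start * (t_end / t_start) ** (i / steps)   # geometric cooling schedule
        candidate = x + random.gauss(0, 0.5)
        dE = energy(candidate) - energy(x)
        if dE < 0 or random.random() < math.exp(-dE / t):
            x = candidate        # accept downhill moves always, uphill with Boltzmann probability
    return x, energy(x)

random.seed(1)
print(anneal())   # ends near a low-lying minimum of the landscape
```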

In a veritable 'quantum computing leap', a team from Google (Arute et al. 2019) have announced that the Sycamore processor (fig 58 right), consisting of 53 controllable qubits, takes only about 200 seconds to sample one instance of a quantum circuit a million times. Benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. In their words, "This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm."

A team from Google (Barends et al. 2016), see fig 57, have more recently begun fundamental research into adiabatic quantum computing of systems such as stoquastic spin-chain problems. Stoquastic Hamiltonians, those for which all off-diagonal matrix elements in the standard basis are real and non-positive, are common in the physical world (Bravyi et al. 2008). They include flux-type Josephson junction qubits (Barends et al. 2013). A Josephson junction is a conductor pair separated by a thin insulating barrier, which permits quantum tunneling. It is a macroscopic quantum phenomenon resulting in a current crossing the junction, determined by the junction's flux quantum, in the absence of any external electromagnetic field, with discrete steps under increasing voltage.

Quantum Cryptography exploits quantum mechanical properties to perform cryptographic tasks. Quantum key distribution offers an information-theoretically secure solution to the key exchange problem, whereas public-key encryption and signature schemes such as RSA can be broken by quantum adversaries. Quantum cryptography allows the completion of various cryptographic tasks that are proven or conjectured to be impossible using only classical communication: for example, it is impossible to copy data encoded in a quantum state, and the very act of reading data encoded in a quantum state changes the state. This is used to detect eavesdropping in quantum key distribution.

The most well known and developed application of quantum cryptography is quantum key distribution (QKD), which is the process of using quantum communication to establish a shared key between two parties (Alice and Bob, for example) without a third party (Eve) learning anything about that key, even if Eve can eavesdrop on all communication between Alice and Bob. This is achieved by Alice encoding the bits of the key as quantum data and sending them to Bob; if Eve tries to learn these bits, the messages will be disturbed and Alice and Bob will notice. The key is then typically used for encrypted communication using classical techniques. For instance, the exchanged key could be used as the seed of the same random number generator both by Alice and Bob.

Fig 59: Quantum cryptography. (1) To begin creating a key, Alice sends a photon through either the 0 or 1 slot of the rectilinear or diagonal polarizing filters, while making a record of their orientations. (2) For each incoming bit, Bob chooses randomly which filter he uses for detection, writing down both the polarization and the bit value. (3) If Eve tries to spy on the photon train, quantum mechanics prohibits her using both filters, and if she chooses the wrong one she may create errors by modifying the photons' polarization. (4) After all the photons reach Bob, he tells Alice openly his sequence of filters, but not the bit values he measured. (5) Alice tells Bob openly in turn which filters he chose correctly. These determine the bits they will use to form their encryption key.
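A toy Python simulation of the key-sifting steps the caption describes, with no eavesdropper present and illustrative sizes; real quantum key distribution adds error estimation, error correction and privacy amplification on top of this sifting.

```python
# Sketch: BB84-style key sifting. Bases: 0 = rectilinear (+), 1 = diagonal (x).
import random
random.seed(0)

N = 32
alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.randint(0, 1) for _ in range(N)]
bob_bases   = [random.randint(0, 1) for _ in range(N)]

bob_bits = []
for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
    if a_basis == b_basis:
        bob_bits.append(bit)                    # same filter: Bob reads the bit correctly
    else:
        bob_bits.append(random.randint(0, 1))   # wrong filter: the outcome is random

# Publicly compare bases (not bit values) and keep only the matching positions.
key_alice = [alice_bits[i] for i in range(N) if alice_bases[i] == bob_bases[i]]
key_bob   = [bob_bits[i]   for i in range(N) if alice_bases[i] == bob_bases[i]]

print(key_alice == key_bob, "shared key length:", len(key_alice))
# An eavesdropper measuring in random bases would disturb ~25% of the sifted bits,
# which Alice and Bob can detect by sacrificing and comparing a random sample.
```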

Following the discovery of quantum key distribution and its unconditional security, researchers tried to achieve other cryptographic tasks with unconditional security. One such task was quantum commitment. A commitment scheme allows a party Alice to fix a certain value (to "commit") in such a way that Alice cannot change that value while at the same time ensuring that the recipient Bob cannot learn anything about that value until Alice decides to reveal it.

Weak Quantum Measurement and Surreal Bohmian Trajectories

Weak quantum measurement (Aharonov et al. 1988) is a process in which a quantum wave function is not irreversibly collapsed by absorbing the particle; instead a small deformation is made in the wave function, whose effects become apparent later when the particle is eventually absorbed in a 'post-selection', e.g. on a photographic plate in a strong quantum measurement. Weak quantum measurement changes the wave function slightly in mid-flight between emission and absorption, and hence before the particle meets the future absorber involved in eventual detection. A small change is induced in the wave function, e.g. by slightly altering its polarization along a given axis (Kocsis et al. 2011). This cannot be used to deduce the state of a given wave-particle at the time of measurement, because the wave function is only slightly perturbed and is not collapsed or absorbed, as in strong measurement, but one can build up a statistical prediction over many repeated quanta of the conditions at the point of weak measurement, once post-selection data is assembled after absorption at specific points in the eventual path.

Fig 60: Weak quantum measurement in a double slit apparatus generating single photons using a laser-stimulated quantum dot and split fiber optics. The overlapping wave function is elliptically polarized in the xy-plane transverse to the z-direction of travel. A calcite crystal is used to make a small shift in the phase of one component, while the other retains the information leading to absorption of the photon on a charge-coupled device. By combining the information from the two transverse components at varying lens settings, it becomes possible to make a statistical portrait of the evolving particle trajectories within the wave function. Pivotally, the weak quantum measurement is made in a way which is confirmed only in the future of the ensemble, when the absorption takes place (Kocsis et al. 2011).
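The weak value introduced by Aharonov and co-workers, A_w = ⟨φ_f|A|ψ_i⟩ / ⟨φ_f|ψ_i⟩, is straightforward to evaluate; in the sketch below (with invented pre- and post-selected spin states) a nearly orthogonal post-selection pushes the weak value of a spin component far outside its eigenvalue range of ±1, the signature amplification exploited in the deflection measurements mentioned below.

```python
# Sketch: a weak value <phi_f| A |psi_i> / <phi_f|psi_i> for a spin-1/2 observable.
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

def weak_value(A, pre, post):
    return (post.conj() @ A @ pre) / (post.conj() @ pre)

theta = 0.05                                          # post-selection nearly orthogonal to pre-selection
pre  = np.array([1, 0], dtype=complex)                # pre-selected spin-up along z
post = np.array([np.cos(np.pi / 2 + theta), np.sin(np.pi / 2 + theta)], dtype=complex)

print(weak_value(sigma_x, pre, post))   # ~ -20: far outside the +/-1 eigenvalue range
```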

Weak measurement also suggests (Merali 2010, Cho 2011) that, in some sense, the future is determining the present, but in a way we can discover conclusively only by many repeats. Focus on any single instance and you are left with an effect with no apparent cause, which one has to put down to random experimental error. This has led some physicists to suggest that free will exists only in the freedom to choose not to make the post-selection(s) revealing the future's pull on the present. Yakir Aharonov, the codiscoverer of weak quantum measurement (Aharonov et al. 1988), sees this occurring through an advanced wave travelling backwards in time from the future absorbing states to the time of weak measurement. What God gains by 'playing dice with the universe', in Einstein's words, in the quantum fuzziness of uncertainty, is just what is needed so that the future can exert an effect on the present without ever being caught in the act of doing it in any particular instance: "The future can only affect the present if there is room to write its influence off as a mistake", neatly explaining why no subjective account of prescience can do so either.

Weak quantum measurements have been used to elucidate the trajectories of the wave function during its passage through a two-slit interference apparatus (Kocsis et al. 2011), to determine all aspects of the complex waveform of the wave function (Hosten 2011, Lundeen et al. 2011), to make ultra-sensitive measurements of small deflections (Hosten & Kwiat 2008, Dixon et al. 2008), and to demonstrate counterfactual results involving both negative and positive post-selection probabilities, which still add up to certainty, when two interference pathways overlap in a way which could result in annihilation (Lundeen & Steinberg 2009). In a more recent development, a team led by Aharonov (et al. 2014) has found that post-selection can also induce forms of entanglement in particles even if they have no previous quantum connection coupling their wave functions.

In formulating a theoretical point of view in which weak measurement plays an integral part, Aharonov and co-workers (Aharonov, Bergmann & Lebowitz 1964, Aharonov & Vaidman 2008) noted that the usual assumption that reduction of the wave packet is an irreversible process makes quantum reality non-time-symmetric and is unsuited to the post-selection of weak quantum measurement. They therefore devised a time-symmetric version of quantum mechanics - the two state vector approach - in which a state is defined by two vectors: one defined by the results of measurements performed on the system in the past relative to the time t, and a backward-evolving quantum state defined by the results of measurements performed on the system after the time t. In this sense, the post-selection strong measurement after the event becomes critical in defining the quantum state. We will see this has a close relationship with the transactional interpretation, although the Bohmian idea of pilot waves guiding a real particle assumes a causality in the classical direction of increasing time.


Fig 60b: Experimental realization of the pigeonhole paradox - three photons and only two polarizations.

In another manifestation of quantum reality exposed by weak measurement, we have the pigeonhole paradox. The pigeonhole principle states that if there are more pigeons than boxes, at least one box must contain more than one pigeon. Aharonov et al. (2014, 2016) put forward a quantum pigeonhole paradox in which the classical pigeonhole counting principle in some cases breaks down. Chen et al. (2019) demonstrate by weak-strength measurement that when three single photons are transmitted through two polarization channels, in a well-defined pre- and post-selected ensemble, no two photons are found in the same polarization channel. The effect of variable-strength quantum measurement is experimentally analysed order by order, and a transition in the violation of the pigeonhole principle is observed. The different kinds of measurement-induced entanglement are responsible for the photons' anomalous collective behaviour in the paradox.

Surreal Bohmian Trajectories: The space-time profile of weak quantum measurement displays detection trajectories comparable with David Bohm's (1952) pilot wave theory, in which the particle has a defined position and the wave acts simply as a guide, albeit with non-local influences. The link with Bohm's pilot wave theory was reinforced when a critical experiment demonstrated the existence of so-called "surreal Bohmian trajectories". A group of physicists known by the initials ESSW (1992) pointed out that a Bohmian hidden-variable theory, in which particles are guided by a non-local 'pilot' wave, could in principle lead to 'surreal' trajectories which appeared to conflict with the predictions of quantum theory. However, when a second group (Mahler et al. 2016) set out to test surreal trajectories experimentally, they found that they physically exist.

Fig 61: Left: The apparatus used to discover surreal trajectories (Mahler et al. 2016, see discussion below). Right: The experimental results show that some photons which should have gone, say, through the red slit according to their entangled twins appear to do so near the slit, but further along the trajectory veer off to behave erratically, as if they were a superposition of either polarization, indicating some unseen non-local connection between the now separated entangled photons.

The experiment (fig 61) first prepares a pair of highly entangled photons with complementary polarization and then passes one into a double slit apparatus in which the photon to be measured is directed to one or other slit depending on its entangled twin's polarization. The measured photons are then passed through an apparatus to do weak quantum measurement of their trajectories as an ensemble and then detect the eventual position destructively in the same manner as fig 60. However when the polarization of the other entangled photon is used to determine which slit the first one must have gone through, the orbits near the centre of the interference pattern display clear signs of surreal trajectories.

When weak measurement is used to detect the trajectory close to the slit, it confirms that the photon has gone through the correct slit according to its assumed polarization, as subsequently measured by sampling its entangled twin. However, as the position of weak measurement moves towards the photographic plate, the predictions fall to an even superposition of the two polarizations. Since the weak quantum measurement is a physical realization of the ensemble trajectories going to this particular point on the plate, surreal trajectories are real, but the prediction made of the spin via the entangled twin has become changed. This implies in turn that changes of a non-local nature have occurred between entering the slits and hitting the plate, implying that there is substance to the Bohmian reality.

A brief synopsis of Bohm's pilot wave theory, which can be generalized, e.g. to bosons, runs along the following lines. Consider a wave function $\psi_t(q)$ defined on a configuration space of $m$ distinguishable particles $x_1, \dots, x_m$ in $d$ dimensions, forming an $md$-dimensional space of points $q = \{q_{11}, q_{12}, q_{13}, \dots, q_{m1}, q_{m2}, q_{m3}\}$, assuming $d = 3$. Giving them masses $M_k$ in each direction, $\psi_t$ obeys the Schrödinger wave equation $i\hbar\,\partial\psi_t/\partial t = \big[-\sum_k (\hbar^2/2M_k)\,\partial^2/\partial q_k^2 + V(q)\big]\psi_t$. We also consider the 'world particle' $x$ consisting of real component positions $x(t) = \{x_{11}(t), \dots, x_{m3}(t)\}$, with $x(0)$ being a random variable distributed with probability density $P_0(x)$, where $P_t(q) = |\psi_t(q)|^2$. Under the wave function, the velocity of the particle is defined by $\dot{x}_k(t) = (\hbar/M_k)\,\mathrm{Im}\big[\partial_k\psi_t/\psi_t\big]_{q=x(t)}$, guaranteeing that the probability density for $x(t)$ is $P_t(x)$. Equivalently this gives an equation of motion $M_k\ddot{x}_k(t) = f_k(x(t)) + r_k(x(t),t)$, where $f = -\nabla V(q)$ is the classical force arising from the potential $V(q)$ and $r_t(q) = -\nabla Q_t(q)$ is a repulsive force due to the quantum potential, where $Q_t(q) = -\sum_k (\hbar^2/2M_k)\,\partial_k^2\sqrt{P_t}\,/\sqrt{P_t}$. We thus have essentially real particles with defined positions, subject to their (random) initial conditions, whose dynamics is determined both by a classical potential and by an additional quantum potential, whose effects are broadly consistent with the results we find in experiments such as weak quantum measurement.
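A minimal numerical sketch of the guidance rule just stated, for a single free 1-D Gaussian packet with ħ = m = 1 and simple Euler integration (the parameters are illustrative): particle velocities are read off from Im(∂ψ/∂x / ψ), and an ensemble of initial positions drawn from |ψ(x,0)|² drifts and spreads with the packet.

```python
# Sketch: Bohmian trajectories guided by a free 1-D Gaussian wave packet (hbar = m = 1).
import numpy as np

sigma0, k0 = 1.0, 2.0     # initial width and mean momentum of the packet

def psi(x, t):
    s = sigma0 ** 2 + 1j * t / 2.0
    # Overall normalization cancels in the guidance velocity; the exponent is what matters.
    return np.exp(-(x - k0 * t) ** 2 / (4 * s) + 1j * k0 * (x - k0 * t / 2.0)) / (2 * np.pi * s) ** 0.25

def velocity(x, t, dx=1e-5):
    # Guidance equation: v = Im( (d psi / dx) / psi )
    grad = (psi(x + dx, t) - psi(x - dx, t)) / (2 * dx)
    return np.imag(grad / psi(x, t))

rng = np.random.default_rng(0)
x = rng.normal(0.0, sigma0, size=200)      # initial positions drawn from |psi(x, 0)|^2
dt = 0.01
for step in range(500):
    x = x + velocity(x, step * dt) * dt    # Euler step along the guidance velocity field

print("mean position ~ k0 * t:", x.mean(), "| spread has grown from 1 to:", x.std())
```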

 

Superposition, Entanglement and Counter-particles

 

Fig 62: Elitzur's proposed experiment (arXiv:1707.09483), whose principles have already been confirmed in a related experiment causing a photon to be reflected off both of two slits (Okamoto & Takeuchi 2018 Experimental demonstration of a quantum shutter closing two slits simultaneously Sci. Repts. doi:10.1038/srep35161).

 

In a new and more puzzling twist on quantum superpositions (Elitzur et al. 2018 arXiv:1707.09483), following on from Aharonov's two state vector past-future handshaking view, a probe photon (yellow left) is sent in a superposition through three boxes, A, B and C, simultaneously, to see if the shutter photon (red) is inside them. If it is, the probe photon is reflected. This lets the probe photon report on where the shutter photon is without looking at it directly. The shutter photon is placed in a superposition that makes its location within the boxes vary through time: at moment 1 it is in A and C but not B, at moment 2 it is only in C, and at moment 3 it is only in B and C. Although it is in C at all times, the shutter photon appears to be in A at moment 1, only to disappear and reappear in B at moment 3. So this superposition is in some places some of the time, rather than all places at once. The experiment is designed so that the probe photon can only show interference if it interacted with the shutter photon in this particular sequence of places and times, so interference in the probe photon would be a definitive sign that the shutter photon made this bizarre, logic-defying sequence of disjointed appearances among the boxes at different times.

 

The apparent vanishing of particles in one place at one time - and their reappearance at other times and places - suggests a new and extraordinary vision of the underlying processes: the nonlocal existence of quantum particles can be understood as a series of events in which a particle's presence in one place is somehow 'canceled' by its own 'counter-particle' in the same location, in a manner reminiscent of particle-antiparticle annihilation. The disappearance of quantum particles is not 'annihilation' in this same sense, but it is somewhat analogous - these putative counter-particles should possess negative energy and negative mass, allowing them to cancel their counterparts. So although the traditional 'two places at once' view of superposition might seem odd enough, 'it's possible a superposition is a collection of states that are even crazier,' Elitzur says. 'Quantum mechanics just tells you about their average'.

 

Self-Simulated Universe: Another theory put forward by gravitational theorists (Irwin, Amaral & Chester 2020) also uses retrocausality to try to answer the ultimate questions: Why is there anything here at all? What primal state of existence could possibly have birthed all that matter, energy, and time - all that everything? And how did consciousness arise - is it some fundamental proto-state of the universe itself, or an emergent phenomenon that is purely neurochemical and material in nature?

 

Fig 77b: Self-Simulated Universe: Humans are near the point of demarcation, where EC or thinking matter emerges into the choice-sphere of the infinite set of possibilities of thought, EC. Beyond the human level, physics allows for larger and more powerful networks that are also conscious. At some stage of the simulation run, a conscious EC system emerges that is capable of acting as the substrate for the primitive spacetime code, its initial conditions, as mathematical thought, and simulation run, as a thought, to self-actualize itself. Linear time would not permit this logic, but non-linear time does.

 

This approach attempts to answer both questions in a way that weds aspects of Nick Bostrom's Simulation Argument with timeless emergentism. Termed the panpsychism self-simulation model, it says the physical universe may be a 'strange loop' that self-generates new sub-realities in an almost infinite hierarchy of tiers inlaid with simulated realities of conscious experience. In other words, the universe is creating itself through thought, willing itself into existence on a perpetual loop that efficiently uses all the mathematics and fundamental particles at its disposal. The universe, they say, was always here (timeless emergentism) and is like one grand thought that makes mini thoughts, called code-steps or actions - again, something like a Matryoshka doll.

 

David Chester comments:

 

While many scientists presume materialism to be true, we believe that quantum physics may provide hints that our reality could be a mental construct. Recent advances in quantum gravity, like seeing spacetime emergent via a hologram, are also a hint that spacetime isn't fundamental. This is also compatible with ancient Hermetic and Indian philosophy. In a sense, the mental construct of reality creates spacetime to efficiently understand itself by creating a network of subconscious entities that may interact and explore the totality of possibilities.

 

They modify the simulation hypothesis to a self-simulation hypothesis, where the physical universe, as a strange loop, is a mental self-simulation that might exist as one of a broad class of possible code-theoretic quantum gravity models of reality obeying the principle of efficient language axiom, and discuss implications of the self-simulation hypothesis such as an informational arrow of time.

 

The self-simulation hypothesis is built upon the following axioms:

 

1. Reality, as a strange loop, is a code-based self-simulation in the mind of a panpsychic universal consciousness that emerges from itself via the information of code-based mathematical thought or self-referential symbolism plus emergent non-self-referential thought. Accordingly, reality is made of information called thought.

2. Non-local spacetime and particles are secondary or emergent from this code, which is itself a pre-spacetime thought within a self-emergent mind.

3. The panconsciousness has free will to choose the code and make syntactical choices. Emergent lower levels of consciousness also make choices through observation that influence the code syntax choices of the panconsciousness.

4. Principle of efficient language (Irwin 2019). The desire or decision of the panconscious reality is to generate as much meaning or information as possible for a minimal number of primitive thoughts, i.e., syntactical choices, which are mathematical operations at the pre-spacetime code level.

 

Fig 77c: This emphasis on coding is problematic, as it is trying to assert a consciousness-makes-reality loop through an apparently abstract coded representation based on discrete computation-like processes, assuming an "it-from-bit" notion that reality is made from information, not just described by it.

 

It from bit: Otherwise put, every it — every particle, every field of force, even the space-time continuum itself — derives its function, its meaning, its very existence entirely — even if in some contexts indirectly — from the apparatus-elicited answers to yes-or-no questions, binary choices, bits. It from bit symbolizes the idea that every item of the physical world has at bottom — at a very deep bottom, in most instances — an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and that this is a participatory universe (Wheeler 1990).

Many Interacting Worlds: More recently Hall, Deckert and Wiseman (2014 doi:10.1103/PhysRevX.4.041013) have extended these ideas to encompass a many interacting worlds (MIW) approach, replacing the quantum potential with the effects of a large number of worlds with Newtonian dynamics following the classical force above, but under a very unusual type of interaction in which the force between worlds is non-negligible only when the two worlds are close in configuration space. The authors admit that such an interaction is quite unlike anything in classical physics, and it is clear that an observer in one world would have no experience of the other worlds in their everyday observations. But unlike Everett's many worlds interpretation, where all the probability universes are equal and simply represent the alternative outcomes of Schrödinger's cat, the interacting worlds are not equal but have a mutually repulsive global interaction, so, by careful experiment, an observer might detect a subtle non-local action on the molecules of their world.

 

Fig 63: Two-slit interference amplitudes using the pilot wave theory (above) and the many interacting worlds theory (below). Both correspond closely to the distributions of standard quantum mechanics in this case.

Suppose now that instead of only one world-particle, as in the pilot wave interpretation, there were a huge number $N$ of world-particles coexisting, with positions (world configurations) $x^1, \dots, x^N$. If each of the initial world configurations is chosen at random from $P_0(q)$, as described above, then by construction the empirical density of the worlds approximates $P_0(q)$. One can thus approximate $P_t(q)$, and its derivatives, from a suitably smoothed version of the empirical density at time $t$. From this smoothed density, one may also obtain a corresponding approximation $r_N(q; X_t)$ of the Bohmian force for $N \gg 1$, in terms of the list of world configurations $X_t = \{x^1(t), \dots, x^N(t)\}$ at time $t$.

Note, in fact, that since only local properties of $P_t(q)$ are required for $r_t(q)$, the approximation $r_N(q; X_t)$ requires only those worlds from the set $X_t$ which lie in the configuration-space neighborhood of $q$. That is, the approximate force is local in configuration space.

The MIW theory replaces the Bohmian force acting on each world-particle $x^n(t)$ by the approximation $r_N(x^n; X_t)$. Thus, the evolution of the world configuration $x^n(t)$ is directly determined by the other configurations in $X_t$. This makes the wave function $\psi_t$, and the functions $P_t(q)$ and $S_t(q)$ derived from it, superfluous. Its fundamental dynamics are described by the system of $N \times m \times 3$ second-order differential equations $M_k\ddot{x}^n_k(t) = f_k(x^n(t)) + r_{N,k}(x^n(t); X_t)$. While each world evolves deterministically, which of the $N$ worlds we are actually living in is unknown. Hence, assertions about the configuration of the particles in our world are again probabilistic. For a given function $G(q)$ of the world configuration, only an equally weighted population mean $\langle G \rangle \approx \frac{1}{N}\sum_n G(x^n(t))$ over all the worlds compatible with observed macroscopic properties can be predicted at any time. Moreover, since the worlds are distributed with density $P_t(q)$, $\frac{1}{N}\sum_n G(x^n(t)) \to \int dq\, P_t(q)\, G(q)$ for any smooth function $G$, so the description in the limit $N \to \infty$ approaches that of the wave function. The description is complete only when the form of the force between worlds is specified. There are different possible ways of doing so, each leading to a different version; for example, in a simplified 1-D model one can take a repulsive interworld potential built from the reciprocal spacings of adjacent worlds, such as $U_N(X) \propto \sum_n \big[\frac{1}{x^{n+1}-x^n} - \frac{1}{x^n - x^{n-1}}\big]^2$. Ideally we want a conservative interaction in which the average energy per world approaches the quantum average energy in the limit. Suitable choices lead to estimates which closely follow the pilot wave and quantum descriptions for several quantum phenomena.
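As a toy numerical illustration of recovering the density, and hence the quantum potential, from an ensemble of worlds (1-D, ħ = m = 1, with the worlds drawn here from a unit Gaussian purely for illustration), the sketch below smooths the empirical distribution of world positions with a Gaussian kernel and differentiates it numerically; the estimate is noisy and biased by the smoothing, but roughly tracks the analytic value Q(x) = 1/4 − x²/8 for a unit Gaussian.

```python
# Sketch: the MIW-style move of estimating the density (and the quantum potential)
# from a finite ensemble of 'worlds', here sampled from a unit Gaussian (hbar = m = 1).
import numpy as np

rng = np.random.default_rng(1)
worlds = rng.normal(0.0, 1.0, size=100_000)     # world positions distributed as P(x)

def smoothed_P(x, h=0.2):
    """Gaussian-kernel estimate of P(x) built from the world positions."""
    return np.mean(np.exp(-(x - worlds[:, None]) ** 2 / (2 * h ** 2)), axis=0) / (h * np.sqrt(2 * np.pi))

def quantum_potential(x, dx=1e-3):
    """Q = -(1/2) (sqrt(P))'' / sqrt(P), using finite differences on the smoothed density."""
    s = lambda y: np.sqrt(smoothed_P(y))
    return -0.5 * (s(x + dx) - 2 * s(x) + s(x - dx)) / dx ** 2 / s(x)

x = np.array([0.0, 0.5, 1.0, 1.5])
print(np.round(quantum_potential(x), 3))        # roughly matches the analytic values below
print(np.round(0.25 - x ** 2 / 8, 3))           # analytic quantum potential for a unit Gaussian
```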

MIW is provocative because it shows that multiple-configuration hidden variable theories can produce dynamics commensurate with quantum theory, but the action between world configurations is a re-description of the same phenomena of global dynamics that quantum entanglement demonstrates, so in a sense it is a multiverse theory of entanglement.

However neither MIW nor the pilot wave theory can explain all aspects of wave-function collapse, because of cases like the decay of a photon into an electron-positron pair, where the more complicated massive two-particle system has more degrees of freedom than the initial conditions supply, and both still depend on random variables in their defining conditions.

Quantum Decoherence, Darwinism, Discord and Recoherence

Quantum Decoherence (Zurek 1991, 2003) explains how reduction of the wave packet can lead to the classical interpretation through interaction of the system with other quanta. Suppose we consider a measurement of electron spin. If an ideal detector is placed in the spin-up path, it will click only if the electron is spin up, so we can take its untriggered state |d↓⟩ as recording spin down. If we start with an electron in the pure state |ψ⟩ = α|↑⟩ + β|↓⟩, then the composite system can be described as |ψ⟩|d↓⟩, and the detector system will evolve into a correlated state: α|↑⟩|d↑⟩ + β|↓⟩|d↓⟩. This correlated state involves two branches of the detector, one in which it measures spin up and the other (passively) spin down. This is the splitting of the wave function into two branches advanced by Everett to articulate the many-worlds description of quantum mechanics. However in the real world, we know the alternatives are distinct outcomes rather than a mere superposition of states. Von Neumann was well aware of these difficulties and postulated that in addition to the unitary evolution of the wave function there is a non-unitary 'reduction of the state vector', or wave function, which converts the superposition into a mixture by cancelling the correlating off-diagonal terms αβ* and α*β of the pure density matrix, to get a reduced density matrix ρ = |α|² |↑⟩⟨↑| |d↑⟩⟨d↑| + |β|² |↓⟩⟨↓| |d↓⟩⟨d↓|, which enables us to interpret the coefficients |α|² and |β|² as classical probabilities.

However, as we have seen with the EPR pair-splitting experiments, the quantum system has not made any decisions about its nature until measurement has taken place. This explains the off-diagonal terms, which are essential to maintain the fully undetermined state of the quantum system, which has not yet even decided whether the electrons are spin up or spin down. One way to explain how this additional information is disposed of is to include the interaction of the system with the environment in other ways. Consider a system S, detector D and environment E. If the environment can also interact and become correlated with the apparatus, we have the following transition: (α|↑⟩|d↑⟩ + β|↓⟩|d↓⟩)|E0⟩ → α|↑⟩|d↑⟩|E↑⟩ + β|↓⟩|d↓⟩|E↓⟩.

 

This final state extends the correlation beyond the system-detector pair. When the states of the environment corresponding to the spin up and spin down states of the detector are orthogonal, we can take the trace over the uncontrolled environmental degrees of freedom to recover the same reduced density matrix as above. Essentially, whenever the measured observable is a constant of motion of the detector-environment Hamiltonian, it will be reduced from a superposition to a mixture. In practice, the interaction of the particle carrying the quantum spin states with a photon, and the large number of degrees of freedom of the open environment, can make this loss of coherence, or decoherence, irreversible. Zurek describes such decoherence as an inevitable result of interactions with other particles.
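A minimal numerical sketch of this partial trace (a generic one-qubit environment with illustrative amplitudes, not Zurek's specific model or any of the experiments below): tracing out the environment suppresses the spin's off-diagonal coherence in proportion to the overlap of the two environmental records.

    import numpy as np

    # Decoherence toy: a spin alpha|up> + beta|down> becomes correlated with an
    # environment state |E_up> or |E_dn>; tracing the environment out suppresses
    # the off-diagonal coherence by the overlap <E_up|E_dn>.
    alpha, beta = 1/np.sqrt(2), 1/np.sqrt(2)

    def reduced_density(overlap):
        # two environment states with the requested (real) overlap
        E_up = np.array([1.0, 0.0])
        E_dn = np.array([overlap, np.sqrt(1 - overlap**2)])
        # composite pure state alpha|up>|E_up> + beta|down>|E_dn>
        psi = alpha * np.kron([1, 0], E_up) + beta * np.kron([0, 1], E_dn)
        rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
        return np.trace(rho, axis1=1, axis2=3)     # partial trace over environment

    for ov in (1.0, 0.5, 0.0):
        print(f"<E_up|E_dn> = {ov:3.1f}   off-diagonal coherence = {reduced_density(ov)[0, 1]:.2f}")
    # overlap 1: full coherence retained; overlap 0: orthogonal environmental
    # records, the off-diagonal terms vanish and only the classical mixture of
    # definite outcomes (fig 64, left) remains.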


Fig 64: Left: Cancellation of off-diagonal elements in a cat paradox experiment due to decoherence arising from interactions with other quanta leads to a distribution along the diagonal and a classical real probability distribution (inset), representing the probability that the cat is either alive or dead, but not both. Right: Quantum Darwinism experiment (Unden et al. 2019) showing the setup of centres in the diamond and quantum redundancy emerging.

Quantum Darwinism However, in contrast to the apparent simplicity of the decoherence model, we know the actual explanation of decoherence is interaction with other quantum systems, resulting not in a simple decline of the off-diagonal elements, but multiple quantum entanglements with third parties. To explain the emergence of objective, classical reality, it is not enough to say that decoherence washes away quantum behavior and thereby makes it appear classical to an observer. Somehow, it's possible for multiple observers to agree about the properties of quantum systems. Wojciech Zurek argues that two things must be true. First, quantum systems must have "pointer states" that are especially robust in the face of disruptive decoherence by the environment as is a pointer on the dial of a measuring instrument, such as the particular location of a particle, its speed, spin, or polarization. Zurek argues that classical behavior - the existence of well-defined, stable, objective properties - is possible only because pointer states of quantum objects exist. What is special mathematically about pointer states is that they are preserved, or transformed into a nearly identical state. This implies that the environment preserves some states while degrading others. A particle's position is resilient to decoherence, but superpositions decohere into localized pointer states, so that only one can be observed. Zurek (1982) described this "environment-induced superselection" of pointer states in the 1980s. But there's a second condition that a quantum property must meet to be observed. As Zurek (2009) argues, our ability to observe some property depends not only on whether it is selected as a pointer state, but also on how substantial a footprint it makes in the environment. The states that are best at creating replicas in the environment - i.e. the "fittest" - are the only ones accessible to measurement. It turns out that the same stability property that promotes environment-induced superselection of pointer states also promotes quantum Darwinian fitness, or the capacity to generate replicas. "The environment, through its monitoring efforts, decoheres systems, and the very same process that is responsible for decoherence should inscribe multiple copies of the information in the environment".

An experimental realization of this idea (fig 64 right) has been performed by Fedor Jelezko and co-workers (Unden et al. 2019). The team focused on NV centres, which occur when two adjacent carbon atoms within a diamond lattice are replaced with a nitrogen atom and an empty lattice site. The nitrogen atom has an extra electron that remains unpaired. This behaves as an isolated spin - which can be up, down or in a superposition of the two. The spin state can be probed in a well-established process that involves illuminating the diamond with laser light and recording the fluorescence given off. The researchers set out to monitor how the NV spin interacts with the spins of several neighbouring carbon atoms. Most carbon in the diamond is carbon-12, which has zero spin. However, around 1% of the atoms are carbon-13, which has a nuclear spin. Their experiment involved probing the interaction of an NV spin with, on average, four carbon-13 atoms, about 1 nm away. The carbon-13 spins - which serve as the environment - are too weak to interact with one another but nevertheless cause decoherence in the NV spin. This process involves the carbon-13 spins changing to new quantum states that depend on the state of the NV spin. The experiment is done by shining green laser light onto NV spins within a millimetre-sized sample of diamond and measuring the photons emitted as microwave and radio frequency fields are switched on and off. Because they were not able to observe the carbon-13 spins directly, the team transferred these spin states to the NV spins and again exploited fluorescence measurements. By measuring the spin of just one carbon-13 nucleus, and repeating the experiment many times, they found they could correctly deduce most of the NV spin properties most of the time. But measurements of additional nuclear spins added little to this knowledge. These results "give the first laboratory demonstration of quantum Darwinism in action in a natural environment". Two other groups have meanwhile carried out similar measurements (using the polarization of photons) that also show redundancy, demonstrating the proliferation of classical information as well as an 'uptick' in information taking place at the quantum level. Adan Cabello argues that other approaches can reveal crucial insights into the emergence of classical reality (Pokomy et al. 2019). He and colleagues have shown how to make measurements on trapped ions while still preserving the remaining parts of the system's quantum coherence, which shows that measurement is the result of a dynamical process governed itself by quantum mechanics.
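The redundancy plateau itself can be made concrete with a toy calculation (an idealised GHZ-like branching state standing in for a fully decohered NV spin and its carbon-13 "environment"; it is not the experiment's Hamiltonian): the quantum mutual information between the system and environment fragments of increasing size saturates at the system's entropy for any partial fragment, and only the complete environment yields more.

    import numpy as np

    def ptrace(rho, keep, n):
        """Reduced density matrix over the qubits listed in keep (out of n)."""
        rho = rho.reshape([2] * (2 * n))
        cur = list(range(n))
        for q in sorted(set(range(n)) - set(keep), reverse=True):
            m = len(cur)
            pos = cur.index(q)
            rho = np.trace(rho, axis1=pos, axis2=pos + m)
            cur.remove(q)
        m = len(cur)
        return rho.reshape(2**m, 2**m)

    def entropy(rho):                       # von Neumann entropy in bits
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]
        return float(-(w * np.log2(w)).sum())

    n_env = 6                               # "carbon-13" record spins
    n = n_env + 1                           # qubit 0 is the system
    psi = np.zeros(2**n)
    psi[0] = psi[-1] = 1 / np.sqrt(2)       # GHZ-like branching state
    rho = np.outer(psi, psi)

    S_sys = entropy(ptrace(rho, [0], n))
    for k in range(1, n_env + 1):
        frag = list(range(1, 1 + k))        # first k environment spins
        I = S_sys + entropy(ptrace(rho, frag, n)) - entropy(ptrace(rho, [0] + frag, n))
        print(f"fragment of {k} spin(s): I(S:F) = {I:.2f} bits")
    # The plateau at 1 bit (= the system's entropy) for every partial fragment is
    # the redundancy signature; only the full environment reveals the remaining
    # purely quantum information.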

Quantum Darwinism may provide an interactive bridge which can explain how the conscious brain derives its model of the classical world and uses it to anticipate opportunities and threats to survival. Conscious brain states are characterized by a maximal degree of global coupling in terms of phase coherence of the EEG across regions, by contrast with non-coherent regional processing which does not reach the level of consciousness. Karl Pribram has drawn attention to the similarities of this form of processing with quantum measurement, where the uncertainty relation is defined by wave beats. Given the sensitive dependence of edge-of-chaos and self-organized criticality, and the interactive capacity between the micro levels of the synapse and ion channel and global-scale excitations, the conscious brain forms the most complex interactive system of quantum entanglements in the known universe. The brain states corresponding to the evolving Cartesian theatre of consciousness thus provide the richest set of boundary conditions for quantum Darwinism to shape brain states and in turn be shaped by them, just as the carbon-13 atoms form an interactive basis for quantum Darwinism with the unpaired nitrogen electron. In this way one can see the conscious brain as a two-way interactive process, both shaping the fluctuations of the quantum milieu and being in turn shaped by them, in an interactive resonance with the foundations of the transitions from the quantum superimposed world to that of unfolding experienced real-world history. This mutual interaction invokes a defining interface between the two worlds in which both consciousness and intentional will are modulating the apparent randomness of wave-function collapse dependent only on wave amplitude and probability, resolving the question of how apparently random quantum processes can lead to anticipative intentional acts.

Quantum Discord (Ollivier & Zurek 2002) is an extension of entanglement to more general forms of coherence, in which partial correlations induced through interaction with mixed-state particles can still be used to induce quantum correlated effects (Gu et al. 2012). Quantum discord is a promising candidate for a complete description of all quantum correlations, including coherent interactions that generate negligible entanglement. Coherent interactions can harness discord to complete a task that is otherwise impossible. Experimental implementation of this task demonstrates that this advantage can be directly observed, even in the absence of entanglement. Quantum discord does not require isolation from decoherence, and can even derive additional quantum information from interaction with mixed states which would annihilate entangled states.

Quantum discord is thus a viable model for processes ongoing at biological temperatures, which could disrupt full entanglement, such as photosynthesis receptors which are claimed to use a spatial form of quantum computing to utilize the most efficient conduction path of the chemical reaction centers (Brooks 2014, Thyrhaug et al. 2018), although this is still being debated (Ball 2018). Biology is full of phenomena at the quantum level, which are essential to biological function. Enzymes invoke quantum tunneling to enable transitions through their substrate's activation energy. Protein folding is a manifestation of quantum computation intractable by classical computing. When a photosynthetic active centre absorbs a photon, the wave function of the excitation is able to perform a quantum computation, which enables the excitation to travel down the most efficient route to reach the chemical reaction site (McAlpine 2010, Hildner et al. 2013). Frog rod cells are sensitive to single photons (King 2008) and recent research suggests the conscious brain can detect as few as three individual photons (Castelvecchi 2015). Quantum discord may also be integral to the coherent excitations of active brain states (King 2014). A scheme has also been used to perform certain forms of quantum computing, such as finding the diagonal sum of a 2x2 matrix using quantum discord rather than entanglement (Merali 2011b fig 65 right).

Fig 65: Left/centre: Quantum discord. Alice encodes information within one arm of a two-arm quantum state ρAB. Bob attempts to retrieve the encoded data. We compute Bob's optimal performance when he is restricted to performing a single local measurement on each arm (that is, Bob can make a local measurement first on A, then B, or vice versa). We compare this to the case where Bob can, in addition, coherently interact the processes, which allows him to effectively measure in an arbitrary joint basis of A and B. We show that coherent two-body interactions are advantageous if and only if ρAB contains discord and that the amount of discord Alice consumes during encoding bounds exactly this advantage. Curve (a) represents the amount of information Bob can theoretically gain should he be capable of coherent interactions. For our proposed implementation, this maximum is reduced to the level of curve (b), where, experimentally, Bob's knowledge about the encoded signal is represented by the blue data points. Curve (c) models these observations, taking experimental imperfections into account. Despite these imperfections, Bob is still able to gain more information than the incoherent limit given by curve (d). The blue shaded region highlights this quantum advantage, which is even more apparent if we compare Bob's performance to the reduced incoherent limit when experimental imperfections are accounted for (curve (e)). We can also compare these rates to a practical decoding scheme for Bob when limited to a single measurement on each optimal mode (curve (f)) and its imperfect experimental realization (curve (g)). Right: Quantum computation usually requires entangled qubits to perform calculations such as Shor's algorithm above, but it has been found that a collection of qubits with all but one in a state of discord and only one in a pure state, or even all of them in a discordant state, as long as the discord is above the zero value of classical systems, can be used to perform types of quantum computation, when the computation is averaged over several runs (Merali 2011b).

The original motivations for discord were to understand the correlation between a quantum system and classical apparatus and the division between quantum and classical correlation. It shows us the quantum interior of what is happening during decoherence (Zurek 1991). A similar quantity called deficit was employed to study the thermodynamic work extraction and Maxwell's demon. Discord is equal to the amount of classical correlation that can be unlocked in quantum-classical states. Discord between two parties is related to the resource for quantum state merging with another party. Coherent quantum interactions (two-body operations) between separable systems that result in negligible entanglement could still lead to exponential speed-ups in computation, or the extraction of otherwise inaccessible information.
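A minimal numerical sketch of the original Ollivier-Zurek definition (a coarse grid over projective measurements on one qubit; the Werner-state family is an illustrative choice, not the states used in the experiments of fig 65): discord is the gap between the total mutual information and the best classical correlation extractable by a local measurement.

    import numpy as np

    I2 = np.eye(2)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.diag([1.0, -1.0]).astype(complex)

    def S(rho):                              # von Neumann entropy in bits
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]
        return float(-(w * np.log2(w)).sum())

    def rho_A(rho):                          # trace out qubit B
        return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

    def rho_B(rho):                          # trace out qubit A
        return rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

    def discord(rho, steps=60):
        """I(A:B) minus the best classical correlation over projective
        measurements on B, searched on a coarse angular grid."""
        SA = S(rho_A(rho))
        I_AB = SA + S(rho_B(rho)) - S(rho)
        best_J = 0.0
        for th in np.linspace(0, np.pi, steps):
            for ph in np.linspace(0, 2*np.pi, steps, endpoint=False):
                n = np.array([np.sin(th)*np.cos(ph), np.sin(th)*np.sin(ph), np.cos(th)])
                J = SA
                for s in (+1, -1):
                    Pi = (I2 + s*(n[0]*sx + n[1]*sy + n[2]*sz)) / 2
                    M = np.kron(I2, Pi)
                    q = np.real(np.trace(M @ rho @ M))
                    if q > 1e-12:
                        J -= q * S(rho_A(M @ rho @ M) / q)
                best_J = max(best_J, J)
        return I_AB - best_J

    phi = np.zeros(4, complex); phi[0] = phi[3] = 1/np.sqrt(2)      # |Phi+>
    for p in (0.0, 0.3, 0.7, 1.0):                                  # Werner family
        rho = p*np.outer(phi, phi.conj()) + (1 - p)*np.eye(4)/4
        print(f"Werner p = {p:.1f}:  discord ~ {discord(rho):.3f}")
    # The fully mixed state shows no discord, the pure Bell state shows 1 bit,
    # and the partially mixed states retain nonzero discord even where their
    # entanglement is weak or absent.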

Fig 66: Recoherence experimental apparatus, with, on the right, evidence for the increase in amplitude of off-diagonal elements as the apparatus is moved into the recoherence configuration.

Recoherence is the reversal of decoherence by providing back the information which was lost in decoherence. All forms of entanglement involve decoherence because the system has become coupled to another wave-particle. Once two quantum subsystems have become entangled, it is no longer possible to ascribe an independent state to either. Instead, the subsystems are completely described only as part of a greater, composite system. As a consequence, each entangled subsystem experiences a loss of coherence, or decoherence, following entanglement. Decoherence leads to the leaking of information from each subsystem to the composite entangled system. In fig 66 the researchers demonstrate a process of decoherence reversal, whereby they recover information lost from the entanglement of the optical orbital angular momentum and radial profile degrees of freedom possessed by a photon pair. They note that these results carry great potential significance, since quantum memories and quantum communication schemes depend on an experimenter's ability to retain the coherent properties of a particular quantum system (Bouchard et al. 2015). They show that quantum information in the orbital angular momentum (OAM) degree of freedom of an entangled photon pair can be lost and retrieved through propagation, by manipulating the degree of entanglement between their OAM and radial mode Hilbert spaces. This effect is different from entanglement migration, in which information is transferred between wave-function phase and amplitude rather than being lost to ancillary Hilbert spaces, and likewise differs from quantum erasure, in which information is lost through projective measurement.

Quantum Chaos, Criticality and Entanglement Coupling

The wave-particle complementarity of quantum systems alters their behavior when the dynamics is chaotic. Nuclear energy levels, for example, whose dynamics are chaotic because the nucleons are highly energetic and spatially confined, show consistent finite gaps between the energies of their eigenfunctions representing closed orbits, unlike the electron orbitals of atoms, whose energy levels converge at high energy.

Fig 67: (1) Quantum chaos. A confined wave function in a quantum dot shows statistics displaying finite separation of energy levels, similar to the chaotic eigenfunctions of the atomic nucleus and to the quantum stadium. (2) The quantum stadium shows 'scarring' of the wave function along periodic repelling orbits, which are unstable in the classical case but here gain stability because the spatially extended wave packets overlap (King 2009). The classical analogue (3) is fully chaotic, with dense sets of repelling periodic orbits and space-filling trajectories. (4) Top to bottom: classical and quantum kicked top phase spaces and linear entropies, with ordered (left) and chaotic (right) dynamics. The lack of a dip in the linear entropies in the chaotic regime indicates entanglement with nuclear spin, rather than quantum suppression of chaos, as occurs in closed quantum systems (Chaudhury et al. 2009, Steck 2009).

Likewise the quantum stadium displays 'scarring' of the wave function, where the probability remains consistently high in an ordered manner around dominant classical repelling periodic orbits. The scars remain stable for the wave packet because of its spatial extension and frequency resonance, while the classical dynamics is chaotic, with positive butterfly-effect Lyapunov exponents, consisting of dense repelling periodic orbits in a sea of ergodic space-filling chaotic trajectories.

Both of these examples illustrate what is called the quantum suppression of chaos.

In another example (Yan B, Sinitsyn N (2020) Recovery of Damaged Information and the Out-of-Time-Ordered Correlators doi:10.1103/PhysRevLett.125.040605), the researchers wanted to know what would happen if they rewound the entangled interactions in qubits and then introduced the quantum analogue of a butterfly-effect change. Would the future remain intact or become inevitably altered, like the time traveler in sci-fi stories? A number of entangled qubits were run through a set of logic gates before being returned to their initial setup. Back at the starting point, a measurement was made, collapsing the wave superposition into an 'actuality'. The whole setup was then allowed to run forward again. They found that they could still easily recover the useful information, because this damage is not magnified by the decoding process.

However, unlike closed quantum systems, when we investigate open quantum systems, or those which can be energetically coupled to other transitions, the situation changes. In the quantum kicked top, consisting of a caesium atom optically excited to a spin-3 state and then magnetically perturbed, entanglement coupling occurs in the chaotic regime between the electronic and nuclear spin states, fig 67(4). We thus find that quantum chaos can lead to new forms of entanglement between the coupled states, showing that quantum chaos can paradoxically lead to further 'spooky' interactive wave effects. Werner Heisenberg cryptically commented "When I meet God, I'm going to ask him two questions, 'Why relativity?' and 'Why turbulence?' I really believe he will have an answer to the first" - implying that the second, i.e. chaos, is the very nemesis.
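The classical benchmark behind fig 67(4) can be sketched with the standard kicked-top map (parameter values are illustrative, not those of the caesium experiment): each period is a fixed rotation followed by a torsion whose angle depends on the spin's orientation, and the butterfly effect is quantified by the largest Lyapunov exponent.

    import numpy as np

    # Classical kicked-top map: a rotation by p about y, then a "torsion" about z
    # whose angle k*Z depends on the spin direction, which makes the dynamics
    # nonlinear and, for large kick strength k, chaotic.
    def kick(v, k, p=np.pi/2):
        X, Y, Z = v
        X, Z = X*np.cos(p) + Z*np.sin(p), -X*np.sin(p) + Z*np.cos(p)
        a = k * Z
        X, Y = X*np.cos(a) - Y*np.sin(a), X*np.sin(a) + Y*np.cos(a)
        return np.array([X, Y, Z])

    def lyapunov(k, steps=20000, d0=1e-8):
        """Largest Lyapunov exponent from the divergence of two nearby spins."""
        rng = np.random.default_rng(0)
        v = rng.normal(size=3); v /= np.linalg.norm(v)
        dv = rng.normal(size=3)
        w = v + d0 * dv / np.linalg.norm(dv)
        total = 0.0
        for _ in range(steps):
            v, w = kick(v, k), kick(w, k)
            d = np.linalg.norm(w - v)
            total += np.log(d / d0)
            w = v + (w - v) * (d0 / d)       # renormalise the separation
        return total / steps

    for k in (1.0, 3.0, 6.0):
        print(f"kick strength k = {k}: Lyapunov exponent ~ {lyapunov(k):.3f} per kick")
    # Near-zero exponents signal regular motion; clearly positive ones signal the
    # classically chaotic regime, which the quantum top follows only until
    # interference and, in open systems, entanglement coupling (fig 67) intervene.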

In a case which illustrates entanglement coupling on a grand scale, physicists have observed quantum entanglement among 'billions of billions' of flowing electrons in a quantum critical material (Prochaska et al. 2020 Singular charge fluctuations at a magnetic quantum critical point Science 367/6475 285-288 doi:10.1126/science.aag1595). The research provides the strongest direct evidence to date of entanglement's role in bringing about quantum criticality.

A wide variety of metallic ferromagnets and antiferromagnets have been observed to develop quantum critical behavior when their magnetic transition temperature is driven to zero through the application of pressure, chemical doping or magnetic fields. Quantum criticality is also believed to drive high-temperature superconductivity.

A quantum critical point is a point in the phase diagram of a material where a continuous phase transition takes place at absolute zero. A quantum critical point is typically achieved by a continuous suppression of a nonzero temperature phase transition to zero temperature by the application of a pressure, field, or through doping. Conventional phase transitions occur at nonzero temperature when the growth of random thermal fluctuations leads to a change in the physical state of a system e.g. solid to liquid. Condensed matter physics research over the past few decades has revealed a new class of phase transitions called quantum phase transitions which take place at absolute zero. In the absence of the thermal fluctuations which trigger conventional phase transitions, quantum phase transitions are driven by the zero point quantum fluctuations associated with Heisenberg's uncertainty principle and in particular, in the light of this study, by large-scale entanglement. Within the class of phase transitions, there are two main categories: at a first-order phase transition, the properties shift discontinuously, as in the melting of a solid, whereas at a second-order phase transition, the state of the system changes in a continuous fashion. Second-order phase transitions are marked by the growth of fluctuations on ever-longer length-scales. These fluctuations are called "critical fluctuations". At the critical point where a second-order transition occurs the critical fluctuations are fractally scale invariant and extend over the entire system. At a quantum critical point, the critical fluctuations are quantum mechanical in nature, exhibiting scale invariance in both space and in time.
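As a concrete, exactly solvable illustration (the textbook transverse-field Ising chain, offered as a generic example rather than as a model of the heavy-fermion material discussed below), the Hamiltonian

    H = -J Σ_i σ^z_i σ^z_{i+1} - h Σ_i σ^x_i

has an excitation gap Δ = 2|J - h| that closes at the quantum critical point h = J, where the correlation length ξ ∝ 1/|J - h| diverges and the zero-point fluctuations become scale invariant in both space and time.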

The current research examined the electronic and magnetic behavior of a "strange metal" compound YbRh2Si2, as it both neared and passed through a critical transition at the boundary between two well-studied quantum phases. In quantum-critical heavy-fermion antiferromagnets, such physics may be realized as critical Kondo entanglement of spin and charge and probed with optical conductivity. At a magnetic quantum critical point, conventional wisdom dictates that only the spin sector will be critical, but if the charge and spin sectors are quantum-entangled, the charge sector will end up being critical as well. The discovery suggests that critical charge fluctuations play a central role in the strange metal behavior, elucidating one of the long-standing mysteries of correlated quantum matter.

Time Crystals and Reversing Time's Arrow

Time Crystals: Another manifestation of quantum reality associated with disorder is the time crystal, a concept Frank Wilczek (arXiv:1308.5949) proposed in 2012. Quantum time crystals are systems characterized by spontaneously emerging periodic order in the time domain. The laws of physics are symmetrical in that they apply equally to all points in space and time, yet many systems violate this symmetry in their realized states, resulting in symmetry-breaking. In a magnet, atomic spins line up in their lowest energy state rather than pointing in all directions. The symmetry-breaking of the weak and electromagnetic forces via the Higgs particle behaves similarly. In a mineral crystal, atoms occupy set positions in space, and the crystal does not look the same if it is shifted slightly. In the same way a time crystal would repeat in time without expending any energy, rather like a perpetual motion machine. However other researchers (doi:10.1103/PhysRevLett.111.070402) quickly proved there was no way to create time crystals out of rotating minimum-energy quantum systems. But the proof left a loophole: it did not rule out time crystals in systems that have not yet settled into a steady state and are out of equilibrium. Three ingredients are essential: a force repeatedly disturbing the particles, a way to make them interact with each other, and an element of random disorder. The combination of these ensures that the particles are limited in how much energy they can absorb, allowing them to maintain a steady, ordered state.


Fig 68:
(a) Laser pumping at the resonant frequency repeatedly reverses the spins of a system of atoms, but requires two precise energy inputs to cycle the states. If the lasers are tuned off the resonant frequency (b), the spins will not move by 180° and will not cycle back to the initial state. However if suitable degrees of disorder and internal interactions occur, the system may enter a state where the spins flip endlessly at a new period even when the laser frequencies are off resonance. In the inset (d) a diamond time crystal (red) flips at a different frequency from the stimulating laser (green).

In the first of two experiments (doi:10.1038/nature21413), this meant repeatedly firing alternating lasers at a chain of ten ytterbium ions: the first laser flips their spins and the second makes the spins interact with each other in random ways. That combination caused the atomic spins to oscillate, but at twice the period at which they were being flipped. More than that, the researchers found that even if they started to flip the system in an imperfect way, such as by slightly changing the frequency of the kicks, the oscillation remained the same. Spatial crystals are similarly resistant to any attempt to nudge their atoms from their set spacing. In the second (doi:10.1038/nature21426), using a 3D chunk of diamond riddled with around a million defects, each harbouring a spin, the diamond's impurities provided a natural disorder. When the team used microwave pulses to flip the spins, they saw the system respond at a fraction of the frequency with which it was being disturbed. These seem to be the first examples of a host of new phases that exist in relatively unexplored out-of-equilibrium states. They could also have several practical applications, from room-temperature simulation of quantum systems to super-sensitive detectors. Autti S et al. (2020) have since been able to induce neighbouring time crystals to interact, observing an exchange of magnons between the two time crystals leading to opposite-phase oscillations in their populations, a signature of the AC Josephson effect, while the defining periodic motion remains phase coherent throughout the experiment.
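A toy Floquet simulation in the spirit of these experiments (the spin count, couplings and pulse error are illustrative assumptions, not the ion-trap or NV parameters) shows the expected rigidity: with disordered Ising interactions the stroboscopic magnetisation is expected to stay locked at twice the drive period despite imperfect pulses, whereas without interactions the same pulse error makes it drift.

    import numpy as np

    # Each drive period: (i) a slightly imperfect global pi-pulse about x,
    # (ii) disordered Ising couplings plus random fields along z.
    n, eps, periods = 8, 0.05, 30
    rng = np.random.default_rng(2)
    phi = rng.uniform(0.5, 1.5, n - 1)        # disordered couplings (radians)
    theta = rng.uniform(0.0, np.pi, n)        # disordered longitudinal fields

    # global rotation by pi*(1 - eps) about x, applied to every spin
    a = np.pi * (1 - eps)
    r = np.array([[np.cos(a/2), -1j*np.sin(a/2)], [-1j*np.sin(a/2), np.cos(a/2)]])
    U_flip = r
    for _ in range(n - 1):
        U_flip = np.kron(U_flip, r)

    # diagonal Ising + field evolution, and the Z value of spin 0
    z = 1 - 2*((np.arange(2**n)[:, None] >> np.arange(n)[::-1]) & 1)
    E = (phi * z[:, :-1] * z[:, 1:]).sum(1) + (theta * z).sum(1)
    U_int_diag = np.exp(-1j * E)
    z0 = z[:, 0]

    def run(interacting):
        psi = np.zeros(2**n, complex); psi[0] = 1.0      # all spins up
        mags = []
        for _ in range(periods):
            psi = U_flip @ psi
            if interacting:
                psi = U_int_diag * psi
            mags.append(float((np.abs(psi)**2 * z0).sum()))
        return np.array(mags)

    signs = (-1) ** np.arange(1, periods + 1)
    for label, m in (("with interactions   ", run(True)),
                     ("without interactions", run(False))):
        # sign-corrected magnetisation near +1 throughout = rigid 2T response
        print(label, np.round((m * signs)[::5], 2))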

Entanglement Reversing the Arrow of Time: Quantum entanglement can also be used to reverse the thermodynamic arrow of time (Micadei et al. 2017). The existence of an arrow follows from the second law of thermodynamics. The law states that entropy, or disorder, tends to increase over time, explaining why it's easy to shatter a glass but hard to put it back together, and why heat spontaneously flows from hot to cold but not in the opposite direction. The new result shows that the arrow of time is relative rather than absolute. The researchers experimentally demonstrate the reversal of the arrow of time for two initially quantum-correlated spin-1/2 particles, prepared in local thermal states at different temperatures, employing a nuclear magnetic resonance setup.

Fig 69: Reversal of the arrow of time: (A) Heat flows from the hot to the cold spin (at thermal contact) when both are initially uncorrelated. This corresponds to the standard thermodynamic arrow of time. For initially quantum correlated spins, heat is spontaneously transferred from the cold to the hot spin. The arrow of time is here reversed. (B) View of the magnetometer used. (C) Experimental pulse sequence for the partial thermalization process.

The experimenters manipulated molecules of chloroform - CHCl3, which are made of carbon, hydrogen and chlorine atoms. The scientists prepared the molecules so that the temperature - judged by the probability of an atom's nucleus being found in a higher energy state - was greater for the hydrogen nucleus than for the carbon. When the two nuclei's energy states were uncorrelated, the heat flowed as normal, from hot hydrogen to cold carbon. But when the two nuclei had strong enough quantum correlations, heat flowed backward, making the hot nucleus hotter and the cold nucleus colder.

The standard second law of thermodynamics assumes that there are no such correlations. When the second law is generalized to take correlations into account, the law holds firm. As the heat flows, the correlations between the two nuclei dissipate, a process that compensates for the entropy decrease due to the reverse heat flow. The experimenters note: "Our results on the thermodynamic arrow of time might also have stimulating consequences on the cosmological arrow of time."
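A two-qubit sketch of this energetics (illustrative temperatures, coupling and contact time, with an exchange interaction standing in for the NMR pulse sequence): the added coherence term changes neither local temperature, yet its phase decides whether the subsequent energy exchange runs hot-to-cold or cold-to-hot.

    import numpy as np
    from scipy.linalg import expm

    w, g, t = 1.0, 1.0, 0.3                    # splitting, coupling, contact time

    def thermal(beta):
        p1 = np.exp(-beta * w) / (1 + np.exp(-beta * w))
        return np.diag([1 - p1, p1])

    rho_hot, rho_cold = thermal(0.5), thermal(1.5)        # beta_hot < beta_cold

    # energy-conserving exchange (partial-swap) interaction |01><10| + |10><01|
    sm = np.array([[0, 1], [0, 0]])                        # |0><1|
    H_int = g * (np.kron(sm, sm.T) + np.kron(sm.T, sm))
    U = expm(-1j * H_int * t)

    N_hot = np.kron(np.diag([0.0, 1.0]), np.eye(2))        # excitation of hot spin

    def heat_into_hot(alpha):
        chi = np.zeros((4, 4), complex)
        chi[1, 2], chi[2, 1] = alpha, np.conj(alpha)       # coherence |01><10| + h.c.
        rho = np.kron(rho_hot, rho_cold) + chi             # partial traces of chi vanish
        rho_f = U @ rho @ U.conj().T
        return w * np.real(np.trace(N_hot @ (rho_f - rho)))

    for alpha in (0.0, 0.15j, -0.15j):
        print(f"alpha = {alpha}: energy change of the hot spin = {heat_into_hot(alpha):+.3f}")
    # With alpha = 0 the hot spin loses energy (the ordinary arrow of time); for
    # one of the two coherence phases it gains energy instead, i.e. heat flows
    # from the cold spin to the hot one, as in fig 69(A).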

Quantum Match-making: Transactional Supercausality and Reality

For reasons which immediately become apparent, the collapse in the pair-splitting experiment has not only to be immediate, but also to reconcile information looking backwards in time. The two photons we are trying to detect are linked through the common calcium atom. Their absorptions are thus actually connected via a path travelling back in space-time from one detector to the calcium atom and forward again to the other detector. Trying to connect the detectors directly, for example by hypothetical faster-than-light tachyons, leads to contradictions. Tachyons transform by the rules of special relativity, so a tachyon which appears to be travelling at an infinite speed according to one observer is travelling only at a little more than the speed of light according to another. One travelling in one direction to one observer may be travelling in the opposite direction to another. They can also cause causality violations (King R365). There is thus no consistent way of knitting together all parts of a wave or the detector responses using tachyons. Even in a single-particle wave, the wave function in regions it has already traversed (and those it would subsequently pass through in future) also has to collapse retrospectively (and prospectively) so that no inconsistencies can occur, in which a particle is created in two locations in space-time from the same wave function, as the Wheeler delayed choice experiment makes clear.

Fig 70: In the transactional interpretation, a single photon exchanged between emitter and absorber is formed by constructive interference between a retarded offer wave (solid) and an advanced confirmation wave (dotted). (b) The transactional interpretation of pair-splitting. Confirmation waves intersect at the emission point. (c) Contingent absorbers of an emitter in a single passage of a photon. (d) Collapse of contingent emitters and absorbers in a transactional match-making (King R365). (e) Experiment by Shahriar Afshar (see Chown R114). A grid is placed at the interference minima of the wave fronts coming from two slits just below a lens designed to focus the light from each slit into a separate detector. Measurements by detectors (top) test whether a photon (particle) passed through the left or right slit (bottom). There is no reduction in intensity when the grid is placed below the lens at the interference minima of the offer waves from the two slits. The grid does however cause a loss of detector intensity when the dashed left-hand slit is covered and the negative wave interference between the offer waves at the grid is removed, so that the non-interfered wave from the right slit now hits the grid, causing scattering. This suggests both that we can measure wave and particle aspects simultaneously, and that the transactional interpretation is valid in a way which neither many worlds (which predicts a splitting into histories where a photon from the source goes through one slit or the other) nor the Copenhagen interpretation of complementarity (where detecting a particle forbids the photon manifesting as a wave) can explain.

In the transactional interpretation (Cramer R136), such a 'backward travelling' wave in time gives a neat explanation, not only for the above effect, but also for the probability aspect of the quantum in every quantum experiment. Instead of one photon travelling between the emitter and absorber, there are two shadow waves, which superimposed make up the complete photon. The emitter transmits an offer wave both forwards and backwards in time, declaring its capacity to emit a photon. All the potential absorbers of this photon transmit a corresponding confirmation wave. The confirmation waves travelling backwards in time send a hand-shaking signal back to the emitter. In the extension of the transactional approach to supercausality, a non-linearity now reduces the set of possibilities to one offer and one confirmation wave, which superimpose constructively to form a real photon only on the space-time path connecting the emitter to the absorber, as shown in fig 70. This always connects an emitter at an earlier time to an absorber at a later time, because a real positive energy photon is a retarded particle which travels in the usual direction in time.

A negative energy photon travelling backwards in time is precisely the anti-particle of the positive energy photon and has just the same effect. The two are identifiable in the transactional interpretation, as in quantum electrodynamics (p 304), where time-reversed electron scattering is the same as positron creation and annihilation. The transactional relationship is in effect a match-making process. Before collapse of the wave function we have many potential emitters interacting with many potential absorbers. After all the collapses have taken place, each emitter is paired with an absorber in a kind of marriage dance. One emitter cannot connect with two absorbers without violating the quantum rules, so there is a frustration between the possibilities which can only be fully resolved if emitters and absorbers can be linked in pairs. The number of contingent emitters and absorbers are not necessarily equal, but the number of matched pairs is equal to the number of real particles exchanged.

In the pair-splitting experiment you can now see that the calcium atom emits in response to the advanced confirmation waves reaching it from both the detectors simultaneously right at the time it is emitting the photon pair, as in fig 70(b). Thus the faster-than-light linkage is neatly explained by the combined retarded and advanced aspects of the photon having a net forwards and backwards connection which is instantaneous at the detectors. One can also explain the arrow of time if the cosmic origin is a reflecting boundary that causes all the positive energy real particles in our universe to move in the retarded direction we all experience in the arrow of time. This in turn gives the sign for increasing disorder or entropy and the time direction for the second law of thermodynamics to manifest. The equivalence of real and virtual particles raises the possibility that all particles have an emitter and absorber and arose, like virtual particles, through mutual interaction when the universe first emerged.

Quantum Paradoxes of Time and Causality

Although classical physics is in principle time-symmetric, in the sense that the laws of motion are symmetric under time inversion, stochastic processes define a thermodynamic arrow of time in which low-entropy physical systems tend to a high-entropy equilibrium state under the second law of thermodynamics. Chaotic systems likewise display a time-directedness in the function defining the dynamic, in the sense that the Lyapunov exponent defining the butterfly effect is directed with increasing time, while the time-reversed dynamic is ordered and convergent. Thus we see a glass being smashed, or paint being mixed, but we don't see the reverse processes, such as the glass spontaneously coming back together, because these events have vanishing probability. The classical Laplacian universe is also regulated by a mechanical causality determined by the laws of motion acting in the direction of increasing time. We thus have in-principle time symmetry but no retrocausality, and a causal direction identified with the thermodynamic arrow, although these are not strictly related.

Quantum reality changes these assumptions in fundamental ways. Although the evolution of the wave function under the Hamiltonian is in principle time-reversible in the same way the classical situation is, reduction of the wave packet appears at first glance to be a causality-violating step leading to randomness, in the sense that the outcome can only be predicted in terms of probabilities. On the other hand, special relativity, by virtue of the Lorentz transformations with their dual square roots, implicitly produces both retarded solutions travelling in the usual direction of increasing time and advanced solutions travelling backwards in time. Diverse manifestations of quantum reality, from the Wheeler delayed choice experiment to quantum erasure, entangled histories and entangled time inversion, display features suggesting a deep time symmetry not seen in the classical description. Both the two-state vector description and the transactional interpretation are founded on a time-symmetry that potentially invokes retrospective interaction, in which future absorbing boundary conditions influence a quantum interaction qualitatively and quantitatively as fundamentally as the past emitting boundary conditions.

Moreover, quantum uncertainty is not simply a statistical limitation on what can be observed, because the fundamental force fields are generated through the emission and absorption of virtual particles of every possible kind arising through the space-time window of uncertainty. This brings us to the core of a debate between time-symmetry itself and questions of retrocausality - the future acting causally on the present or past - which could give rise to paradoxical contradictions of time loops and inconsistent quantum histories. It is clear that every quantum, not only entangled particles, has to balance an equation that involves future states in maintaining the consistency that avoids a particle being absorbed at two separate locations in space-time.

We have seen that Bell's theorem rules out locally Einsteinian causality, and that one way the EPR pair-splitting experiments can be resolved is through advanced waves from the absorbing detectors intersecting at the emission vertex. On the other hand, both the Everett many-worlds approach and the Bohm pilot wave theory invoke processes with a time arrow in the retarded direction of increasing time, where we may have time symmetry but are assumed not to have retrocausality. We also have issues between descriptions of quantum processes based on their reality as physical processes and Copenhagen perspectives where no actual reality is associated with a quantum state except as a state of partial knowledge of a system on the part of the experimenter. The status of the wave function by comparison with the particle also remains debated, with some approaches regarding the particles as real but the wave function only as a calculating device for particle positions and momenta with no physical reality in itself. This in a sense goes against the concept of wave-particle complementarity at the foundation of the uncertainty principle.

Contrasting again with these world views, researchers in the foundations of quantum reality (Price 2012, Leifer & Pusey 2017) attempt to unravel the ontological relationships between time symmetry and retrocausality. Price suggests a time-symmetric ontology for quantum theory must necessarily be retrocausal. More precisely, Realism + Time Symmetry + Discreteness ⇒ Retrocausality, where discreteness means for example that if a particle is detected on one channel it can't be in the other. Leifer & Pusey expand on Price's argument without having to assume the quantum state is a state of reality, replacing it with the notion of λ-mediation, limiting the causal interactions to those involved in the apparatus itself and the free choices of the experimenters. They show that an ontological model satisfying No Retrocausality and λ-mediation in an experiment satisfying Time Symmetry must obey a temporal analogue of Bell's local causality condition, and hence establish the impossibility of a non-retrocausal time-symmetric ontology.

The difficulty in all these diverse accounts is that we simply don't have a good model for the underlying processes 'governing' entanglement, reduction of the wave function, or how hidden variables might interact in space-time. We can see that the situation invoked in entanglement experiments is one where we have a manifest correlation, in which knowing the polarization of one of an entangled pair of photons immediately tells us the other has complementary polarization, but we can't use this information to perform any form of causal signalling to transmit classical information.

We thus appear to be dealing with a quantum universe in which future absorbing boundary conditions are as formative of all interactions as past emitting boundary conditions are, and in which the interactive process is fundamentally time-symmetric, but one in which classical ideas of causality, either conventional or retro-, taken together result in contradictions.

Neither does the thermodynamic arrow of time provide a clear symmetry-breaker, although the cosmic origin as a reflecting boundary condition resulting in retarded particles, as in the transactional interpretation, might. Even if dark energy causes an accelerating expansion, or fractal inflation leads to an open universe model in which some photons may never find an absorber, the excitations of brain oscillations, because they are both emitted and absorbed by past and future brain states, could still be universally subject to transactional supercausal coupling (King 2008, 2014). Thus consciousness itself may have a central role in the process of collapsing the wave function and in the anticipatory role consciousness appears to play as critical to organismic survival, even if not based on a directly causal principle.

The hand-shaking space-time relation implied by transactions makes it possible that the apparent randomness of quantum events masks a vast interconnectivity at the quantum level, which has been termed the 'implicate order' by David Bohm (R70). This might not itself be a random process, but because it connects past and future events in a time-symmetric way, it cannot be reduced to predictive determinism, because the initial conditions are insufficient to describe the transaction, which also includes quantum 'information' coming from the future. However this future is also unformed in real terms at the early point in time emission takes place. My eye didn't even exist, when the quasar emitted its photon, except as a profoundly unlikely branch of the combined probability 'waves' of all the events throughout the history of the universe between the ancient time the quasar released its photon and my eye developing and me being in the right place at the right time to see it. Transactional supercausality thus involves a huge catch 22 about space, time and prediction, uncertainty and destiny. It doesn't suggest the future is determined, but that the contingent futures do superimpose to create a space-time paradox in collapsing the wave function.

Roger Penrose (R535, R536), has suggested that the one-graviton limit of interaction is an objective trigger for wave packet reduction, because of the bifurcation in space-times induced, leading to theories in which the random or pseudo-random manifestations of the particle within the wave are non-linear consequences of gravity. Objective orchestrated reduction or OOR is then cited as a basis which intentional consciousness uses to follow collapse rather than participating in it, as the transactional model makes possible. The OOR model unlike transactional anticipation thus leaves free-will with a kind of orphan status, following, but not participating in, the collapse process itself.

By reducing the energy of a transaction to a superposition of ground and excited states, the transactional approach may combine with quantum computation to produce a space-time anticipating quantum entangled system which may be pivotal in how the conscious brain does its anticipation. The brain is not a marvelous computer in any classical sense. We can barely repeat seven digits. But it is a phenomenally sensitive anticipator of environmental and behavioral change. Subjective consciousness has its survival value in enabling us to jump out of the way when the tiger is about to strike, not so much in computing which path the tiger might be on, because this is an intractable problem and the tiger can also take it into account in avoiding the places we would expect it to most likely be, but by intuitive conscious anticipation. What is critical here is that in the usual quantum description which considers only the emitter, we have only the probability function because the initial conditions are insufficient to determine the outcome. There is thus no useful way quantum uncertainty can be linked to conscious free-will. Only by including the advanced absorber waves can we see how such anticipation might be achieved.

The Sexually-Complex Quantum World

We have seen that all phenomena in the quantum universe present as a succession of fundamental complementarities in a shifting vacuum ground-swell of uncertainty, out of which the super-abundance of quantum diversity emerges. In this process we have discovered a multiple overlapping series of divisions: (i) wave-particle complementarity fundamental to the quantum, (ii) the roles of emitters and absorbers, (iii) the advanced and retarded solutions of special relativity, (iv) the fermions comprising matter complementing the bosons mediating radiation, (v) virtual and real particles distinguishing force fields from positive energy matter and radiation, and the engendered symmetry-breakings between (vi) space and time (reflecting that between momentum and energy) and (vii) between the four fundamental forces of nature, which in turn cause the quantum architecture of atoms and molecules to be asymmetric and capable of complexity of interaction to form living systems (p 317) and finally (viii) duality, which makes it difficult or impossible to determine what is a fundamental particle and what is composite in a sexual paradox between dual descriptions. Sexual paradox may also be manifest in the difficulty of separating the forces from the seething quantum 'ground' of vacuum uncertainty, which is generative of all types of quantum. To understand conscious anticipation, or free-will, may require the inclusion of advanced waves, forming a paradoxical complement to the positive energy arrow of time.

All these complementarities possess attributes of sexual paradox and are pivotal to generating the complexity and diversity of the universe as we know it. There is no way to validly mount a single description based on only one of these complementary aspects alone. All attempts to define a theory based only on one aspect implicitly involve the other as a fundamental component, just as the propagators of the particles in quantum field theory are based on wave-spreading. Classical mechanistic notions of a whole made out of clearly defined parts, as well as temporal determinism, fail. The mathematical idea of a reality made out of sets of points or point particles becomes replaced by the excitations of strings, again with wave-based harmonic energies. Just as we have an irreducible complementarity between subjective experience and the objective world, so all the features of the quantum universe present in sexually paradoxical complementarities. It is thus hardly surprising that these fundamental and irreducible complementarities may come to be expressed as fundamental themes in biological complexity, thus making sexuality a cumulative expression of a sexual paradox which lies at the foundation of the cosmos itself.

Although both the Taoist and Tantric views of cosmology are based on a complementation between female and male generative principles, many people, including a good proportion of scientists still adhere to a mechanistic view of the universe as a Newtonian machine. In this view biological sexuality seems to be barred from having any fundamental cosmological basis, being an end product of an idiosyncratic process of chance and selection, in a biological evolution which has no apparent relation with or capacity to influence the vast energies and forces which shape the cosmological process. The origins of life remain mysterious and potentially accidental rather than cosmological in nature and evolution an erratic series of accidents preserved by natural selection.

However if we reverse this logic and begin with a sexually paradoxical cosmology, the phenomenon of biological sexuality then becomes a natural cumulative expression of physical sexual paradox operating in a new evolutionary paradigm in the biological world, rich with new feedback processes which give it the central role in genetics and organismic reproduction we regard as the signature and raison d'etre of reproductive sexuality.

Appendix: Complementary Views of Quantum Mechanics and Field Theory

Fig 71: Werner Heisenberg (R760).

Heisenberg was the first person to define the concept of quantum uncertainty, or indeterminacy, as the term also means in German.

Heisenberg's research concentrated on momentum and angular momentum. It is well known that both rotations in 3-D space and matrices in general do not commute, because matrix multiplication multiplies the rows of the first matrix by the columns of the second. For example, with A = [[1,1],[0,1]] and B = [[1,0],[1,1]],

AB = [[2,1],[1,1]], but BA = [[1,1],[1,2]].

Hence AB - BA ≠ 0. More generally, if C = AB, then c_{ij} = Σ_k a_{ik} b_{kj}. In quantum mechanical notation, a state can be expanded over a complete orthonormal basis, |ψ⟩ = Σ_n c_n |n⟩, so that ⟨ψ|ψ⟩ = Σ_n |c_n|² = 1, all states leading to completeness with unit probability.
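As a concrete check (the spin-1 angular-momentum matrices, chosen here for illustration rather than taken from Heisenberg's papers), the commutator of two non-commuting observables is itself an observable:

    import numpy as np

    # Spin-1 angular momentum matrices in units of hbar: [Jx, Jy] = i Jz,
    # the non-commuting algebra behind the uncertainty principle.
    s = 1 / np.sqrt(2)
    Jx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
    Jy = np.array([[0, -1j*s, 0], [1j*s, 0, -1j*s], [0, 1j*s, 0]])
    Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)

    comm = Jx @ Jy - Jy @ Jx
    print(np.allclose(comm, 1j * Jz))        # True: AB - BA is nonzero and equals i*Jz
    print(np.allclose(Jx @ Jy, Jy @ Jx))     # False: the products depend on order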

Fig 72: Erwin Schrodinger (R760)

Schrodinger's wave equation and Heisenberg's matrix mechanics highlight a deeper complementarity in mathematics between the discrete operations of algebra and the continuous properties of calculus. When Heisenberg was trying to solve his matrix equations, the mathematician David Hilbert suggested he look at the differential equations instead. But it fell to Schrodinger, who took his mistress up into the Alps, to discover his wave equation on a romantic tryst. It was only when Hilbert and others examined the two theories closely that it was discovered they were identical, but complementary, descriptions.

Schrodinger derived his time-independent wave equation as follows. The Hamiltonian dynamical operator representing the total kinetic and potential energy H = K + V of the system determines how the wave varies with time and space:

iħ ∂Ψ/∂t = HΨ, where H = -(ħ²/2m)∇² + V.

This is a non-relativistic equation expressed in terms of the first time derivative. If we now assume the wave function consists of separate space and time terms, Ψ(x,t) = ψ(x) e^{-iEt/ħ}, and seek time independence of the wave function at constant energy E, we get

Hψ = Eψ, or -(ħ²/2m)∇²ψ + Vψ = Eψ.

Interpreted in terms of matrix mechanics, the Schrodinger wave equation becomes a sum over basis vectors representing each of the wave states. The algebraic version of the equation, H|ψ⟩ = E|ψ⟩ with |ψ⟩ = Σ_n c_n |n⟩, becomes the matrix equation Σ_n H_{mn} c_n = E c_m. Solving in terms of a transformation to a new basis, we seek a change of state U with |ψ'⟩ = U|ψ⟩ such that H' = U H U⁻¹ is diagonal; the diagonal entries are then the allowed energies E_n and the transformed basis states the corresponding stationary states, since H'|n⟩ = E_n |n⟩. This is the famous eigenvalue (own-value) problem, whose stable standing wave solutions are the s, p, d and f orbitals of an atom.
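The eigenvalue problem can be put on a computer in a few lines (a 1-D harmonic well is used here because its spectrum is known exactly; the atomic s, p, d and f orbitals arise from the same construction applied to the 3-D Coulomb potential):

    import numpy as np

    # Finite-difference version of H psi = E psi for a 1-D harmonic well.
    hbar = m = w = 1.0
    N, L = 1500, 20.0
    x = np.linspace(-L/2, L/2, N)
    dx = x[1] - x[0]

    # H = -(hbar^2/2m) d^2/dx^2 + V(x), second derivative by central differences
    V = 0.5 * m * w**2 * x**2
    main = hbar**2 / (m * dx**2) + V
    off = -hbar**2 / (2 * m * dx**2) * np.ones(N - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    E = np.linalg.eigvalsh(H)[:4]
    print(np.round(E, 3))       # ~ [0.5, 1.5, 2.5, 3.5] = (n + 1/2) hbar w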

Heisenberg's problem of uncertainty, expressed in non-commuting operators such as position x and momentum p, gives us back the uncertainty relation when we reinterpret momentum acting on the wave function as the differential operator p = -iħ ∂/∂x. We then have

(xp - px)ψ = -iħ x ∂ψ/∂x + iħ ∂(xψ)/∂x = iħ ψ.

Hence [x, p] = xp - px = iħ, another view of the uncertainty relation Δx Δp ≥ ħ/2.
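The canonical commutator can also be checked symbolically (a short sympy verification added for illustration, not part of the original derivation):

    import sympy as sp

    # With p represented as -i*hbar*d/dx, (xp - px) applied to any test
    # function f(x) returns i*hbar*f(x).
    x, hbar = sp.symbols('x hbar')
    f = sp.Function('f')(x)

    p = lambda g: -sp.I * hbar * sp.diff(g, x)
    commutator = x * p(f) - p(x * f)
    print(sp.simplify(commutator))          # I*hbar*f(x)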

In Schrodinger's view the wave function varies with time according to a fixed operator, but in the Heisenberg view the wave function is a fixed vector in Hilbert space and the Hermitian operators evolve in time.

Fig 73: Paul Dirac

Dirac extended Schrodinger's equation to make it relativistic, at the same time ushering in the existence of the positron and anti-matter generally as solutions coming out of the equation. His equation is

iħ ∂ψ(x,t)/∂t = ( c Σ_{k=1..3} α_k p_k + β m c² ) ψ(x,t),

where ψ(x, t) is the wave function for the electron of rest mass m with spacetime coordinates x, t, the p1, p2, p3 are the components of the momentum, c is the speed of light, and ħ is Planck's constant divided by 2π. The new elements in this equation are the 4x4 matrices αk and β and the four-component wave function. The four components are interpreted as a superposition of a spin-up electron, a spin-down electron, a spin-up positron, and a spin-down positron.
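A short check (the standard Dirac representation, given as an illustration) that the αk and β matrices satisfy the anticommutation relations which let the equation square to the relativistic relation E² = p²c² + m²c⁴:

    import numpy as np

    I2 = np.eye(2)
    sig = [np.array([[0, 1], [1, 0]], complex),
           np.array([[0, -1j], [1j, 0]]),
           np.array([[1, 0], [0, -1]], complex)]

    # alpha_k = [[0, sigma_k], [sigma_k, 0]],  beta = diag(I, -I)
    alpha = [np.block([[np.zeros((2, 2)), s], [s, np.zeros((2, 2))]]) for s in sig]
    beta = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]])

    def anti(A, B):
        return A @ B + B @ A

    ok = all(np.allclose(anti(alpha[i], alpha[j]), 2*np.eye(4)*(i == j))
             for i in range(3) for j in range(3))
    ok &= all(np.allclose(anti(a, beta), np.zeros((4, 4))) for a in alpha)
    ok &= np.allclose(beta @ beta, np.eye(4))
    print("Dirac algebra satisfied:", bool(ok))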


Fig 74: Feynman diagram for first order photon exchange in electron-electron repulsion. Richard Feynman with his own diagram (R760).

The underlying wave-particle complementarity in Feynman's approach to quantum field theory, despite its apparent explanation of the electromagnetic field in terms of particle interaction, is succinctly demonstrated in the first-order diagram for electron-electron scattering (electromagnetic charge repulsion) through exchange of a virtual photon provided by uncertainty. The propagator for the diagram is built from the γμ variants of the Pauli spin matrices, a Dirac δ-function representing the discrete interaction of the virtual photon over the space-time interval between the two electrons, and the propagators K for electrons a and b, which are carried by Huygens' wave-front principle according to the wave summations

K(2,1) = Σ_n φ_n(x2) φ_n*(x1) e^{-iE_n(t2-t1)/ħ} over positive-energy states for t2 > t1,

representing positive energy 'retarded' solutions travelling in the usual direction in time, and

K(2,1) = -Σ_n φ_n(x2) φ_n*(x1) e^{-iE_n(t2-t1)/ħ} over negative-energy states for t2 < t1

for the corresponding negative energy solutions in the reversed 'advanced' time direction, where the E_n and φ_n are the energy eigenvalues and eigenfunctions of the wave equation.

This both explains how the relativistic solution gives rise to both time backward negative energy solutions and time forward positive energy ones, which make particle-anti-particle creation and annihilation events critical to the sequence of Feynman diagrams possible, and also shows clearly in the complex exponentials the sinusoidal wave transmission hidden in the particle diagrams of the quantum field approach.

References

  1. Abellán et al (2018) Challenging local realism with human choices. Nature 557 212-216 doi:10.1038/s41586-018-0085-3.
  2. Adler S (2014) SU(8) family unification with boson-fermion balance ArXiv: 1403.2099.
  3. Afshordi N et al. (2017) From Planck Data to Planck Era: Observational Tests of Holographic Cosmology. Physical Review Letters 118/4 doi:10.1103/PhysRevLett.118.041301.
  4. Aharonov Y., Albert D.Z., Vaidman L. (1988) How the Result of a Measurement of a Component of the Spin of a Spin-1/2 Particle Can Turn Out to be 100 Physical Review Letters 60, 1351.
  5. Aharonov Y, Colombo F, Popescu S, Sabadini I, Struppa D, Tollaksen J (2014) The quantum pigeonhole principle and the nature of quantum correlations http://arxiv.org/pdf/1407.3194v1.
  6. Aharonov Y, Bergmann P, Lebowitz J. (1964) Time symmetry in the quantum process of measurement Phys. Rev. B 134 1410-1416 doi:10.1103/PhysRev.134.B1410.
  7. Aharonov Y, Vaidman L (2014) The Two-State Vector Formalism: An Updated Review www.tau.ac.il/~yakir/yahp/yh165.pdf.
  8. Aharonov Y et al. (2016) Quantum violation of the pigeonhole principle and the nature of quantum correlations. PNAS 113 532–535.
  9. Albert D, Galchen R (2009) A Quantum Threat Sci. Am. Mar 32-9
  10. Ananthaswamy A (2015) Entangled universe: Could wormholes hold the cosmos together? New Scientist 4 Nov.
  11. Ananthaswamy A (2018) New Quantum Paradox Clarifies Where Our Views of Reality Go Wrong Quanta www.quantamagazine.org/frauchiger-renner-paradox-clarifies-where-our-views-of-reality-go-wrong-20181203/.
  12. Andersen A et al. (2015) Double-slit experiment with single wave-driven particles and its relation to quantum mechanics Physical Review doi:10.1103/PhysRevE.92.013006.
  13. Aron J (2016) Dark matter no-show puts favoured particles on death row New Scientist 21 July. www.newscientist.com/article/2098217-dark-matter-no-show-puts-favoured-particles-on-death-row.
  14. Arute F et al. (2019) Quantum supremacy using a programmable superconducting processor Nature 574 505 doi:10.1038/s41586-019-1666-5.
  15. Aspect A., Dalibard J., Roger G., (1982) Experimental tests of Bell’s theorem using time-varying analysers Phys. Rev. Lett. 49, 1804.
  16. Autti S et al. (2020) AC Josephson effect between two superfluid time crystals Nature Materials doi:10.1038/s41563-020-0780-y.
  17. Ball P (2017) A world without cause and effect Nature 546 590–2 doi:10.1038/546590a.
  18. Ball P (2018) Is photosynthesis quantum-ish? Physics World, 31/4 https://iopscience.iop.org/article/10.1088/2058-7058/31/4/39/pdf.
  19. Ballesteros G et al. (2016) Unifying inflation with the axion, dark matter, baryogenesis and the seesaw mechanism arXiv:1608.05414.
  20. Barends R et al. (2013) Coherent Josephson qubit suitable for scalable quantum integrated circuits arXiv:1304.2322.
  21. Barends R et al. (2016) Digitized adiabatic quantum computing with a superconducting circuit doi:10.1038/nature17658.
  22. Bashkanov M & Watts D (2020) A new possibility for light-quark dark matter J. Phys. G Nucl. Part. Phys. 47 doi:10.1088/1361-6471/ab67e8.
  23. Bell J.S. (1966) On the Problem of Hidden Variables in Quantum Mechanics Rev. Mod. Phys. 38/3, 447.
  24. Benea-Chelmus, I.-C., Settembrini, F. F., Scalari, G. & Faist, J. (2019) Electric field correlation measurements on the electromagnetic vacuum state Nature 568 202-206 doi:10.1038/s41586-019-1083-9.
  25. Bierhorst et al. (2018) Experimentally generated randomness certified by the impossibility of superluminal signals Nature doi:10.1038/s41586-018-0019-0.
  26. Bohm D. (1952) A suggested interpretation of the quantum theory in terms of 'hidden' variables I & II Phys. Rev. 85 166-93
  27. Bong et al. (2020) A strong no-go theorem on the Wigner's friend paradox Nature Phys. https://www.nature.com/articles/s41567-020-0990-x.
  28. Bouchard F et al. (2015) Observation of quantum recoherence of photons by spatial propagation Scientific RepoRts doi:10.1038/srep15330.
  29. Bradley et al. (2008) Relic topological defects from brane annihilation simulated in superfluid 3He. Nature Physics doi:10.1038/nphys815.
  30. Bravyi S et al. (2008) The Complexity of Stoquastic Local Hamiltonian Problems arXiv:quant-ph/0606140.
  31. Brooks M (2014) Quantum control: How weird do you want it? New Scientist 11 Sep.
  32. Castelvecchi D. (2015) Quantum technology probes ultimate limits of vision Nature doi:10.1038/nature.2015.17731.
  33. Chaudhury S, Smith A, Anderson B, Ghose S, Jessen P (2009) Quantum signatures of chaos in a kicked top Nature 461 768-771.
  34. Chen M et al. (2019) Experimental demonstration of quantum pigeonhole paradox PNAS 116/5 1549 doi:10.1073/pnas.1815462116.
  35. Chiao R, Kwiat P, Steinberg A (1993) Faster than Light? Scientific American Aug.
  36. Cho A. (2011) Furtive Approach Rolls Back the Limits of Quantum Uncertainty Science 333 690-3.
  37. Cotler et al. (2016) Experimental test of entangled histories ArXiv:1601.02943
  38. Cowen R (2015) The Quantum Source of Space-Time Nature 527 290-293.
  39. de Rham C, Gabadadze G, Tolley A (2011) Resummation of Massive Gravity Phys. Rev. Letts. doi:10.1103/PhysRevLett.106.231101.
  40. de Rham C (2014) Massive Gravity Living Rev. Relativity 17 7 doi:10.12942/lrr-2014-7.
  41. de Rham C et al. (2017) Graviton mass bounds Rev. Mod. Phys. doi:10.1103/RevModPhys.89.025004.
  42. Diaz A et al. (2019) Where Are We With Light Sterile Neutrinos? arXiv 1906.00045.
  43. Loeve K, Nielsen K & Hansen S (2021) Consistency analysis of a Dark Matter velocity dependent force as an alternative to the Cosmological Constant arXiv 2102.07792v1.
  44. Distler J, Garibaldi S (2009) There is no "Theory of Everything" inside E8 ArXiv:0905.2658.
  45. Dixon, P. B., Starling, D. J., Jordan, A. N., & Howell, J. C. (2009) Ultrasensitive beam deflection measurement via interferometric weak value amplification. Physical review letters, 102(17), 173601.
  46. Englert B, Scully M, Sussmann G, Walther H (1992) Surrealistic Böhm Trajectories DOI: 10.1515/zna-1992-120.
  47. Fein et al. (2019) Quantum superposition of molecules beyond 25 kDa Nature Physics doi:10.1038/s41567-019-0663-9.
  48. Fornal B, Grinstein B (2017) SU(5) Unification without Proton Decay arXiv:1706.08535.
  49. Fornal B, Grinstein B (2017) Dark Matter Interpretation of the Neutron Decay Anomaly arXiv 1801.01124.
  50. Frauchiger D, Renner R (2018) Quantum theory cannot consistently describe the use of itself doi:10.1038/s41467-018-05739-8.
  51. Furey C (2014) Generations: three prints, in colour JHep 10 046 arXiv:1405.4601.
  52. Furey C (2015) Phys. Lett. B 742 195-9 Charge quantization from a number operator arXiv:1603.04078.
  53. Furey C (2015) Standard model physics from an algebra? arXiv:1611.09182.
  54. Furey C (2018) A demonstration that electroweak theory could violate parity automatically (leptonic case) Int.J.Mod.Phys.A, 33/04.
  55. Furey C (2018) SU(3)C x SU(2)L x U(1)Y ( x U(1)X ) as a symmetry of division algebraic ladder operators, Eur. Phys. J. C 78 5 375.
  56. Ghirardi G, Rimini A, Weber W (1986) Unified dynamics for microscopic and macroscopic systems Phys. Rev. D 34/2 470-491.
  57. Ghirardi G, Pearle P, Rimini A (1990) Markov processes in Hilbert space and continuous spontaneous localization of systems of identical particles Phys. Rev. A 42/1 78-89.
  58. Gibney E (2017) Dark-matter hunt fails to find the elusive particles Nature 551 153–154 doi:10.1038/551153a.
  59. Goudelis A et al. (2016) Light Particle Solution to the Cosmic Lithium Problem Physical Review Letters 116 211303.
  60. Gu, M., Chrzanowski, H., Assad, S., Symul, T., Modi, K., Ralph, T., Vedral, V. and Lam, P. (2012) Observing the operational significance of discord consumption Nature Physics doi: 10.1038/NPHYS2376.
  61. Handsteiner J et al. (2017) Cosmic Bell Test: Measurement Settings from Milky Way Stars Phys. Rev. Lett. 118 060401 doi:10.1103/PhysRevLett.118.060401.
  62. Harlow D & Ooguri H (2019) Constraints on Symmetries from Holography. Phys. Rev. Lett 122(19) 191601 doi:10.1103/PhysRevLett.122.19160.
  63. Hensen B et al. (2015) Experimental loophole-free violation of a Bell inequality using entangled electron spins separated by 1.3 km arXiv: 1508.05949.
  64. Hildner, R., Brinks, D., Nieder, J. B., Cogdell, R. J., & van Hulst, N. F. (2013). Quantum Coherent Energy Transfer over Varying Pathways in Single Light-Harvesting Complexes. Science, 340(6139), 1448-1451. doi: 10.1126/science.1235820.
  65. Hosten O., Kwiat P. (2008) Observation of the Spin Hall Effect of Light via Weak Measurements Science 319 787-790.
  66. Hosten O. (2011) How to catch a wave Nature 474 170-1
  67. Kastner R, Kauffman S, Epperson M (2017) Taking Heisenberg's Potentia Seriously arXiv:170903595.
  68. Kastrup B, Stapp H, Kafatos M (2018) Coming to Grips with the Implications of Quantum Mechanics https://blogs.scientificamerican.com/observations/coming-to-grips-with-the-implications-of-quantum-mechanics/.
  69. King C. C. (2008) The Central Enigma of Consciousness Nature Precedings 5 November 2008 Journal of Consciousness Exploration and Research 2(1) 2011
  70. King C. C. (2014) Space, Time and Consciousness Cosmology.
  71. Kocsis, S., Braverman, B., Ravets, S., Stevens, M. J., Mirin, R. P., Shalm, L. K., & Steinberg, A. M. (2011). Observing the average trajectories of single photons in a two-slit interferometer. Science, 332(6034), 1170-1173.
  72. Lees J. et al. (2012) Observation of Time-Reversal Violation in the B0 Meson System Physical Review Letters 109 211801.
  73. Leggett A and Garg A (1985) Quantum Mechanics versus macroscopic realism: is the flux there when nobody looks? Phys. Rev. Lett. 54, 857.
  74. Leifer M, Pusey M (2017) Is a time symmetric interpretation of quantum theory possible without retrocausality? Proc. R. Soc. A 473 20160607 doi:10.1098/rspa.2016.0607.
  75. G. B. Lesovik, I. A. Sadovskyy, M. V. Suslov, A. V. Lebedev, V. M. Vinokur (2019) Arrow of time and its reversal on the IBM quantum computer Scientific Reports, 2019; 9/1 doi:10.1038/s41598-019-40765-6.
  76. Lewton T (2021) Is the Great Neutrino Puzzle Pointing to Multiple Missing Particles? https://www.quantamagazine.org/neutrino-puzzles-point-to-the-possibility-of-multiple-missing-particles-20211028/.
  77. Lin J, Marcolli M, Ooguri H, Stoica B (2015) Locality of Gravitational Systems from Entanglement of Conformal Field Theories Phys. Rev. Lett. 114, 221601.
  78. Lisi Garrett (2007) An Exceptionally Simple Theory of Everything arXiv:0711.0770
  79. Lisi Garrett (2011) Garrett Lisi Responds to Criticism of his Proposed Unified Theory of Physics http://blogs.scientificamerican.com/observations/garrett-lisi-responds-to-criticism-of-his-proposed-unified-theory-of-physics/
  80. Lundeen J. S., Steinberg A. M. (2009) Experimental Joint Weak Measurement on a Photon Pair as a Probe of Hardy's Paradox Physical review letters 102 020404.
  81. Lundeen J.S., Sutherland B., Patel A., Stewart C, Bamber C. (2011) Direct measurement of the quantum wavefunction Nature 474 188-191.
  82. Mahler D et al. (2016) Experimental nonlocal and surreal Bohmian trajectories Sci. Adv. 2016;2:e1501466 doi: 10.1126/science.1501466
  83. Maldacena J (1998) The Large N Limit of Superconformal Field Theories and Supergravity Adv. Theor. Math. Phys 2: 231-252. arXiv:hep-th/9711200.
  84. Maldacena J, Susskind L (2013) Cool horizons for entangled black holes Fortschr. Phys. 61/9, 781-811 doi:10.1002/prop.201300020.
  85. Martinez E et al. (2016) Real-time dynamics of lattice gauge theories with a few-qubit quantum computer Nature 534, 516-519 doi:10.1038/nature18318.
  86. McAlpine K. (2010) Nature's hot green quantum computers revealed New Scientist 3 Feb.
  87. Meissner K & Nicolai H (2018) Standard Model Fermions and Infinite-Dimensional R Symmetries doi:10.1103/PhysRevLett.121.091601.
  88. Merali Z. (2010) Back From the Future Discover Magazine August 26.
  89. Merali Z (2011) Collaborative physics: String theory finds a bench mate Nature 478 302-4 doi:10.1038/478302a.
  90. Merali Z (2011b) Quantum computing: The power of discord Nature 474, 24-26 doi:10.1038/474024a.
  91. Merali Z (2013) Fat gravity particle gives clues to dark energy Nature doi:10.1038/nature.2013.13707.
  92. Micadei K et al. (2017) Reversing the thermodynamic arrow of time using quantum correlations. arXiv:1711.03323
  93. Miller H, Anders J (2018) Energy-temperature uncertainty relation in quantum thermodynamics doi:10.1038/s41467-018-04536-7.
  94. Moreva E, Brida G, Gramegna M, Giovannetti V, Maccone L, Genovese M (2014) Time from quantum entanglement: an experimental illustration ArXiv http://arxiv.org/pdf/1310.4691v1.
  95. Moulai M (2021) Light, Unstable Sterile Neutrinos: Phenomenology, a Search in the IceCube Experiment, and a Global Picture arXiv 2110.02351.
  96. Müller H (2020) Standard model of particle physics tested by the fine-structure constant Nature 588, 37-38 doi:10.1038/d41586-020-03314-0.
  97. Nguyen et al. (2019)  Nonlinear Dynamics of Preheating after Multifield Inflation with Nonminimal Couplings. Physical Review Letters 123 (17) doi:10.1103/PhysRevLett.123.171301.
  98. Ollivier H. & Zurek W (2002) Quantum Discord: A Measure of the Quantumness of Correlations Phys. Rev. Lett. 88/1 017901.
  99. Pokorny F et al. (2019) Tracking the dynamics of an ideal quantum measurement arXiv:1903.10398.
  100. Price H. (2012) Does time-symmetry imply retrocausality? How the quantum world says 'maybe'? Stud. Hist. Phil. Mod. Phys. 43 75–83 doi:10.1016/j.shpsb.2011.12.003.
  101. Price H, Wharton K (2015) Disentangling the Quantum World Entropy 17 7752-67.
  102. Rauch D et al. (2018) Cosmic Bell Test Using Random Measurement Settings from High-Redshift Quasars Physical Review Letters doi:10.1103/PhysRevLett.121.080403.
  103. Redi M (2016) Is Particle Physics About to Crack Wide Open? 13 Jun See also: en.m.wikipedia.org/wiki/750_GeV_diphoton_excess.
  104. Rubino G et al. (2016) Experimental Verification of an Indefinite Causal Order arXiv:1608.01683.
  105. Shaposhnikov M, Tkachev I (2006) Phys. Lett. B639, 414 arXiv:hep-ph/0604236 [hep-ph].
  106. Sokol J (2016) Why is the universe expanding 9 per cent faster than we thought? New Scientist 3 Jun.
  107. Steck D. (2009) Passage through chaos Nature 461 736-7.
  108. Sutherland R (2017) Lagrangian Description for Particle Interpretations of Quantum Mechanics - Entangled Many-Particle Case Foundations of Physics 47 174-207.
  109. Thyrhaug E et al. (2018) Identification and characterization of diverse coherences in the Fenna-Matthews-Olson complex Nature Chemistry doi:10.1038/s41557-018-0060-5.
  110. Tumulka R (2006) Collapse and Relativity arXiv:quant-ph/0602208.
  111. Unden T et al. (2019) Revealing the emergence of classicality in nitrogen-vacancy centers arXiv:1809.10456.
  112. Vaccaro JA. (2016) Quantum asymmetry between time and space. Proc. R. Soc. A 472 20150670 doi:10.1098/rspa.2015.067.
  113. Valentini A (2001) Hidden Variables, Statistical Mechanics and the Early Universe arXiv quant-ph/0104067.
  114. Valentini A (2002) Subquantum Information and Computation arXiv quant-ph/0203049.
  115. Valentini A, Westman H (2005) Dynamical origin of quantum probabilities, Proceedings of the Royal Society A 461/2053 253-272, doi:10.1098/rspa.2004.1394.0.
  116. Veldhorst M et al. (2015) A two-qubit logic gate in silicon Nature 526 410–414 doi:10.1038/nature1526.
  117. Vedovato et al. (2017) Extending Wheeler's delayed-choice experiment to space Sci. Adv. 3 e1701180 doi:10.1126/sciadv.1701180.
  118. Vergani S et al. (2021) Explaining the MiniBooNE Excess Through a Mixed Model of Oscillation and Decay arXiv 2105.06470v5.
  119. Villata M (2011) CPT symmetry and antimatter gravity in general relativity. Europhysics Letters 94 20001 doi:10.1209/0295-5075/94/20001.
  120. Wang et al. (2016) A Schrodinger cat living in two boxes doi:10.1126/science.aaf2941.
  121. Zeller M (2012) Particle Decays Point to an Arrow of Time Physics 5, 129 doi:10.1103/Physics.5.129.
  122. Zurek W (1982) Environment-induced superselection rules Phys. Rev. D 26, 1862.
  123. Zurek W. (1991) Decoherence and the Transition from Quantum to Classical Physics Today Oct
  124. Zurek W (2003) Decoherence and the transition from quantum to classical - Revisited arXiv:quant-ph/0306072.
  125. Zurek W (2009) Quantum Darwinism Nature Physics 5 181-188.