Quantum Reality and Cosmology
Complementarity and Spooky Paradoxes
Genotype 1.0.69 Mar 20  PDF
Contents
Quantum Theory and Relativity
Quantum Reality
Introduction
This article is designed to give an overview of the developments in quantum reality and cosmology, from the theory of everything to the spooky properties of quantum reality that may lie at the root of the conscious mind. Along the way, it takes a look at just about every kind of weird quantum effect so far discovered, while managing a description which the general reader can follow without a great deal of prior knowledge of the area.
The universe appears to have had an explosive beginning, sometimes called the big bang, in which space and time as well as the material leading to the galaxies were created. The evidence is pervasive, from the increasing redshift of recession of the galaxy clusters, like the deepening sound of a train horn as the train recedes, to the existence of cosmic background radiation, the phenomenally stretched and cooled remnants of the original fireball. The cosmic background shows irregularities of the early universe at the time radiation separated from matter, when the first atoms formed from the flux of charged particles. From a very regular, symmetrical 'isotropic' beginning for such an explosion, these fluctuations, which may be of a quantum nature, have become phenomenally expanded and smoothed to the scale of galaxies, consistent with a theory called inflation. The large-scale structure of the universe in our vicinity, out to a billion light years surrounding the Milky Way, our supercluster Laniakea, and even larger structures, including the Shapley Attractor and Dipole Repeller, shaped by variations in dark matter, as in the Millennium simulation, are shown in Fig 1.
Fig 2: (a) The cosmic background, a redshifted primal fireball (WMAP). This radiation separated from matter as charged plasma condensed to atoms. The fluctuations are smoothed in a manner consistent with subsequent inflation. (b) Eternal inflation and big bounce models. The fractal inflation model leaves behind mature universes while inflation continues endlessly. A big crunch leads to a new big(ger) bang. (c) Darwin in Eden: "Paradise on the cosmic equator." Life is an interactive complexity catastrophe consummating in intelligent organisms, resulting ultimately from force differentiation. This summative Σ interactive state is thus cosmological, and as significant as the α of its origin and the Ω of the big crunch, or heat death in endless expansion.
Origin of Time and Space: In special relativity, the spacetime interval (3) can be expressed either (left) in terms of real time in Minkowski space, in which the interval is independent of the inertial frame of reference under the Lorentz transformations of special relativity, or equivalently (right) in terms of imaginary time in ordinary Euclidean 4D space. These two are generalized in higher spatial dimensions into anti-de Sitter and de Sitter space respectively (see fig 3(b)). Hartle and Hawking suggest that if we could travel backward in time toward the beginning of the Universe, we would note that quite near what might have otherwise been the beginning, time gives way to space, such that at first there is only space and no time. Beginnings are entities that have to do with time; because time did not exist before the Big Bang, the concept of a beginning of the Universe is meaningless. According to the Hartle-Hawking proposal, the Universe has no origin as we would understand it: the Universe was a singularity in both space and time, pre-Big Bang. Thus, the Hartle-Hawking state Universe, or its wave function, has no beginning: it simply has no initial boundaries in time or space, rather like the south pole of the Earth in Euclidean space with imaginary time, though it becomes a singularity in Minkowski space in real time. According to the theory, time diverged from the three spatial dimensions once the Universe reached the Planck time, the time required for light to travel in a vacuum a distance of 1 Planck length, or approximately 5.39 x 10^{-44} s. Because the Planck time comes from dimensional analysis, which produces a factor with the dimensionality of time from fundamental units while ignoring constant factors, the Planck length and time represent only a rough scale at which quantum gravitational effects are likely to become important. Also, since the universe was finite and without boundary in its beginning, according to Hawking, it should ultimately contract again.
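The dimensional analysis referred to above can be carried out directly: the Planck length and time are the unique combinations of ħ, G and c with the dimensions of length and time. A minimal sketch in Python, using the standard SI values of the constants:

```python
import math

# Fundamental constants (SI units, CODATA values)
hbar = 1.054571817e-34   # reduced Planck constant, J s
G = 6.67430e-11          # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

# The unique combinations of hbar, G and c with dimensions of length and time
planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
planck_time = math.sqrt(hbar * G / c**5)     # ~5.39e-44 s

# Consistency check: the Planck time is the light-travel time
# across one Planck length, as the text states.
assert abs(planck_time - planck_length / c) < 1e-50
```

Since the combination is fixed only up to dimensionless factors, these values mark a rough scale rather than a sharp threshold, exactly as the paragraph notes.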
The Holographic Principle, Entanglement, SpaceTime and Gravity
Two forms of evidence link quantum entanglement to cosmological processes that may involve gravity and the structure of spacetime. The holographic principle asserts that in a variety of unified theories, an n-D theory can be holographically represented by the physics of a corresponding (n-1)-D theory on a surface enclosing the region.
Fig 3: (a) An illustration of the holographic principle, in which physics on the 3D interior of a region, involving gravitational forces represented as strings, is determined by a 2D holographic representation on the boundary in terms of the physics of particle interactions. This correspondence has been successfully used in condensed matter physics to represent the transition to superconductivity, as the dual of a cooling black hole's "halo" (Merali 2011; Sachdev arXiv:1108.1197). (b) Holographic principle explained. Einstein's field equations can be represented on anti-de Sitter space, a space similar to hyperbolic geometry, where there is an infinite distance from any point to the boundary. This 'bulk' space can also be thought of as a tensor network, as in (c). In 1998 Juan Maldacena discovered a one-to-one correspondence between the gravitational tensor geometry in this space and a conformal quantum field theory, like standard particle field theories, on the boundary. A particle interaction in the volume would be represented as a more complex field interaction on the boundary, just as a hologram can generate a complex 3D image from wavefront information on a 2D photographic plate (Cowen 2015). The holographic principle can be used to generate dualities between higher-dimensional string theories and more tractable theories that avoid the infinities that can arise when we try to use the analogue of Feynman diagrams to do perturbation theory calculations in string theory. (c) Entanglement plays a pivotal role because when the entanglement between two regions on the boundary is reduced to zero, the bulk space pinches off and separates into two regions. (d) In an application to cosmology, entanglement on the horizons of black holes may occur if and only if a wormhole in spacetime connects their interiors. Einstein and Rosen addressed both wormholes and the pair-splitting EPR experiment.
Juan Maldacena sent his colleague Leonard Susskind the cryptic message ER=EPR, outlining the root idea that entanglement and wormholes are different views of the same phenomenon (Maldacena and Susskind 2013, Ananthaswamy 2015). (e) Time may itself be an emergent property of quantum entanglement (Moreva et al. 2013). An external observer (1) sees a fixed correlated state, while an internal observer using one particle of a correlated pair as a clock (2) sees the quantum state evolving through two time measurements, using polarization-rotating quartz plates and two beam splitters PBS1 and PBS2.
The Holographic Principle: A collaboration between physicists and mathematicians has made a significant step toward unifying general relativity and quantum mechanics by explaining how spacetime emerges from quantum entanglement in a more fundamental theory. The holographic principle states that gravity in, say, a three-dimensional volume can be described by quantum mechanics on a two-dimensional surface surrounding the volume. The correspondence applies generally between anti-de Sitter spaces modelling gravitation in n dimensions and conformal field theories in (n-1) dimensions, and plays a central role in decoding string and M-theories. Juan Maldacena's (1998) paper has become the most cited in theoretical physics, with over 7000 citations. The researchers have now found that quantum entanglement is the key to how spacetime emerges. Using a quantum theory (that does not include gravity), they showed how to compute the energy density, which is a source of gravitational interactions in three dimensions, using quantum entanglement data on the surface. This allowed them to interpret universal properties of quantum entanglement as conditions on the energy density that should be satisfied by any consistent quantum theory of gravity, without actually explicitly including gravity in the theory (Lin et al. 2015).
In a second experimental investigation, working directly with quantum entangled states (Moreva et al. 2013), time itself was found to be an emergent property of quantum entanglement. In the experiment, an external observer sees time as fixed throughout, while an observer using one particle of an entangled pair as a clock perceives time as evolving (fig 3(e)).
The holographic principle, otherwise known as the anti-de Sitter/conformal field theory (AdS/CFT) correspondence, has since been found to imply several conjectures when applied to combine gravity and quantum mechanics: that no global symmetries are possible, that internal gauge symmetries must come with dynamical objects that transform in all irreducible representations, and that internal gauge groups must be compact (Harlow & Ooguri 2019). Their previous work had found a precise mathematical analogy between the holographic principle and quantum error correcting codes, which protect information in a quantum computer. In the new paper, they showed that such quantum error correcting codes are not compatible with any global symmetry, meaning that global symmetry would not be possible in quantum gravity. This result has several important consequences. It predicts, for example, that protons are stable against decaying into other elementary particles, and that magnetic monopoles exist.
Fig 4: Above: Sketch of the timeline of the holographic Universe. Time runs from left to right. The far left denotes the holographic phase and the image is blurry because space and time are not yet well defined. At the end of this phase (denoted by the black fluctuating ellipse) the Universe enters a geometric phase, which can now be described by Einstein's equations. The cosmic microwave background was emitted about 375,000 years later. Patterns imprinted in it carry information about the very early Universe and seed the development of structures of stars and galaxies in the late time Universe (far right). Below: Angular power spectrum of CMB anisotropies, comparing Planck 2015 data with best fit ΛCDM (dotted blue curve) and holographic cosmology (solid red curve) models, for l ≥ 30.
Holographic Origin: A class of holographic models for the very early Universe (Afshordi et al. 2017), based on three-dimensional perturbative super-renormalizable quantum field theory (QFT), has been tested against cosmic microwave background observations and found to be competitive with the inflationary standard cold dark matter model with a cosmological constant (ΛCDM).
Inflation, Dark Matter and Dark Energy
Cosmic Inflation: Alan Guth and Alexei Starobinsky proposed in 1980 that a negative pressure field, similar in concept to dark energy, could drive cosmic inflation in the very early universe: a repulsive force, qualitatively similar to dark energy, producing an enormous exponential expansion of the universe just after the Big Bang, at a much higher energy density than the dark energy we observe today, and thought to have ended completely when the universe was just a fraction of a second old. It is unclear what relation, if any, exists between dark energy and inflation. Nearly all inflation models predict that the total (matter + energy) density of the universe should be very close to the critical density. The evidence from the early universe indicates that there was not simply an explosive beginning in a big bang but an extremely rapid exponential inflation of the universe in the first 10^{-35} s into essentially the huge expanding universe we see today.
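The critical density that inflation models single out follows from the Friedmann equation as ρ_c = 3H₀²/8πG. A short sketch, assuming the Planck-inferred value H₀ ≈ 67 km/s/Mpc quoted later in the article:

```python
import math

G = 6.6743e-11             # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.0857e22            # metres per megaparsec
H0 = 67.0 * 1000 / Mpc     # Hubble constant converted to s^-1

# Critical density separating eternal expansion from recollapse
# (in the absence of dark energy): rho_c = 3 H0^2 / (8 pi G)
rho_crit = 3 * H0**2 / (8 * math.pi * G)    # kg/m^3, ~8.4e-27

# Equivalent to only a few hydrogen atoms per cubic metre
protons_per_m3 = rho_crit / 1.6726e-27
```

The striking smallness of this value, around five protons per cubic metre, is why cosmologists express densities as fractions of ρ_c rather than in absolute units.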
Since dark energy and dark matter dominate the cosmological mass-energy equation, interest is now focusing on dark inflation as better capable of modelling and picturing the earliest phases of the universe, such as the inflationary period, whose energy parameters and precise dynamics remain highly uncertain. The range of energies at which inflation could have occurred is vast, stretching over 70 orders of magnitude. In order to recreate the observed dominance of radiation in the Universe, inflatons should lose energy rapidly. The researchers propose two physical mechanisms which could be responsible for the process, and find that the new model predicts the course of the Universe's thermal history with far greater accuracy than previously. If inflation involved the dark sector, the contribution of gravitational waves increased proportionally. This means that traces of the primordial gravitational waves are not as weak as originally thought. Data suggests that primordial gravitational waves could be detected by observatories currently at the design stage or under construction (Artymowski et al. 2018 doi:10.1088/1475-7516/2018/04/046). The stability of the earliest phase against immediate gravitational collapse also appears to depend on the relation between gravity and the Higgs field in the inflationary epoch (Herranen et al. Phys. Rev. Lett. 113, 211102).
Fig 4b: Dark inflation gives a more precise description of the inflationary period based on the dominant mass-energy components of the universe, and predicts gravitational waves detectable via timing of existing pulsars (circled far right), which may soon be within reach of a new generation of instruments. Data suggests that primordial gravitational waves could be detected by observatories currently at the design stage or under construction, such as the DeciHertz Interferometer Gravitational Wave Observatory (DECIGO), Laser Interferometer Space Antenna (LISA), European Pulsar Timing Array (EPTA) and Square Kilometre Array (SKA). The first events could be detected in the coming decade.
In some 'eternal inflation' models the inflation is fractal, leaving behind mature 'bubble' universes while inflation continues unabated (fig 2(b)). The inflationary model explains the big bang neatly in terms of the same process of symmetry-breaking which caused the four forces of nature (gravity, electromagnetism, and the weak and strong nuclear forces) to become so different from one another. The large-scale cosmic structure is thus related to the quantum scale in one logical puzzle. In this symmetry-breaking the universe adopted its very complex 'twisted' form, which made possible the hierarchical interaction of the particles to form protons and neutrons, then atoms, and finally molecules and complex molecular life. We can see this twisted nature in the fact that all the charges in the nucleus are positive or neutral (protons and neutrons) while the electrons orbiting an atom are all negatively charged. Some theories model inflation on the idea of a scalar field, and some of these consider that the Higgs particle may itself be the source of the hypothetical inflaton generating this field (arXiv:1011.4179).
Symmetry-breaking is a classic example of engendering at work. Cosmic inflation explains why the universe seems to have just about enough energy to fly apart into space and no more, and why disparate regions of the universe, which seemingly couldn't have communicated since the big bang at the speed of light, seem to be so regular. Inflation ties together the differentiation of the fundamental forces and an exponential expansion of the universe based on a form of antigravity which exists only until the forces break their symmetry. Inflation explains galactic clusters as phenomenally inflated quantum fluctuations and suggests that our entire universe may have emerged from its own wave function in a quantum fluctuation. However, more recent modelling suggests that, due to these quantum effects, inflation can lead to a multiverse in which the universe breaks up into an infinite number of patches, which explore all conceivable properties as you go from patch to patch. Hence we shall investigate other models, such as the ekpyrotic scenario, which also predict a smoothed-out universe. On the other hand, the latest data from the Planck satellite does favour the simplest models of inflation, in which the size of temperature fluctuations is, on average, the same on all distance scales (doi:10.1038/nature.2014.16462).
Our view of the distant parts of the universe, which we see as they were long ago because of the time light has taken to reach us, likewise confirms a different, more energetic early galactic life. Looking out to the limits of the observable universe, because of the long delay as light crosses such a vast region, we witness quasars and early energetic galaxies quite different from mature galaxies such as our own Milky Way.
Fig 5: Researchers used instruments at the Atacama Large Millimeter/submillimeter Array observatory in Chile to observe light emitted in a galaxy called MACS1149-JD1, one of the farthest light sources visible from Earth. The emissions are a clue to the galaxy's redshift, detected in an emission line of doubly ionized oxygen at a redshift of 9.1096 ± 0.0006. The galaxy's redshift suggests that the starlight was emitted when the universe was about 550 million years old, but many of those stars were already about 300 million years old, further calculations indicate. That finding suggests that the stars would have blinked into existence some 250 million years after the universe's birth (Hashimoto et al. doi:10.1038/s41586-018-0117-z).
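The quoted redshift translates directly into a scale factor and wavelength stretch via 1 + z = 1/a. A small sketch (the 88 μm rest wavelength of the doubly ionized oxygen [O III] line is an assumed standard value, not stated in the caption):

```python
# Redshift of the oxygen emission line detected in MACS1149-JD1
z = 9.1096

# Scale factor of the universe at emission, relative to today:
# the universe was about a tenth of its present linear size
a = 1.0 / (1.0 + z)

# Every emitted wavelength is stretched by the factor (1 + z),
# so the far-infrared [O III] line (rest wavelength ~88 micrometres,
# an assumed identification) arrives at ~890 micrometres, which is
# why ALMA's submillimetre band can detect it
rest_wavelength = 88e-6                    # metres
observed = rest_wavelength * (1.0 + z)     # ~8.9e-4 m
```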
Early Evolution: As shown in fig 7, the evolution of the early universe from the end of the hypothesized inflationary period begins with the cosmic background radiation, emitted when light became separated from neutral matter as the charged plasma coupled to light condensed into neutral atoms. This led initially to a dark age of a few hundred million years before the first galaxies formed and stars began to shine. The evidence from fig 5 implies that the first stars were radiating from as early as 250 million years after the cosmic origin. Other research from radio frequencies (SN: 3/31/18 p6) suggests star formation began as early as 180 million years after the Big Bang.
Reheating: The link between the inflationary phase and the hot particulate origin of the expanding universe we see in the Big Bang is called "reheating". The earliest phases of reheating should be marked by resonances: one form of high-energy matter dominates, shaking back and forth in sync with itself across large expanses of space, leading to explosive production of new particles, after which the resonant effect breaks up and the produced particles scatter off each other, coming to a thermal equilibrium reminiscent of Big Bang conditions. The scientists chose a model of inflation whose predictions closely match high-precision measurements of the cosmic microwave background, emitted 380,000 years after the Big Bang, which is thought to contain traces of the inflationary period. The simulation tracked the behaviour of two types of matter that may have been dominant during inflation, very similar to the Higgs boson, the particle recently observed in other experiments. Matter at very high energies was modelled as interacting with gravity in ways modified by quantum mechanics: quantum-mechanical effects predict that the strength of gravity can vary in space and time when interacting with ultra-high-energy matter, a non-minimal coupling. They found that the stronger this quantum-modified gravitational effect on matter, the faster the universe transitioned from the cold, homogeneous matter of inflation to the much hotter, diverse forms of matter characteristic of the Big Bang. By tuning this quantum effect, they could make this crucial transition take place over 2 to 3 "e-folds", referring to the amount of time it takes for the universe to (roughly) triple in size; that is, they managed to simulate the reheating phase within the time it takes for the universe to triple in size two to three times. By comparison, inflation itself took place over about 60 e-folds (Nguyen et al. 2019).
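The e-fold bookkeeping in the passage above is easy to make concrete: after N e-folds the scale factor has grown by the factor e^N, and a single e-fold is a factor e ≈ 2.72, i.e. a rough tripling. A sketch:

```python
import math

def expansion_factor(n_efolds: float) -> float:
    """Factor by which the scale factor grows after n e-folds."""
    return math.exp(n_efolds)

# One e-fold: the universe grows by e ~ 2.72, roughly tripling
one_efold = expansion_factor(1)

# The simulated reheating phase: 2-3 e-folds
reheating = expansion_factor(3)          # ~20x growth

# Inflation itself: ~60 e-folds, a factor of ~1e26
full_inflation = expansion_factor(60)
```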
Ultimate fate: The eventual fate of the universe is less certain, because its rate of expansion lies very close to the limiting condition between the gravitational attraction of the mass-energy it contains ultimately reversing the expansion, causing an eventual collapse, and continued expansion forever. The evidence now favours a perpetual and possibly accelerating expansion, and astronomers are seeking an explanation for the apparent missing mass in dark matter and in a dark energy, called 'quintessence' in some of its more variable forms, promoting an accelerating expansion that may vary over time.
The missing mass is clearly evident in nearby galaxies, which spin so rapidly they would fly apart if the only matter present were the luminous matter of stars, black holes and gaseous nebulae. WMAP and Planck data now suggest the universe's rate of expansion has increased part way through its lifetime and that its large-scale dynamics are governed mostly by dark energy (68.3%), with successively smaller contributions from dark matter (26.8%) and ordinary galactic matter and radiation (4.9%). At the time of the cosmic microwave background radiation (CMB), dark matter comprised 63% of the mass-energy, photons 15%, atoms 12% and neutrinos 10%; but because photons have zero rest mass, and the CMB is full of low-energy photons, the particle ratio is about 10^{9} photons for each proton or neutron.
Fig 6: (left) SN 2011fe, a type 1a supernova 21 million light-years away in galaxy M101, discovered in 2011, shown in before and after images of the galaxy. Theoretical models for the current expansion rate, taking into account normal and dark matter and dark energy using cosmic microwave background data, infer a Hubble constant of 67, but current measurements of supernovae put the figure at 73 or 74, based on actually measuring the expansion by analyzing how the light from distant supernova explosions has dimmed over time. Explanations vary from quintessence models of an actual field rather than a cosmological constant, through additional neutrino types or relativistic particles moving close to light speed, to interactions between dark matter and radiation in the early universe. (right) Discrepancies between early and late measurements of the Hubble constant.
Dark energy: A type 1a supernova occurs in binary systems in which one of the stars is a white dwarf, which gradually accretes mass from its companion, which can be anything from a giant star to an even smaller white dwarf, until its core reaches the ignition temperature for carbon fusion. Within a few seconds of the initiation of nuclear fusion, a substantial fraction of the matter in the white dwarf undergoes a runaway reaction, releasing enough energy to unbind the star in a supernova explosion. This process produces a consistent peak luminosity because of the uniform mass of the white dwarfs that explode via the accretion mechanism. The stability of this value allows these explosions to be used as standard candles to measure the distance to their host galaxies, because the visual magnitude of the supernovae depends primarily on the distance. In 1998 two separate teams noted that these distant supernovae were much dimmer than they should be. The simplest and most logical explanation is that the expansion of the universe is now accelerating by comparison with measures of the earlier universe such as the cosmic microwave background (arXiv:astro-ph/9805201, arXiv:astro-ph/9812133).
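The standard-candle logic above rests on the distance modulus m − M = 5 log₁₀(d/10 pc): if the absolute magnitude M is fixed by the physics of the explosion, the measured apparent magnitude m yields the distance, and unexpected extra dimming signals extra distance. A sketch with illustrative values (the peak absolute magnitude M ≈ −19.3 for type 1a supernovae and the ~6.4 Mpc distance of SN 2011fe's host galaxy M101 are assumed here, not taken from the text):

```python
import math

def distance_modulus(d_parsecs: float) -> float:
    """Distance modulus m - M for a source at distance d (in parsecs)."""
    return 5 * math.log10(d_parsecs / 10)

# Assumed standard-candle peak absolute magnitude for a type 1a supernova
M = -19.3

# Assumed distance to M101 (~21 million light-years ~ 6.4 Mpc)
d_pc = 6.4e6

# Predicted peak apparent magnitude: brighter (smaller m) when nearer
m_predicted = M + distance_modulus(d_pc)
```

Conversely, if a high-redshift supernova is observed to be, say, 0.25 magnitudes dimmer than its redshift would imply, it must lie further away than a decelerating universe allows, which is the 1998 argument for acceleration.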
Fig 7: Left: Cosmic history including inflation and dark energy. Right: After stars formed in the early Universe, their ultraviolet light is expected to have penetrated the primordial hydrogen gas and altered the excitation state of its 21-centimetre hyperfine line, causing the gas to absorb photons from the cosmic microwave background, producing a distortion at radio frequencies of less than 200 MHz. The onset of the cosmic dawn is estimated at 180 million years after the Big Bang. The signal's disappearance gives away a second milestone – when more energetic X-rays from the deaths of the first stars raised the temperature of the gas and turned off the signal – around 250 million years after the Big Bang. The strength of the absorption suggests that either there was more radiation than expected in the cosmic dawn, or the gas was cooler than predicted. That points to dark matter, which theories suggest should have been cold in the cosmic dawn. The results suggest dark matter should be lighter than current theory indicates, which could help to explain why physicists have failed to observe dark matter directly (doi:10.1038/nature25792).
Soon after, dark energy was supported by independent observations: in 2000, the BOOMERanG and Maxima cosmic microwave background experiments observed the first acoustic peak in the CMB, showing that the total (matter + energy) density is close to 100% of critical density. Then in 2001, the 2dF Galaxy Redshift Survey gave strong evidence that the matter density is around 30% of critical. The large difference between these two supports a smooth component of dark energy making up the difference of some 70%.
Dark energy is poorly understood at a fundamental level; the main required properties are that it functions as a type of antigravity, it dilutes much more slowly than matter as the universe expands, and it clusters much more weakly than matter, or perhaps not at all.
The cosmological constant Λ (see equation 5), is the simplest possible form of dark energy since it is constant in both space and time, and this leads to the current standard ΛCDM model of cosmology, involving the cosmological constant Λ and cold dark matter. It is frequently referred to as the standard model of Big Bang cosmology, because it is the simplest model that provides a reasonably good account of (1) the cosmic microwave background, (2) the largescale structure of the galaxies, (3) the abundances of hydrogen (including deuterium), helium, and lithium and (4) the accelerating expansion of the universe.
Antimatter gravity could also provide an explanation for the Universe's expansion if significant amounts of antimatter can be found. The current formulation of general relativity predicts that matter and antimatter are both self-attractive, yet matter and antimatter mutually repel each other. CPT symmetry means that, in order to transform a physical system of matter into an equivalent antimatter system (or vice versa) described by the same physical laws, not only must particles be replaced with corresponding antiparticles (C operation), but an additional PT transformation is also needed. From this perspective, antimatter can be viewed as normal matter that has undergone a complete CPT transformation, in which its charge, parity and time are all reversed. Even though the charge component does not affect gravity, parity and time affect gravity by reversing its sign. So although antimatter has positive mass, it can be thought of as having negative gravitational mass, since the gravitational charge in the equation of motion of general relativity is not simply the mass, but includes a factor that is PT-sensitive and yields the change of sign. CPT symmetry means that antimatter basically exists in an inverted spacetime: the P operation inverts space, and the T operation inverts time (Villata 2011).
Quintessence is a model of dark energy in the form of a scalar field, forming a fifth force of nature that changes over time, unlike the cosmological constant, which always stays fixed. It could be either attractive or repulsive, depending on the ratio of its kinetic and potential energy. Quintessence theories, which combine standard model constraints with a dark energy field, may help to provide a real constraint which might enable string theories to be physically tested, reducing their vast number of possible universes with differing laws of nature to the ones we experience (arXiv:1806.08362). The scalar field of the Higgs boson would appear to create a contradiction with quintessence constraints forbidding scalar field critical points if the two interacted, unless they do so in a particular way which would be physically identifiable (doi:10.1103/PhysRevD.98.086004).
Fig 7b: Claudia de Rham. In the latest acknowledgement of her breakthrough, she received the Blavatnik Award for Young Scientists, two years after winning the Adams prize, one of the University of Cambridge's oldest and most prestigious awards.
Chameleon Particle is a hypothetical scalar particle that couples to matter more weakly than gravity, postulated as a dark energy candidate. Due to a nonlinear self-interaction, it has a variable effective mass which is an increasing function of the ambient energy density; as a result, the range of the force mediated by the particle is predicted to be very small in regions of high density (for example on Earth, where it is less than 1 mm) but much larger in low-density intergalactic regions: out in the cosmos chameleon models permit a range of up to several thousand parsecs. As a result of this variable mass, the hypothetical fifth force mediated by the chameleon is able to evade current constraints on equivalence principle violation derived from terrestrial experiments, even if it couples to matter with a strength equal to or greater than that of gravity. Although this property would allow the chameleon to drive the currently observed acceleration of the universe's expansion, it also makes it very difficult to test for experimentally. It has been proposed in realistic models of galaxy formation ("Realistic simulations of galaxy formation in f(R) modified gravity", Nature Astronomy doi:10.1038/s41550-019-0823-y), could be produced in the medial tachocline layers of the Sun through magnetic fields interacting with photons, and has been putatively detected in the XENON1T dark matter experiment (arXiv:2103.15834).
Fig 7c: Above: galaxy formation models. Below: XENON1T detections and chameleon model of solar emission.
Massive Gravity: If gravitons have a mass, then gravity is expected to have a weaker influence on very large distance scales, which could explain dark energy and why the expansion of the universe has not been reined in. Claudia de Rham's work (de Rham, Gabadadze & Tolley 2011, de Rham 2014, de Rham et al. 2017) marks a breakthrough in a century-long quest to build a working theory of massive gravity. Despite successive efforts, previous versions of the theory had the unfortunate feature of predicting the instantaneous decay of every particle in the universe, an intractable issue that mathematicians refer to as a "ghost". When de Rham and her collaborators published their landmark paper on massive gravity in 2011, the response was initially swift and hostile, due to the possible presence of ghosts in their theory, but after 8 years the theory has stood up and is gaining traction. A discussion of these issues can be found in Merali (2013).
Particle physics predicts the existence of vacuum energy which could explain dark energy, but also asserts that it should be 10^{120} times larger than what is needed to explain the accelerating expansion observed by astronomers. Gravity is long-range, since we feel gravity from the Sun; however, if the graviton had a tiny mass of less than 10^{-33} eV, it would still fit all astronomical observations. (Neutrinos have masses of the order of 1 eV, and the electron has a mass of about 511,000 eV.) A mass-carrying graviton would swallow up almost all of the vacuum's energy, leaving behind just a small fraction as dark energy to cause the Universe to accelerate outwards. Such experiments could soon be carried out within the Solar System, because massive-gravity models predict a gravitational field between Earth and the Moon that is slightly different to that of the Sun. This would create a detectable difference of one part in 10^{12} in the precession of the Moon's orbit around Earth. Experiments that fire lasers back and forth between Earth and mirrors left on the Moon currently measure the distance between the two bodies, and the precession angle, with an accuracy of one part in 10^{11}.
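The 10^{-33} eV figure can be checked with an order-of-magnitude estimate: the range of a force carried by a massive particle is set by its Compton wavelength ħ/mc, and for this mass that range comes out comparable to the Hubble radius, so gravity would weaken only on cosmological scales. A sketch (the use of the Compton wavelength as the force range is the standard Yukawa argument, not spelled out in the text):

```python
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
eV = 1.602176634e-19     # joules per electronvolt

# Graviton rest energy at the upper bound quoted in the text
m_g_c2 = 1e-33 * eV      # joules

# Compton wavelength ~ range of a Yukawa-type force: lambda = hbar*c / (m c^2)
graviton_range = hbar * c / m_g_c2       # ~2e26 m

# Hubble radius c/H0 for comparison (H0 ~ 67 km/s/Mpc ~ 2.2e-18 s^-1)
hubble_radius = c / 2.2e-18              # ~1.4e26 m
```

That the two lengths agree to within a factor of order unity is exactly why a graviton mass near this bound could mimic dark energy without conflicting with Solar System and galactic observations.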
Dark matter/energy: A model of dark energy emerging as a repulsive magnetic effect of dark matter has also been proposed (Loeve, Nielsen & Hansen 2021).
Fig 8: (Left) Accelerating expansion, with supernova and cosmic background measures. (Right) Phantom dark energy picture of the Big Rip. Cosmologists estimate that the acceleration began roughly 5 billion years ago. Before that, it is thought that the expansion was decelerating, due to the attractive influence of dark matter and baryons. The density of dark matter in an expanding universe decreases, and eventually the dark energy dominates: when the volume of the universe doubles, the density of dark matter is halved, but the density of dark energy is constant for a cosmological constant and changes only slowly otherwise.
The changing rate of expansion of the universe is described by the equation of state constant w = p/ρ, where p is the pressure and ρ is the energy density. Einstein's field equations (5) have an exact solution in the form of the Friedmann-Lemaitre-Robertson-Walker (FLRW) metric, describing the scale factor a(t) of a homogeneous, isotropic expanding or contracting universe that is path-connected, but not necessarily simply connected: ds² = −dt² + a(t)²[dr²/(1 − kr²) + r²(dθ² + sin²θ dφ²)]. If we examine the "effective" pressure and energy density, which absorb the cosmological constant, ρ_eff = ρ + Λ/8πG and p_eff = p − Λ/8πG, we can see that ä/a = −(4πG/3)(ρ_eff + 3p_eff) (6), where w_eff = p_eff/ρ_eff. This shows us the underlying relationship between the cosmological constant Λ and w.
From the above equation (6) we can see that for w < −1/3, the expansion of the universe will continue to accelerate. A fixed cosmological constant corresponds to w = −1, which we can see as follows. The cosmological constant has negative pressure equal to its energy density and so causes the expansion of the universe to accelerate if it is already expanding, and vice versa. This is because energy must be lost from inside a container (the container must do work on its environment) in order for the volume to increase: a change in volume dV requires work done equal to a change of energy −p.dV, where p is the pressure. But the amount of energy in a container full of vacuum actually increases when the volume increases, because the energy is equal to ρV, where ρ is the energy density of the cosmological constant. Therefore p is negative, and in fact p = −ρ.
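The container argument above compresses into a two-line derivation, using the same symbols:

```latex
dE = -p\,dV \quad\text{(work done on expansion)}, \qquad
E_{\mathrm{vac}} = \rho V,\ \rho\ \text{constant}
\;\Rightarrow\; dE = \rho\,dV
\;\Rightarrow\; p = -\rho, \qquad w \equiv \frac{p}{\rho} = -1 .
```

Equating the work done with the actual energy gained by the enlarged vacuum forces the pressure to be negative, which is exactly the condition for accelerated expansion.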
Phantom Dark Energy: Hypothetical phantom energy would have an equation of state w < −1. It possesses negative kinetic energy, and predicts expansion of the universe in excess of that predicted by a cosmological constant, which leads to a Big Rip. The expansion rate becomes infinite in finite time, causing the expansion to accelerate without bound, exceeding the speed of light (since it involves expansion of the universe itself, not particles moving within it), so that more and more objects leave our observable universe faster than its expansion, as light and information emitted from distant stars and other cosmic sources cannot "catch up" with the expansion. As the observable universe expands, objects will be unable to interact with each other via the fundamental forces, and eventually the expansion will prevent any action of forces between any particles, even within atoms, "ripping apart" the universe. One estimate puts the destruction of the Milky Way at 60 million years before the rip, the solar system at 3 months, the Earth at 30 minutes and atoms at 10^{−19} seconds, as smaller and smaller scales become overwhelmed (fig 8). The value of w has been constrained by Planck in 2013 and 2015 to be w = −1.13 ± 0.13 and w = −1.006 ± 0.045 respectively, and the value from WMAP9 is w = −1.084 ± 0.063, leaving open the possibility of a Big Rip (arXiv:1708.06981, arXiv:1502.01589, arXiv:1212.5226, arXiv:1409.4918).
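The time remaining before a Big Rip can be sketched with the standard estimate of Caldwell, Kamionkowski and Weinberg (arXiv:astro-ph/0302506), t_rip − t_0 ≈ (2/3)/(|1+w| H_0 √(1−Ω_m)); the parameter values below are illustrative round numbers, not figures from this article.

```python
import math

# Big Rip countdown for phantom dark energy with equation of state w < -1,
# using the Caldwell-Kamionkowski-Weinberg approximation.

def time_to_big_rip_gyr(w, h0_km_s_mpc=70.0, omega_m=0.3):
    """Gigayears from today until the Big Rip, for w < -1."""
    if w >= -1:
        raise ValueError("a Big Rip requires w < -1")
    mpc_km = 3.0857e19                             # kilometres per megaparsec
    h0_per_gyr = h0_km_s_mpc / mpc_km * 3.156e16   # H0 converted to 1/Gyr
    return (2.0 / 3.0) / (abs(1 + w) * h0_per_gyr * math.sqrt(1 - omega_m))

# For the often-quoted example w = -1.5 this gives roughly 22 Gyr.
print(f"{time_to_big_rip_gyr(-1.5):.1f} Gyr")
```

For w only slightly below −1, as the Planck constraints allow, |1+w| is small and the rip recedes to hundreds of gigayears or more.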
One application of phantom energy involves a cyclic model of the universe (arXiv:hep-th/0610213), in which dark energy with a w < −1 equation of state leads to a turnaround, extremely shortly before the would-be Big Rip, at which both the volume and the entropy of our universe decrease by a gigantic factor, while very many independent, similarly small contracting universes are spawned.
One theory attaches the turning on of dark energy part way through the expansion to new types of string-theory-related axions (arXiv:1409.0549). Another invokes an additional scalar field that operates in a seesaw mechanism with the grand unification energy of the Higgs particle (doi:10.1103/PhysRevLett.111.061802), gaining a very small energy in inverse relation to the Higgs energy; the seesaw mechanism is also used to model small neutrino masses. Yet another ascribes it to 'dark magnetism': primordial photons with wavelength greater than the universe (arXiv:1112.1106).
Unimodular Gravity and Mass-Energy Leakage: Dark energy could come about because the total amount of energy in the universe isn't conserved, but may gradually disappear. Dark energy could be a new field, a bit like an electric field, that fills space; or it could be part of space itself, a pressure inherent in the vacuum, called a cosmological constant. Quantum mechanics suggests the vacuum itself should fluctuate imperceptibly. In general relativity, those tiny quantum fluctuations produce an energy that would serve as the cosmological constant; yet it comes out 120 orders of magnitude too big, big enough to obliterate the universe. General relativity assumes a mathematical symmetry called general covariance, which says that no matter how you label or map spacetime coordinates, i.e. the positions and times of events, the predictions of the theory must be the same. That symmetry immediately requires that energy and momentum are conserved. Unimodular gravity possesses a more limited version of that symmetry, under which quantum fluctuations of the vacuum do not produce gravity or add to the cosmological constant. If one then allows violation of the conservation of energy and momentum, this can set the value of the cosmological constant: dark energy keeps track of how much energy and momentum has been lost over the history of the universe (arXiv:1604.04183, doi:10.1126/science.aal0603).
A thermodynamic interpretation of dark energy has also been proposed. Carroll and Chatwin-Davies (arXiv:1703.09241) take a definition of entropy which uses a quantum mechanical description of spacetime to calculate what happens to the geometry of spacetime as it evolves. Once a universe has reached peak entropy it is effectively one described by de Sitter geometry. In the 1980s, Robert Wald showed that a universe with a positive cosmological constant will end up as a flat, empty, featureless void known as de Sitter space, and Tom Banks suggested then that the value of dark energy could be related to the entropy of spacetime. This thermodynamic way of thinking turns the standard view of dark energy on its head: dark energy emerges from the quantum structure of spacetime and then drives the accelerated expansion. Solving the mystery of dark energy's value then becomes a case of justifying the choice of a particular quantum mechanical description of spacetime.
Dark matter is likewise poorly understood. There are four basic candidates: axions; machos (non-luminous small stars, black holes etc.); wimps (weakly interacting massive particles, which might emerge from extensions of the standard model); and complex dark matter experiencing strong self-interactions, while interacting with normal matter only through gravity.
In terms of WIMPs, the most sensitive dark matter detector in the world is Gran Sasso's XENON1T, which looks for flashes of light created when dark matter interacts with atoms in its 3.5-tonne tank of extremely pure liquid xenon. But the team reported no dark matter from its first run, and as of May 2018 the larger second run reported likewise. Neither was there any signal in data collected over two years during the second iteration of China's PandaX experiment, based in Jinping in Sichuan province. Hunts in space have also failed to find WIMPs, and hopes are fading that a once-promising γ-ray signal detected by NASA's Fermi telescope from the centre of the Milky Way (see fig 11) was due to dark matter: more conventional sources seem to explain the observation. There has been only one major report of a dark-matter detection, made by the DAMA collaboration at Gran Sasso, but no group has succeeded in replicating that highly controversial result, although renewed attempts to match it are under way (Gibney 2017). In 2018 DAMA announced new results confirming the effect with new detectors. However, the upgrade has made it sensitive to lower-energy collisions; for typical dark-matter models, the timing of the fluctuations, as seen from Earth, should reverse below certain energies, and the latest results don't show that. Furthermore, the COSINE-100 experiment, which uses sodium iodide crystals as DAMA does, has seen no effect. LUX, the Large Underground Xenon experiment in South Dakota, also reported no sign (Aron 2016). The latest round of results seems to rule out the simplest and most elegant supersymmetry-based WIMP theories, leaving open the possibility of axions, or of a hidden sector of particles interacting more feebly, or not at all, with normal matter.
Interest has also converged on a link between dark matter and antimatter, in which axions interact differently with antimatter, leading to a possible explanation of both dark matter and the preponderance of matter over antimatter (Carosi G 2019 Nature 575, 29-34 doi:10.1038/d41586-019-03431-5). Significantly, because axions are bosons and can cohabit, the constraints on their possible masses are much less confining than for other dark matter candidates. Such approaches include the notion of "quark nuggets": massive collections, e.g. of antiquarks, surrounded by an envelope of axions which maintains their stability and shields their interaction with external ordinary matter, so that almost no interaction occurs (Quark nuggets of wisdom; Cosmic-ray detector might have spotted nuggets of dark matter).
The simplest model of dark matter portrays it as a single particle  one that happens to interact with others of its kind and normal matter very little or not at all. Physicists favor the most basic explanations that fit the bill and add extra complications only when necessary, so this scenario tends to be the most popular. For dark matter to interact with itself requires not only dark matter particles but also a dark force to govern their interactions and dark boson particles to carry this force. This more complex picture mirrors our understanding of normal matter particles, which interact through forcecarrying particles. Selfinteracting dark matter with dark forces and dark photons may not be as simple as the singleparticle explanation but it is just as reasonable an idea.
A 2018 study of four colliding galaxies in the galaxy cluster Abell 3827 suggests for the first time that the dark matter in them may be interacting with itself through some unknown force, other than gravity, that has no effect on ordinary matter. The dark matter in Abell 3827 is plentiful, so it warps the space around it significantly. The scientists found that in at least one of the colliding galaxies the dark matter had become separated from its stars and other visible matter by about 5,000 light-years. One explanation is that the dark matter from this galaxy interacted with dark matter from one of the other galaxies flying by it, and these interactions slowed it down, causing it to separate and lag behind the normal matter (doi:10.1038/nature.2015.17350).
D-star hexaquark Bose-Einstein condensate: When six quarks combine, this creates a type of particle called a dibaryon, or hexaquark. The d-star hexaquark d*(2380), described in 2014 and made of six light quarks (three u quarks and three d quarks, ududud), was the first non-trivial hexaquark detection. Because they are bosons, at close to absolute zero they could form Bose-Einstein condensates, which might remain stable and form dark matter without having to extend the standard model (Bashkanov & Watts 2020). During the earliest moments after the Big Bang, as the cosmos slowly cooled, stable d*(2380) hexaquarks could have formed alongside baryonic matter, and the production rate of this particle would have been sufficient to account for the 85% of the Universe's mass that is believed to be dark matter. Calculations have shown that the H dibaryon (udsuds), which could result from the combination of two uds hyperons, is light and (meta)stable, taking more than twice the age of the universe to decay.
Dark Negative Energy: A new theory unifies dark matter and dark energy into a single phenomenon: a fluid which possesses negative mass accompanied by negative gravity. To avoid this fluid rapidly diluting itself away, the theory introduces a 'creation tensor', which allows negative masses to be continuously created. It demonstrates that under continuous creation this negative-mass fluid does not dilute during the expansion of the cosmos and appears identical to dark energy. It also provides the first correct predictions of the behaviour of dark matter halos: the accompanying computer simulation predicts the formation of dark matter halos just like the ones inferred from observations using modern radio telescopes (Farnes J (2018) Astronomy & Astrophysics arXiv:1712.07962).
Dark Sector Theories: There are further searches under way for lighter dark matter candidates such as the dark photon, using very intense beams of lower-energy particles. Complex dark matter, or the dark sector, was first suggested in 1986 (Holdom B 1986 Phys. Lett. B 166, 196-198), but remained largely unexplored until a group of theorists resurrected the theory (Arkani-Hamed N et al. 2009 Phys. Rev. D 79, 015014), in light of results from a 2006 satellite mission called PAMELA (Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics), which had observed a puzzling excess of positrons in space. Two nearby pulsars, Geminga and PSR B0656+14, were identified as possible sources; however, an international team using the High-Altitude Water Cherenkov Gamma-ray Observatory measured the positrons emanating from these and found they couldn't account for the surplus reaching Earth. Theorists suggested that the positrons might be spawned by dark-matter particles annihilating each other, but the weakly interacting massive particles (WIMPs) most often suggested would have also decayed into protons and antiprotons, which weren't seen by PAMELA. Another motivation came from a result reported in 2004 that found that the magnetic moment created by the spin and charge of the muon did not match the predictions of the standard model, suggesting a supersymmetric explanation. This experimental anomaly, called the muon g−2, could also be rectified by a dark-sector force; however, a theoretical paper (arXiv:1801.10244) now suggests the effect may be purely due to gravitational spacetime effects of the Earth on the relativistic high-energy muons. Theoretical suggestions of dark matter collapsing to form additional hidden galactic structures (doi:10.1103/PhysRevLett.120.051102) and of dark fusion (doi:10.1103/PhysRevLett.120.221806) have also been proposed.
The most recent cosmological data, including the cosmic microwave background radiation anisotropies from Planck 2015, Type Ia supernovae, baryon acoustic oscillations, the Hubble constant and redshift-space distortions, show that an interaction in the dark sector parameterized as an energy transfer from dark matter to dark energy is strongly suppressed by the whole updated cosmological dataset. On the other hand, an interaction between the dark sectors with energy flow from dark energy to dark matter proves to be in better agreement with the available cosmological observations (arXiv:1605.04138).
Fig 9: A variety of searches are under way for lighter dark matter candidates. Log-log plots: vertical axis, relative interaction strength; horizontal axis, mass in GeV from 0.01 to 1.
High-precision measurements of the fine structure constant (fig 9 right; Science 360/6385 191-195 doi:10.1126/science.aap7706) have added further constraints that all but eliminate a dark photon, but do favour a dark axial vector boson, including the relaxion, with mass and interaction strength as in the central green triangle (arXiv:1708.00010).
There are two qualitatively different types of neutron lifetime measurements. In the bottle method, ultracold neutrons are stored in a container for a time comparable to the neutron lifetime; the remaining neutrons that did not decay are counted and fitted to a decaying exponential, exp(−t/τ_n). The average from the five bottle experiments included in the Particle Data Group (PDG) average is 879.6 ± 0.6 s. In the beam method, both the number of neutrons N in a beam and the protons resulting from β decays are counted, and the lifetime is obtained from the decay rate, dN/dt = −N/τ_n. This yields a considerably longer neutron lifetime; the average from the two beam experiments included in the PDG average is 888.0 ± 2.0 s. The discrepancy between the two results is 4.0 σ. A possible explanation arises from the neutron having a second decay path into the dark sector. Such a path violates baryon number and generically gives rise to proton decay via a virtual neutron followed by its alternate decay, but this can be eliminated from the theory if the sum of the masses of the particles in the minimal final state of the neutron decay process is larger than m_p − m_e. On the other hand, for the neutron to decay, this sum must be smaller than the neutron mass, setting prospective bounds on the dark particle mass (Fornal & Grinstein 2017). However, recent evidence from the UCNtau team claims to have ruled out the presence of the telltale gamma rays with 99 percent certainty (arXiv:1802.01595).
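The quoted 4.0 σ tension follows directly from the two averages above by combining their uncertainties in quadrature; a quick check (my own arithmetic, using the values in the text):

```python
import math

# Significance of the bottle-vs-beam neutron lifetime discrepancy.

def discrepancy_sigma(x1, e1, x2, e2):
    """Difference of two measurements in units of their combined error."""
    return abs(x1 - x2) / math.hypot(e1, e2)

bottle = (879.6, 0.6)   # bottle-method average, seconds
beam = (888.0, 2.0)     # beam-method average, seconds
sigma = discrepancy_sigma(*bottle, *beam)
print(f"{sigma:.1f} sigma")   # ~4.0 sigma, matching the quoted tension
```

Because the beam method counts only decays to protons, a dark decay branch would lengthen the beam lifetime relative to the bottle value in just this way.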
Modified Newtonian Dynamics (MOND) attempts to avoid the need for dark matter by modifying gravity to account for the observed high velocities of stars around the galaxy. It amends Newton's Second Law so that, at the extremely small accelerations characteristic of the outskirts of galaxies, far below anything typically encountered in the Solar System or on Earth, force becomes proportional to the square of the acceleration rather than the first power, so that gravitational attraction varies inversely with galactic radius (as opposed to the inverse square). However, MOND and its relativistic generalisations such as TeVeS do not adequately account for the observed properties of galaxy clusters, and no satisfactory cosmological model has been constructed. Furthermore, TeVeS, and another class of so-called Galileon theories which introduce scalar and/or vector fields that decay more slowly, have been decisively disproved by the LIGO neutron star collision, because this proved that gravitational waves travel at the speed of light, inconsistent with these theories.
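A short illustration of why this modification gives flat rotation curves (my own numbers, not from the article): in the deep-MOND regime a ≪ a₀, circular orbits satisfy v⁴ = G M a₀, so the orbital speed becomes independent of radius.

```python
# Deep-MOND asymptotic rotation speed: v^4 = G * M * a0, flat with radius.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10         # Milgrom's acceleration scale, m/s^2
M_SUN = 1.989e30     # solar mass, kg

def mond_flat_speed_kms(baryonic_mass_kg: float) -> float:
    """Asymptotic circular speed in km/s in the deep-MOND regime."""
    return (G * baryonic_mass_kg * A0) ** 0.25 / 1e3

# ~1e11 solar masses of baryons gives ~200 km/s, the right ballpark for
# observed flat rotation curves, with no dark matter invoked.
print(f"{mond_flat_speed_kms(1e11 * M_SUN):.0f} km/s")
```

The fourth-root dependence also reproduces the empirical Tully-Fisher relation between luminosity and rotation speed, which is MOND's main phenomenological success.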
Fig 10: EG and its experimental test: (a) Two forms of long range entanglement connecting bulk excitations that carry the positive dark energy either with the states on the horizon or with each other. (b) In anti-de Sitter space (left) the entanglement entropy obeys a strict area law and all information is stored on the boundary. In de Sitter space (right) the information delocalizes into the bulk volume and creates a memory effect in the dark energy medium by removing the entropy from an inclusion region. (c) The ESD profile predicted by EG for isolated central galaxies, both in the case of the point mass approximation (dark red, solid) and the extended galaxy model (dark blue, solid), compared with observed values. The difference between the predictions of the two models is comparable to the median 1σ uncertainty on the lensing measurements (grey band).
Emergent Gravity (EG) as a Comprehensive Solution: EG is a radical new theory of gravitation developed by Erik Verlinde in 2011 (arXiv:1001.0785), in which he showed from scratch how Newtonian gravitation arises naturally in a theory in which space is emergent through a holographic scenario similar to the one discussed above in the context of black holes. Gravity is explained as an entropic force caused by changes in the information associated with the positions of material bodies, and a relativistic generalization of the arguments leads directly to Einstein's equations. The way in which gravity arises from entropy can most easily be visualized in the context of polymer elasticity: a linear polymer, which thermodynamically wriggles at random into a disordered arrangement, is pulled out straight, resulting in an elastic force tending to return it to a disordered configuration. Space is then an emergent property of the holographic boundary, and gravitation a consequence of entropy following an area law at the boundary surface, as in black hole entropy (Bekenstein J 1973 Black holes and entropy Phys. Rev. D 7, 2333).
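The entropic derivation can be sketched in a few standard lines (following the steps of arXiv:1001.0785): a mass m approaching a holographic screen shifts the screen's entropy, and the entropic force on it, at the Unruh temperature, is exactly Newton's second law; equipartition over the screen's bits then recovers the inverse-square law.

```latex
% Entropic force on a mass m near a holographic screen:
F\,\Delta x = T\,\Delta S,\qquad
\Delta S = 2\pi k_B \frac{mc}{\hbar}\,\Delta x,\qquad
T_{\mathrm{Unruh}} = \frac{\hbar a}{2\pi c k_B}
\;\Rightarrow\; F = m a .

% For a spherical screen of area A = 4\pi R^2 carrying N = A c^3/(G\hbar)
% bits, equipartition E = \tfrac12 N k_B T with E = M c^2 gives
a = \frac{2\pi c k_B T}{\hbar} = \frac{G M}{R^2}
\;\Rightarrow\; F = \frac{G M m}{R^2} .
```

Nothing here presupposes a gravitational field: the inverse-square law appears purely from thermodynamics plus the holographic counting of degrees of freedom.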
In November 2016 Verlinde (arXiv:1611.02269) extended the theory to make predictions that can explain both dark energy and dark matter as manifestations of entanglement under the holographic scenario. The entanglement is long range and connects bulk excitations that carry the positive dark energy either with the states on the horizon or with each other. Both situations lead to a thermal volume law contribution to the entanglement entropy that overtakes the area law at the cosmological horizon. Due to the competition between area and volume law entanglement the microscopic states do not thermalise at subHubble scales, but exhibit memory effects in the form of an entropy displacement caused by (baryonic) matter. The emergent laws of gravity thus contain an additional 'dark' gravitational force describing the 'elastic' response due to the entropy displacement, which in turn explains the observed phenomena in galaxies and clusters currently attributed to dark matter.
A month later, in December 2016 (arXiv:1612.03034), a group of astronomers tested the theory using weak gravitational lensing measurements. As noted, in EG the standard gravitational laws are modified on galactic and larger scales due to the displacement of dark energy by baryonic matter. EG thus gives an estimate of the excess gravity (as an apparent dark matter density) in terms of the baryonic mass distribution and the Hubble parameter. The group measured the apparent average surface mass density profiles of 33,613 isolated central galaxies, compared them to those predicted by EG on the basis of the galaxies' baryonic masses, and found that the prediction from EG, despite requiring no free parameters, is in good agreement with the observed galaxy-galaxy lensing profiles. This suggests that a radical revision of the relationship between gravity and cosmology could be under way, one which would transform current attempts at unifying gravitation and quantum cosmology.
Fig 10b: Reaction profile of the protophobic X boson.
Protophobic X as a Possible Fifth Force: In 2016 a Hungarian team fired protons at thin targets of lithium-7, which created unstable beryllium-8 nuclei that then decayed and spat out pairs of electrons and positrons (arXiv:1504.01527). According to the standard model, physicists should see the number of observed pairs drop as the angle separating the trajectory of the electron and positron increases. But the team reported that at about 140^{o} the number of such emissions jumps, creating a 'bump' when the number of pairs is plotted against the angle, before dropping off again at higher angles, with 6.8σ significance. This suggests that a minute fraction of the unstable beryllium-8 nuclei shed their excess energy in the form of a new particle with a mass of about 17 MeV, which then decays into an electron-positron pair. They were searching for a dark photon candidate, but subsequent papers suggest a "protophobic X boson". The theorists showed that the data didn't conflict with any previous experiments, and concluded that it could be evidence for a fifth fundamental force (arXiv:1604.07411). Such a particle would carry an extremely short-range force that acts over distances only several times the width of an atomic nucleus. And where a dark photon (like a conventional photon) would couple to electrons and protons, the new boson would couple to electrons and neutrons. Experimental resolution of this anomaly should be forthcoming within a year. The DarkLight experiment at the Jefferson Laboratory is designed to search for dark photons with masses of 10-100 MeV by firing electrons at a hydrogen gas target; it will now target the 17 MeV region as a priority, and could either find the proposed particle or set stringent limits on its coupling with normal matter (doi:10.1038/nature.2016.19957). Indeed, a corresponding anomalous transition has now been found in helium-4 (arXiv:1910.10459) with 7.2σ significance.
Extended SM with Dark Matter Inducing Lepton Flavour Violation: A further theory, which is in principle testable at the LHC (Arcadi G et al. 2018 Lepton flavor violation induced by dark matter doi:10.1103/PhysRevD.97.075022), is one in which the standard model is extended by including dark matter leptons, promoting the SU(2) of the SM electroweak flavours to an SU(3) symmetry comprising an electron, an electron neutrino, and a neutral fermion later to be identified as dark matter. This multiplet structure is replicated among the three generations to embed the SM fermionic content. There are therefore three neutral Dirac fermions, the lightest one being a dark matter candidate, which might induce the lepton flavor violation (LFV) decays μ → eγ and μ → eee as well as μ-e conversion. This arrangement may be able to show that one can have a viable dark matter candidate yielding flavor-violation signatures that can be probed in upcoming experiments: keeping the dark matter mass at the TeV scale, a sizable LFV signal is possible, while reproducing the correct dark matter relic density and meeting limits from direct-detection experiments.
Fig 11: Waxing and waning of prospects for a galactic centre gamma-ray source from dark matter. Lower left: Dark matter illuminated: the Bullet Cluster, two colliding galaxy clusters 3.4 billion light-years away, whose visible galaxies have total mass far less than that of the clusters' two clouds of hot X-ray emitting gas (red). The blue hues show the distribution of dark matter in the cluster, with far more mass than the gas. Otherwise invisible, the dark matter was mapped by observations of gravitational lensing of background galaxies. Unlike the gas, the dark matter seems to have passed right through, indicating little interaction with itself or other matter, although observations of galaxy cluster Abell 3827 suggest a possible dark force interaction consistent with complex dark matter. Top and right: False colour view of excess gamma-ray emissions from the centre of our galaxy, suggesting a dark matter particle with mass ranging from around 10 GeV, at possible LHC energies, upwards. The 1-3 GeV signal is a good fit to a 36-51 GeV dark matter particle annihilating to b quark pairs. However, in 2018 a group of astronomers concluded that this radiation comes from 10-billion-year-old stars in the galactic bulge close to the black hole (doi:10.1038/s41550-018-0414-3). The angular distribution of the excess is approximately spherically symmetric and centred on the dynamical centre of the Milky Way (within 0.05^{o} of Sagittarius A*, the central black hole), showing no sign of elongation along the Galactic Plane, which would be expected with a pulsar distribution. The signal is observed to extend to at least 10^{o} from the Galactic Centre, disfavouring the possibility that this emission originates from millisecond pulsars. The shape of the gamma-ray spectrum from millisecond pulsars also appears to be significantly softer than that of the gamma-ray excess observed from the Inner Galaxy (arXiv:1402.6703). Earthbound dark matter detectors have also caught events consistent with this mass range (doi:10.1038/521017a).
However, a survey of wider regions of the Milky Way shows similar peaks, implying this is not sourced only in the galactic centre (arXiv:1704.03910), shedding doubt on the notion of a central source of dark matter decay. The Alpha Magnetic Spectrometer on board the International Space Station has also detected more positrons than expected, which could be the result of dark matter being annihilated, but might also be caused by nearby pulsars. The idea of a disk of dark matter coplanar with the disc of baryonic matter has meanwhile received a blow from a lack of experimental verification in a search of stellar kinematics for a thin disk using the Gaia satellite (arXiv:1711.03103). More recently, estimates of the gamma ray distribution find it is closer to galactic stellar distributions than to that of assumed dark matter (doi:10.1038/s41550-018-0531-z).
Gravitational Waves: Confirmation of the existence of gravitational waves came in 2016 with the detection of the 'chirp' at two widely spaced detectors (fig 12) in the groundbreaking LIGO experiment, which can detect minute gravitational fluctuations as a change in the 4 km laser mirror spacing of less than a ten-thousandth of the charge diameter of a proton, equivalent to measuring the distance to Proxima Centauri with an accuracy smaller than the width of a human hair. This characteristic "chirp" signal is believed to be due to the last throes of two colliding black holes in a death spiral. An alternative explanation to coalescing black holes is a gravastar.
Since then, gravitational waves have also been detected, along with a burst of gamma rays, from a pair of colliding neutron stars (fig 12), demonstrating that both gravity and electromagnetism are transmitted at the speed of light and eliminating theories in which the speed of gravity is modified to be slower or faster than light. These coincident signals have enabled a much more detailed picture of neutron stars to emerge. The collision generated gravitational waves, picked up by LIGO and Virgo, that lasted an astounding 100 seconds; less than two seconds later, a NASA satellite recorded a burst of gamma rays. In the wake of the collision, the churning residue forged gold, silver, platinum and a smattering of other heavy elements such as uranium. Such elements' birthplaces were previously unknown, but their origins were revealed by the cataclysm's afterglow. As the collision spurted neutron-rich material into space, a bevy of heavy elements formed through a chain of reactions called the r-process, which requires an environment crammed with neutrons: atomic nuclei rapidly gobble up neutrons and decay radioactively, thereby transforming into new elements, before resuming their neutron feast. The r-process is thought to produce about half of the elements heavier than iron (Strickland A (2017) First-seen neutron star collision creates light, gravitational waves and gold, CNN; Conover E (2017) Neutron star collision showers the universe with a wealth of discoveries, Science News).
Fig 12: Left: Signal of gravitational waves from LIGO, believed to be from two colliding black holes in a binary system, the "chirp" coming from their increasing orbital frequency as they merge. In October 2017, two neutron stars in a neighbouring galaxy were likewise detected to be colliding from gravitational waves picked up by both LIGO and Virgo, but this time, because they were not black holes and light could escape, there was a coincident burst of gamma radiation. Such collisions are also believed to provide nearly half the heavy elements, such as gold, later swept into planetary systems. Right: Light images of the radiation burst of the colliding neutron stars, coincident with a similar gravitational wave chirp, showing its change in radiation over time.
More than a week later, as those wavelengths faded away, X-rays crescendoed, followed by radio waves. That detailed picture revealed the inner workings of neutron star collisions and the source of brief blasts of high-energy light called short gamma-ray bursts. Researchers also tested the properties of the odd material within neutron stars. The neutron stars' union also gave researchers the opportunity to gauge the universe's expansion rate, by measuring the distance of the collision using gravitational waves and comparing that to how much the wavelength of light from the galaxy was stretched by the expansion. The result falls squarely between the two previous estimates of 67 and 73 km/s per megaparsec. The neutron stars, whose masses were between 1.17 and 1.60 times that of the sun, probably collapsed into a black hole, although LIGO scientists were unable to determine the stars' fate for certain. By studying how the neutron stars spiralled inward, astrophysicists also tested the rigidity of neutron star material for the first time, ruling out ultra-squishy neutron stars. The outer crust of neutron stars has thus been proposed to consist of ultra-dense, ultra-hard mountains, themselves radiating gravitational waves (doi:10.1103/PhysRevLett.102.191102), and the inner crust of ultra-strong structures having the topologies of gnocchi, spaghetti and lasagna (Science News 9/14/18). The relative strengths of the gravitational waves and gamma radiation also suggest there is no leakage of gravity into other dimensions, which had been cited as an explanation for its relative weakness (doi:10.1088/1475-7516/2018/07/048).
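The "standard siren" expansion-rate measurement reduces to a one-line calculation: the gravitational waveform gives the distance directly, the host galaxy's recession velocity gives the redshift, and H₀ is their ratio. The numbers below are my own illustrative values of the right order for such an event, not figures quoted in this article.

```python
# Standard-siren Hubble constant: H0 = recession velocity / distance.
# This simple form ignores peculiar-velocity corrections, which dominate
# the error budget for a single nearby event.

def hubble_constant(recession_v_km_s: float, distance_mpc: float) -> float:
    """H0 in km/s/Mpc from one siren event."""
    return recession_v_km_s / distance_mpc

# ~3000 km/s recession at ~43 Mpc gives ~70 km/s/Mpc, falling between the
# two previous estimates of 67 and 73 mentioned in the text.
print(f"H0 ~ {hubble_constant(3000.0, 43.0):.0f} km/s/Mpc")
```

With many such events, the independent distance scale from gravitational waves could eventually arbitrate the 67-versus-73 tension.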
The confirmation of the existence of gravitational waves has led to a surge of interest in unified theories in which gravitational waves play a formative role. In one theory, primordial chiral gravitational waves are conceived as a generator of both visible baryons and strongly interacting dark baryons, dark matter candidates whose masses would be lower than WIMPs and would not be directly detectable, but whose existence might leave evidence in the cosmic background (arXiv:1801.07255). In a second gravitational wave theory, called bigravity, gravitational waves have two modes (f and g), with g being the usual mode and f a sterile massive non-interactive mode, with oscillations between them in the same manner as neutrinos (arXiv:1703.07785).
Singularities, Cosmic Censorship and Classical Predictivity.
Fig 12b: Many potential futures in a collapsing black hole as the field equations cross the Cauchy horizon (light ring). (Inset) Wormholes, in which distinct regions of spacetime become connected, form another hypothetical alternative to a black hole in which there is no actual event horizon. Alternative entities to black holes, lacking event horizons, include boson stars, possibly consisting of axions, gravastars, fuzzballs and wormholes, which were originally theorized by Einstein and Rosen. Neutron stars could also collapse further to quark stars, strange stars, electroweak stars and Planck stars, which arise when the energy density of a collapsing star reaches the Planck energy density: assuming gravity and spacetime are quantized, there arises a repulsive 'force' derived from Heisenberg's uncertainty principle. The accumulation of mass-energy inside the Planck star cannot collapse beyond this limit because it would violate the uncertainty principle for spacetime.
The idea of black hole gravitational collapse leads to the notion of a singularity in spacetime, which can come in two forms: spacelike, where particles are crushed together, and timelike, in rotating black holes, where light rays pass through a point of infinite curvature. In the 1960s mathematicians found a physical scenario in which Einstein's field equations, which form the core of his theory of general relativity, cease to describe a predictable universe when we deal with the evolution of spacetime inside a rotating black hole. Beyond the event horizon, where light becomes trapped, the field equations still hold predictively, but when one passes a second threshold, the Cauchy horizon, Einstein's equations start to report that many different configurations of spacetime could unfold, all of which satisfy the equations. The theory cannot tell us which option is true. Roger Penrose suggested a principle called cosmic censorship, meaning that spacetime, and with it the laws of motion, cease at the Cauchy horizon, but in 2018 (arXiv:1710.01722) it was discovered that this is not the case; rather, spacetime ceases to be smooth enough to use Einstein's differential equations, so the multiple solutions do not apply. The basis of this is that the singularity is milder than Penrose's: a weak 'lightlike' singularity rather than a strong 'spacelike' singularity. This gives a classical view of the boundary between general relativity and the quantum world discussed in quantum gravity theories.
Gravastars: A gravastar is an object originally hypothesized as an alternative to black holes by Mazur and Mottola, resulting from assuming real, physical limitations on the formation of black holes, such as discrete length and time quanta, which were not known to exist when black holes were originally theorized. Gravastars are how a black hole becomes transformed if we define spacetime as quantized, based on the Planck length. Matter does not collapse inside because quantization makes this impossible, in a manner consistent with dark energy preventing collapse. Instead of an event horizon, we have light in orbit. The notion builds on general relativity imposing a universal "smallest size" known to exist according to well-accepted quantum theory in the form of the Planck length. Quantum theory says that any scale smaller than the Planck length is unobservable and meaningless. This limit can be imposed on the wavelength of a beam of light so as to obtain a limit on the blueshift that the light can undergo. A gravitational well blueshifts incoming light, so around the extremely large mass of a gravastar there is a region of "immeasurability" to the outside universe, as the wavelength of the light crosses the Planck length. This region is a "gravitational vacuum", a void in the fabric of space and time. The researchers suggest that the violent creation of a gravastar might be an explanation for the origin of our universe and many other universes, because all the matter from a collapsing star would implode "through" the central hole and explode into a new dimension and expand forever, which would be consistent with the current theories regarding the Big Bang. This "new dimension" exerts an outward pressure on the Bose-Einstein condensate layer and prevents it from collapsing further.
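The blueshift argument can be made concrete with the standard gravitational redshift factor of general relativity, λ(r) = λ∞ √(1 − rs/r): as r approaches the Schwarzschild radius rs, the locally measured wavelength shrinks without bound, so any finite incoming wavelength eventually crosses the Planck length. A rough sketch (the choice of mass and incoming wavelength are illustrative assumptions):

```python
# Local wavelength of infalling light: lam(r) = lam_inf * sqrt(1 - rs/r).
# Solving lam(r) = l_planck gives the radius where the light becomes
# "immeasurable" in the gravastar picture described above.
G, c = 6.674e-11, 2.998e8            # SI units
l_planck = 1.616e-35                 # Planck length, m
M = 10 * 1.989e30                    # a 10-solar-mass object (assumed)
rs = 2 * G * M / c**2                # Schwarzschild radius, m

lam_inf = 500e-9                     # incoming green light, 500 nm (assumed)
x = (l_planck / lam_inf) ** 2        # ratio is ~1e-57, far below float precision
# Exact solution: r = rs / (1 - x); expand to first order for the shell thickness:
delta = rs * x                       # r - rs, the thickness of the shell
print(f"rs = {rs:.0f} m, shell thickness r - rs = {delta:.2e} m")  # fantastically thin
```

The first-order expansion is needed because 1 − x cannot be represented in floating point; the point of the sketch is just how razor-thin the "immeasurability" shell is compared with the Schwarzschild radius.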
If one half of an entangled pair of particles were to cross the event horizon and disappear into the singularity while the other did not, then this entanglement would be destroyed, and that is forbidden by quantum theory. Since quantum theory is generally considered the more fundamental theory, general relativity cannot provide a true description of gravity close to a black hole, and horizons do not form. Instead, spacetime undergoes a shift in its fundamental properties. Mazur's alternative arises from the fact that a superfluid can exist in a number of "phases". As the star collapses in on itself, the particles within it reach a density that matches the density of the particles that make up the condensate of superfluid spacetime. At this point, the material of the star can interact with the material that makes up spacetime, and the result is that the two materials undergo a phase change. Inside a spherical boundary, where conditions "go critical", the stellar matter is converted to energy, and the superfluid changes its phase, just like water turning to steam. According to Mazur and Chapline's calculations, the energy associated with this phase of the superfluid spacetime has a negative pressure, which manifests as repulsive gravity.
The internal pressure might also become manifest in a big bang origin of a daughter universe (Cardoso V et al. 2016 Is the Gravitational-Wave Ringdown a Probe of the Event Horizon? Phys Rev Lett 116, 171101). Subsequent investigations of echoes of the chirp following the main pulse of the colliding black hole event originally detected by LIGO have been found to be consistent with a quantum-mechanical boundary, such as a firewall, a quantum-mechanically high-energy interface at the event horizon, or a gravastar. Some versions of string theory also suggest that black holes are 'fuzzballs': tangled threads of energy with a fuzzy surface, in place of a sharply defined event horizon (Merali Z 2016 doi:10.1038/nature.2016.21135, Cardoso V et al. 2016 Gravitational-wave signatures of exotic compact objects and of quantum corrections at the horizon scale Phys. Rev. D 94, 084031. doi:10.1103/PhysRevD.94.084031). Any of these quantum theories would contradict the universality of general relativity, but the lack of such structures would likewise contradict quantum theory, so this is a potential acid test of their relationship.
Bose-Einstein cosmological simulations involving ultracold atoms are also being used to explore cosmological questions. A Bose-Einstein condensate is formed when integer-spin atoms, which behave as bosons and thus can superimpose in the same manner as photons do in a laser, are brought together in a single superimposed quantum state.
Fig 12c: Bose-Einstein condensate mimicking the early universe.
By rapidly increasing the size of a ring-shaped cloud of atoms (doi:10.1038/d41586-018-04972-x), experimenters induced behaviour in the system that mimicked how light waves were stretched and damped as space expanded in the early Universe. Sound waves (phonons) travelling through a Bose-Einstein condensate obey the same equations that describe how light would have moved through empty space at the dawn of the Universe. The wavelength of the waves increases as the ring grows, mimicking a redshift, in which the expansion of space gradually stretches light. The intensity of the waves decreases during expansion, mirroring Hubble friction, which describes how the amplitude of light waves fell as they lost energy to the expanding space. Finally, they observed preheating at the end of inflation, when energy involved in the initial rapid expansion dissipated to create the range of particles we see today. In the ultracooled atoms, when expansion stopped, the waves sloshed back and forth before dissipating through a series of whirlpools into waves that travelled around the ring.
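The analogy rests on two scalings: the phonon wavelength stretches in proportion to the ring radius (the analogue redshift), and the wave amplitude decays as the ring expands (the analogue Hubble friction). A toy sketch of those two scalings (the numbers and the simple 1/R damping law are illustrative assumptions, not fitted to the experiment):

```python
# Toy model of the expanding-ring BEC analogy:
# phonon wavelength stretches with the ring radius (analogue redshift),
# amplitude decays as the ring grows (analogue Hubble friction).
R0, R1 = 1.0, 4.0           # ring radius before/after expansion (arbitrary units)
lam0 = 0.2                  # initial phonon wavelength (assumed)
A0 = 1.0                    # initial wave amplitude (assumed)

lam1 = lam0 * (R1 / R0)     # wavelength scales with radius: analogue redshift
z = lam1 / lam0 - 1         # analogue redshift parameter
A1 = A0 * (R0 / R1)         # illustrative 1/R damping standing in for Hubble friction
print(f"analogue redshift z = {z:.1f}, amplitude reduced to {A1:.2f}")
```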
In a second experiment (doi:10.1038/536258a), simulated Hawking radiation was observed. Steinhauer created an event horizon by accelerating atoms in a Bose-Einstein condensate until some were travelling at more than 1 mm/s, a supersonic speed for the condensate. At its ultracold temperature, the condensate undergoes only weak quantum fluctuations that are similar to those in the vacuum of space. These should produce pairs of phonons, just as the vacuum produces photons, and at the horizon the partners should separate from each other, with one partner on the supersonic side of the horizon and the other forming Hawking radiation. On one side of his acoustical event horizon, where the atoms move at supersonic speeds, phonons became trapped. And when Steinhauer took pictures of the condensate, he found correlations between the densities of atoms that were an equal distance from the event horizon but on opposite sides. This demonstrates that pairs of phonons were entangled, a sign that they originated spontaneously from the same quantum fluctuation, he says, and that the condensate was producing Hawking radiation.
Engendering Nature: Cosmic Symmetry-Breaking, Inflation and Grand Unification
At the core of the cosmic inflation concept is cosmological symmetry-breaking, in which the fundamental forces of nature, which make up the matter and radiation we relate to in the everyday world, gained the very different properties they have today from a single superforce. There are four quite different forces. The first two are well known: electromagnetism and gravity, both long-range forces we can witness as we look out at distant galaxies. The others are two short-range nuclear forces. The colour force holds together the three quarks in any neutron or proton and indirectly binds the nucleus together through the strong force, generating the energy of stars and atom bombs. The weak radioactive force is responsible for balancing the protons and neutrons in the nucleus by interconverting the flavours of quarks and leptons (p 311).
There is a fundamental 'sexual' division among the wave-particles based on their quantum spin. All particles come in one of two types:
Fermions, of half-integral spin, can only clump in complementary pairs in a single wave function and thus, being incompressible, make up matter.
Bosons, of integral spin, can become coherent and can all enter the same wave function in unlimited numbers, as in a laser. They hence form radiation and, as virtual particles appearing and disappearing through quantum uncertainty, the force fields which act between the particles.
Fig 13: Scalar and vector fields illustrate the classical behaviour of potential functions, electrostatic fields and fluid flows. A scalar is a single quantity, whereas a vector field in 3 dimensions has 3-dimensional vectors. Quantum fields likewise can have differing dimensions depending on their spin. Spin-0 fields have one degree of freedom and are scalar. Spin-1 fields have three degrees of freedom and are vectors. Photons, because they are massless, have lost the longitudinal mode and have only two degrees of freedom (polarisation). The one additional degree of freedom contributed by the Higgs boson gives back to the weak bosons the degree of freedom they need to be massive and have a varying velocity. Spin-1/2 fermions have two-component wave functions which turn into their negatives upon a 360 degree revolution, leading to the Pauli exclusion principle.
Spin-1 bosons, such as the photon, behave like 3D vector fields and form the well-known fields of electricity and magnetism. Electric charge is essentially the capacity to emit and absorb virtual photons and comes in +/− attractive-repulsive forms. The photon's longitudinal field, however, is lost because it is massless, leaving only the two transverse fields defined by the polarization. Combining with a Higgs to form a heavy photon such as the Z_{0} particle adds this missing field.
Fig 14: Symmetry and local symmetries are believed to underlie the fundamental forces. Top left: The 60 degree rotational geometric symmetry of a snowflake, the charge symmetry of electromagnetism and the isotopic spin symmetry between a neutron and a proton illustrate symmetries in nature. Right: The electromagnetic force can be conceived of as an effect required to make the global symmetry of phase change local. A global phase shift does not alter the two-slit interference of electron waves (which usually have one light band in the centre), but a phase filter which locally shifts the phase through one slit has precisely the same effect as applying a magnetic field between the slits. The local phase shift causes the centre peak to become split in both cases. Gravitation can likewise be conceived as a symmetry of the Lorentz transformations of relativity, usually referred to as Poincaré invariance.
Spin-1/2 fermions behave very differently. They have fields with only two degrees of freedom and, unlike the photon, whose wave function becomes itself when turned through 360^{o}, when the fermionic fields are rotated by 360^{o} their wave function becomes its negative. Hence two particles in the same wave function, such as electrons in an atomic or molecular orbital, have to have opposite spins to remain attracted, or they will fly apart. Hence fermions resist compression and form matter. Gravity behaves as stress tensors in spacetime, and it is universally attractive, so its quantum fields behave as spin-2 gravitons.
We thus have another fundamental sexual complementarity manifesting as the relationship between matter and radiation. The half-integral spin of electrons was first discovered in the splitting of the spectral lines of electrons in atomic orbitals into pairs whose spin angular momentum corresponded to ±1/2 rather than the 0, 1, 2 etc. of atomic s, p, d and f orbitals (p 318). As spin states have to differ by a multiple of Planck's constant h, a particle of spin s has 2s+1 components. A glance at the known wave-particles (p 311) indicates that the bosons and fermions we know are very different from one another in their properties and patterns of arrangement. There is no obvious way to pair off the known bosons and fermions; however, there are reasons why there may be a hidden underlying symmetry which pairs each boson with a fermion of one-half less spin, called supersymmetry, because in supersymmetric theories the infinities that plague quantum field theories cancel and vanish, the negative contributions of the fermions exactly balancing the positive contributions of the bosons. This would mean that there must be undiscovered particles. For example, corresponding to the spin-2 graviton would be a spin-3/2 gravitino, a spin-1 graviphoton, a spin-1/2 gravifermion and a spin-0 graviscalar.
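The 2s+1 counting is easy to tabulate. A small sketch listing the multiplets for the spins mentioned above (the particle names follow the supersymmetric tower just described):

```python
from fractions import Fraction

# A spin-s particle has 2s + 1 spin components (states differing by one
# unit of hbar of angular momentum along a chosen axis).
def components(s):
    return int(2 * Fraction(s) + 1)

for name, s in [("graviscalar", "0"), ("gravifermion", "1/2"),
                ("graviphoton", "1"), ("gravitino", "3/2"), ("graviton", "2")]:
    print(f"spin {s}: {components(s)} components ({name})")
```

Using exact fractions keeps the half-integral spins from picking up rounding errors.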
Fig 15: (a) The standard model of the four fundamental forces is based on the combined SU(3) x SU(2) x U(1) symmetries of the RGB color force, the +/− electroweak force (charge and flavor) and weak hypercharge. The wave-particles are divided into two disparate groups: bosons and fermions. The fermions, which make matter, are divided between quarks, which experience all the forces including colour, and leptons, which experience only the electroweak forces and gravity. The bosons, which mediate the forces, have integer spin and freely superimpose, as in lasers, and hence also make radiation. Half-integer spin fermions only superimpose in pairs of opposite spin and hence resist compression into one space, thus making solid matter. Each quark comes in three colours (RGB) and pairs of flavours (e.g. up and down with charges +2/3 and −1/3), with antiquarks having anticolors (CMY) and antiflavors with opposite charges. Quarks associate (a) in pairs to form mesons (e.g. π^{+} = ud̄ in any of the three colour-anticolour combinations, π^{0} = uū or dd̄, π^{−} = dū, with higher mass mesons involving the heavier quarks), (b) in triplets to form baryons (e.g. p^{+} = uud, n = udd), (c) transient tetraquarks (e.g. udsb) and (d) transient pentaquarks (uudcc̄). The fermions also come in three series of increasing mass. The gluons have a color-anticolor charge. (b) The forces converge at high energies. Electromagnetism is first united with the weak force, ostensibly through the spin-0 Higgs boson, then with the colour force gluons and finally with gravity. (c) Force differentiation tree, in which the four forces differentiate from a single superforce, with gravity displaying a more fundamental divergence. (d) The scalar Higgs field has lowest energy in the polarized state, resulting in electroweak symmetry-breaking. The SU(2) x U(1) symmetry corresponds to three W bosons and one B boson, all massless. Under symmetry-breaking the last W and the B coalesce into Z_{0} and γ, leaving W^{±}.
(e) The stable atomic nuclei with their increasing preponderance of neutrons are equilibrated by the weak force. This force is chiral, engaging left-handed interactions, for example in neutron decay, as shown. Weak interactions may explain the chirality of RNA and proteins (King R372, R374). Right: Electron-positron creation, illustrating the trajectories of identical-mass but oppositely charged particles in a magnetic field.
Every physicist knows the approximate value (α = e^{2}/ħc ≈ 1/137) of a fundamental constant called the fine-structure constant. This constant describes the strength of the electromagnetic force between elementary particles in the standard model of particle physics and is therefore central to the foundations of physics. For example, the binding energy of a hydrogen atom, the energy required to break apart the atom's electron and proton, is about α^{2}/2 times the energy associated with an electron's mass. Likewise the terms in Feynman diagrams of quantum electrodynamics decline by factors of α. Moreover, the magnetic moment of an electron is subtly larger than that expected for a charged, point-like particle, by a factor of roughly 1 + α/(2π), caused by the emission and reabsorption of virtual photons. This 'anomaly' of the magnetic moment has been verified to ever-increasing accuracy, becoming "the standard model's greatest triumph" (Müller H 2020). However, a hint of further particles in the virtual milieu comes from measurements of the heavier muon, where the magnetic moment anomaly (muon g−2) may indicate further virtual particles increasing the moment, including a possibly composite Higgs consisting of varying arrangements of subparticles, or a supersymmetric Higgs quartet (Castelvecchi D 2021 doi:10.1038/d41586-021-00833-2).
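Both numbers quoted above follow directly from α. A quick check of the hydrogen binding energy (α²/2 times the electron rest energy) and the leading magnetic-moment correction α/2π:

```python
import math

alpha = 1 / 137.035999    # fine-structure constant
me_c2 = 0.510999e6        # electron rest energy in eV

binding = (alpha ** 2 / 2) * me_c2      # hydrogen ground-state binding energy
anomaly = alpha / (2 * math.pi)         # leading term of the electron g-2 anomaly

print(f"hydrogen binding = {binding:.1f} eV")       # the familiar 13.6 eV
print(f"magnetic-moment anomaly = {anomaly:.6f}")   # roughly 0.00116
```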
The four fundamental forces appear to converge at very great energies and to have been in a state of symmetry at the cosmic origin as a common superforce. A key process mediating the differentiation of the fundamental forces is cosmic symmetry-breaking. The short-range weak force behaves in many ways as if it is the same as electromagnetism, except that the charged W^{+}, W^{−} and neutral Z_{0} carrier particles corresponding to the electromagnetic photon are very massive. One can of course consider this division of a common superforce into distinct complementary forces as a kind of sexual division, just as the division into male and female is a primary division. In this respect gravity stands apart from the other three forces, which share a common medium of spin-1 bosons; it broke symmetry first.
Fig 16: Colour force: Top left: Mesons (quark-antiquark) and baryons (three quarks) mediate their color by exchanging gluons of appropriate colour-anticolour combinations. Top centre: The electromagnetic field reduces effective charge by forming virtual electron-antielectron pairs. Top right: The colour force also does this by forming quark-antiquark pairs, but in addition the gluons have a colour charge (unlike the uncharged photon), which increases the effective charge towards infinity at great distances while remaining relaxed at short distances (asymptotic freedom), allowing the quarks to move freely within a confined space. This phenomenon, which is also known as camouflage, is also illustrated in the lower series of diagrams, where electromagnetism has only shielding while colour has shielding and camouflage. The effect of quark and gluon confinement is that individual particles cannot be isolated. When they are driven apart in a very energetic collision, a shower of particles results which eventually neutralizes the colour charge. In another form of asymptotic freedom, quantum electrodynamics and in particular electric charge (the gauge coupling constant) diminishes at very high energies in the presence of gravity, with the same trends expected for the weak and color forces (doi:10.1038/nature09506). Right: Internal dynamics of the proton, in which the preponderance of up quarks is compensated for by an increased incidence of down antiquarks by a factor of 1.4. This may be explained by the virtual emission and reabsorption of a meson, leading to a transient state of the proton being a neutron (Quanta 2021).
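Asymptotic freedom is usually quantified by the running of the strong coupling with energy. A sketch of the standard one-loop formula α_s(Q) = 12π / ((33 − 2n_f) ln(Q²/Λ²)), where the coupling weakens logarithmically at high energy and blows up near the confinement scale Λ (the Λ value and fixed flavour count below are illustrative assumptions; a careful calculation changes n_f across quark-mass thresholds):

```python
import math

# One-loop QCD running coupling: alpha_s falls logarithmically at high
# energy (asymptotic freedom) and grows as Q approaches Lambda (confinement).
def alpha_s(Q_gev, n_f=5, lam_gev=0.21):   # Lambda_QCD value is an assumption
    return 12 * math.pi / ((33 - 2 * n_f) * math.log(Q_gev**2 / lam_gev**2))

for Q in [1.0, 10.0, 91.2, 1000.0]:        # 91.2 GeV is the Z boson mass scale
    print(f"alpha_s({Q:g} GeV) = {alpha_s(Q):.3f}")
```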
Every proton and neutron is itself believed to consist of three subparticles called quarks, as follows: n = udd, p^{+} = uud. Neutron decay is thus actually the transformation of a down quark into an up (see figs 15, 16). The three quarks are bound together by a force called the colour force, because each quark comes in one of three colours, just as electric charges come in two types, positive and negative. Each neutron has one up and two down quarks and each proton two up and one down. To balance the charges, each up must have charge +2/3 and each down −1/3. However, regardless of their up or down flavour, there is always one of each colour, so that the proton and neutron are colourless.
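The charge bookkeeping can be verified in a couple of lines, using exact fractions to avoid rounding:

```python
from fractions import Fraction

up, down = Fraction(2, 3), Fraction(-1, 3)   # quark electric charges

proton = 2 * up + down     # uud
neutron = up + 2 * down    # udd
print(proton, neutron)     # charge +1 and 0, as required
```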
Recent investigations of the weak charge of the proton, by firing electrons of both spins at protons suspended in liquid hydrogen, which separates out the chiral effect of the weak force affecting only one of the spins, show that the proton's weak charge of 0.0719 ± 0.0045 is in excellent agreement with the standard model. This sets multi-teraelectronvolt-scale constraints on any semileptonic parity-violating physics not described within the standard model, and rules out leptoquark masses below 2.3 TeV (doi:10.1038/s41586-018-0096-0), illustrating how a low energy experiment can indirectly provide evidence of physics at higher energies than current particle accelerators.
lepton              mass        symbol  charge  |  quark    mass      symbol  charge
electron neutrino   < 16 eV     νe      0       |  up       2.3 MeV   u u u   +2/3
electron            0.5 MeV     e       −1      |  down     4.8 MeV   d d d   −1/3
muon neutrino       < 65 eV     νμ      0       |  charm    1275 MeV  c c c   +2/3
muon                106.6 MeV   μ       −1      |  strange  95 MeV    s s s   −1/3
tau neutrino        < 65 eV     ντ      0       |  truth    173 GeV   t t t   +2/3
tau                 1784 MeV    τ       −1      |  beauty   4180 MeV  b b b   −1/3
(The tripled quark symbols indicate that each quark comes in three colours.)
A second, quite different force, the weak nuclear force, is responsible for radioactive decay. If a nucleus has too many neutrons, one neutron can decay into a proton, an electron and an antineutrino (fig 15e). This reaction and its reverse act to keep the balance of protons and neutrons, which is roughly 50:50, keeping each nuclear particle in the lowest possible energy states under the strong force, but becomes biased toward neutrons in heavier elements (fig 15e) because of instability caused by the accumulated repulsive positive charges of the protons. Significantly, the reaction does not preserve mirror symmetry, as it gives rise only to left-handed electrons, the antineutrino involved in beta decay having right-handed helicity.
Fig 17: CP Violation: (a) Decay of the K_{0} meson is a parallel to photon polarization. The K_{1} component (see below), by decaying, is similar to vertical polarization removing the horizontal component from circularly polarized light. However, there is a small amplitude for the K_{2} to resonate back into the K_{1} form, just as dextrose rotates the polarization of light, allowing it to subsequently decay again, similarly to detecting horizontal polarization in the rotated light. Lower left: Feynman diagram for quark flavour mixing. Just as classical chirality requires three dimensions, CP-violation of the K_{0} requires at least three families of fermions. Investigations of the B meson containing a b (beauty) quark indicate flavour mixing, suggesting a fourth family is possible. There can be no more than four, or the extra neutrino types would cause an unrealistic expansion rate of the universe. (b) T-reversal violation experiment on the B meson. When one meson decays at time t1, the identity of the other is "tagged" but not measured specifically. In the top panel, the tagged meson is a B̄^{0}. This surviving meson decays later at t2, encapsulating a time-ordered event, which in this case corresponds to B̄^{0} → B^{−}. To study time reversal, the BaBar collaboration compared the rates of decay in one set of events to the rates in the time-reversed pair. In the present case, these would be the B^{−} → B̄^{0} events, shown in the bottom panel (Zeller 2012). (c) Interactions in the decay of the Λ^{0}_{b} baryon.
The weak force is known to be chiral, but the asymmetry of nature runs even deeper. In 1964 the principle of CP (charge-parity) conservation was overthrown by the neutral K_{0} meson. CP violation is accommodated in the standard model (SM) of particle physics by the Cabibbo–Kobayashi–Maskawa (CKM) mechanism that describes the transitions between up- and down-type quarks, in which quark decays proceed by the emission of a virtual W boson and where the phases of the couplings change sign between quarks and antiquarks. The neutral K_{0} usually decays into three π-mesons, but once in 500 times is found, after a strange delay, to decay into only two. The neutral K_{0} meson and its antiparticle both decay into a pair of mesons. The rapid decay of the K_{1} component into π-mesons subsequently leaves the remaining K_{2} component, which does not follow the same decay. However, there is subsequently a small amplitude for conversion of some of the K_{2} back to K_{1}, resulting in a K_{L} which is not matter-antimatter symmetric, since it contains differing components of K_{0} and its antiparticle. Thus the reaction is preferred over its mirror image. Since the K_{0} has quark constituents (d, anti-s) and its antiparticle (anti-d, s), this implies that the reaction should be directed in time. Similar considerations are used to explain the preponderance of matter over antimatter. It is suggested that the one part in 10^{8} of matter to radiation could have come from a similar process resulting in a slight differential in the stability of matter and antimatter with respect to time. Potential confirmation of baryonic CP violation, which would be pivotal for matter-antimatter asymmetry, has also been found in LHC studies of Λ^{0}_{b} baryons decaying to pπ^{−}π^{+}π^{−} and pπ^{−}K^{+}K^{−} final states, with the former having a 3.3 sigma significance of around 1 in 1000 of being due to chance alone (Nature Physics 2017 doi:10.1038/nphys4021).
An even more glaring symmetry violation has been discovered in the B meson, which indicates a direct violation of time reversal, as shown in fig 17(b). T-violation can be inferred from CP violation by applying the CPT theorem, which states that all local Lorentz invariant quantum field theories are invariant under the simultaneous operation of charge conjugation, parity reversal, and time reversal, but in the B-meson experiment (Lees et al. 2012), T-violation was detected directly. The experiment takes advantage of entangled B^{0} and B̄^{0} mesons in the Υ(4s) resonance produced in positron-electron collisions at SLAC. This allows measurement of an asymmetry that can only come about through a T inversion, and not by a CP transformation. Each of the entangled B^{0} and B̄^{0} mesons resulting from the Υ(4s) can decay into either a CP eigenstate, or a state that identifies the flavour of the meson. To study T inversion, the experimenters selected events where one meson decayed into a flavour state and the other decayed into a CP eigenstate. The time between these two decays was measured, and the rate of decay of the second with respect to the first was determined. After detecting and identifying the mesons, the experimenters determined the proper time difference between the decay of the two B states by determining the energy of each meson and measuring the separation of the two meson decay vertices along the e^{+}-e^{−} beam axis. When time-reversed pairs were compared, the BaBar collaboration found discrepancies in the decay rates. The asymmetry, which could only come from a T transformation and not a CP violation, was significant, being fourteen standard deviations away from time invariance (Zeller 2012).
CP-violation has also been detected at the LHC in the D_{0} meson, which consists of a charm and anti-up quark pair. The D_{0} and the anti-D_{0} do not decay at the same rate: the ratios of decay differed by a tenth of a percent (https://cds.cern.ch/record/2668357/files/LHCb-PAPER-2019-006.pdf).
Although all three of these types of CP violation are too tiny to account for our matter-dominated universe, scientists are holding out hope of finding much larger matter-antimatter differences elsewhere, such as in neutrinos or reactions involving the Higgs boson.
This has since led to a theory of the arrow of time based on the existence of T-violating quantum interactions. Despite the Lorentz transformations of special relativity connecting space and time, in conventional quantum theory states are presumed to undergo continuous translation over time. There is thus a fundamental difference between space and time, in that quanta can be confined in space but have to persist in time to avoid non-conservation of mass-energy. In the theory, separate wave equations are established which would allow quanta to be located in time in the way they are in space. In the words of the researcher Joan Vaccaro (2016): "If T symmetry is obeyed, then the formalism treats time and space symmetrically such that states of matter are localized both in space and in time. In this case, equations of motion and conservation laws are undefined or inapplicable. However, if T symmetry is violated, then the same sum over paths formalism yields states that are localized in space and distributed without bound over time, creating an asymmetry between time and space. Moreover, the states satisfy an equation of motion (the Schrödinger equation) and conservation laws apply. The Schrödinger equation of conventional quantum mechanics, where time is reduced to a classical parameter, emerges as a result of coarse graining over time".
Fig 18: (a) Oscillation of an electron neutrino into the other two known types, muon and tauon, over distance. The key to understanding the oscillation phenomenon is that electron neutrinos do not have a definite mass: they are a superposition of the three neutrino mass eigenstates. The neutrino mass matrix is not diagonal in the flavor (electron-muon-tau) basis. Therefore the wave equation that describes how they move through space will mix them up, and hence they 'oscillate'. (b) An experiment using neutrino oscillations to verify quantum 'entanglement', in terms of the violation of the constraints imposed by local causality, over the longest distance ever, 735 km, using the Leggett-Garg inequality, a variant of Bell's inequalities that works over distinct times, or energies in the case of this neutrino experiment. Quantum and classical theoretical predictions are in blue and red and the experimental result is in black (arXiv:1602.00041). Instead of a single evolving system at different times, we can use an ensemble with 'stationarity', the correlations depending only on the time differences. One can then perform measurements on distinct members of an identically prepared ensemble, each of which begins in some known initial state. The combination of the prepared and stationarity conditions acts as a substitute for non-invasive (weak) quantum measurements, because wave function collapse and classical disturbance in a given system do not influence previous or subsequent measurements on distinct members of the ensemble. Energy can be used as a proxy for time because the energy of a neutrino determines the unitary time evolution. (c) Results from T2K and NOvA experiments suggest the rate of oscillation may differ between neutrinos and antineutrinos, resulting in a symmetry violation. Heavy neutrinos and their antiparticles could also have had different decay profiles, potentially explaining the preponderance of matter over antimatter in the universe.
Although the standard model gives zero rest mass for the neutrino, neutrinos are now known to have a small mass (currently less than 0.8 eV, from the KATRIN experiment), making them focal candidates for exploring beyond the standard model. This is consistent with the idea that the neutrino types are able to interconvert by a resonance, or oscillation, similar to that of the K_{0} meson. This explains the small observed flux of electron neutrinos from the sun, which is only about 1/3 of what it should be for the nuclear energy production required to keep it at its current luminosity.
Weak interactions create neutrinos in one of three leptonic flavours: electron neutrinos (ν_e), muon neutrinos (ν_μ), or tau neutrinos (ν_τ), in association with the corresponding electron, muon, and tau charged leptons respectively. It is now known that there are three discrete neutrino masses, and each neutrino flavour state is a linear combination of the three discrete mass eigenstates. From cosmological measurements, it has been calculated that the sum of the three neutrino masses must be less than one millionth that of the electron. A neutrino created in a specific flavour eigenstate is in an associated specific quantum superposition of all three mass eigenstates. Researchers (doi:10.1103/PhysRevLett.123.081301) combined data from 1.1 million galaxies in the Baryon Oscillation Spectroscopic Survey (BOSS), used to measure the rate of expansion of the Universe, with constraints from particle accelerator experiments and nuclear reactors, and from a variety of other sources including space- and ground-based telescopes observing the first light of the Universe (the CMB radiation), exploding stars, and the largest 3D map of galaxies in the Universe. They found the maximum possible mass of the lightest neutrino to be 0.086 eV, and that the three neutrino flavours together have an upper bound of 0.26 eV, compared with the electron's mass of 0.511 MeV.
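The flavour oscillation that follows from this superposition of mass eigenstates can be sketched numerically. Below is a minimal two-flavour approximation (the full three-flavour treatment uses all three mixing angles); the parameter values are illustrative, chosen near the measured "atmospheric" mixing parameters, not taken from the text:

```python
import math

def survival_probability(L_km, E_GeV, sin2_2theta=0.85, dm2_eV2=2.5e-3):
    """Muon-neutrino survival probability in the two-flavour approximation.

    P(nu_mu -> nu_mu) = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
    with L in km, E in GeV and dm2 in eV^2 (the factor 1.27 absorbs
    hbar and c in these units).
    """
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# At the source nothing has oscillated yet:
p0 = survival_probability(L_km=0.0, E_GeV=1.0)

# Over a 735 km baseline (the distance quoted for the Leggett-Garg
# neutrino test above), a 2 GeV beam has already lost a large
# fraction of its original muon flavour:
p735 = survival_probability(L_km=735.0, E_GeV=2.0)
```

Because the oscillation phase depends on L/E, scanning the beam energy at a fixed baseline traces out the same interference pattern as varying the distance, which is why energy can serve as a proxy for propagation time.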
In the early universe there was a sea of protons and neutrons constantly interacting with electrons, neutrinos of every type and their antiparticles through weak interactions. Because neutrons (939.5 MeV) are slightly more massive than protons (938.2 MeV), there are fewer of them. As the expansion dilutes these particles, the weak interactions cease, leaving about a 1:5 n:p ratio at 1 second. The neutrons then begin to decay with a half-life of about 15 minutes (see fig 15). After 3 minutes, deuterium (n + p^{+} + e^{-}) becomes stable and is rapidly converted to helium. By this point neutron decay has reduced the n:p ratio to 1:8. The surviving neutrons bind an equal number of protons into helium, leaving roughly a 1:4 ratio of helium to hydrogen by mass. More families of neutrinos than four would cause a faster expansion rate, and the faster expansion would produce more helium than observed. Experiments on supernovae limit the electron neutrino mass to less than 16 eV. All neutrinos must have a mass less than 65 eV or the universe would be closed and collapse, and moreover the expansion rate would be slower than observed. Recent evidence from the Planck survey indicates that the summed masses of the three neutrinos must be less than 0.21 eV. There are almost as many neutrinos as photons, several billion for every proton, electron and neutron.
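The helium yield follows from simple bookkeeping: essentially every surviving neutron ends up bound in helium-4 (two neutrons and two protons per nucleus). A minimal sketch, using the commonly quoted freeze-out value n:p ≈ 1:7 at the time of nucleosynthesis rather than any figure from the text:

```python
def helium_mass_fraction(n_over_p):
    """Primordial helium-4 mass fraction from the neutron-to-proton ratio.

    Each He-4 binds 2 neutrons + 2 protons, so if all neutrons are
    captured, the mass fraction is Y = 2(n/p) / (1 + n/p).
    """
    return 2.0 * n_over_p / (1.0 + n_over_p)

# With n:p ~ 1:7, roughly a quarter of all baryonic mass becomes helium,
# matching the observed ~25% primordial helium abundance:
Y = helium_mass_fraction(1.0 / 7.0)
```

The sensitivity of Y to the n:p ratio at freeze-out is why the observed helium abundance constrains both the number of neutrino families and the expansion rate.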
It is also unknown whether neutrinos are their own antiparticles and are thus Majorana fermions, which would have real wave functions and behave differently from other (Dirac) fermions, which have distinct antiparticles. The concept goes back to Majorana's suggestion in 1937 that neutral spin-1/2 particles can be described by a real wave equation, in contrast to the complex wave functions of known Dirac fermions such as the electron. Majorana fermions would therefore be identical to their antiparticles (because the wave functions of particle and antiparticle are related by complex conjugation).
The only CP violation observed so far is in the weak interactions of quarks, and it is too small to explain the matter-antimatter imbalance of the universe. It has been shown that CP violation in the lepton sector could instead generate the matter-antimatter disparity through leptogenesis. The T2K (Tokai to Kamioka, see t2k-experiment.org) and NOvA experiments suggest that there is a difference in the oscillation rates of muon neutrinos and antineutrinos. To test for CP violation in neutrinos, T2K researchers sent beams composed of muon neutrinos or muon antineutrinos on a 300 km trek across Japan and counted how often the particles converted into electron neutrinos or electron antineutrinos. In 2016, 32 muon neutrinos changed to electron neutrinos on their way to Super-K, while of the muon antineutrinos sent, only four became electron antineutrinos. Collected over nearly a decade, the data suggest that neutrinos oscillated more than expected, while antineutrinos oscillated less than expected, a sign of CP violation. In the 2017 T2K results, more electron neutrinos were produced than expected (89 rather than 67, a ratio of 1.33) and fewer electron antineutrinos than expected (7 rather than 9, a ratio of 0.78). A preliminary analysis of T2K's data rejects the hypothesis that neutrinos and antineutrinos oscillate with the same probability at the 95% confidence (2σ) level. This could help explain the preponderance of matter over antimatter in the universe, which the kaon and B meson CP violations are insufficient to account for.
For the first time, the researchers are beginning to narrow down the potential values of the complex phase δ_CP, constraining its range at the 99.7% confidence (3σ) level. The results show an indication of CP violation in the lepton sector. Future measurements with larger data samples will determine whether the leptonic CP violation is larger than the quark sector CP violation (K. Abe et al., Constraint on the matter-antimatter symmetry-violating phase in neutrino oscillations, arXiv:1910.03887; Nature doi:10.1038/s41586-020-2177-0).
The three neutrino flavour states that interact with the charged leptons in weak interactions are each a different superposition of the three mass eigenstates, each of definite mass. Neutrinos are created in weak processes in one of the three flavours. As a neutrino propagates through space, the quantum mechanical phases of the three mass states advance at slightly different rates due to the slight differences in the neutrino masses. This results in a changing mixture of mass states as the neutrino travels, but a different mixture of mass states corresponds to a different mixture of flavour states. So a neutrino born as, say, an electron neutrino will be some mixture of electron, mu, and tau neutrino after travelling some distance. This shape-shifting ability is measured by three mixing angles, θ_{12}, θ_{23} and θ_{13}, which determine the periodicity of each shape shift.
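The relationship between the flavour states and the mass eigenstates described above is conventionally written with the PMNS mixing matrix. A sketch in the standard parameterization (writing s_{ij} = sin θ_{ij}, c_{ij} = cos θ_{ij}, and omitting the possible additional Majorana phases), which also carries the CP-violating phase δ_CP discussed in connection with T2K:

```latex
\nu_\alpha = \sum_{i=1}^{3} U_{\alpha i}\,\nu_i ,
\qquad \alpha = e, \mu, \tau ,
\qquad
U =
\begin{pmatrix}
 c_{12} c_{13} & s_{12} c_{13} & s_{13} e^{-i\delta_{CP}} \\
 -s_{12} c_{23} - c_{12} s_{23} s_{13} e^{i\delta_{CP}} &
  c_{12} c_{23} - s_{12} s_{23} s_{13} e^{i\delta_{CP}} & s_{23} c_{13} \\
 s_{12} s_{23} - c_{12} c_{23} s_{13} e^{i\delta_{CP}} &
 -c_{12} s_{23} - s_{12} c_{23} s_{13} e^{i\delta_{CP}} & c_{23} c_{13}
\end{pmatrix}
```

If δ_CP differs from 0 or π, the matrix is genuinely complex and neutrino and antineutrino oscillation probabilities differ, which is exactly what the T2K and NOvA comparisons test.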
In the Standard Model of particle physics, fermions have mass only because of interactions with the Higgs field. These interactions involve both left- and right-handed versions of the fermion. However, only left-handed neutrinos (with left helicities, spins antiparallel to momenta) have been observed so far, with antineutrinos being right-handed. Attempts to verify whether neutrinos are Majorana particles depend on investigating the existence of neutrinoless double beta decay (right), where a nucleus, e.g. of germanium, simultaneously converts two neutrons to protons, releasing a pair of electrons carrying the nuclear energy difference but no neutrinos, because what would have been a second emitted neutrino is instead absorbed as an antineutrino at the other vertex. Limits on the half-life constraining the rate of this process (e.g. from GERDA) are now over 2x10^{25} years.
Neutrinos may have another source of mass through the Majorana mass term. The seesaw mechanism involves a model in which right-handed neutrinos with very large Majorana masses are included. If the right-handed neutrinos are very heavy, they induce a very small mass for the left-handed neutrinos, proportional to the inverse of the heavy mass. The actual mass of the right-handed neutrinos is unknown and could have any value between less than 1 eV and 10^{15} GeV. Such neutrinos are called "sterile", as they would interact only with other neutrinos or via gravity, and thus could be candidates for dark matter or for the "dark radiation" connecting other dark matter particles, possibly of right-handed chirality as noted. The number of sterile neutrino types is undetermined, in contrast to the number of active neutrino types, which has to equal that of the charged lepton and quark generations to ensure the anomaly freedom of the electroweak interaction.
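The inverse proportionality at the heart of the seesaw can be made concrete with an order-of-magnitude estimate, m_light ~ m_D²/M_R, where m_D is a Dirac mass of electroweak size and M_R the heavy Majorana mass. The particular values below are illustrative, not measured quantities:

```python
# Seesaw order-of-magnitude estimate: the heavier the right-handed
# (sterile) neutrino, the lighter the observed left-handed one.
m_D_GeV = 100.0    # illustrative Dirac mass near the electroweak scale
M_R_GeV = 1.0e14   # illustrative heavy right-handed Majorana mass

# m_light ~ m_D^2 / M_R, converted from GeV to eV:
m_light_eV = (m_D_GeV ** 2 / M_R_GeV) * 1.0e9
```

With these inputs the light neutrino comes out at roughly 0.1 eV, comfortably inside the sub-eV bounds from cosmology quoted above, which is why the seesaw is an attractive way to make tiny neutrino masses natural.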
Sterile neutrinos have gained interest as a possible explanation for the excess of matter in the universe: in the first microseconds after the big bang, the young, hot universe would have contained extremely heavy, unstable sterile neutrinos that soon decayed, some into leptons and the remainder into their antimatter counterparts, but at unequal rates. They would be heavy, and the other neutrinos very light, by the seesaw mechanism. The slight excess would then become the matter left over after mutual annihilation of the majority. This would require sterile neutrinos to be Majorana fermions.
During the early universe, when particle concentrations and temperatures were high, neutrino oscillations can behave differently. Depending on neutrino mixing-angle parameters and masses, a broad spectrum of behaviour may arise, including vacuum-like neutrino oscillations, smooth evolution, or self-maintained coherence. The physics of this system is nontrivial and involves neutrino oscillations in a dense neutrino gas.
Evidence of the degree of cosmic clumping, and of the combined neutrino masses (above), from Planck may make their existence less likely (doi:10.1038/nature.2014.16462). On the other hand, best estimates of the neutrino number from Planck and WMAP are around 3.3 (Olive et al., Chin. Phys. C, 38, 090001), leaving some room for a neutrino contribution to dark radiation (arXiv:1109.2767). Although some experiments have found no evidence for sterile neutrinos in particle decays, several others have seen anomalies. The Liquid Scintillator Neutrino Detector (LSND) at Los Alamos in the 1990s and its successor MiniBooNE, running over 15 years to 2018, fired muon neutrinos at an oil-filled detector and counted the electron neutrinos resulting from neutrino oscillations. Both noted anomalies of a few hundred extra electron neutrinos, more than is consistent with direct oscillation from the targeted muon neutrinos alone, suggesting a sterile intermediate upping the conversion rate.
There is also a deficit of neutrinos from nuclear reactors, in which fission products of atoms such as U-235 emit antineutrinos, for example in the repeated beta decay of krypton-89 through rubidium-89, strontium-89 and yttrium-89, suggesting a route to an undetectable form. However, this discrepancy was found in 2017 to fluctuate with the U-235 content of a nuclear reactor, as opposed to other nuclei such as U-238 and plutonium (arXiv:1704.01082). A shortfall fluctuating with U-235 content is not consistent with a resonance mechanism converting neutrino types, which should depend only on the neutrino flux.
The discovery in 2016 that the current local rate of cosmic expansion is 9% faster than previously thought tests the consistency of dark energy models and could be explained by the existence of a sterile neutrino (Sokol 2016); however, at about 1 eV, the hypothetical mass of the sterile neutrino from MiniBooNE would be too small for it to function as a dark matter candidate. Alternatively, an axion-like light particle that interacted with quarks in the very early universe could also explain why the cosmic abundance of lithium is lower than predicted (Goudelis et al. 2016).
However, the failure to detect the sterile neutrino has now led most researchers to abandon the idea as it stands, in favour of the possibility that there is a more complex set of dark sector neutrinos (Lewton 2021). An analysis in which sterile neutrinos can decay into other, invisible particles actually favours the sterile neutrino's existence (Moulai 2021). Analyses that consider all neutrino oscillation experiments together also find support for decaying sterile neutrinos (Diaz et al. 2019). A dark sector model has been proposed (Vergani et al. 2021) that includes three heavy neutrinos of different masses. The model accounts for the LSND and MiniBooNE data through a combination of a heavy neutrino decaying and lightweight ones oscillating; it also leaves room to explain the origin of neutrino mass, the universe's matter-antimatter asymmetry through the seesaw mechanism, and dark matter.
Fig 19: LHC and Higgs manifestations. (a) Generation of the Higgs by four pathways; q and g are the quarks and gluons making up the colliding protons. Bremsstrahlung is "braking radiation" caused by particles glancing off one another. (b) Two pathways of Higgs decay into photons, or Z_{0} bosons; L are leptons. (c) Anomalies in the decays of the Higgs hint at effects beyond the standard model. (d) LHC Atlas Higgs decay. The observed mass of the Higgs at 125 GeV has been claimed to be consistent with technicolour, an older theory extending the standard model with a fifth force, implying the Higgs could be a composite of 'techniquarks' (doi:10.1103/PhysRevD.90.035012). Both the Higgs and the B meson (arXiv:1506.08614) have shown some anomalies in the way they decay, with biases in how often they decay into taus, muons or electrons, which may indicate a second Higgs or another force or particle appearing, but so far no evidence of supersymmetry. B meson decay could be modified by virtual particles of greater masses than attainable in the LHC, including either a 30-times-heavier Z' or a technicolour leptoquark (doi:10.1038/nature21721). Technicolour is a force field invoked to explain the hidden mechanism of electroweak symmetry-breaking, which remains unknown. The breaking must be spontaneous, meaning that the underlying theory manifests the symmetry exactly, but the solutions (ground and excited states) do not, and the W and Z bosons become massive and also acquire an extra polarization state. Despite the agreement of the electroweak theory with experiment at energies accessible so far, the process causing the symmetry breaking remains hidden. The simplest mechanism of electroweak symmetry breaking introduces a single complex field and predicts the existence of the Higgs boson.
Typically, the Higgs boson is "unnatural" in the sense that quantum mechanical fluctuations produce corrections to its mass that lift it to such high values that it cannot play the role for which it was introduced. Unless the Standard Model breaks down at energies less than a few TeV, the Higgs mass can be kept small only by a delicate fine-tuning of parameters. Technicolour avoids this problem by hypothesizing a new interaction coupled to new massless fermions. This interaction is asymptotically free at very high energies and becomes strong and confining as the energy decreases to the electroweak scale of 246 GeV. These strong forces spontaneously break the massless fermions' chiral symmetries, some of which are weakly gauged as part of the Standard Model. This is the dynamical version of the Higgs mechanism. The electroweak gauge symmetry is thus broken, producing masses for the W and Z bosons. The new strong interaction leads to a host of new composite, short-lived particles at energies accessible at the Large Hadron Collider (LHC). This framework is natural because there are no elementary Higgs bosons and, hence, no fine-tuning of parameters.
The Higgs Particle, Symmetry-breaking and Cosmic Inflation: A key explanation for symmetry-breaking is that originally all the particles had zero rest mass like the photon, but some of the boson force carriers like the W changed to mediate a short-range force by becoming massive, gaining an extra degree of freedom (the freedom to change speed) by picking up an additional spin-0 particle called a Higgs boson. The elusive Higgs, which has now been discovered at the LHC, may also explain why the universe flew apart. The universe begins at a temperature a little below the unification temperature, slightly supercooled, possibly even as a result of a quantum fluctuation. In the early symmetric universe, empty space is forced into a higher-energy arrangement than its temperature can support, called the false vacuum.
The result is a tremendous energy of the Higgs field, or rather the 'inflaton' field, as the energy is ascribed to another elusive field. This behaves as a super antigravity, exponentially decreasing the universe's curvature, inflating the universe in 10^{-35} of a second to something already comparable to its present size. This inflationary phase ends once the Higgs field collapses, breaking symmetry to a lower-energy polarized state, rather as a ferromagnet does, to create the asymmetric force arrangement we experience and form the true vacuum. In this process the Higgs particles, which have zero spin and one wave function component, unite with some of the particles, such as W^{+/-} and Z_{0}, to give them nonzero rest mass by adding their extra component, allowing the additional longitudinal component of the wave function associated with a varying velocity.
Because the true vacuum is at a lower energy than the false one, it grows to engulf it, releasing the latent heat of this energy difference as a shower of hot particles, the hot fireball we associate with the big bang. Normal gravity has now become the attractive force we are familiar with. The reversal of the sign of gravity means that the potential energy is now reversed, so that it adds to the large kinetic energy of the universe flying apart. Two energies which cancelled now become two which add: an insignificant universe, almost nothing, becomes one of almost incalculable proportions. The end result is a universe flying apart at almost exactly its own escape velocity, whose kinetic energy almost balances the potential energy of gravitation. Symmetry-breaking can leave behind defects if the true vacuum emerges in a series of local bubbles which join. Depending on whether the symmetries which are broken are discrete, circular, or spherical, corresponding anomalies in the form of domain walls, cosmic strings or magnetic monopoles may form. In some models, cosmic inflation has a fractal branched structure, like a snowflake, which is perpetually leaving behind mature universes like ours.
Fig 19b: Left and centre: LHC collision illustrating the production of a Higgs particle and a top quark-antiquark pair, again confirming the standard model predictions despite the very high masses of the particles involved (doi:10.1103/PhysRevLett.120.231801). This comes on top of the discovery of the Higgs decaying into a pair of W bosons (inset top right), leaving very little room for any deviations from the standard model to give rise to particles explaining dark matter. Right: Decay of the Higgs into a pair of bottom quarks (CMS/CERN).
Leptoquarks are hypothetical bosons that carry information between quarks and leptons of a given generation, allowing quarks and leptons to interact. They are colour-triplet bosons that carry both lepton and baryon numbers. They are encountered in various extensions of the Standard Model, such as technicolour theories or GUTs based on the Pati-Salam model, SU(5) or E6, etc. Their quantum numbers, like spin, (fractional) electric charge and weak isospin, vary among theories. According to the Standard Model, a B^{+} meson should decay to a kaon, electron and positron as often as it decays to a kaon, muon and antimuon, a situation known as lepton universality. If a measurement of both rates shows a difference between them, it could be the first sign of something new beyond the standard model. According to the LHCb collisions, B^{+} mesons decay to muons about 25% less often than they decay to electrons. The observed difference has a significance of 2.6 standard deviations, corresponding to a chance of one in a hundred that it is due to a statistical fluctuation. Flavour-changing neutral current decays, whereby a quark changes its flavour without altering its electric charge, are also a key way to test departures from the standard model. One example of such a transition is the decay of a beauty quark into a strange quark.
Fig 20: (a) Hypothetical leptoquark interaction complementing the weak force. (b) Hypothetical leptoquark in proton decay. (c) Candidate event at CMS. Leptoquarks could be produced in pairs, and each would decay into a lepton (such as an electron) and a quark (which becomes a jet). (d) Feynman diagrams for hypothetical gluon-leptoquark interactions. (e) Leptoquark pair formation. These interactions are distinct from those of the hypothetical hyperweak force previously conceived to transform quarks into leptons.
The deviations discovered with respect to the standard model predictions could be explained by the existence of a new particle, not predicted by the standard model, whose contribution to the decay amplitude destructively interferes with the standard model diagram. One option is an ultraheavy Z' version of the weak Z_{0} boson; another is a leptoquark. Here, typical solutions require a leptoquark that couples more strongly to the second and third generations of decay than to the first. This hierarchical flavour structure can naturally explain the lepton non-universality ratio R_{K}, and could be related to other hints of lepton non-universality seen in other b-hadron decays. Some models predict a leptoquark mass of around 1 TeV, which with some luck could be directly observed at the LHC. The LHeC project, which would add an electron ring to collide bunches with the existing LHC proton ring, is proposed as a way to look for higher-generation leptoquarks. The beauty meson decay anomaly as of October 2018 had a significance of 3.4 sigma, or 99.97 percent. When the modelling includes long-distance interactions of the disintegration products, according to the Standard Model, the confidence goes up to 6.1 sigma.
Virtual particle methods: At the opposite extreme to using particle accelerators to try to prise out new real exotic particles extending the standard model, one can look for subtle effects in some of the most fundamental virtual particle interactions, such as those emitted and absorbed by the electron as a function of its charge and the electric field this generates. The standard model predicts an almost vanishing electric dipole moment for the electron, but other models suggest a larger value supported by the creation and absorption of exotic virtual particles outside the standard model, all of which should be allowed by the uncertainty principle. Experiments are underway to try to gain an accurate estimate of this value. The standard model predicts a value less than 10^{-38} e·cm, but if the neutrino is a Majorana particle (its own antiparticle) it could rise to 10^{-33}. Various technicolour models predict a value between 10^{-29} and 10^{-27}; supersymmetric models a value greater than 10^{-26}. Current experimental limits are below 8.7 x 10^{-29}, with several groups working on refining the value, the latest of which is down to (4.3 ±3.1stat ±2.6syst) x 10^{-30} e·cm (doi:10.1038/s41586-018-0599-8). This result implies that a broad class of conjectured particles, if they exist and time-reversal symmetry is maximally violated, have masses that greatly exceed what can be measured directly at the Large Hadron Collider.
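The quoted measurement carries separate statistical and systematic error bars; a minimal sketch of how such a result is judged "consistent with zero", combining the two uncertainties in quadrature (the usual convention, assuming the two error sources are independent):

```python
import math

# Electron EDM measurement, in units of 1e-30 e*cm as quoted above:
central = 4.3   # central value
stat = 3.1      # statistical uncertainty
syst = 2.6      # systematic uncertainty

# Independent uncertainties add in quadrature:
total = math.sqrt(stat ** 2 + syst ** 2)   # combined one-sigma error

# How many combined sigma the central value sits from zero:
n_sigma = central / total
```

Since the central value lies only about one combined sigma from zero, the result is an upper limit on the electron's EDM rather than a detection, which is what makes it a constraint on heavy conjectured particles.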
A second series of experiments involves magnetic precession of the muon in a field perpendicular to the orientation of its magnetization. Standard theory predicts that in a magnetic field a muon's magnetism should precess at the same rate as the particle itself circulates, so that if it starts out polarized in the direction it is flying, it will remain locked that way throughout its orbit. Thanks to quantum uncertainty, however, the muon continually emits and reabsorbs other particles. That haze of particles popping in and out of existence increases the muon's magnetism and makes it precess slightly faster than it circulates. Because the muon can emit and reabsorb any virtual particle within uncertainty, its magnetism, like the electron experiments above, tallies all possible particles, even new ones too massive for the LHC to make. Researchers in the g-2 experiment at Brookhaven National Laboratory tested this by injecting muons into a ring-shaped vacuum chamber sandwiched between superconducting magnets. Over hundreds of microseconds, the positively charged muons decay into positrons, which tend to be emitted in the direction of the muons' polarization. Physicists can track the muons' precession by detecting the positrons. The experiment, which detected an anomaly at 3.5σ, has since been moved to Fermilab, where an expanded, more sensitive program is underway.
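The mismatch between precession and circulation rates is what the experiment measures directly. A rough sketch, neglecting relativistic corrections (which cancel at the "magic" beam momentum the experiment uses): the spin turns relative to the momentum at ω_a = a_μ e B / m, which would be zero if g were exactly 2. The field value is the nominal 1.45 T storage-ring field.

```python
import math

A_MU = 0.00116592               # muon anomaly a_mu = (g - 2) / 2
E_CHARGE = 1.602176634e-19      # elementary charge, C
M_MU = 1.883531627e-28          # muon mass, kg
B_FIELD = 1.45                  # storage-ring magnetic field, T

# Anomalous precession: the rate at which the spin direction slips
# ahead of the momentum direction as the muon circulates.
omega_a = A_MU * E_CHARGE * B_FIELD / M_MU   # rad/s
f_a_kHz = omega_a / (2 * math.pi) / 1e3      # ~229 kHz
```

Any extra virtual particles shift a_μ, and hence this frequency, slightly away from the standard model prediction, which is the anomaly the Brookhaven and Fermilab runs probe.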
Fig 21: Left: (a) A photon may undergo mixing to a pseudoscalar particle, an axion, in an external magnetic field. (b) The magnetic field is a pseudovector field because, when one axis is reflected, as shown, reversing parity, the magnetic field is not reflected but reversed, because the currents are reversed. The position of the wire and its current are vectors, but the magnetic field B is a pseudovector, as is any vector cross product p = a × b. Any scalar product between a pseudovector and an ordinary vector is a pseudoscalar. A pseudoscalar particle corresponds to a scalar field which is likewise inverted under a change of parity. (c) When photons in an initially unpolarized light beam (consisting of both parallel and perpendicular components) enter an external magnetic field, axion-photon mixing depletes only the parallel electric field components (dichroism), leading to polarization. This could be used to detect axions in distant quasars. Right: Axion mass ranges ruled out so far by ADMX.
Axions: In addition, other weakly-interacting particles may emerge, such as axions, which some researchers associate with cold dark matter. Axions were originally envisaged to explain why CP (charge-parity) violation does not happen with the colour force as it does with the weak force. The strong nuclear force arranges quarks inside the neutron so that their overall charge seemingly never grows lopsided, respecting charge-parity (CP) symmetry: inverting each quark's charge and reflecting them all in a mirror doesn't affect the neutron's behaviour. The weak nuclear force, however, doesn't share this symmetry: neutral kaons decay in ways CP symmetry forbids. Since quarks are involved in both cases, experts would have expected the weak-force symmetry-breaking to extend to the strong force as well. Why the neutron's charge distribution nevertheless stays balanced is known as the strong CP problem, and the axion represents the leading solution.
One of the terms in the Lagrangian energy equation for chromodynamics is chiral, breaking CP symmetry, yet the colour force does not break the symmetry. The strong CP problem boils down to the unexpected value of an angle θ in the equations that describe the strong force: its value seems to be zero, which keeps the neutron's charges in line. After some fudging, Quinn and Peccei promoted θ from a constant to a field that permeates space, with a value that could naturally settle down to zero everywhere. Weinberg and Wilczek observed that the Peccei-Quinn field requires a particle, an excitation of the field, and the axion was born. The most elegant solution is thus a new continuous U(1) symmetry whose spontaneous breaking relaxes the chiral term to zero, leading to a new spin-0 pseudoscalar particle, the axion. If axions inherit a mass they become natural cold dark matter candidates. One theory of axions relevant to cosmology predicts that they would have no electric charge, a very small mass in the range from 10^{-6} to 1 eV/c^{2}, and very low interaction cross-sections for the strong and weak forces.
Because of their properties, axions would interact only minimally with ordinary matter, but could change to and from photons in magnetic fields, as a result of mixing due to the pseudovector nature of the magnetic field as a cross product (fig 21). Pierre Sikivie noted that the axion would be something of a spiritual cousin to the photon, but with just a hint of mass. Tweaking classical electromagnetic theory to incorporate the axion, he found that axions just might pack the universe tightly enough to add up to the missing dark matter, and that now and then they would transform into two photons. Axions' minuscule mass makes them extremely low-energy waves, with wavelengths somewhere between a building and a football field in length. Sikivie realised that the key to coaxing these low-energy axions to turn into photons would be a device that could be tuned to resonate at precisely the same wavelength as the axions, the principle that drives the ADMX experiment, which has scanned from 0.65 to nearly 0.68 gigahertz looking for excess power from axion-spawned photons; the collaboration has since continued on to 0.8 gigahertz (arXiv:1910.08638). These frequencies mean that the experiment has ruled out axions weighing between 187 billion and 151 billion times less than the electron, with wider ranges to come.
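The connection between the scanned radio frequency and the excluded axion mass is just the Planck relation, m c² = h f, applied to the photon a converting axion would produce. A quick check using the frequencies quoted above and CODATA constants:

```python
H_EV_S = 4.135667696e-15   # Planck constant, eV*s
M_E_EV = 0.51099895e6      # electron rest energy, eV

def axion_mass_eV(freq_Hz):
    """Rest energy (in eV) of an axion whose conversion photon has
    frequency freq_Hz: m*c^2 = h*f."""
    return H_EV_S * freq_Hz

m_low = axion_mass_eV(0.65e9)    # bottom of the ADMX scan, ~2.7e-6 eV
m_high = axion_mass_eV(0.80e9)   # the extended scan, ~3.3e-6 eV

# How many times lighter than the electron the lightest excluded axion is:
ratio_low = M_E_EV / m_low       # ~1.9e11, i.e. roughly 190 billion
```

The scan bottom corresponds to an axion a few micro-electronvolts in mass, about 190 billion times lighter than the electron, consistent with the "187 billion times" figure quoted for the excluded range.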
Fig 22: The 95% CL upper limits on the gluino (left) and squark (right) pair production cross sections as a function of neutralino versus gluino (squark) mass ("Search for supersymmetry in events with photons and missing transverse energy in pp collisions at 13 TeV" CMS Collaboration).
The SU(5) theory (fig 23) extending the standard model, sometimes also referred to as hyperweak, is an attempt to make an immediate extension of the idea of electroweak unification to unification with the colour force, through which a quark could decay into leptons. However, its prediction that the proton should also be unstable, like the neutron, and decay e.g. as in (b), has not been validated in any experiment. Fornal & Grinstein (2017) have developed a form of SU(5) where the proton remains stable under suitable constraints on the parameters, by placing the quarks and leptons in distinct irreducible representations of SU(5), with other heavy forces linking these in yet other irreducible representations, reigniting the potential viability of the theory. Representations "represent" the elements of a group as linear transformations of vector spaces, including Hilbert space, so in the case of SU(5) they represent the forces induced by the (broken) symmetries. The representations linking quarks and leptons correspond to 'heavy' forces with short range and low interaction probability. Stability of the proton requires three relations between the parameters of the model to hold. However, abandoning the requirement of absolute proton stability, the model fulfils current experimental constraints limiting the proton decay rate without fine-tuning.
Supersymmetry, also illustrated in fig 23, in which each boson has a fermion partner and vice versa, has been a favourite among extensions of the Standard Model because it balances the positive and negative vacuum energy contributions of the bosons and fermions. However, it has failed to demonstrate any evidence of its existence in the latest rounds of LHC experiments, leading up to the end of 2016, using energies up to 13 TeV.
Supersymmetry is a pairing between bosons and fermions of adjacent spin. The idea behind it is based on ground state zeropoint fluctuations, the energies that arise through uncertainty when a quantum is considered in its lowest (ground) energy state. Only a perfect balancing of the negative zeropoint energies of the fermions against the corresponding positive zeropoint energies of the bosons, as implied by supersymmetry, would cancel the potential infinities arising from the arbitrarily short wavelengths of the electromagnetic field when quantum gravitation is included in the unification scheme. These would effectively curl spacetime to a point (Hawking R303 46, 50). It is possible however that it is the collective contribution of the two groups which balance, so that there is not an individual set of bosonfermion pairings but two symmetrybroken groups, bosons and fermions, which collectively balance one another, reflecting the standard model. Garrett Lisi's "exceptionally simple" theory is like this.
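The bookkeeping behind this cancellation can be illustrated with a toy mode sum; the frequencies below are arbitrary illustrative values, not a field-theoretic calculation:

```python
# Toy illustration: each bosonic mode contributes +hbar*omega/2 of
# zero-point energy, each fermionic mode contributes -hbar*omega/2.
hbar = 1.0  # natural units

def vacuum_energy(boson_modes, fermion_modes):
    """Net zero-point energy of a collection of mode frequencies."""
    return 0.5 * hbar * (sum(boson_modes) - sum(fermion_modes))

modes = [1.0, 2.5, 7.3]             # arbitrary illustrative frequencies
print(vacuum_energy(modes, []))      # bosons alone: positive, divergence-prone
print(vacuum_energy(modes, modes))   # unbroken supersymmetry: exact cancellation
```

With unbroken supersymmetry every bosonic mode is matched by a fermionic one of the same frequency, so the net vacuum energy vanishes identically; symmetry breaking mismatches the frequencies and a residue survives.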
Fig 22 shows the production limits for particle pair production of two supersymmetric candidates, with no experimental evidence of their existence up to the high range of energies provided by the LHC. This means that no evidence for any extension of the Standard Model in terms of fundamental particle creation is likely to occur in the current round, leaving physics with only the single Higgs as a trophy and no immediate prospect of a resolution.
Fig 23: Unproven symmetries: (a) SU(5) theory extending the standard model. Since a quark could decay into leptons, its prediction is that the proton should also be unstable, like the neutron, and decay e.g. as in (b). (c-e) Supersymmetry, a hypothetical symmetry between fermions and bosons, identifies each with a supersymmetric partner of one half spin less. The hierarchy problem deals with why the weak force, for example, is 10^{32} times stronger than gravity; in other words, how the electroweak scale (~246 GeV, the Higgs field vacuum expectation) relates to the Planck scale (10^{-35} m or 10^{19} GeV), where black holes could spontaneously form from uncertainty applied to gravity (lower chart). Supersymmetry would provide a solution because the negative vacuum contribution of the fermions would then balance the positive contribution of the bosons. This would also see the strengths of the three vector forces coming together neatly at high energies. 1 eV is the energy to move one electron through 1 volt. 1 GeV = 10^{9} eV. A proton has a mass of 0.9 GeV and an electron 0.511 MeV. The Z_{0} and Higgs have masses of 91 and ~126 GeV. Because E=mc^{2}, strictly the units are GeV/c^{2}. The LHC is running at up to 14 TeV = 1.4 x 10^{4} GeV. The simplest theory, the minimal supersymmetric standard model (MSSM), is shown in (e). Symmetrybreaking would give the supersymmetric partners a large mass, but again, the highest energy LHC results have so far shown no evidence for supersymmetric partners appearing.
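The scales quoted in the caption can be cross-checked from first principles; a minimal sketch using rounded CODATA constants (the only inputs are ℏ, c, G and the joule-to-GeV conversion):

```python
import math

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
J_per_GeV = 1.602176634e-10

# Planck mass sqrt(hbar*c/G), expressed as an energy in GeV
m_planck_kg = math.sqrt(hbar * c / G)
m_planck_GeV = m_planck_kg * c**2 / J_per_GeV   # ~1.22e19 GeV
# Planck length sqrt(hbar*G/c^3)
l_planck_m = math.sqrt(hbar * G / c**3)          # ~1.6e-35 m
print(f"Planck scale: {m_planck_GeV:.3e} GeV, {l_planck_m:.3e} m")
```

Both values reproduce the caption's 10^{19} GeV and 10^{-35} m figures.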
SMASH Physics: A minimal extension of the Standard Model (SM) called SMASH ( Standard Model Axion Seesaw Higgs portal inflation) provides a potentially complete and consistent picture of particle physics and cosmology up to the Planck scale. According to Ballesteros et al. (2016) the model adds to the SM three righthanded SMsinglet neutrinos, a new vectorlike color triplet fermion and a complex SM singlet scalar σ whose vacuum expectation value at ∼ 10^{11} GeV breaks lepton number and a PecceiQuinn symmetry simultaneously. Primordial inflation is produced by a combination of σ and the SM Higgs. Baryogenesis proceeds via thermal leptogenesis. At low energies, the model reduces to the SM, augmented by seesawgenerated neutrino masses, plus the axion, which solves the strong CP problem and accounts for the dark matter in the Universe. The model can be probed decisively by the next generation of cosmic microwave background and axion dark matter experiments. It builds on Shaposhnikov's (2005) model, which added three neutrinos to the three already known in order to solve four fundamental problems in physics: dark matter, inflation, some questions about the nature of neutrinos, and the origins of matter. SMASH adds a new field to explain some of those problems a little differently. This field includes two particles: the axion, a dark horse candidate for dark matter, and the inflaton, the particle behind inflation. As a final flourish, SMASH uses the field to introduce the solution to a fifth puzzle: the strong CP problem, which helps explain why there is more matter than antimatter in the universe.
Fig 24: Scale symmetrybreaking model
Scale Symmetry theories and their resulting scale symmetrybreaking suggest the fundamental description of the universe does not include mass and length, implying that, at its core, nature lacks a sense of scale and thus may not differentiate between scales. The physics starts with a basic equation that sets forth a massless collection of particles, each with unique characteristics, such as whether it is matter or antimatter and has positive or negative charge. As these particles attract and repel one another and the effects of their interactions cascade, scale symmetry breaks, and masses and lengths spontaneously arise. Similar dynamical effects generate 99% of the mass in the visible universe. Protons and neutrons are each a trio of lightweight quarks. The energy used to hold these quarks together gives them a combined mass that is around 100 times more than the sum of the parts. In the Standard Model, the Higgs boson comes equipped with mass. It provides mass to other elementary particles through its interactions with them, adding an extra scalar field to their wave function. The masses of electrons, W and Z bosons, individual quarks and so on derive from the Higgs boson and, in a feedback effect, they simultaneously affect the Higgs mass. The scale symmetry approach traces back to 1995, when William Bardeen showed that the mass of the Higgs boson and the other Standard Model particles could be calculated as consequences of spontaneous scalesymmetry breaking. But the delicate balance of his calculations seemed easy to spoil when researchers attempted to incorporate new, undiscovered particles, like those of dark matter and gravity. Instead, researchers gravitated toward supersymmetry, which naturally predicted dozens of new particles, some of which could account for dark matter.
In the standard approach, the Higgs boson's interactions with other particles tend, through quantum mechanical effects, to elevate its mass toward the highest scales present in the equations, dragging the other particle masses up with it. But physicists propose that far beyond the Standard Model, at a scale about 10^{18} times heavier, the Planck mass, there exist unknown giants associated with gravity. These heavyweights would be expected to fatten up the Higgs boson and pull the mass of every other elementary particle up to the Planck scale. Instead, an unnatural hierarchy seems to separate the lightweight Standard Model particles and the Planck mass. Supersymmetry posits the existence of a heavier twin for every particle found in nature, differing in spin by 1/2, thus switching from boson to fermion and vice versa. If for each particle the Higgs boson encounters (such as an electron) it also meets that particle's slightly heavier twin (the selectron), the combined effects would nearly cancel, preventing the Higgs mass from ballooning toward the highest scales. Yet decades after their prediction, none of the supersymmetric particles have been found. But without supersymmetry, the Higgs boson mass seems as if it is reduced not by mirrorimage effects but by random, improbable cancellations between unrelated numbers. Essentially, the initial mass of the Higgs seems to exactly counterbalance the huge contributions to its mass from gluons, quarks, and gravitational states. And if the universe is improbable, then it must just be one universe of many, a rare bubble in a foaming multiverse, an unsatisfactory conclusion.
For a scalesymmetric theory to work, it must account for both the small masses of the Standard Model and the gargantuan masses associated with gravity. Both scales must arise dynamically, and separately, starting from nothing. In agravity, or adimensional gravity (arXiv:1403.4226), the Higgs mass and the Planck mass both arise through separate dynamical effects, neatly identifying the inflaton of the inflationary scenario with the Higgs particle of gravity. However, the theory requires the existence of particlelike ghosts, which either have negative energies or negative probabilities of existing, effectively wreaking havoc on the probabilistic interpretation of quantum mechanics. However, like the 'holes' of antimatter, these may gain a satisfactory explanation over time. In an alternative scalesymmetric theory (arXiv:1408.3429), Bardeen and others posit that the scales of the Standard Model and gravity are separated as if by a phase transition. The researchers have identified a mass scale where the Higgs boson stops interacting with other particles, causing their masses to drop to zero. It is at this scalefree point that a phase changelike crossover occurs.
Monopoles: Although Maxwell's equations have symmetry between the electric and magnetic fields E and B and do not prohibit magnetic monopoles, their absence led to the magnetic form of Gauss's law: ∇·B = 0. However Dirac discovered that the existence of a single magnetic monopole in the universe would explain the quantization of charge. Consider a system consisting of a single stationary electric charge (e.g. an electron) and a single stationary magnetic monopole. Classically, the electromagnetic field surrounding them has a total angular momentum proportional to the product q_{e}q_{m}, and independent of the distance between them. Quantum mechanics dictates that this angular momentum is quantized in half-integer units of ℏ, so the product q_{e}q_{m} must also be quantized. This means that if even a single magnetic monopole existed in the universe, and the form of Maxwell's equations is valid, all electric charges would then be quantized: q_{e}q_{m} = nℏc/2 (Gaussian units), so every q_{e} must be an integer multiple of ℏc/2q_{m}.
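The argument can be written compactly in Gaussian units, where the field angular momentum of a static charge-monopole pair is a standard textbook result:

```latex
L_{\text{field}} = \frac{q_e q_m}{c}, \qquad
L_{\text{field}} = \frac{n\hbar}{2}
\;\Rightarrow\;
q_e = \frac{n\hbar c}{2 q_m}, \quad n \in \mathbb{Z},
```

so the existence of one monopole of charge q_m forces every electric charge to be an integer multiple of ℏc/2q_m, independent of where the monopole is.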
Fig 25: GUT structure of a magnetic monopole. Near the center (about 10^{-29} cm) there is a GUT symmetric vacuum. At about 10^{-16} cm, its content is the electroweak gauge fields of the standard model. At 10^{-15} cm, it is made up of photons and gluons. At the edge, out to a distance of 10^{-13} cm, there are fermionantifermion pairs. Far beyond nuclear distances it behaves as a magneticallycharged pole of the Dirac type. In effect, the sequence of events during the earliest moment of the universe has been fossilized inside the magnetic monopole.
In a U(1) gauge group with quantized charge, the group is a circle of radius 2π/e. Such a U(1) gauge group is called compact. Grand unified theories (GUTs) uniting electroweak and strong forces lead to compact U(1) gauge groups, so they explain charge quantization in a way that seems to be logically independent from magnetic monopoles. However, the explanation is essentially the same, because in any GUT which breaks down into a U(1) gauge group at long distances, there are magnetic monopoles. In the early universe if the symmetrical unified state of the GUT froze out in different regions, various topological defects in the symmetrybreaking can form 2D domain walls, 1D cosmic strings, or monopoles, depending on the type of symmetry which is broken. Unlike Dirac monopoles, which would be point singularities of infinite selfenergy, such monopoles would have unified force interactions at very short radii and would thus have a finite but very large mass. If the horizon of the domains were very large, due to inflation, these would be vanishingly infrequent but would still be integral to the cosmic description. Although such cosmic monopoles have never been detected experimentally there is good evidence for Dirac magnetic monopoles as lattice quanta in condensed matter spin ices (doi:10.1126/science.1177582, doi:10.1126/science.1178868). Angulons in molecules spinning in superfluid liquid helium ^{4}He (doi:10.1103/PhysRevLett.118.095301) have also been demonstrated to behave as monopoles (doi:10.1103/PhysRevLett.119.235301).
Rehabilitating Duality: Quantum Gravity, String Theory, and Spacetime Structure
Quantum theory is formulated within spacetime, but massenergy, through gravitation in general relativity alters the structure of spacetime by curving it. This has made a comprehensive integration of gravity with the other forces of nature difficult to achieve and may indicate a fundamental complementarity between the theories. Something of this paradox can be understood in graphic terms if we consider the implications of quantum uncertainty over very small time intervals, small enough to allow a virtual black hole to form. In this case a quantum fluctuation could give rise to a wormhole in the very spacetime in which it is conceived raising all manner of paradoxes of connected universes and time loops into the bargain. This leads to a fundamental conceptual paradox in which spacetime is flat or slightly curved on large scales but a seething topological foam of wormholes on very small scales. These problems lead to fundamental difficulties in describing any form of quantum field in the presence of gravity.
The unification of gravity with the other forces brings new and deeper mysteries into play. Theories which treat particles as points are plagued with the infinities the very points themselves imply, as infinite concentrations of energy. Point particles may thus on very small scales become string, loop or membrane excitations. The theories broadly called 'superstring' explain the infinite selfenergies associated with a point particle, and the different particles themselves, as different excitations of a closed or open loop or string. However none have been found so far which correspond to our own peculiar asymmetric set of particles.
Supergravity adds a number of particles of higher integer and halfinteger spin which also give a good example of how supersymmetry might work. Like any field theory of gravity, where gravitational spacetime stress tensors convert to spin2 particles, a supergravity theory contains a spin2 field whose quantum is the graviton. Supersymmetry requires the graviton field to have a superpartner. This field has spin 3/2 and its quantum is the gravitino. The number of gravitino fields is equal to the number of supersymmetries. There are 8 extended supergravity theories and each of them has a characteristic number of distinct supersymmetries ranging from n = 1 to 8. These generate successive generations of particles of lower spin as in fig 23(c). In each theory there is one spin2 graviton and there are n spin3/2 gravitinos. The number of particles with lower spins is also completely determined. If n is equal to 1, the theory is simply supergravity with one graviton and one gravitino. If n is 2, the theory includes 1 graviton, 2 gravitinos and 1 spin1 particle (graviphoton). Perhaps the most realistic model of this kind is given when n = 8. The complement of elementary particles then consists of 1 graviton, 8 gravitinos, 28 graviphotons, variously 48 to 56 spin1/2 particles (gravifermions) and 70 spin0 particles (graviscalars). An intriguing property of the extended supergravity theories is their extreme degree of symmetry. Each particle is related to particles with adjacent values of spin by supersymmetry transformations, and these supersymmetries are of local form. Thus a graviton can be transformed into a gravitino and a gravitino into a graviphoton. Within each family of particles that have the same spin, all the particles are related by a global internal symmetry, much like the internal symmetry that relates proton and neutron.
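The multiplicities quoted for n = 8 are binomial coefficients C(8, k), counting which subsets of the 8 supersymmetry generators lower the helicity of the top state; a quick check (here 56 is the standard spin-1/2 count, the 48 in the text arising from variant countings):

```python
from math import comb

# Helicity ladder of the N=8 multiplet: spin 2, 3/2, 1, 1/2, 0
counts = [comb(8, k) for k in range(5)]
print(counts)  # graviton, gravitinos, graviphotons, gravifermions, graviscalars
```

The output [1, 8, 28, 56, 70] matches the complement of elementary particles listed above.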
In 2018 a version of n = 8 supergravity involving E10, an infinite dimensional Lie algebra extending the exceptional simple symmetry group E8, was found to provide an extension of all the forces in the standard model to a unification with gravity (Meissner & Nicolai 2018). In supergravity theories in four spatiotemporal dimensions, there cannot be more than eight different supersymmetric rotations. In this version of n = 8 supergravity, there are 48 fermions (with spin 1/2), which is precisely the number of degrees of freedom required to account for the six types of quarks and six types of leptons observed in nature. After making an adjustment for charge anomalies (the electron had a charge of 5/6 instead of 1, the neutrino 1/6 instead of 0, etc. in GellMann's original model 30 years previously), in 2015 Meissner and Nicolai obtained a structure with the symmetries U(1) and SU(3) known from the Standard Model. The motivation was strengthened by the fact that the LHC accelerator failed to produce anything beyond the Standard Model, and the n = 8 supergravity fermion content is compatible with this observation. What was missing was to add the SU(2) group, responsible for the weak nuclear force. In 2018 Meissner and Nicolai showed that the weak force SU(2) symmetry can also be accommodated. Unlike the symmetry groups previously used in unification theories, E10 is an infinite group, very poorly studied even in the purely mathematical sense. It keeps the number of spin 1/2 fermions as in the Standard Model but on the other hand suggests the existence of new particles with very unusual properties. Importantly, at least some of them could be present in our immediate surroundings, and their detection should be within the possibilities of modern detection equipment.
Superstrings attempt to address the singularity of the infinite selfenergy of point particles by assuming that on the Planck scale these turn into vibrating quantum strings, which thus also do not have point vertices, but continuous interactions as illustrated in fig 26.
Fig 26: (Above) Point particles (a), such as the charged electron, have infinite selfenergies because their fields tend to infinity at the vertex and they have precise vertices of interaction. Strings (b) turn the infinities into harmonic quantum excitations at a fundamental scale such as the Planck scale, smoothing the infinities and turning the vertices into smooth manifold transitions (Wolfson R760, Sci. Am. Jan 96). They can be regarded either as open strings or loops. The different excitations (c) correspond to different particles, e.g. of higher massenergy. (Below) Compactification of the 6 or so unseen dimensions leaves only our 4 of spacetime on large scales (Sci. Am. Jan 96). Compactification of one dimension to form a tube is a way 11D Mtheory can be linked to 10D superstrings, which are on smaller scales stringlike tubes.
Such theories also generally require over 10 dimensions to converge, all but four of which are 'compactified'  curled up on subparticulate scales, leaving only our four dimensions of spacetime as global dimensions. Such 'theories of everything' or TOEs have not yet fully explained how the particular arrangements of particles and forces in our universe are chosen out of the millions of possibilities for compactification these higher dimensional theories permit when supersymmetry is broken to produce the particles and forces we experience at low energies.
The internal symmetry dimensions of existing particles come close to the additional number of hidden dimensions required, suggesting the key can be found in the known particles. If we take 1 for the Higgs, 1 for the neutrino, 2 for the electroweak, 3 for colour, and 4 for spacetime we have 11. However in string theory the compactifications occur on a huge variety of spaces called Calabi-Yau manifolds, presenting up to 10^{500} possible configurations. Fourdimensional spacetime is optimal mathematically for complexity. In some unification theories, one of the compactified dimensions might be much larger (fig 26). Duality, in which fundamental particles in one description may become composite in another and vice versa, may also enable apparently divergent theories to be understood through a convergent dual.
Fig 27: Relation between Mtheory and dualities between string theories (ex Hawking R303, Duff R170). Originally string theories were entirely bosonic and formulated in 26 dimensions for internal consistency until the advent of consistent 10 dimensional theories. Type I has one supersymmetry in 10 dimensions. It is based on unoriented open and closed strings, while the rest are based on oriented closed strings. Type II have two supersymmetries. IIA is nonchiral (parity conserving) while the IIB is chiral (parity violating). The heterotic string theories are based on a hybrid of a type I superstring and a bosonic string. There are two kinds of heterotic strings differing in their tendimensional gauge groups: the heterotic E8xE8 string and the heterotic SO(32) string. There are two types of duality. Sduality says that a collection of strongly interacting particles in one theory can be viewed as a collection of weakly interacting particles in another thus avoiding infinities. Tduality states that a string propagating around a circle of radius R is equivalent to a string in the dual propagating around a circle of radius 1/R. If a string has momentum p and winding number n around the circle in one description, it will have momentum n and winding number p in the dual description (see fig 28).
MTheory is a proposed unification of several theories, including the 10dimensional superstring theories and 11dimensional supergravity. The M stands for membrane, or according to its proponents, magic. The essential idea is that 11dimensional membrane theory looks like 10dimensional string theory if one of the two membrane dimensions is rolled up into a tiny tube along with one of the 11 dimensions. In this point of view several of these theories are actually complementary mathematical formulations of the same object. This brings in the 'holographic principle' (fig 3), in which a theory in a multidimensional region can be equivalent to a theory on the boundary of the region, one dimension lower (Cowen 2015, Duff R175).
Particles can come in two types, one the vibrational states of strings (vibrating particles), the other topological, counting how many times a string wraps around the compactified dimension (winding particles). The winding particles on a tube of radius R are identical to the vibrational particles on a tube of radius 1/R. Duality is a paradoxical concept in which there is a natural relationship between theories which have strong interactions, where perturbation theory fails, and dual theories whose interaction strengths are the reciprocals of the originals and hence converge nicely. The nemesis comes if we end up having to deal with a TOE whose interactions are mid range, so that neither the original nor the dual can be unraveled.
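The winding-vibration exchange can be checked on the zero-mode part of the closed-string spectrum. With momentum number n and winding number w on a circle of radius R, and the string scale α′ set to 1 for simplicity, the mass contribution (n/R)² + (wR)² is invariant under R → 1/R with n and w swapped; a minimal numerical sketch:

```python
def mass_squared(n, w, R, alpha_prime=1.0):
    # Zero-mode contribution to the closed-string mass spectrum:
    # Kaluza-Klein momentum (n/R)^2 plus winding energy (w*R/alpha')^2
    return (n / R) ** 2 + (w * R / alpha_prime) ** 2

R = 2.7
# T-duality: radius R with (momentum n, winding w) matches radius 1/R
# with the roles of n and w exchanged
assert abs(mass_squared(3, 5, R) - mass_squared(5, 3, 1.0 / R)) < 1e-12
print("spectrum invariant under R -> 1/R with n <-> w")
```

The same check holds for any R and any integer pair (n, w), which is the content of the T-duality described in fig 28.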
Fig 28: Duality between string theories. Winding particles in one have the same energetics as vibrational particles in the other and vice versa (Duff R175). The concept of duality may solve intractable infinities by finding a dual theory which is convergent. In the dual theory, particles like magnetic monopoles, which are a composite of quarks and other particles, become fundamental and electrons and quarks become composites of these. No particle is thus truly fundamental, each locked in sexual paradox with its dual.
Another branch of string theory, Ftheory (arXiv:hep-th/9602114), has allowed physicists to work with strongly interacting, or strongly coupled, strings. This means that string theorists can use algebraic geometry, which uses algebraic techniques to tackle geometric problems, to analyze the various ways of compactifying extra dimensions in Ftheory and to find solutions. More recently, researchers (arXiv:1903.00009) have identified a class of solutions with string vibrational modes that lead to a similar spectrum of fermions to the standard model, including the property that all fermions come in three generations. The Ftheory solutions found have particles that also exhibit the handedness, or chirality, of the standard model particles, reproducing the exact 'chiral spectrum' of standard model particles. For example, the quarks and leptons in these solutions come in left and righthanded versions, as they do in our universe. There are at least a quadrillion solutions in which particles have the same chiral spectrum as the standard model, which is 10 orders of magnitude more solutions than had been found within string theory until now.
Supersymmetry provides at least one candidate for dark matter. There are four neutralinos that are their own antiparticles (Majorana fermions) and are electrically neutral, the lightest of which is typically stable. Because these particles only interact with weak vector bosons, they are not directly produced at hadron colliders in copious numbers and are favoured dark matter candidates. In supersymmetry models, all Standard Model particles have partners with the same quantum numbers except for spin, which differs by 1/2 from its partner. Since the superpartners of the Z_{0} boson (zino), the photon (photino) and the neutral higgs (higgsino) have the same quantum numbers, they can mix to form four eigenstates of the mass operator called "neutralinos". Alternatively these four states can be considered mixtures of the bino and the neutral wino (superpartners of the U(1) gauge field corresponding to weak hypercharge and the W bosons), and the neutral higgsino. In many models the lightest of the four neutralinos turns out to be the lightest supersymmetric particle (LSP).
Fig 29: Superstring theories suffer from having many different forms of compactification during symmetry breaking, up to 10^{500}. Two recent papers (arXiv:1806.08362, 1806.09718) suggest the overwhelming majority of multiverses in this landscape are assigned to the swampland of unviable universes, where dark energy is unstable, also reviving the popularity of timevarying dark energy models such as quintessence. Right: An attempt is made to find why our universe has an optimal configuration among the millions of possibilities. The Calabi-Yau manifold illustrated is just one compactification, which shows a local 2D crosssection of the real 6D manifold known in string theory as the Calabi-Yau quintic. This satisfies the Einstein field equations and is a popular candidate for the wrappedup 6 hidden dimensions of 10dimensional string theory at the scale of the Planck length (1.6 x 10^{-35} m, or about 10^{-20} times the size of a proton). The 5 rings that form the outer boundaries shrink to points at infinity, so that a proper global embedding would be seen to have genus 6 (6 handles on a sphere, Euler characteristic -10). The underlying real 6D manifold (3D complex manifold) has Euler characteristic -200, is embedded in the 4D complex projective plane, and is described by the equation z_{0}^{5} + z_{1}^{5} + z_{2}^{5} + z_{3}^{5} + z_{4}^{5} = 0 in five complex variables. The displayed surface is computed by assuming that some pair of inhomogeneous complex variables, say z_{3}/z_{0} and z_{4}/z_{0}, are constant (thus defining a 2manifold slice of the 6manifold), renormalizing the resulting equations, and plotting the local Euclidean space solutions to the complex equation z_{1}^{5} + z_{2}^{5} = 1.
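The five sheets behind the '5 rings' of the displayed slice come from the five complex fifth roots taken when solving z_{2}^{5} = 1 - z_{1}^{5}; a small check at an arbitrary sample point:

```python
import cmath

z1 = 0.3 + 0.4j                # arbitrary sample point on the slice
w = 1 - z1 ** 5
for k in range(5):
    # the five fifth roots of w, one per sheet of the surface
    z2 = w ** 0.2 * cmath.exp(2j * cmath.pi * k / 5)
    assert abs(z1 ** 5 + z2 ** 5 - 1) < 1e-9
print("all five sheets satisfy z1^5 + z2^5 = 1")
```

Sweeping z1 over a grid and plotting all five z2 branches is essentially how the displayed surface is generated.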
An alternative dark matter candidate has emerged from extending the SU(3) symmetry of the color force with one extra force field to conserve baryon number, resulting in an SU(4) x SU(2) x U(1) extension of the standard model explaining why the proton never decays as the lightest baryon. The model requires quarks to have heavier partners, and the lightest of these has the right properties to be dark matter (arXiv:1511.07380).
In one form of the holographic principle, discussed above and illustrated in fig 3, quantum entanglement on the boundary gives rise to gravitationlike forces on the interior, possibly explaining gravity and relativity in terms of holographic entanglement.
A possible key to the higher dimensional theories is the 8dimensional number system called the octonions. Just as complex numbers form a two dimensional plane, for which the second component is a multiple of i, the square root of -1, the octonions form a system of 8 components. Associated with the octonions are the exceptional symmetry groups such as G2 and E8. Internal symmetries such as those of colour and of charge, as well as the wellknown Lorentz transformations of special relativity, are already the basis for explaining the standard model.
Another key to a possible unraveling of the Gordian knot of the theory of everything comes from dualities. Electromagnetism is renormalizable because, by adjusting for the infinite self energy of a charge, we arrive at a theory like quantum electrodynamics in which each more complicated diagram with more vertices makes a contribution 137 times smaller to the interaction, and it is then possible to correctly deduce the combined effects without infinities creeping in.
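The convergence point, and the rationale for duality, can be sketched schematically: if an n-vertex diagram contributes of order g^n, the series converges rapidly for g ≈ 1/137 but blows up for strong coupling, while the dual series in 1/g converges again. This is a schematic geometric series only; real perturbation series also have growing numbers of diagrams per order:

```python
def partial_sums(g, terms=20):
    # schematic perturbation series: one n-vertex "diagram" scaling as g**n
    total, out = 0.0, []
    for n in range(1, terms + 1):
        total += g ** n
        out.append(total)
    return out

alpha = 1 / 137.036                  # electromagnetic coupling: fast convergence
print(partial_sums(alpha)[-1])       # ~ alpha/(1 - alpha), tiny and stable
print(partial_sums(3.0)[3])          # strong coupling: already huge after 4 terms
print(partial_sums(1 / 3.0)[-1])     # dual weak coupling: converges again
```

Replacing g by 1/g is the numerical shadow of the S-duality described above: the intractable strongly coupled series becomes a convergent one in the dual description.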
Another dimensional issue is that the only spheres which will admit a vector field without singularities, socalled 'hairy balls', are S^{1} the circle, and S^{3}, S^{7} the 3D and 7D spheres. Our twosphere S^{2} always gets places where one hair stands on end, like the crown of your head. Thus the status of the unit octonions has a dual 7D coincidence between algebra and topology, which may be essential in establishing for example a uniform time flow.
The three dimensional nature of space has also been linked to quantum reality. In the 'emergent' picture the three dimensions of space and one of time would arise from quantum gravity and the differentiation of the forces of nature in the cosmic origin. But a more fundamental basis has been suggested: given a single dimension for time, quantum theory is the only theory that can supply the degree of randomness and correlation seen in nature, and it can only do so if space is 3D (arXiv:1206.0630, arXiv:1212.2115). A subsequent paper shows this constraint applies if microscopic objects interact "pairwise" with each other, as they appear to in ours (arXiv:1307.3984), but the dimension could be higher if pairwise interactions became threeway or more. If our conventional complex number quantum theories are replaced by quaternions or octonions, the dimensionality could rise to five or nine (Phys. Rev. D 84 125016).
Stephen Hawking, who has been a consistent champion of the TOE quest, has lamented that although the connections implied by Mtheory dualities are so convincing that to not think they are on the right track "would be a bit like believing that God put fossils into the rocks in order to mislead Darwin about the evolution of life" (Hawking R303 57), he now worries (R304) that the search for a consistent theory may remain beyond reach in a single theory because of the implications of Gödel's theorem, which proves that any logical system containing finite arithmetic admits formally undecidable propositions. If the search for a TOE runs up against this nemesis, the description of the universe may become undecidable.
E8 and Octonionic Foundation Theories
Two theories attempting to unify the foundations of physics leading to a TOE have been generated by creative mavericks coming out of leftfield with ingenious unifying concepts based on a deep unity between mathematics and physics, depending on number systems from real through complex to quaternions and octonions, and on exceptional simple groups based on the octonions, such as G2 and E8, which has 248 dimensions and is generated by a 240 vector root system in R^{8}.
Fig 30: Left: Octonions and the Fano plane. Just as complex numbers have two components a + bi with i^{2} = -1, so the octonions have eight components 1, e_{1}, ..., e_{7} such that e_{i}^{2} = -1. Multiplication of coordinate vectors is determined by the 'Fano plane'. Any e_{i}, e_{j}, e_{k} connected by arrows multiply in the manner e_{i} x e_{j} = e_{k}. Like the quaternions, the octonions are noncommutative. Those connected in the reverse direction inherit a minus sign. Each line also loops back to the first coordinate in a cyclic manner. Octonion multiplication is also nonassociative, with bracketing rearrangements also invoking a minus sign. Lower left: Dynkin diagrams for E8 and its infinitedimensional extension E10. E10 has been proposed to be a symmetry group in a realizable supergravity model representing the standard model, and its further extension E11 has been conjectured to be the underlying symmetry group in Mtheory. Right: E8's 240 root vectors in R^{8}.
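Octonion multiplication can be implemented directly from seven cyclic triples. The sketch below uses the common index-doubling convention e_i e_{i+1} = e_{i+3} (indices mod 7), a relabelling of the Fano plane rather than the exact labelling of the figure, and verifies the noncommutativity and nonassociativity described in the caption:

```python
# Multiplication table from the 7 cyclic triples (i, i+1, i+3) mod 7,
# i.e. e1*e2 = e4, e2*e3 = e5, ... (a standard octonion convention).
triples = [(1, 2, 4), (2, 3, 5), (3, 4, 6), (4, 5, 7),
           (5, 6, 1), (6, 7, 2), (7, 1, 3)]
table = {}
for a, b, c in triples:
    for x, y, z in ((a, b, c), (b, c, a), (c, a, b)):
        table[(x, y)] = (1, z)    # forward along the line: +e_z
        table[(y, x)] = (-1, z)   # against the arrows: -e_z

def e(i):
    """Basis octonion e_i as an 8-vector (index 0 = real part)."""
    v = [0.0] * 8
    v[i] = 1.0
    return v

def mul(p, q):
    """Multiply two octonions given as 8-vectors."""
    r = [0.0] * 8
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if pi == 0 or qj == 0:
                continue
            if i == 0:
                s, k = 1, j           # 1 * e_j = e_j
            elif j == 0:
                s, k = 1, i           # e_i * 1 = e_i
            elif i == j:
                s, k = -1, 0          # e_i * e_i = -1
            else:
                s, k = table[(i, j)]
            r[k] += s * pi * qj
    return r

assert mul(e(1), e(2)) == e(4)       # e1*e2 = e4
assert mul(e(2), e(1))[4] == -1.0    # reversed order picks up a minus sign
lhs = mul(mul(e(1), e(2)), e(3))     # (e1*e2)*e3 = -e6
rhs = mul(e(1), mul(e(2), e(3)))     # e1*(e2*e3) = +e6
assert lhs != rhs                    # nonassociative
```

In this convention (e1 e2)e3 = -e6 while e1(e2 e3) = +e6: exactly the minus sign under re-bracketing that the caption describes.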
The first of these is Garrett Lisi's (2007) "An Exceptionally Simple Theory of Everything", which caused a convulsion of debate throughout the physics community when it was first proposed. After getting his Ph.D., Lisi left academia and moved to Maui, expressing his dissatisfaction with the state of theoretical physics. On Maui, Lisi volunteered as a staff member at a local school, and split his time between working on his own physics research and surfing. On July 31, 2006, Lisi was awarded an FQXi grant to develop his research in quantum mechanics and unification. On June 9, 2007, Lisi realized that the algebraic structure he had constructed to unify the standard model of particle physics with general relativity partially matched part of the algebraic structure of the E8 Lie group. On July 8, 2009, at an FQXi conference in the Azores, Lisi made a public bet with Frank Wilczek that superparticles would not be detected by July 8, 2015. After a one-year extension to allow for more data collection from the Large Hadron Collider, Wilczek conceded the superparticle bet to Lisi in 2016.
Lisi's theory sets out a possible scheme for a theory uniting gravity with the other forces based on root vector systems generating E8 "via a superconnection described by the curvature and action over a four dimensional base manifold". Although this theory remains speculative, it brings together an ingenious utilization of the internal symmetries of E8 with the dynamical topology of the underlying manifold, retaining an intrinsic complementarity between discrete and continuous aspects, despite its manifestly algebraic basis.
Fig 31: Left: A depiction of Garrett Lisi's Exceptionally Simple Theory of Everything. Centre: Garrett Lisi (TED Talk).
However, a later paper (Distler and Garibaldi 2009) claims that any "Theory of Everything" obtained by embedding the gauge groups of gravity and the Standard Model into a real or complex form of E8 lacks certain representation-theoretic properties required by physical reality. Lisi (2011) has responded to critiques of his work. A 2015 discussion can be found at (www.physicsforums.com/threads/isthereanynewstogarrettlisitheory.790057/).
While the existence of the Higgs particle has now been confirmed in the first round of the LHC runs, completing the standard model of physics, there is still no experimental support for Supersymmetry.
One particularly significant prediction of this model is that the universe may be algebraically symmetry-broken so that the bosons and fermions give balanced positive and negative contributions to the mass-energy of the universe, but collectively rather than in supersymmetric pairs, while each has different numbers and arrangements of particles, as is the case in the standard model. In the 240-vector root system of E8, there are 2^{2}·^{8}C_{2} = 112 'bosonic' root vectors with integer coordinates and 128 = 2^{8}/2 'fermionic' ones with half-integer coordinates. Both types are 8D vectors with Pythagorean length 2^{1/2} and coordinates adding to an even number, hence the 128 rather than 256. Stephen Adler, in a (2014) paper, has combined SU(8) unification and supergravity into a non-supersymmetric model based on such a complementation.
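The 112 + 128 = 240 count is easy to verify directly. A short Python enumeration (a sketch of the standard E8 root construction, not tied to any particular physical model) generates both families and checks their lengths and parity:

```python
from itertools import combinations, product

def e8_roots():
    roots = []
    # 'Bosonic' roots: +/-1 in exactly two of the eight coordinates,
    # giving 2^2 * C(8,2) = 4 * 28 = 112 vectors.
    for i, j in combinations(range(8), 2):
        for si, sj in product((1, -1), repeat=2):
            v = [0] * 8
            v[i], v[j] = si, sj
            roots.append(tuple(v))
    # 'Fermionic' roots: every coordinate +/-1/2, with an even number
    # of minus signs so the coordinates sum to an even integer,
    # giving 2^8 / 2 = 128 vectors.
    for signs in product((0.5, -0.5), repeat=8):
        if signs.count(-0.5) % 2 == 0:
            roots.append(signs)
    return roots

roots = e8_roots()
assert len(roots) == 240
# Every root has squared length 2, i.e. Pythagorean length 2^(1/2)
assert all(abs(sum(x * x for x in v) - 2) < 1e-12 for v in roots)
# Every root's coordinates sum to an even number
assert all(sum(v) % 2 == 0 for v in roots)
```

The parity condition is exactly what halves 256 down to 128 in the half-integer family.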
Fig 31b: Cohl Furey describing how C⊗O, through the ideals of the Clifford algebra Cl_{6}, can be shown to compose into singlets, doublets and so on, forming a system equivalent to a set of quarks and leptons with quantized electric charge, as in the standard model. (Videos)
The second is Cohl Furey (2014–2018), a Canadian physicist now at Cambridge, who has developed ways of generating key features of the standard model, together with special relativity, using the tensor product R⊗C⊗H⊗O of the four unique normed division algebras: the reals, complex numbers, quaternions and octonions. This entity is sometimes called the Dixon algebra, after Geoffrey Dixon, a physicist who first took this tack in the 1970s and '80s before failing to get a faculty job and leaving the field. (Dixon forwarded me a passage from his memoirs: "What I had was an out-of-control intuition that these algebras were key to understanding particle physics, and I was willing to follow this intuition off a cliff if need be. Some might say I did.") Unlike Dixon, who attached other features of physics to the algebra, Furey works essentially with this entity operating upon itself.
In her (2015) thesis and more recent publications, she has shown that when this is cleanly split into the two factors C⊗H and C⊗O, the former can be transformed, through the Clifford algebra Cl_{2}, to form twistors and replicate the Lorentz transformations of special relativity, while the latter can be shown to factor into a single generation of SU(3)×SU(2)×U(1) components, transformed through the Clifford algebra Cl_{6}, identifying as ideals the quarks and leptons and their discrete electric charges characteristic of the standard model of physics. One can see some features of this immediately. If one of the octonionic unit vectors, say e_{7}, is fixed, the transformations respecting this constraint are those of SU(3). An ideal is a subalgebra in which every element of the algebra is mapped into the ideal by multiplication with any ideal element. This in turn means that under the operation of the other transformations the fundamental ideal substructure is preserved, giving the nascent particle stability under the wider transformations, producing an effect similar to that of the Kaluza-Klein theories above, but here more cleanly in the context of the standard model.
These relations are gradually being extended by Furey (2018) to include systems having features of the SU(5) extension of the standard model, without proton decay, and a development of electroweak parity violation. Multiplicative chains of elements of the R⊗C⊗H⊗O algebra can be shown to have 10 generators. Nine of the generators act like spatial dimensions, and the 10th, which has the opposite sign, behaves like time, in a manner similar to the 10 dimensions of some string theories.
These theories fit very neatly into the situation that has resulted from the failure of supersymmetric and other exotic supertheory particles to be detected, even at the high energies of the LHC experiments. The failure of these supertheories makes a succinct case for the alternative approach, which is to derive the symmetries defining the standard model from "inside", on the basis of the interactive symmetries of the four division algebras R, C, H and O. Furey's goal is to find the model that, "in hindsight, feels inevitable and that includes mass, the Higgs mechanism, gravity and spacetime". Extending the Lorentzian component to general relativity and gravity would effectively turn the standard model, or its further extension on the basis of fundamental algebraic symmetries, directly into the TOE everyone has been looking for.
Like Dixon and Lisi, Furey knows this path is perilous, noting that if a faculty position isn't forthcoming after her fellowship ends, there's always mixed martial arts, the ski slopes, or busking with her accordion, as she has done in the past to make ends meet. "Accordions are the octonions of the music world," she said, "tragically misunderstood. Even if I pursued that, I would always be working on this project."
Superfluidity and Quasiparticles: An indication of the possible complexity of a TOE uniting gravity and quantum field theories comes from superfluid helium-3, which forms a superfluid at a lower temperature than helium-4, because ^{3}He atoms are fermionic and, unlike bosonic ^{4}He atoms, have to condense into bosonic pairs before superfluidity ensues. At close to absolute zero, helium-3 is superfluid, and as the temperature rises fractionally, a number of bound quantum quasiparticles form in the medium (Dobbs). Several of the known properties of unified field theories can be modeled using superfluidity on the one hand and these bound structures on the other, as equivalents of gravitational and the other quantum fields. This indicates that the theory sought may not just be a limit of gravitation and quantum fields, but a deeper theory in which both of these are merely stability states. A simulation in helium-3 modelling a collision of branes using the A and B phases of the superfluid (Bradley et al. 2008) illustrates these ideas, which have also been applied to the gravastar alternative to black holes.
Brane cosmology forms an explanation alternative to supersymmetry for the hierarchy problem: why gravity is so much weaker than the other forces. The central idea is that the visible, four-dimensional universe is restricted to a brane inside a higher-dimensional space, called the "bulk" or "hyperspace". If the additional dimensions are compact, as in compactified superstring theories, then the observed universe contains the extra dimensions, and no reference to the bulk is needed. Some versions of brane cosmology, based on the large extra dimension idea, can explain the weakness of gravity relative to the other fundamental forces of nature, thus solving the hierarchy problem. In the brane picture, the other three forces (electromagnetism and the weak and strong nuclear forces) are localized on the brane, i.e. 4D spacetime, but gravity has no such constraint and propagates in the full, e.g. 5D, spacetime. Much of the gravitational attractive power "leaks" into the bulk. As a consequence, the force of gravity should appear significantly stronger on small (subatomic or at least submillimetre) scales, where less gravitational force has "leaked". Various experiments are under way to test this. Extensions of the large extra dimension idea with supersymmetry in the bulk appear promising in addressing the so-called cosmological constant problem.
The Big Bounce and Loop Quantum Gravity: LQG is a theory that attempts to quantize general relativity. The quantum states in the theory do not live inside spacetime; rather, they themselves define spacetime. The solutions describe different possible spacetimes. Space becomes granular as a result of quantization, and can be viewed as an extremely fine fabric or network "woven" of finite loops. These networks of loops are called spin networks, whose evolution over time is called a spin foam. When the spin network is tied in a braid, it forms something like a particle. This entity is stable, and it can have electric charge and handedness. Some of the different braids match known particles, as shown in fig 32, where a complete twist corresponds to +1/3 or -1/3 unit of electric charge depending on the direction of the twist. Heavier particles are conceived as more complex braids in spacetime. The configuration can be stabilized against spacetime quantum fluctuations by considering each quantum of space as a bit of quantum information, resulting in a kind of quantum computation. The predicted size of this structure is the Planck length, ~10^{-35} m. There is no meaning to distance at scales smaller than this. LQG predicts that not just matter, but space itself, has an atomic structure. There are fundamental issues reconciling LQG with special relativity. LQG has also been invoked in theories seeking to integrate it with string theory, or as a holographic dual to it, which might also help resolve this inconsistency at its core.
Fig 32: Left: Loop quantum gravity is an alternative to superstring theory. Right: Braided spacetime gives an underlying basis for unifying the fundamental particles. It is similar to the preonic Rishon model, where TTT = antielectron; VVV = electron neutrino; TTV, TVT and VTT = three colours of up quarks; TVV, VTV and VVT = three colours of down antiquarks; with the other particles appearing from the antirishons (Nuclear Physics B 204 (1982) 141–167).
The most spectacular consequence of loop quantum cosmology is that the evolution of the universe can be continued beyond the Big Bang, which becomes a sort of cosmic Big Bounce, in which a previously existing universe collapsed, not to the point of singularity, but to a point before that where the quantum effects of gravity become so strongly repulsive that the universe rebounds back out, forming a new branch. Successive universes might thus be able to evolve their laws of nature. The big bounce has also been calculated to invoke a form of cosmic inflation (doi: 10.1016/j.physletb.2010.09.058). Hints of an experimental result that might confirm the existence of spacetime foam come from extreme high-energy gamma ray bursts from quasar black holes, where the highest-energy rays appear to arrive later than lower-energy ones, consistent with being slowed by spacetime quantization (arXiv:1305.2626).
In 2018, two new models of big bounce cosmology were proposed (arXiv:1710.05990, arXiv:1709.01999). Both get around the problem of the universe collapsing into a singularity, one by introducing a scalar field and the other by invoking a rotational energy confined in six compactified dimensions.
Fig 33: The big bounce in loop quantum gravity.
In general relativity, spacetime ceases to be a "container" over which physics takes place and has no objective physical meaning; instead, the gravitational interaction is represented as just one of the fields forming the world. Einstein's comment was "Beyond my wildest expectations". In quantum gravity, the problem of time remains an unsolved conceptual conflict between general relativity and quantum mechanics. Roughly speaking, the problem of time is that there is none in general relativity, because the Hamiltonian is a constraint that must vanish, whereas in quantum mechanics the Hamiltonian generates the time evolution of quantum states. We thus arrive at the conclusion that "nothing moves" ("there is no time") in general relativity, and since "there is no time", the usual interpretation of quantum mechanical measurements at given moments of time breaks down.
The ekpyrotic scenario, the term meaning 'conflagrationary', is a cosmological model of the early universe that explains the origin of the large-scale structure of the cosmos and also has a big bounce. The original ekpyrotic models relied on string theory, branes and extra dimensions, but most contemporary ekpyrotic and cyclic models use the same physical ingredients as inflationary models (quantum fields evolving in ordinary spacetime). The model has also been incorporated in the cyclic universe theory (or ekpyrotic cyclic universe theory), which proposes a complete cosmological history, both past and future. The name is well-suited to the theory, which addresses the fundamental question that remains unanswered by the big bang inflationary model: what happened before the big bang?
The explanation is that the big bang was a transition from a previous epoch of contraction to the present epoch of expansion. The key events that shaped our universe occurred before the bounce and, in a cyclic version, the universe bounces at regular intervals. It predicts a uniform, flat universe with patterns of hot and cold spots now visible in the cosmic microwave background (CMB), as has been confirmed by the WMAP and Planck satellite experiments. Discovery of the CMB was originally considered a landmark test of the big bang, but proponents of the ekpyrotic and cyclic theories have shown that the CMB is also consistent with a big bounce.
Fig 34: A cyclic ekpyrotic universe based on colliding branes, which are periodically attracted to one another, causing a big bang (inset left), resulting in a cycle, two periods of which are illustrated. Mutual forces between the branes may provide a feedback driving expansion and contraction.
The search for primordial gravitational waves in the CMB (which produce patterns of polarized light known as B-modes) may eventually help scientists distinguish between the rival theories, since the ekpyrotic and cyclic models predict that no B-mode patterns should be observed.
A key advantage of ekpyrotic and cyclic models is that they do not produce a multiverse. When the effects of quantum fluctuations are properly included in the big bang inflationary model, they prevent the universe from achieving the uniformity and flatness that cosmologists are trying to explain. Instead, inflated quantum fluctuations cause the universe to break up into patches with every conceivable combination of physical properties. Rather than making clear predictions, inflationary theory allows any outcome. The idea that the properties of our universe are an accident, arising from a theory that allows a multiverse of other possibilities, is hard to reconcile with the fact that the universe is extraordinarily simple (uniform and flat) on large scales and that elementary particles appear to be described by fundamental symmetries.
There are two types of polarization, called E-modes and B-modes. This is in analogy to electrostatics, in which the electric field (E-field) has a vanishing curl and the magnetic field (B-field) has a vanishing divergence. The E-modes arise naturally from scattering in a heterogeneous plasma. The B-modes are not sourced by standard scalar perturbations. Instead they can be created either by gravitational lensing of E-modes, which was measured by the South Pole Telescope in 2013, or by gravitational waves arising from cosmic inflation. Detecting the B-modes is extremely difficult, as the degree of foreground contamination is unknown, and the weak gravitational lensing signal mixes the relatively strong E-mode signal with the B-mode signal. In 2014, astrophysicists of the BICEP2 collaboration announced the detection of inflationary gravitational waves in the B-mode power spectrum which, if confirmed, would have provided clear experimental evidence for the theory of inflation. However, based on the combined data of BICEP2 and Planck, the European Space Agency announced that the signal can be entirely attributed to dust in the Milky Way.
The possibilities remain open between our universe having unique laws derived from fundamental symmetries, or being one of many types of universe whose laws happen to support complexity and life: a 'many-universes' perspective. Some theories (Smolin R649) even suggest the laws of nature might be capable of evolution from universe to universe, resulting in one containing observers. The anthropic principle asserts that the existence of (conscious) observers is a constraint delimiting what laws of nature are possible. Anthropic arguments (Barrow and Tipler R45) may enable a form of self-selection, in the sense that simple universes which could not sustain life or observers would never be observed, guaranteeing that our universe has the dimensionality and symmetry-breakings giving rise to fundamental constants consistent with interactive fractal complexity (p 317). Regardless of these uncertainties in the final TOE, the general features of force unification, symmetry-breaking and inflation are likely to remain part of our understanding of the cosmic origin.
The Wave, the Particle and the Quantum
We all exist in a quantum universe, and the classical one we assume and link to our experience of the everyday world is just an extrapolation. To understand both the foundations of cosmology and the spooky world of quantum reality, we need to set aside the classical ideas of mechanism, determinism and the mathematical notions of sets made out of discrete points and come to terms with ultimate paradoxes of spacetime and complementarity. To fully understand the implications we need to examine all aspects of the universe in detail, from the smallest particles to the universe as a whole and only then come to a synthesis of the role complementarity and 'sexual paradox' may play at the cosmological level.
Our quantum world is very subtle and much more mysterious than a mechanical 'building blocks' view of the universe with simple separate classical particles interacting in empty space. Many people lead their lives at the macroscopic level as if quantum reality didn't exist, but quantum reality runs from the very foundations of physics to the ways we perceive. Our senses of sight, hearing, touch and taste/smell are all distinct quantum modes of interaction with the environment. Senses aren't just biological adaptations but fundamental quantum modes of information transfer, by photons, phonons, solitons and orbital interactions. Quantum processes such as tunneling are central to the function of our enzymes and to the ion channels and synapses that support our excitable neurons (Walker R724).
Fig 35: Bohr and Einstein. Their debate, which sparked the Copenhagen interpretation (that quantum mechanics describes only our knowledge of a system, not its actual state), eventually led to the discovery of quantum nonlocality.
The 'correspondence principle', by which the quantum world is supposed to fade into classical 'reality', is never fully realized. Many phenomena in the everyday world involve chance events which are themselves often sensitively related to uncertainties at the quantum level. Chaotic, self-critical and certain other processes may 'inflate' quantum effects into global fluctuations. Conscious interaction with the physical world may likewise depend both on quantum excitations and on the loophole of uncertainty in expressing 'free will'. We need to understand how quantum reality interacts with conscious experience; in doing so, however, we immediately find the most challenging examples of sexual paradox that lie at the core of the cosmological puzzle: wave-particle complementarity. A quantum manifests in two complementary ways: as a nonlocal flowing 'wave', which has a frequency and spatial extension, and as a localized 'particle', which is created or destroyed in a single step. It can manifest as either but not both at the same time. All the weird quantum paradoxes of nonlocality, entanglement and wave function collapse emerge from this complementary relationship. To understand the full dimensions of this mystery we need to see how this strange reality was discovered, and do a little fairly simple maths.
In the late 19th century, classical physics seemed to have captured all the phenomena of reality, including Clerk Maxwell's equations for the electromagnetic transmission of light: ∇^{2}E = μ_{0}ε_{0} ∂^{2}E/∂t^{2}, where c = 1/(μ_{0}ε_{0})^{1/2}.
However, Lord Kelvin noticed what he called 'two small dark clouds on the horizon', which together plunged classical physics into the quantum-theoretic age.
Why we don't burn to a crisp: The first of these was blackbody radiation, named after the thermal radiation from a dark cavity, which also describes bright thermal objects like the sun. We know the sun emits some ultraviolet, which can burn us, but not as much as the peak of visible light. If classical physics were true, it should emit more ultraviolet, and even more X-rays and gamma rays, a situation called the ultraviolet catastrophe.
Fig 36: The solar spectrum (Fraunhofer 1814) and Planck's radiation law both peak at about 5,000 ^{o}C.
Planck eventually solved the problem by quantizing the radiation into little packets of energy proportional to frequency by a constant h, called quanta. The particles responsible for this packeting are now identified as photons. The answer to the problem is this: in the classical view, the energy distribution should increase endlessly into the high frequencies, but in the quantum view, to release a particulate photon of a given frequency, there has to be an atom somewhere with an electron energized enough to radiate the photon, so the energy is limited by the temperature of the thermal body. Thus, because the photons come in quanta, or packets, the radiation cannot climb endlessly into the ultraviolet. Planck's equation is displayed in fig 36. It starts out growing for small energies but falls off exponentially after the peak, corresponding to the exponentially rarer thermodynamic excitations at a given temperature.
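Planck's law can be evaluated numerically to see both behaviours just described: growth at low frequencies and the exponential cut-off that averts the ultraviolet catastrophe. A minimal Python sketch (the solar surface temperature of roughly 5800 K is an illustrative input):

```python
import math

h = 6.62607015e-34   # Planck's constant, J s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann's constant, J/K

def planck(lam, T):
    """Spectral radiance B(lambda, T) of a black body, W sr^-1 m^-3."""
    return (2 * h * c**2 / lam**5) / (math.exp(h * c / (lam * kB * T)) - 1)

T = 5800.0  # roughly the solar surface temperature, K
# Scan wavelengths from 100 nm to 3000 nm and locate the peak
lams = [1e-9 * x for x in range(100, 3001)]
peak = max(lams, key=lambda lam: planck(lam, T))

# Wien's displacement law predicts the peak at ~2.898e-3 / T ~ 500 nm,
# i.e. in the visible, not runaway into the ultraviolet
assert abs(peak - 2.898e-3 / T) < 5e-9
# Exponential cut-off: the deep ultraviolet is strongly suppressed
assert planck(100e-9, T) < planck(peak, T) / 100
```

The same function diverges as 1/lam^4 if the exponential factor is dropped, which is exactly the classical Rayleigh-Jeans catastrophe the quantum hypothesis cures.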
The Photoelectric Effect and Einstein's Law: Einstein made the next breakthrough addressing the other dark cloud, the photoelectric effect. If you shine light on a plate in a vacuum valve and vary the voltage required to stop the resulting current flow, you find that more light means more current, but no more voltage. The stopping voltage turns out to depend only on the frequency. That is, the energy per electron doesn't change, just the flow rate. This makes no sense with a classical wave, because a bigger wave has both more flow and more energy.
Fig 37: Photoelectric effect apparatus
The answer is that light of a given frequency contains particles called photons. The more photons, the more excited electrons cross the vacuum by gaining this energy, but there is no change in the energy per electron, because each photon has the same energy for a given colour (frequency), regardless of how bright the light.
Einstein solved this problem by realizing the energy of any particle is proportional to its frequency as a wave by the same factor h, Planck's constant, the fundamental unit of quantumness. Energy is thus intimately related to frequency; in a sense it IS frequency. Measuring one is necessarily measuring the other. We can thus write E = hν (1)
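Equation (1) turns a colour directly into an energy. As a quick numerical illustration (the 500 nm green wavelength is an assumed example):

```python
h = 6.62607015e-34          # Planck's constant, J s
c = 2.99792458e8            # speed of light, m/s
e_charge = 1.602176634e-19  # electron charge: J per eV

lam = 500e-9                # 500 nm green light (illustrative choice)
nu = c / lam                # frequency, ~6e14 Hz
E = h * nu                  # Einstein's law E = h*nu

# Each photon of this colour carries ~2.5 eV, however bright the beam:
# brighter light means MORE photons, not more energetic ones.
print(E / e_charge)
assert 2.3 < E / e_charge < 2.6
```

This is why the stopping voltage in the photoelectric experiment tracks frequency and ignores intensity.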
Quantum Uncertainty: Suppose we try to imagine how we would calculate the frequency of a wave if we had no means to examine it except by using another similar wave and counting the number of beats that the 'strange wave' makes against the standard wave we have generated. This is exactly the situation we face in quantum physics, because all our tools are ultimately made up of the same kinds of wave-particle quanta we are trying to investigate. If we can't measure the amplitude of the wave at a given time, but only how many beats occur in a given period, we can only determine the frequency with any accuracy by letting several beats pass. We have then, however, let a considerable time elapse, so we don't know exactly when the frequency had this value.
The closer we choose our frequency to get a given accuracy, the longer the beats take to occur. We thus cannot know the time and the frequency simultaneously: the more precisely we try to define the frequency, the more the time is smeared out. Measuring a wave frequency with beats has an intrinsic uncertainty as to the time, which becomes a smeared-out interval. The relationship between the frequency uncertainty and the time is: Δν Δt ≥ 1/4π (2)
Fig 38: Waves and beats.
Despite gaining his fame for discovering relativity, and the doom equation E = mc^{2} which made the atom bomb possible, Einstein, possibly in cooperation with his wife, also made a critical discovery about the quantum. Einstein's law associates with every energetic particle a frequency ν = E/h.
If we apply equations (1) & (2) together, we immediately get the famous Heisenberg uncertainty relation ΔE Δt ≥ h/4π = ħ/2. It tells us something is happening which is impossible in the classical world. We can't know the energy of a quantum interaction and the time it happened simultaneously. Energy and time have entered into a primal type of prisoners' dilemma catch-22. The closer we try to tie down the energy, the less precisely we know the time. This peculiar relationship places a specific taboo on knowing all the features of a situation and means we cannot predict precise outcomes, only probabilities. The same goes for momentum and position in each of the three spatial dimensions. Notice also that this links energy and momentum, time and space, and frequency and wavelength as three manifestations of one another. The way in which this happens is illuminating. Each quantum can be conceived as a particle or as a wave, but not both at the same time. Depending on how we are interacting with it or describing it, it may appear as either.
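One physical consequence: a quantum state that exists only briefly cannot have a sharply defined energy. A back-of-envelope sketch (the 10 ns lifetime is an assumed illustrative value, typical of optical atomic transitions):

```python
hbar = 1.054571817e-34      # reduced Planck constant h/2pi, J s
e_charge = 1.602176634e-19  # J per eV

lifetime = 1e-8             # assumed excited-state lifetime: 10 ns
dE_min = hbar / (2 * lifetime)  # minimum energy spread from dE*dt >= hbar/2

# ~3e-8 eV: pinning the emission time to 10 ns forces this much energy
# blur, observed as the finite 'natural linewidth' of the spectral line.
print(dE_min / e_charge)
assert 1e-8 < dE_min / e_charge < 1e-7
```

The blur is tiny against the ~2 eV of an optical photon, which is why spectral lines look sharp, yet it is measurable and never zero.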
Quantum Chemistry: All particles, such as the electrons, protons and neutrons which make up the atoms of our chemical elements and molecules, exist as both particles and waves. The orbitals of the electrons around atoms, and those linking each molecule together, occur only at the energies and sizes which correspond to a perfect standing wave, forming a set of discrete levels like the layers of an onion. These in turn determine the chemical properties of each substance. Because the molecular orbitals formed between a pair of atoms have lower energy than their individual atomic counterparts, the atoms react to form a molecule, releasing the spare energy as heat. The characteristic energy differences between the levels of a given atom can be seen, both on earth and in the universe at large, as emission or absorption lines in the electromagnetic spectrum.
Fig 39: Quantum chemistry. (a) s, p, d, f orbitals have orbital angular momentum quantum numbers l = 0, 1, 2 and 3 respectively. Each occurs in a series of levels, forming the shells or orbitals of the atom. The first level 1s can contain 2 electrons of opposite spin. The second, with 2s and the three p orbitals 2p_{x}, 2p_{y} and 2p_{z}, can hold 8. These can form energy-balancing linear combinations, resulting in hybrid sp orbitals. (b) Two s orbitals form a lower-energy σ molecular orbital, as well as a higher-energy σ* repelling antibonding orbital if the electron spins are not complementary (see fermions below). (c) Bonding p orbitals can also form π orbitals. Six p orbitals can combine to form a single delocalized π molecular orbital, as in the benzene ring. Hybrid atomic orbitals sp, sp^{2} and sp^{3} lead to the linear, planar and tetrahedral bonding arrangements seen in many molecules, due to energy minimization. (d) Absorbed or emitted photons cause electron transitions between orbitals in hydrogen, giving rise to the signature of the hydrogen spectrum (e). This signature in space, redshifted far into the low frequencies, revealed the expanding universe.
Two-slit interference and Complementarity
We are all familiar with the fact that CDs have a rainbow appearance on their underside. This comes from the circular tracks, spaced a distance similar to the wavelength of visible light. If we used light of a single wavelength we would see light and dark bands. We can visualize this process more simply with just two slits, as in fig 40. When many photons pass through, their waves interfere as shown, and the photographic plate gets dark and light interference bands where the waves from the two slits reinforce or cancel, because the photons are more likely to end up where their superimposed wave amplitude is large. The experiment confirms the wave nature of light, since the size of the bands is determined by the distance between the slits in relation to the wavelength λ = c/ν, where c is the velocity of light.
We know each photon passes through both slits, because we can slow the experiment down so much that only one photon is released at a time and we still eventually get the interference pattern over time. Each photon released from the light bulb is emitted as a particle from a single hot atom, whose excited electron is jumping down from a high energy orbit to a lower one. It is thus released locally and as a single 'particle ' created by a single transition between two stable electron orbitals, but it spreads and passes through both slits as a wave. After this the two sets of waves interfere as shown in fig 40 to make light and dark bands on the photographic plate when the light is of a single frequency, and the rainbows we see on a CD or DVD when white light of many frequencies is reflected off the shiny rings between the grooves in the manner of a multislit apparatus.
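The band spacing follows directly from the geometry: adjacent bright fringes on a screen at distance L from slits separated by d lie λL/d apart. A minimal Python check (the laser wavelength, slit spacing and screen distance are illustrative choices):

```python
lam = 633e-9   # red laser light, 633 nm (illustrative)
d = 0.1e-3     # slit separation, 0.1 mm
L = 1.0        # slit-to-screen distance, m

# Bright fringes occur where the path difference d*sin(theta) = m*lambda;
# for small angles y_m = m*lambda*L/d, so adjacent fringes sit
# lambda*L/d apart on the screen.
spacing = lam * L / d
print(spacing * 1e3)   # fringe spacing in mm, easily visible by eye
assert abs(spacing - 6.33e-3) < 1e-5
```

Note how the sub-micron wavelength is amplified to millimetres by the ratio L/d, which is what makes the wave nature of light visible at human scale.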
The evolution of the wave is described by an equation involving rates of change of a wave function φ with respect to space and time. For example, for a massive particle in free space, we have the 1D differential equation iħ ∂φ/∂t = -(ħ^{2}/2m) ∂^{2}φ/∂x^{2}. For Schrödinger's and Dirac's wave equations see the appendix.
This equation emphasizes the relationship between space and time we see emerging in special relativity below. The complementary relationship between Schrödinger's continuous wave equation and Heisenberg's discrete matrix mechanics (see appendix), which in a sense mirrors the wave and particle aspects of the quantum, highlights a deeper complementarity in mathematics between the discrete operations of algebra and the continuous properties of calculus, which may also be expressed in the brain (p 367).
Fig 40: Twoslit interference experiment (Sci. Am. Jul 92)
For the bands to appear in the interference experiment, each single photon has to travel through both slits as a wave. If you try to put any form of transparent detector in the slits to tell if it went through one or both you will always find only one particle but now the interference pattern will be destroyed. This happens even if you use the gentlest forms of detection possible such as an empty resonant maser chamber (a maser is a microwave laser). Any measurement sensitive enough to detect a particle alters its momentum enough to smear the interference pattern into the same picture you would get if the particle just went through one slit. Knowing one aspect destroys the other.
Now another confounding twist to the catch 22. The photon has to be absorbed again as a particle by an atom on the photographic plate, or somewhere else, before or after, if it doesn't career forever through empty space, something we shall deal with shortly. Where exactly does it go? The rules of quantum mechanics are only statistical. They tell us only that the particle is more likely to end up where the amplitude of the wave is large, not where it will actually go on any one occasion. The probability is precisely the complex square of the wave's amplitude at any point (the Born rule): P(x) = |φ(x)|² = φ*(x)φ(x).
Hence the probability is spread throughout the extent of the wave function, extending throughout the universe at very low probabilities. Quantum theory thus describes all future (and past) states as probabilities. Unlike classical probabilities, we cannot find out more about the situation and reduce the probability to a certainty by deeper investigation, because of the limits imposed by quantum uncertainty. The photon could end up anywhere the wave is nonzero. Nobody can tell exactly where, for a single photon. Each individual photon really does seem to end up being absorbed as a particle somewhere, because we will get a scattered pattern of individual dark crystals on the film at very low light intensities, which slowly build up to make the bands again. This is the mysterious phenomenon called 'reduction, or collapse, of the wave packet'. Effectively the photon was in a superposition of states represented by all the possible locations within the wave, but suddenly became one of those possible states, now absorbed into a single localized atom where we can see its evidence as a silver mark on the film. Only when there are many photons does the behaviour average out to the wave distribution. Thus each photon seems to make its own mind up about where it is going to end up, with the proviso that on average many do this according to the wave amplitude's probability distribution. So is this quantum freewill? It may be.
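The statistical buildup of the bands from individual Born-rule "choices" can be simulated (a sketch with a toy amplitude, not a model of the actual apparatus):

```python
import numpy as np

# Born-rule sketch: each photon lands at one random point, weighted by
# |psi|^2; the bands only emerge statistically. Toy amplitude, assumed.
rng = np.random.default_rng(0)

y = np.linspace(-1.0, 1.0, 400)
psi = np.cos(3 * np.pi * y)       # toy two-slit amplitude
prob = np.abs(psi) ** 2           # Born rule: probability = |amplitude|^2
prob /= prob.sum()                # normalize over the grid

hits = rng.choice(y, size=50000, p=prob)   # 50,000 single-photon detections

counts, _ = np.histogram(hits, bins=40, range=(-1, 1))
# Bright bands collect many hits; dark bands (zeros of psi) very few.
print(counts.min(), counts.max())
```

Run with only a handful of hits, the pattern looks like random scatter; only the aggregate reproduces the wave's probability distribution, just as the film darkens band by band at low intensity.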
Fig 40a: Interference demonstrated in 2019 for large molecules (Fein et al. 2019).
Experiments can also be done using electrons, but in 2019 a team (Fein et al.) reported extending the domain of interference experiments to large molecules. They report interference of a molecular library of functionalized oligoporphyrins with masses beyond 25,000 Da (atomic mass units, 1/12 that of carbon-12), consisting of up to 2,000 atoms, by far the heaviest objects shown to exhibit matter-wave interference to date. Porphyrins, such as chlorophyll and heme, are polycyclic molecules which also appear in prebiotic syntheses. This shows that molecules, including those in biological organisms, are spreading as quantum waves.
The Cat Paradox and the Role of Consciousness
This situation is the subject of a famous thought experiment by Schrodinger, who invented the wave equation. In Schrodinger's cat paradox, we use an interference experiment with about one photon a second and we detect whether the photon hits one of the bright bands to the left (we can do the same thing measuring electron spin using an asymmetric magnetic field). If it does then a cat is killed by smashing a cyanide flask. Now when the experimenter opens the box, they find the cat is either alive or dead, but quantum theory simply tells us that the cat is both alive and dead, each with differing probabilities  superimposed alive and dead states. This is counterintuitive, but fundamental to quantum reality. The cat paradox can also apply to other variables such as temperature. In a classical situation, temperature is measured by establishing equilibrium with the process being measured, but in a quantum context for example in sampling the temperature of a quantum dot, uncertainty will reduce on measurement to a cat paradox situation (Miller & Anders 2018).
Fig 41: Cat paradox experiment variations (King)
In the cat paradox experiment, the wave function remains uncollapsed at least until the experimenter I opens the box. Heisenberg suggested representing the collapse as occurring when the system enters the domain of thermodynamic irreversibility, i.e. at C. Schrodinger suggested the formation of a permanent record, e.g. classical physical events D, E or computer data G, and Wigner (see below) pointed to a paradox with a second observer H. However even these classical outcomes could be superpositions, at least until a conscious observer experiences them, as the many-worlds theory below suggests. Schrodinger in "What is Life?" also viewed consciousness as primary: "The observer is never entirely replaced by instruments; for if he were, he could obviously obtain no knowledge whatsoever. ...They must be read! The observer's senses have to step in eventually. The most careful record, when not inspected, tells us nothing". This process is called quantum measurement.
John von Neumann went further and proposed that quantum observation is the action of a conscious mind and that everything in the universe that is subject to the laws of quantum physics creates one vast quantum superposition. But the conscious mind is different, being able to select out one of the quantum possibilities on offer, making it real  to that mind. Max Planck, the founder of quantum theory, said in 1931, "I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness".
Indeed there seems to be no way any purely physical nonconscious interaction can result in a quantum measurement, because when two inanimate objects interact they simply become quantum-mechanically entangled with one another, but no actual quantum measurement is performed. We thus simply have a larger physical system, again in a superposition of states. The claim that entanglement is observed only in microscopic systems and that, therefore, its peculiarities are allegedly irrelevant to the world of tables and chairs is patently untrue, as macroscopic systems have now also become entangled.
Collapse of the wave function projects the linear combination onto one of its basis states. Originally, it was thought by von Neumann and others that a measurement on a quantum system would inevitably destroy all quantum superpositions. Later, Lüders pointed out that certain superpositions should survive, so that a sequence of ideal measurements would preserve quantum coherence. In theory, an ideal measurement projects a quantum state onto the eigenbasis of the measurement observable, while preserving coherences between eigenstates that have the same eigenvalue, for example representing the same energy in different configurations. Experiments have now proved this possibility using the coupling of a trapped ion qutrit (a unit of quantum information realized by a quantum system described by a superposition of three mutually orthogonal quantum states) to the photon environment, by taking tomographic snapshots during the detection process (Pokorny F et al. (2020) Tracking the Dynamics of an Ideal Quantum Measurement Phys. Rev. Lett. doi:10.1103/PhysRevLett.124.080401).
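The surviving coherence in an ideal measurement can be sketched with a toy qutrit (a hedged illustration; the observable and state are invented for the example, not those of the trapped-ion experiment):

```python
import numpy as np

# Lüders' ideal measurement sketch: project onto the eigenspace of the
# observed (degenerate) eigenvalue; coherence *within* that eigenspace
# survives as a superposition rather than collapsing to a single basis state.
psi = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)  # equal qutrit superposition

# Toy observable with eigenvalues (1, 1, 2): |0> and |1> share eigenvalue 1,
# e.g. the same energy in two different configurations.
P1 = np.diag([1.0, 1.0, 0.0])   # projector onto the degenerate subspace

post = P1 @ psi
post /= np.linalg.norm(post)    # renormalized post-measurement state

# Result is (|0> + |1>)/sqrt(2): a surviving superposition, not a mixture.
print(np.round(post, 3))
```

Measuring eigenvalue 1 thus tells us the state is not |2⟩, but leaves the relative phase between |0⟩ and |1⟩ intact, which is exactly the coherence the tomographic snapshots track.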
Some claim that quantum decoherence rules out consciousness as the agency of measurement. According to this claim, when a quantum system in a superposition state is probed, information about the overlapping possibilities in the superposition "leaks out" and becomes dispersed in the surrounding environment. This allegedly explains in a fairly mechanical manner why the superposition becomes indiscernible after measurement. But decoherence cannot explain how the state of the surrounding environment becomes definite to begin with, so it doesn't solve the measurement problem, or rule out the role of consciousness. Indeed Wojciech Zurek, who proposed the idea of decoherence, noted "an exhaustive answer to [the question of why we perceive a definite world] would undoubtedly have to involve a model of 'consciousness', since what we are really asking concerns our [observers'] impression that 'we are conscious' of just one of the alternatives".
Contrary to the assumed vulnerability of all quanta to decoherence, scientists have found that quasiparticles in quantum systems could be effectively immortal (Verrensen R et al. 2019 Avoided quasiparticle decay from strong quantum interactions Nature Physics doi:10.1038/s4156701905353). This doesn't mean they don't decay, but once they have, they are able to reorganise themselves back into existence, possibly ad infinitum. Although we know that quantum interactions are in principle time reversible, the assumption was that quasiparticles in interacting quantum systems would ultimately decay, consistent with the second law of thermodynamics. However, using detailed computer simulations, the researchers found that if this decay proceeds very quickly, an inverse reaction will occur after a certain time and the debris will converge again. This process can recur endlessly, and a sustained oscillation between decay and rebirth emerges. Because the oscillation is a wave transforming into matter and back, covered by wave-particle duality, entropy is not decreasing but remains constant. Examples of quasiparticles are the phonons of harmonic chemical bond excitation, magnons in exotic magnetic materials that are paradoxically stable, and rotons in superfluid helium.
As noted by Kastrup, Stapp, & Kafatos (2018), the Bell theorem experiments indicate that the everyday world we perceive does not exist until observed, which in turn suggests a primary role for mind in nature. The mind that underlies the world is a transpersonal mind behaving according to natural laws. It comprises but far transcends any individual psyche. The dynamics of all inanimate matter in the universe correspond to transpersonal mentation, just as an individual's brain activity, which is also made of matter, corresponds to personal mentation. This notion eliminates arbitrary discontinuities and provides the missing inner essence of the physical world: all matter, not only that in living brains, is the outer appearance of inner experience, different configurations of matter reflecting different patterns or modes of mental activity.
What philosophers of science such as Philip Goff have realized (Cook G 2020 Does Consciousness Pervade the Universe? Scientific American https://www.scientificamerican.com/article/doesconsciousnesspervadetheuniverse/.) is that physical science, for all its richness, is confined to telling us about the behavior of matter, what it does. Physics tells us, for example, that matter has mass and charge. These properties are completely defined in terms of behavior, things like attraction, repulsion, resistance to acceleration. Physics tells us absolutely nothing about what philosophers like to call the intrinsic nature of matter: what matter is, in and of itself. So it turns out that there is a huge hole in our scientific story. The proposal of the panpsychist is to put consciousness in that hole. Consciousness, for the panpsychist, is the intrinsic nature of matter. There's just matter, on this view, nothing supernatural or spiritual. But matter can be described from two perspectives. Physical science describes matter "from the outside," in terms of its behavior. But matter "from the inside"  i.e., in terms of its intrinsic nature  is constituted of forms of consciousness, just as our subjective experiences complement molecular brain function. We will see in the context of quantum entanglement that this interpretation extends to entangled particles and that there is no calculable upper limit on how complex such entanglements could become.
Nearly 60 years ago, the Nobel Prize–winning physicist Eugene Wigner captured one of the many oddities of quantum mechanics in a thought experiment. He imagined a friend of his, sealed in a lab, measuring a particle such as an atom while Wigner stood outside. Quantum mechanics famously allows particles to occupy many locations at once—a socalled superposition—but the friend's observation "collapses" the particle to just one spot. Yet for Wigner, the superposition remains: The collapse occurs only when he makes a measurement sometime later. Worse, Wigner also sees the friend in a superposition. Their experiences directly conflict.
Wigner's friend is a version of the cat paradox, in which a human or AI assistant G or H, possibly sensing directly inside the box or even being in the box (hence the catlike features in fig 41), reports on the result, establishing that unless the first conscious observer collapses the wave function, there could be a conscious observer in a multiplicity of alternative states, which is an omnipresent drawback of the many worlds view. In a macabre version the conscious assistant is of course the cat. According to the Copenhagen interpretation, it is not the system which collapses, but only our knowledge of its behavior. The superimposed state within the wave function is then not regarded as a real physical entity at all, but only a means of describing our knowledge of the quantum system, and calculating probabilities.
Fig 41a: Gedanken experiment of Frauchiger and Renner (2018) leading to possible internal inconsistency of the Copenhagen interpretation (Ananthaswamy 2018). The key assumption that appears to have been glossed over in the analysis is the role of the conscious observer. By putting conscious observers in the box, the assumption that an external quantum measurement will still find a superposition of states, when the conscious observer inside has seen a tail rather than a head and has furthermore acted upon it by preparing a particle in a certain state, goes to the heart of the question of consciousness being key to a quantum measurement collapsing the wave function. However subjective consciousness is not an objective physical system either, so grouping this into the assumption that quantum theory is universal is facile: quantum theory may indeed be universal to physical reality, but subjective consciousness is not an objective physical phenomenon as such. Thus it is not quantum theory that is at stake but the physical nature of subjective consciousness, which raises another deep question of how the physical brain actually generates subjective consciousness, a question that may be much more difficult to rationalize than quantum theory itself. Substitution of a quantum computer would appear to render the consistency assumption invalid, as the inconsistency is contained in the superposition of states, and Alice's friend and Bob's friend now don't receive one or other prepared particle type but a superposition. This would mean that both Alice and Bob are now observing mixed states and would thus disagree with a certain probability arising from the superpositions. This is effectively similar to the conclusion that the many worlds interpretation renders.
In an elaboration of the Wigner's friend idea, Frauchiger and Renner (2018) describe a gedanken experiment in which they have two Wigners, Alice and Bob, each doing an experiment on one of a pair of physicist friends, Alice F and Bob F, whom they each keep in a box containing a laboratory. They investigate the question whether quantum theory can, in principle, have universal validity. The idea is that, if the answer is yes, it must be possible to employ quantum theory to model complex systems that include agents who are themselves using quantum theory. Analysing the experiment under this presumption, they find that one agent, upon observing a particular measurement outcome, must conclude that another agent has predicted the opposite outcome with certainty. The agents' conclusions, although all derived within quantum theory, are thus claimed to be inconsistent.
One of the two friends, Alice F, can toss a coin and, using her knowledge of quantum physics, prepare a quantum message to send to Bob F. Using his knowledge of quantum theory, Bob F can detect Alice's message and guess the result of her coin toss. When the two Wigners effectively open their boxes, by making a quantum measurement on them as a complex quantum system in a superposition of states, in some situations they can conclude with certainty which side the coin landed on, but occasionally (about 1/12 of the time, given the design of the preparation) their conclusions are inconsistent.
This raises fundamental questions about the consistency of the Copenhagen interpretation, but the experiment requires knowing all the quantum variables of Alice F and Bob F, which is currently unfeasible, although a quantum computing version might achieve it. Other theories, from Bohmian and other hidden variable theories to gravitational collapse, violate one or more of the assumed conditions of Q (the Born rule), C (consistent reasoning) and S (single, non-multiple measurement values) and thus escape the inconsistency.
The contradiction between the superposition of the wave function and the observer's experience of the wave function having collapsed into one of its many superimposed possibilities has led to various spontaneous collapse theories, such as the GRW theory (Ghirardi, Rimini, Weber 1986), in which collapse is spontaneous and random, but because of the large number of potential particles with which the quantum can interact (and thus become entangled) during a measurement by a macroscopic device and/or conscious observer, the probability of collapse approaches unity. However neither this, nor the CSL (continuous spontaneous localization) theory (Ghirardi, Pearle, and Rimini 1990), which successfully models systems of identical particles, is free from physical contradictions. To avoid violating the principle of the conservation of energy, any collapse must be incomplete. Almost all of the wave function is contained at the one value, but there are one or more small tails where the function should intuitively equal zero but mathematically does not. Under the probability interpretation, this would mean that some matter has collapsed elsewhere than the measurement indicates, or that (with low probability) an object might jump from one collapsed state to another. These options are counterintuitive and physically unlikely.
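The amplification from a negligible per-particle collapse rate to near-certainty for macroscopic apparatus can be illustrated with a back-of-envelope sketch (GRW's proposed rate of roughly 10⁻¹⁶ per particle per second; the device particle count is an assumed round number):

```python
import math

# GRW amplification sketch: each particle localizes spontaneously at a
# tiny rate, so a lone particle essentially never collapses, but a pointer
# with ~1e23 entangled particles collapses almost instantly.
LAMBDA = 1e-16   # per-particle localization rate, 1/s (GRW's proposed value)

def p_collapse(n_particles, t):
    """Probability that at least one localization occurs within time t."""
    return 1.0 - math.exp(-LAMBDA * n_particles * t)

print(p_collapse(1, 1.0))        # single quantum over a second: effectively never
print(p_collapse(1e23, 1e-6))    # macroscopic device, 1 microsecond: near certain
```

The design choice in GRW is exactly this separation of scales: quantum behaviour survives for isolated particles while measurement outcomes become definite as soon as a macroscopic object is entangled.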
Fig 41b: Above: An experimental realization of the Wigner's friend setup showing there is no such thing as objective reality: quantum mechanics allows two observers to experience different, conflicting realities (arXiv:1902.05080 [quant-ph]). Below: the proof-of-principle experiment of Bong et al. demonstrating mutual inconsistency of 'No-Superdeterminism', 'Locality' and 'Absoluteness of Observed Events'.
An experimental realization of this dilemma has been devised (Proietti et al., Sci. Adv. 2019 5 eaaw9832 doi:10.1126/sciadv.aaw9832) as shown above using quantum entanglement. The experiment involves two people observing a single photon that can exist in one of two alignments, but until the moment someone actually measures it to determine which, the photon is in a superposition. A scientist analyzes the photon and determines its alignment. Another scientist, unaware of the first's measurement, is able to confirm that the photon  and thus the first scientist's measurement  still exists in a quantum superposition of possible outcomes. As a result, each scientist experiences a different reality  both "true" even though they disagree with each other. Realizing this idea involves an experimental setup with lasers, beam splitters, and a series of six photons that were measured by various pieces of equipment that stood in for the two scientists. Pairs of entangled photons from the source S0, in modes a and b, respectively, are distributed to Alice and Bob's friends, who locally measure their respective photon in the h, v basis using entangled sources SA, SB and typeI fusion gates. These use nonclassical interference on a polarising beam splitter (PBS) together with a set of halfwave (HWP) and quarterwave plate (QWP). The photons in modes α′ and β′ are detected using superconducting nanowire single photon detectors (SNSPD) to herald the successful measurement, while the photons in modes α and β record the friends' measurement results. Alice (Bob) then either performs a Bellstate measurement via nonclassical interference on a 50/50 beam splitter (BS) on modes a and α (b and β) to measure A1 (B1) and establish her (his) own fact, or removes the BS to measure A0 (B0), to infer the fact recorded by their respective friend.
In a subsequent experiment, Bong et al. (2020) transform the thought experiment into a mathematical theorem that confirms the irreconcilable contradiction at the heart of the Wigner scenario. The team also tests the theorem with an experiment (fig 41b), using photons as proxies for the humans, accompanied by new forms of Bell's inequalities, by building on a scenario with two separated but entangled friends. The researchers prove that if quantum evolution is controllable on the scale of an observer, then one of (1) 'No-Superdeterminism', the assumption of 'freedom of choice' used in derivations of Bell inequalities, that the experimental settings can be chosen freely, uncorrelated with any relevant variables prior to that choice, (2) 'Locality', or (3) 'Absoluteness of Observed Events', that every observed event exists absolutely, not relatively, must be false. Although the violation of Bell-type inequalities in such scenarios is not in general sufficient to demonstrate the contradiction between those three assumptions, new inequalities can be derived, in a theory-independent manner, that are violated by quantum correlations. This is demonstrated in a proof-of-principle experiment where a photon's path is deemed an observer. The authors argue that this new theorem places strictly stronger constraints on physical reality than Bell's theorem.
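The quantum correlations that violate Bell-type bounds can be checked in a few lines (a standard CHSH sketch for a singlet pair, not the new inequalities of Bong et al. themselves):

```python
import math

# CHSH sketch: for a singlet pair, the quantum correlation at analyzer
# angles a, b is E(a, b) = -cos(a - b). Any local hidden variable model
# obeys |S| <= 2; quantum mechanics reaches 2*sqrt(2) (Tsirelson bound).
def E(a, b):
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2                # Alice's two settings
b1, b2 = math.pi / 4, 3 * math.pi / 4    # Bob's two settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # exceeds the classical bound of 2
```

These particular analyzer angles are the ones that maximize the quantum violation; any attempt to reproduce |S| > 2 with pre-assigned local values fails.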
Penrose in objective reduction singles out gravity as the key unifying force and suggests that interaction with gravitons splits the wave function, causing reduction. Others try to discover hidden laws which might provide the subquantum process, for example the pilot wave theory, in which a well-defined particle is piloted within a nonlocal wave, as developed by David Bohm (1966). This can produce comparable results with quantum mechanics and provides an example of a plausible theory underlying quantum reality, but it has difficulties defining positions when new particles with new quantum degrees of freedom are created.
It has also met with a failure to replicate its results in analogous macroscopic systems using small oil droplets suspended on acoustically excited water waves. Work including that of a team led by Bohr's grandson Tomas (Andersen et al. 2015) has not reproduced the two-slit interference fringes that should appear, because the wave becomes separated by obstructions and one component decays, the particle introducing a term in the Hamiltonian which also influences the wave.
Another approach we will explore is the transactional interpretation, which has features of all these ideas and seeks to explain this process in terms of a handshaking relationship between the past and the future, in which spacetime itself becomes sexual. Key here is the fact that reduction is not like any other physical process. One cannot tell when or where it happens, again suggesting it is part of the 'spooky' interface between quantum and consciousness.
Whatever model of quantum mechanics is used, the Born rule assigning a particulate probability on the basis of the squared amplitude of the wave function applies. In the dynamical collapse theories such as GRW, collapse is assumed to be a random physical event. In the pilot wave theory it becomes the probability estimate of our incomplete knowledge of the presumed deterministic hidden variables, which remain inaccessible to direct measurement. In the Everett manyworlds interpretation, due to self locating uncertainty on a given branch, the credence you should attach to being on any particular branch of the wave function is just the amplitude squared for that branch, just as in ordinary quantum mechanics.
In many situations people try to explain away the intrinsic problems of uncertainty on the basis that, in the large real processes we witness, individual quantum uncertainties cancel in the law of averages of large numbers of particles. They will suggest, for example, that neurons are huge in terms of quantum phenomena and that the 'law of mass action' engulfs quantum effects. However brain processes are notoriously sensitive to external and internal perturbations. Moreover history itself is a unique process emerging out of a sequence of such unrepeated events at each stage of the process. Critical decisions we make become watersheds. History and evolution are both processes littered with unique idiosyncratic acts in a counterpoint to the major forces shaping the environment and landscape. Therefore it is unclear whether these processes adhere to the Born rule's probabilities, which become established only on multiple repetitions of a given detection event, any more than the uncertainty of the position of a single quantum detection within the entire wave function, as in the cat paradox, or in the interference experiment, where we cannot ask to find the position of a single photon because it is entirely uncertain. Chaotic processes are potentially able to inflate arbitrarily small fluctuations, so molecular chaos may 'inflate' the fluctuations associated with quantum uncertainty into macroscopic uncertainties.
As a final epithet to the cat paradox, physicists have discovered how to catch and even reverse a quantum jump mid-flight (Minev et al. 2019 Nature doi:10.1038/s415860191287z). An atom releasing a photon, as in fig 41, is making what is apparently a discrete transition from an excited state to a lower state, thereby releasing a single photon as a quantum; however the wave aspect of the photon has to be released over time, effectively as a continuous radiative transition. Quantum jumps were first observed in an atomic ion driven by a weak deterministic force while under strong continuous energy measurement. The times at which the discontinuous jump transitions occur are reputed to be fundamentally unpredictable. The experiment demonstrates that the jump from the ground state to an excited state of a superconducting artificial three-level atom can be tracked as it follows a predictable 'flight', by monitoring the population of an auxiliary energy level coupled to the ground state. The evolution of each completed jump is continuous, coherent and deterministic. Using real-time monitoring and feedback, the experimenters were also able to catch and reverse quantum jumps mid-flight, thus deterministically preventing their completion. The findings support quantum trajectory theory (Gardiner et al. 1992 Phys. Rev A 46/7 4363).
Fig 41b: Left: Threelevel atom possessing a hidden transition (shaded region) between its ground and dark state, driven by the Rabi drive. Quantum jumps between ground and dark are indirectly monitored by a stronger Rabi drive between the ground and the bright state, whose occupancy is continuously monitored by an auxiliary oscillator (LC circuit on the right), itself measured in reflection by continuouswave microwave light (depicted in light blue). When the atom is in the bright state, the resonance frequency of the LC circuit shifts to a lower frequency than when the atom is in ground or dark. Right: Catching the quantum jump midflight. a, The atom is initially prepared in the bright state. The readout tone and atom Rabi drive are turned on until the catch condition is fulfilled, consisting of the detection of a click followed by the absence of click detections for a total threshold catch time. The Rabi drive can be shut off prematurely, before the end of the catch. A tomography measurement is performed after the catch time. b Conditional tomography revealing the continuous, coherent and, surprisingly, deterministic flight (when completed) of the quantum jump from ground to dark. Data obtained from 6.8 × 10^{6} experimental realizations. Solid lines represent theoretical predictions.
The quantum jump method is an approach which operates by evolving the system's wave function in time with a pseudo-Hamiltonian, where at each time step a random quantum jump (discontinuous change) may take place with some probability. The calculated system state as a function of time is known as a quantum trajectory, and the desired density matrix as a function of time can be calculated by averaging over many such trajectories. In this case a three-level atom is excited by a harmonic stimulation (Rabi drive) close to the atom's resonant frequency.
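A minimal sketch of the trajectory method (a driven two-level atom rather than the experiment's three-level system; all rates are toy values chosen for illustration):

```python
import math, random

# Quantum-trajectory (Monte Carlo wave function) sketch: the state evolves
# under a non-Hermitian effective Hamiltonian; at each step a jump (photon
# emission) occurs with probability gamma*|ce|^2*dt, resetting to the
# ground state. Averaging many trajectories recovers the smooth dynamics.
random.seed(1)
gamma, omega = 1.0, 2.0          # decay rate and Rabi drive strength (toy units)
dt, steps, n_traj = 0.002, 3000, 100

excited_pop = [0.0] * steps
for _ in range(n_traj):
    cg, ce = 1.0 + 0j, 0.0 + 0j              # start in the ground state
    for t in range(steps):
        if random.random() < gamma * abs(ce) ** 2 * dt:
            cg, ce = 1.0 + 0j, 0.0 + 0j      # jump: photon emitted
        else:
            # first-order Euler step of the no-jump (non-Hermitian) evolution
            ncg = cg - 1j * (omega / 2) * ce * dt
            nce = ce + (-1j * (omega / 2) * cg - (gamma / 2) * ce) * dt
            norm = math.sqrt(abs(ncg) ** 2 + abs(nce) ** 2)
            cg, ce = ncg / norm, nce / norm
        excited_pop[t] += abs(ce) ** 2

excited_pop = [p / n_traj for p in excited_pop]
steady = sum(excited_pop[-500:]) / 500       # trajectory-averaged population
print(f"steady-state excited population ~ {steady:.2f}")
```

Each individual trajectory shows abrupt jumps between smooth coherent stretches, while the average over trajectories reproduces the density-matrix evolution, mirroring the continuous 'flight' that Minev et al. resolved within each jump.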
The Twotiming Nature of Special Relativity
We also live in a paradoxical relationship with space and time. While space is to all purposes symmetric and multidimensional, and not polarized in any particular direction, time is singular in the present and polarized between past and future. We talk about the arrow of time as a mystery related to the increasing disorder or entropy of the universe. We imagine spacetime as a four dimensional manifold but we live out a strange sequential reality in which the present is evanescent. In the words of the song Fly Like an Eagle  "time keeps slipping, slipping, slipping ... into the future ". There is also a polarized gulf between a past we can remember, the living present and a shadowy future of nascent potentialities and foreboding uncertainty. In a sense, space and time are complementary dimensionalities, which behave rather like real and imaginary complex variables, as we shall see below.
A second fundamentally important discovery in twentieth century physics, complementing quantum theory, which transformed our notions of time and space, was the special theory of relativity. In Maxwell's classical equations for the transmission of light, light always has the same velocity c, regardless of the movement of the observer or the source. Einstein realized that Maxwell's equations and the properties of physics could be preserved under all inertial systems, the principle of special relativity, only if the properties of space and time changed according to the Lorentz transformations as a particle approaches the velocity of light c: x' = (x − vt)/√(1 − v²/c²), t' = (t − vx/c²)/√(1 − v²/c²).
Space becomes shortened along the line of movement and time becomes dilated. Effectively space and time are each being rotated towards one another like a pair of closing scissors. Consequently the mass and energy of any particle with nonzero rest mass tend to infinity at the velocity of light: m = m₀/√(1 − v²/c²).
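The Lorentz factor governing both effects is easy to evaluate (a sketch; the muon lifetime is the standard textbook illustration, not drawn from this article):

```python
import math

# Lorentz factor sketch: gamma = 1/sqrt(1 - v^2/c^2) governs both length
# contraction (L = L0/gamma) and time dilation (t = gamma * t0).
C = 299_792_458.0   # speed of light, m/s

def lorentz_gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Textbook example: a muon at 0.99c lives ~2.2 microseconds in its own
# frame, but ~7 times longer in the lab frame, which is why cosmic-ray
# muons survive to reach the ground.
v = 0.99 * C
g = lorentz_gamma(v)
print(f"gamma at 0.99c = {g:.2f}")
print(f"lab-frame muon lifetime = {g * 2.2e-6:.2e} s")
```

Note how gamma stays close to 1 for everyday speeds and only diverges as v approaches c, which is why these effects escape ordinary experience.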
By integrating this equation, Einstein was able to deduce that the rest mass must also correspond to a huge energy E_{o}=m_{o}c^{2} which could be released for example in a nuclear explosion, as the mass of the radioactive products is less than the mass of the uranium that produces them, thus becoming the doom equation of the atom bomb.
In special relativity, space and time become related entities, which form a composite four dimensional spacetime, in which points are related by light cones, signals travelling at the speed of light from a given origin. In spacetime, time behaves differently to space. When time is squared it has a negative sign, just as the imaginary complex number i does, giving the spacetime distance: s² = x² + y² + z² − c²t² (3).
Hence the negative sign in the formula for spacetime distance (3) and the scissor-like reversed rotations of time and space into one another expressed in the Lorentz transformations. Stephen Hawking has noted that, if we treat time as an imaginary variable, the spacetime universe could become a closed 'manifold', rather like a 4D sphere, in which the cosmic origin is rather like the north pole of the Earth, because imaginary time will reverse the negative sign in (3) and give us the usual Pythagorean distance formula in 4D.
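Hawking's substitution can be checked directly: treating time as imaginary flips the sign of the time term, turning the Minkowski interval into an ordinary Euclidean one. A small sketch, with c = 1:

```python
# Spacetime interval with c = 1: s^2 = x^2 + y^2 + z^2 - t^2.
def interval2(t, x, y=0.0, z=0.0):
    return x**2 + y**2 + z**2 - t**2

# Ordinary (real) time: the time term enters with a minus sign.
print(interval2(3.0, 4.0))       # 16 - 9 = 7.0

# Hawking's trick: substitute imaginary time t -> i*tau.
s2 = interval2(1j * 3.0, 4.0)    # (i*tau)^2 = -tau^2 flips the sign
print(s2.real)                   # 16 + 9 = 25.0, the 4D Pythagorean form
```

With imaginary time the interval is just the 4D Pythagorean distance, which is what allows the closed-manifold picture of the cosmic origin.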
Fig 42: The spacetime light cone permits linkage of 'timelike' points connected by slower-than-light communication. In the 'spacelike' region, the temporal order of events, and causality, depends on the observer.
A significant feature of special relativity is the fact that the relativistic energy-momentum equation E^{2}=p^{2}+m^{2} (in units where c=1) has dual energy solutions: E=±(p^{2}+m^{2})^{1/2} (4)
The negative energy solution has reversed temporal direction. Effectively a negative energy antiparticle travelling backwards in time is exactly the same as a positive energy particle travelling forwards in time in the usual manner. The solution which travels in the normal direction (subsequent points are reached later) is called the retarded solution. The one which travels backwards in time is called the advanced solution. A photon is its own antiparticle so in this case we just have an advanced or retarded photon.
General relativity goes beyond this to associate gravity with the curvature of spacetime caused by mass-energy. The Einstein field equations are governed by the following relationship:
R_{μν}-(1/2)R g_{μν}+Λg_{μν}=(8πG/c^{4})T_{μν} (5)
where R_{μν} is the Ricci tensor representing curvature, R is the scalar curvature, g_{μν} is the metric tensor representing the gravitational potential, Λ is the cosmological constant, G is Newton's gravitational constant, c is the speed of light, and T_{μν} is the stress-energy tensor representing the gravitational mass-energy field. Hence the equation explains gravitation as the curvature of spacetime caused by mass-energy. Einstein introduced the cosmological constant as a mechanism to obtain a solution of the gravitational field equation that would lead to a static universe, but it was later realized that this would not be stable: local inhomogeneities would ultimately lead to either runaway expansion or contraction. The cosmological constant is equivalent to the vacuum energy – the energy density of empty space, which can be positive or negative. The stress-energy tensor responds to both pressure and mass-energy, as we shall see in dark energy models (equation 6).
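The field equations themselves are tensor equations beyond a short snippet, but their best-known exact solution, Schwarzschild's, yields a simple measure of how strongly mass-energy curves spacetime: the radius r_{s}=2GM/c^{2} at which curvature closes into a black hole. A quick illustration:

```python
G = 6.674e-11    # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """r_s = 2GM/c^2, the radius at which curvature traps even light."""
    return 2.0 * G * mass_kg / c ** 2

sun = 1.989e30      # mass of the Sun, kg
earth = 5.972e24    # mass of the Earth, kg
print(f"Sun:   {schwarzschild_radius(sun) / 1000:.2f} km")   # ~2.95 km
print(f"Earth: {schwarzschild_radius(earth) * 1000:.1f} mm") # ~8.9 mm
```

The Sun would need to be crushed to a sphere about 3 km in radius, and the Earth to under a centimetre, before spacetime curvature closed over them, which conveys how weak gravity is compared with the other forces.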
Reality and Virtuality: Quantum fields and Seething Uncertainty
We have learned about waves and particles, but what about fields? What about the strange action-at-a-distance of electromagnetism and gravity? Special relativity and quantum theory combine to provide a succinct explanation of electromagnetism – quantum electrodynamics – among the most accurate theories ever invented by the human mind, accurate to at least seven decimal places in describing the magnetic moment of an electron in terms of the hidden virtual photons which the electron emits and then almost immediately absorbs again.
Richard Feynman and others discovered the answer to this riddle by using uncertainty itself to do the job. The field is generated by particles propagated by a rule based on wave spreading. These particles are called virtual because they have no net positive energy and appear and disappear entirely within the window of quantum uncertainty, so we never see them except as expressed in the force itself. This seething tumult of virtual particles exactly produces the familiar effects of the electromagnetic field, and of other fields as well. We can find the force between two electrons by integrating the effects of every virtual photon which could be exchanged within the limits of uncertainty, and of every other possible virtual particle system, including pairs of electrons and positrons coming into fleeting existence. However, note that we can't really eliminate the wave description, because the amplitudes with which the particles are propagated from point to point are the hidden wave amplitudes. Uncertainty not only creates indefiniteness but actively creates every conceivable particle out of the vacuum, and does so of necessity. Special relativity, and the advanced and retarded solutions that arise from it, are also essential to enable the interactions that make up the fabric of the quantum field. The advanced solutions are required to have negative energy and the retarded solutions positive energy, giving the correct results for both scattering and electron-positron interactions within the field, so that electron scattering is the same as electron-positron creation and annihilation.
Fig 43: Quantum electrodynamics: (a,b) Two Feynman diagrams in the electromagnetic repulsion of two electrons. In the first a single virtual photon is exchanged between two electrons; in the second the photon becomes a virtual electron-positron pair during its transit. All such diagrams are integrated together to calculate the strength of the electromagnetic force. (c) A homologous weak-force diagram shows how neutron decay occurs via the W particle of the weak nuclear force, which is itself a heavy charged photon, as a result of symmetry-breaking. A down quark becoming up changes a neutron (ddu) into a proton (duu). (d) Time-reversed electron scattering is the same as positron creation and annihilation.
Each more complex interaction, involving one more particle vertex, is smaller by a factor of the 'fine structure constant' α=e^{2}/ħc≈1/137, where e is the electron charge and ħ and c are as above. This allows the contributions of all the diagrams to sum to a finite interaction, unlike many unified theories, which are plagued by infinities, as we shall see. The electromagnetic force is generated by virtual photons exchanged between charged particles, existing only for a time and energy permitted by the uncertainty relation. The closer the two electrons, the larger the energy fluctuation possible over the shorter time taken to travel between them, and hence the greater the force upon them. Even in the vacuum, where we think there is nothing at all, there is actually a sea of all possible particles being created and destroyed by the rules of uncertainty.
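The numerical value of the fine structure constant, and the rapid suppression of higher-order diagrams it implies, can be checked directly; here in SI units, where α=e²/(4πε₀ħc):

```python
import math

# CODATA values of the constants (SI units).
e    = 1.602176634e-19    # electron charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
hbar = 1.054571817e-34    # reduced Planck constant, J s
c    = 2.99792458e8       # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha = {alpha:.6f} ~ 1/{1/alpha:.1f}")   # ~ 1/137.0

# Each extra vertex in a Feynman diagram suppresses its contribution
# by roughly another factor of alpha, so the series converges quickly:
for n in range(1, 4):
    print(f"order {n}: ~ alpha^{n} = {alpha**n:.2e}")
```

Because each successive order is about 137 times smaller than the last, a handful of diagram orders already reproduces the electron's magnetic moment to many decimal places.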
The virtual particles of a force field and the real particles we experience as radiation, such as light, are one and the same. If we pump energy into the field, for example by oscillating it in a radio transmitter, the virtual photons composing the electromagnetic field become real positive-energy photons, entering the receiver as a coherent stream encoding the music we hear. Relativistic quantum field theories always have both advanced and retarded solutions, one with positive and the other with negative energy, because of the two square roots of special relativity (4). They are often described by Feynman spacetime diagrams. When the Feynman diagram for electron scattering becomes time-reversed, it becomes precisely the diagram for creation and annihilation of the electron's antiparticle, the positron, as shown in fig 43. This hints at a fundamental role for the exotic time-reversed advanced solutions.
As a simple example, the relativistic wave equation for a zero-spin particle with mass m (the Klein-Gordon equation) has two plane-wave solutions, ψ=e^{i(kx-ωt)} and ψ=e^{i(kx+ωt)}, where ω=(k^{2}c^{2}+m^{2}c^{4}/ħ^{2})^{1/2}, corresponding to the positive and negative energy solutions.
The weak and strong nuclear forces can be explained as quantum particle fields in a similar way, but gravity holds out further serious catch-22s. Gravity is associated with the curvature of spacetime, but this introduces fundamental contradictions with quantum field theory. To date there remains no fully consistent way to reconcile quantum field theory and gravitation, as we shall see.
The Spooky Nature of Quantum Entanglement
We have already seen how the photon wave passing through two slits ends up being absorbed by a single atom. But how does the wave ensure that two particles are not accidentally absorbed in far-flung parts of its wave function that are out of direct communication?
Because we can't sample two different points of a single-particle wave, it is impossible to devise an experiment which can test how a wave might collapse. One way to learn more about this situation is to find situations in which two or more correlated particles are released coherently in a single wave. This happens with many particles in a laser, in the holograms made by coherent laser light, and in Bose-Einstein condensates. It also happens in other situations where two particles of opposite spin or complementary polarization are created together. Many years ago Einstein, Podolsky and Rosen (EPR) suggested we might be able to break through the veil of quantum uncertainty this way, indirectly finding out more about a single particle than it is usually prepared to let on. Einstein commented "I do not believe God is playing dice with the universe", but that is precisely what quantum entanglement seems to entail.
Fig 44: (a) Pair-splitting experiment for photons using polarization. The first experiments were done on electron spins, using a Stern-Gerlach magnet's non-uniform field to separate spin-up and spin-down particles. (b) A variant of the experiment in (a), in which a polarized beam splitter leads to two detectors on each side, ensuring both polarization states are detected separately, avoiding errors from non-detection. (c) The results are consistent with quantum mechanics but inconsistent with Bell's inequalities for a locally causal system. Below is shown the CHSH (Clauser, Horne, Shimony, and Holt) inequality, an easier-to-use version of Bell's inequalities applicable to configuration (b), where N_{++} is the number of coincidences detected between D_{a+} and D_{b+} etc., and a, b are the detector angles. Using Bell's proof, the combined expectancies on the left are bounded above by 2, but the sinusoidal angular projection of quantum theory allows values up to 2√2 ≈ 2.83. (d) Time-varying analyzers are added, driven by an optical switch too fast for light to cross the apparatus, showing the effect persists even when light doesn't have time to cross the apparatus. (e) The calcium transition (Aspect R25). (f) An experiment using the GHZ (Greenberger, Horne, and Zeilinger) arrangement involving three entangled photons generated by a pulse passed through a down-converter (BBO) to create an entangled pair, beam-splitters (BS and PBS) as well as quarter- and half-wave plates, detected at D_{1}, D_{2}, D_{3}, with T used as a trigger, according to the GHZ equation below, where the three photons are collectively either horizontally or vertically polarized (Nature 403 515-9). GHZ can return a violation of local causality directly, without having to build a statistical distribution. A third relation, the Leggett-Garg inequality (arXiv:1304.5133), applies instead to the varying time of observations and has been tested on systems from qubits through to neutrino oscillations (arXiv:1602.0004, fig 18).
A calcium atom's electron excited into a higher spin-0 s-orbital cannot fall back to its original s-orbital in a single step, because a photon carries spin 1, so a transition between two spin-0 orbitals radiating a single photon would not conserve angular momentum. The atom can however radiate two photons together as one quantum event, cancelling one another's spins, to transit to its ground state via an intermediate spin-1 p-orbital. This releases a blue and a yellow photon, which travel off in opposite directions with complementary polarizations.
When we perform the experiment, it turns out that the polarization of neither photon is defined until we measure one of them. When we measure the polarization of one photon, the other immediately – instantaneously – has complementary polarization. The nature of the angular correlations between the detectors is inconsistent with any locally-causal theory – that is, no theory based on information exchanged between the detectors by particles at the speed of light can do the trick, as proved in a famous result by John Bell (1966) and subsequent experiments. The correlation persists even if the detectors' configurations are changed so fast that there is no time for information to be exchanged between them at the speed of light, as demonstrated by Alain Aspect (1982). This phenomenon has been called quantum nonlocality and, in its various forms, quantum 'entanglement', a name coined by Schrödinger, which is itself very suggestive of the throes of a sexual 'tryst'. The situation is subtly different from any kind of classical causality we can imagine. The information at either detector looks random until we compare the two. When we do, we find the two seemingly random lists are precisely correlated in a way which implies instantaneous correlation, but there is no way we can use the situation to send classically precise information faster than the speed of light by this means. We can see however in the correlations just how the ordinary one-particle wave function can be instantaneously autocorrelated and hence not slip up in its accounting during collapse.
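The quantum prediction behind these violations can be sketched in a few lines. Assuming the standard quantum correlation E(a,b)=cos 2(a−b) for polarization-entangled photons (sign conventions vary between write-ups), the CHSH combination at the optimal detector angles reaches 2√2, beyond the local-causal bound of 2:

```python
import math

def E(a, b):
    """Quantum correlation of polarization measurements at angles a, b."""
    return math.cos(2 * (a - b))

# Optimal CHSH settings: a = 0, a' = 45 deg, b = 22.5 deg, b' = 67.5 deg.
a, a2 = 0.0, math.radians(45)
b, b2 = math.radians(22.5), math.radians(67.5)

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"S = {S:.4f}")   # 2.8284 = 2*sqrt(2), violating the bound S <= 2
```

No assignment of pre-existing local values to the four measurements can push S past 2, which is why the observed 2√2 rules out locally-causal accounts.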
Entanglement has also been verified to apply not just to real particles, but to the virtual particles appearing and disappearing in the quantum vacuum (Benea-Chelmus et al. 2019). Although indirect effects of virtual particles are well known, it is only by probing a vacuum on very short timescales that the particles temporarily become real and can be directly observed. But do these particles appear completely randomly, or are they also correlated in space and time? The researchers have now provided an answer to this question, finding evidence for correlations between fluctuations in the electric field of a vacuum.
Entanglement raises a fundamental issue of nonlocality which poses a potential threat to special relativity, because strict locality in time and space is compromised by nonlocal interactions. Tumulka (2006) showed how all the empirical predictions of quantum mechanics for entangled pairs of particles could be reproduced by a modification of the GRW theory of spontaneous collapse, which is nonlocal, and yet it is fully compatible with the spacetime geometry of special relativity.
However, Albert and Galchen (2009) have described further circumstances in which quantum reality becomes too rich to be captured by any time-directed narrative, because special relativity mixes up space and time in a way that transforms quantum-mechanical entanglement among distinct physical systems into entanglement among physical situations at different times, since observers in motion with respect to one another can perceive the time order of events differently. The entangled histories of fig 47 give another illustration of this problem of temporality.
In 2019 a laser experiment for the first time visualized the entangled photons in a Bell-type experiment, as shown in fig 44b.
Fig 44b: Visualizing entangled photons in a Bell theorem test (Moreau P et al. 2019 Imaging Bell-type nonlocal behavior. Sci. Adv. 5 eaaw2563). Right: The apparatus used: A β-barium borate crystal pumped by an ultraviolet laser is used as a source of entangled photon pairs. The two photons are separated on a beam splitter (BS). An intensified camera triggered by a single-photon avalanche diode (SPAD) is used to acquire ghost images of a phase object placed on the path of the first photon and nonlocally filtered by four different spatial filters that can be displayed on a spatial light modulator (SLM 2) placed in the other arm. By being triggered by the SPAD, the camera acquires coincidence images that can be used to perform a Bell test. Left: The four coincidence-counting images are presented, which correspond to images of the phase circle acquired with the four phase filters with different orientations, θ_{2} = {0°, 45°, 90°, 135°}, necessary to perform the Bell test.
Other attempts to find hidden-variable theories that could explain entanglement tend to invoke retrocausality. Costa de Beauregard suggested that nonlocal interaction could be avoided if a retrocausal influence was traced back in time from the spatially-separated detectors to the source where the entangled pair was created. His supervisor Louis de Broglie forbade him to publish the idea for several years, until Richard Feynman showed the positron to be a time-reversed electron in quantum electrodynamics (fig 43). Feynman also showed that, using the fact that special relativity has time-reversed solutions, an absorber theory based on reversed causality could provide a valid description of physics. Price and Wharton (2015) have extended this idea into a full explanation of the Bell's theorem results. Likewise Sutherland (2017) invokes a Lagrangian theory with future boundary conditions to provide a description consistent with special relativity. The transactional interpretation discussed below and the two-state formalism of Aharonov & Vaidman (2014) provide further extensions of this approach.
Antony Valentini (2001, 2002, Valentini & Westman 2005) has developed an extension of the pilot wave theory which proposes that the wave function is genuinely nonlocal as Bohm's potential describes, but the probability interpretation is an expression of the fact that the hidden-variable realm has reached a thermodynamic equilibrium in the universe as we find it today. This notion was also recognised by Bohm. In this description, the early universe would have been far from equilibrium and nonlocal effects might have been able to pass direct signals in a way which is now forbidden. He envisages this equilibrium arose through a dynamical process which resulted in an exponentially increasing subquantum entropy converging to the probability interpretation. He suggests that some undetected nonequilibrium matter particles might still have interactions stemming from this early stage, which could be used to send instantaneous signals, to violate the uncertainty principle, to distinguish nonorthogonal quantum states without disturbing them, to eavesdrop on quantum key distribution, and to outpace quantum computation.
The issue of probabilities appears in all models of quantum mechanics. In the Copenhagen interpretation, it applies to our knowledge of the system.
Researchers report that, by beaming photons between the quantum satellite Micius and two distant ground stations, they have demonstrated Bell's theorem over a distance of more than 1,200 kilometres (Yin, J. et al. 2017 Science 356 1140-4) and are exploring using entanglement, via quantum teleportation, for encryption which cannot be read without disturbing the entanglement. Entanglement has also been demonstrated between two spatially separated components of a spin-squeezed Bose-Einstein condensate consisting of cold atoms in the same quantum state (doi:10.1126/science.aao1850).
There are several loopholes that might undermine the conclusions of the Bell's theorem quantum entanglement tests. The detection loophole is that not all photons produced in the experiment are detected. Then there is the communication loophole: if the entangled particles are too close together then, in principle, measurements made on one could affect the other without violating the speed-of-light limit. Fig 45(a) shows an experiment (arXiv:1608.01683) where both electrons and photons are used, closing these two loopholes simultaneously.
There is thus no easy way out for a locally realistic theory to circumvent the limits imposed by Bell's theorem, without undermining the principle that the observer has the free will to choose the orientations of the detectors. In order for the argument for Bell's inequality to follow, it is necessary to be able to speak meaningfully of what the result of the experiment would have been, had different choices been made. This assumption is called counterfactual definiteness.
But this means Bell tests still have a freedom-of-choice loophole, because they assume experimenters have free choice over which measurements they perform on each of the pair. Some unknown effect could be influencing both the particles and what tests are performed (either by affecting the choice of measurement directly, or by restricting the available options), producing correlations that give the illusion of entanglement. Superdeterminism asserts that such free choice can never happen, because the entire state of the universe, including the observer, is deterministic, so the experimenter can choose only those configurations already stipulated. Gerard 't Hooft has some papers exploring this idea (arXiv:0908.3408, arXiv:0701097, arXiv:hep-th/0104219).
However, as shown in fig 45(b), experiments have now been performed which significantly close the time frame on this loophole as well. To narrow the freedom-of-choice loophole, researchers have previously put 144 kilometres between the source of entangled particles and the random-number generator that they use to pick experimental settings. The distance between them means that if any unknown process influenced both setups, it would have to have done so at a point in time before the experiment. But this only rules out influences in the microseconds before. The latest paper (arXiv:1611.06985) has sought to push this time back dramatically, by using light from two distant stars to determine the experimental settings for each photon. The team picked which properties of the entangled photons to observe depending on whether its two telescopes detected incoming light as blue or red. The colour is decided when the light is emitted, and does not change during travel. This means that if some unknown effect, rather than quantum entanglement, explains the correlation, it would have to have been set in motion at least around 600 years ago, because the closest star is 575 light-years away. The approach has since pushed this limit back to billions of years by doing the experiment with light from distant high-redshift quasars (Rauch et al. 2018).
Fig 45: The 2015 experiment (a) closes two loopholes. Experiments that use entangled photons are prone to the detection loophole: not all photons produced in the experiment are detected, and sometimes as many as 80% are lost. Experimenters therefore have to assume that the properties of the photons they capture are representative of the entire set. To get around the detection loophole, physicists often use particles that are easier to keep track of than photons, such as atoms. But it is tough to separate atoms widely without destroying their entanglement. This opens the communication loophole: if the entangled atoms are too close together then, in principle, measurements made on one could affect the other without violating the speed-of-light limit. The team used a cunning technique called entanglement swapping to combine the benefits of using both light and matter. The researchers started with two unentangled electrons sitting in diamond crystals held in different labs 1.3 km apart. Each electron was individually entangled with a photon, and both of those photons were then zipped to a third location. There, the two photons were entangled with each other – and this caused both their partner electrons to become entangled, too. This did not work every time. In total, the team managed to generate 245 entangled pairs of electrons over the course of nine days. The team's measurements exceeded Bell's bound, once again supporting the standard quantum view. Moreover, the experiment closed both loopholes at once: because the electrons were easy to monitor, the detection loophole was not an issue, and they were separated far enough apart to close the communication loophole, too (Hensen et al. 2015). Experiment (b) substantially closes a third loophole, the freedom-of-choice loophole (Handsteiner et al. 2017).
Light from two telescopes trained on distant stars up to 600 light-years away is used to determine the choice of orientations, eliminating the gray light cone in the lower image, extending back up to 600 years. A third experiment (c) shows that two histories, in which the order of events and the accompanying changes induced by Anne and Bob are inverted in one of the two histories, can become entangled, so that the usual idea of causality does not apply (Rubino et al. 2016).
The BIG Bell Test asked volunteers to choose the measurements, in order to close the so-called 'freedom-of-choice loophole' noted above – the possibility that the particles themselves influence the choice of measurement. Such influence, if it existed, would invalidate the test; it would be like allowing students to write their own exam questions. This loophole cannot be closed by choosing with dice or random-number generators, because there is always the possibility that these physical systems are coordinated with the entangled particles. Human choices introduce the element of free will, by which people can choose independently of whatever the particles might be doing. The BIG Bell Test participants contributed unpredictable sequences of zeros and ones (bits) through an online video game. The bits were routed to state-of-the-art experiments in Brisbane, Shanghai, Vienna, Rome, Munich, Zurich, Nice, Barcelona, Buenos Aires, Concepción (Chile) and Boulder (Colorado), where they were used to set the angles of polarizers and other laboratory elements to determine how entangled particles were measured. The participants contributed 97,347,490 bits, making possible a strong test of local realism, as well as other experiments on realism in quantum mechanics (Abellan 2018).
It has also been proposed to test whether consciousness can alter the nature of Bell's theorem entanglement by performing a large number of measurements at A and B and extracting the small fraction in which the EEG signals caused changes to the settings at A and B after the particles departed their original position but before they arrived and were measured. If the amount of correlation between these measurements doesn't tally with previous Bell tests, it implies a violation of quantum theory, hinting that the measurements at A and B are being influenced by processes outside the purview of standard physics (arXiv:1705.04620v1). It has also been suggested to perform a more challenging experiment where the conscious intent of humans is used to perform the switching.
Entanglement also raises intriguing questions about the nature of randomness and may provide a way to make uncrackable forms of randomness for security protection. All randomness in the universe essentially derives from quantum uncertainty, so it is natural that quantum entanglement could become the basis of ultra-secure randomness. Encryption schemes used in modern cryptography make extensive use of random, unpredictable numbers to ensure that an adversary cannot decipher encrypted data or messages.
Fig 46: Experimental arrangement.
Security can be established only if the random-number generator satisfies two conditions. First, the user must know how the numbers have been generated, to verify that a valid procedure is being implemented. And second, the system must be a black box from the adversary's perspective, to prevent them from exploiting knowledge of its internal mechanism. Thanks to the laws of quantum physics, it is possible to create a provably secure random-number generator for which the user has no knowledge of the internal generation mechanism, whereas the adversary has a fully detailed description of it, yet still can't crack it. Bierhorst et al. (2018) prepared two photons in an entangled state. They then sent each photon to a different remote measurement station, where the photons' polarizations were recorded. During measurement, the photons were unable to interact with each other – the stations were so distant that this would require signals travelling faster than the speed of light. A key difficulty has been that most experiments suffer from the Bell-inequality loopholes, meaning they cannot be considered black-box demonstrations. For instance, the constraint that the two photons cannot exchange signals at subluminal speeds was not strictly enforced in two previous demonstrations of randomness generation based on Bell inequalities. In the loophole-free experiments to date, the magnitude of the Bell-inequality violations observed, although sufficient to confirm the correlated behaviour of the photons, was too low to verify the presence of randomness of sufficient quality for cryptographic purposes. The current experiment improves existing loophole-free experimental setups to the point at which the realization of such randomness becomes possible, although this threshold is barely reached.
Every time a photon is measured in the authors' experiment, the randomness generated is equivalent to tossing a coin that has a 99.98% probability of landing on heads. However, a powerful statistical technique ultimately enabled the authors to generate 1,024 random bits in about 10 minutes of data acquisition – corresponding to the measurement of 55 million photon pairs.
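The arithmetic behind these figures can be sketched as follows. The quantity relevant to cryptography is the min-entropy per measurement, −log₂(p) for the most likely outcome; the raw total below is illustrative, since the real extraction procedure is deliberately conservative.

```python
import math

p_heads = 0.9998          # probability of the likelier outcome per measurement

# Min-entropy per 'coin toss' -- the randomness figure relevant to cryptography.
h_min = -math.log2(p_heads)
print(f"min-entropy per toss = {h_min:.6f} bits")

pairs = 55_000_000        # photon pairs measured in ~10 minutes
raw_budget = pairs * h_min
print(f"raw min-entropy budget ~ {raw_budget:.0f} bits")
# The published extraction distilled 1,024 certified bits, far below this
# raw budget, because the statistical guarantees must hold against any adversary.
```

Each measurement contributes only about 0.0003 bits of certified randomness, which is why tens of millions of photon pairs are needed to distill a single kilobit.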
Entanglement can also apply not just to quantum states but to quantum histories, so that a photon which has specific states at two points in time cannot be assigned a state at intermediate points. Its history thus becomes a superposition of inconsistent histories which separate and come together again at the final point, illustrating many-worlds features of quantum superposition. An experimental realization has been performed (Cotler et al. 2016) as a temporal version of the three-degrees-of-freedom Bell experiment of Greenberger-Horne-Zeilinger (GHZ) type, where instead of three photons at the same time, the experiment explored a single photon at three different points in time. In this case the bounds of the Bell's theorem equivalent are again violated, showing no single definite history can be assigned, indicating entangled histories.
Fig 47: Quantum entangled causality – a particle can be switched by a qubit so that instead of traversing from A to B it makes the reverse transition. When the qubit is put in a superposition of 0 and 1 states, causality becomes a superposition of causal and causally-reversed trajectories. In a theoretical experiment using Bell theorem principles, researchers have found that if a massive object subject to relativity, such as a planet, can exist in a superposition of states, time itself would become entangled in a similar way to the causality above (doi:10.1038/s41467-019-11579-x).
Causality can also become entangled, so that intermediate states become a superposition of inconsistent causalities, as shown in fig 47 (Ball 2017). This could enlarge the scope of quantum computation: a causal superposition in the order of signals travelling through two gates means that each can be considered to send information to the other simultaneously, effectively doubling information-processing efficiency. It could also help unravel the relationship between quantum reality and relativity, since causality in spacetime is at the root of the difficulty of unifying these two theories, consistent with some novel axiomatic approaches to the foundations of quantum theory (arXiv:quant-ph/0101012, arXiv:quant-ph/0212084).
In an experimental realisation (Rubino, G. et al. 2017 Sci. Adv. 3, e1602589), a series of 'wave plates' (crystals that change a photon's polarization) and partial mirrors that reflect light while also letting some pass through act as the logic gates A and B to manipulate the polarization of a test photon. A control qubit determines whether the photon experiences AB or BA – or a causal superposition of both. But any attempt to find out whether the photon goes through gate A or gate B first will destroy the superposition of gate ordering. Another experiment (MacLean, J. et al. 2017 Nature Commun. 8, 15149) utilizes quantum circuits that manipulate photon states causally in other ways. A photon passes through gates A and B in that order, but its state is determined by a mixture of two causal procedures: either the effect of B is determined by the effect of A, or the effects of A and B are individually determined by some other event acting on them both. As with the previous experiment, it is not possible to assign a single causal 'story' to the state the photons acquire.
Fig 47b: Left: Quantum switch (a,b) H and V polarized paths, (c) diagonal path is a superposition (d) degrees of freedom. Right: Causal asymmetry is abolished in the quantum equivalents of classically causally asymmetric systems (arXiv:1712.02368 [quant-ph]).
A robust form of indefinite causal order eliminating previous loopholes can be observed in a quantum switch, where two operations act in a quantum superposition of the two possible orders (arXiv:1803.04302 2018). The operations cannot be distinguished by spatial or temporal position, and the experimenters show this quantum switch has no definite causal order by constructing a causal witness and measuring its value to be 18 standard deviations beyond the definite-order bound.
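The action of such a quantum switch can be sketched numerically. A minimal sketch, assuming for illustration a Hadamard gate for A and a bit flip for B (not the gates of the cited experiments): a control qubit in superposition routes a target qubit through the two orders AB and BA, and because A and B do not commute the causal order ends up entangled with the control.

```python
import numpy as np

# Illustrative, non-commuting stand-ins for operations A and B (assumed).
A = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
B = np.array([[0, 1], [1, 0]])                 # bit flip (Pauli-X)

# Quantum switch: control |0> applies A then B, control |1> applies B then A.
P0 = np.diag([1.0, 0.0])
P1 = np.diag([0.0, 1.0])
W = np.kron(P0, B @ A) + np.kron(P1, A @ B)

# Control in (|0>+|1>)/sqrt(2), target in |0>.
control = np.array([1, 1]) / np.sqrt(2)
target = np.array([1, 0])
out = W @ np.kron(control, target)

# out = (|0>|+> + |1>|->)/sqrt(2): a maximally entangled state, so no single
# ordering of A and B describes the target's history.
print(out.round(3))
```

Measuring the control in the +/- basis then perfectly predicts the target's state, the kind of correlation a causal witness exploits.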
Many classical systems possess causal asymmetry. For example, a hammer smashing a window pane has efficient forwards causality, resulting in the glass shattering stochastically into many pieces, while the retrocausal inference is exceedingly complex, requiring massive memory and computation to resolve how all the pieces would end up back together. Various other stochastic systems demonstrate causal asymmetry precisely. For example, take a random sequence of 0s and 1s and relabel as 2 every 0 where a 1 transitions to a 0, viz 11000101001 becomes 11200121201. In the forward direction, 1 is followed by a 1 or a 2, 2 is followed by a 0 or a 1, and 0 is followed by a 0 or a 1. But in the reverse process, a 2 is preceded by a 1 and a 0 is preceded by a 0 or a 2, while 1 can be preceded by 0, 1, or 2, requiring more memory and computation. Researchers studying the quantum versions of such systems (arXiv:1712.02368 [quant-ph]) have found that they retain causal symmetry, raising unsolved questions about both the causal arrow of time and increasing entropy.
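The 0/1/2 relabelling above can be checked directly. A minimal sketch (the function names are illustrative, not from the cited paper) that relabels the text's example string and tabulates the observed successors and predecessors of each symbol:

```python
def relabel(bits):
    """Replace every 0 that directly follows a 1 with a 2."""
    out = list(bits)
    for i in range(1, len(bits)):
        if bits[i - 1] == '1' and out[i] == '0':
            out[i] = '2'
    return ''.join(out)

def neighbours(s, reverse=False):
    """Observed successor (or predecessor) set for each symbol in s."""
    pairs = zip(s[1:], s) if reverse else zip(s, s[1:])
    table = {}
    for a, b in pairs:
        table.setdefault(a, set()).add(b)
    return table

s = relabel('11000101001')
print(s)                            # 11200121201, as in the text
print(neighbours(s))                # forward: at most 2 options per symbol
print(neighbours(s, reverse=True))  # reverse: 1 can follow a 0, 1 or 2
```

The forward tables never exceed two options, but running the string backwards a 1 has three possible predecessors, the extra bookkeeping that makes retrodiction costlier than prediction.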
In a simulation using IBM's public quantum computer, a team has demonstrated that it is possible for the arrow of time to be reversed in a quantum system (Lesovik et al. 2019). To achieve the time reversal, the research team developed an algorithm for IBM's public quantum computer that simulates the scattering of a particle. Reversing its quantum evolution is like reversing the rings created when a stone is thrown into a pond. In nature, restoring this particle to its original state, in essence putting the broken teacup back together, is impossible, because you would need a "supersystem" to manipulate the particle's quantum waves at every point. Their algorithm simulated an electron scattering by a two-level quantum system, "impersonated" by a quantum computer qubit, and its related evolution in time. The electron goes from a localized, or "seen", state to a scattered one. Then the algorithm throws the process into reverse, and the particle returns to its initial state; in other words, it moves back in time, if only by a tiny fraction of a second.
An intriguing illustration of how different the quantum world can be is given by a quantum game of NIM (arXiv:1304.5133) utilizing the Leggett-Garg (1985) Bell-type inequality (see figs 18, 44). This places a bound on measurements, say Q = ±1, at three times t_{1}, t_{2}, t_{3}, where the two-time correlations satisfy C_{12} + C_{23} - C_{13} ≤ 1. Consider a quantum version of the three-box game, played by Alice and Bob, who manipulate the same three-level system. Alice first prepares the system in state |3> and then evolves it with a unitary operator that takes |3> to (|1> + |2> + |3>)/√3. Bob then has a choice of measurement: with probability p_{1}^{B} he decides to test whether the system is in state |1> or not (classically, he opens box 1), and with probability p_{2}^{B} he tests whether the system is in state |2> or not. Alice then applies a second unitary to the system, which takes (|1> + |2> - |3>)/√3 to |3>, before she makes her final measurement to check the occupation of state |3>. If both Alice and Bob find the system in the state that they check (e.g. Bob measures level 1 and finds the system there and Alice, the same for state |3>), then Alice wins. If Alice finds the system in state |3>, but Bob's measurement fails, then Bob wins. Finally, if Alice doesn't find the system in state |3>, the game is drawn. In a realistic description of this game in which Bob's measurements are noninvasive, Alice's chance of winning can be no better than 50/50, as long as Bob chooses his measurements at random. In the quantum version, however, interference between various paths means that Alice wins every time. Alice's quantum strategy therefore outstrips all classical (i.e. realistic NIM) ones.
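The three-box game can be sketched with state vectors. A minimal sketch, assuming the standard "three-box paradox" choices: Alice's first unitary sends |3> to (|1>+|2>+|3>)/√3, and her final check of |3> after the second unitary amounts to projecting onto (|1>+|2>-|3>)/√3. Destructive interference then guarantees that whenever Bob's measurement fails, Alice never finds |3>, so Bob can never win.

```python
import numpy as np

basis = np.eye(3)                     # rows are |1>, |2>, |3>

# Alice's effective pre- and post-selected states (assumed standard choice).
psi_i = (basis[0] + basis[1] + basis[2]) / np.sqrt(3)
psi_f = (basis[0] + basis[1] - basis[2]) / np.sqrt(3)

def play(bob_box):
    """Return (P(Alice wins), P(Bob wins)) when Bob opens box 1 or 2."""
    P = np.outer(basis[bob_box], basis[bob_box])
    p_found = np.linalg.norm(P @ psi_i) ** 2          # Bob finds the system
    found = P @ psi_i / np.sqrt(p_found)
    missed = (psi_i - P @ psi_i) / np.sqrt(1 - p_found)
    alice_wins = p_found * abs(psi_f @ found) ** 2    # both succeed
    bob_wins = (1 - p_found) * abs(psi_f @ missed) ** 2
    return alice_wins, bob_wins

for box in (0, 1):                    # box 1 or box 2
    a, b = play(box)
    print(f"box {box + 1}: P(Alice wins) = {a:.3f}, P(Bob wins) = {b:.3f}")
```

Bob's winning probability comes out exactly zero for either box: every game that is not drawn is won by Alice, which is the sense in which she "wins every time".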
Various quantum games exploiting the additional knowledge of a system provided by entanglement and Bell's inequalities, fig 48 (right), have been shown to provide an indication of whether the universe is finitely or infinitely complex. Games like NIM, where the two players Alice and Bob do not have knowledge of one another's choices, can have the odds of both winning improved by using quantum entanglement and Bell's inequalities. Recent papers (arXiv:1606.03140, 1703.08618, 1709.05032 [quant-ph]) have shown that by increasing the number of entangled particles accessed, the chances of winning can be improved without bound, suggesting a way to actually test this question.
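The best-known game of this kind is the CHSH game: Alice and Bob receive random bits x and y and win if their answers satisfy a⊕b = x∧y. A minimal sketch of the optimal entangled strategy (this canonical example and its standard measurement angles are assumptions for illustration, not necessarily the games of the cited papers); the classical maximum is 0.75, the entangled strategy reaches cos²(π/8) ≈ 0.854.

```python
import numpy as np

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)   # shared Bell state

def proj(theta, outcome):
    """Projector for outcome 0/1 of a measurement in a basis rotated by theta."""
    v = (np.array([np.cos(theta), np.sin(theta)]) if outcome == 0
         else np.array([-np.sin(theta), np.cos(theta)]))
    return np.outer(v, v)

# Standard optimal angles for the CHSH game (assumed).
alice = {0: 0.0, 1: np.pi / 4}
bob = {0: np.pi / 8, 1: -np.pi / 8}

win = 0.0
for x in (0, 1):
    for y in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                if (a ^ b) == (x & y):           # winning condition
                    M = np.kron(proj(alice[x], a), proj(bob[y], b))
                    win += 0.25 * (phi_plus @ M @ phi_plus)

print(win)   # cos^2(pi/8) ~ 0.854, beating the classical bound of 0.75
```

Each of the four input pairs is won with the same probability cos²(π/8), which is exactly the Tsirelson bound for this game.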
Fig 48: Left: Schrodinger's cat split into two entangled boxes. (Right): Quantum NIM game using Bell's inequalities to improve success.
Scientists have in 2016 split Schrodinger's cat between two entangled boxes (Wang et al. 2016). Microwaves inside a superconducting aluminum cavity take the place of the cat. The microwaves' electric fields can be pointing in two opposing directions at the same time, just as Schrodinger's cat can be simultaneously alive and dead. Because the states of the two boxes are entangled, if the cat turns out to be alive in one box, it's also alive in the other: measurements from the two boxes will agree on the cat's status. For microwaves, this means the electric field will be in sync in both cavities. The scientists measured the cat states produced and found a fidelity of 81 percent. The result is a step toward quantum computing with such devices. The two cavities could serve the purpose of two quantum bits, or qubits. The cat states are more resistant to errors than other types of qubits, so the system could eventually lead to more fault-tolerant quantum computers.
This clash between subjective experience and quantum theory has led to much soul-searching. The Copenhagen interpretation says quantum theory just describes our state of knowledge of the system and is essentially incomplete. This effectively passes the problem back from physics to the observer.
Some physicists think all the possibilities happen and there is a probability universe for each case. This is called the many-worlds interpretation of Hugh Everett III. The universe becomes a superabundant superimposed set of all possible probability futures, and indeed all pasts as well, in a smeared-out 'holographic' multiverse in which everything happens. It suffers from a key difficulty: all the experience we have suggests just one possibility is chosen in each situation, the one we actually experience. Some scientists thus think collapse depends on a conscious observer. Many-worlds defenders claim an observer wouldn't see the probability branching because they too would be split, but this leaves us either with infinite split consciousness, or we lose all forms of decision-making process, all forms of historicity in which there is a distinct line of history, in which watershed events do actually occur, and the role of memory in representing it.
Fig 49: (Left) Basic intuition and experimental setup for entangled atoms. (a) When atoms spontaneously emit photons, phase coherence between the atoms leads to constructive interference and enhanced emission probability in a certain direction, measured by a single photon detector (SPD). Emission in any other direction is incoherent and hence not enhanced. If this phase coherence is generated by absorbing a single photon, the atoms are necessarily entangled. (Right) The experimental design used in the first experiment below.
Just as laser light is coherent, consisting of many photons in a single wave function, so it is possible for large populations of wave-particles to form entangled states. Researchers (arXiv:1703.04704) have demonstrated quantum entanglement of 16 million atoms, smashing the previous record of about 3,000 entangled atoms (fig 49). Meanwhile, another team (arXiv:1703.04709) used a similar technique to entangle over 200 groups of a billion atoms each. Both teams demonstrated entanglement using "quantum memories" consisting of a crystal interspersed with rare-earth ions designed to absorb a single photon and re-emit it after a short delay. The single photon is collectively absorbed by many rare-earth ions at once, entangling them. After tens of nanoseconds, the quantum memory emits an echo of the original photon: another photon continuing in the same direction as the photon that entered the crystal. By studying the echoes from single photons, the scientists quantified how much entanglement occurred in the crystals. The more reliable the timing and direction of the echo, the more extensive the entanglement was. While the second team based its measurement on the timing of the emitted photon, the first team focused on the direction of the photon.
Whole tardigrades have also been entangled in quantum dots (arXiv:2112.07978v2): "We observe coupling between the animal in cryptobiosis and a superconducting quantum bit and prepare a highly entangled state between this combined system and another qubit. The tardigrade itself is shown to be entangled with the remaining subsystems. The animal is then observed to return to its active form after 420 hours at sub 10 mK temperatures and pressure of 6 × 10^{−6} mbar, setting a new record for the conditions that a complex form of life can survive".
The amount of 'spooky action at a distance' that could be involved in the actions of the universe may prove to be incalculable, as potentially demonstrated in a proof of a theorem (Ji Z et al. 2020 "MIP∗ = RE" arXiv:2001.04383 [quant-ph]) combining complexity theory and quantum entanglement. This involves an analysis of a team of two players who are able to coordinate their actions through quantum entanglement, even though they are not allowed to talk to each other. This enables both players to 'win' more often than they would without quantum entanglement. But the authors show that it is intrinsically impossible for the two players to calculate an optimal strategy, which implies that it is impossible to calculate how much coordination they could theoretically reach. There is thus no algorithm that can tell you the maximal violation you can get in quantum mechanics through entanglement. The proof also settles Connes' embedding conjecture in the theory of operators used to provide the foundations of quantum mechanics. Operators are number matrices that can have either a finite or an infinite number of rows and columns, and have a crucial role in quantum theory, whereby each operator encodes an observable property of a physical object. Connes asked whether quantum systems with infinitely many measurable variables could be approximated by simpler systems that have a finite number, i.e. whether you could always approximate an infinite system with a finite one. The paper shows that the answer is no: there are, in principle, quantum systems that cannot be approximated by finite ones. This also means that it is impossible to calculate the amount of correlation that two such systems can display across space when entangled.
Entanglement and Quantum Gravity: Two papers have proposed investigating whether two systems can become entangled by gravitational attraction (doi:10.1103/PhysRevLett.119.240401, doi:10.1103/PhysRevLett.119.240402). Gravity is assumed to be mediated by the graviton, but the gravitational force is too weak at the particle level to have any hope of measuring single graviton interactions. The classical nature of general relativity has also caused some scientists to question whether gravity is a quantum force at all, exemplified by Penrose's idea of objective reduction, where gravitational interaction collapses the wave function, returning quantum superpositions to a classical end result. While we cannot measure a graviton directly, we might be able to detect whether gravitation can quantum entangle two systems, thus demonstrating that gravitation is also a quantum force and that quantum gravity is thus a reality. The basic form of the experiment is to prepare two quantum systems in a superposition of states and see if they become entangled by gravitational attraction while falling through a vacuum. The current suggestion is to take two microdiamonds and embed a nitrogen atom in each, which can be put in a superposition of spin up and down states by zapping it with a microwave pulse. A magnetic field is then applied so that the spin-up component moves left and the spin-down moves right, putting each in a superposition of two spatially separated states. The diamonds are then allowed to fall in a vacuum and tested for signs of entanglement between them caused by gravitational interaction as they fall.
Quantum Tunneling: In a version of the pair-splitting experiment (Chiao and Kwiat 1993), which illustrates the difficulty of using superluminal correlations to violate classical causality, one photon of a pair goes directly to a detector, while the other has to quantum tunnel through a partially reflecting mirror's energy barrier, designed so it succeeds 1% of the time. When the tunneling photon's arrival time is compared with the other's, it is detected sooner more than 50% of the time, indicating it was traveling up to 1.7 times the speed of light. The effect results from reshaping the wave: the leading edges of the two photons' waves arrive together, but because the tunneling photon's wave packet has been shortened, its peak, where detection is most probable, arrives sooner. However this doesn't mean it can be used to convey information faster than the speed of light, because the effect lies within the uncertainty of position of the photon that determined the tunneling in the first place.
Fig 50: In radioactivity, particles can escape the nucleus by quantum tunneling out, even though there is an energy barrier greater than their own energy holding them together. They can do this because the wave function extends through the barrier, declining exponentially, and there is a nonzero probability of finding the particle outside, since its wave function continues. The energy (frequency) is unchanged, but the amplitude (probability or intensity) is reduced. In the same way we can think of tunneling as a fluctuation of energy for a short enough time to jump over the barrier, which must be returned within the Heisenberg time limit. A game of looking-glass croquet above has Alice hitting rolled-up hedgehogs, each bearing an uncanny resemblance to Werner Heisenberg, towards a wall, overlooked by Einstein. Classically the hedgehogs always bounce off. Quantum-mechanically, however, a small probability exists that a hedgehog will appear on the far side. The puzzle facing quantum mechanics is: how long does it take to go through the wall? Does the traversal time violate Einstein's light-speed limit? In the pair-splitting experiment, when one photon is required to quantum tunnel, it seems to jump the barrier faster than light, so that it is likely to arrive sooner, but it is not able to do this in a way which violates Einsteinian causality by sending usable information faster than light.
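The exponential decline of the wave function inside the barrier can be made concrete with the textbook estimate T ≈ exp(-2κL), κ = √(2m(V-E))/ħ, which neglects the prefactor. A minimal sketch for an electron; the 1 eV energy and 2 eV barrier height are illustrative values, not from the experiments above:

```python
import numpy as np

hbar = 1.054571817e-34     # J s
m_e = 9.1093837015e-31     # electron mass, kg
eV = 1.602176634e-19       # J

def transmission(E_eV, V_eV, L_nm):
    """Approximate tunneling probability exp(-2*kappa*L) through a
    rectangular barrier of height V and width L, for E < V (prefactor
    neglected; illustrative estimate only)."""
    kappa = np.sqrt(2 * m_e * (V_eV - E_eV) * eV) / hbar
    return np.exp(-2 * kappa * L_nm * 1e-9)

# A 1 eV electron meeting a 2 eV barrier: each halving of the width
# boosts the hedgehog's chances of appearing on the far side enormously.
for L in (0.5, 1.0, 2.0):
    print(f"L = {L} nm  ->  T ~ {transmission(1.0, 2.0, L):.2e}")
```

The exponential dependence on width and on the energy deficit V-E is why alpha-decay half-lives range over many orders of magnitude for modest changes in barrier parameters.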
Since the first pair-splitting result in the 1980s there has been a veritable conjurer's collection of experiments, all of which verify the predictions of quantum mechanics in every case and confirm all the general principles of the pair-splitting experiment. Even if we clone photons to form quartets of correlated particles, any attempt to gain information about one of such a multiple collection collapses the correlations between the related twins. Furthermore, these effects can be retrospective, so that photons can be superpositions of states which were created at different times.
Superconductivity: Entanglement is also involved in superconductivity. Electrons in the material form orbiting pairs: because the positively charged atomic ions are attracted to the negative electrons, a small peak of atomic density forms in the neighbourhood of two electrons, which can cause them to form entrapped orbits even though the negatively charged electrons would naturally repel. The pairs cannot then collide with the atoms in the material, because the activation energy required for either electron to escape the attractive pair, exchanging phonons in their minimum energy configuration, is greater than the thermal energy of the material. Hence the electric current flows unobstructed.
Entanglement can also explain the Meissner effect, in which a magnet levitates above superconducting material. The magnetic field induces a current in the surface of the superconductor, and this current effectively excludes the magnetic field from the interior of the material, causing the magnet to hover. The current halts the photons of the magnetic field after they have travelled only a short distance through the superconductor. For the normally massless photons it is as if they have suddenly entered treacle, effectively giving them a mass. A similar mechanism may be behind the mass of all particles. The source of this mass is believed to be the Higgs field mediated by the Higgs boson, existing in a "condensed" state that impedes mediator particles such as the W and Z bosons in the same way that a superconductor's entangled electrons exclude the photons of a magnetic field (Quantum quirk may give objects mass New Scientist 24 October 2004).
Delayed Choice, Quantum Erasure, Entanglement Swapping and Procrastination
A counterintuitive aspect of quantum reality is that it is possible to change the way a quantum is detected after it has traversed its route, in such a way as to retrospectively determine whether it traversed both paths as a wave or just one as a particle, or, in the case of erasure, to merge it back into the entangled state.
In Wheeler's delayed choice experiment, illustrated in fig 51, we can sample photons either by an interference pattern, verifying they went along both paths (e.g. both sides of the galaxy in fig 14), or place separate directional detectors, which will detect that they went one way around only, as particles (destroying the interference pattern). Moreover, we can decide which to perform after the photon has passed the galaxy, at the end of its path. Thus the configuration of the latter parts of the wave appears to be able to alter the earlier history. The delayed choice experiment has a deep link with Schrodinger's cat, because opening the cat's box is exactly like using particle detectors, since it determines whether or not a scintillation particle was emitted, while leaving the box closed retains the superposition of the wave. Furthermore, Wheeler's purpose in proposing the experiment was to establish that antirealism, the idea that the photon doesn't have a state until it is measured, is inconsistent with a causal determinism that avoids retrocausality, i.e. the eventual configuration of the detectors altering the assumed path or paths the photon has taken.
Fig 51: (a) Wheeler delayed choice experiment on a cosmic scale. A very distant quasar is gravitationally lensed by an intervening galaxy. (b) An experimental implementation of Wheeler's idea along a satellite-ground interferometer that extends for thousands of kilometers in space (Vedovato et al. 2017), using shutters on an orbiting satellite (c, d).
Just how large such waves can become can be appreciated if we glance out at a distant galaxy, whose light has had to traverse the universe to reach us, perhaps taking as long as the history of Earth to get here. The ultimate size is as big as the universe. Only one photon is ever absorbed for each such wave, so once we detect it, the probability of finding the photon anywhere else, and hence the amplitude of the wave, must immediately become zero everywhere. How can this happen, if information cannot travel faster than the speed of light? For a large wave, such as light from a galaxy (and in principle for any wave), this collapse process has to cover the universe. When I shine my torch against the window, the amplitude of each photon is both reflected, so I can see it, and transmitted, escaping into the night sky. Although the wave may spread far and wide, if the particle is absorbed anywhere, the probability across vast tracts of space has to suddenly become zero. Moreover, collapse may involve the situation at the end of the path influencing the earlier history, as in the Wheeler delayed choice experiment.
In quantum erasure, it is also possible to 'uncollapse' or erase such losses of correlation by re-interfering the wave functions so we can no longer tell the difference; the superposition choices of the delayed choice experiment do this. We can induce information about one of the particles and then erase it again, by re-interfering it back into the wave function, provided we use none of its information: the quantum eraser. This successfully recreates the lost correlations. In such situations the interference, which would have been destroyed had we looked at the information, is reintegrated undiminished.
Fig 52: Quantum erasure (Scientific American)
Erasing information about the path of a photon restores wavelike correlated behavior. Pairs of identically polarized correlated photons produced by a 'downconverter', bounce off mirrors, converge again at a beam splitter and pass into two detectors. A coincidence counter observes an interference pattern in the rate of simultaneous detections by the two detectors, indicating that each photon has gone both ways at the beam splitter, as a wave. Adding a polarization shifter to one path destroys the pattern, by making it possible to distinguish the photons' paths. Placing two polarizing filters in front of the detectors makes the photons identical again, erasing the distinction, restoring the interference pattern.
Fig 53: Delayed choice quantum eraser configuration (en.wikipedia.org/wiki/Delayed_choice_quantum_eraser, doi:10.1103/PhysRevLett.84.1).
Use of entangled photons enables the design and implementation of versions of the quantum eraser that are impossible to achieve with single-photon interference. What makes Wheeler's delayed choice quantum eraser astonishing is that, unlike in the classic double-slit experiment, the choice of whether to preserve or erase the which-path information of the idler was not made until 8 ns after the position of the signal photon had already been measured.
An individual photon goes through one (or both) of the two slits. In the illustration, the photon paths are color-coded (red A, blue B). One of the photons, the "signal" photon (red and blue lines going upwards from the prism at BBO), continues to the target detector D0. Detector D0 is scanned in steps along its x-axis. A plot of "signal" photon counts detected by D0 versus x can be examined to discover whether the cumulative signal forms an interference pattern. The other entangled photon, the "idler" photon (red and blue lines going downwards from the prism), is deflected by prism PS that sends it along divergent paths depending on whether it came from slit A or slit B. Beyond the path split, the idler photons encounter beam splitters BSa, BSb, and BSc that each have a 50% chance of allowing the idler photon to pass through and a 50% chance of causing it to be reflected. The beam splitters and mirrors direct the idler photons towards detectors labeled D1, D2, D3 and D4.
Note that:
Detection of the idler photon by D3 or D4 provides delayed "which-path information" indicating whether the signal photon with which it is entangled had gone through slit A or B. On the other hand, detection of the idler photon by D1 or D2 provides a delayed indication that such information is not available for its entangled signal photon. Insofar as which-path information had earlier potentially been available from the idler photon, it is said that the information has been subjected to a "delayed erasure".
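The detector logic above can be captured in a toy model: in coincidence with the erasing detectors D1 and D2 the signal shows complementary fringes, in coincidence with D3 and D4 it does not, and the four patterns sum to a fringe-free total, which is why D0 alone never shows interference. A minimal sketch with assumed unit amplitudes and a single phase standing in for the slit geometry:

```python
import numpy as np

# Signal amplitude at detector position x via slit A or slit B (toy model:
# unit amplitudes, relative phase x encoding the path-length difference).
x = np.linspace(-np.pi, np.pi, 7)
psi_A = np.exp(1j * x / 2)
psi_B = np.exp(-1j * x / 2)

# Normalised coincidence rates with each idler detector:
R1 = np.abs(psi_A + psi_B) ** 2 / 8   # erased (D1): fringes
R2 = np.abs(psi_A - psi_B) ** 2 / 8   # erased (D2): complementary fringes
R3 = np.abs(psi_A) ** 2 / 4           # which-path kept (D3): flat
R4 = np.abs(psi_B) ** 2 / 4           # which-path kept (D4): flat

print(R1.round(3))                    # varies with x -> interference
print(R3.round(3))                    # flat -> no interference
print((R1 + R2 + R3 + R4).round(3))   # flat: D0 singles show no fringes
```

Because the D1 and D2 fringes are exactly out of step, sorting the D0 counts by idler outcome is the only way to reveal them; no signalling into the past is possible.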
Fig 54: (Above) Delayed choice entanglement swapping in which Victor is able to decide whether Alice's and Bob's photons are entangled or not after they have already been measured. (Below) A photon is entangled with a photon that has already died (sampled) even though they never coexisted at any point in time.
In a second experiment, fig 54 (below), two photons become entangled even though they have never coexisted at any point in time. Photons 1 & 2 are entangled and 1 is detected, killing it. A second entangled pair 3 & 4 is later created and 3 is then entangled with 2, disrupting the original entanglement with 4. But when 4 is measured, we find it is entangled with the dead photon 1 (arXiv:1209.4191).
Closing Wheeler loopholes: In 2018 a group of physicists (Chaves R, Barreto Lemos G, Pienaar J 2018 Causal Modeling the Delayed-Choice Experiment doi:10.1103/PhysRevLett.120.190401) used the emerging field of causal modeling to find a loophole in Wheeler's delayed-choice experiment. They showed that, in an experiment in which two phase shifts change the interference pattern, and with it the presumed "wave-like" or "particle-like" behavior of the photon, it is possible to use a hidden variable to write down rules that use the variable's value and the presence or absence of a change in the detectors to guide the photon to one detector or another in a manner that mimics the predictions of quantum mechanics. Causal modeling involves establishing cause-and-effect relationships between various elements of an experiment. Often, when studying correlated events, if one cannot conclusively say that A causes B, or that B causes A, there exists a possibility that a previously unsuspected or "hidden" third event, C, causes both. In such cases, causal modeling can help uncover C.
They then constructed a formula that takes as its input probabilities calculated from the number of times that photons land on particular detectors (based on the setting of the two phase shifts). If the formula equals zero, the classical causal model can explain the statistics, but if the equation spits out a number greater than zero then, subject to some constraints on the hidden variable, there is no classical explanation for the experiment's outcome. Subsequent experiments (arXiv:1806.00156, 1806.00211) have shown that in every configuration tested the value is nonzero, and the loophole has been closed. However, more complex hidden variable theories carrying more than one bit of information, and the Bohm pilot wave theory (because the wave and particle coexist and the wave can guide the particle), could still explain the phenomenon.
Entanglement Swapping: A third intriguing phenomenon called entanglement swapping can also be made into a Wheeler delayed choice version. In this there is a mediator, Victor. In the entanglement swapping procedure, fig 54 (above), two pairs of entangled photons are produced, and one photon from each pair is sent to Victor. The two other photons from each pair are sent to Alice and Bob, respectively. If Victor projects his two photons onto an entangled state, Alice's and Bob's photons are entangled, although they have never interacted or shared any common past. What might be considered even more puzzling is the idea of delayed choice for entanglement swapping. Victor is free to choose either to project his two photons onto an entangled state, and thus project Alice's and Bob's photons onto an entangled state, or to measure them individually and then project Alice's and Bob's photons onto a separable state. If Alice and Bob measure their photons' polarization states before Victor makes his choice and projects his two photons either onto an entangled state or onto a separable state, it implies that whether their two photons are entangled (showing quantum correlations) or separable (showing classical correlations) can be defined after they have been measured (arXiv:1203.4384).
Quantum Procrastination and Delayed Choice Entanglement: An ingenious version of the delayed choice experiment has also been applied to the idea of morphing the wave aspect into the particle aspect through a superposition of the two. A simple version of the delayed choice experiment involves an interferometer that contains two beam splitters. The first splits the incoming beam of light, and the second recombines the beams, producing an interference pattern. Such a device demonstrates wave-particle duality in the following way. If light is sent into the interferometer a single photon at a time, the result is still an interference pattern, even though a single photon cannot be split, because it passes through both routes as a wave. If you remove the device that recombines the two beams, interference is no longer possible, and the photon emerges from one or other route as a particle, which can be detected as before, even when this decision is made after the photon entered the splitter.
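The two configurations can be sketched with a simple matrix model of the interferometer (a standard symmetric beam-splitter convention is assumed): with the second beam splitter in place the detection probabilities depend on the phase between the arms, and with it removed they are 50/50 whatever the phase.

```python
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50/50 beam splitter

def detect(phi, second_bs=True):
    """Detector probabilities for one photon entering a Mach-Zehnder
    interferometer with relative phase phi on one arm."""
    amp = BS @ np.array([1, 0])                  # split at the first BS
    amp = amp * np.array([1, np.exp(1j * phi)])  # phase between the arms
    if second_bs:
        amp = BS @ amp                           # recombine: wave behaviour
    return np.abs(amp) ** 2

print(detect(np.pi / 3, True))    # [sin^2(phi/2), cos^2(phi/2)]: interference
print(detect(np.pi / 3, False))   # [0.5, 0.5]: particle-like, phase-blind
```

In the delayed-choice versions, the `second_bs` decision is taken only after the photon is already inside the apparatus, yet the statistics still follow whichever configuration is finally present.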
Fig 55: Quantum procrastination. Morphing the probability statistic of a wave into that of a particle by altering the detection angle of the external photon in delayed choice entanglement. Inset: black and white, and artist's impression, of the transition.
Now two groups of researchers have taken this a step further by replacing the second beam splitter with a quantum version that is simultaneously operational and non-operational, because it is entangled with a second photon outside the interferometer. Hence whether it is operational or not can be determined only by measuring the state of the second photon. The researchers found that this allowed them to delay the photon's wave or particle quality until after it had passed through all the experimental equipment, including the second beam splitter tasked with determining that very thing, and, by varying the detection angle of the second entangled photon according to Bell's theorem, to morph the result between the wave and particle aspects of the transmitted photon (doi:10.1126/science.1226719, doi:10.1126/science.1226755). The ability to delay the measurement which determines the degree of wave-like or particle-like behavior to any desired degree has deservedly been termed 'quantum procrastination'.
Quantum Teleportation, Computing and Cryptography
Quantum teleportation, in which information defining a quantum particle in a given state is 'teleported' to another particle, has also become an experimental reality. These experiments give us a broad intuition of quantum reality. In quantum teleportation, one of a pair of entangled particles is interacted with by a third distinct particle to produce a signal which is 'teleported' as classical information, e.g. as part of the state of a transmitted particle, such as its polarization, although this particle must still be an isolated non-interacting quantum. This later interacts with the second entangled particle, resulting in the generation of a particle with identical properties to the third particle. The illustrations below show the theoretical process and two experimental realizations.
Fig 56: (a) In quantum teleportation, a quantum (blue left) is combined in an interference measurement with one of an entangled pair (pink left) by experimenter 1, who then sends the result of the measurement as classical information to 2, who applies this to transform the other entangled particle, causing it to enter the same quantum state as the original blue one. (b) Teleporting a grin: the magnetic moment (the grin) of a neutron (Cheshire cat) traversed a different path from the particle (doi:10.1038/ncomms5492). (c) Quantum teleportation has been achieved over distances greater than 100 km and more recently from Earth 1400 km to a satellite (arXiv:1707.00934).
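The protocol in (a) can be sketched with state vectors: Alice measures the input qubit together with her half of a shared Bell pair in the Bell basis, sends the two-bit outcome classically, and Bob applies the matching Pauli correction. A minimal sketch (the qubit labels and 0.6/0.8 test state are illustrative):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def teleport(psi):
    """Teleport qubit state psi from Alice to Bob over a shared Bell pair.
    Returns Bob's corrected state for each of Alice's four possible
    Bell-measurement outcomes (each occurs with probability 1/4)."""
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)        # (|00>+|11>)/sqrt(2)
    state = np.kron(psi, bell)                        # qubit order: C, A, B
    bell_basis = {                                    # Alice's outcomes
        (0, 0): np.array([1, 0, 0, 1]) / np.sqrt(2),
        (0, 1): np.array([0, 1, 1, 0]) / np.sqrt(2),
        (1, 0): np.array([1, 0, 0, -1]) / np.sqrt(2),
        (1, 1): np.array([0, 1, -1, 0]) / np.sqrt(2),
    }
    corrections = {(0, 0): I, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}
    out = {}
    for bits, bv in bell_basis.items():
        bob = bv.conj() @ state.reshape(4, 2)         # project qubits C, A
        bob = corrections[bits] @ bob                 # classical-bit fix-up
        out[bits] = bob / np.linalg.norm(bob)
    return out

psi = np.array([0.6, 0.8])                            # arbitrary pure state
for bits, final in teleport(psi).items():
    print(bits, final)                                # always [0.6, 0.8]
```

Note that nothing travels faster than light: without the two classical bits Bob's qubit is an uninformative mixture, which is why the protocol respects causality.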
In continuous-variable quantum teleportation, entangled particles help to transmit a stream of information comprising numerical values that can range widely, such as the amplitudes of a laser's light waves. But until now, this form of teleportation had been achieved only over very short distances in the lab. Xiaojun Jia and colleagues have now used optical fibre to carry out continuous-variable teleportation of laser-light values across a distance of 6 kilometres. A fidelity of 0.62 ± 0.03 was achieved for the retrieved quantum state, which breaks through the classical limit of 1/2. A fidelity of 0.69 ± 0.03, breaking through the no-cloning limit of 2/3, has also been achieved when the transmission distance is 2.0 km. This approach could allow optical fibre to be used for powerful forms of quantum computing (Huo et al. Sci. Adv. 2018 4 eaas9401 doi:10.1126/sciadv.aas9401).
Quantum Computing: Classical computation suffers from the potentially unlimited time it takes to check out every one of the possibilities. To crack a code we need to check all the combinations, whose numbers can increase more than exponentially with the size of the code numbers, possibly taking as long as the history of the universe to compute. Factorizing a large number composed of two primes is known to be computationally intractable enough to provide the basis for the public key encryption by which bank records and passwords are kept safe. Although the brain ingeniously uses massively parallel computation, there is as yet no systematic way to bootstrap an arbitrary number of parallel computations together in a coherent manner.
However quantum reality is a superposition of all the possible states in a single wave function, so if we can arrange a wave function to represent all the possibilities in such a computation, superposition might give us the answer by a form of parallel quantum computation. A large number could in principle be factorized in a few superimposed steps, which would otherwise require vast time-consuming classical computer power to check all the possible factors one by one. Suppose we know an atom is excited by a certain quantum of energy, but provide it only part of the energy required. The atom then enters a superposition of the ground state and the excited state, suspended between the two like Schrodinger's cat. If we then collapse the wave function, squaring its amplitude to a probability, it will be found to be in either the ground state or the excited state with equal probability. This superimposed state is sometimes called the 'square root of NOT', because applying the partial excitation twice flips the system between 0 and 1, corresponding to a logical negation.
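The 'square root of NOT' can be sketched in ordinary Python with textbook 2×2 matrix arithmetic (no quantum libraries; the helper names here are illustrative). One application of the gate puts the ground state into an equal superposition; two applications give the NOT gate:

```python
def matmul(A, B):
    # multiply two 2x2 complex matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(A, v):
    # apply a 2x2 matrix to a 2-component vector of amplitudes
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

# the 'square root of NOT' gate: half of an excitation step
SQRT_NOT = [[(1 + 1j) / 2, (1 - 1j) / 2],
            [(1 - 1j) / 2, (1 + 1j) / 2]]

ground = [1, 0]                       # the 0 (ground) state
half = apply(SQRT_NOT, ground)        # equal superposition of 0 and 1
probs = [abs(a) ** 2 for a in half]   # collapse: squared amplitudes -> [0.5, 0.5]
flipped = apply(matmul(SQRT_NOT, SQRT_NOT), ground)  # two half steps = NOT -> state 1
```

Squaring the amplitudes of `half` gives equal probabilities for ground and excited states, while `SQRT_NOT` composed with itself is exactly the classical NOT.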
To factorize a large number, we could devise a quantum system in two parts. Suppose we have a small array of atoms in the left part which effectively form the 0s and 1s of a binary number: 0 in the ground state and 1 in the excited state. If we then partially excite them all, they represent a superposition of all the binary numbers, e.g. 00, 01, 10 and 11. The right half of the system is designed to give the remainder when a test number, raised to the power of each of the possible numbers on the left, is divided by the number to be factorized. These remainders turn out to be periodic, so if we measure the right we get one of the values. This in turn collapses the left side into a superposition of only those numbers giving this particular value on the right. We can then recombine the reduced state on the left to find its frequency spectrum and decode the answer. As a simple example, suppose you are trying to factorize n = 15. Take the test number x = 2. The powers of 2 give you 2, 4, 8, 16, 32, 64, 128, 256 ... Now divide by 15, keeping the remainder when 15 won't go. That produces a repeating sequence 2, 4, 8, 1, 2, 4, 8, 1 ... with period r = 4, and we can use this to figure that 2^{4/2}−1 = 3 is a factor of 15. The quantum parallelism solves all the computations simultaneously: this is known as Shor's algorithm, after Peter Shor.
Stage three is the most complex and depends on the fact that the frequency of these repeats can be made to pop out of a calculation by getting the different universes to interfere with one another. A complex series of quantum logic operations has to be performed and interference then brought about by looking at the final answer. The final observed value, the frequency f, has a good chance of revealing the factors of n from the expression x^{f/2}−1. In the simple example above, the repeat sequence is the four values 2, 4, 8, 1, so the repeat frequency is 4. Thus Shor's algorithm produces the number 2^{4/2}−1 = 3, which is a factor of 15.
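The arithmetic backbone of this worked example can be checked classically. In the sketch below (function names are illustrative) the period-finding step, which the quantum computer performs in superposition, is emulated by ordinary modular arithmetic, and the factor is then extracted as in the text:

```python
from math import gcd

def find_period(x, n):
    # smallest r with x**r = 1 (mod n): the step Shor's algorithm
    # performs in quantum superposition, here done classically
    r, v = 1, x % n
    while v != 1:
        v = (v * x) % n
        r += 1
    return r

def shor_factor(n, x=2):
    r = find_period(x, n)            # e.g. 2, 4, 8, 1, ... gives r = 4 for n = 15
    if r % 2:
        return None                  # need an even period for this trick
    f = gcd(pow(x, r // 2) - 1, n)   # 2**(4/2) - 1 = 3 divides 15
    return f if 1 < f < n else None
```

Running `shor_factor(15)` reproduces the factor 3 from the period-4 sequence above; the same recipe gives a factor of 21 from the period-6 powers of 2 mod 21.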
Fig 57: Above: Two-qubit logic gate performance (doi:10.1038/nature15263). Below: Adiabatic quantum computing on the spin-chain problem: one-dimensional spin problems with variable local fields and couplings between adjacent spins, an example of a stoquastic problem, with the evolution of the system for 9 qubits shown at right (doi:10.1038/nature17658).
Such quantum computers require isolation from the environment to avoid quantum superpositions collapsing in decoherence. A two-qubit quantum logic gate has recently been constructed using silicon transistor technology, promising a proof-of-principle breakthrough in the construction of quantum computers (Veldhorst et al. 2015).
In an ingenious strategy, a team have used a well-tested four-qubit quantum computer to simulate the creation of pairs of particles and antiparticles in a proof-of-concept simulation in which energy is converted into matter, creating an electron and a positron. Quantum electrodynamics makes the most precise predictions of any physical theory, but interactions involving the strong nuclear and colour forces become too complex, requiring simulations which are prone to exponential runaway in classical computing because it lacks quantum superposition. The team used a quantum computer in which an electromagnetic field traps four ions in a row in a vacuum, each one encoding a qubit. They manipulated the ions' spins (magnetic orientations) using laser beams, coaxing the ions to perform logic operations. The team's quantum calculations confirmed the predictions of a simplified version of quantum electrodynamics: "The stronger the field, the faster we can create particles and antiparticles" (Martinez et al. 2016).
Fig 58: Left: (a) An experiment to simulate the coherent real-time dynamics of particle-antiparticle creation by realizing the Schwinger model (one-dimensional quantum electrodynamics) on a lattice. (b) The four-qubit arrangement. (c, d) Experimental and theoretical data showing the evolution of the particle number density as a function of time wt and particle mass m/w. Centre: Quantum tomography, a technique akin to weak quantum measurement. In a functional quantum computer this could be used to inform error-correction measures on connected qubits in the same device. A qubit is created using a circuit with two superconducting metals separated by an insulating barrier. Passing a current produces a qubit with two superposed energy levels simultaneously. Reducing the energy barrier maintaining the superposition collapses its wavefunction into one of the two energy levels. But if it is set just above the higher of the two energy levels, it only partially collapses the waveform in a "partial measurement". Scanning the qubit using microwave radiation, and then fully removing the energy barrier, can then reveal its state of superposition and document its collapse (Science 312 1498). Right: (a) The Google-designed Sycamore processor involving 53 qubits (Arute et al. 2019). (b) The controllable coupling array of the qubits. (c) The control pathway.
The D-Wave computer works on an entirely different principle of adiabatic quantum computing: quantum annealing of a potential energy landscape with multiple local minima. Classical annealing works to find a near-optimal local minimum by starting at a high thermodynamic temperature of random excitations, effectively throwing a marble around the landscape to avoid it getting caught in a high-altitude lake, before gradually lowering the temperature to assist in finding a local minimum not too far in value from the global minimum. Quantum annealing replaces kinetic excitation with graduated quantum tunneling to achieve the same effect. This approach works only on problems encodable into an energy landscape based on array computing. Whether it achieves better performance than classical computing remains unproven according to Wikipedia (en.wikipedia.org/wiki/Adiabatic_quantum_computation).
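The classical annealing picture described above can be sketched with a Metropolis-style simulated anneal on a toy one-dimensional landscape. The energy function, cooling schedule and step size here are hypothetical illustrations (not the D-Wave Hamiltonian): the landscape has a shallow local minimum near x ≈ 2.1 and a deeper global minimum near x ≈ −2.35, and the high starting temperature lets the "marble" escape the shallow basin before the system is cooled:

```python
import math, random

def energy(x):
    # toy landscape: local minimum near x ~ 2.1, global minimum near x ~ -2.35
    return 0.1 * x**4 - x**2 + 0.5 * x

def anneal(steps=20000, t0=5.0):
    random.seed(1)                   # deterministic run for illustration
    x = 4.0                          # start in the basin of the local minimum
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-3   # linear cooling schedule
        x_new = x + random.gauss(0, 0.5)
        dE = energy(x_new) - energy(x)
        # always accept downhill moves; accept uphill with Boltzmann probability
        if dE < 0 or random.random() < math.exp(-dE / t):
            x = x_new
    return x
```

Quantum annealing replaces the thermal uphill moves with tunneling through the barriers, but the overall shape of the search is the same: broad exploration first, then settling into a low minimum.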
In a veritable 'quantum computing leap', a team from Google (Arute et al. 2019) have announced that the Sycamore processor (fig 58 right), consisting of 53 controllable qubits, takes only about 200 seconds to sample one instance of a quantum circuit a million times. Benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. In their words, "This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm."
A team from Google (Barends et al. 2016), see fig 57, have more recently begun fundamental research into adiabatic quantum computing of systems such as stoquastic spin-chain problems. Stoquastic Hamiltonians, those for which all off-diagonal matrix elements in the standard basis are real and non-positive, are common in the physical world (Bravyi et al. 2008). They include flux-type Josephson junction qubits (Barends et al. 2013). A Josephson junction is a conductor pair separated by a thin insulating barrier, which permits quantum tunneling. It is a macroscopic quantum phenomenon resulting in a current crossing the junction, determined by the junction's flux quantum, in the absence of any external electromagnetic field, with discrete steps under increasing voltage.
Quantum Cryptography exploits quantum mechanical properties to perform cryptographic tasks. Quantum key distribution offers an information-theoretically secure solution to the key exchange problem, whereas public-key encryption and signature schemes such as RSA can be broken by quantum adversaries. Quantum cryptography allows the completion of various cryptographic tasks that are proven or conjectured to be impossible using only classical communication: for example, it is impossible to copy data encoded in a quantum state, and the very act of reading data encoded in a quantum state changes the state. This is used to detect eavesdropping in quantum key distribution.
The most well known and developed application of quantum cryptography is quantum key distribution (QKD), which is the process of using quantum communication to establish a shared key between two parties (Alice and Bob, for example) without a third party (Eve) learning anything about that key, even if Eve can eavesdrop on all communication between Alice and Bob. This is achieved by Alice encoding the bits of the key as quantum data and sending them to Bob; if Eve tries to learn these bits, the messages will be disturbed and Alice and Bob will notice. The key is then typically used for encrypted communication using classical techniques. For instance, the exchanged key could be used as the seed of the same random number generator both by Alice and Bob.
Fig 59: Quantum cryptography. (1) To begin creating a key, Alice sends a photon through either the 0 or 1 slot of the rectilinear or diagonal polarizing filters, while making a record of their orientations. (2) For each incoming bit, Bob chooses randomly which filter slot he uses for detection, noting down both the polarization and the bit value. (3) If Eve tries to spy on the photon train, quantum mechanics prohibits her using both filters, and if she chooses the wrong one she may create errors by modifying their polarization. (4) After all the photons reach Bob, he tells Alice openly his sequence of filters, but not the bit values. (5) Alice tells Bob openly in turn which filters he chose correctly. These determine the bits they will use to form their encryption key.
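The protocol in Fig 59 (the BB84 scheme) can be simulated classically to show why eavesdropping is detectable. In this sketch (function and variable names are illustrative; basis 0 stands for rectilinear, 1 for diagonal), measuring in the wrong basis yields a random bit, so an intercept-and-resend Eve corrupts about a quarter of the sifted key:

```python
import random

def bb84(n_bits=2000, eve=False, seed=0):
    """Minimal BB84 sketch: returns the error rate in the sifted key."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    sent = list(zip(alice_bits, alice_bases))

    if eve:  # Eve measures each photon in a random basis and resends it
        resent = []
        for bit, basis in sent:
            eve_basis = rng.randint(0, 1)
            # wrong basis: her result (and the resent photon) is random
            eve_bit = bit if eve_basis == basis else rng.randint(0, 1)
            resent.append((eve_bit, eve_basis))
        sent = resent

    bob_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    bob_bits = [bit if b_basis == basis else rng.randint(0, 1)
                for (bit, basis), b_basis in zip(sent, bob_bases)]

    # sifting: keep only positions where Alice's and Bob's bases agree
    keep = [i for i in range(n_bits) if alice_bases[i] == bob_bases[i]]
    errors = sum(alice_bits[i] != bob_bits[i] for i in keep)
    return errors / len(keep)
```

With no eavesdropper the sifted keys agree exactly; with Eve present, comparing a random sample of sifted bits reveals an error rate near 25%, and Alice and Bob abort.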
Following the discovery of quantum key distribution and its unconditional security, researchers tried to achieve other cryptographic tasks with unconditional security. One such task was quantum commitment. A commitment scheme allows a party Alice to fix a certain value (to "commit") in such a way that Alice cannot change that value while at the same time ensuring that the recipient Bob cannot learn anything about that value until Alice decides to reveal it.
Weak Quantum Measurement and Surreal Bohmian Trajectories
Weak quantum measurement (Aharonov et al. 1988) is a process in which a quantum wave function is not irreversibly collapsed by absorbing the particle; instead a small deformation is made in the wave function, whose effects become apparent later when the particle is eventually absorbed in a 'postselection', e.g. on a photographic plate in a strong quantum measurement. Weak quantum measurement changes the wave function slightly mid-flight between emission and absorption, and hence before the particle meets the future absorber involved in eventual detection. A small change is induced in the wave function, e.g. by slightly altering its polarization along a given axis (Kocsis et al. 2011). This cannot be used to deduce the state of a given wave-particle at the time of measurement, because the wave function is only slightly perturbed, not collapsed or absorbed as in strong measurement, but one can build up a statistical prediction over many repeated quanta of the conditions at the point of weak measurement, once postselection data is assembled after absorption at specific points in the eventual path.
Fig 60: Weak quantum measurement in a double slit apparatus generating single photons using a laser-stimulated quantum dot and split fiber optics. The overlapping wave function is elliptically polarized in the xy-plane transverse to the z-direction of travel. A calcite crystal is used to make a small shift in the phase of one component, while the other retains the information leading to absorption of the photon on a charge-coupled device. By combining the information from the two transverse components at varying lens settings, it becomes possible to make a statistical portrait of the evolving particle trajectories within the wave function. Pivotally, the weak quantum measurement is made in a way which is confirmed only in the future of the ensemble, when the absorption takes place (Kocsis et al. 2011).
Weak measurement also suggests (Merali 2010, Cho 2011) that, in some sense, the future is determining the present, but in a way we can discover conclusively only by many repeats. Focus on any single instance and you are left with an effect with no apparent cause, which one has to put down to a random experimental error. This has led some physicists to suggest that free will exists only in the freedom to choose not to make the postselection(s) revealing the future's pull on the present. Yakir Aharonov, the co-discoverer of weak quantum measurement (Aharonov et al. 1988), sees this occurring through an advanced wave travelling backwards in time from the future absorbing states to the time of weak measurement. What God gains by 'playing dice with the universe', in Einstein's words, in the quantum fuzziness of uncertainty, is just what is needed so that the future can exert an effect on the present, without ever being caught in the act of doing it in any particular instance: "The future can only affect the present if there is room to write its influence off as a mistake", neatly explaining why no subjective account of prescience can do so either.
Weak quantum measurements have been used to elucidate the trajectories of the wave function during its passage through a two-slit interference apparatus (Kocsis et al. 2011), to determine all aspects of the complex waveform of the wave function (Hosten 2011, Lundeen et al. 2011), to make ultra-sensitive measurements of small deflections (Hosten & Kwiat 2008, Dixon et al. 2008), and to demonstrate counterfactual results involving both negative and positive postselection probabilities, which still add up to certainty, when two interference pathways overlap in a way which could result in annihilation (Lundeen & Steinberg 2009). In a more recent development, a team led by Aharonov (et al. 2014) has found that postselection can also induce forms of entanglement in particles even if they have no previous quantum connection coupling their wave functions.
In formulating a theoretical point of view in which weak measurement plays an integral part, Aharonov and coworkers (Aharonov, Bergmann & Lebowitz 1964, Aharonov & Vaidman 2008) noted that the usual assumption that reduction of the wave packet is an irreversible process makes quantum reality non-time-symmetric, and is unsuited to the postselection of weak quantum measurement. They therefore devised a time-symmetric version of quantum mechanics, the two-state vector approach, in which a state is defined by two vectors: one defined by the results of measurements performed on the system in the past relative to the time t, and a backward-evolving quantum state defined by the results of measurements performed on this system after the time t. In this sense, the postselection strong measurement after the event becomes critical in defining the quantum state. We will see this has a close relationship with the transactional interpretation, although the Bohmian idea of pilot waves guiding a real particle assumes a causality in the classical direction of increasing time.
Fig 60b: Experimental realization of the pigeonhole paradox: three photons and only two polarizations.
In another manifestation of quantum reality exposed by weak measurement, we have the pigeonhole paradox. The pigeonhole principle states that if there are more pigeons than boxes, at least one box must contain more than one pigeon. Aharonov et al. (2014, 2016) put forward a quantum pigeonhole paradox where the classical pigeonhole counting principle may in some cases break down. Chen et al. (2019) demonstrate by weak-strength measurement that when three single photons transmit through two polarization channels, in a well-defined pre- and postselected ensemble, no two photons are in the same polarization channel. The effect of variable-strength quantum measurement is experimentally analysed order by order, and a transition in the violation of the pigeonhole principle is observed. The different kinds of measurement-induced entanglement are responsible for the photons' abnormal collective behaviour in the paradox.
Surreal Bohmian Trajectories: The space-time profile of weak quantum measurement displays detection trajectories comparable with David Bohm's (1952) pilot wave theory, in which the particle has a defined position and the wave acts simply as a guide, albeit with nonlocal influences. The link with Bohm's pilot wave theory was reinforced when a critical experiment demonstrated the existence of so-called "surreal Bohmian trajectories". A group of physicists known by the initials ESSW (1992) pointed out that Bohmian hidden variable theory, in which particles are guided by a nonlocal 'pilot' wave, could in principle lead to 'surreal' trajectories which would violate the predictions of quantum theory. However, when a second group (Mahler et al. 2016) set out to test surreal trajectories experimentally, they found that they physically exist.
Fig 61: Left: The apparatus used to discover surreal trajectories (Mahler et al. 2016, see discussion below). Right: The experimental results show that some photons which should have gone, say, through the red slit according to their entangled twins appeared to do so near the slit, but further along the trajectory veer off and behave erratically as if they were a superposition of either polarization, indicating some unseen nonlocal connection occurring between the now separated entangled photons.
The experiment (fig 61) first prepares a pair of highly entangled photons with complementary polarization and then passes one into a double slit apparatus in which the photon to be measured is directed to one or other slit depending on its entangled twin's polarization. The measured photons are then passed through an apparatus to do weak quantum measurement of their trajectories as an ensemble and then detect the eventual position destructively in the same manner as fig 60. However when the polarization of the other entangled photon is used to determine which slit the first one must have gone through, the orbits near the centre of the interference pattern display clear signs of surreal trajectories.
When weak measurement is used to detect the trajectory close to the slit, it confirms that the photon has gone through the correct slit according to its assumed polarization, as subsequently measured by sampling its entangled twin. However, as the position of weak measurement moves towards the photographic plate, the predictions fall to an even superposition of the two polarizations. Since the weak quantum measurement is a physical realization of the ensemble trajectories going to this particular point on the plate, surreal trajectories are real, but the prediction of the spin made from the entangled twin has become changed. This implies in turn that changes of a nonlocal nature have occurred between entering the slits and hitting the plate, implying that there is substance to the Bohmian reality.
A brief synopsis of Bohm's pilot wave theory, which can be generalized, e.g. to bosons, runs along the following lines. Consider a wave function ψ defined on a configuration space consisting of m distinguishable particles x_{1}, ..., x_{m} in d dimensions, forming an md-dimensional space consisting of q = {q_{11}, q_{12}, q_{13}, ..., q_{m1}, q_{m2}, q_{m3}} assuming d = 3. Giving them masses M in each direction, we can derive a Schrodinger wave equation iℏ ∂ψ/∂t = −(ℏ²/2M)∇²ψ + V(q)ψ. We also consider the 'world particle' x consisting of real component positions x(t) = {x_{11}(t), ..., x_{m3}(t)}, with x(0) being a random variable distributed with probability density P_{0}(x) = |ψ(x,0)|². Writing ψ = R e^{iS/ℏ}, the velocity of the particle under the wave function is defined by dx/dt = ∇S/M, guaranteeing that the probability density for x(t) is P_{t}(x) = |ψ(x,t)|². Equivalently this gives an equation of motion M d²x/dt² = f + r, where f = −∇V(q) is the classical force arising from the potential V(q) and r = −∇Q is a repulsive force due to the quantum potential Q(q) = −(ℏ²/2M) ∇²R/R, where R = |ψ|. We thus have essentially real particles with defined positions, subject to their (random) initial conditions, whose dynamics is determined both by a classical potential and by an additional quantum potential, whose effects are broadly consistent with the results we find in experiments such as weak quantum measurement.
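As a toy illustration of the guidance equation, consider a freely spreading one-dimensional Gaussian packet (V = 0) centred at the origin, for which the Bohmian velocity field reduces to v(x,t) = x σ'(t)/σ(t), so trajectories simply scale with the packet width. This sketch, assuming natural units ℏ = M = σ₀ = 1 for illustration, integrates the guidance equation numerically and checks it against the analytic scaling x(t) = x(0) σ(t)/σ(0):

```python
import math

HBAR, M, SIGMA0 = 1.0, 1.0, 1.0   # natural units, assumed for illustration

def sigma(t):
    # width of a freely spreading Gaussian packet
    return SIGMA0 * math.sqrt(1 + (HBAR * t / (2 * M * SIGMA0**2))**2)

def velocity(x, t):
    # Bohmian guidance velocity v = x * sigma'(t) / sigma(t) for this packet
    dt = 1e-6
    return x * (sigma(t + dt) - sigma(t)) / (dt * sigma(t))

def trajectory(x0, t_end=2.0, steps=20000):
    # Euler integration of dx/dt = v(x, t)
    x, t = x0, 0.0
    h = t_end / steps
    for _ in range(steps):
        x += velocity(x, t) * h
        t += h
    return x
```

A particle starting at x₀ = 1 ends near σ(2)/σ(0) = √2: each trajectory rides outwards with the spreading wave, never crossing its neighbours, which is exactly the fan-like ensemble portrait reconstructed by weak measurement in Fig 60.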
Superposition, Entanglement and Counterparticles
Fig 62: Elitzur's proposed experiment (arXiv:1707.09483), whose principles have already been confirmed in a related experiment causing a photon to be reflected off both of two slits (Okamoto & Takeuchi 2018 Experimental demonstration of a quantum shutter closing two slits simultaneously Sci. Repts. doi:10.1038/srep35161).
In a new and more puzzling twist on quantum superpositions (Elitzur et al. 2018 arXiv:1707.09483), following on from Aharonov's two-state vector past-future handshaking view, a probe photon (yellow left) is sent in a superposition through three boxes, A, B and C, simultaneously, to see if the shutter photon (red) is inside them. If it is, the probe photon is reflected. This lets the probe photon report on where the shutter photon is without looking at it directly. The shutter photon is placed in a superposition that makes its location within the boxes vary through time: at moment 1 it is in A and C but not B, at moment 2 it is only in C, and at moment 3 it is only in B and C. Although in C at all times, the shutter photon appears to be in A at moment 1, only to disappear and reappear in B at moment 3. So this superposition is in some places some of the time rather than all places at once. The experiment is designed so the probe photon can show interference only if it interacted with the shutter photon in this particular sequence of places and times, so interference in the probe photon would be a definitive sign the shutter photon made this bizarre, logic-defying sequence of disjointed appearances among the boxes at different times.
The apparent vanishing of particles in one place at one time, and their reappearance in other times and places, suggests a new and extraordinary vision of the underlying processes, in which the nonlocal existence of quantum particles can be understood as a series of events in which a particle's presence in one place is somehow 'canceled' by its own 'counterparticle' in the same location, in a manner reminiscent of particle-antiparticle annihilation. The disappearance of quantum particles is not 'annihilation' in this same sense, but it is somewhat analogous: these putative counterparticles should possess negative energy and negative mass, allowing them to cancel their counterparts. So although the traditional 'two places at once' view of superposition might seem odd enough, 'it's possible a superposition is a collection of states that are even crazier,' Elitzur says. 'Quantum mechanics just tells you about their average'.
Many Interacting Worlds: More recently Hall, Deckert and Wiseman (2014 doi:10.1103/PhysRevX.4.041013) have extended these ideas to encompass a many interacting worlds (MIW) approach, replacing the quantum potential with the effects of a large number of worlds with Newtonian dynamics following the classical force above, but under a very unusual type of interaction in which the force between worlds is non-negligible only when the two worlds are close in configuration space. The authors admit that such an interaction is quite unlike anything in classical physics, and it is clear that an observer in one world would have no experience of the other worlds in their everyday observations. But unlike Everett's many worlds interpretation, where all the probability universes are equal and simply represent the alternative outcomes of Schrodinger's cat, the interacting worlds are not equal but have a mutually repulsive global interaction, so by careful experiment an observer might detect a subtle nonlocal action on the molecules of its world.
Fig 63: Two-slit interference amplitudes using the pilot wave theory (above) and the many interacting worlds theory (below). Both correspond closely to the distributions of standard quantum mechanics in this case.
Suppose now that instead of only one world-particle, as in the pilot wave interpretation, there were a huge number N of world-particles coexisting, with positions (world configurations) x_{1}, ..., x_{N}. If each of the initial world configurations is chosen at random from P_{0}(q), as described above, then by construction they are distributed according to P_{0}. One can thus approximate P_{t}(q), and its derivatives, from a suitably smoothed version of the empirical density at time t. From this smoothed density, one may also obtain a corresponding approximation r_{N}(q; X_{t}) of the Bohmian force for N ≫ 1, in terms of the list of world configurations X_{t} = {x_{1}(t), ..., x_{N}(t)} at time t. Note, in fact, that since only local properties of P_{t}(q) are required for r_{t}(q), the approximation r_{N}(q; X_{t}) requires only worlds from the set X_{t} which are in the neighborhood of q. That is, the approximate force is local in configuration space.
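The idea of recovering P_{t}(q) from a "suitably smoothed version of the empirical density" of the worlds can be sketched with a simple Gaussian kernel smoother. The one-dimensional density, bandwidth and ensemble size below are illustrative assumptions, not the construction used by Hall, Deckert and Wiseman; the point is only that for N ≫ 1 the smoothed ensemble reproduces the underlying distribution:

```python
import math, random

def gaussian(x, mu=0.0, s=1.0):
    # normal probability density
    return math.exp(-(x - mu)**2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def kde(samples, x, bw=0.3):
    # smoothed empirical density of the ensemble of world configurations
    return sum(gaussian(x, mu=xi, s=bw) for xi in samples) / len(samples)

random.seed(0)
# N world configurations drawn from P_0 (here a standard normal)
worlds = [random.gauss(0, 1) for _ in range(20000)]
density_at_0 = kde(worlds, 0.0)   # approaches P(0) for N >> 1
```

Derivatives of this smoothed density would then supply the approximate inter-world force r_{N}(q; X_{t}), and only nearby worlds contribute appreciably at each point, matching the locality noted above.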
The MIW theory replaces the Bohmian force acting on each world-particle x_{n}(t) by the approximation r_{N}(x_{n}; X_{t}). Thus the evolution of the world configuration x_{n}(t) is directly determined by the other configurations in X_{t}. This makes the wave function ψ, and the functions P_{t}(q) and S_{t}(q) derived from it, superfluous. Its fundamental dynamics are described by a system of N×m×3 second-order differential equations M d²x_{n}/dt² = f(x_{n}) + r_{N}(x_{n}; X_{t}). While each world evolves deterministically, which of the N worlds we are actually living in is unknown. Hence assertions about the configuration of the particles in our world are again probabilistic. For a given function of the world configuration, only an equally weighted population mean over all the worlds compatible with observed macroscopic properties can be predicted at any time. Moreover, since the worlds are distributed according to P_{t}, the population mean of any smooth function of the configuration approaches the quantum expectation value, so the description in the limit N → ∞ approaches the wave function description. The description is complete only when the form of the force between worlds is specified. There are different possible ways of doing so, each leading to a different version; for example, in a simplified 1D example we might choose a repulsive inter-world potential. Ideally we want a conservative interaction in which the average energy per world approaches the quantum average energy in the limit. Suitable choices lead to estimates which closely follow pilot wave and quantum descriptions for several quantum phenomena.
MIW is provocative because it shows that hidden variable theories with multiple configuration-space worlds can evoke dynamics commensurate with quantum theory, but the action between the configuration-space worlds is a redescription of the same phenomena of global dynamics that quantum entanglement demonstrates, so in a sense it is a multiverse theory of entanglement.
However neither MIW nor the pilot wave theory can explain all aspects of wave-function collapse, because of cases like the decay of a photon into an electron-positron pair, where there are more degrees of freedom in the more complicated massive two-particle system than in the initial conditions, and both still depend on random variables in their defining conditions.
Quantum Decoherence, Procrastination and Recoherence
Quantum Decoherence (Zurek 1991, 2003) explains how reduction of the wave packet can lead to the classical interpretation through interaction of the system with other quanta. Suppose we consider a measurement of electron spin. If an ideal detector is placed in the spin up path, it will then click only if the electron is spin up, so we can assume the untriggered detector state is equivalent to spin down. If we start with an electron in the pure state a|↑> + b|↓>, then the composite system can be described as (a|↑> + b|↓>)|d> and the detector system will evolve into a correlated state a|↑>|d_{↑}> + b|↓>|d_{↓}>. This correlated state involves two branches of the detector, one in which it measures spin up and the other (passively) spin down. This is the splitting of the wave function into two branches advanced by Everett to articulate the many-worlds description of quantum mechanics. However in the real world, we know the alternatives are distinct outcomes rather than a mere superposition of states. Von Neumann was well aware of these difficulties and postulated that, in addition to the unitary evolution of the wave function, there is a non-unitary 'reduction of the state vector' or wave function, which converts the superposition into a mixture by cancelling the correlating off-diagonal terms ab* and a*b of the pure density matrix, to get a reduced density matrix with diagonal entries |a|² and |b|², which enables us to interpret the coefficients as classical probabilities.
However, as we have seen with the EPR pair-splitting experiments, the quantum system has not made any decisions about its nature until measurement has taken place. This explains the off-diagonal terms, which are essential to maintain the fully undetermined state of the quantum system, which has not yet even decided whether the electrons are spin up or spin down. One way to explain how this additional information is disposed of is to include the interaction of the system with the environment in other ways. Consider a system S, detector D and environment E. If the environment can also interact and become correlated with the apparatus, we have the transition (a|↑>|d_{↑}> + b|↓>|d_{↓}>)|E_{0}> → a|↑>|d_{↑}>|E_{↑}> + b|↓>|d_{↓}>|E_{↓}>.
This final state extends the correlation beyond the system-detector pair. When the states of the environment corresponding to the spin up and spin down states of the detector are orthogonal, we can take the trace over the uncontrolled degrees of freedom to get the same results as the reduced matrix. Essentially, whenever the observable is a constant of motion of the detector-environment Hamiltonian, the observable will be reduced from a superposition to a mixture. In practice, the interaction of the particle carrying the quantum spin states with a photon, and the large number of degrees of freedom of the open environment, can make this loss of coherence, or decoherence, irreversible. Zurek describes such decoherence as an inevitable result of interactions with other particles.
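The bookkeeping here, building the pure density matrix, correlating the system with orthogonal environment states, then tracing out the environment, can be reproduced with a few lines of plain Python. The amplitudes a = 0.6, b = 0.8 and the two-level environment are illustrative assumptions; the point is that the off-diagonal coherences survive in the pure state but vanish after the partial trace:

```python
def outer(v, w):
    # |v><w| for complex vectors
    return [[vi * wj.conjugate() for wj in w] for vi in v]

def kron(v, w):
    # tensor product of two state vectors
    return [vi * wj for vi in v for wj in w]

def partial_trace_env(rho, d_sys=2, d_env=2):
    # trace out the environment from a (d_sys * d_env)-dim density matrix
    out = [[0j] * d_sys for _ in range(d_sys)]
    for i in range(d_sys):
        for j in range(d_sys):
            for k in range(d_env):
                out[i][j] += rho[i * d_env + k][j * d_env + k]
    return out

a, b = 0.6, 0.8                       # amplitudes of spin up / spin down
sys_up, sys_dn = [1, 0], [0, 1]       # system basis
env_up, env_dn = [1, 0], [0, 1]       # orthogonal environment records

# before interaction: pure superposition, with off-diagonal coherences a*b
rho_pure = outer([a, b], [a, b])

# after interaction: a|up>|E_up> + b|dn>|E_dn>, then trace out the environment
psi = [a * x + b * y for x, y in zip(kron(sys_up, env_up), kron(sys_dn, env_dn))]
rho_reduced = partial_trace_env(outer(psi, psi))
```

`rho_pure` has off-diagonal entries ab = 0.48, while `rho_reduced` is diagonal with the classical probabilities 0.36 and 0.64: the coherences have leaked into the system-environment correlations.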
Fig 64: Left: Cancellation of off-diagonal elements in a cat paradox experiment due to decoherence arising from interactions with other quanta leads to a distribution along the diagonal and a classical real probability distribution (inset), representing the probability that the cat is either alive or dead, but not both. Right: Quantum Darwinism experiment (Unden et al. 2019) showing the setup of NV centres in the diamond and quantum redundancy emerging.
Quantum Darwinism: However, in contrast to the apparent simplicity of the decoherence model, we know the actual explanation of decoherence is interaction with other quantum systems, resulting not in a simple decline of the off-diagonal elements, but in multiple quantum entanglements with third parties. To explain the emergence of objective, classical reality, it is not enough to say that decoherence washes away quantum behavior and thereby makes it appear classical to an observer. Somehow, it is possible for multiple observers to agree about the properties of quantum systems. Wojciech Zurek argues that two things must be true. First, quantum systems must have "pointer states" that are especially robust in the face of disruptive decoherence by the environment, as is a pointer on the dial of a measuring instrument, such as the particular location of a particle, its speed, spin, or polarization. Zurek argues that classical behavior, the existence of well-defined, stable, objective properties, is possible only because pointer states of quantum objects exist. What is special mathematically about pointer states is that they are preserved, or transformed into a nearly identical state. This implies that the environment preserves some states while degrading others. A particle's position is resilient to decoherence, but superpositions decohere into localized pointer states, so that only one can be observed. Zurek (1982) described this "environment-induced superselection" of pointer states in the 1980s. But there is a second condition that a quantum property must meet to be observed. As Zurek (2009) argues, our ability to observe some property depends not only on whether it is selected as a pointer state, but also on how substantial a footprint it makes in the environment. The states that are best at creating replicas in the environment (i.e. the "fittest") are the only ones accessible to measurement.
It turns out that the same stability property that promotes environment-induced superselection of pointer states also promotes quantum Darwinian fitness, or the capacity to generate replicas. "The environment, through its monitoring efforts, decoheres systems, and the very same process that is responsible for decoherence should inscribe multiple copies of the information in the environment".
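The redundancy Zurek describes can be illustrated with a toy model, assuming an idealized fan-out in which the pointer basis is copied into a handful of environment qubits (the fragment count and amplitudes are arbitrary choices for illustration): each fragment alone then carries the full classical record of the pointer state, while no fragment records the quantum phase.

```python
import numpy as np

# Branching of a pointer state into multiple environment "fragments".
# The fragment count and amplitudes are arbitrary illustrative choices.
a, b = np.sqrt(0.7), np.sqrt(0.3)
n_env = 4
n = n_env + 1                        # system qubit + environment qubits

# Ideal fan-out (as a chain of CNOTs would produce): a|00000> + b|11111>.
# Each environment qubit records WHICH pointer state, not the phase.
psi = np.zeros(2 ** n)
psi[0], psi[-1] = a, b
rho = np.outer(psi, psi)
rho_t = rho.reshape([2] * (2 * n))

def fragment_state(k):
    """Reduced density matrix of qubit k (0 = system, 1.. = fragments)."""
    letters = 'abcdefghij'[:n]
    col = list(letters)
    col[k] = letters[k].upper()
    sub = letters + ''.join(col) + '->' + letters[k] + col[k]
    return np.einsum(sub, rho_t)

# Every fragment individually carries the same classical record...
for k in range(n):
    print(k, np.round(np.diag(fragment_state(k)), 3))
# ...while the coherence (off-diagonal) term survives in none of them.
```

Measuring any single fragment yields the pointer probabilities |a|², |b|²; measuring further fragments adds essentially nothing, which is the redundancy signature tested in the experiment below.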
An experimental realization of this idea (fig 64 right) has been performed by Fedor Jelezko and coworkers (Unden et al. 2019). The team focused on NV centres, which occur when two adjacent carbon atoms within a diamond lattice are replaced with a nitrogen atom and an empty lattice site. The nitrogen atom has an extra electron that remains unpaired. This behaves as an isolated spin, which can be up, down or in a superposition of the two. The spin state can be probed in a well-established process that involves illuminating the diamond with laser light and recording the fluorescence given off. The researchers set out to monitor how the NV spin interacts with the spins of several neighbouring carbon atoms. Most carbon in the diamond is carbon-12, which has zero spin. However, around 1% of the atoms are carbon-13, which has a nuclear spin. Their experiment involved probing the interaction of an NV spin with, on average, four carbon-13 atoms, about 1 nm away. The carbon-13 spins, which serve as the environment, are too weak to interact with one another but nevertheless cause decoherence in the NV spin. This process involves the carbon-13 spins changing to new quantum states that depend on the state of the NV spin. The experiment is done by shining a green laser light onto NV spins within a millimetre-sized sample of diamond and measuring the photons emitted as microwave and radio frequency fields are switched on and off. Because they were not able to observe the carbon-13 spins directly, the team transferred these spin states to the NV spins and again exploited fluorescence measurements. By measuring the spin of just one carbon-13 nucleus, and repeating the experiment many times, they found they could correctly deduce most of the NV spin properties most of the time. But measurements of additional nuclear spins added little to this knowledge. These results "give the first laboratory demonstration of quantum Darwinism in action in a natural environment".
Two other groups have meanwhile carried out similar measurements (using the polarization of photons) that also show redundancy, demonstrating the proliferation of classical information and also an 'uptick' in information taking place at the quantum level. Adán Cabello argues that other approaches can reveal crucial insights into the emergence of classical reality (Pokorny et al. 2019). He and colleagues have shown how to make measurements on trapped ions while still preserving the remaining parts of the system's quantum coherence, which shows that measurement is the result of a dynamical process itself governed by quantum mechanics.
Quantum Darwinism may provide an interactive bridge which can explain how the conscious brain derives its model of the classical world and uses it to anticipate opportunities and threats to survival. Conscious brain states are characterized by a maximal degree of global coupling, in terms of phase coherence of the EEG across regions, by contrast with non-coherent regional processing, which does not reach the level of consciousness. Karl Pribram has drawn attention to the similarities of this form of processing to quantum measurement, where the uncertainty relation is defined by wave beats. Given the sensitive dependence of edge-of-chaos and self-organized criticality, and the interactive capacity between micro levels of the synapse and ion channel and global-scale excitations, the conscious brain forms the most complex interactive system of quantum entanglements in the known universe. The brain states corresponding to the evolving Cartesian theatre of consciousness thus provide the richest set of boundary conditions for quantum Darwinism to shape brain states and in turn be shaped by them, just as the carbon-13 atoms form an interactive basis for quantum Darwinism with the unpaired N-atom electron. This way one can see the conscious brain as a two-way interactive process, both shaping the fluctuations of the quantum milieu and being in turn shaped by them, in an interactive resonance with the foundations of the transitions from the quantum superimposed world to that of unfolding experienced real-world history. This mutual interaction invokes a defining interface between the two worlds, in which both consciousness and intentional will modulate the apparent randomness of wave-function collapse, otherwise dependent only on wave amplitude and probability, resolving the question of how apparently random quantum processes can lead to anticipative intentional acts.
Quantum Discord (Ollivier & Zurek 2002) is an extension of entanglement to more general forms of coherence, in which partial correlations induced through interaction with mixed-state particles can still be used to induce quantum correlated effects (Gu et al. 2012). Quantum discord is a promising candidate for a complete description of all quantum correlations, including coherent interactions that generate negligible entanglement. Coherent interactions can harness discord to complete a task that is otherwise impossible. Experimental implementation of this task demonstrates that this advantage can be directly observed, even in the absence of entanglement. Quantum discord does not require isolation from decoherence, and can even derive additional quantum information from interaction with mixed states which would annihilate entangled states.
Quantum discord is thus a viable model for processes ongoing at biological temperatures, which could disrupt full entanglement, such as photosynthesis receptors which are claimed to use a spatial form of quantum computing to utilize the most efficient conduction path of the chemical reaction centers (Brooks 2014, Thyrhaug et al. 2018), although this is still being debated (Ball 2018). Biology is full of phenomena at the quantum level, which are essential to biological function. Enzymes invoke quantum tunneling to enable transitions through their substrate's activation energy. Protein folding is a manifestation of quantum computation intractable by classical computing. When a photosynthetic active centre absorbs a photon, the wave function of the excitation is able to perform a quantum computation, which enables the excitation to travel down the most efficient route to reach the chemical reaction site (McAlpine 2010, Hildner et al. 2013). Frog rod cells are sensitive to single photons (King 2008) and recent research suggests the conscious brain can detect as few as three individual photons (Castelvecchi 2015). Quantum discord may also be integral to the coherent excitations of active brain states (King 2014). A scheme has also been used to perform certain forms of quantum computing, such as finding the diagonal sum of a 2x2 matrix using quantum discord rather than entanglement (Merali 2011b fig 65 right).
Fig 65: Left/centre: Quantum discord. Alice encodes information within one arm of a twoarm quantum state ρAB. Bob attempts to retrieve the encoded data. We compute Bob's optimal performance when he is restricted to performing a single local measurement on each arm (that is, Bob can make a local measurement first on A, then B, or vice versa). We compare this to the case where Bob can, in addition, coherently interact the processes, which allows him to effectively measure in an arbitrary joint basis of A and B. We show that coherent twobody interactions are advantageous if and only if ρAB contains discord and that the amount of discord Alice consumes during encoding bounds exactly this advantage. Curve (a) represents the amount of information Bob can theoretically gain should he be capable of coherent interactions. For our proposed implementation, this maximum is reduced to the level of curve (b), where, experimentally, Bob's knowledge about the encoded signal is represented by the blue data points. Curve (c) models these observations, taking experimental imperfections into account. Despite these imperfections, Bob is still able to gain more information than the incoherent limit given by curve (d). The blue shaded region highlights this quantum advantage, which is even more apparent if we compare Bob's performance to the reduced incoherent limit when experimental imperfections are accounted for (curve (e)). We can also compare these rates to a practical decoding scheme for Bob when limited to a single measurement on each optimal mode (curve f) and its imperfect experimental realization (curve g). 
Right: Quantum computation usually requires entangled qbits to perform calculations such as Shor's algorithm above, but it has been found that a collection of qubits with all but one in a state of discord and only one in a pure state, or even all of them in a discordant state, as long as the discord is above the zero value of classical systems can be used to perform types of quantum computation, when the computation is averaged over several runs (Merali 2011b).
The original motivations for discord were to understand the correlation between a quantum system and classical apparatus and the division between quantum and classical correlation. It shows us the quantum interior of what is happening during decoherence (Zurek 1991). A similar quantity called deficit was employed to study the thermodynamic work extraction and Maxwell's demon. Discord is equal to the amount of classical correlation that can be unlocked in quantumclassical states. Discord between two parties is related to the resource for quantum state merging with another party. Coherent quantum interactions (twobody operations) between separable systems that result in negligible entanglement could still lead to exponential speedups in computation, or the extraction of otherwise inaccessible information.
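The gap between total and classically accessible correlations can be made concrete in a numerical sketch, using a deliberately simple separable two-qubit state (the state, the grid resolution and the restriction to projective measurements on A are illustrative choices, not the general discord optimization): the state carries zero entanglement, yet its mutual information exceeds what any single local measurement can extract.

```python
import numpy as np

def S(rho):
    """von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)

def proj(v):
    return np.outer(v, v.conj())

# Separable (zero-entanglement) mixture of non-orthogonal product states:
# rho = 1/2 |0><0| x |0><0| + 1/2 |+><+| x |+><+|
rho = 0.5 * np.kron(proj(ket0), proj(ket0)) + 0.5 * np.kron(proj(ketp), proj(ketp))

rho_t = rho.reshape(2, 2, 2, 2)
rho_A = np.einsum('ijkj->ik', rho_t)   # trace over B
rho_B = np.einsum('ijik->jk', rho_t)   # trace over A

I = S(rho_A) + S(rho_B) - S(rho)       # quantum mutual information

# Classical correlation: maximise over projective measurements on A.
best_J = 0.0
for theta in np.linspace(0, np.pi, 60):
    for phi in np.linspace(0, 2 * np.pi, 60):
        v0 = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
        v1 = np.array([-np.exp(-1j * phi) * np.sin(theta / 2), np.cos(theta / 2)])
        J = S(rho_B)
        for v in (v0, v1):
            M = np.kron(proj(v), np.eye(2))
            p = np.real(np.trace(M @ rho))
            if p > 1e-12:
                rB = np.einsum('ijik->jk', (M @ rho @ M).reshape(2, 2, 2, 2)) / p
                J -= p * S(rB)
        best_J = max(best_J, J)

print(f"mutual information I = {I:.3f} bits")
print(f"classical correlation J = {best_J:.3f} bits")
print(f"quantum discord D = I - J = {I - best_J:.3f} bits (> 0, yet unentangled)")
```

The positive remainder D = I − J is the discord: correlation that is real but inaccessible to any single local measurement, in keeping with the separable-state advantage described above.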
Fig 66: Recoherence experimental apparatus with, on the right, evidence for the increase in amplitude of off-diagonal elements as the apparatus is moved into the recoherence configuration.
Recoherence is the reversal of decoherence by restoring the information which was lost in decoherence. All forms of entanglement involve decoherence because the system has become coupled to another wave-particle. Once two quantum subsystems have become entangled, it is no longer possible to ascribe an independent state to either. Instead, the subsystems are completely described only as part of a greater, composite system. As a consequence of this, each entangled subsystem experiences a loss of coherence, or decoherence, following entanglement. Decoherence leads to the leaking of information from each subsystem to the composite entangled system. In fig 66 the researchers demonstrate a process of decoherence reversal, whereby they recover information lost from the entanglement of the optical orbital angular momentum and radial profile degrees of freedom possessed by a photon pair. They note that these results carry great potential significance, since quantum memories and quantum communication schemes depend on an experimenter's ability to retain the coherent properties of a particular quantum system (Bouchard et al. 2015). They show that quantum information in the orbital angular momentum (OAM) degree of freedom of an entangled photon pair can be lost and retrieved through propagation, by manipulating the degree of entanglement between their OAM and radial mode Hilbert spaces. This effect is different from entanglement migration, in which information is transferred between wave-function phase and amplitude rather than being lost to ancillary Hilbert spaces, and likewise differs from quantum erasure, in which information loss occurs due to projective measurement.
Quantum Chaos, Criticality and Entanglement Coupling
The wave-particle complementarity of quantum systems alters their behavior when the dynamics is chaotic. Nuclear energetics, for example, are chaotic, since nuclei are highly energetic and spatially confined; yet unlike the electron orbits of atoms, which have energy levels converging at high energy, nuclei have consistent energy gaps between their eigenfunctions representing closed orbits.
Fig 67: (1) Quantum chaos. A confined wave function in a quantum dot shows statistics displaying finite separation of energy levels, similar to the chaotic nuclear eigenfunctions and to the quantum stadium. (2) The quantum stadium shows 'scarring' of the wave function along periodic repelling orbits, which are unstable in the classical case but here have stability due to the spatially extended wave packets overlapping (King 2009). The classical analogue (3) is fully chaotic, with dense sets of repelling periodic orbits and space-filling trajectories. (4) Top to bottom: classical and quantum kicked-top phase spaces and linear entropies, with left to right ordered and chaotic dynamics. The lack of a dip in linear entropies in the chaotic regime indicates entanglement with nuclear spin, rather than quantum suppression of chaos, as occurs in closed quantum systems (Chaudhury et al. 2009, Steck 2009).
Likewise the quantum stadium displays 'scarring' of the wave function, where the probability remains consistently high in an ordered manner around dominant classical repelling periodic orbits. The scars remain stable for the wave packet because of its spatial extension and frequency resonance, while the classical dynamics is chaotic, with positive butterfly-effect Lyapunov exponents, consisting of dense repelling periodic orbits in a sea of ergodic space-filling chaotic trajectories.
Both of these examples illustrate what is called the quantum suppression of chaos.
In another example (Yan B, Sinitsyn N (2020) Recovery of Damaged Information and the Out-of-Time-Ordered Correlators doi:10.1103/PhysRevLett.125.040605), the researchers wanted to know what would happen if they rewound the entangled interactions in qubits and then introduced the quantum analogue of a butterfly-effect change. Would the future remain intact, or become inevitably altered like the time traveler in sci-fi stories? A number of entangled qubits were run through a set of logic gates before being returned to their initial setup. Back at the starting point, a measurement was made on one qubit, collapsing its wave superposition into an 'actuality'. The whole setup was then allowed to run again. They found that they could then easily recover the useful information, because this damage is not magnified by a decoding process.
However, unlike closed quantum systems, open quantum systems, which can be energetically coupled to other transitions, behave differently. In the quantum kicked top, consisting of a caesium atom optically excited to a spin-3 state and then magnetically perturbed, entanglement coupling occurs in the chaotic regime between electronic and nuclear spin states, fig 67(4). We thus find that quantum chaos can lead to new forms of entanglement between the coupled states, showing quantum chaos can paradoxically lead to further 'spooky' interactive wave effects. Werner Heisenberg cryptically commented "When I meet God, I'm going to ask him two questions, 'Why relativity?' and 'Why turbulence?' I really believe he will have an answer to the first", implying the second, i.e. chaos, is the very nemesis.
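The butterfly effect in the classical kicked top can be seen in a minimal numerical sketch, assuming the simplified classical map of a precession about the y-axis followed by a z-twist proportional to the spin's own z component (the kick strengths, precession angle and initial point are illustrative choices, not the parameters of the caesium experiment):

```python
import numpy as np

def kick(v, k, p=np.pi / 2):
    """One period of a classical kicked top: precession about y by angle p,
    then an impulsive twist about z by an angle k*z (simplified form)."""
    x, y, z = v
    x, z = x * np.cos(p) + z * np.sin(p), -x * np.sin(p) + z * np.cos(p)
    a = k * z
    x, y = x * np.cos(a) - y * np.sin(a), x * np.sin(a) + y * np.cos(a)
    return np.array([x, y, z])

def lyapunov_estimate(k, n=2000, d0=1e-9):
    """Average exponential divergence rate of two nearby spin directions."""
    v1 = np.array([0.8, 0.0, 0.6])              # a point on the unit sphere
    v2 = v1 + np.array([0.0, d0, 0.0])
    s = 0.0
    for _ in range(n):
        v1, v2 = kick(v1, k), kick(v2, k)
        d = np.linalg.norm(v1 - v2)
        s += np.log(d / d0)
        v2 = v1 + (v2 - v1) * (d0 / d)          # renormalise the separation
    return s / n

print(f"k = 6.0 (chaotic): lambda ~ {lyapunov_estimate(6.0):+.2f} per kick")
print(f"k = 0.5 (regular): lambda ~ {lyapunov_estimate(0.5):+.2f} per kick")
```

A strong kick gives a positive Lyapunov exponent (exponential divergence of neighbouring trajectories), a weak kick essentially zero; in the quantized open system, as described above, it is exactly this chaotic regime that drives the electron-nuclear entanglement coupling.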
In a case which illustrates entanglement coupling on a grand scale, physicists have observed quantum entanglement among 'billions of billions' of flowing electrons in a quantum critical material (Prochaska et al. 2020 Singular charge fluctuations at a magnetic quantum critical point Science 367/6475 285-288 doi:10.1126/science.aag1595). The research provides the strongest direct evidence to date of entanglement's role in bringing about quantum criticality.
A wide variety of metallic ferromagnets and antiferromagnets have been observed to develop quantum critical behavior when their magnetic transition temperature is driven to zero through the application of pressure, chemical doping or magnetic fields. Quantum criticality is also believed to drive hightemperature superconductivity.
A quantum critical point is a point in the phase diagram of a material where a continuous phase transition takes place at absolute zero. A quantum critical point is typically achieved by continuous suppression of a nonzero-temperature phase transition to zero temperature by the application of pressure or a magnetic field, or through doping. Conventional phase transitions occur at nonzero temperature, when the growth of random thermal fluctuations leads to a change in the physical state of a system, e.g. solid to liquid. Condensed matter physics research over the past few decades has revealed a new class of phase transitions, called quantum phase transitions, which take place at absolute zero. In the absence of the thermal fluctuations which trigger conventional phase transitions, quantum phase transitions are driven by the zero-point quantum fluctuations associated with Heisenberg's uncertainty principle and, in particular in the light of this study, by large-scale entanglement. Within the class of phase transitions there are two main categories: at a first-order phase transition, the properties shift discontinuously, as in the melting of a solid, whereas at a second-order phase transition, the state of the system changes in a continuous fashion. Second-order phase transitions are marked by the growth of fluctuations on ever-longer length-scales. These fluctuations are called "critical fluctuations". At the critical point where a second-order transition occurs, the critical fluctuations are fractally scale-invariant and extend over the entire system. At a quantum critical point, the critical fluctuations are quantum mechanical in nature, exhibiting scale invariance in both space and time.
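A quantum phase transition of this kind can be illustrated with the textbook transverse-field Ising chain (a standard toy model, not the YbRh_{2}Si_{2} system discussed below; the chain length and field values are illustrative): its quantum critical point sits at h = J, and the order-parameter fluctuations in the ground state collapse as the field is tuned through the critical region.

```python
import numpy as np
from functools import reduce

# Transverse-field Ising chain H = -J sum sz_i sz_{i+1} - h sum sx_i.
# Its quantum critical point sits at h = J in the thermodynamic limit;
# N = 8 spins is a toy size chosen so dense diagonalisation stays cheap.
N, J = 8, 1.0
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)

def op(site_ops):
    """Tensor product placing the given single-site operators on the chain."""
    return reduce(np.kron, [site_ops.get(i, I2) for i in range(N)])

def hamiltonian(h):
    H = np.zeros((2**N, 2**N))
    for i in range(N - 1):
        H -= J * op({i: sz, i + 1: sz})
    for i in range(N):
        H -= h * op({i: sx})
    return H

Mz = sum(op({i: sz}) for i in range(N))       # order parameter (total sz)
results = {}
for h in (0.2, 1.0, 2.0):
    w, v = np.linalg.eigh(hamiltonian(h))
    gs = v[:, 0]                              # ground state
    results[h] = gs @ (Mz @ Mz) @ gs / N**2   # order-parameter fluctuations
    print(f"h/J = {h:3.1f}: <Mz^2>/N^2 = {results[h]:.3f}")
```

Deep in the ordered phase the magnetization fluctuations per site are large; past the critical field they collapse, with the steep crossover around h ≈ J smoothed here only by the small system size.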
The current research examined the electronic and magnetic behavior of the "strange metal" compound YbRh_{2}Si_{2} as it both neared and passed through a critical transition at the boundary between two well-studied quantum phases. In quantum-critical heavy-fermion antiferromagnets, such physics may be realized as critical Kondo entanglement of spin and charge, and probed with optical conductivity. At a magnetic quantum critical point, conventional wisdom dictates that only the spin sector will be critical, but if the charge and spin sectors are quantum-entangled, the charge sector will end up being critical as well. The discovery suggests that critical charge fluctuations play a central role in the strange-metal behavior, elucidating one of the long-standing mysteries of correlated quantum matter.
Time Crystals and reversing Time's Arrow
Time Crystals: Another manifestation of quantum reality associated with disorder is the time crystal, a concept Frank Wilczek proposed in 2012 (arXiv:1308.5949). Quantum time crystals are systems characterized by spontaneously emerging periodic order in the time domain. The laws of physics are symmetrical in that they apply equally to all points in space and time, yet many systems violate this symmetry, resulting in symmetry-breaking. In a magnet, atomic spins line up in their lowest energy state, rather than pointing in all directions. The symmetry-breaking of the weak and electromagnetic forces via the Higgs particle behaves similarly. In a mineral crystal, atoms occupy set positions in space, and the crystal does not look the same if it is shifted slightly. In the same way, a time crystal would repeat in time without expending any energy, rather like a perpetual motion machine. However, other researchers (doi:10.1103/PhysRevLett.111.070402) quickly proved there was no way to create time crystals out of rotating minimum-energy quantum systems. But the proof left a loophole: it did not rule out time crystals in systems that have not yet settled into a steady state and are out of equilibrium. Three ingredients are essential: a force repeatedly disturbing the particles, a way to make them interact with each other, and an element of random disorder. The combination of these ensures that particles are limited in how much energy they can absorb, allowing them to maintain a steady, ordered state.
Fig 68: (a) Laser pumping at the resonant frequency repeatedly reverses the spins of a system of atoms, but requires two precise energy inputs to cycle the states. If the lasers are tuned off the resonant frequency (b), the spins will not move by 180° and will not cycle back to the initial state. However, if suitable degrees of disorder and internal interactions occur, the system may enter a state where the spins flip endlessly at a new period, even when the laser frequencies are off resonance. In the inset (d), a diamond time crystal (red) flips at a different frequency from the stimulating laser (green).
In the first of two experiments (doi:10.1038/nature21413), this meant repeatedly firing alternating lasers at a chain of ten ytterbium ions: the first laser flips their spins and the second makes the spins interact with each other in random ways. That combination caused the atomic spins to oscillate, but at twice the period at which they were being flipped. More than that, the researchers found that even if they started to flip the system in an imperfect way, such as by slightly changing the frequency of the kicks, the oscillation remained the same. Spatial crystals are similarly resistant to any attempt to nudge their atoms from their set spacing. In the second (doi:10.1038/nature21426), using a 3D chunk of diamond riddled with around a million defects, each harbouring a spin, the diamond's impurities provided a natural disorder. When the team used microwave pulses to flip the spins, they saw the system respond at a fraction of the frequency with which it was being disturbed. These seem to be the first examples of a host of new phases that exist in relatively unexplored out-of-equilibrium states. They could also have several practical applications, from room-temperature simulation of quantum systems to supersensitive detectors. Autti et al. (2020) have since been able to induce neighbouring time crystals to interact, observing an exchange of magnons between the two time crystals leading to opposite-phase oscillations in their populations — a signature of the AC Josephson effect — while the defining periodic motion remains phase coherent throughout the experiment.
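A rough numerical sketch of the period-doubling signature, assuming a small driven Ising chain with disordered couplings and fields and a slightly imperfect global π-pulse (all parameter values are illustrative, not those of either experiment): the interacting, disordered chain locks to twice the drive period, while without interactions the pulse error accumulates and the subharmonic signal drifts away.

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(1)
L = 6                        # toy chain length (illustrative)
n_periods = 40
err = 0.1                    # pulse error: global rotation by pi - err

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)

def site(op, i):
    return reduce(np.kron, [op if j == i else I2 for j in range(L)])

# Imperfect global pi-pulse about x on every spin.
Sx = sum(site(sx, i) for i in range(L))
w, v = np.linalg.eigh(Sx)
U_flip = (v * np.exp(-1j * (np.pi - err) / 2 * w)) @ v.conj().T

# Disordered Ising couplings and longitudinal fields: a diagonal layer.
J = rng.uniform(0.5, 1.5, L - 1)
h = rng.uniform(0.0, 2 * np.pi, L)
zdiag = [np.diag(site(sz, i)) for i in range(L)]
phase = sum(h[i] * zdiag[i] for i in range(L))
phase = phase + sum(J[i] * zdiag[i] * zdiag[i + 1] for i in range(L - 1))
U_int = np.exp(-1j * phase)          # diagonal interaction unitary

psi0 = np.zeros(2**L, complex)
psi0[0] = 1.0                        # all spins up
Z_mid = np.diag(site(sz, L // 2))    # middle-spin magnetisation (diagonal)

def run(interacting):
    psi, rec = psi0.copy(), []
    for n in range(1, n_periods + 1):
        if interacting:
            psi = U_int * psi
        psi = U_flip @ psi
        rec.append((-1) ** n * np.real(psi.conj() @ (Z_mid * psi)))
    return np.array(rec)

dtc, free = run(True), run(False)
print("interacting  min/mean of (-1)^n <sz>:", dtc.min().round(2), dtc.mean().round(2))
print("free         min/mean of (-1)^n <sz>:", free.min().round(2), free.mean().round(2))
```

Plotting (−1)ⁿ⟨σz⟩ period by period, the disordered interacting chain holds near its initial value (rigid period-2T response), whereas the non-interacting chain follows cos(n·err) and loses the subharmonic lock, which is the qualitative distinction between panels (b) and (c)/(d) of fig 68.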
Entanglement Reversing the Arrow of Time: Quantum entanglement can also be used to reverse the thermodynamic arrow of time (Micadei et al. 2017). The existence of an arrow follows from the second law of thermodynamics. The law states that entropy, or disorder, tends to increase over time, explaining why it's easy to shatter a glass but hard to put it back together, and why heat spontaneously flows from hot to cold but not in the opposite direction. The new result shows that the arrow of time is relative rather than absolute. The researchers experimentally demonstrate the reversal of the arrow of time for two initially quantum-correlated spin-1/2 particles, prepared in local thermal states at different temperatures, employing a nuclear magnetic resonance setup.
Fig 69: Reversal of the arrow of time: (A) Heat flows from the hot to the cold spin (at thermal contact) when both are initially uncorrelated. This corresponds to the standard thermodynamic arrow of time. For initially quantum correlated spins, heat is spontaneously transferred from the cold to the hot spin. The arrow of time is here reversed. (B) View of the magnetometer used. (C) Experimental pulse sequence for the partial thermalization process.
The experimenters manipulated molecules of chloroform (CHCl3), which are made of carbon, hydrogen and chlorine atoms. The scientists prepared the molecules so that the temperature, judged by the probability of an atom's nucleus being found in a higher energy state, was greater for the hydrogen nucleus than for the carbon. When the two nuclei's energy states were uncorrelated, the heat flowed as normal, from hot hydrogen to cold carbon. But when the two nuclei had strong enough quantum correlations, heat flowed backward, making the hot nucleus hotter and the cold nucleus colder.
The standard second law of thermodynamics assumes that there are no such correlations. When the second law is generalized to take correlations into account, the law holds firm. As the heat flows, the correlations between the two nuclei dissipate, a process that compensates for the entropy decrease due to the reverse heat flow. The experimenters note: "Our results on the thermodynamic arrow of time might also have stimulating consequences on the cosmological arrow of time."
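The effect can be caricatured with two qubits in a resonant exchange model (illustrative toy parameters, not the NMR values of the actual experiment): with uncorrelated thermal states heat leaves the hot qubit as usual, while an initial quantum coherence in the |01⟩/|10⟩ subspace drives heat the other way.

```python
import numpy as np

# Two resonant qubits, local Hamiltonian sigma_z each; A is hot, B cold.
# All parameter values are illustrative toy choices.
beta_A, beta_B, g, alpha, t = 0.2, 1.0, 1.0, 0.1, 0.3

sz = np.diag([1.0, -1.0])
I2 = np.eye(2)

def thermal(beta):
    p = np.exp(-beta * np.array([1.0, -1.0]))
    return np.diag(p / p.sum())

rho0 = np.kron(thermal(beta_A), thermal(beta_B))   # uncorrelated product

# Initial quantum correlation: a purely imaginary coherence in the
# {|01>,|10>} subspace, small enough that rho stays positive semidefinite.
chi = np.zeros((4, 4), complex)
chi[1, 2], chi[2, 1] = -1j * alpha, 1j * alpha
rho_corr = rho0 + chi
assert np.linalg.eigvalsh(rho_corr).min() > 0

# Energy-conserving exchange coupling g(|01><10| + |10><01|).
H = np.kron(sz, I2) + np.kron(I2, sz)
H[1, 2] = H[2, 1] = g
w, v = np.linalg.eigh(H)
U = (v * np.exp(-1j * w * t)) @ v.conj().T

HA = np.kron(sz, I2)
def dE_A(rho):
    """Energy change of the hot qubit A after evolving for time t."""
    rt = U @ rho.astype(complex) @ U.conj().T
    return np.real(np.trace((rt - rho) @ HA))

print(f"uncorrelated: dE_A = {dE_A(rho0):+.4f}  (heat leaves hot A: normal)")
print(f"correlated:   dE_A = {dE_A(rho_corr):+.4f}  (heat enters hot A: reversed)")
```

As in the chloroform experiment, the local states are thermal in both cases; only the initial correlation term differs, and it alone decides the direction of the heat flow.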
Quantum Matchmaking: Transactional Supercausality and Reality
For reasons which immediately become apparent, the collapse in the pair-splitting experiment has to be not only immediate, but also to reconcile information looking backwards in time. The two photons we are trying to detect are linked through the common calcium atom. Their absorptions are thus actually connected via a path travelling back in spacetime from one detector to the calcium atom and forward again to the other detector. Trying to connect the detectors directly, for example by hypothetical faster-than-light tachyons, leads to contradictions. Tachyons transform by the rules of special relativity, so a tachyon which appears to be travelling at an infinite speed according to one observer is travelling only a little faster than the speed of light according to another. One travelling in one direction to one observer may be travelling in the opposite direction to another. They can also cause causality violations (King R365). There is thus no consistent way of knitting together all parts of a wave or the detector responses using tachyons. Even in a single-particle wave, the wave function in regions the particle has already traversed (and those it would subsequently pass through in future) also has to collapse retrospectively (and prospectively), so that no inconsistencies can occur in which a particle is created in two locations in spacetime from the same wave function, as the Wheeler delayed-choice experiment makes clear.
Fig 70: In the transactional interpretation, a single photon exchanged between emitter and absorber is formed by constructive interference between a retarded offer wave (solid) and an advanced confirmation wave (dotted). (b) The transactional interpretation of pair-splitting. Confirmation waves intersect at the emission point. (c) Contingent absorbers of an emitter in a single passage of a photon. (d) Collapse of contingent emitters and absorbers in a transactional matchmaking (King R365). (e) Experiment by Shahriar Afshar (see Chown R114). A grid is placed at the interference minima of the wave fronts coming from two slits, just below a lens designed to focus the light from each slit into a separate detector. Measurements by detectors (top) test whether a photon (particle) passed through the left or right slit (bottom). There is no reduction in intensity when the grid is placed below the lens at the interference minima of the offer waves from the two slits. The grid does however cause a loss of detector intensity when the dashed left-hand slit is covered and the negative wave interference between the offer waves at the grid is removed, so that the non-interfered wave from the right slit now hits the grid, causing scattering. This suggests both that we can measure wave and particle aspects simultaneously, and that the transactional interpretation is valid in a way which neither many-worlds (which predicts a splitting into histories where a photon from the source goes through one slit or the other) nor the Copenhagen interpretation of complementarity (where detecting a particle forbids the photon manifesting as a wave) can explain.
In the transactional interpretation (Cramer R136), such a 'backward travelling' wave in time gives a neat explanation, not only for the above effect, but also for the probability aspect of the quantum in every quantum experiment. Instead of one photon travelling between the emitter and absorber, there are two shadow waves, which superimposed make up the complete photon. The emitter transmits an offer wave both forwards and backwards in time, declaring its capacity to emit a photon. All the potential absorbers of this photon transmit a corresponding confirmation wave. The confirmation waves travelling backwards in time send a handshaking signal back to the emitter. In the extension of the transactional approach to supercausality, a nonlinearity now reduces the set of possibilities to one offer and confirmation wave, which superimpose constructively to form a real photon only on the spacetime path connecting the emitter to the absorber, as shown in fig 70. This always connects an emitter at an earlier time to an absorber at a later time, because a real positive-energy photon is a retarded particle which travels in the usual direction in time.
A negative energy photon travelling backwards in time is precisely the antiparticle of the positive energy photon and has just the same effect. The two are identifiable in the transactional interpretation, as in quantum electrodynamics (p 304), where time-reversed electron scattering is the same as positron creation and annihilation. The transactional relationship is in effect a matchmaking process. Before collapse of the wave function we have many potential emitters interacting with many potential absorbers. After all the collapses have taken place, each emitter is paired with an absorber in a kind of marriage dance. One emitter cannot connect with two absorbers without violating the quantum rules, so there is a frustration between the possibilities which can only be fully resolved if emitters and absorbers can be linked in pairs. The number of contingent emitters and absorbers are not necessarily equal, but the number of matched pairs is equal to the number of real particles exchanged.
In the pair-splitting experiment you can now see that the calcium atom emits in response to the advanced confirmation waves reaching it from both the detectors simultaneously, right at the time it is emitting the photon pair, as in fig 70(b). Thus the faster-than-light linkage is neatly explained by the combined retarded and advanced aspects of the photon having a net forwards-and-backwards connection which is instantaneous at the detectors. One can also explain the arrow of time if the cosmic origin is a reflecting boundary that causes all the positive energy real particles in our universe to move in the retarded direction we all experience as the arrow of time. This in turn gives the sign for increasing disorder or entropy and the time direction for the second law of thermodynamics to manifest. The equivalence of real and virtual particles raises the possibility that all particles have an emitter and absorber and arose, like virtual particles, through mutual interaction when the universe first emerged.
Quantum Paradoxes of Time and Causality
Although classical physics is in principle time-symmetric, in the sense that the laws of motion are symmetric under time inversion, stochastic processes define a thermodynamic arrow of time, in which low-entropy physical systems tend to a high-entropy equilibrium state under the second law of thermodynamics. Chaotic systems likewise display a time-directedness in the function defining the dynamic, in the sense that the Lyapunov exponent defining the butterfly effect is directed with increasing time, while the time-reversed dynamic is ordered and convergent. Thus we see a glass being smashed, or paint being mixed, but don't see the reverse process of the glass spontaneously coming together, because such events have vanishing probability. The classical Laplacian universe is also regulated by a mechanical causality determined by the laws of motion acting in the direction of increasing time. We thus have in-principle time symmetry but no retrocausality, and a causal direction identified with the thermodynamic arrow, although these are not strictly related.
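The irreversibility of such stochastic processes can be illustrated with the classic Ehrenfest urn model (a standard textbook toy, introduced here for illustration): balls shuttle at random between two urns, and a low-entropy start with all balls in one urn relaxes toward the 50/50 equilibrium, while the spontaneous return to the initial state has vanishing probability. A minimal sketch, with the urn size, step count and random seed chosen arbitrarily:

```python
import random

def ehrenfest(n_balls=100, steps=2000, seed=1):
    """Ehrenfest urn model: each step one ball, chosen at random,
    switches urns. A low-entropy start (all balls in urn A)
    relaxes toward the 50/50 equilibrium."""
    random.seed(seed)
    in_a = n_balls                      # all balls start in urn A
    history = [in_a]
    for _ in range(steps):
        # a ball moves A -> B with probability in_a / n_balls
        if random.random() < in_a / n_balls:
            in_a -= 1
        else:
            in_a += 1
        history.append(in_a)
    return history

h = ehrenfest()
print(h[0], h[-1])   # starts at 100, ends near 50
```

With 100 balls, the equilibrium probability of revisiting the all-in-one-urn state is of order 2⁻¹⁰⁰ per step: the stochastic counterpart of the glass that never un-smashes.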
Quantum reality changes these assumptions in fundamental ways. Although the evolution of the Hamiltonian wave function is in principle time-reversible in the same way the classical situation is, reduction of the wave packet appears at first glance to be a causality-violating step leading to randomness, in the sense that the outcome can only be predicted in terms of probabilities. On the other hand, special relativity, by virtue of the Lorentz transformations with their dual square roots, implicitly produces both retarded solutions travelling in the usual direction of increasing time and advanced solutions travelling backwards in time. Diverse manifestations of quantum reality, from the Wheeler delayed-choice experiment to quantum erasure, entangled histories and entangled time inversion, display features suggesting a deep time symmetry not seen in the classical description. Both the two-state vector description and the transactional interpretation are founded on a time-symmetry that potentially invokes retrospective interaction, in which future absorbing boundary conditions influence a quantum interaction qualitatively and quantitatively as fundamentally as the past emitting boundary conditions.
Moreover, quantum uncertainty is not simply a statistical limitation on what can be observed, because the fundamental force fields are generated through the emission and absorption of virtual particles of every possible kind, generated through the spacetime window of uncertainty. This brings us to the core of a debate between time-symmetry itself and questions of retrocausality – the future acting causally on the present or past – that could give rise to paradoxical contradictions of time loops and inconsistent quantum histories. It is clear that every quantum, not only entangled particles, has to balance an equation that involves future states in maintaining the consistency that avoids a particle being absorbed at two separate locations in spacetime.
We have seen that Bell's theorem rules out locally Einsteinian causality, and that one way the EPR pair-splitting experiments can be resolved is through advanced waves from the absorbing detectors intersecting at the emission vertex. On the other hand, both the Everett many-worlds approach and the Bohm pilot-wave theory invoke processes with a time arrow in the retarded direction of increasing time, where we may have time symmetry but are assumed not to have retrocausality. We also have issues between descriptions of quantum processes based on their reality as physical processes and Copenhagen perspectives, where no actual reality is associated with a quantum state except as a state of partial knowledge of the system on the part of the experimenter. The status of the wave function by comparison with the particle also remains debated, with some approaches regarding the particles as real but the wave function as only a calculating device for particle positions and momenta, of no physical reality in itself. This in a sense goes against the concept of wave-particle complementarity at the foundation of the uncertainty principle.
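The quantitative content of Bell's theorem is easily checked numerically in its CHSH form: any local hidden-variable assignment of predetermined outcomes keeps the correlation sum |S| ≤ 2, while the quantum correlations for singlet pairs, E(a, b) = −cos(a − b), reach 2√2. A minimal sketch (the analyser angles are the standard choices that maximise the quantum value):

```python
import math

def E(a, b):
    """Quantum correlation for measurements on a spin-singlet pair
    along analyser angles a and b (radians): E = -cos(a - b)."""
    return -math.cos(a - b)

# Standard CHSH angle settings maximising the quantum violation
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) ~ 2.828, exceeding the local-causal bound of 2
```

The violation |S| = 2√2 > 2 is what rules out the locally Einsteinian hidden-variable picture, while, as noted above, the correlation itself cannot be used to signal.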
Contrasting again with these world views, researchers in the foundations of quantum reality (Price 2012, Leifer & Pusey 2017) attempt to unravel the ontological relationships between time symmetry and retrocausality. Price suggests a time-symmetric ontology for quantum theory must necessarily be retrocausal. More precisely, Realism + Time Symmetry + Discreteness ⇒ Retrocausality, where discreteness means, for example, that if a particle is detected on one channel it can't be in the other. Leifer & Pusey expand on Price's argument without having to assume the quantum state is a state of reality, replacing it with the notion of λ-mediation, limiting the causal interactions to those involved in the apparatus itself and the free choices of the experimenters. They show that an ontological model satisfying No Retrocausality and λ-mediation in an experiment satisfying Time Symmetry must obey a temporal analogue of Bell's local causality condition, and hence demonstrate the impossibility of a non-retrocausal time-symmetric ontology.
The difficulty in all these diverse accounts is that we simply don't have a good model for the underlying processes 'governing' entanglement, reduction of the wave function, or how hidden variables might interact in spacetime. We can see the situation invoked in entanglement experiments, where we have a manifest correlation in which knowing the polarization of one of an entangled pair of photons immediately tells us the other has complementary polarization, but we can't use this information to perform any form of causal signalling to transmit classical information.
We thus appear to be dealing with a quantum universe in which future absorbing boundary conditions are as formative on all interactions as past emitting boundary conditions, and in which the interactive process is fundamentally time-symmetric, but one in which classical ideas of causality, whether conventional or retrocausal, taken together result in contradictions.
Neither does the thermodynamic arrow of time provide a clear symmetry-breaker, although the cosmic origin as a reflecting boundary condition resulting in retarded particles, as in the transactional interpretation, might. Even if dark energy causes an increasing expansion, or fractal inflation leads to an open universe model in which some photons may never find an absorber, the excitations of brain oscillations, because they are both emitted and absorbed by past and future brain states, could still be universally subject to transactional supercausal coupling (King 2008, 2014). Thus consciousness itself may have a central role in the process of collapsing the wave function, and in the anticipatory role consciousness appears to play as critical to organismic survival, even if not based on a directly causal principle.
The handshaking spacetime relation implied by transactions makes it possible that the apparent randomness of quantum events masks a vast interconnectivity at the quantum level, which David Bohm (R70) termed the 'implicate order'. This might not itself be a random process, but because it connects past and future events in a time-symmetric way, it cannot be reduced to predictive determinism, because the initial conditions are insufficient to describe the transaction, which also includes quantum 'information' coming from the future. However this future is also unformed in real terms at the early point in time at which emission takes place. My eye didn't even exist when the quasar emitted its photon, except as a profoundly unlikely branch of the combined probability 'waves' of all the events throughout the history of the universe between the ancient time the quasar released its photon and my eye developing, with me in the right place at the right time to see it. Transactional supercausality thus involves a huge catch-22 about space, time and prediction, uncertainty and destiny. It doesn't suggest the future is determined, but that the contingent futures do superimpose to create a spacetime paradox in collapsing the wave function.
Roger Penrose (R535, R536) has suggested that the one-graviton limit of interaction is an objective trigger for wave packet reduction, because of the bifurcation in spacetimes induced, leading to theories in which the random or pseudo-random manifestations of the particle within the wave are nonlinear consequences of gravity. Orchestrated objective reduction, or OOR, is then cited as a basis by which intentional consciousness follows collapse rather than participating in it, as the transactional model makes possible. The OOR model, unlike transactional anticipation, thus leaves free will with a kind of orphan status, following, but not participating in, the collapse process itself.
By reducing the energy of a transaction to a superposition of ground and excited states, the transactional approach may combine with quantum computation to produce a spacetime-anticipating quantum-entangled system, which may be pivotal in how the conscious brain does its anticipation. The brain is not a marvelous computer in any classical sense (we can barely repeat seven digits), but it is a phenomenally sensitive anticipator of environmental and behavioral change. Subjective consciousness has its survival value in enabling us to jump out of the way when the tiger is about to strike: not so much by computing which path the tiger might be on, because this is an intractable problem, and the tiger can take our computation into account in avoiding the places we would expect it most likely to be, but by intuitive conscious anticipation. What is critical here is that in the usual quantum description, which considers only the emitter, we have only the probability function, because the initial conditions are insufficient to determine the outcome. There is thus no useful way quantum uncertainty can be linked to conscious free will. Only by including the advanced absorber waves can we see how such anticipation might be achieved.
The SexuallyComplex Quantum World
We have seen that all phenomena in the quantum universe present as a succession of fundamental complementarities in a shifting vacuum groundswell of uncertainty, out of which the superabundance of quantum diversity emerges. In this process we have discovered a multiple overlapping series of divisions: (i) wave-particle complementarity, fundamental to the quantum, (ii) the roles of emitters and absorbers, (iii) the advanced and retarded solutions of special relativity, (iv) the fermions comprising matter complementing the bosons mediating radiation, (v) virtual and real particles distinguishing force fields from positive energy matter and radiation, the engendered symmetry-breakings between (vi) space and time (reflecting that between momentum and energy) and (vii) between the four fundamental forces of nature, which in turn cause the quantum architecture of atoms and molecules to be asymmetric and capable of the complexity of interaction needed to form living systems (p 317), and finally (viii) duality, which makes it difficult or impossible to determine what is a fundamental particle and what is composite, in a sexual paradox between dual descriptions. Sexual paradox may also be manifest in the difficulty of separating the forces from the seething quantum 'ground' of vacuum uncertainty, which is generative of all types of quantum. To understand conscious anticipation, or free will, may require the inclusion of advanced waves, forming a paradoxical complement to the positive energy arrow of time.
All these complementarities possess attributes of sexual paradox and are pivotal to generating the complexity and diversity of the universe as we know it. There is no way to validly mount a single description based on only one of these complementary aspects alone. All attempts to define a theory based on only one aspect implicitly involve the other as a fundamental component, just as the propagators of the particles in quantum field theory are based on wave spreading. Classical mechanistic notions of a whole made out of clearly defined parts, as well as temporal determinism, fail. The mathematical idea of a reality made out of sets of points or point particles becomes replaced by the excitations of strings, again with wave-based harmonic energies. Just as we have an irreducible complementarity between subjective experience and the objective world, so all the features of the quantum universe present in sexually paradoxical complementarities. It is thus hardly surprising that these fundamental and irreducible complementarities may come to be expressed as fundamental themes in biological complexity, making sexuality a cumulative expression of a sexual paradox which lies at the foundation of the cosmos itself.
Although both the Taoist and Tantric views of cosmology are based on a complementation between female and male generative principles, many people, including a good proportion of scientists, still adhere to a mechanistic view of the universe as a Newtonian machine. In this view biological sexuality seems to be barred from having any fundamental cosmological basis, being an end product of an idiosyncratic process of chance and selection, in a biological evolution which has no apparent relation with, or capacity to influence, the vast energies and forces which shape the cosmological process. The origins of life remain mysterious and potentially accidental rather than cosmological in nature, and evolution an erratic series of accidents preserved by natural selection.
However, if we reverse this logic and begin with a sexually paradoxical cosmology, the phenomenon of biological sexuality becomes a natural cumulative expression of physical sexual paradox, operating in a new evolutionary paradigm in the biological world, rich with new feedback processes, which gives it the central role in genetics and organismic reproduction we regard as the signature and raison d'être of reproductive sexuality.
Appendix: Complementary Views of Quantum Mechanics and Field Theory
Fig 71: Werner Heisenberg (R760).
Heisenberg was the first person to define the concept of quantum uncertainty, or indeterminacy, as the term also means in German.
Heisenberg's research concentrated on momentum and angular momentum. It is well known that both rotations in 3D space and matrices in general do not commute, because matrix multiplication multiplies the rows of the first matrix by the columns of the second:

$(AB)_{ij}=\sum_k a_{ik}b_{kj}$, but $(BA)_{ij}=\sum_k b_{ik}a_{kj}$.

Hence in general $AB-BA\ne 0$. More generally, if $C=AB$, then $c_{ij}=\sum_k a_{ik}b_{kj}$. In quantum mechanical notation, we have $\langle i|C|j\rangle=\sum_k\langle i|A|k\rangle\langle k|B|j\rangle$, so $\sum_k|k\rangle\langle k|=I$, showing that $\sum_k|\langle k|\psi\rangle|^2=1$, all states leading to completeness with unit probability.
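Non-commutation is easy to verify directly; a minimal sketch using plain nested lists, where the two matrices are arbitrary illustrative choices (analogous to raising and lowering operators):

```python
def matmul(A, B):
    """Multiply two 2x2 matrices: rows of A times columns of B."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [0, 0]]     # raising-type matrix
B = [[0, 0], [1, 0]]     # lowering-type matrix

AB = matmul(A, B)
BA = matmul(B, A)
print(AB)   # [[1, 0], [0, 0]]
print(BA)   # [[0, 0], [0, 1]]
# AB - BA is nonzero: the order of multiplication matters
```

The same row-times-column rule, applied to operators in place of finite matrices, is exactly the insertion of a complete set of intermediate states in the quantum notation above.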
Fig 72: Erwin Schrodinger (R760)
Schrodinger's wave equation and Heisenberg's matrix mechanics highlight a deeper complementarity in mathematics between the discrete operations of algebra and the continuous properties of calculus. When Heisenberg was trying to solve his matrix equations, the mathematician David Hilbert suggested looking at the differential equations instead. But it fell to Schrodinger, who took his mistress up into the Alps, to discover his wave equation on a romantic tryst. It was only when Hilbert and others examined the two theories closely that it was discovered that they were identical, but complementary, descriptions.
Schrodinger derived his time-independent wave equation as follows. The Hamiltonian dynamical operator represents the total kinetic and potential energy $H=K+V$ of the system, in terms of how the wave varies with time and space:

$i\hbar\frac{\partial\Psi}{\partial t}=\hat H\Psi=\left(-\frac{\hbar^{2}}{2m}\nabla^{2}+V\right)\Psi$, where $\hbar=\frac{h}{2\pi}$.

This is a non-relativistic equation expressed in terms of the first time derivative. If we now assume the wave function consists of separate space and time terms, $\Psi(x,t)=\psi(x)e^{-iEt/\hbar}$, and seek time independence of the wave function at constant energy $E$, we get

$\hat H\psi=E\psi$, or $-\frac{\hbar^{2}}{2m}\nabla^{2}\psi+V\psi=E\psi$.
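The time-independent equation can be checked numerically in the simplest case, a particle in an infinite square well with V = 0 inside, whose eigenfunctions are sin(nπx/L) with energies E_n = n²π²ℏ²/2mL². A sketch in natural units (ℏ = m = L = 1), applying the Hamiltonian by finite differences:

```python
import math

hbar = m = L = 1.0
h = 1e-4   # finite-difference step

def psi(n, x):
    """Infinite square-well eigenfunction (unnormalised)."""
    return math.sin(n * math.pi * x / L)

def H_psi(n, x):
    """Apply H = -(hbar^2/2m) d^2/dx^2 (V = 0 inside the well)
    using a central second difference."""
    second = (psi(n, x + h) - 2 * psi(n, x) + psi(n, x - h)) / h**2
    return -hbar**2 / (2 * m) * second

n, x = 2, 0.3
E_exact = n**2 * math.pi**2 * hbar**2 / (2 * m * L**2)
ratio = H_psi(n, x) / psi(n, x)    # H(psi)/psi should equal E_exact
print(ratio, E_exact)
```

The ratio H(ψ)/ψ is the same at every interior point, which is precisely the statement that ψ is an eigenfunction of H with eigenvalue E.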
Interpreted in terms of matrix mechanics, the Schrodinger wave equation becomes a sum of basis vectors representing each of the wave states, $|\psi\rangle=\sum_n c_n|n\rangle$. The algebraic version of the equation, $\hat H|\psi\rangle=E|\psi\rangle$, becomes the matrix equation $\sum_m H_{nm}c_m=Ec_n$. Solving in terms of a transformation to a new state, we have $H'=U^{\dagger}HU$, where $U$ is unitary, $U^{\dagger}U=I$. Hence $H'$ is diagonal in the basis of eigenvectors, and so $H'_{nn}=E_n$. Thus the allowed energies are the eigenvalues $E_n$ and the stationary states are the corresponding eigenvectors. This is the famous eigenvalue ('own-value') problem, whose stable standing-wave solutions are the s, p, d and f orbitals of an atom.
Heisenberg's problem of uncertainty, expressed in non-commuting operators such as position $x$ and momentum $p$, gives us back the uncertainty relation when we reinterpret momentum in terms of the wave function as a differential operator $\hat p=-i\hbar\frac{\partial}{\partial x}$; we have

$[\hat x,\hat p]\psi=\hat x\hat p\psi-\hat p\hat x\psi=-i\hbar x\frac{\partial\psi}{\partial x}+i\hbar\frac{\partial(x\psi)}{\partial x}=i\hbar\psi$.

Hence $[\hat x,\hat p]=i\hbar$, another view of the uncertainty relation $\Delta x\,\Delta p\ge\frac{\hbar}{2}$.
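The canonical commutation relation can likewise be checked numerically, applying the differential form of the momentum operator by central differences to an arbitrary smooth test function (a Gaussian here, with ℏ = 1); the commutator returns iℏψ to within the discretisation error:

```python
import math

hbar = 1.0
h = 1e-5   # finite-difference step

def p_op(f, x):
    """Momentum operator p = -i*hbar d/dx via a central difference.
    Returns the complex value (p f)(x)."""
    return -1j * hbar * (f(x + h) - f(x - h)) / (2 * h)

def commutator(f, x):
    """([x, p] f)(x) = x*(p f)(x) - (p (x f))(x); equals i*hbar*f(x)."""
    xf = lambda t: t * f(t)
    return x * p_op(f, x) - p_op(xf, x)

f = lambda x: math.exp(-x**2)       # a smooth test wave function
val = commutator(f, 0.5)
print(val, 1j * hbar * f(0.5))      # both approximately i*hbar*f(0.5)
```

The extra term produced by differentiating through the factor x is exactly what survives in the commutator, independent of the choice of test function.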
In Schrodinger's view the wave function varies with time according to a fixed operator, but in the Heisenberg view the wave function is a fixed vector in Hilbert space and the Hermitian operator evolves in time.
Fig 73: Paul Dirac
Dirac extended Schrodinger's equation to make it relativistic, at the same time ushering in the existence of the positron, and antimatter generally, as solutions coming out of the equation. His equation is:

$i\hbar\frac{\partial\psi}{\partial t}=\left(\beta mc^{2}+c\sum_{k=1}^{3}\alpha_{k}p_{k}\right)\psi(x,t)$,

where $\psi(x,t)$ is the wave function for the electron of rest mass $m$ with spacetime coordinates $x, t$. The $p_{1}, p_{2}, p_{3}$ are the components of the momentum, $c$ is the speed of light, and $\hbar$ is Planck's constant divided by 2π. The new elements in this equation are the 4×4 matrices $\alpha_{k}$ and $\beta$ and the four-component wave function. The four components are interpreted as a superposition of a spin-up electron, a spin-down electron, a spin-up positron, and a spin-down positron.
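The algebra that makes the Dirac equation consistent with the relativistic energy relation E² = (pc)² + (mc²)² is that the α_k and β all square to the identity and mutually anticommute. This can be verified directly in the standard Dirac representation, where each α_k carries the corresponding Pauli matrix in its off-diagonal 2×2 blocks; a sketch with plain nested lists:

```python
def mm(A, B):
    """Matrix product via rows-times-columns."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def anticomm(A, B):
    """Anticommutator {A, B} = AB + BA."""
    AB, BA = mm(A, B), mm(B, A)
    return [[x + y for x, y in zip(r1, r2)] for r1, r2 in zip(AB, BA)]

def block_offdiag(s):
    """4x4 matrix with the 2x2 matrix s in both off-diagonal blocks."""
    return [[0, 0] + list(s[0]), [0, 0] + list(s[1]),
            list(s[0]) + [0, 0], list(s[1]) + [0, 0]]

# Pauli matrices
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]

alphas = [block_offdiag(s) for s in (s1, s2, s3)]
beta = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]

I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
Z4 = [[0] * 4 for _ in range(4)]

# beta^2 = I, alpha_k^2 = I, and distinct matrices anticommute
print(mm(beta, beta) == I4)                     # True
print(all(mm(a, a) == I4 for a in alphas))      # True
print(anticomm(alphas[0], beta) == Z4)          # True
print(anticomm(alphas[0], alphas[1]) == Z4)     # True
```

Squaring the right-hand side of the equation, the cross terms cancel precisely because of these anticommutation relations, leaving the relativistic energy relation and forcing the matrices to be at least 4×4, hence the four-component spinor.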
Fig 74: Feynman diagram for first order photon exchange in electronelectron repulsion. Richard Feynman with his own diagram (R760).
The underlying wave-particle complementarity in Feynman's approach to quantum field theory, despite its apparent explanation of the electromagnetic field in terms of particle interaction, is succinctly demonstrated in the first-order diagram for electron-electron scattering (electromagnetic charge repulsion) through exchange of virtual photons provided by uncertainty. The propagator for the diagram is:
$K^{(1)}(3,4;1,2)=-ie^{2}\iint K_{+a}(3,5)K_{+b}(4,6)\,\gamma_{a\mu}\gamma_{b}^{\mu}\,\delta(s_{56}^{2})\,K_{+a}(5,1)K_{+b}(6,2)\,d\tau_{5}\,d\tau_{6}$,

where the $\gamma_{\mu}$ are the variants of the Pauli spin matrices, the Dirac function $\delta(s_{56}^{2})$ represents the discrete interaction of the virtual photon over the spacetime interval, and the $K_{+}$ are the propagators for electrons a and b, carried by Huygens' wavefront principle according to the wave summations $K_{+}(2,1)=\sum_{E_{n}>0}\phi_{n}(x_{2})\bar{\phi}_{n}(x_{1})e^{-iE_{n}(t_{2}-t_{1})}$ for $t_{2}>t_{1}$, representing positive energy 'retarded' solutions travelling in the usual direction in time, and $K_{+}(2,1)=-\sum_{E_{n}<0}\phi_{n}(x_{2})\bar{\phi}_{n}(x_{1})e^{-iE_{n}(t_{2}-t_{1})}$ for the corresponding negative energy solutions in the reversed 'advanced' time direction $t_{2}<t_{1}$, where $E_{n}$ and $\phi_{n}$ are the energy eigenvalues and eigenfunctions for the wave equation.
This both explains how the relativistic solution gives rise to both time-backward negative energy solutions and time-forward positive energy ones, making possible the particle-antiparticle creation and annihilation events critical to the sequence of Feynman diagrams, and also shows clearly, in the complex exponentials, the sinusoidal wave transmission hidden in the particle diagrams of the quantum field approach.