Quantum Reality and Cosmology
Complementarity and Spooky Paradoxes
Contents
Quantum Reality
Quantum Cosmology
Introduction
This article is designed to give an overview of the developments in quantum reality and cosmology, from the theory of everything to the spooky properties of quantum reality that may lie at the root of the conscious mind. Along the way, it takes a look at just about every kind of weird quantum effect so far discovered, while managing a description the general reader can follow without a great deal of prior knowledge of the area.
We all exist in a quantum universe, and the classical one we assume and link to our experience of the everyday world is just an extrapolation. To understand both the foundations of cosmology and the spooky world of quantum reality, we need to set aside the classical ideas of mechanism, determinism and the mathematical notions of sets made out of discrete points and come to terms with ultimate paradoxes of spacetime and complementarity. To fully understand the implications we need to examine all aspects of the universe in detail, from the smallest particles to the universe as a whole and only then come to a synthesis of the role complementarity and 'sexual paradox' may play at the cosmological level.
Our quantum world is very subtle and much more mysterious than a mechanical 'building blocks' view of the universe with simple separate classical particles interacting in empty space. Many people lead their lives at the macroscopic level as if quantum reality didn't exist, but quantum reality runs from the very foundations of physics to the ways we perceive. Our senses of sight, hearing, touch and taste/smell are all distinct quantum modes of interaction with the environment. Senses aren't just biological adaptations but fundamental quantum modes of information transfer, by photons, phonons, solitons and orbital interactions. Quantum processes such as tunneling are central to the function of our enzymes and to the ion channels and synapses that support our excitable neurons (Walker R724).
Fig 2: Bohr and Einstein. Their debate, which sparked the Copenhagen interpretation that quantum mechanics describes only our knowledge of a system, not its actual state, eventually led to the discovery of quantum nonlocality.
The 'correspondence principle', by which the quantum world is supposed to fade into classical 'reality', is never fully realized. Many phenomena in the everyday world involve chance events which are themselves often sensitively related to uncertainties at the quantum level. Chaotic, self-critical and certain other processes may 'inflate' quantum effects into global fluctuations. Conscious interaction with the physical world may likewise depend both on quantum excitations and on the loophole of uncertainty in expressing 'free will'. We need to understand how quantum reality interacts with conscious experience; however, in doing so we immediately find the most challenging example of sexual paradox, one that lies at the core of the cosmological puzzle: wave-particle complementarity. A quantum manifests in two complementary ways, as a nonlocal flowing 'wave' which has a frequency and spatial extension, and as a localized 'particle' which is created or destroyed in a single step. It can manifest as either, but not both at the same time. All the weird quantum paradoxes of nonlocality, entanglement and collapse emerge from this complementary relationship. To understand the full dimensions of this mystery we need to see how this strange reality was discovered and do a little fairly simple maths.
The Wave, the Particle and the Quantum
In the late 19th century, classical physics seemed to have captured all the phenomena of reality, including Clerk Maxwell's equations for the electromagnetic transmission of light: ∇²E = μ₀ε₀ ∂²E/∂t², where c = 1/√(μ₀ε₀).
However Lord Kelvin noticed what he called 'two small dark clouds on the horizon', which together plunged classical physics into the quantum-theoretic age.
Why we don't burn to a crisp: The first of these was blackbody radiation, named after the thermal radiation from a dark cavity and also from bright thermal objects like the sun. We know the sun emits some ultraviolet and can burn us, but not as much as the peak of visible light. If classical physics were true it should emit more ultraviolet and even more X-rays and gamma rays, a situation called the ultraviolet catastrophe.
Fig 3: The solar spectrum (Fraunhofer 1814) and Planck's radiation law; both have a peak at about 5,000°C.
Planck eventually solved the problem by quantizing the radiation into little packets of energy proportional to frequency by a constant h, called quanta. The particles responsible for this packeting are now identified as photons. The answer to the problem is this: in the classical view the energy distribution should increase endlessly into the high frequencies, but in the quantum view, to release a particulate photon of a given frequency, there has to be an atom somewhere with an electron energized enough to radiate the photon, so the energy is limited by the temperature of the thermal body. Thus, because the photons come in quanta, or packets, the radiation cannot go endlessly up into the ultraviolet. Planck's equation is displayed in fig 3. It starts out growing for small energies but falls off exponentially after the peak, corresponding to the exponentially rarer thermodynamic excitations at a given temperature.
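The cure for the ultraviolet catastrophe can be checked numerically. The following sketch (Python, with standard SI constants; the solar temperature and the frequency grid are illustrative choices) evaluates Planck's law and confirms that the spectrum peaks near the visible and then collapses, rather than climbing into the ultraviolet as classical physics would demand:

```python
import numpy as np

# Physical constants (SI)
h = 6.626e-34   # Planck's constant, J s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann's constant, J/K

def planck(nu, T):
    """Planck's radiation law: spectral radiance of a black body
    at frequency nu (Hz) and temperature T (K)."""
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))

T_sun = 5800.0                       # approximate solar surface temperature, K
nu = np.linspace(1e12, 3e15, 5000)   # infrared through ultraviolet
B = planck(nu, T_sun)

nu_peak = nu[np.argmax(B)]
print(f"peak frequency ~ {nu_peak:.2e} Hz")  # lands in the visible/near-IR
# The classical Rayleigh-Jeans law 2 nu^2 k T / c^2 grows without bound,
# but Planck's law falls off exponentially beyond the peak:
print(planck(3e15, T_sun) < planck(nu_peak, T_sun) / 100)
```

The exponential denominator is exactly the scarcity of sufficiently energized atoms described above: excitations of energy hν far above kT become exponentially rare.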
The Photoelectric Effect and Einstein's Law: Einstein made the next breakthrough addressing the other dark cloud, the photoelectric effect. If you shine light on a plate in a vacuum valve and vary the voltage required to stop the resulting current flow, you find the more light, the more current, but no more voltage. The stopping voltage turns out to depend only on the frequency. That is, the energy of each electron doesn't change, just the flow rate. This makes no sense with a classical wave, because a bigger wave has both more flow and more energy.
Fig 4: Photoelectric effect apparatus
The answer is that a given frequency of light contains particles called photons. The more photons, the more excited electrons cross the vacuum by gaining this energy, but there is no change in the energy of each, because each photon has the same energy for a given colour (frequency), regardless of how bright the light.
Einstein solved this problem by realizing the energy of any particle is proportional to its frequency as a wave by the same factor h - Planck's constant - the fundamental unit of quantumness. Energy is thus intimately related to frequency - in a sense it IS frequency. Measuring one is necessarily measuring the other. We can thus write E = hν (1).
Quantum Uncertainty: Suppose we try to imagine how we would calculate the frequency of a wave if we had no means to examine it except by using another similar wave and counting the number of beats that the 'strange wave' makes against the standard wave we have generated. This is exactly the situation we face in quantum physics, because all our tools are ultimately made up of the same kinds of wave-particle quanta we are trying to investigate. If we can't measure the amplitude of the wave at a given time, but only how many beats occur in a given period, we can only determine the frequency with any accuracy by letting several beats pass. We have then, however, let a considerable time elapse, so we don't know exactly when the frequency was at this value.
The more closely we try to fix the frequency to a given accuracy, the longer the beats take to occur. We thus cannot know the time and the frequency simultaneously. The more precisely we try to define the frequency, the more the time is smeared out. Measuring a wave frequency with beats has intrinsic uncertainty as to the time, which becomes a smeared-out interval. The relationship between the frequencies and the beats is Δν·Δt ≈ 1 (2).
Fig 5: Waves and beats.
Despite gaining his fame for discovering relativity, and the doom equation E = mc^{2} which made the atom bomb possible, Einstein, possibly in cooperation with his wife, also made a critical discovery about the quantum. Einstein's law connects to every energetic particle a frequency ν = E/h.
If we apply equations (1) & (2) together, we immediately get the famous Heisenberg uncertainty relation ΔE·Δt ≥ h/4π. It tells us something is happening which is impossible in the classical world. We can't know the energy of a quantum interaction and the time it happened simultaneously. Energy and time have entered into a primal type of prisoners' dilemma, a catch-22. The closer we try to tie down the energy, the less precisely we know the time. This peculiar relationship places a specific taboo on knowing all the features of a situation and means we cannot predict precise outcomes, only probabilities. The same goes for momentum and position in each of the three spatial dimensions. Notice also that this links energy and momentum, time and space, and frequency and wavelength as three manifestations of one another. The way in which this happens is illuminating. Each quantum can be conceived as a particle or as a wave, but not both at the same time. Depending on how we are interacting with it or describing it, it may appear as either.
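This frequency-time trade-off is a property of waves themselves and can be demonstrated with a Fourier transform: the shorter a wave packet lasts, the broader its spread of frequencies, with the product of the two fixed. A minimal numerical sketch (Python; the Gaussian envelope, 1 Hz carrier and grid sizes are illustrative assumptions):

```python
import numpy as np

def freq_spread(duration, n=2**14, span=200.0):
    """Standard deviation of the frequency spectrum of a wave packet
    whose Gaussian envelope has the given duration (seconds)."""
    t = np.linspace(-span/2, span/2, n)
    dt = t[1] - t[0]
    # carrier wave at 1 Hz, confined to a window of width ~duration
    psi = np.exp(-t**2 / (2 * duration**2)) * np.exp(2j * np.pi * t)
    power = np.abs(np.fft.fft(psi))**2
    freqs = np.fft.fftfreq(n, dt)
    mean = np.sum(freqs * power) / np.sum(power)
    return np.sqrt(np.sum((freqs - mean)**2 * power) / np.sum(power))

# Halving the duration doubles the frequency spread: the product is constant
for d in (1.0, 2.0, 4.0):
    print(f"duration {d}: duration x spread = {d * freq_spread(d):.4f}")
```

The constant product is the mathematical content of (2); multiplying through by h via (1) turns it into the Heisenberg relation between energy and time.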
Quantum Chemistry: All particles, such as the electrons, protons and neutrons which make up the atoms of our chemical elements and molecules, exist as both particles and waves. The orbitals of the electrons around atoms and those linking each molecule together occur only at the energies and sizes which correspond to a perfect standing wave, forming a set of discrete levels like the layers of an onion. These in turn determine the chemical properties of each substance. Because the molecular orbitals formed between a pair of atoms have lower energy than their individual atomic counterparts, the atoms react to form a molecule, releasing the spare energy as heat. The characteristic energy differences between the levels of a given atom can be seen, both on earth and in the universe at large, as emission or absorption lines in the electromagnetic spectrum.
Fig 6: Quantum chemistry. (a) s, p, d, f orbitals have orbital angular momentum quantum numbers 0, 1, 2 and 3 respectively. Each occurs in a series of levels, forming the shells or orbitals of the atom. The first level 1s can contain 2 electrons of opposite spin. The second, with 2s and three p orbitals 2p_{x}, 2p_{y} and 2p_{z}, can hold 8. These can form energy-balancing linear combinations, resulting in hybrid sp orbitals. (b) Two s orbitals form a lower energy σ molecular orbital, as well as a higher energy σ* repelling antibonding orbital if the electron spins are not complementary (see fermions below). (c) Bonding p orbitals can also form π orbitals. Six p orbitals can combine to form a single delocalized π molecular orbital, as in the benzene ring. Hybrid atomic orbitals sp, sp^{2} and sp^{3} lead to linear, planar and tetrahedral bonding arrangements seen in many molecules, due to energy minimization. (d) Absorbed or emitted photons cause electron transitions between orbitals in hydrogen, giving rise to the signature of the hydrogen spectrum (e). This signature in space, redshifted far into the low frequencies, revealed the expanding universe.
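The discrete levels show up directly in the hydrogen spectrum of fig 6. Level n has energy proportional to −1/n², so a transition from level n₂ down to n₁ emits a photon with 1/λ = R(1/n₁² − 1/n₂²), where R is the Rydberg constant. A quick sketch (Python) reproduces the visible Balmer lines, the transitions down to n = 2:

```python
# Wavelengths of the hydrogen Balmer series from the Rydberg formula
# 1/lambda = R (1/n1^2 - 1/n2^2); transitions down to n1 = 2 are visible.
R = 1.0973731568e7  # Rydberg constant, 1/m

def balmer_nm(n):
    """Vacuum wavelength (nm) of the transition n -> 2 in hydrogen."""
    inv_lambda = R * (1/2**2 - 1/n**2)
    return 1e9 / inv_lambda

for n in (3, 4, 5, 6):
    print(n, round(balmer_nm(n), 1))
# H-alpha (n=3) ~ 656 nm (red), H-beta (n=4) ~ 486 nm (blue-green)
```

It is exactly these lines, redshifted in distant galaxies, that revealed the expanding universe.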
Two-slit interference and Complementarity
We are all familiar with the fact that CDs have a rainbow appearance on their underside. This comes from the circular tracks spaced a distance similar to the wavelength of visible light. If we used light of a single wavelength we would see light and dark bands. We can visualize this process more simply with just two slits as in fig 7. When many photons pass through, their waves interfere as shown and the photographic plate gets dark and light interference bands where the waves from the two slits reinforce or cancel, because the photons are more likely to end up where their superimposed wave amplitude is large. The experiment confirms the wave nature of light, since the size of the bands is determined by the distance d between the slits in relation to the wavelength λ = c/ν, where c is the velocity of light: bright bands occur where d·sinθ = nλ.
We know each photon passes through both slits, because we can slow the experiment down so much that only one photon is released at a time and we still eventually get the interference pattern over time. Each photon released from the light bulb is emitted as a particle from a single hot atom, whose excited electron is jumping down from a high energy orbit to a lower one. It is thus released locally as a single 'particle' created by a single transition between two stable electron orbitals, but it spreads and passes through both slits as a wave. After this the two sets of waves interfere, as shown in fig 7, to make light and dark bands on the photographic plate when the light is of a single frequency, and the rainbows we see on a CD or DVD when white light of many frequencies is reflected off the shiny rings between the grooves in the manner of a multi-slit apparatus.
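The band positions follow directly from the interference condition: where the path difference d·sinθ is a whole number of wavelengths the two slit waves reinforce, and where it is a half-integer number they cancel. A small sketch (Python; the wavelength and slit spacing are illustrative values):

```python
import numpy as np

# Two narrow slits separated by d, illuminated at wavelength lam:
# intensity on a distant screen is proportional to cos^2(pi d sin(theta)/lam)
lam = 500e-9   # 500 nm green light
d = 2e-6       # 2 micron slit separation

def intensity(theta):
    return np.cos(np.pi * d * np.sin(theta) / lam)**2

theta_bright = np.arcsin(lam / d)        # first bright band: d sin(theta) = lam
theta_dark = np.arcsin(lam / (2 * d))    # first dark band: d sin(theta) = lam/2
# centre and first bright band are maximal; the dark band is zero
print(intensity(0.0), intensity(theta_bright), intensity(theta_dark))
```

The same cos² profile, summed over many wavelengths and many tracks, produces the rainbow sheen of the CD.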
The evolution of the wave is described by an equation involving rates of change of a wave function φ with respect to space and time. For example, for a massive particle in free space, we have a 1D differential equation: ∂²φ/∂t² = c²∂²φ/∂x² − (m²c⁴/ħ²)φ. For Schrodinger's and Dirac's wave equations see the appendix.
This equation emphasizes the relationship between space and time we see emerging in special relativity below. The complementary relationship between Schrodinger's continuous wave equation and Heisenberg's discrete matrix mechanics (see appendix), which in a sense mirrors the wave and particle aspects of the quantum, highlights a deeper complementarity in mathematics between the discrete operations of algebra and the continuous properties of calculus, which may also be expressed in the brain (p 367).
Fig 7: Twoslit interference experiment (Sci. Am. Jul 92)
For the bands to appear in the interference experiment, each single photon has to travel through both slits as a wave. If you try to put any form of transparent detector in the slits to tell if it went through one or both, you will always find only one particle, but now the interference pattern will be destroyed. This happens even if you use the gentlest forms of detection possible, such as an empty resonant maser chamber (a maser is a microwave laser). Any measurement sensitive enough to detect a particle alters its momentum enough to smear the interference pattern into the same picture you would get if the particle just went through one slit. Knowing one aspect destroys the other.
Now another confounding twist to the catch-22. The photon has to be absorbed again as a particle by an atom on the photographic plate, or somewhere else, before or after, if it doesn't career forever through empty space - something we shall deal with shortly. Where exactly does it go? The rules of quantum mechanics are only statistical. They tell us only that the particle is more likely to end up where the amplitude of the wave is large, not where it will actually go on any one occasion. The probability is precisely the complex square of the wave's amplitude at any point: P = |φ|² = φ*φ.
Hence the probability is spread throughout the extent of the wave function, extending throughout the universe at very low probabilities. Quantum theory thus describes all future (and past) states as probabilities. Unlike classical probabilities, we cannot find out more about the situation and reduce the probability to a certainty by deeper investigation, because of the limits imposed by quantum uncertainty. The photon could end up anywhere the wave is nonzero. Nobody can tell exactly where, for a single photon. Each individual photon really does seem to end up being absorbed as a particle somewhere, because we will get a scattered pattern of individual dark crystals on the film at very low light intensities, which slowly build up to make the bands again. This is the mysterious phenomenon called 'reduction, or collapse, of the wave packet'. Effectively the photon was in a superposition of states represented by all the possible locations within the wave, but suddenly became one of those possible states, now absorbed into a single localized atom where we can see its evidence as a silver mark on the film. Only when there are many photons does the behaviour average out to the wave distribution. Thus each photon seems to make up its own mind about where it is going to end up, with the proviso that on average many do this according to the wave amplitude's probability distribution. So is this quantum free will? It may be.
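This statistical buildup can be mimicked numerically: draw each 'photon' position at random with probability given by the squared wave amplitude, and the bands emerge only in aggregate. A toy sketch (Python; the cos² screen pattern and all numbers are illustrative assumptions, not a simulation of real collapse):

```python
import numpy as np

rng = np.random.default_rng(42)

# Screen positions and a toy |psi|^2: the two-slit cos^2 band pattern
# (arbitrary units)
x = np.linspace(-1, 1, 400)
prob = np.cos(4 * np.pi * x)**2
prob /= prob.sum()

def arrivals(n):
    """Each photon 'collapses' to a single position drawn from |psi|^2."""
    return rng.choice(x, size=n, p=prob)

print(arrivals(10))           # a few photons look like random scatter
many = arrivals(100_000)      # many photons: the bands emerge
counts, _ = np.histogram(many, bins=40, range=(-1, 1))
print(counts.max() / counts.min())  # bright bands far outweigh dark ones
```

No individual draw can be predicted, yet the ensemble faithfully reproduces the wave's distribution - exactly the relationship between single quanta and the interference bands described above.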
The Cat Paradox and Quantum Reality
This situation is the subject of a famous thought experiment by Schrodinger, who invented the wave equation. In Schrodinger's cat paradox, we use an interference experiment with about one photon a second and we detect whether the photon hits one of the bright bands to the left (we can do the same thing measuring electron spin using an asymmetric magnetic field). If it does, then a cat is killed by smashing a cyanide flask. Now when the experimenter opens the box, they find the cat is either alive or dead, but quantum theory simply tells us that the cat is both alive and dead, each with differing probabilities - superimposed alive and dead states. This is counterintuitive, but fundamental to quantum reality.
Fig 8: Cat paradox experiment variations (King)
In the cat paradox experiment, the wave function remains uncollapsed at least until the experimenter I opens the box. Heisenberg suggested representing the collapse as occurring when the system enters the domain of thermodynamic irreversibility, i.e. at C. Schrodinger suggested the formation of a permanent record, e.g. classical physical events D, E or computer data G. However, even these classical outcomes could be superpositions, at least until a conscious observer experiences them, as the many-worlds theory below suggests. Wigner's friend is a version of the cat paradox in which an assistant G reports on the result, establishing that unless the first conscious observer collapses the wave function, there will be a conscious observer in a multiplicity of alternative states, which is an omnipresent drawback of the many-worlds view. In a macabre version the conscious assistant is of course the cat. According to the Copenhagen interpretation, it is not the system which collapses, but only our knowledge of its behavior. The superimposed state within the wave function is then not regarded as a real physical entity at all, but only a means of describing our knowledge of the quantum system, and calculating probabilities.
Penrose in objective reduction singles out gravity as the key unifying force and suggests that interaction with gravitons splits the wave function, causing reduction. Others try to discover hidden laws which might provide the subquantum process, for example the pilot wave theory, in which a well-defined particle is piloted within a nonlocal wave, as developed by David Bohm (1952). This can produce results comparable with quantum mechanics and provides an example of a plausible theory underlying quantum reality, but it has difficulties defining positions when new particles with new quantum degrees of freedom are created. Another approach we will explore is the transactional interpretation, which has features of all these ideas and seeks to explain this process in terms of a handshaking relationship between the past and the future, in which spacetime itself becomes sexual. Key here is the fact that reduction is not like any other physical process. One cannot tell when or where it happens, again suggesting it is part of the 'spooky' interface between quantum and consciousness.
In many situations people try to explain the intrinsic problems of uncertainty away on the basis that in the large real processes we witness, individual quantum uncertainties cancel in the law of averages of large numbers of particles. They will suggest, for example, that neurons are huge in terms of quantum phenomena and that the 'law of mass action' engulfs quantum effects. However, brain processes are notoriously sensitive. Moreover, history itself is a unique process out of many such 'unstable' possibilities at each stage. Critical decisions we make become watersheds. History and evolution are both processes littered with unique idiosyncratic acts in a counterpoint to the major forces shaping the environment and landscape. Chaotic processes are potentially able to inflate arbitrarily small fluctuations, so molecular chaos may 'inflate' the fluctuations associated with quantum uncertainty.
The Two-timing Nature of Special Relativity
We also live in a paradoxical relationship with space and time. While space is to all purposes symmetric and multidimensional, and not polarized in any particular direction, time is singular in the present and polarized between past and future. We talk about the arrow of time as a mystery related to the increasing disorder or entropy of the universe. We imagine spacetime as a four dimensional manifold, but we live out a strange sequential reality in which the present is evanescent. In the words of the song, "time keeps slipping, slipping, slipping ... into the future". There is also a polarized gulf between a past we can remember, the living present and a shadowy future of nascent potentialities and foreboding uncertainty. In a sense, space and time are complementary dimensionalities, which behave rather like the real and imaginary parts of a complex variable, as we shall see below.
A second fundamentally important discovery in twentieth century physics, complementing quantum theory, which transformed our notions of time and space, was the special theory of relativity. In Maxwell's classical equations for the transmission of light, light always has the same velocity c, regardless of the movement of the observer or the source. Einstein realized that Maxwell's equations and the properties of physics could be preserved under all inertial systems - the principle of special relativity - only if the properties of space and time changed according to the Lorentz transformations as a particle approaches the velocity of light c: x' = (x − vt)/√(1 − v²/c²), t' = (t − vx/c²)/√(1 − v²/c²).
Space becomes shortened along the line of movement and time becomes dilated. Effectively space and time are each being rotated towards one another like a pair of closing scissors. Consequently the mass, and hence energy, of any particle with nonzero rest mass tends to infinity at the velocity of light: m = m₀/√(1 − v²/c²).
By integrating this equation, Einstein was able to deduce that the rest mass must also correspond to a huge energy E_{o}=m_{o}c^{2} which could be released for example in a nuclear explosion, as the mass of the radioactive products is less than the mass of the uranium that produces them, thus becoming the doom equation of the atom bomb.
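The Lorentz factor γ = 1/√(1 − v²/c²) governing all these effects, and the enormity of the rest-mass energy, are easy to see numerically. A small sketch (Python; the speeds and the one-gram example are illustrative):

```python
import math

def gamma(beta):
    """Lorentz factor for speed v = beta * c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

# Time dilation, length contraction and mass increase grow without bound
# as v approaches c
for beta in (0.1, 0.9, 0.99, 0.999):
    print(f"v = {beta}c: gamma = {gamma(beta):.2f}")

# Rest-mass energy E0 = m0 c^2 for one gram of matter
c = 2.998e8  # m/s
E0 = 0.001 * c**2
print(f"E0 = {E0:.2e} J")  # ~9e13 J, roughly a 20-kiloton explosion
```

At everyday speeds γ is indistinguishable from 1, which is why the classical extrapolation works so well; near c the scissors close and the corrections dominate.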
In special relativity, space and time become related entities, which form a composite four dimensional spacetime, in which points are related by lightcones - signals travelling at the speed of light from a given origin. In spacetime, time behaves differently from space: when time is squared it gains a negative sign, just as the imaginary unit i does, since (ict)² = −c²t².
Hence the negative sign in the formula for spacetime distance, s² = x² + y² + z² − c²t² (3), and the scissor-like reversed rotations of time and space into one another expressed in the Lorentz transformations. Stephen Hawking has noted that, if we treat time as an imaginary variable, the spacetime universe could become a closed 'manifold' rather like a 4D sphere, in which the cosmic origin is rather like the north pole of Earth, because imaginary time will reverse the negative sign in (3) and give us the usual Pythagorean distance formula in 4D.
Fig 9: The spacetime light cone permits linkage of 'timelike' points connected by slower-than-light communication. In the 'spacelike' region, the temporal order of events, and hence causality, depends on the observer.
A significant feature of special relativity is the fact that the relativistic energy-momentum equation E² = p²c² + m²c⁴ has dual energy solutions: E = ±√(p²c² + m²c⁴) (4).
The negative energy solution has reversed temporal direction. Effectively a negative energy antiparticle travelling backwards in time is exactly the same as a positive energy particle travelling forwards in time in the usual manner. The solution which travels in the normal direction (subsequent points are reached later) is called the retarded solution. The one which travels backwards in time is called the advanced solution. A photon is its own antiparticle so in this case we just have an advanced or retarded photon.
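The dual solutions are simply the two square roots of the energy-momentum relation (4). A minimal check (Python; the momentum and mass values are arbitrary, in units where c = 1):

```python
import math

def energies(p, m, c=1.0):
    """Both roots of E^2 = p^2 c^2 + m^2 c^4: the retarded (positive
    energy) and advanced (negative energy) solutions."""
    E = math.sqrt((p * c)**2 + (m * c**2)**2)
    return E, -E

print(energies(3.0, 4.0))  # (5.0, -5.0) in units where c = 1
```

Classically the negative root is discarded as unphysical; in quantum field theory, as we see below, it is reinterpreted as the antiparticle travelling backwards in time.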
General relativity goes beyond this to associate gravity with the curvature of spacetime caused by mass-energy. The Einstein field equations are: R_{μν} − (1/2)R g_{μν} + Λ g_{μν} = (8πG/c⁴) T_{μν},
where R_{μν} is the Ricci tensor representing curvature, R is the scalar curvature, g_{μν} is the metric tensor representing the gravitational potential, Λ is the cosmological constant, G is Newton's gravitational constant, c is the speed of light, and T_{μν} is the stress-energy tensor representing the gravitational mass-energy field. Hence the equation explains gravitation as the curvature of spacetime caused by mass-energy.
Reality and Virtuality: Quantum fields and Seething Uncertainty
We have learned about waves and particles, but what about fields? What about the strange action-at-a-distance of electromagnetism and gravity? Special relativity and quantum theory combine to provide succinct explanations of electromagnetism; indeed quantum electrodynamics is the most accurate theory ever invented by the human mind, accurate to at least seven decimal places when describing the magnetic moment of an electron in terms of the hidden virtual photons which the electron emits and then almost immediately absorbs again.
Richard Feynman and others discovered the answer to this riddle by using uncertainty itself to do the job. The field is generated by particles propagated by a rule based on wave spreading. These particles are called virtual because they have no net positive energy and appear and disappear entirely within the window of quantum uncertainty, so we never see them except as expressed in the force itself. This seething tumult of virtual particles exactly produces the familiar effects of the electromagnetic field, and other fields as well. We can find the force between two electrons by integrating the effects of every virtual photon which could be exchanged within the limits of uncertainty, and of every other possible virtual particle system, including pairs of electrons and positrons coming into a fleeting existence. However, note that we can't really eliminate the wave description, because the amplitudes with which the particles are propagated from point to point are the hidden wave amplitudes. Uncertainty not only can create indefiniteness, it can actively create every conceivable particle out of the vacuum. Special relativity and the advanced and retarded solutions that arise are also essential to enable the interactions that make the fabric of the quantum field. The advanced solutions are required to have negative energy and the retarded solutions positive energy, giving the correct results for both scattering and electron-positron interactions within the field, so that electron scattering is the same as electron-positron creation and annihilation.
Fig 10: Quantum electrodynamics: (a,b) Two Feynman diagrams in the electromagnetic repulsion of two electrons. In the first a single virtual photon is exchanged between two electrons; in the second the photon becomes a virtual electron-positron pair during its transit. All such diagrams are integrated together to calculate the strength of the electromagnetic force. (c) A homologous weak force diagram shows how neutron decay occurs via the W particle of the weak nuclear force, which is itself a heavy charged photon, as a result of symmetry-breaking. A down quark becoming up changes a neutron (ddu) into a proton (duu). (d) Time-reversed electron scattering is the same as positron creation and annihilation.
Each more complex interaction involving one more particle vertex is smaller by a factor α = e²/(4πε₀ħc) ≈ 1/137, called the 'fine structure constant', where e is the electron charge, ε₀ the vacuum permittivity, and ħ and c are as above. This allows the contribution of all the diagrams to sum to a finite interaction, unlike many unified theories, which are plagued by infinities, as we shall see. The electromagnetic force is generated by virtual photons exchanged between charged particles existing only for a time and energy permitted by the uncertainty relation. The closer the two electrons, the larger the energy fluctuation possible over the shorter time taken to travel between them and hence the greater the force upon them. Even in the vacuum, where we think there is nothing at all, there is actually a sea of all possible particles being created and destroyed by the rules of uncertainty.
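The dimensionless value of this coupling can be checked directly from the fundamental constants. A sketch (Python, standard SI values):

```python
import math

# Fine structure constant alpha = e^2 / (4 pi epsilon0 hbar c),
# the dimensionless strength of the electromagnetic coupling
e = 1.602176634e-19       # electron charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
hbar = 1.054571817e-34    # reduced Planck constant, J s
c = 2.99792458e8          # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)   # ~0.0073, i.e. ~1/137
```

Because each extra vertex multiplies a diagram's contribution by roughly this small number, the perturbation series converges to a finite, extraordinarily accurate answer.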
The virtual particles of a force field and the real particles we experience as radiation such as light are one and the same. If we pump energy into the field, for example by oscillating it in a radio transmitter, the virtual photons composing the electromagnetic field become the real positive energy photons of radio waves, entering the receiver as a coherent stream of real photons encoding the music we hear. Relativistic quantum field theories always have both advanced and retarded solutions, one with positive and the other with negative energy, because of the two square roots of special relativity (4). They are often described by Feynman spacetime diagrams. When the Feynman diagram for electron scattering becomes time-reversed, it becomes precisely the diagram for creation and annihilation of the electron's antiparticle, the positron, as shown in fig 10. This hints at a fundamental role for the exotic time-reversed advanced solutions.
As a simple example, the wave equation for a zero spin particle with mass m has two solutions: φ(x,t) = e^{i(kx∓ωt)}, where ω = √(k²c² + m²c⁴/ħ²), the ∓ giving the positive and negative energy (retarded and advanced) solutions.
The weak and strong nuclear forces can be explained as quantum particle fields in a similar way, but gravity holds out further serious catch-22s. Gravity is associated with the curvature of spacetime, but this introduces fundamental contradictions with quantum field theory. To date there remains no fully consistent way to reconcile quantum field theory and gravitation, as we shall see.
The Spooky Nature of Quantum Entanglement
We have already seen how the photon wave passing through two slits ends up being absorbed by a single atom. But how does the wave avoid two particles accidentally being absorbed in far-flung parts of its wave function, out of direct communication with one another?
Because we can't sample two different points of a single-particle wave, it is impossible to devise an experiment which can test how a wave might collapse. One way to learn more about this situation is to find situations in which two or more correlated particles are released coherently in a single wave. This happens with many particles in a laser, in the holograms made by coherent laser light, and in Bose-Einstein condensates. It also happens in other situations where two particles of opposite spin or complementary polarization are created together. Many years ago Einstein, Podolsky and Rosen (EPR) suggested we might be able to break through the veil of quantum uncertainty this way, indirectly finding out more about a single particle than it is usually prepared to let on. Einstein commented "I do not believe God is playing dice with the universe", but that is precisely what quantum entanglement seems to entail.
Fig 11: (a) Pair-splitting experiment for photons using polarization. The first experiments were done on electron spins, using a Stern-Gerlach magnet's nonuniform field to separate spin up and down particles. (b) A variant of the experiment in (a), in which a polarized beam splitter leads to two detectors on each side, ensuring both polarization states are detected separately, avoiding errors from non-detection. (c) The results are consistent with quantum mechanics but inconsistent with Bell's inequalities for a locally causal system. Below is shown the CHSH (Clauser, Horne, Shimony, and Holt) inequality, an easier to use version of Bell's inequalities applicable to configuration (b), where N_{++} is the number of coincidences detected between D_{a+} and D_{b+} etc., and a, b are the analyser angles. Using Bell's proof, the combined expectancies on the left are bounded above by 2, but the sinusoidal angular projection of quantum theory allows 2√2. (d) Time-varying analyzers driven by an optical switch are added, showing the effect persists even when light doesn't have time to cross the apparatus. (e) The calcium transition (Aspect R25). (f) An experiment using the GHZ (Greenberger, Horne, and Zeilinger) arrangement involving three entangled photons generated by a pulse passed through a down converter (BBO) to create an entangled pair, beamsplitters (BS and PBS) as well as quarter and half-wave plates, detected at D_{1}, D_{2}, D_{3}, with T used as a trigger, according to the GHZ equation below, where the three photons are collectively either horizontally or vertically polarized (Nature 403 515-9). GHZ can return a violation of local causality directly, without having to build a statistical distribution. A third relation called the Leggett-Garg inequality (arXiv:1304.5133) applies instead to the varying time of observations and has been performed on systems from qubits through to neutrino oscillations (arXiv:1602.0004, fig 34).
A calcium atom's electron excited into a higher spin-0 s-orbital cannot fall back to its original s-orbital in one step, because a photon has spin 1 and the spins don't match: a transition between two spin-0 orbitals cannot radiate a single spin-1 photon, since the summed spins don't tally. The atom can however radiate two photons together as one quantum event, cancelling one another's spins, to transit to its ground state via an intermediate spin-1 p-orbital. This releases a blue and a yellow photon, which travel off in opposite directions with complementary polarizations.
When we perform the experiment, it turns out that the polarization of neither photon is defined until we measure one of them. When we measure the polarization of one photon, the other immediately, instantaneously, has complementary polarization. The nature of the angular correlations between the detectors is inconsistent with any locally-causal theory: no theory based on information exchanged between the detectors by particles at the speed of light can do the trick, as proved in a famous result by John Bell (1966) and confirmed in subsequent experiments. The correlation persists even if the detectors' configurations are changed so fast that there is no time for information to be exchanged between them at the speed of light, as demonstrated by Alain Aspect (1982). This phenomenon has been called quantum non-locality and, in its various forms, quantum 'entanglement', a name coined by Schrodinger, which is itself very suggestive of the throes of a sexual 'tryst'. The situation is subtly different from any kind of classical causality we can imagine. The information at either detector looks random until we compare the two. When we do, we find the two seemingly random lists are precisely correlated in a way which implies instantaneous correlation, but there is no way we can use the situation to send classically precise information faster than the speed of light. We can see in the correlations, however, just how the ordinary one-particle wave function can be instantaneously autocorrelated and hence not slip up in its accounting during collapse.
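The scale of the quantum violation of the CHSH bound can be checked numerically. The sketch below is illustrative only: it assumes the standard CHSH analyzer settings and the quantum prediction E(a,b) = cos 2(a−b) for polarization-entangled photon pairs.

```python
import math

def E(a, b):
    """Quantum prediction for the correlation between polarization
    analyzers set at angles a and b, for an entangled photon pair."""
    return math.cos(2 * (a - b))

# Standard CHSH analyzer settings (radians)
a, a2 = 0.0, math.pi / 4
b, b2 = math.pi / 8, 3 * math.pi / 8

# CHSH combination: any locally causal theory obeys S <= 2
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
# Quantum mechanics gives S = 2*sqrt(2) ≈ 2.83, violating the bound
```

Each of the four correlation terms contributes cos(π/4) ≈ 0.707 with the signs arranged to add, which is how the quantum total reaches 2√2 rather than 2.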
There are several loopholes that might undermine the conclusions of the Bell's theorem entanglement tests. The detection loophole arises because not all photons produced in the experiment are detected. The communication loophole arises if the entangled particles are too close together: in principle, measurements made on one could then affect the other without violating the speed-of-light limit. Fig 12 (a) shows an experiment (arxiv:1508.05949) where both electrons and photons are used, closing these two loopholes simultaneously.
There is thus no easy way out for a locally realistic theory to circumvent the limits imposed by Bell's theorem without undermining the principle that the observer has the free will to choose the orientations of the detectors. For the argument for Bell's inequality to follow, it is necessary to be able to speak meaningfully of what the result of the experiment would have been had different choices been made. This assumption is called counterfactual definiteness. Bell tests thus retain a freedom-of-choice loophole, because they assume experimenters have free choice over which measurements they perform on each particle of the pair. Some unknown effect could be influencing both the particles and what tests are performed (either by affecting the choice of measurement directly, or by restricting the available options), producing correlations that give the illusion of entanglement. Superdeterminism asserts that free choice can never happen because the entire state of the universe, including the observer, is deterministic, so the experimenter can choose only those configurations already stipulated. Gerard 't Hooft has some papers exploring this idea (arXiv:0908.3408, arXiv:0701097).
However, as shown in fig 12 (b), experiments have now been performed which significantly close the time frame on this loophole as well. To narrow the freedom-of-choice loophole, researchers previously put 144 kilometres between the source of entangled particles and the random-number generator used to pick experimental settings. The distance between them means that if any unknown process influenced both setups, it would have had to do so at a point in time before the experiment. But this only rules out influences in the preceding microseconds. The latest paper (arXiv:1611.06985) has sought to push this time back dramatically by using light from two distant stars to determine the experimental settings for each photon. The team picked which properties of the entangled photons to observe depending on whether its two telescopes detected incoming light as blue or red. The colour is decided when the light is emitted, and does not change during travel. This means that if some unknown effect, rather than quantum entanglement, explains the correlation, it would have to have been set in motion at least around 600 years ago, because the closest star is 575 light-years away. The approach may eventually push this limit back to billions of years by doing the experiment with light from more distant quasars.
Fig 12: The 2015 experiment (a) closes two loopholes. Experiments that use entangled photons are prone to the detection loophole: not all photons produced in the experiment are detected, and sometimes as many as 80% are lost. Experimenters therefore have to assume that the properties of the photons they capture are representative of the entire set. To get around the detection loophole, physicists often use particles that are easier to keep track of than photons, such as atoms. But it is tough to separate entangled atoms over large distances without destroying their entanglement. This opens the communication loophole: if the entangled atoms are too close together, then, in principle, measurements made on one could affect the other without violating the speed-of-light limit. The team used a cunning technique called entanglement swapping to combine the benefits of using both light and matter. The researchers started with two unentangled electrons sitting in diamond crystals held in different labs 1.3 km apart. Each electron was individually entangled with a photon, and both of those photons were then zipped to a third location. There the two photons were entangled with each other, and this caused both their partner electrons to become entangled too. This did not work every time: in total, the team managed to generate 245 entangled pairs of electrons over the course of nine days. The team's measurements exceeded Bell's bound, once again supporting the standard quantum view. Moreover, the experiment closed both loopholes at once: because the electrons were easy to monitor, the detection loophole was not an issue, and they were separated far enough apart to close the communication loophole too (arxiv.org/pdf/1508.05949). Experiment (b) substantially closes a third loophole, the freedom-of-choice loophole (arxiv:1608.01683).
Light from two telescopes trained on distant stars up to 600 light-years away is used to determine the choice of orientations, eliminating the gray light cone in the lower image, extending back up to 600 years. A third experiment (c) shows that two histories, in which the order of events and the accompanying changes induced by Alice and Bob are inverted in one of the two, can become entangled, so that the usual idea of causality does not apply (arxiv:1608.01683).
Entanglement can also apply not just to quantum states but to quantum histories, so that a photon which has specific states at two points in time cannot be assigned a state at intermediate points. Its history thus becomes a superposition of inconsistent histories which separate and come together again at the final point, illustrating many-worlds features of quantum superposition. An experimental realization has been performed (Cotler et al. 2016) as a temporal version of the three-degrees-of-freedom Greenberger-Horne-Zeilinger (GHZ) form of the Bell experiment: instead of three photons at the same time, the experiment explored a single photon at three different time points. In this case the bounds on the Bell's theorem equivalent are again violated, showing no single definite history can be assigned, indicating entangled histories.
An intriguing illustration of how different the quantum world can be is provided by a quantum game of NIM (Arxiv:1304.5133) utilizing the Leggett-Garg (1985) Bell-type inequality (see figs 11, 34). This places a bound on measurements Q = ±1 at three times t_{1}, t_{2}, t_{3}, where for the correlators we find C_{12} + C_{23} − C_{13} ≤ 1. Consider a quantum version of the three-box game, played by Alice and Bob, who manipulate the same three-level system. Alice first prepares the system in state |3> and then evolves it with a unitary operator that takes |3> to the equal superposition (|1> + |2> + |3>)/√3. Bob then has a choice of measurement: with probability p_{1}^{B} he decides to test whether the system is in state |1> or not (classically, he opens box 1), and with probability p_{2}^{B} he tests whether the system is in state |2> or not. Alice then applies a second unitary to the system, taking (|1> + |2> − |3>)/√3 to |3>, before she makes her final measurement to check the occupation of state |3>. If both Alice and Bob find the system in the state that they check (e.g. Bob measures level 1 and finds the system there, and Alice the same for state |3>), then Alice wins. If Alice finds the system in state |3> but Bob's measurement fails, then Bob wins. Finally, if Alice doesn't find the system in state |3>, the game is drawn. In a realistic description of this game, in which Bob's measurements are non-invasive, Alice's chance of winning can be no better than 50/50 as long as Bob chooses his measurements at random. In the quantum version, however, interference between the various paths means that Alice wins every time. Alice's quantum strategy therefore outstrips all classical (i.e. realistic NIM) ones.
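The quantum advantage in the three-box game can be checked with a small numerical sketch. The Python below is illustrative only: it assumes, as in the standard three-box paradox, that Alice's first unitary takes |3> to (|1> + |2> + |3>)/√3 and that her final check of |3> amounts to projecting onto (|1> + |2> − |3>)/√3 before the second unitary.

```python
from math import sqrt

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    n = sqrt(inner(v, v))
    return [a / n for a in v]

# State after Alice's first (assumed) unitary: equal superposition of boxes
psi = norm([1, 1, 1])
# Alice's final check of |3> projects, before the second (assumed) unitary,
# onto phi = (|1> + |2> - |3>)/sqrt(3)
phi = norm([1, 1, -1])

def play(box):
    """Probabilities that Alice or Bob wins when Bob opens `box` (0 or 1)."""
    found = [0.0, 0.0, 0.0]
    found[box] = 1.0                      # state if Bob finds the system
    p_found = inner(psi, found) ** 2      # Bob finds the system: 1/3
    leftover = [a - inner(psi, found) * f for a, f in zip(psi, found)]
    not_found = norm(leftover)            # state if Bob finds nothing
    p_alice = p_found * inner(found, phi) ** 2          # Bob found it, Alice finds |3>
    p_bob = (1 - p_found) * inner(not_found, phi) ** 2  # Bob missed, Alice finds |3>
    return p_alice, p_bob
```

For either box, Bob's winning probability comes out exactly zero: whenever Alice's final measurement succeeds, Bob's measurement must have succeeded too, so Alice wins every game that is not drawn.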
Fig 12b: Schrodinger's cat split into two entangled boxes
Scientists have in 2016 split Schrodinger's cat between two entangled boxes (Wang et al. 2016). Microwaves inside a superconducting aluminum cavity take the place of the cat. The microwaves' electric fields can be pointing in two opposing directions at the same time, just as Schrodinger's cat can be simultaneously alive and dead. Because the states of the two boxes are entangled, if the cat turns out to be alive in one box, it's also alive in the other: measurements from the two boxes will agree on the cat's status. For microwaves, this means the electric field will be in sync in both cavities. The scientists measured the cat states produced and found a fidelity of 81 percent. The result is a step toward quantum computing with such devices. The two cavities could serve the purpose of two quantum bits, or qubits. Cat states are more resistant to errors than other types of qubits, so the system could eventually lead to more fault-tolerant quantum computers.
This clash between subjective experience and quantum theory has led to much soul-searching. The Copenhagen interpretation says quantum theory just describes our state of knowledge of the system and is essentially incomplete. This effectively passes the problem back from physics to the observer. Some physicists think all the possibilities happen and there is a probability universe for each case. This is the many-worlds interpretation of Hugh Everett III. The universe becomes a superabundant superimposed set of all possible probability futures, and indeed all pasts as well, in a smeared-out 'holographic' multiverse in which everything happens. It suffers from a key difficulty: all the experience we have suggests just one possibility is chosen in each situation, the one we actually experience. Some scientists thus think collapse depends on a conscious observer. Many-worlds defenders claim an observer wouldn't see the probability branching because they too would be split, but this leaves us either with infinitely split consciousness, or we lose all forms of decision-making process, all forms of historicity in which there is a distinct line of history, in which watershed events do actually occur, and the role of memory in representing it.
Quantum Tunneling: In a version of the pair-splitting experiment (Chiao and Kwiat 1993), which illustrates the difficulty of using superluminal correlations to violate classical causality, one photon of a pair goes directly to a detector, while the other has to quantum tunnel through a partially reflecting mirror's energy barrier, designed so it succeeds 1% of the time. When the tunneling photon's arrival time is compared with the other's, it is detected sooner more than 50% of the time, indicating it was traveling up to 1.7 times the speed of light. The effect results from reshaping the wave: the leading edges of the two photons' waves arrive together, but the tunneling photon's wave packet has been shortened, so its peak, where detection is most probable, arrives sooner. This doesn't mean it can be used to convey information faster than the speed of light, because the effect lies within the uncertainty of position of the photon that determined the tunneling in the first place.
Fig 13: In radioactivity, particles can escape the nucleus by quantum tunneling out, even though an energy barrier greater than their own energy holds them in. They can do this because the wave function extends through the barrier, declining exponentially, so there is a non-zero probability of finding the particle outside. The energy (frequency) is unchanged, but the amplitude (probability or intensity) is reduced. Equivalently, we can think of tunneling as a fluctuation of energy, for a short enough time to jump over the barrier, which must be returned within the Heisenberg time limit. In the game of looking-glass croquet above, Alice hits rolled-up hedgehogs, each bearing an uncanny resemblance to Werner Heisenberg, towards a wall, overlooked by Einstein. Classically the hedgehogs always bounce off. Quantum-mechanically, however, a small probability exists that a hedgehog will appear on the far side. The puzzle facing quantum mechanics is: how long does it take to go through the wall? Does the traversal time violate Einstein's light-speed limit? In the pair-splitting experiment, when one photon is required to quantum tunnel, it seems to jump the barrier faster than light, so that it is likely to arrive sooner, but not in a way which violates Einsteinian causality by sending usable information faster than light.
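The exponential decline of the wave function inside the barrier can be made concrete. This sketch is an illustrative approximation, T ≈ e^{−2κL} with κ = √(2m(V−E))/ħ, ignoring the prefactor, for an electron meeting a rectangular barrier; the particle, barrier height and width are chosen arbitrarily for the example.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # one electron-volt in joules

def transmission(energy_ev, barrier_ev, width_nm):
    """Approximate tunneling probability exp(-2*kappa*L) for a particle
    of energy E meeting a rectangular barrier of height V > E, width L."""
    kappa = math.sqrt(2 * M_E * (barrier_ev - energy_ev) * EV) / HBAR
    return math.exp(-2 * kappa * width_nm * 1e-9)

# A 1 eV electron against a 2 eV barrier half a nanometre wide
t = transmission(1.0, 2.0, 0.5)  # a fraction of a percent get through
```

In this approximation, doubling the barrier width exactly squares the already small transmission probability, which is why tunneling rates fall off so steeply with distance.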
Since the first pair-splitting results in the 1980s there has been a veritable conjurer's collection of experiments, all of which verify the predictions of quantum mechanics and confirm the general principles of the pair-splitting experiment. Even if we clone photons to form quartets of correlated particles, any attempt to gain information about one of such a multiple collection collapses the correlations between the related twins. Furthermore, these effects can be retrospective, allowing photons to exist in superpositions of states which were created at different times.
Wheeler's Delayed Choice Experiment A counterintuitive aspect of quantum reality is that it is possible to change the way a quantum is detected after it has traversed its route, in such a way as to retrospectively determine whether it traversed both paths as a wave or just one as a particle. We can sample photons either by an interference pattern, verifying they went along both paths (e.g. both sides of the galaxy in fig 14), or place separate directional detectors, which will show they went one way around only, as particles (destroying the interference pattern). Moreover, we can decide which measurement to perform after the photon has passed the galaxy, at the end of its path. Thus the configuration of the latter parts of the wave appears to be able to alter the earlier history (Sci. American). The delayed choice experiment has a deep link with Schrodinger's cat, because opening the cat's box is exactly like using particle detectors, since it determines whether or not a scintillation particle was emitted, while leaving the box closed retains the superposition of the wave.
Fig 14: Wheeler delayed choice experiment on a cosmic scale: A very distant quasar is gravitationally lensed by an intervening galaxy.
Just how large such waves can become can be appreciated if we glance out at a distant galaxy, whose light has had to traverse the universe to reach us, perhaps taking as long as the history of Earth to get here. The ultimate size is as big as the universe. Only one photon is ever absorbed for each such wave, so once we detect it, the probability of finding the photon anywhere else, and hence the amplitude of the wave, must immediately become zero everywhere. How can this happen, if information cannot travel faster than the speed of light? For a large wave, such as light from a galaxy (and in principle for any wave), this collapse process has to cover the universe. When I shine my torch against the window, the amplitude of each photon is both reflected, so I can see it, and transmitted, escaping into the night sky. Although the wave may spread far and wide, if the particle is absorbed anywhere, the probability across vast tracts of space has to suddenly become zero. Moreover, collapse may involve the situation at the end of the path influencing the earlier history, as in the Wheeler delayed choice experiment.
Quantum Erasure It is also possible to 'uncollapse', or erase, such losses of correlation by re-interfering the wave functions so we can no longer tell the difference. The superposition choices of the delayed choice experiment do this. We can induce information about one of the particles and then erase it again by re-interfering it back into the wave function, provided we use none of its information: the quantum eraser. This successfully recreates the lost correlations, and the interference, which would have been destroyed had we looked at the information, is reintegrated undiminished.
Fig 15: Quantum erasure (Scientific American)
Erasing information about the path of a photon restores wave-like correlated behavior. Pairs of identically polarized correlated photons produced by a 'down-converter' bounce off mirrors, converge again at a beam splitter and pass into two detectors. A coincidence counter observes an interference pattern in the rate of simultaneous detections by the two detectors, indicating that each photon has gone both ways at the beam splitter, as a wave. Adding a polarization shifter to one path destroys the pattern by making it possible to distinguish the photons' paths. Placing two polarizing filters in front of the detectors makes the photons identical again, erasing the distinction and restoring the interference pattern.
Fig 16: Delayed choice quantum eraser configuration (en.wikipedia.org/wiki/Delayed_choice_quantum_eraser, doi:10.1103/PhysRevLett.84.1).
Use of entangled photons enables the design and implementation of versions of the quantum eraser that are impossible to achieve with single-photon interference. What makes the delayed-choice quantum eraser astonishing is that, unlike in the classic double-slit experiment, the choice of whether to preserve or erase the which-path information of the idler was not made until 8 ns after the position of the signal photon had already been measured.
An individual photon goes through one (or both) of the two slits. In the illustration, the photon paths are color-coded (red A, blue B). One of the photons, the "signal" photon (red and blue lines going upwards from the prism at BBO), continues to the target detector D0. Detector D0 is scanned in steps along its x-axis. A plot of "signal" photon counts detected by D0 versus x can be examined to discover whether the cumulative signal forms an interference pattern. The other entangled photon, the "idler" photon (red and blue lines going downwards from the prism), is deflected by prism PS, which sends it along divergent paths depending on whether it came from slit A or slit B. Beyond the path split, the idler photons encounter beam splitters BSa, BSb, and BSc, each of which has a 50% chance of allowing the idler photon to pass through and a 50% chance of reflecting it. The beam splitters and mirrors direct the idler photons towards detectors labeled D1, D2, D3 and D4.
Note that:
Detection of the idler photon by D3 or D4 provides delayed "which-path information", indicating whether the signal photon with which it is entangled had gone through slit A or B. On the other hand, detection of the idler photon by D1 or D2 provides a delayed indication that such information is not available for its entangled signal photon. Insofar as which-path information had earlier potentially been available from the idler photon, the information is said to have been subjected to a "delayed erasure".
Entanglement Swapping A second intriguing phenomenon, called entanglement swapping, can also be made into a Wheeler delayed choice version. In this there is a mediator, Victor. In the entanglement swapping procedure, fig 17 above, two pairs of entangled photons are produced, and one photon from each pair is sent to Victor. The two other photons are sent to Alice and Bob, respectively. If Victor projects his two photons onto an entangled state, Alice's and Bob's photons are entangled, although they have never interacted or shared any common past. What might be considered even more puzzling is the idea of delayed-choice entanglement swapping. Victor is free to choose either to project his two photons onto an entangled state, thus projecting Alice's and Bob's photons onto an entangled state, or to measure them individually, projecting Alice's and Bob's photons onto a separable state. If Alice and Bob measure their photons' polarization states before Victor makes his choice, this implies that whether their two photons are entangled (showing quantum correlations) or separable (showing classical correlations) can be defined after they have been measured (ArXiv:1203.4384).
Fig 17: (Above) Delayed choice entanglement swapping in which Victor is able to decide whether Alice's and Bob's photons are entangled or not after they have already been measured. (Below) A photon is entangled with a photon that has already died (sampled) even though they never coexisted at any point in time.
In a second experiment, fig 17 below, two photons can become entangled even though they have never coexisted at any point in time. Photons 1 & 2 are entangled and 1 is detected, killing it. A second entangled pair 3 & 4 is later created and 3 is then entangled with 2, disrupting the original entanglement with 4. But when 4 is measured, we find it is entangled with the dead photon 1 (ArXiv:1209.4191).
Superconductivity Entanglement is also involved in superconductivity. Electrons in the material form orbiting pairs: because the positively charged atomic ions are attracted to the negative electrons, a small peak of atomic density forms in the neighbourhood of the two electrons, which can bind them into entrapped orbits even though the negatively charged electrons would naturally repel. The pairs cannot then collide with the atoms in the material, because the activation energy required for either electron to escape the attractive pair, exchanging phonons in their minimum energy configuration, is greater than the thermal energy of the material. Hence the electric current flows unobstructed.
Entanglement can also explain the Meissner effect, in which a magnet levitates above superconducting material. The magnetic field induces a current in the surface of the superconductor, and this current effectively excludes the magnetic field from the interior of the material, causing the magnet to hover. The current halts the photons of the magnetic field after they have travelled only a short distance through the superconductor. For the normally massless photons it is as if they have suddenly entered treacle, effectively giving them a mass. A similar mechanism may be behind the mass of all particles. The source of this mass is believed to be the Higgs field, mediated by the Higgs boson, existing in a "condensed" state that impedes mediator particles such as the weak-force W and Z bosons in the same way that a superconductor's entangled electrons exclude the photons of a magnetic field (Quantum quirk may give objects mass, New Scientist 24 October 2004).
Quantum Teleportation, in which information defining a quantum particle in a given state is 'teleported' by another particle, has also become an experimental reality. These experiments give us a broad intuition of quantum reality. In quantum teleportation, one of a pair of entangled particles is interacted with by a third distinct particle to produce a signal which is 'teleported' as classical information, e.g. as part of the state of a transmitted particle, such as its polarization. This later interacts with the second entangled particle, resulting in the generation of a particle with identical properties to the third particle. The illustrations below show the theoretical process and two experimental realizations.
Fig 18: (a) In quantum teleportation, a quantum (blue left) is combined in an interference measurement with one of an entangled pair (pink left) by experimenter 1, who then sends the result of the measurement as classical information to experimenter 2, who applies it to transform the other entangled particle, causing it to enter the same quantum state as the original blue one. (b) Teleporting a grin: the magnetic moment (the grin) of a neutron (Cheshire cat) traversed a different path from the particle (doi:10.1038/ncomms5492). (c) Quantum teleportation has been achieved over distances greater than 100 km.
Quantum Computing Classical computation suffers from the potentially unlimited time it takes to check out every one of the possibilities. To crack a code we need to check all the combinations, whose number can increase more than exponentially with the size of the code numbers, possibly taking as long as the history of the universe to compute. Factorizing a large number composed of two primes is known to be computationally intractable enough to provide the basis for the public key encryption by which bank records and passwords are kept safe. Although the brain ingeniously uses massively parallel computation, there is as yet no systematic way to bootstrap an arbitrary number of parallel computations together in a coherent manner.
However, quantum reality is a superposition of all the possible states in a single wave function, so if we can arrange a wave function to represent all the possibilities in such a computation, superposition might give us the answer by a form of parallel quantum computation. A large number could in principle be factorized in a few superimposed steps, which would otherwise require vast time-consuming classical computer power to check all the possible factors one by one. Suppose we know an atom is excited by a certain quantum of energy, but provide only part of the energy required. The atom then enters a superposition of the ground state and the excited state, suspended between the two like Schrodinger's cat. If we then collapse the wave function, squaring its amplitude to a probability, as in P = |ψ|^{2}, it will be found in either the ground state or the excited state with equal probability. This superimposed state is sometimes called the 'square root of NOT', because when it is used to partially excite a system, two applications flip the system between 0 and 1, corresponding to a logical negation.
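The 'square root of NOT' can be written as a 2×2 matrix whose square is the NOT gate, which a short illustrative Python check confirms (plain lists of complex numbers, no quantum libraries assumed):

```python
# The "square root of NOT": a gate which, applied twice, yields logical NOT.
# A single application leaves the qubit in an equal superposition of 0 and 1.
s = 0.5
sqrt_not = [[s * (1 + 1j), s * (1 - 1j)],
            [s * (1 - 1j), s * (1 + 1j)]]

def matmul(a, b):
    """2x2 complex matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

not_gate = matmul(sqrt_not, sqrt_not)
# not_gate is [[0, 1], [1, 0]]: the Pauli-X (NOT) gate
```

The off-diagonal 1s mean two applications send 0 to 1 and 1 to 0, even though a single application leaves the qubit hovering between the two, just like the partially excited atom.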
To factorize a large number, we could devise a quantum system in two parts. The left part is excited to a superposition. Suppose we have a small array of atoms which effectively form the 0s and 1s of a binary number: 0 in the ground state and 1 in the excited state. If we then partially excite them all, they represent a superposition of all the binary numbers, e.g. 00, 01, 10 and 11. The right half of the system is designed to give the remainder, on division by the number to be factorized, of a test number raised to the power of each of the possible numbers in the left. These remainders turn out to be periodic, so if we measure the right we get one of the values. This in turn collapses the left side into a superposition of only those numbers with this particular value in the right. We can then recombine the reduced state on the left to find its frequency spectrum and decode the answer. As a simple example, suppose you are trying to factorize n = 15. Take the test number x = 2. The powers of 2 give you 2, 4, 8, 16, 32, 64, 128, 256 ... Now divide by 15, and if the number won't go, keep the remainder. That produces a repeating sequence 2, 4, 8, 1, 2, 4, 8, 1 ... with period r = 4, and we can use this to deduce that 3 = 2^{4/2} − 1 is a factor of 15. The quantum parallelism solves all the computations simultaneously; this is known as Shor's algorithm, after Peter Shor.
Stage three is the most complex and depends on the fact that the period of these repeats can be made to pop out of a calculation by getting the different universes to interfere with one another. A complex series of quantum logic operations has to be performed and interference then brought about by looking at the final answer. The final observed value, the period r, has a good chance of revealing a factor of n from the expression x^{r/2} − 1. In the simple example above, the repeating sequence is the four values 2, 4, 8, 1, so the period is 4. Thus Shor's algorithm produces the number 2^{4/2} − 1 = 3, which is a factor of 15.
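The classical skeleton of this procedure, finding the period of x mod n and extracting a factor from x^{r/2} − 1, can be sketched in a few lines of Python. Only the period-finding step is what the quantum computer accelerates; here it is done by brute force purely for illustration.

```python
from math import gcd

def find_period(x, n):
    """Classically find the period r with x**r = 1 (mod n) -- the step
    a quantum computer performs in superposition in Shor's algorithm."""
    r, val = 1, x % n
    while val != 1:
        val = (val * x) % n
        r += 1
    return r

def shor_factor(n, x):
    """Recover a factor of n from the period of x mod n."""
    r = find_period(x, n)
    if r % 2:
        return None          # need an even period; try another test number
    f = gcd(pow(x, r // 2) - 1, n)
    return f if 1 < f < n else None
```

For the worked example, find_period(2, 15) gives 4 and shor_factor(15, 2) recovers the factor 3; the classical loop takes time growing with the period, which is exactly the cost the quantum superposition removes.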
Fig 19: Above: Two-qubit logic gate performance (doi:10.1038/nature15263). Below: Adiabatic quantum computing on the spin-chain problem: one-dimensional spin problems with variable local fields and couplings between adjacent spins, an example of a stoquastic problem, with the evolution of the system for 9 qubits shown at right (doi:10.1038/nature17658).
Such quantum computers require isolation from the environment to avoid their quantum superpositions collapsing in decoherence. A two-qubit quantum logic gate has recently been constructed using silicon transistor technology, promising a proof-of-principle breakthrough in the construction of quantum computers (Veldhorst et al. 2015).
In an ingenious strategy, a team has used a well-tested four-qubit quantum computer to simulate the creation of pairs of particles and antiparticles, a proof-of-concept simulation in which energy is converted into matter, creating an electron and a positron. Quantum electrodynamics makes the most precisely verified predictions of any physical theory, but interactions involving the strong nuclear and colour forces become too complex, requiring simulations prone to exponential runaway in classical computing, which lacks quantum superposition. The team used a quantum computer in which an electromagnetic field traps four ions in a row in a vacuum, each one encoding a qubit. They manipulated the ions' spins (magnetic orientations) using laser beams, coaxing the ions to perform logic operations. The team's quantum calculations confirmed the predictions of a simplified version of quantum electrodynamics: "The stronger the field, the faster we can create particles and antiparticles" (Martinez et al. 2016).
Fig 19b: Left: (a) An experiment to simulate the coherent real-time dynamics of particle-antiparticle creation by realizing the Schwinger model (one-dimensional quantum electrodynamics) on a lattice. (b) The four-qubit arrangement. (c, d) Experimental and theoretical data showing the evolution of the particle number density as a function of time wt and particle mass m/w. Right: Quantum tomography, a technique akin to weak quantum measurement. In a functional quantum computer this could be used to inform error-correction measures on connected qubits in the same device. A qubit is created using a circuit with two superconducting metals separated by an insulating barrier. Passing a current produces a qubit with two superposed energy levels. Reducing the energy barrier maintaining the superposition collapses its wavefunction into one of the two energy levels. But if the barrier is set just above the higher of the two energy levels, it only partially collapses the waveform, in a "partial measurement". Scanning the qubit using microwave radiation, and then fully removing the energy barrier, can then reveal its state of superposition and document its collapse (Science 312 1498).
The D-Wave computer works on an entirely different principle of adiabatic quantum computing: quantum annealing of a potential energy landscape with multiple local minima. Classical annealing works to find a near-optimal local minimum by starting at a high thermodynamic temperature of random excitations, effectively throwing a marble around the landscape so it does not get caught in a high-altitude lake, before gradually lowering the temperature to settle into a local minimum not too far in value from the global minimum. Quantum annealing replaces kinetic excitation with graduated quantum tunneling to achieve the same effect. This approach works only on problems that can be encoded into such an energy landscape. Whether it achieves better performance than classical computing remains unproven (en.wikipedia.org/wiki/Adiabatic_quantum_computation).
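The classical half of this comparison can be sketched in a few lines. Below is a minimal, illustrative Metropolis annealer on a toy one-dimensional landscape (the landscape, cooling schedule and step size are arbitrary choices of ours, not D-Wave specifics); quantum annealing would replace the thermal uphill hops with tunneling through the barriers.

```python
import math
import random

def energy(x):
    # toy 1D landscape: a quadratic bowl plus ripples, giving several local minima
    return 0.1 * x**2 + math.sin(3 * x)

def classical_anneal(steps=20000, t_start=5.0, t_end=0.01, seed=1):
    """Metropolis annealing: accept uphill moves with probability exp(-dE/T),
    cooling T geometrically so the walker settles into a deep minimum."""
    rng = random.Random(seed)
    x = rng.uniform(-6, 6)
    for i in range(steps):
        t = t_start * (t_end / t_start) ** (i / steps)  # geometric cooling schedule
        x_new = x + rng.gauss(0, 0.5)                   # propose a random hop
        d_e = energy(x_new) - energy(x)
        if d_e < 0 or rng.random() < math.exp(-d_e / t):
            x = x_new                                   # downhill, or a thermal escape
    return x

x_final = classical_anneal()  # ends in a low-lying basin of the landscape
```

At high temperature the walker escapes shallow "lakes"; as the temperature falls the acceptance of uphill moves shrinks and it freezes into whichever deep basin it last reached.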
A team from Google (Barends et al. 2016), see fig 19, have more recently begun fundamental research into adiabatic quantum computing of systems such as stoquastic spin-chain problems. Stoquastic Hamiltonians, those for which all off-diagonal matrix elements in the standard basis are real and non-positive, are common in the physical world (Bravyi et al. 2008). They include flux-type Josephson junction qubits (Barends et al. 2013). A Josephson junction is a pair of conductors separated by a thin insulating barrier, which permits quantum tunneling. It is a macroscopic quantum phenomenon resulting in a current crossing the junction, determined by the junction's flux quantum, in the absence of any external electromagnetic field, with discrete steps under increasing voltage.
Quantum Cryptography exploits quantum mechanical properties to perform cryptographic tasks. Quantum key distribution offers an information-theoretically secure solution to the key exchange problem, whereas public-key encryption and signature schemes such as RSA can be broken by quantum adversaries. Quantum cryptography allows the completion of various cryptographic tasks that are proven or conjectured to be impossible using only classical communication. For example, it is impossible to copy data encoded in a quantum state, and the very act of reading data encoded in a quantum state changes the state. This is used to detect eavesdropping in quantum key distribution.
The best-known and most developed application of quantum cryptography is quantum key distribution (QKD): the process of using quantum communication to establish a shared key between two parties (Alice and Bob, for example) without a third party (Eve) learning anything about that key, even if Eve can eavesdrop on all communication between Alice and Bob. This is achieved by Alice encoding the bits of the key as quantum data and sending them to Bob; if Eve tries to learn these bits, the messages will be disturbed and Alice and Bob will notice. The key is then typically used for encrypted communication using classical techniques. For instance, the exchanged key could be used as the seed of the same random number generator by both Alice and Bob.
Fig 20: Quantum cryptography. (1) To begin creating a key, Alice sends a photon through either the 0 or 1 slot of the rectilinear or diagonal polarizing filters, making a record of the orientations. (2) For each incoming bit, Bob chooses randomly which filter he uses for detection, recording both the polarization and the bit value. (3) If Eve tries to spy on the photon train, quantum mechanics prohibits her using both filters, and if she chooses the wrong one she may create errors by modifying the photons' polarization. (4) After all the photons reach Bob, he tells Alice openly his sequence of filters, but not the bit values he measured. (5) Alice tells Bob openly in turn which filters he chose correctly. These determine the bits they will use to form their encryption key.
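The sifting logic of fig 20 can be sketched as a toy simulation (purely illustrative: bases and bits as random integers, an intercept-resend Eve; the function names are our own). Without Eve the sifted keys agree exactly; with Eve roughly a quarter of the sifted bits disagree, which is how the eavesdropper is detected.

```python
import random

def bb84(n_bits=2000, eve=False, seed=7):
    """Toy BB84 sketch: Alice encodes random bits in random bases
    (0 = rectilinear, 1 = diagonal); Bob measures in random bases;
    they keep only matching-basis bits. Returns the sifted error rate."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    bob_bases   = [rng.randint(0, 1) for _ in range(n_bits)]

    sifted_alice, sifted_bob = [], []
    for bit, a_base, b_base in zip(alice_bits, alice_bases, bob_bases):
        sent_bit, sent_base = bit, a_base
        if eve:  # intercept-resend attack in a randomly chosen basis
            e_base = rng.randint(0, 1)
            e_bit = sent_bit if e_base == sent_base else rng.randint(0, 1)
            sent_bit, sent_base = e_bit, e_base
        # Bob's outcome: deterministic if bases match, random otherwise
        b_bit = sent_bit if b_base == sent_base else rng.randint(0, 1)
        if a_base == b_base:  # public basis comparison (sifting)
            sifted_alice.append(bit)
            sifted_bob.append(b_bit)

    errors = sum(a != b for a, b in zip(sifted_alice, sifted_bob))
    return errors / len(sifted_alice)

qber_clean = bb84(eve=False)  # no disturbance without an eavesdropper
qber_eve   = bb84(eve=True)   # roughly a quarter of sifted bits corrupted
```

Comparing a random sample of sifted bits over the open channel reveals the elevated error rate, at which point Alice and Bob discard the key.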
Following the discovery of quantum key distribution and its unconditional security, researchers tried to achieve other cryptographic tasks with unconditional security. One such task was quantum commitment. A commitment scheme allows a party Alice to fix a certain value (to "commit") in such a way that Alice cannot change that value while at the same time ensuring that the recipient Bob cannot learn anything about that value until Alice decides to reveal it.
Weak Quantum Measurement and Surreal Bohmian Trajectories

Weak quantum measurement (Aharonov et al. 1988) is a process in which a quantum wave function is not irreversibly collapsed by absorbing the particle; instead a small deformation is made in the wave function whose effects become apparent later, when the particle is eventually absorbed, e.g. on a photographic plate in a strong quantum measurement. Weak quantum measurement changes the wave function slightly mid-flight between emission and absorption, and hence before the particle meets the future absorber involved in eventual detection. A small change is induced in the wave function, e.g. by slightly altering its polarization along a given axis (Kocsis et al. 2011). This cannot be used to deduce the state of a given wave-particle at the time of measurement, because the wave function is only slightly perturbed rather than collapsed or absorbed as in strong measurement, but one can build up a statistical prediction over many repeated quanta of the conditions at the point of weak measurement, once post-selection data is assembled after absorption.
Fig 21: Weak quantum measurement in a double slit apparatus generating single photons using a laser-stimulated quantum dot and split fiber optics. The overlapping wave function is elliptically polarized in the xy-plane transverse to the z-direction of travel. A calcite crystal is used to make a small shift in the phase of one component, while the other retains the information leading to absorption of the photon on a charge-coupled device. By combining the information from the two transverse components at varying lens settings, it becomes possible to make a statistical portrait of the evolving particle trajectories within the wave function. Pivotally, the weak quantum measurement is made in a way which is confirmed only in the future of the ensemble, when the absorption takes place (Kocsis et al. 2011).
Weak measurement also suggests (Merali 2010, Cho 2011) that, in some sense, the future is determining the present, but in a way we can discover conclusively only by many repeats. Focus on any single instance and you are left with an effect with no apparent cause, which one has to put down to a random experimental error. This has led some physicists to suggest that free will exists only in the freedom to choose not to make the post-selection(s) revealing the future's pull on the present. Yakir Aharonov, the co-discoverer of weak quantum measurement (Aharonov et al. 1988), sees this occurring through an advanced wave travelling backwards in time from the future absorbing states to the time of weak measurement. What God gains by 'playing dice with the universe', in Einstein's words, in the quantum fuzziness of uncertainty, is just what is needed so that the future can exert an effect on the present without ever being caught in the act of doing it in any particular instance: "The future can only affect the present if there is room to write its influence off as a mistake", neatly explaining why no subjective account of prescience can demonstrate it either.
Weak quantum measurements have been used to elucidate the trajectories of the wave function during its passage through a two-slit interference apparatus (Kocsis et al. 2011), to determine all aspects of the complex waveform of the wave function (Hosten 2011, Lundeen et al. 2011), to make ultra-sensitive measurements of small deflections (Hosten & Kwiat 2008, Dixon et al. 2008), and to demonstrate counterfactual results involving both negative and positive post-selection probabilities, which still add up to certainty, when two interference pathways overlap in a way which could result in annihilation (Lundeen & Steinberg 2009). In a more recent development, a team led by Aharonov (et al. 2014) has found that post-selection can also induce forms of entanglement in particles even if they have no previous quantum connection coupling their wave functions.
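The statistic these experiments converge on is the Aharonov-Albert-Vaidman weak value ⟨φ|A|ψ⟩/⟨φ|ψ⟩. A small numpy sketch (states and angle chosen by us purely for illustration) shows how a nearly orthogonal post-selection yields an "anomalous" weak value far outside the ±1 eigenvalue range of the spin observable:

```python
import numpy as np

# Pauli-z observable, with eigenvalues +1 and -1
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def weak_value(pre, post, op):
    """Aharonov-Albert-Vaidman weak value <post|op|pre> / <post|pre>:
    the statistic weak measurements converge to over many post-selected runs."""
    return (post.conj() @ op @ pre) / (post.conj() @ pre)

# pre-selected state: spin up along x
pre = np.array([1, 1], dtype=complex) / np.sqrt(2)

# post-selection nearly orthogonal to the pre-selection (small angle theta)
theta = 0.1
post = np.array([np.cos(np.pi / 4 + theta),
                 -np.sin(np.pi / 4 + theta)], dtype=complex)

w = weak_value(pre, post, sigma_z)  # real part is -cot(theta), about -10
```

The tiny overlap ⟨φ|ψ⟩ in the denominator amplifies the weak value to roughly −cot θ, an outcome no single strong measurement of σ_z could ever give; it emerges only as an ensemble average over the post-selected sub-population.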
The spacetime profile of weak quantum measurement displays detection trajectories comparable with David Bohm's (1952) 'pilot wave theory', in which the particle has a defined position and the wave acts simply as a guide, albeit with non-local influences. The link with Bohm's pilot wave theory was reinforced when a critical experiment demonstrated the existence of so-called "surreal Bohmian trajectories". A group of physicists known by the initials of their surnames as ESSW (1992) pointed out that a Bohmian hidden-variable theory, in which particles are guided by a non-local 'pilot' wave, could in principle lead to 'surreal' trajectories which would violate the predictions of quantum theory. However, when a second group (Mahler et al. 2016) set out to test surreal trajectories experimentally, they found that they physically exist.
Fig 21b: Left: The apparatus used to discover surreal trajectories (Mahler et al. 2016, see discussion below). Right: The experimental results show that some photons which should have gone through, say, the red slit according to their entangled twins, appear to do so near the slit, but further along the trajectory veer off to behave erratically, as if they are a superposition of either polarization, indicating some unseen non-local connection occurring between the now separated entangled photons.
The experiment (fig 21b) first prepares a pair of highly entangled photons with complementary polarization and then passes one into a double slit apparatus, in which the photon to be measured is directed to one or other slit depending on its entangled twin's polarization. The measured photons are then passed through an apparatus to perform weak quantum measurement of their trajectories as an ensemble, and their eventual positions are then detected destructively in the same manner as fig 21. However, when the polarization of the other entangled photon is used to determine which slit the first one must have gone through, the orbits near the centre of the interference pattern display clear signs of surreal trajectories.
When weak measurement is used to detect the trajectory close to the slit, it confirms that the photon has gone through the correct slit according to its assumed polarization, as subsequently measured by sampling its entangled twin. However, as the position of weak measurement moves towards the photographic plate, the predictions fall to an even superposition of the two polarizations. Since the weak quantum measurement is a physical realization of the ensemble trajectories going to this particular point on the plate, surreal trajectories are real, but the prediction made of the spin by the entangled twin has become changed. This implies in turn that changes of a non-local nature have occurred between entering the slits and hitting the plate, implying that there is substance to the Bohmian reality.
A brief synopsis of Bohm's pilot wave theory, which can be generalized, e.g. to bosons, runs along the following lines. Consider a wave function ψ_{t}(q) defined on a configuration space of m distinguishable particles in d dimensions, forming an md-dimensional space with coordinates q = {q_{11}, q_{12}, q_{13}, ... , q_{m1}, q_{m2}, q_{m3}}, assuming d = 3. Giving the particles masses m_{k} in each direction k, the wave function obeys the Schrodinger wave equation iħ ∂ψ_{t}(q)/∂t = [Σ_{k} (−ħ²/2m_{k}) ∂²/∂q_{k}² + V(q)] ψ_{t}(q). We also consider the 'world particle' x consisting of real component positions x(t) = {x_{11}(t), ... , x_{m3}(t)}, with x(0) being a random variable distributed with probability density P_{0}(x), where P_{t}(q) = |ψ_{t}(q)|². Under the wave function, writing ψ_{t}(q) = √(P_{t}(q)) exp[iS_{t}(q)/ħ], the velocity of the particle is defined by dx_{k}(t)/dt = (1/m_{k}) ∂S_{t}(x(t))/∂q_{k}, guaranteeing that the probability density for x(t) remains P_{t}(x). Equivalently this gives an equation of motion m_{k} d²x_{k}(t)/dt² = f_{k}(x(t)) + r_{k}(x(t)), where f = −∇V(q) is the classical force arising from the potential V(q) and r is a repulsive force due to the quantum potential: r_{t}(q) = −∇Q_{t}(q), where Q_{t}(q) = Σ_{k} (−ħ²/2m_{k}) [∂²√(P_{t}(q))/∂q_{k}²] / √(P_{t}(q)). We thus have essentially real particles with defined positions subject to their (random) initial conditions, whose dynamics is determined both by a classical potential and an additional quantum potential, whose effects are broadly consistent with the results we find in experiments such as weak quantum measurement.
Fig 21c: Two-slit interference amplitudes using the pilot wave theory (above) and the many interacting worlds theory (below). Both correspond closely to the distributions of standard quantum mechanics in this case.
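The guidance law above can be integrated numerically. This is a minimal sketch of ours (ħ = m = 1, two spreading Gaussian packets standing in for the slits; widths, separation and the Euler integration are illustrative choices): trajectories seeded in the upper packet fan outward with the spreading wave and, because the symmetric wave function gives zero velocity on the axis, never cross into the lower half, reproducing the characteristic non-crossing Bohmian two-slit trajectories.

```python
import numpy as np

hbar = m = 1.0
sigma0, slit = 1.0, 5.0  # initial packet width and half the slit separation

def psi(x, t):
    """Superposition of two freely spreading Gaussian packets (the two slits)."""
    st = sigma0 * (1 + 1j * hbar * t / (2 * m * sigma0**2))
    g = lambda x0: np.exp(-(x - x0) ** 2 / (4 * sigma0 * st))
    return g(-slit) + g(slit)

def velocity(x, t, h=1e-5):
    """Bohmian guidance law v = (hbar/m) * Im( (dpsi/dx) / psi )."""
    dpsi = (psi(x + h, t) - psi(x - h, t)) / (2 * h)
    return (hbar / m) * np.imag(dpsi / psi(x, t))

def trajectory(x0, t_max=4.0, dt=0.01):
    """Euler-integrate a single pilot-wave trajectory from initial position x0."""
    x, t = x0, 0.0
    while t < t_max:
        x += velocity(x, t) * dt
        t += dt
    return x

# five trajectories seeded across the upper packet: they spread but never
# cross the symmetry axis, and never cross each other
ends = [trajectory(x0) for x0 in (3.0, 4.0, 5.0, 6.0, 7.0)]
```

Seeding the initial positions from |ψ|² instead of a fixed grid reproduces the statistical interference pattern itself.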
More recently Hall, Deckert and Wiseman (2014, doi:10.1103/PhysRevX.4.041013) have extended this to encompass a many interacting worlds (MIW) approach, replacing the quantum potential with the effects of a large number of worlds with Newtonian dynamics following the classical force above, but under a very unusual type of interaction, where the force between worlds is non-negligible only when the two worlds are close in configuration space. The authors admit that such an interaction is quite unlike anything in classical physics, and it is clear that an observer in one world would have no experience of the other worlds in their everyday observations. But unlike Everett's many worlds interpretation, where all the probability universes are equal and simply represent the alternative outcomes of Schrodinger's cat, the interacting worlds are not equal but have a mutually repulsive global interaction, so by careful experiment an observer might detect a subtle non-local action on the molecules of their world.
Suppose now that instead of only one world-particle, as in the pilot wave interpretation, there were a huge number N of world-particles coexisting, with positions (world configurations) x_{1} ... x_{N}. If each of the initial world configurations is chosen at random from P_{0}(q), as described above, then by construction the ensemble of worlds remains distributed according to P_{t}(q) at all times. One can thus approximate P_{t}(q), and its derivatives, from a suitably smoothed version of the empirical density at time t. From this smoothed density, one may also obtain a corresponding approximation r_{N}(q; X_{t}) of the Bohmian force for N ≫ 1, in terms of the list of world configurations X_{t} = {x_{1}(t) ... x_{N}(t)} at time t. Note, in fact, that since only local properties of P_{t}(q) are required for r_{t}(q), the approximation r_{N}(q; X_{t}) requires only worlds from the set X_{t} which are in the neighborhood of q in configuration space. That is, the approximate force is local in configuration space.
The MIW theory replaces the Bohmian force acting on each world-particle x_{n}(t) by the approximation r_{N}(x_{n}; X_{t}). Thus the evolution of the world configuration x_{n}(t) is directly determined by the other configurations in X_{t}. This makes the wave function ψ_{t}(q), and the functions P_{t}(q) and S_{t}(q) derived from it, superfluous. Its fundamental dynamics are described by the system of N×m×3 second-order differential equations m_{k} d²x_{n,k}(t)/dt² = f_{k}(x_{n}(t)) + r_{N,k}(x_{n}(t); X_{t}). While each world evolves deterministically, which of the N worlds we are actually living in is unknown. Hence assertions about the configuration of the particles in our world are again probabilistic. For a given function g of the world configuration, only an equally weighted population mean over all the worlds compatible with observed macroscopic properties can be predicted at any time. Moreover, since the worlds are distributed according to P_{t}(q), the mean (1/N) Σ_{n} g(x_{n}(t)) approaches ∫ g(q) P_{t}(q) dq for any smooth function g, so the description approaches that of the wave function in the limit of large N. The description is complete only when the form of the force between worlds is specified. There are different possible ways of doing so, each leading to a different version.
For example, in a simplified 1D example with N ordered worlds x_{1} < ... < x_{N}, we might have the repulsive interworld potential U(X) = (ħ²/8m) Σ_{n} [1/(x_{n+1} − x_{n}) − 1/(x_{n} − x_{n−1})]². Ideally we want a conservative interaction in which the average energy per world approaches the quantum average energy in the limit. Suitable choices lead to estimates which closely follow pilot wave and quantum descriptions for several quantum phenomena.
MIW is provocative because it shows that hidden-variable theories with multiple coexisting configurations can evoke dynamics commensurate with quantum theory, but the action between configurations is a redescription of the same phenomena of global dynamics that quantum entanglement demonstrates, so in a sense it is a multiverse theory of entanglement.
However, neither MIW nor the pilot wave theory can explain all aspects of wave function collapse, because of cases like the decay of a photon into an electron-positron pair, where there are more degrees of freedom in the more complicated massive two-particle system than in the initial conditions, and both still depend on random variables in their defining conditions.
Decoherence (Zurek 1991, 2003) explains how reduction of the wave packet can lead to the classical interpretation through interaction of the system with other quanta. Suppose we consider a measurement of electron spin. If an ideal detector is placed in the spin-up path, it will then click only if the electron is spin up, so we can associate the untriggered detector state |d_{↓}⟩ with spin down. If we start with an electron in the pure state |ψ⟩ = α|↑⟩ + β|↓⟩, then the composite system can be described as |Φ_{i}⟩ = (α|↑⟩ + β|↓⟩)|d_{↓}⟩, and the detector system will evolve into a correlated state: |Φ_{c}⟩ = α|↑⟩|d_{↑}⟩ + β|↓⟩|d_{↓}⟩. This correlated state involves two branches of the detector, one in which it measures spin up and the other (passively) spin down. This is the splitting of the wave function into two branches advanced by Everett to articulate the many-worlds description of quantum mechanics. However, in the real world we know the alternatives are distinct outcomes rather than a mere superposition of states. Von Neumann was well aware of these difficulties and postulated that, in addition to the unitary evolution of the wave function, there is a non-unitary 'reduction of the state vector' or wave function, which converts the superposition into a mixture by cancelling the correlating off-diagonal terms of the pure density matrix ρ_{c} = |Φ_{c}⟩⟨Φ_{c}| to get a reduced density matrix ρ_{r} = |α|² |↑⟩⟨↑| |d_{↑}⟩⟨d_{↑}| + |β|² |↓⟩⟨↓| |d_{↓}⟩⟨d_{↓}|, which enables us to interpret the coefficients |α|² and |β|² as classical probabilities.
However, as we have seen with the EPR pair-splitting experiments, the quantum system has not made any decisions about its nature until measurement has taken place. This is reflected in the off-diagonal terms, which are essential to maintain the fully undetermined state of the quantum system, which has not yet even decided whether the electrons are spin up or spin down. One way to explain how this additional information is disposed of is to include the interaction of the system with the environment in other ways. Consider a system S, detector D and environment E. If the environment can also interact and become correlated with the apparatus, we have the following transition: (α|↑⟩|d_{↑}⟩ + β|↓⟩|d_{↓}⟩)|E⟩ → α|↑⟩|d_{↑}⟩|E_{↑}⟩ + β|↓⟩|d_{↓}⟩|E_{↓}⟩.
Fig 21d: Cancellation of off-diagonal elements in a cat paradox experiment, due to decoherence arising from interactions with other quanta, leads to a distribution along the diagonal and a classical real probability distribution (inset), representing the probability that the cat is either alive or dead, but not both.
This final state extends the correlation beyond the system-detector pair. When the states of the environment corresponding to the spin-up and spin-down states of the detector are orthogonal, we can take the trace over the uncontrolled degrees of freedom to get the same results as the reduced matrix. Essentially, whenever the observable is a constant of motion of the detector-environment Hamiltonian, the observable will be reduced from a superposition to a mixture. In practice, the interaction of the particle carrying the quantum spin states with a photon, and the large number of degrees of freedom of the open environment, can make this loss of coherence, or decoherence, irreversible. Zurek describes such decoherence as an inevitable result of interactions with other particles.
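The cancellation of off-diagonal terms can be made concrete in a few lines of numpy. This is an illustrative exponential-damping model of environmental monitoring with a made-up rate γ, not Zurek's full master equation:

```python
import numpy as np

# pure superposition (|up> + |down>)/sqrt(2): its density matrix has
# off-diagonal coherences of 0.5 alongside the diagonal populations
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
psi = (up + down) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

def decohere(rho, gamma, t):
    """Environmental monitoring damps the off-diagonal coherences by
    exp(-gamma * t) while leaving the diagonal populations untouched."""
    damp = np.exp(-gamma * t)
    out = rho.copy()
    out[0, 1] *= damp
    out[1, 0] *= damp
    return out

rho_mixed = decohere(rho_pure, gamma=1.0, t=10.0)
# rho_mixed is now effectively diagonal: a classical 50/50 mixture,
# and its purity Tr(rho^2) has fallen from 1 toward 1/2
```

The surviving diagonal entries are exactly the |α|² and |β|² that get reinterpreted as classical probabilities once the coherences are gone.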
Quantum Discord (Ollivier & Zurek 2002) is an extension of entanglement to more general forms of coherence, in which partial correlations induced through interaction with mixed-state particles can still be used to induce quantum correlated effects (Gu et al. 2012). Quantum discord is a promising candidate for a complete description of all quantum correlations, including coherent interactions that generate negligible entanglement. Coherent interactions can harness discord to complete a task that is otherwise impossible, and experimental implementation of such a task demonstrates that this advantage can be directly observed, even in the absence of entanglement. Quantum discord does not require isolation from decoherence, and can even derive additional quantum information from interaction with mixed states, which would annihilate entangled states.
Quantum discord is thus a viable model for processes ongoing at biological temperatures, which would disrupt full entanglement, such as the photosynthetic receptors known to use a spatial form of quantum computing to find the most efficient conduction path to the chemical reaction centres (Brooks 2014). Biology is full of phenomena at the quantum level which are essential to biological function. Enzymes invoke quantum tunneling to enable transitions through their substrate's activation energy. Protein folding is a manifestation of quantum computation intractable by classical computing. When a photosynthetic active centre absorbs a photon, the wave function of the excitation is able to perform a quantum computation which enables the excitation to travel down the most efficient route to reach the chemical reaction site (McAlpine 2010, Hildner et al. 2013). Frog rod cells are sensitive to single photons (King 2008) and recent research suggests the conscious brain can detect as few as three individual photons (Castelvecchi 2015). Quantum discord may also be integral to the coherent excitations of active brain states (King 2014).
Fig 22: Quantum discord. Alice encodes information within one arm of a twoarm quantum state ρAB. Bob attempts to retrieve the encoded data. We compute Bob's optimal performance when he is restricted to performing a single local measurement on each arm (that is, Bob can make a local measurement first on A, then B, or vice versa). We compare this to the case where Bob can, in addition, coherently interact the processes, which allows him to effectively measure in an arbitrary joint basis of A and B. We show that coherent twobody interactions are advantageous if and only if ρAB contains discord and that the amount of discord Alice consumes during encoding bounds exactly this advantage. Curve (a) represents the amount of information Bob can theoretically gain should he be capable of coherent interactions. For our proposed implementation, this maximum is reduced to the level of curve (b), where, experimentally, Bob's knowledge about the encoded signal is represented by the blue data points. Curve (c) models these observations, taking experimental imperfections into account. Despite these imperfections, Bob is still able to gain more information than the incoherent limit given by curve (d). The blue shaded region highlights this quantum advantage, which is even more apparent if we compare Bob's performance to the reduced incoherent limit when experimental imperfections are accounted for (curve (e)). We can also compare these rates to a practical decoding scheme for Bob when limited to a single measurement on each optimal mode (curve f) and its imperfect experimental realization (curve g).
The original motivations for discord were to understand the correlation between a quantum system and classical apparatus and the division between quantum and classical correlation. It shows us the quantum interior of what is happening during decoherence (Zurek 1991). A similar quantity called deficit was employed to study the thermodynamic work extraction and Maxwell's demon. Discord is equal to the amount of classical correlation that can be unlocked in quantumclassical states. Discord between two parties is related to the resource for quantum state merging with another party. Coherent quantum interactions (twobody operations) between separable systems that result in negligible entanglement could still lead to exponential speedups in computation, or the extraction of otherwise inaccessible information.
Fig 22b: Recoherence experimental apparatus, with, on the right, evidence for the increase in amplitude of off-diagonal elements as the apparatus is moved into the recoherence configuration.
Recoherence is the reversal of decoherence, by providing back the information which was lost in decoherence. All forms of entanglement involve decoherence, because the system has become coupled to another wave-particle. Once two quantum subsystems have become entangled, it is no longer possible to ascribe an independent state to either. Instead, the subsystems are completely described only as part of a greater, composite system. As a consequence, each entangled subsystem experiences a loss of coherence, or decoherence, following entanglement. Decoherence leads to the leaking of information from each subsystem to the composite entangled system. In figure 22b the researchers demonstrate a process of decoherence reversal, whereby they recover information lost from the entanglement of the optical orbital angular momentum and radial profile degrees of freedom possessed by a photon pair. They note that these results carry great potential significance, since quantum memories and quantum communication schemes depend on an experimenter's ability to retain the coherent properties of a particular quantum system (Bouchard et al. 2015). They show that quantum information in the orbital angular momentum (OAM) degree of freedom of an entangled photon pair can be lost and retrieved through propagation, by manipulating the degree of entanglement between their OAM and radial mode Hilbert spaces. This effect is different from entanglement migration, in which information is transferred between wave function phase and amplitude rather than being lost to ancillary Hilbert spaces, and likewise differs from quantum erasure, in which information is lost due to projective measurement.
Quantum Procrastination and Delayed Choice Entanglement

An ingenious version of the delayed choice experiment has also been applied to the idea of morphing the wave aspect into the particle aspect through a superposition of the two. A simple version of the delayed choice experiment involves an interferometer that contains two beam splitters. The first splits the incoming beam of light, and the second recombines the beams, producing an interference pattern. Such a device demonstrates wave-particle duality in the following way. If light is sent into the interferometer a single photon at a time, the result is still an interference pattern, even though a single photon cannot be split: it is passing through both routes as a wave. If you remove the device that recombines the two beams, interference is no longer possible, and the photon emerges from one or other route as a particle, which can be detected as before, even when this decision is made after the photon entered the splitter.
Fig 23: Quantum procrastination. Morphing the probability statistic of a wave into that of a particle by altering the detection angle of the external photon in delayed choice entanglement. Inset: black-and-white image and artist's impression of the transition.
Now two groups of researchers have taken this a step further by replacing the second beam splitter with a quantum version that is simultaneously operational and non-operational, because it is entangled with a second photon outside the interferometer. Hence whether it is operational or not can be determined only by measuring the state of the second photon. The researchers found that this allowed them to delay determining the photon's wave or particle quality until after it has passed through all the experimental equipment, including the second beam splitter tasked with determining that very thing, and, by varying the detection angle of the second entangled photon according to Bell's theorem, to morph the result between the wave and particle aspects of the transmitted photon (doi: 10.1126/science.1226719, doi:10.1126/science.1226755). The ability to delay the measurement which determines the degree of wave-like or particle-like behavior to any desired degree has deservedly been termed 'quantum procrastination'.
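The plain interferometer logic behind these experiments reduces to two 2×2 matrices. In this sketch we use a standard symmetric beam-splitter convention; the phase parameter and port labels are our own:

```python
import numpy as np

# symmetric 50/50 beam splitter: reflection picks up a factor of i
BS = np.array([[1, 1j], [1j, 1]], dtype=complex) / np.sqrt(2)

def detector_probs(phi, second_bs=True):
    """Single photon enters port 0; a relative phase phi is applied in one arm.
    With the recombining beam splitter in place the output interferes;
    without it, the photon exits either arm with probability 1/2."""
    state = BS @ np.array([1, 0], dtype=complex)    # first beam splitter
    state = np.diag([np.exp(1j * phi), 1]) @ state  # phase shift in one arm
    if second_bs:
        state = BS @ state                          # recombine: wave behaviour
    return np.abs(state) ** 2                       # detector probabilities

p_wave = detector_probs(0.0, second_bs=True)       # all photons exit one port
p_particle = detector_probs(0.0, second_bs=False)  # 50/50 which-path outcome
```

With the second splitter in place the output oscillates with phi (full interference); removing it freezes the probabilities at one half each, regardless of phase, which is the particle-like which-path behaviour the delayed choice decides between.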
Quantum Chaos and Entanglement Coupling

The wave-particle complementarity of quantum systems alters the behavior of these systems when the dynamics is chaotic. Nuclear dynamics, for example, is chaotic, being highly energetic and spatially confined; yet unlike electron orbits, whose energy levels converge at high energy, the nuclear eigenfunctions representing closed orbits retain consistent energy gaps between levels.
Fig 24: Quantum chaos. (1) A confined wave function in a quantum dot shows statistics displaying finite separation of energy levels, similar to the chaotic nuclear eigenfunctions and to the quantum stadium. (2) The quantum stadium shows 'scarring' of the wave function along periodic repelling orbits, which are unstable in the classical case but here have stability due to the spatially extended wave packets overlapping (King 2009). (3) The classical analogue is fully chaotic, with dense sets of repelling periodic orbits and space-filling trajectories. (4) Top to bottom: classical and quantum kicked top phase spaces and linear entropies, with ordered (left) and chaotic (right) dynamics. The lack of a dip in linear entropies in the chaotic regime indicates entanglement with nuclear spin, rather than quantum suppression of chaos as occurs in closed quantum systems (Chaudhury et al. 2009, Steck 2009).
Likewise the quantum stadium displays 'scarring' of the wave function around dominant repelling periodic orbits, which remains stable for the wave packet because of its spatial extension. However, unlike closed quantum systems, when we investigate open quantum systems, or those which can be energetically coupled to other transitions, we find that quantum chaos can lead to new forms of entanglement between the coupled states, showing that quantum chaos can paradoxically lead to further 'spooky' interactive wave effects. Werner Heisenberg cryptically commented "When I meet God, I'm going to ask him two questions, 'Why relativity?' and 'Why turbulence?' I really believe he will have an answer to the first", implying that the second, i.e. chaos, is the real enigma.
Another manifestation of quantum reality associated with disorder, which Frank Wilczek (arXiv:1308.5949) proposed the concept of in 2012, is the time crystal. The laws of physics are symmetrical in that they apply equally to all points in space and time, yet many systems violate these symmetries, resulting in symmetry-breaking. In a magnet, atomic spins line up in their lowest energy state rather than pointing in all directions. The symmetry-breaking of the unified weak and electromagnetic forces via the Higgs particle behaves similarly. In a mineral crystal, atoms occupy set positions in space, and the crystal does not look the same if it is shifted slightly. In the same way, a time crystal would repeat in time without expending any energy, rather like a perpetual motion machine. However, other researchers (doi:10.1103/PhysRevLett.111.070402) quickly proved there was no way to create time crystals as rotating minimum-energy quantum systems. But the proof left a loophole: it did not rule out time crystals in systems that have not yet settled into a steady state and are out of equilibrium. Three ingredients are essential: a force repeatedly disturbing the particles, a way to make them interact with each other, and an element of random disorder. The combination of these ensures that the particles are limited in how much energy they can absorb, allowing them to maintain a steady, ordered state.
Fig 25b: (a) Laser pumping at the resonant frequency repeatedly reverses the spins of a system of atoms, but requires two precise energy inputs to cycle the states. If the lasers are tuned off the resonant frequency (b), the spins will not move by 180° and will not cycle back to the initial state. However, if suitable degrees of disorder and internal interactions occur, the system may enter a state where the spins flip endlessly at a new period, even when the laser frequencies are off resonance. In the inset (d), the red light shows a diamond time crystal flipping at a different frequency from the stimulating laser (green).
In the first of two experiments (doi:10.1038/nature21413), this meant repeatedly firing alternating lasers at a chain of ten ytterbium ions: the first laser flips their spins and the second makes the spins interact with each other in random ways. That combination caused the atomic spins to oscillate, but at twice the period at which they were being flipped. More than that, the researchers found that even if they started to flip the system in an imperfect way, such as by slightly changing the frequency of the kicks, the oscillation remained the same. Spatial crystals are similarly resistant to any attempt to nudge their atoms from their set spacing. In the second experiment (doi:10.1038/nature21426), using a 3D chunk of diamond riddled with around a million defects, each harbouring a spin, the diamond's impurities provided a natural disorder. When the team used microwave pulses to flip the spins, they saw the system respond at a fraction of the frequency with which it was being disturbed. These seem to be the first examples of a host of new phases that exist in relatively unexplored out-of-equilibrium states. They could also have several practical applications, from room-temperature simulation of quantum systems to supersensitive detectors.
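The period-doubling seen in both experiments can be illustrated with a toy numerical sketch. This is an idealization only: it shows the subharmonic response of flipped spins, but not the disorder and interactions that make the real time crystals rigid against imperfect driving.

```python
import numpy as np

# Toy illustration of subharmonic (period-doubled) response.  This is
# NOT a simulation of the trapped-ion or diamond experiments: we use
# idealized perfect pi-flips, whereas the experiments' key result is
# that interactions plus disorder lock in this response even when the
# flips are imperfect.

N = 10                      # spins, as in the ytterbium-ion chain
spins = np.ones(N)          # all spins up initially
steps = 64                  # number of drive periods

magnetization = []
for t in range(steps):
    spins = -spins          # one spin flip per drive period
    magnetization.append(spins.mean())

# Fourier analysis: the collective response peaks at frequency 0.5 in
# units of the drive frequency -- oscillation at twice the drive period,
# the hallmark of a discrete time crystal.
spectrum = np.abs(np.fft.rfft(magnetization))
freqs = np.fft.rfftfreq(steps, d=1.0)   # d = one drive period per step
peak = freqs[np.argmax(spectrum)]
print(peak)   # 0.5
```

The magnetization repeats every two drive periods, so its spectrum peaks at half the drive frequency, exactly the signature both teams looked for.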
Quantum Matchmaking: Transactional Supercausality and Reality
For reasons which immediately become apparent, the collapse in the pair-splitting experiment has to not only be immediate, but also to reconcile information looking backwards in time. The two photons we are trying to detect are linked through the common calcium atom. Their absorptions are thus actually connected via a path travelling back in spacetime from one detector to the calcium atom and forward again to the other detector. Trying to connect the detectors directly, for example by hypothetical faster-than-light tachyons, leads to contradictions. Tachyons transform by the rules of special relativity, so a tachyon which appears to be travelling at an infinite speed according to one observer is travelling only at a little more than the speed of light according to another. One travelling in one direction to one observer may be travelling in the opposite direction to another. They can also cause causality violations (King R365). There is thus no consistent way of knitting together all parts of a wave or the detector responses using tachyons. Even in a single-particle wave, the wave function in regions it has already traversed (and those it would subsequently pass through in future) also has to collapse retrospectively (and prospectively) so that no inconsistencies can occur, in which a particle is created in two locations in spacetime from the same wave function, as the Wheeler delayed choice experiment makes clear.
Fig 25: In the transactional interpretation, a single photon exchanged between emitter and absorber is formed by constructive interference between a retarded offer wave (solid) and an advanced confirmation wave (dotted). (b) The transactional interpretation of pair-splitting. Confirmation waves intersect at the emission point. (c) Contingent absorbers of an emitter in a single passage of a photon. (d) Collapse of contingent emitters and absorbers in a transactional matchmaking (King R365). (e) Experiment by Shahri Afshar (see Chown R114). A grid is placed at the interference minima of the wave fronts coming from two slits just below a lens designed to focus the light from each slit into a separate detector. Measurements by detectors (top) test whether a photon (particle) passed through the left or right slit (bottom). There is no reduction in intensity when the grid is placed below the lens at the interference minima of the offer waves from the two slits. The grid does however cause a loss of detector intensity when the dashed left-hand slit is covered and the negative wave interference between the offer waves at the grid is removed, so that the non-interfered wave from the right slit now hits the grid, causing scattering. This suggests both that we can measure wave and particle aspects simultaneously, and that the transactional interpretation is valid in a way that can be explained by neither many worlds (which predicts a splitting into histories where a photon from the source goes through one slit or the other) nor the Copenhagen interpretation of complementarity (where detecting a particle forbids the photon manifesting as a wave).
In the transactional interpretation (Cramer R136), such a 'backward travelling' wave in time gives a neat explanation, not only for the above effect, but also for the probability aspect of the quantum in every quantum experiment. Instead of one photon travelling between the emitter and absorber, there are two shadow waves, which superimposed make up the complete photon. The emitter transmits an offer wave both forwards and backwards in time, declaring its capacity to emit a photon. All the potential absorbers of this photon transmit a corresponding confirmation wave. The confirmation waves travelling backwards in time send a handshaking signal back to the emitter. In the extension of the transactional approach to supercausality, a nonlinearity now reduces the set of possibilities to one offer and one confirmation wave, which superimpose constructively to form a real photon only on the spacetime path connecting the emitter to the absorber, as shown in fig 25. This always connects an emitter at an earlier time to an absorber at a later time, because a real positive-energy photon is a retarded particle which travels in the usual direction in time.
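The handshake can be sketched schematically in Cramer's notation (a standard summary of the transactional picture, added here for clarity): the retarded offer wave and the advanced confirmation wave returned by an absorber combine so that the 'echo' the emitter receives from each contingent absorber recovers the Born probability rule.

```latex
\begin{align*}
\text{offer wave (retarded):}\quad & \psi \propto e^{\,i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\\
\text{confirmation wave (advanced):}\quad & \psi^{*} \propto e^{\,-i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\\
\text{echo at the emitter:}\quad & \psi\,\psi^{*} = |\psi|^{2}
\end{align*}
```

Each contingent absorber thus competes for the transaction with weight |ψ|², so the statistics of completed transactions reproduce the standard quantum probabilities.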
A negative energy photon travelling backwards in time is precisely the antiparticle of the positive energy photon and has just the same effect. The two are identifiable in the transactional interpretation, as in quantum electrodynamics (p 304), where time-reversed electron scattering is the same as positron creation and annihilation. The transactional relationship is in effect a matchmaking process. Before collapse of the wave function we have many potential emitters interacting with many potential absorbers. After all the collapses have taken place, each emitter is paired with an absorber in a kind of marriage dance. One emitter cannot connect with two absorbers without violating the quantum rules, so there is a frustration between the possibilities which can only be fully resolved if emitters and absorbers can be linked in pairs. The numbers of contingent emitters and absorbers are not necessarily equal, but the number of matched pairs is equal to the number of real particles exchanged.
In the pair-splitting experiment you can now see that the calcium atom emits in response to the advanced confirmation waves reaching it from both the detectors simultaneously, right at the time it is emitting the photon pair. Thus the faster-than-light linkage is neatly explained by the combined retarded and advanced aspects of the photon having a net forwards-and-backwards connection which is instantaneous at the detectors. One can also explain the arrow of time if the cosmic origin is a reflecting boundary that causes all the positive-energy real particles in our universe to move in the retarded direction we all experience as the arrow of time. This in turn gives the sign for increasing disorder or entropy and the time direction for the second law of thermodynamics to manifest. The equivalence of real and virtual particles raises the possibility that all particles have an emitter and absorber and arose, like virtual particles, through mutual interaction when the universe first emerged. However even if dark energy causes an increasing expansion, or fractal inflation leads to an open universe model in which some photons may never find an absorber, the excitations of brain oscillations, because they are both emitted and absorbed by past and future brain states, could still be universally subject to transactional supercausal coupling.
The handshaking spacetime relation implied by transactions makes it possible that the apparent randomness of quantum events masks a vast interconnectivity at the quantum level, which has been termed the 'implicate order' by David Bohm (R70). This might not itself be a random process, but because it connects past and future events in a time-symmetric way, it cannot be reduced to predictive determinism: the initial conditions are insufficient to describe the transaction, which also includes quantum 'information' coming from the future. However this future is also unformed in real terms at the early point in time when emission takes place. My eye didn't even exist when the quasar emitted its photon, except as a profoundly unlikely branch of the combined probability 'waves' of all the events throughout the history of the universe between the ancient time the quasar released its photon and my eye developing and me being in the right place at the right time to see it. Transactional supercausality thus involves a huge catch-22 about space, time and prediction, uncertainty and destiny. It doesn't suggest the future is determined, but that the contingent futures do superimpose to create a spacetime paradox in collapsing the wave function.
Roger Penrose (R535, R536) has suggested that the one-graviton limit of interaction is an objective trigger for wave packet reduction, because of the bifurcation in spacetimes induced, leading to theories in which the random or pseudo-random manifestations of the particle within the wave are nonlinear consequences of gravity. Objective orchestrated reduction or OOR is then cited as a basis on which intentional consciousness follows collapse rather than participating in it, as the transactional model makes possible. The OOR model, unlike transactional anticipation, thus leaves freewill with a kind of orphan status, following, but not participating in, the collapse process itself.
By reducing the energy of a transaction to a superposition of ground and excited states, the transactional approach may combine with quantum computation to produce a spacetime-anticipating quantum entangled system which may be pivotal in how the conscious brain does its computation. The brain is not a marvelous computer in any classical sense. We can barely repeat seven digits. But it is a phenomenally sensitive anticipator of environmental and behavioral change. Subjective consciousness has its survival value in enabling us to jump out of the way when the tiger is about to strike: not so much in computing which path the tiger might be on, which is an intractable problem, since the tiger can also take our reasoning into account by avoiding the places we would expect it most likely to be, but by intuitive conscious anticipation. What is critical here is that in the usual quantum description, which considers only the emitter, we have only the probability function, because the initial conditions are insufficient to determine the outcome. There is thus no useful way quantum uncertainty can be linked to conscious freewill. Only by completing the sexual paradox of time by including the advanced absorber waves can we see how anticipation might be achieved.
Entanglement, Space-Time and Gravity
Two forms of evidence also link quantum entanglement to cosmological processes that may involve gravity and the structure of spacetime. The holographic principle asserts that in a variety of unified theories, an nD theory can be holographically represented by the physics of a corresponding (n-1)D theory on a surface enclosing the region.
Fig 26: (a) An illustration of the holographic principle in which physics on the 3D interior of a region, involving gravitational forces represented as strings, is determined by a 2D holographic representation on the boundary in terms of the physics of particle interactions. This correspondence has been successfully used in condensed matter physics to represent the transition to superconductivity, as the dual of a cooling black hole's "halo" (Merali 2011). (b) Holographic principle explained. Einstein's field equations can be represented on anti-de Sitter space, a space similar to hyperbolic geometry, where there is an infinite distance from any point to the boundary. This 'bulk' space can also be thought of as a tensor network as in (c). In 1998 Juan Maldacena discovered a 1-1 correspondence between the gravitational tensor geometry in this space and a conformal quantum field theory, like standard particle field theories, on the boundary. A particle interaction in the volume would be represented as a more complex field interaction on the boundary, just as a hologram can generate a complex 3D image from wavefront information on a 2D photographic plate (Cowen 2015). The holographic principle can be used to generate dualities between higher-dimensional string theories and more tractable theories (fig 39) that avoid the infinities that can arise when we try to do the analogue of Feynman diagrams to do perturbation theory calculations in string theory. (c) Entanglement plays a pivotal role because when the entanglement between two regions on the boundary is reduced to zero, the bulk space pinches off and separates into two regions. (d) In an application to cosmology, entanglement on the horizon of black holes may occur if and only if a wormhole in spacetime connects their interiors. Einstein and Rosen addressed both wormholes and the pair-splitting EPR experiment.
Juan Maldacena sent colleague Leonard Susskind the cryptic message ER=EPR outlining the root idea that entanglement and wormholes were different views of the same phenomenon (Maldacena and Susskind 2013, Ananthaswamy 2015). (e) Time may itself be an emergent property of quantum entanglement (Moreva et al. 2013). An external observer (1) sees a fixed correlated state, while an internal observer using one particle of a correlated pair as a clock (2) sees the quantum state evolving through two time measurements using polarization-rotating quartz plates and two beam splitters PBS1 and PBS2.
A collaboration between physicists and mathematicians has made a significant step toward unifying general relativity and quantum mechanics by explaining how spacetime emerges from quantum entanglement in a more fundamental theory. The holographic principle states that gravity in, say, a three-dimensional volume can be described by quantum mechanics on a two-dimensional surface surrounding the volume. The process applies generally to anti-de Sitter spaces modelling gravitation in n dimensions and conformal field theories in (n-1) dimensions, and plays a central role in decoding string and M-theories (see fig 26). Juan Maldacena's (1998) paper has become the most cited one in theoretical physics, with over 7000 citations. Now the researchers have found that quantum entanglement is the key to understanding how spacetime emerges. Using a quantum theory (that does not include gravity), they showed how to compute the energy density, which is a source of gravitational interactions in three dimensions, using quantum entanglement data on the surface. This allowed them to interpret universal properties of quantum entanglement as conditions on the energy density that should be satisfied by any consistent quantum theory of gravity, without actually explicitly including gravity in the theory (Lin et al. 2015).
In a second, experimental investigation, working directly with quantum entangled states (Moreva et al. 2013), time itself was found to be an emergent property of quantum entanglement. In the experiment, an external observer sees time being fixed throughout, while an observer using one particle of an entangled pair as a clock perceives time as evolving.
Fig 27: (a) The cosmic background: a redshifted primal fireball (WMAP). This radiation separated from matter as charged plasma condensed to atoms. The fluctuations are smoothed in a manner consistent with subsequent inflation. (b) Eternal inflation and big bounce models. The fractal inflation model leaves behind mature universes while inflation continues endlessly. A big crunch leads to a new big(ger) bang. (c) Darwin in Eden: "Paradise on the cosmic equator" - life is an interactive complexity catastrophe consummating in intelligent organisms, resulting ultimately from force differentiation. This summative Σ interactive state is thus cosmological and as significant as the α of its origin and Ω of the big crunch or heat death in endless expansion.
The Quantum Universe
The universe appears to have had an explosive beginning, sometimes called the big bang, in which space and time, as well as the material leading to the galaxies, were created. The evidence is pervasive, from the increasing redshift of recession of the galaxy clusters, like the deepening sound of a train horn as the train recedes, to the existence of cosmic background radiation, the phenomenally stretched and cooled remnant of the original fireball. The cosmic background shows irregularities of the early universe at the time radiation separated from matter, when the first atoms formed from the flux of charged particles. From a very regular, symmetrical 'isotropic' beginning for such an explosion, these fluctuations, which may be of a quantum nature, have become phenomenally expanded and smoothed to the scale of galaxies, consistent with a theory called inflation. Our view of the distant parts of the universe, which we see long ago because of the time light has taken to reach us, likewise confirms a different, more energetic galactic early life. We can look out to the limits of the observable universe and, because of the long delay which light takes to cross such a vast region, witness quasars and early energetic galaxies, which are quite different from mature galaxies such as our own Milky Way.
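The stretching and cooling of the fireball can be quantified with the standard scaling of radiation temperature with redshift, T = T0(1 + z). A minimal sketch, using standard approximate values (recombination at z ≈ 1100) that are assumptions here rather than figures from the article:

```python
# The cooling of the primal fireball follows T = T0 * (1 + z):
# radiation released at recombination, when the charged plasma
# condensed to atoms, is observed today stretched by a factor (1+z)
# down to microwave wavelengths.

T_cmb_now = 2.725          # K, present-day CMB temperature
z_recombination = 1100     # approximate redshift of last scattering

T_then = T_cmb_now * (1 + z_recombination)
print(f"{T_then:.0f} K")   # ~3000 K, roughly the temperature at which
                           # hydrogen atoms can first survive
```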
The ultimate fate of the universe is less certain, because its rate of expansion brings it very close to the limiting condition between the gravitational attraction of the mass-energy it contains ultimately reversing the expansion, causing an eventual collapse, and continued expansion forever. The evidence is now in favour of a perpetual and possibly accelerating expansion, and astronomers are seeking an explanation for this apparent lack of mass in dark matter and a dark energy, called 'quintessence' in some of its more varying forms, promoting accelerating expansion. The missing mass is clearly evident in nearby galaxies, which spin so rapidly they would fly apart if the only matter present was the luminous matter of stars, black holes and gaseous nebulae. WMAP and Planck data now suggest the universe's rate of expansion has increased part way through its lifetime and that its large-scale dynamics are governed mostly by dark energy (68.3%), with successively smaller contributions from dark matter (26.8%) and ordinary galactic matter and radiation (4.9%). At the time of the cosmic microwave background radiation (CMB), dark matter comprised 63% of the matter, photons 15%, atoms 12% and neutrinos 10%, but because photons have zero rest mass, and the CMB is full of low energy photons, the particle ratio is about 10^{9} photons for each proton or neutron.
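That these fractions imply acceleration can be checked with the standard deceleration parameter for a flat universe, q0 = Ω_m/2 − Ω_Λ. A quick sketch using the Planck fractions quoted above (the small radiation term is neglected):

```python
# Deceleration parameter q0 = Omega_m / 2 - Omega_Lambda for a flat
# LCDM universe, using the Planck fractions quoted in the text.
omega_lambda = 0.683    # dark energy
omega_dm     = 0.268    # dark matter
omega_b      = 0.049    # ordinary matter and radiation

omega_m = omega_dm + omega_b
q0 = omega_m / 2 - omega_lambda
print(q0)   # about -0.52: negative, so the expansion is accelerating
```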
Fig 28: Cosmic history including inflation and dark energy.
Dark energy: We know dark energy exists because, given both the dark and normal matter of the galaxies, there is about 70% too little mass-energy to make the universe flat, and if matter were all there was, the universe would have negative curvature. In 1998 two separate teams noted that distant supernovae were much dimmer than they should be. The simplest and most logical explanation is that the expansion of the universe is now accelerating.
Dark energy is poorly understood at a fundamental level. The main required properties are that it functions as a type of antigravity, it dilutes much more slowly than matter as the universe expands, and it clusters much more weakly than matter, or perhaps not at all. The cosmological constant Λ (see equation 5) is the simplest possible form of dark energy, since it is constant in both space and time, and this leads to the current standard ΛCDM model of cosmology, involving the cosmological constant and cold dark matter. It is frequently referred to as the standard model of Big Bang cosmology, because it is the simplest model that provides a reasonably good account of (1) the cosmic microwave background, (2) the large-scale structure of the galaxies, (3) the abundances of hydrogen (including deuterium), helium, and lithium and (4) the accelerating expansion of the universe.
Quintessence is a model of dark energy in the form of a scalar field forming a fifth force of nature that changes over time, unlike the cosmological constant, which always stays fixed. It could be either attractive or repulsive depending on the ratio of its kinetic and potential energy. One theory attaches the turning on of dark energy part way through the expansion to new types of string-theory-related axion (ArXiv:1409.0549). Another invokes an additional scalar field that operates in a seesaw mechanism with the grand unification energy of the Higgs particle (doi:10.1103/PhysRevLett.111.061802), gaining a very small energy in inverse relation to the Higgs energy; the seesaw mechanism is also used to model small neutrino masses. Yet another ascribes it to 'dark magnetism': primordial photons with wavelength greater than the universe (arxiv.org/abs/1112.1106). A fourth idea is that the graviton has mass (doi:10.1038/nature.2013.13707).
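The dependence on the kinetic-to-potential ratio is usually expressed through the equation of state of the scalar field φ (a standard textbook relation, added here for clarity):

```latex
w \;=\; \frac{p}{\rho} \;=\; \frac{\tfrac{1}{2}\dot{\varphi}^{2} - V(\varphi)}{\tfrac{1}{2}\dot{\varphi}^{2} + V(\varphi)}
```

When the potential energy dominates, w → −1 and quintessence mimics the repulsive cosmological constant; when the kinetic energy dominates, w → +1 and it acts attractively. Accelerated expansion requires w < −1/3.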
Dark matter is likewise poorly understood. There are four basic candidates: axions; machos (non-luminous small stars, black holes etc.); wimps (weakly interacting massive particles, which might emerge from extensions of the standard model); and complex dark matter experiencing strong self-interactions while interacting with normal matter only through gravity.
Modified Newtonian Dynamics (MOND) attempts to avoid the need for dark matter by modifying gravity to account for the observed high velocities of stars around the galaxy. It amends Newton's second law so that, at the extremely small accelerations characteristic of galaxies, far below anything typically encountered in the Solar System or on Earth, gravity is proportional to the square of the acceleration instead of the first power, so that the effective attraction varies inversely with galactic radius (as opposed to the inverse square). However, MOND and its generalisations such as TeVeS do not adequately account for observed properties of galaxy clusters, and no satisfactory cosmological model has been constructed.
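In the deep-MOND regime the modified law gives a radius-independent orbital speed, v⁴ = G M a0, which is why rotation curves come out flat. A rough numerical sketch (the galaxy mass and Milgrom's constant a0 below are standard illustrative values, not figures from the article):

```python
# Deep-MOND regime: the effective acceleration is g = sqrt(g_N * a0).
# Setting v^2/r = g with Newtonian g_N = G*M/r^2, the radius r cancels,
# giving v = (G*M*a0)**0.25 -- a flat rotation curve with no dark matter.

G  = 6.674e-11        # m^3 kg^-1 s^-2
a0 = 1.2e-10          # m s^-2, Milgrom's acceleration constant
M  = 1e11 * 1.989e30  # kg, ~10^11 solar masses of luminous matter

v = (G * M * a0) ** 0.25
print(f"{v/1000:.0f} km/s")   # ~200 km/s, comparable to observed
                              # galactic rotation speeds
```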
Fig 28(b) EG and its experimental test: (a) Two forms of long range entanglement connecting bulk excitations that carry the positive dark energy either with the states on the horizon or with each other. (b) In anti-de Sitter space (left) the entanglement entropy obeys a strict area law and all information is stored on the boundary. In de Sitter space (right) the information delocalizes into the bulk volume and creates a memory effect in the dark energy medium by removing the entropy from an inclusion region. (c) The ESD profile predicted by EG for isolated central galaxies, both in the case of the point mass approximation (dark red, solid) and the extended galaxy model (dark blue, solid), compared with observed values. The difference between the predictions of the two models is comparable to the median 1σ uncertainty on our lensing measurements (grey band).
Emergent Gravity (EG) as a Comprehensive Solution: EG is a radical new theory of gravitation proposed by Erik Verlinde in 2011 (arXiv:1001.0785), in which he developed from scratch a fundamental theory of how Newtonian gravitation arises naturally in a theory in which space is emergent through a holographic scenario similar to the one discussed above in the context of black holes. Gravity is explained as an entropic force caused by changes in the information associated with the positions of material bodies, and a relativistic generalization of the presented arguments leads directly to Einstein's equations. The way in which gravity arises from entropy can most easily be visualized in the context of polymer elasticity: a linear polymer, which thermodynamically wriggles at random into a disordered arrangement, when pulled out straight exerts an elastic force tending to take it back into a disordered configuration. Space is then an emergent property of the holographic boundary and gravitation a consequence of entropy following an area law at the boundary surface, as in black hole entropy (Bekenstein J 1973 Black holes and entropy Phys. Rev. D 7, 2333).
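The area law cited here is the Bekenstein-Hawking entropy S = k c³ A / (4 G ħ): entropy proportional to horizon area, not volume. A quick numerical sketch for a solar-mass black hole (standard physical constants; the worked example is illustrative and not from the article):

```python
import math

# Bekenstein-Hawking area law: S = k * c^3 * A / (4 * G * hbar),
# evaluated for a black hole of one solar mass.
k    = 1.380649e-23   # J/K, Boltzmann constant
c    = 2.998e8        # m/s
G    = 6.674e-11      # m^3 kg^-1 s^-2
hbar = 1.0546e-34     # J s
M    = 1.989e30       # kg, one solar mass

r_s = 2 * G * M / c**2          # Schwarzschild radius, ~3 km
A   = 4 * math.pi * r_s**2      # horizon area
S   = k * c**3 * A / (4 * G * hbar)
print(f"{S:.1e} J/K")           # ~1.4e54 J/K, vastly exceeding the
                                # entropy of the star that formed it
```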
In November 2016 Verlinde (arXiv:1611.02269) extended the theory to make predictions that can explain both dark energy and dark matter as manifestations of entanglement under the holographic scenario. The entanglement is long range and connects bulk excitations that carry the positive dark energy either with the states on the horizon or with each other. Both situations lead to a thermal volume law contribution to the entanglement entropy that overtakes the area law at the cosmological horizon. Due to the competition between area and volume law entanglement the microscopic states do not thermalise at sub-Hubble scales, but exhibit memory effects in the form of an entropy displacement caused by (baryonic) matter. The emergent laws of gravity thus contain an additional 'dark' gravitational force describing the 'elastic' response due to the entropy displacement, which in turn explains the observed phenomena in galaxies and clusters currently attributed to dark matter.
A month later in December 2016 (arXiv:1612.03034), a group of astronomers made a test of the theory using weak gravitational lensing measurements. As noted, in EG the standard gravitational laws are modified on galactic and larger scales due to the displacement of dark energy by baryonic matter. EG thus gives an estimate of the excess gravity (as an apparent dark matter density) in terms of the baryonic mass distribution and the Hubble parameter. The group measured the apparent average surface mass density profiles of 33,613 isolated central galaxies, compared them to those predicted by EG based on the galaxies' baryonic masses, and found that the prediction from EG, despite requiring no free parameters, is in good agreement with the observed galaxy-galaxy lensing profiles. This suggests that a radical revisioning of the relationship between gravity and cosmology could be under way which will transform current attempts at unifying gravitation and quantum cosmology.
Fig 29: Lower left: Dark matter illuminated: the bullet cluster, two colliding galaxy clusters 3.4 billion light-years away, whose galaxies' total mass is far less than the mass of the cluster's two clouds of hot x-ray emitting gas (red). The blue hues show the distribution of dark matter in the cluster, with far more mass than the gas. Otherwise invisible, the dark matter was mapped by observations of gravitational lensing of background galaxies. Unlike the gas, the dark matter seems to have passed right through, indicating little interaction with itself or other matter, although observations of galaxy cluster Abell 3827 suggest a possible dark force interaction consistent with complex dark matter. Top and right: False colour view of excess gamma-ray emissions from the centre of our galaxy suggesting a dark matter particle with mass ranging from around 10 GeV at possible LHC energies upwards. The 1-3 GeV signal is a good fit to a 36-51 GeV dark matter particle annihilating to b quark pairs. The angular distribution of the excess is approximately spherically symmetric and centered around the dynamical center of the Milky Way (within 0.05^{o} of Sagittarius A*, the central black hole), showing no sign of elongation along the Galactic Plane, which would be expected with a pulsar distribution. The signal is observed to extend to at least 10^{o} from the Galactic Center, disfavoring the possibility that this emission originates from millisecond pulsars. The shape of the gamma-ray spectrum from millisecond pulsars appears to be significantly softer than that of the gamma-ray excess observed from the Inner Galaxy (ArXiv:1402.6703). Earthbound dark matter detectors have also caught events consistent with this mass range (doi:10.1038/521017a). The Planck survey sees no evidence for self-interacting dark matter collisions in the CMB, inconsistent with dark matter theories in which self-interaction would be frequent at the cosmic origin.
The Alpha Magnetic Spectrometer on board the International Space Station has also detected more positrons than expected, which could be the result of dark matter being annihilated, but might also be caused by nearby pulsars. Right: Signal of gravitational waves from LIGO, believed to be from two colliding black holes in a binary system, the "chirp" coming from their increasing orbital frequency as they merge.
Confirmation of the existence of gravitational waves came in 2016 with the detection of the 'chirp' at two widely spaced detectors (fig 29) in the groundbreaking LIGO experiment, believed to be due to the last throes of two colliding black holes in a death spiral. An alternative explanation to a coalescing black hole is a gravastar. A gravastar is what a black hole becomes if we take spacetime to be quantized, based on the Planck length. Matter does not collapse inside, because quantization makes this impossible, in a manner consistent with dark energy preventing collapse. Instead of an event horizon, we have light in orbit. The internal pressure might also become manifest in a big bang origin of a daughter universe (Cardoso V et al. 2016 Is the Gravitational-Wave Ringdown a Probe of the Event Horizon? Phys Rev Lett 116, 171101). Subsequently, investigations of echoes of the chirp following the main pulse have been found to be consistent with a quantum-mechanical boundary, such as a firewall, a quantum-mechanically high energy interface at the event horizon, or a gravastar. Some versions of string theory also suggest that black holes are 'fuzzballs': tangled threads of energy with a fuzzy surface in place of a sharply-defined event horizon (Merali Z 2016 doi:10.1038/nature.2016.21135, Cardoso V et al. 2016 Gravitational-wave signatures of exotic compact objects and of quantum corrections at the horizon scale Phys. Rev. D 94, 084031). Any of these quantum theories would contradict the universality of general relativity, but the lack of such structures would likewise contradict quantum theory, so this is a potential acid test of their relationship.
There are further searches under way for lighter dark matter candidates such as the dark photon, using very intense beams of lower energy particles. Complex dark matter, or the dark sector, was first suggested in 1986 (Holdom B 1986 Phys. Lett. B 166, 196-198), but remained largely unexplored until a group of theorists resurrected the theory (Arkani-Hamed N et al. 2009 Phys. Rev. D 79, 015014), in light of results from a 2006 satellite mission called PAMELA (Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics), which had observed a puzzling excess of positrons in space. Theorists suggested that they might be spawned by dark-matter particles annihilating each other, but the weakly interacting massive particles (WIMPs) most often suggested would have also decayed into protons and antiprotons, which weren't seen by PAMELA. Another motivation came from a result reported in 2004 that found that the magnetic moment created by the spin and charge of the muon did not match the predictions of the standard model. This anomaly, called the muon g-2, could also be rectified by a dark-sector force.
Fig 29b: A variety of searches are under way for lighter dark matter candidates. Log-log plots of relative interaction strength (vertical axis) against candidate mass in GeV, from 0.01 to 1 (horizontal axis).
In 2016 a team fired protons at thin targets of lithium-7, which created unstable beryllium-8 nuclei that then decayed and spat out pairs of electrons and positrons (ArXiv:1504.01527). According to the standard model, physicists should see the number of observed pairs drop as the angle separating the trajectory of the electron and positron increases. But the team reported that at about 140^{o} the number of such emissions jumps, creating a 'bump' when the number of pairs is plotted against the angle, before dropping off again at higher angles. This suggests that a minute fraction of the unstable beryllium-8 nuclei shed their excess energy in the form of a new particle with a mass of about 17 MeV, which then decays into an electron-positron pair. They were searching for a dark photon candidate, but subsequent papers suggest a "protophobic X boson" (ArXiv:1604.07411). Such a particle would carry an extremely short-range force that acts over distances only several times the width of an atomic nucleus. And where a dark photon (like a conventional photon) would couple to electrons and protons, the new boson would couple to electrons and neutrons. Experimental resolution of this anomaly should be forthcoming within a year.
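The quoted range of the force follows from the reduced Compton wavelength of the carrier boson, λ = ħc/(mc²). A quick check with the reported 17 MeV mass (ħc ≈ 197.3 MeV·fm is a standard constant, not a figure from the article):

```python
# Range of a force carried by a massive boson is of the order of its
# reduced Compton wavelength: lambda = hbar*c / (m*c^2).

hbar_c = 197.327      # MeV * fm, standard value of hbar*c
m      = 17.0         # MeV/c^2, reported X-boson mass

range_fm = hbar_c / m
print(f"{range_fm:.1f} fm")   # ~12 fm, indeed only a few times the
                              # width of an atomic nucleus
```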
Cosmic Inflation: The evidence from the early universe indicates that there was not simply an explosive beginning in a big bang but an extremely rapid exponential inflation of the universe in the first 10^{-35} sec into essentially the huge expanding universe we see today. In some 'eternal inflation' models the inflation is fractal, leaving behind mature 'bubble' universes while inflation continues unabated (fig 27(b)). The inflationary model explains the big bang neatly in terms of the same process of symmetry-breaking which caused the four forces of nature, gravity, electromagnetism and the weak and strong nuclear forces, to become so different from one another. The large-scale cosmic structure is thus related to the quantum scale in one logical puzzle. In this symmetry-breaking the universe adopted its very complex 'twisted' form which made possible the hierarchical interaction of the particles to form protons and neutrons, then atoms, and finally molecules and complex molecular life. We can see this twisted nature in the fact that the particles in the nucleus are all positively charged protons or neutral neutrons, while the electrons orbiting the atom are all negatively charged.
Symmetry-breaking is a classic example of engendering at work. Cosmic inflation explains why the universe seems to have just about enough energy to fly apart into space and no more, and why disparate regions of the universe, which seemingly couldn't have communicated since the big bang at the speed of light, are so regular. Inflation ties together the differentiation of the fundamental forces and an exponential expansion of the universe based on a form of antigravity which exists only until the forces break their symmetry (p 311). Inflation explains galactic clusters as phenomenally inflated quantum fluctuations and suggests that our entire universe may have emerged from its own wave function in a quantum fluctuation. However, more recent modeling suggests that, due to these quantum effects, inflation can lead to a multiverse in which the universe breaks up into an infinite number of patches, which explore all conceivable properties as you go from patch to patch. Hence we shall investigate other models, such as the ekpyrotic scenario, which also predict a smoothed-out universe. On the other hand, the latest data from the Planck satellite does favour the simplest models of inflation, in which the size of temperature fluctuations is, on average, the same on all distance scales (doi:10.1038/nature.2014.16462).
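The sheer scale of exponential inflation is easy to quantify. A sketch, assuming the commonly quoted benchmark of around 60 e-folds of expansion (a number assumed here for illustration, not given in the text):

```python
import math

def expansion_factor(n_efolds):
    """Linear expansion factor after n e-folds of exponential inflation,
    a(t) = a0 * exp(H*t), where n = H*t is the number of e-folds."""
    return math.exp(n_efolds)

# ~60 e-folds (a common benchmark) stretches any length by a factor of ~10^26,
# enough to blow a subatomic patch up to macroscopic scale and to smooth out
# curvature and inhomogeneities (the flatness and horizon puzzles).
for n in (10, 30, 60):
    print(f"{n} e-folds -> linear expansion factor {expansion_factor(n):.3e}")
```

This is why regions that could never have exchanged light signals in a plain big bang can nonetheless look identical: they were once a single causally connected patch before being stretched apart.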
Fig 29c: Above: Sketch of the timeline of the holographic Universe. Time runs from left to right. The far left denotes the holographic phase, and the image is blurry because space and time are not yet well defined. At the end of this phase (denoted by the black fluctuating ellipse) the Universe enters a geometric phase, which can now be described by Einstein's equations. The cosmic microwave background was emitted about 375,000 years later. Patterns imprinted in it carry information about the very early Universe and seed the development of structures of stars and galaxies in the late-time Universe (far right). Below: Angular power spectrum of CMB anisotropies, comparing Planck 2015 data with best-fit ΛCDM (dotted blue curve) and holographic cosmology (solid red curve) models, for l ≥ 30.
Holographic Origin: However, a class of holographic models for the very early Universe (Afshordi et al. 2017), based on three-dimensional perturbative super-renormalizable quantum field theory (QFT), has been tested against cosmic microwave background observations and found to be competitive with the inflationary standard cold dark matter model with a cosmological constant (ΛCDM).
Engendering Nature: Cosmic Symmetry-Breaking, Inflation and Grand Unification
At the core of the cosmic inflation concept is cosmological symmetry-breaking, in which the fundamental forces of nature, which make up the matter and radiation we relate to in the everyday world, gained the very different properties they have today from a single superforce. There are four quite different forces. The first two are well known: electromagnetism and gravity, both long-range forces we can witness as we look out at distant galaxies. The others are two short-range nuclear forces. The colour force holds together the three quarks in any neutron or proton and indirectly binds the nucleus together through the strong force, generating the energy of stars and atom bombs. The weak radioactive force is responsible for balancing the protons and neutrons in the nucleus by interconverting the flavours of quarks and leptons (p 311).
There is a fundamental 'sexual' division among the wave-particles based on their quantum spin. All particles come in one of two types:
Fermions, of half-integral spin, can clump only in complementary pairs in a single wave function and thus, being incompressible, make up matter.
Bosons, of integral spin, can become coherent and enter the same wave function in unlimited numbers, as in a laser; they hence form radiation and, as virtual particles appearing and disappearing through quantum uncertainty, the force fields which act between the particles.
Fig 30: Scalar and vector fields illustrate the classical behaviour of potential functions, electrostatic fields and fluid flows. A scalar is a single quantity, whereas a vector field in 3 dimensions has 3-dimensional vectors. Quantum fields likewise can have differing dimensions depending on their spin. Spin-0 fields have one degree of freedom and are scalar. Spin-1 fields have three degrees of freedom and are vectors. Photons, because they are massless, have lost the longitudinal mode and have only two degrees of freedom (polarisation). The one additional degree of freedom contributed by the Higgs boson gives back to the weak bosons the degree of freedom they need to be massive and have a varying velocity. Spin-1/2 fermions have two-component wave functions which turn into their negatives upon a 360-degree revolution, leading to the Pauli exclusion principle.
Spin-1 bosons, such as the photon, behave like 3D vector fields and form the well-known fields of electricity and magnetism. Electric charge is essentially the capacity to emit and absorb virtual photons and comes in +/- attractive-repulsive forms. The photon's longitudinal field, however, is lost because it is massless, leaving only the two transverse fields defined by the polarization. Combining with a Higgs to form a heavy photon such as the Z_{0} particle adds this missing field.
Fig 31: Symmetry and local symmetries are believed to underlie the fundamental forces. Top left: The 60-degree rotational geometric symmetry of a snowflake, the charge symmetry of electromagnetism and the isotopic spin symmetry between a neutron and a proton illustrate symmetries in nature. Right: The electromagnetic force can be conceived of as an effect required to make the global symmetry of phase change local. A global phase shift does not alter the two-slit interference of electron waves (which usually have one light band in the centre), but a phase filter which locally shifts the phase through one slit has precisely the same effect as applying a magnetic field between the slits. The local phase shift causes the centre peak to become split in both cases. Gravitation can likewise be conceived as a symmetry of the Lorentz transformations of relativity, usually referred to as Poincaré invariance.
Spin-1/2 fermions behave very differently. They have fields with only two degrees of freedom and, unlike the photon, whose wave function becomes itself when turned through 360^{o}, the fermionic fields' wave functions become their negatives when rotated by 360^{o}. Hence two particles in the same wave function, such as electrons in an atomic or molecular orbital, have to have opposite spins, or they will fly apart. Fermions thus resist compression and form matter. Gravity behaves as stress tensors in spacetime, and it is universally attractive, so its quantum fields behave as spin-2 gravitons.
We thus have another fundamental sexual complementarity manifesting as the relationship between matter and radiation. The half-integral spin of electrons was first discovered in the splitting of the spectral lines of electrons in atomic orbitals into pairs whose spin angular momentum corresponded to ±1/2 rather than the 0, 1, 2, etc. of atomic s, p, d and f orbitals (p 318). Since spin states have to differ by a multiple of Planck's constant h, a particle of spin s has 2s+1 components. A glance at the known wave-particles (p 311) indicates that the bosons and fermions we know are very different from one another in their properties and patterns of arrangement. There is no obvious way to pair off the known bosons and fermions. However, there are reasons why there may be a hidden underlying symmetry which pairs each boson with a fermion of one-half less spin, called supersymmetry: in supersymmetric theories the infinities that plague quantum field theories cancel and vanish, the negative contributions of the fermions exactly balancing the positive contributions of the bosons. This would mean that there must be undiscovered particles. For example, corresponding to the spin-2 graviton would be a spin-3/2 gravitino, a spin-1 graviphoton, a spin-1/2 gravifermion and a spin-0 graviscalar.
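The 2s+1 component count, and the superpartner ladder just described, can be tabulated in a few lines (a bookkeeping sketch: the 2s+1 rule counts states of a massive particle, and the superpartners are conjectural):

```python
from fractions import Fraction as F

def spin_states(s):
    """Number of spin components of a massive particle of spin s: 2s + 1.
    (Massless particles like the photon lose the longitudinal mode.)"""
    return int(2 * s + 1)

# The graviton tower from the text: each superpartner step drops spin by 1/2.
tower = [("graviton", F(2)), ("gravitino", F(3, 2)),
         ("graviphoton", F(1)), ("gravifermion", F(1, 2)),
         ("graviscalar", F(0))]
for name, s in tower:
    print(f"spin-{s} {name}: {spin_states(s)} components")
```

The alternation of integral and half-integral spins down the tower is exactly the boson-fermion pairing supersymmetry requires.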
Fig 32: (a) The standard model of the four fundamental forces is based on the combined SU(3) x SU(2) x U(1) symmetries of the RGB colour force, the +/- electroweak force (charge and flavour) and weak hypercharge. The wave-particles are divided into two disparate groups: bosons and fermions. The fermions, which make matter, are divided between quarks, which experience all the forces including colour, and leptons, which experience only the electroweak force and gravity. The bosons, which mediate the forces, have integer spin and freely superimpose, as in lasers, and hence also make radiation. Half-integer spin fermions only superimpose in pairs of opposite spin and hence resist compression into one space, thus making solid matter. Each quark comes in three colours (RGB) and pairs of flavours (e.g. up and down with charges +2/3 and -1/3), with antiquarks having anticolours (CMY) and antiflavours with opposite charges. Quarks associate (a) in pairs to form mesons (e.g. π^{+} = ud̄ in any of its three colour-anticolour combinations, π^{0} = uū or dd̄, π^{-} = dū, with higher-mass mesons involving the heavier quarks), (b) in triplets to form baryons (e.g. p^{+} = uud, n = udd), (c) in transient tetraquarks (e.g. udsb) and (d) transient pentaquarks (uudcc̄). The fermions also come in three series of increasing mass. The gluons have a colour-anticolour charge. (b) The forces converge at high energies. Electromagnetism is first united with the weak force, ostensibly through the spin-0 Higgs boson, then with the colour force gluons and finally with gravity. (c) Force differentiation tree, in which the four forces differentiate from a single superforce, with gravity displaying a more fundamental divergence. (d) The scalar Higgs field has lowest energy in the polarized state, resulting in electroweak symmetry-breaking. The SU(2) x U(1) symmetry corresponds to three W bosons and one B boson, all massless. Under symmetry-breaking the last W and the B coalesce into Z_{0} and γ, leaving W^{±}.
(e) The stable atomic nuclei, with their increasing preponderance of neutrons, are equilibrated by the weak force. This force is chiral, engaging left-handed interactions, for example in neutron decay, as shown. Weak interactions may explain the chirality of RNA and proteins (King R372, R374).
The four fundamental forces appear to converge at very great energies and to have been in a state of symmetry at the cosmic origin as a common superforce. A key process mediating the differentiation of the fundamental forces is cosmic symmetry-breaking. The short-range weak force behaves in many ways as if it is the same as electromagnetism, except that the charged W^{+}, W^{-} and neutral Z_{0} carrier particles corresponding to the electromagnetic photon are very massive. One can of course consider this division of a common superforce into distinct complementary forces as a kind of sexual division, just as the division into male and female is a primary division. In this respect gravity stands apart from the other three forces, which share a common medium of spin-1 bosons; it broke symmetry first.
Fig 33: Colour force: Top left: Mesons (quark-antiquark) and baryons (three quarks) mediate their colour by exchanging gluons of appropriate colour-anticolour combinations. Top centre: The electromagnetic field reduces effective charge by forming virtual electron-antielectron pairs. Top right: The colour force also does this by forming quark-antiquark pairs, but in addition the gluons have a colour charge (unlike the uncharged photon) which increases the effective charge towards infinity at great distances, while remaining relaxed at short distances (asymptotic freedom), allowing the quarks to move freely within a confined space. This phenomenon, also known as camouflage, is illustrated in the lower series of diagrams, where electromagnetism has only shielding while colour has both shielding and camouflage. The effect of quark and gluon confinement is that individual particles cannot be isolated. When they are driven apart in a very energetic collision, a shower of particles results which eventually neutralizes the colour charge. In another form of asymptotic freedom, the coupling of quantum electrodynamics, and in particular electric charge (the gauge coupling constant), diminishes at very high energies in the presence of gravity, with the same trends expected for the weak and colour forces (doi:10.1038/nature09506).
Every proton and neutron is itself believed to consist of three sub-particles called quarks as follows: n = udd, p^{+} = uud. Neutron decay is thus actually the transformation of a down quark into an up (see figs 32, 33). The three quarks are bound together by a force called the colour force, because each quark comes in one of three colours, just as electric charges come in two types, positive and negative. Each neutron has one up and two down quarks and each proton two up and one down. To balance the charges, each up must have charge +2/3 and each down -1/3. However, regardless of their up or down flavour, there is always one quark of each colour, so that the proton and neutron are colourless.
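This charge bookkeeping can be verified with exact fraction arithmetic:

```python
from fractions import Fraction as F

# Up and down quark electric charges, in units of the proton charge.
CHARGE = {"u": F(2, 3), "d": F(-1, 3)}

def baryon_charge(quarks):
    """Total electric charge of a three-quark baryon, e.g. 'uud'."""
    return sum(CHARGE[q] for q in quarks)

# proton = uud carries charge +1, neutron = udd is neutral, and neutron
# decay (one d turning into a u) raises the charge by exactly one unit,
# balanced by the emitted electron's charge of -1.
print("p+ =", baryon_charge("uud"), " n =", baryon_charge("udd"))
print("d -> u charge change:", CHARGE["u"] - CHARGE["d"])
```

Using `Fraction` keeps the thirds exact, so the cancellation to whole-number charges is verified rather than approximated.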
lepton | mass | symbol | charge | quark | mass | symbol | charge
electron neutrino | < 16 eV | ν_e | 0 | up | 2.3 MeV | u | +2/3
electron | 0.5 MeV | e | -1 | down | 4.8 MeV | d | -1/3
muon neutrino | < 65 eV | ν_μ | 0 | charm | 1275 MeV | c | +2/3
muon | 106.6 MeV | μ | -1 | strange | 95 MeV | s | -1/3
tau neutrino | < 65 eV | ν_τ | 0 | truth (top) | 173 GeV | t | +2/3
tau | 1784 MeV | τ | -1 | beauty (bottom) | 4180 MeV | b | -1/3
(Each quark symbol stands for its three colour variants.)
A second, quite different force, the weak nuclear force, is responsible for radioactive decay. If a nucleus has too many neutrons, one neutron can decay into a proton, an electron and an antineutrino (fig 32e). This reaction and its reverse act to keep the balance of protons and neutrons, roughly 50:50, so as to keep each nuclear particle in the lowest possible energy states under the strong force, but the balance becomes biased toward neutrons in heavier elements (fig 32e) because of instability caused by the accumulated repulsive positive charges of the protons. Significantly, the reaction does not preserve mirror symmetry, as it gives rise only to left-handed electrons, the antineutrino involved in beta decay having right-handed helicity.
Fig 34: CP Violation and Neutrino Oscillation: (a) Decay of the K_{0} meson is a parallel to photon polarization. The decay of the K_{1} component (see below) is similar to vertical polarization removing the horizontal component from circularly polarized light. However, there is a small amplitude for the K_{2} to resonate back into the K_{1} form, just as dextrose rotates the polarization of light, allowing it to subsequently decay again, similarly to detecting horizontal polarization in the rotated light. Lower left: Feynman diagram for quark flavour mixing. Just as classical chirality requires three dimensions, CP-violation of the K_{0} requires at least three families of fermions. Investigations of the B meson containing a b (beauty) quark indicate flavour mixing, suggesting a fourth family is possible. There can be no more than four, or the extra neutrino types would cause an unrealistic expansion rate of the universe. (b) T-reversal violation experiment on the B meson. When one meson decays at time t1, the identity of the other is "tagged" but not measured specifically. In the top panel, the tagged meson is a B̄^{0} (B-bar). This surviving meson decays later at t2, encapsulating a time-ordered event, which in this case corresponds to B̄^{0} → B_{-}. To study time reversal, the BaBar collaboration compared the rates of decay in one set of events to the rates in the time-reversed pair. In the present case, these would be the B_{-} → B̄^{0} events, shown in the bottom panel (Zeller 2012). (c) Interactions in the decay of the Λ^{0}_{b} baryon.
The weak force is known to be chiral, but the asymmetry of nature runs even deeper. In 1964 the principle of CP (charge-parity) conservation was overthrown by the neutral K_{0} meson. CP violation is accommodated in the standard model (SM) of particle physics by the Cabibbo-Kobayashi-Maskawa (CKM) mechanism, which describes the transitions between up- and down-type quarks, in which quark decays proceed by the emission of a virtual W boson and where the phases of the couplings change sign between quarks and antiquarks. The neutral K_{0} usually decays into 3 π-mesons, but once in 500 times is found, after a strange delay, to decay into only two. The neutral K_{0} meson and its antiparticle both decay into a pair of mesons. The rapid decay of the K_{1} component into π-mesons subsequently leaves the remaining K_{2} component, which does not follow the same decay. However, there is then a small amplitude for conversion of some of the K_{2} back to K_{1}, resulting in a K_{L} which is not matter-antimatter symmetric, since it contains differing components of K_{0} and its antiparticle. Thus the reaction is preferred over its mirror image. Since the K_{0} has quark constituents (d, anti-s) and its antiparticle (anti-d, s), this implies that the reaction should be directed in time. Similar considerations are used to explain the preponderance of matter over antimatter. It is suggested that the one part in 10^{8} of matter to radiation could have come from a similar process resulting in a slight differential in the stability of matter and antimatter with respect to time. Potential confirmation of baryonic CP violation, which would be pivotal for matter-antimatter asymmetry, has also been found in LHC studies of Λ^{0}_{b} baryons decaying to pπ^{-}π^{+}π^{-} and pπ^{-}K^{+}K^{-} final states, with the former having a 3.3 sigma significance, around a 1 in 1000 chance of arising from fluctuation alone (Nature Physics 2017 doi:10.1038/nphys4021).
An even more glaring symmetry violation has been discovered in the B meson, which indicates a direct violation of time reversal, as shown in fig 34(b). T-violation can be inferred from CP violation by applying the CPT theorem, which states that all local Lorentz-invariant quantum field theories are invariant under the simultaneous operation of charge conjugation, parity reversal and time reversal, but in the B-meson experiment (Lees et al. 2012) T-violation was detected directly. The experiment takes advantage of entangled B^{0} and B̄^{0} mesons in the Υ(4S) resonance produced in positron-electron collisions at SLAC. This allows measurement of an asymmetry that can only come about through a T inversion, and not by a CP transformation. Each of the entangled B^{0} and B̄^{0} mesons resulting from the Υ(4S) can decay into either a CP eigenstate, or a state that identifies the flavour of the meson. To study T inversion, the experimenters selected events where one meson decayed into a flavour state and the other decayed into a CP eigenstate. The time between these two decays was measured, and the rate of decay of the second with respect to the first was determined. After detecting and identifying the mesons, the experimenters determined the proper time difference between the decays of the two B states by determining the energy of each meson and measuring the separation of the two meson decay vertices along the e^{+}e^{-} beam axis. When time-reversed pairs were compared, the BaBar collaboration found discrepancies in the decay rates. The asymmetry, which could only come from a T transformation and not a CP violation, was significant, being fourteen standard deviations away from time invariance (Zeller 2012).
This has since led to a theory of the arrow of time based on the existence of T-violating quantum interactions. Despite the Lorentz transformations of special relativity connecting space and time, in conventional quantum theory states are presumed to undergo continuous translation over time. There is thus a fundamental difference between space and time, in that quanta can be confined in space but have to persist in time to avoid non-conservation of mass-energy. In the theory, separate wave equations are established which would allow quanta to be located in time in the way they are in space. In the words of the researcher Joan Vaccaro (2016): "If T symmetry is obeyed, then the formalism treats time and space symmetrically such that states of matter are localized both in space and in time. In this case, equations of motion and conservation laws are undefined or inapplicable. However, if T symmetry is violated, then the same sum over paths formalism yields states that are localized in space and distributed without bound over time, creating an asymmetry between time and space. Moreover, the states satisfy an equation of motion (the Schrödinger equation) and conservation laws apply. The Schrödinger equation of conventional quantum mechanics, where time is reduced to a classical parameter, emerges as a result of coarse graining over time".
Fig 34b: Oscillation of an electron neutrino into the other two known types, muon and tauon, over distance. (d): An experiment using neutrino oscillations to verify quantum 'entanglement', in terms of the violation of the constraints imposed by local causality, over the longest distance ever (735 km), using the Leggett-Garg inequality, a variant of Bell's inequalities that works over distinct times, or energies in the case of this neutrino experiment. Quantum and classical theoretical predictions are in blue and red, and the experimental result is in black (ArXiv:1602.00041). Instead of a single evolving system at different times, we can use an ensemble with 'stationarity', the correlations depending only on the time differences. One can then perform measurements on distinct members of an identically prepared ensemble, each of which begins in some known initial state. The combination of the preparation and stationarity conditions acts as a substitute for non-invasive (weak) quantum measurements, because wave function collapse and classical disturbance in a given system do not influence previous or subsequent measurements on distinct members of the ensemble. Energy can be used as a proxy for time because the energy of a neutrino determines its unitary time evolution.
Although the standard model gives zero rest mass for the neutrino, neutrinos are now known to have a small mass, making them prime candidates for exploring beyond the standard model. This is consistent with the idea that the neutrino types are able to interconvert by a resonance, or oscillation, similar to that of the K_{0} meson. This explains the small observed flux of neutrinos from the sun, which is only about 1/3 of what it should be for the nuclear energy required to keep it at its current luminosity.
In the early universe there was a sea of protons and neutrons constantly interacting with electrons, neutrinos of every type and their antiparticles through weak interactions. Because neutrons are slightly more massive (939.5 MeV) than protons (938.2 MeV), there are fewer of them. As the expansion proceeds, the weak interactions cease, leaving about a 1:5 n:p ratio at 1 second. The neutrons then begin to decay with a half-life of about 15 minutes (see fig 10). After 3 minutes, deuterium (n + p^{+} + e^{-}) becomes stable and is rapidly converted to helium. At this point neutron decay has reduced the n:p ratio to 1:8. The surviving neutrons bind an equal number of protons into helium, leaving roughly a 1:4 ratio of helium to hydrogen. More than four families of neutrinos would cause a faster expansion rate, and the faster reaction would produce more helium than observed. Experiments on supernovae limit the electron neutrino mass to less than 16 eV. All neutrinos must have a mass less than 65 eV, or the universe would be closed and collapse, and moreover the expansion rate would be slower than observed. Recent evidence from the Planck survey indicates that the summed masses of the three neutrinos must be less than 0.21 eV. There are even more neutrinos than photons, several billion for every proton, electron and neutron.
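The numbers in this paragraph follow from two small calculations: the n:p ratio at weak freeze-out is set by the Boltzmann factor for the 1.3 MeV neutron-proton mass difference, and the helium fraction follows from pairing every surviving neutron with a proton. A rough sketch (the ~0.8 MeV freeze-out temperature is a standard benchmark assumed here, not stated in the text):

```python
import math

DELTA_M_MEV = 939.5 - 938.2  # neutron-proton mass difference, ~1.3 MeV

def np_ratio(kT_mev):
    """Equilibrium n:p number ratio from the Boltzmann factor exp(-dm/kT)."""
    return math.exp(-DELTA_M_MEV / kT_mev)

def helium_mass_fraction(n_over_p):
    """Mass fraction of helium-4 if every neutron ends up bound in He-4:
    Y = 2(n/p) / (1 + n/p)."""
    return 2 * n_over_p / (1 + n_over_p)

r_freeze = np_ratio(0.8)  # weak freeze-out at kT ~ 0.8 MeV gives ~1:5
print(f"n:p at freeze-out ~ 1:{1 / r_freeze:.0f}")
# After neutron decay lowers n:p to roughly 1:7-1:8, about a quarter of
# the mass of the universe ends up as helium:
for r in (1 / 7, 1 / 8):
    print(f"n:p = 1:{round(1 / r)} -> helium mass fraction {helium_mass_fraction(r):.2f}")
```

The sensitivity of `np_ratio` to the expansion (freeze-out) temperature is exactly why extra neutrino families, which speed the expansion, would overproduce helium.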
It is also unknown whether neutrinos are their own antiparticles and are thus Majorana fermions, which would behave differently from the others. The concept goes back to Majorana's suggestion in 1937 that neutral spin-1/2 particles can be described by a real wave equation, in contrast to the complex wave functions of the known Dirac fermions such as the electron. Majorana fermions would therefore be identical to their antiparticles (because the wave functions of particle and antiparticle are related by complex conjugation).
The three neutrino flavour states that interact with the charged leptons in weak interactions are each a different superposition of three neutrino states of definite mass. Neutrinos are created in weak processes in one of the three flavours. As a neutrino propagates through space, the quantum mechanical phases of the three mass states advance at slightly different rates due to the slight differences in the neutrino masses. This results in a changing mixture of mass states as the neutrino travels, but a different mixture of mass states corresponds to a different mixture of flavour states. So a neutrino born as, say, an electron neutrino will be some mixture of electron, mu and tau neutrino after traveling some distance. This shape-shifting ability is measured by three mixing angles, θ_{12}, θ_{23} and θ_{13}, which determine the periodicity of each shape shift.
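In the simplified two-flavour case, this mixing gives the standard oscillation probability P = sin²(2θ) sin²(1.27 Δm²[eV²] L[km] / E[GeV]). A sketch (the Δm² and near-maximal θ used below are typical atmospheric-sector benchmark values assumed for illustration, not parameters quoted in the text):

```python
import math

def appearance_probability(L_km, E_gev, dm2_ev2, theta_rad):
    """Two-flavour probability that a neutrino born in one flavour is
    detected as the other after baseline L:
    P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)."""
    phase = 1.27 * dm2_ev2 * L_km / E_gev
    return math.sin(2 * theta_rad) ** 2 * math.sin(phase) ** 2

# Illustrative benchmark: near-maximal mixing and dm^2 ~ 2.5e-3 eV^2,
# over the 735 km baseline mentioned for the Leggett-Garg experiment.
for E in (1.0, 2.0, 3.0):
    p = appearance_probability(735, E, 2.5e-3, math.pi / 4)
    print(f"E = {E} GeV -> P(flavour change) = {p:.3f}")
```

Because the phase depends on L/E, the detector energy spectrum maps out the oscillation, which is how energy serves as a proxy for time in the Leggett-Garg test described above.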
In the Standard Model of particle physics, fermions have mass only because of interactions with the Higgs field. These interactions involve both left- and right-handed versions of the fermion. However, only left-handed neutrinos (with left helicities, i.e. spins antiparallel to momenta) have been observed so far, with antineutrinos being right-handed. Neutrinos may have another source of mass through the Majorana mass term. The most popular conjectured solution currently is the seesaw mechanism, where right-handed neutrinos with very large Majorana masses are added. If the right-handed neutrinos are very heavy, they induce a very small mass for the left-handed neutrinos, proportional to the inverse of the heavy mass. The actual mass of the right-handed neutrinos is unknown and could have any value between 10^{15} GeV and less than 1 eV.
These neutrinos are called "sterile", as they would interact only with other neutrinos or via gravity, and thus could be candidates for dark matter or the "dark radiation" connecting other dark matter particles, possibly of right-handed chirality as noted. The number of sterile neutrino types is undetermined, in contrast to the number of active neutrino types, which has to equal that of charged leptons and quark generations to ensure the anomaly freedom of the electroweak interaction.
Sterile neutrinos have gained interest as a possible explanation for the excess of matter in the universe: in the first microseconds after the big bang, the young, hot universe would have contained extremely heavy, unstable sterile neutrinos that soon decayed, some into leptons and the remainder into their antimatter counterparts, but at unequal rates. They would become heavy, and the other neutrinos very light, by the seesaw mechanism. The slight excess would then become the matter left over after mutual annihilation of the majority. This would require sterile neutrinos to be Majorana fermions.
During the early universe, when particle concentrations and temperatures were high, neutrino oscillations can behave differently. Depending on neutrino mixing-angle parameters and masses, a broad spectrum of behavior may arise, including vacuum-like neutrino oscillations, smooth evolution, or self-maintained coherence. The physics of this system is non-trivial and involves neutrino oscillations in a dense neutrino gas.
Evidence of the degree of cosmic clumping and of the combined neutrino masses (above) from Planck may make their existence less likely (doi:10.1038/nature.2014.16462). On the other hand, best estimates of the neutrino number from Planck and WMAP are around 3.3 (Olive et al., Chin. Phys. C, 38, 090001), leaving some room for a neutrino contribution to dark radiation (ArXiv:1109.2767). Although some experiments have found no evidence for sterile neutrinos in particle decays, several experiments have noted anomalies in the shape-shifting of neutrinos, in which more muons are produced than is consistent with direct oscillation from an electron neutrino alone, suggesting a sterile intermediate upping the conversion rate. There is also a deficit of neutrinos from nuclear reactors, suggesting a route to an undetectable form.
The discovery in 2016 that the current local rate of expansion is 9% faster than previously thought tests the consistency of dark energy models and could be explained by the existence of a sterile neutrino (Sokol 2016). Alternatively, an axion-like light particle that interacted with quarks in the very early universe could also explain why the cosmic abundance of lithium is lower than predicted (Goudelis et al. 2016).
Fig 35: Higgs manifestations. (a) Generation of the Higgs by four pathways; q and g are the quarks and gluons making up the colliding protons. Bremsstrahlung is "braking radiation" caused by particles glancing off one another. (b) Two pathways of Higgs decay into photons, or Z_{0} bosons. L are leptons. (c) Anomalies in the decays of the Higgs hint at effects beyond the standard model. (d) LHC Atlas Higgs decay. The observed mass of the Higgs at 125 GeV has been claimed to be consistent with technicolour, an older theory extending the standard model with a fifth force, implying the Higgs could be a composite of 'techniquarks' (doi:10.1103/PhysRevD.90.035012). Both the Higgs and the B-meson (ArXiv:1506.08614) have shown some anomalies in the way they decay, with biases in how often they decay into taus, muons or electrons, which may indicate a second Higgs or other forces or particles appearing, but so far no evidence of supersymmetry. Technicolor is a force field invoked to explain the hidden mechanism of electroweak symmetry-breaking. The mechanism for the breaking of electroweak gauge symmetry remains unknown. The breaking must be spontaneous, meaning that the underlying theory manifests the symmetry exactly, but the solutions (ground and excited states) do not, and the W and Z bosons become massive and also acquire an extra polarization state. Despite the agreement of the electroweak theory with experiment at energies accessible so far, the process causing the symmetry breaking remains hidden. The simplest mechanism of electroweak symmetry breaking introduces a single complex field and predicts the existence of the Higgs boson. Typically, the Higgs boson is "unnatural" in the sense that quantum mechanical fluctuations produce corrections to its mass that lift it to such high values that it cannot play the role for which it was introduced.
Unless the Standard Model breaks down at energies less than a few TeV, the Higgs mass can be kept small only by a delicate fine-tuning of parameters. Technicolour avoids this problem by hypothesizing a new interaction coupled to new massless fermions. This interaction is asymptotically free at very high energies and becomes strong and confining as the energy decreases to the electroweak scale of 246 GeV. These strong forces spontaneously break the massless fermions' chiral symmetries, some of which are weakly gauged as part of the Standard Model. This is the dynamical version of the Higgs mechanism. The electroweak gauge symmetry is thus broken, producing masses for the W and Z bosons. The new strong interaction leads to a host of new composite, short-lived particles at energies accessible at the Large Hadron Collider (LHC). This framework is natural because there are no elementary Higgs bosons and, hence, no fine-tuning of parameters.
Symmetry-breaking and Cosmic Inflation: A key explanation for symmetry-breaking is that originally all the particles had zero rest mass like the photon, but some of the boson force carriers like the W changed to mediate a short-range force by becoming massive and gaining an extra degree of freedom (the freedom to change speed) by picking up an additional spin-0 particle called a Higgs boson. The elusive Higgs, which has now been discovered in the LHC, may also explain why the universe flew apart. The universe begins at a temperature a little below the unification temperature, slightly supercooled, possibly even as a result of a quantum fluctuation. In the early symmetric universe empty space is forced into a higher-energy arrangement than its temperature can support, called the false vacuum.
The result is a tremendous energy of the Higgs field, or rather the 'inflaton' field, as the energy is ascribed to another elusive force. This behaves as a super antigravity, exponentially decreasing the universe's curvature, inflating the universe in 10^{-35} of a second to something already comparable to its present size. This inflationary phase becomes broken once the Higgs field collapses, breaking symmetry to a lower-energy polarized state, rather as a ferromagnet does, to create the asymmetric force arrangement we experience, forming the true vacuum. In this process the Higgs particles, which are zero spin and have one wave function component, unite with some of the particles, such as W^{+/-} and Z_{0}, to give them nonzero rest mass by adding their extra component, allowing the additional longitudinal component of the wave function associated with a varying velocity.
Fig 35a: The particle that wasn't. A recently detected resonance, initially observed by ATLAS and CMS at the LHC but which later faded, could have been explained through the existence of a new particle called F(750) or digamma, lying outside the standard model (SM) yet inconsistent with existing theories such as supersymmetry. It would have had zero electric charge and a mass of ~750 GeV/c^{2}, six times the mass of the Higgs, decaying into two photons. The spin could either be 0, favoured by theoretical arguments, or 2 to allow for a decay to two photons. Spin 1 is excluded by the Landau-Yang theorem, which states that a massive particle with spin 1 cannot decay into two photons. To be produced at the LHC with significant cross-section, F(750) should couple to some of the constituents of the proton: quarks and gluons. In all models the coupling of a neutral particle to photons would require the existence of other particles beyond the SM, carrying electric charge. If F(750) couples to gluons, new particles with colour are also required. As a consequence, in all scenarios considered, a whole new sector of particles must be added to the SM, some of which are likely to be within the energy reach of the LHC. If F(750) is elementary, many scenarios predict the existence of new fermions with SM charges that, unlike the known fermions, do not acquire mass through the Higgs mechanism, while in the scenarios in which it is a composite like the proton, bound states are more likely to be formed. It is also possible that F(750) has invisible decays into particles without SM charges. Several groups have speculated that these decays could be connected to the existence of dark matter in the Universe (Redi 2016). Over 500 papers were produced giving theoretical models for the particle, which has since faded to statistical insignificance.
Because the true vacuum is at a lower energy than the false one, it grows to engulf it, releasing the latent heat of this energy difference as a shower of hot particles, the hot fireball we associate with the big bang. Normal gravity has now become the attractive force we are familiar with. The reversal of the sign of gravity means that the potential energy is now reversed, so that it adds to the large kinetic energy of the universe flying apart. Two energies which cancelled now become two which add: an insignificant universe, almost nothing, becomes one of almost incalculable proportions. The end result is a universe flying apart at almost exactly its own escape velocity, whose kinetic energy almost balances the potential energy of gravitation. Symmetry-breaking can leave behind defects if the true vacuum emerges in a series of local bubbles which join. Depending on whether the symmetries which are broken are discrete, circular, or spherical, corresponding anomalies in the form of domain walls, cosmic strings or magnetic monopoles may form.
Fig 35b: (a) A photon may undergo mixing to a pseudoscalar particle, an axion, in an external magnetic field. (b) The magnetic field is a pseudovector field because, when one axis is reflected, as shown, reversing parity, the magnetic field is not reflected but reversed, because the currents are reversed. The position of the wire and its current are vectors, but the magnetic field B is a pseudovector, as is any vector cross product p = a x b. Any scalar product between a pseudovector and an ordinary vector is a pseudoscalar. A pseudoscalar particle corresponds to a scalar field which is likewise inverted under a change of parity. (c) When photons in an initially unpolarized light beam (consisting of both parallel and perpendicular components) enter an external magnetic field, axion-photon mixing depletes only the parallel electric field components (dichroism), leading to polarization. This could be used to detect axions in distant quasars.
Axions: In addition, other weakly-interacting particles may emerge, such as the axions which some researchers associate with cold dark matter. Axions were originally envisaged to explain why CP (charge-parity) violation does not happen with the colour force as it does with the weak force. One of the terms in the Lagrangian energy equation for chromodynamics is chiral, breaking CP symmetry, yet the colour force does not break the symmetry. The most elegant solution to this is a new continuous U(1) symmetry whose spontaneous symmetry-breaking relaxes the chiral term to zero. This leads to a new spin-0 pseudoscalar particle: the axion. If axions inherit a mass they become natural cold dark matter candidates. One theory of axions relevant to cosmology predicts that they would have no electric charge, a very small mass in the range from 10^{-6} to 1 eV/c^{2}, and very low interaction cross-sections for strong and weak forces. Because of their properties, axions would interact only minimally with ordinary matter, but could change to and from photons in magnetic fields, as a result of mixing due to the pseudovector nature of the magnetic field as a cross product (fig 35b).
Fig 35c: The 95% CL upper limits on the gluino (left) and squark (right) pair production cross sections as a function of neutralino versus gluino (squark) mass ("Search for supersymmetry in events with photons and missing transverse energy in pp collisions at 13 TeV" CMS Collaboration).
Supersymmetry, illustrated in fig 36, in which each boson has a fermion partner and vice versa, has been a favourite extension of the Standard Model because it balances the vacuum contributions of the bosons and fermions. However, it has failed to demonstrate any evidence of its existence in the latest rounds of LHC experiments, running up to the end of 2016 at energies of up to 13 TeV.
Fig 35c shows the production limits for particle pair production of two supersymmetric candidates, with no experimental evidence of their existence up to the high range of energies provided by the LHC. This means that no evidence for any extension of the Standard Model in terms of fundamental particle creation is likely to occur in the current round, leaving physics with only the single Higgs as a trophy and no immediate prospect of a resolution.
Fig 36: Unproven symmetries: (a) SU(5) theory extending the standard model, sometimes referred to as hyperweak, was an attempt to make an immediate extension of the ideas of the electroweak unification to unification with the colour force, through which a quark could decay into leptons. However its prediction that the proton should also be unstable, like the neutron, and decay e.g. as in (b), has not been validated in any experiment. (c-e) Supersymmetry, a hypothetical symmetry between fermions and bosons, identifies each with a supersymmetric partner of one half spin less. The hierarchy problem deals with why the weak force, for example, is 10^{32} times stronger than gravity; in other words, how the electroweak scale (~246 GeV, the Higgs field vacuum expectation) relates to the Planck scale (10^{-35} m or 10^{19} GeV), where black holes could spontaneously form from uncertainty applied to gravity (lower chart). Supersymmetry would provide a solution because the negative vacuum contribution of the fermions would then balance the positive contribution of the bosons. This would also see the strengths of the three vector forces coming together neatly at high energies. 1 eV is the energy to move one electron through 1 volt. 1 GeV = 10^{9} eV. A proton has a mass of 0.9 GeV and an electron 0.511 MeV. The Z_{0} and Higgs have masses of 91 and ~126 GeV. Because E = mc^{2}, strictly the units are GeV/c^{2}. The LHC is running at up to 14 TeV = 1.4 x 10^{4} GeV. The simplest such theory, the minimal supersymmetric standard model (MSSM), is shown in (e). Symmetry-breaking would give the supersymmetric partners a large mass, but again, the highest energy LHC results have so far shown no evidence for supersymmetric partners appearing.
SMASH Physics: A minimal extension of the Standard Model (SM) called SMASH (Standard Model Axion Seesaw Higgs portal inflation) provides a potentially complete and consistent picture of particle physics and cosmology up to the Planck scale. According to Ballesteros et al. (2016) the model adds to the SM three right-handed SM-singlet neutrinos, a new vector-like colour triplet fermion and a complex SM singlet scalar σ whose vacuum expectation value at ~10^{11} GeV breaks lepton number and a Peccei-Quinn symmetry simultaneously. Primordial inflation is produced by a combination of σ and the SM Higgs. Baryogenesis proceeds via thermal leptogenesis. At low energies, the model reduces to the SM, augmented by seesaw-generated neutrino masses, plus the axion, which solves the strong CP problem and accounts for the dark matter in the Universe. The model can be probed decisively by the next generation of cosmic microwave background and axion dark matter experiments. It builds on Shaposhnikov's (2005) model, which added three neutrinos to the three already known in order to solve four fundamental problems in physics: dark matter, inflation, some questions about the nature of neutrinos, and the origins of matter. SMASH adds a new field to explain some of those problems a little differently. This field includes two particles: the axion, a dark horse candidate for dark matter, and the inflaton, the particle behind inflation. As a final flourish, SMASH uses the field to introduce the solution to a fifth puzzle: the strong CP problem, which helps explain why there is more matter than antimatter in the universe.
Monopoles: Although Maxwell's equations have symmetry between the electric and magnetic fields E and B and do not prohibit magnetic monopoles, their absence led to Gauss's law for magnetism: div B = 0, expressing the fact that there are no magnetic charges. However Dirac discovered that the existence of a single magnetic monopole in the universe would explain the quantization of charge. Consider a system consisting of a single stationary electric charge (e.g. an electron) and a single stationary magnetic monopole. Classically, the electromagnetic field surrounding them has a total angular momentum proportional to the product q_{e}q_{m}, and independent of the distance between them. Quantum mechanics dictates that angular momentum is quantized in units of ℏ, so the product q_{e}q_{m} must also be quantized. This means that if even a single magnetic monopole existed in the universe, and the form of Maxwell's equations is valid, all electric charges would then be quantized: q_{e}q_{m} = nℏc/2 for an integer n (in Gaussian units).
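As a numerical illustration (a minimal sketch in Gaussian units with ℏ = c = 1; the function name and the chosen monopole charge are assumptions for the example), the Dirac condition forces any electric charge coexisting with a monopole of charge g onto a discrete ladder with spacing 1/(2g):

```python
# Dirac quantization: q_e * q_m = n/2 (Gaussian units, hbar = c = 1).
# A single monopole of magnetic charge g restricts every electric
# charge in the universe to a multiple of 1/(2g).
def allowed_charges(g, n_max):
    """Electric charges permitted by the Dirac condition for |n| <= n_max."""
    return [n / (2 * g) for n in range(-n_max, n_max + 1)]

ladder = allowed_charges(g=1.0, n_max=3)
print(ladder)  # [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5]
```

Doubling the monopole charge halves the spacing of the ladder, which is why a single monopole anywhere suffices to quantize all electric charge.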
Fig 37: GUT structure of a magnetic monopole. Near the center (about 10^{-29} cm) there is a GUT symmetric vacuum. At about 10^{-16} cm, its content is the electroweak gauge fields of the standard model. At 10^{-15} cm, it is made up of photons and gluons. At the edge, to a distance of 10^{-13} cm, there are fermion-antifermion pairs. Far beyond nuclear distances it behaves as a magnetically-charged pole of the Dirac type. In effect, the sequence of events during the earliest moment of the universe has been fossilized inside the magnetic monopole.
In a U(1) gauge group with quantized charge, the group is a circle of radius 2π/e. Such a U(1) gauge group is called compact. Grand unified theories (GUTs) uniting the electroweak and strong forces lead to compact U(1) gauge groups, so they explain charge quantization in a way that seems to be logically independent from magnetic monopoles. However, the explanation is essentially the same, because in any GUT which breaks down into a U(1) gauge group at long distances, there are magnetic monopoles. In the early universe, if the symmetrical unified state of the GUT froze out in different regions, topological defects in the symmetry-breaking can form 2D domain walls, 1D cosmic strings, or monopoles, depending on the type of symmetry which is broken. Unlike Dirac monopoles, which would be point singularities of infinite self-energy, such monopoles would have unified force interactions at very short radii and would thus have a finite but very large mass. If the horizon of the domains were very large, due to inflation, these would be vanishingly infrequent but would still be integral to the cosmic description. Although such cosmic monopoles have never been detected experimentally, there is good evidence for Dirac magnetic monopoles as lattice quanta in condensed matter spin ices (doi:10.1126/science.1177582, doi:10.1126/science.1178868).
In some models, cosmic inflation has a fractal branched structure, like a snowflake, which is perpetually leaving behind mature universes like ours. Recently it has become clearer that, even with additional dark matter, possibly comprising neutrinos and other exotic particles, there may not be enough mass to stop the expansion, which may even be accelerating. Various hyperbolic forms of inflation and an additional repulsion called quintessence, involving a long-range repulsive dark energy, have been invoked to address this problem.
Rehabilitating Duality: String Theory, Quantum Gravity and Spacetime Structure
Quantum theory is formulated within spacetime, but mass-energy, through gravitation in general relativity, alters the structure of spacetime by curving it. This has made a comprehensive integration of gravity with the other forces of nature difficult to achieve and may indicate a fundamental complementarity between the theories. Something of this paradox can be understood in graphic terms if we consider the implications of quantum uncertainty over very small time intervals, small enough to allow a virtual black hole to form. In this case a quantum fluctuation could give rise to a wormhole in the very spacetime in which it is conceived, raising all manner of paradoxes of connected universes and time loops into the bargain. This leads to a fundamental conceptual paradox in which spacetime is flat or slightly curved on large scales but a seething topological foam of wormholes on very small scales. These problems lead to fundamental difficulties in describing any form of quantum field in the presence of gravity.
The unification of gravity with the other forces brings new and deeper mysteries into play. Theories which treat particles as points are plagued with the infinities the very points themselves imply, as infinite concentrations of energy. Point particles may thus on very small scales become string, loop or membrane excitations. The theories broadly called 'superstring' resolve the infinite self-energies associated with a point particle by treating the different particles as different excitations of a closed or open loop or string. However none have been found so far which correspond to our own peculiar asymmetric set of particles.
Central to such theories is supersymmetry, a pairing between bosons and fermions of adjacent spin. The idea behind this is based on ground state zero-point fluctuations, the energies that arise through uncertainty when a quantum is considered in its lowest (ground) energy state. Only a perfect balancing of the negative zero-point energies of the fermions against the corresponding positive zero-point energies of the bosons, implied by supersymmetry, would cancel the potential infinities arising from the arbitrarily short wavelengths of the electromagnetic field when quantum gravitation is included in the unification scheme. These would effectively curl spacetime to a point (Hawking R303 46, 50). It is possible however that it is the collective contribution of the two groups which balances, so that there is not an individual set of boson-fermion pairings but two symmetry-broken groups, bosons and fermions, which collectively balance one another, reflecting the standard model. Lisi's model below is like this.
Fig 38: (Above) Point particles (a), such as the charged electron, have infinite self-energies because their fields tend to infinity at the vertex and they have precise vertices of interaction. Strings (b) turn the infinities into harmonic quantum excitations at a fundamental scale such as the Planck scale, smoothing the infinities and turning the vertices into smooth manifold transitions (Wolfson R760, Sci. Am. Jan 96). They can be regarded either as open strings or loops. The different excitations (c) correspond to different particles, e.g. of higher mass-energy. (Below) Compactification of the 12 or so unseen dimensions leaves only our 4 of spacetime on large scales (Sci. Am. Jan 96). Compactification of one dimension to form a tube is a way 11D M-theory can be linked to 10D superstrings, which are, on smaller scales, string-like tubes.
Supersymmetric theories generally require over 10 dimensions to converge, all but four of which are 'compactified', curled up on subparticulate scales, leaving only our four dimensions of spacetime as global dimensions. Such 'theories of everything' or TOEs have not yet fully explained how the particular arrangements of particles and forces in our universe are chosen out of the millions of possibilities for compactification these higher dimensional theories permit when supersymmetry is broken to produce the particles and forces we experience at low energies.
The internal symmetry dimensions of existing particles come close to the additional number required, suggesting the key can be found in the known particles. If we take 1 for the Higgs, 1 for the neutrino, 2 for the electroweak, 3 for colour, and 4 for spacetime, we have 11. However in string theory the compactifications occur on a huge variety of spaces called Calabi-Yau manifolds, presenting up to 10^{500} possible configurations. Four-dimensional spacetime is optimal mathematically for complexity. In some unification theories, one of the compactified dimensions might be much larger (fig 38). Duality, in which fundamental particles in one description may become composite in another and vice versa, may also enable apparently divergent theories to be understood through a convergent dual.
Fig 39: Relation between M-theory and dualities between string theories (ex Hawking R303, Duff R170). Originally string theories were entirely bosonic and formulated in 26 dimensions for internal consistency, until the advent of consistent 10-dimensional theories. Type I has one supersymmetry in 10 dimensions. It is based on unoriented open and closed strings, while the rest are based on oriented closed strings. Type II have two supersymmetries. IIA is non-chiral (parity conserving) while IIB is chiral (parity violating). The heterotic string theories are based on a hybrid of a type I superstring and a bosonic string. There are two kinds of heterotic strings differing in their ten-dimensional gauge groups: the heterotic E8 x E8 string and the heterotic SO(32) string. There are two types of duality. S-duality says that a collection of strongly interacting particles in one theory can be viewed as a collection of weakly interacting particles in another, thus avoiding infinities. T-duality states that a string propagating around a circle of radius R is equivalent to a string in the dual propagating around a circle of radius 1/R. If a string has momentum p and winding number n around the circle in one description, it will have momentum n and winding number p in the dual description (see fig 40).
Recently a possible unification of several theories, including 10-dimensional superstring theories and 11-dimensional supergravity, has been proposed in the form of M-theory, for membrane, or according to its proponents, magic. The essential idea is that 11-dimensional membrane theory looks like 10-dimensional string theory if one of the two membrane dimensions is rolled up into a tiny tube along with one of the 11 dimensions. In this point of view several of these theories are actually complementary mathematical formulations of the same object. This brings in the 'holographic principle' (fig 26), in which a theory in a multidimensional region can be equivalent to a theory on the boundary of the region, one dimension lower (Cowen 2015, Duff R175).
Particles can come in two types, one the vibrational states of strings (vibrating particles) and the other topological, counting how many times a string wraps around the compactified dimension (winding particles). The winding particles on a tube of radius R are identical to the vibrational particles on a tube of radius 1/R. Duality is a paradoxical concept in which there is a natural relationship between theories whose interactions remain strong, so that perturbation theory fails, and dual theories whose interaction strengths are the reciprocals of the originals and hence converge nicely. The nemesis comes if we end up having to deal with a TOE whose interactions are mid-range, so that neither the original nor the dual can be unraveled.
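The radius swap can be made concrete with a toy calculation. In the simplest closed-string spectrum on a circle (a sketch: α' is set to 1 and oscillator contributions and level matching are deliberately omitted), a state with momentum number n and winding number w contributes (n/R)^{2} + (wR)^{2} to the mass squared, and exchanging n with w while sending R to 1/R leaves every level unchanged:

```python
# Toy closed-string mass spectrum on a circle of radius R (alpha' = 1,
# oscillator levels ignored): momentum modes contribute (n/R)^2 and
# winding modes contribute (w*R)^2.
def mass_squared(n, w, R):
    return (n / R) ** 2 + (w * R) ** 2

R = 3.0
for n in range(-3, 4):
    for w in range(-3, 4):
        # T-duality: n <-> w together with R -> 1/R is a symmetry
        assert abs(mass_squared(n, w, R) - mass_squared(w, n, 1 / R)) < 1e-12
print("spectrum invariant under R -> 1/R with n <-> w")
```

At small R the momentum modes become heavy and the winding modes light, and vice versa at large R, which is why a string cannot tell the two radii apart.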
Fig 40: Duality between string theories. Winding particles in one have the same energetics as vibrational particles in the other and vice versa (Duff R175). The concept of duality may solve intractable infinities by finding a dual theory which is convergent. In the dual theory, particles like magnetic monopoles, which are a composite of quarks and other particles, become fundamental and electrons and quarks become composites of these. No particle is thus truly fundamental, each locked in sexual paradox with its dual.
Supergravity gives a good example of how supersymmetry works. Like any field theory of gravity, where gravitational spacetime stress tensors convert to spin-2 particles, a supergravity contains a spin-2 field whose quantum is the graviton. Supersymmetry requires the graviton field to have a superpartner. This field has spin 3/2 and its quantum is the gravitino. The number of gravitino fields is equal to the number of supersymmetries. There are 8 extended supergravity theories, and each of them has a characteristic number of distinct supersymmetries ranging from n = 1 to 8. In each theory there is one spin-2 graviton and there are n spin-3/2 gravitinos. The number of particles with lower spins is also completely determined. If n is equal to 1 the theory is simply supergravity with one graviton and one gravitino. If n is 2, the theory includes 1 graviton, 2 gravitinos and 1 spin-1 particle (graviphoton). Perhaps the most realistic model of this kind is given when n = 8. The complement of elementary particles then consists of 1 graviton, 8 gravitinos, 28 graviphotons, 56 spin-1/2 particles (gravifermions) and 70 spin-0 particles (graviscalars). An intriguing property of the extended supergravity theories is their extreme degree of symmetry. Each particle is related to particles with adjacent values of spin by supersymmetry transformations, and these supersymmetries are of local form. Thus a graviton can be transformed into a gravitino and a gravitino into a graviphoton. Within each family of particles that have the same spin, all the particles are related by a global internal symmetry, much like the internal symmetry that relates proton and neutron.
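The multiplicities quoted for n = 8 are binomial coefficients: stepping the spin down k half-units from the graviton amounts to choosing k of the 8 supersymmetry generators, giving C(8, k) states at each level. A quick check (a sketch; the particle labels are simply the ones used above):

```python
from math import comb

# n = 8 supergravity: applying k of the 8 supersymmetry generators to the
# graviton lowers the spin by k/2 and yields comb(8, k) distinct states.
labels = ["graviton", "gravitinos", "graviphotons", "gravifermions", "graviscalars"]
counts = {label: comb(8, k) for k, label in enumerate(labels)}
print(counts)
# {'graviton': 1, 'gravitinos': 8, 'graviphotons': 28, 'gravifermions': 56, 'graviscalars': 70}
```

The row 1, 8, 28, 56, 70 is just the middle of Pascal's triangle for 8 objects, which is why the spin-0 graviscalars, at C(8, 4), are the most numerous members of the multiplet.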
Supersymmetry provides at least one candidate for dark matter. There are four neutralinos that are their own antiparticles (Majorana fermions) and are electrically neutral, the lightest of which is typically stable. Because these particles only interact with weak vector bosons, they are not directly produced at hadron colliders in copious numbers, and are favoured dark matter candidates. In supersymmetry models, all Standard Model particles have partners with the same quantum numbers except for spin, which differs by 1/2 from its partner. Since the superpartners of the Z_{0} boson (zino), the photon (photino) and the neutral higgs (higgsino) have the same quantum numbers, they can mix to form four eigenstates of the mass operator called "neutralinos". Alternatively these four states can be considered mixtures of the bino and the neutral wino (superpartners of the U(1) gauge field corresponding to weak hypercharge, and of the W bosons), and the neutral higgsino. In many models the lightest of the four neutralinos turns out to be the lightest supersymmetric particle (LSP).
An alternative dark matter candidate has emerged from extending the SU(3) symmetry of the color force with one extra force field to conserve baryon number, resulting in an SU(4) x SU(2) x U(1) extension of the standard model explaining why the proton never decays as the lightest baryon. The model requires quarks to have heavier partners, and the lightest of these has the right properties to be dark matter (ArXiv:1511.07380).
In one form of the holographic principle, discussed above and illustrated in fig 26, quantum entanglement on the boundary gives rise to gravitationlike forces on the interior, possibly explaining gravity and relativity in terms of holographic entanglement.
A possible key to the higher dimensional theories is the 8-dimensional number system called the octonions. Just as the complex numbers form a two dimensional plane, for which the second component is a multiple of i, the square root of -1, the octonions form a system of 8 components. Associated with the octonions are the exceptional symmetry groups such as G2 and E8. Internal symmetries, such as those of colour and of charge, as well as the well-known Lorentz transformations of special relativity, are already the basis for explaining the standard model.
Another key to a possible unraveling of the Gordian knot of the theory of everything comes from dualities. Electromagnetism is renormalizable because, by adjusting for the infinite self-energy of a charge, we arrive at a theory like quantum electrodynamics where each more complicated diagram with more vertices makes a contribution 137 times smaller to the interaction, and it is then possible to correctly deduce the combined effects without infinities creeping in. Essentially the idea is as follows:
Fig 41: Octonions and the Fano plane. Just as complex numbers have two components a + bi with i^{2} = -1, so the octonions have eight components 1, e_{1}, ..., e_{7} such that e_{i}^{2} = -1. Multiplication of coordinate vectors is determined by the 'Fano plane'. Any e_{i}, e_{j}, e_{k} connected by arrows multiply in the manner e_{i} x e_{j} = e_{k}. Those connected in the reverse direction inherit a minus sign. Each line also loops back to the first coordinate in a cyclic manner.
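These multiplication rules can be checked in code. Rather than transcribing one particular Fano plane orientation (the labelling and arrow conventions differ between authors), the sketch below builds the octonions by the equivalent Cayley-Dickson doubling construction, then verifies that every imaginary unit squares to -1 and that the product, unlike complex or quaternion multiplication, is not associative:

```python
def conj(x):
    """Cayley-Dickson conjugate: negate the imaginary half recursively."""
    if len(x) == 1:
        return x[:]
    n = len(x) // 2
    return conj(x[:n]) + [-t for t in x[n:]]

def mul(x, y):
    """Cayley-Dickson product: (a, b)(c, d) = (ac - d*b, da + bc*)."""
    if len(x) == 1:
        return [x[0] * y[0]]
    n = len(x) // 2
    a, b, c, d = x[:n], x[n:], y[:n], y[n:]
    add = lambda u, v: [p + q for p, q in zip(u, v)]
    sub = lambda u, v: [p - q for p, q in zip(u, v)]
    return sub(mul(a, c), mul(conj(d), b)) + add(mul(d, a), mul(b, conj(c)))

def e(i):
    """Octonion basis unit e_i, with e_0 = 1."""
    v = [0] * 8
    v[i] = 1
    return v

minus_one = [-1] + [0] * 7
assert all(mul(e(i), e(i)) == minus_one for i in range(1, 8))  # e_i^2 = -1
assert mul(e(1), e(2)) == e(3)                                 # a Fano-plane line
# Non-associativity: (e1 e2) e4 and e1 (e2 e4) differ by a sign
assert mul(mul(e(1), e(2)), e(4)) != mul(e(1), mul(e(2), e(4)))
```

The first four components reproduce the quaternions, which are associative; non-associativity only appears once the doubling reaches eight components, which is one reason the octonions sit at the end of the division-algebra ladder.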
Another dimensional issue is that the only spheres which will admit a vector field without singularities, so-called 'hairy balls', are S^{1} the circle, and S^{3} and S^{7}, the 3D and 7D spheres. Our two-sphere S^{2} always gets places where one hair stands on end, like the crown of your head. Thus the status of the unit octonions gives a dual 7D coincidence between algebra and topology, which may be essential in establishing, for example, a uniform time flow.
The three dimensional nature of space has also been linked to quantum reality. In the 'emergent' picture the three dimensions of space and one of time would arise from quantum gravity and the differentiation of the forces of nature in the cosmic origin. But a more fundamental basis has been suggested: that, given a single dimension for time, quantum theory is the only theory that can supply the degree of randomness and correlation seen in nature, and it can only do so if space is 3D (ArXiv:1206.0630, ArXiv:1212.2115). A subsequent paper shows this constraint applies if microscopic objects interact "pairwise" with each other, as they appear to in our universe (ArXiv:1307.3984), but the dimension could be higher if pairwise interactions became three-way or more. If our conventional complex number quantum theories are replaced by quaternions or octonions, the dimensionality could rise to five or nine (Phys. Rev. D 84 125016).
Stephen Hawking, who has been a consistent champion of the TOE quest, has remarked that the connections implied by M-theory dualities are so convincing that to not think they are on the right track "would be a bit like believing that God put fossils into the rocks in order to mislead Darwin about the evolution of life" (Hawking R303 57). However, he now worries (R304) that the search for a consistent theory may remain beyond reach in a single theory because of the implications of Gödel's theorem, which proves that any logical system containing finite arithmetic admits formally undecidable propositions. If the search for a TOE runs up against this nemesis, the description of the universe may become undecidable. An indication of the possible complexity of a TOE uniting gravity and quantum field theories comes from superfluid helium-3. At close to absolute zero, helium-3 remains superfluid, and as the temperature rises fractionally a number of bound quantum excitations, rather like quasi-molecules, form in the medium. Many of the known properties of unified field theories can be modeled using superfluidity on the one hand and these bound structures on the other, as equivalents of gravitational and the other quantum fields. This indicates that the theory sought may not just be a limit of gravitation and quantum fields, but a deeper theory in which both of these are merely stability states.
Fig 42: Above, a depiction of Garrett Lisi's Exceptionally Simple Theory of Everything. Below, superstring theories suffer from having many different forms of compactification during symmetry breaking. Here an attempt is made to find why our universe has an optimal configuration among the millions of possibilities. The Calabi-Yau manifold illustrated is just one compactification, showing a local 2D cross-section of the real 6D manifold known in string theory as the Calabi-Yau quintic. This satisfies the Einstein field equations and is a popular candidate for the wrapped-up 6 hidden dimensions of 10-dimensional string theory at the scale of the Planck length (1.6 x 10^{-35} m, or about 10^{-20} times the size of a proton). The 5 rings that form the outer boundaries shrink to points at infinity, so that a proper global embedding would be seen to have genus 6 (6 handles on a sphere, Euler characteristic -10). The underlying real 6D manifold (3D complex manifold) has Euler characteristic -200, is embedded in the 4D complex projective plane, and is described by the equation z_{0}^{5} + z_{1}^{5} + z_{2}^{5} + z_{3}^{5} + z_{4}^{5} = 0 in five complex variables. The displayed surface is computed by assuming that some pair of inhomogeneous complex variables, say z_{3}/z_{0} and z_{4}/z_{0}, are constant (thus defining a 2-manifold slice of the 6-manifold), renormalizing the resulting equations, and plotting the local Euclidean space solutions to the complex equation z_{1}^{5} + z_{2}^{5} = 1.
Garrett Lisi (2007) published "An Exceptionally Simple Theory of Everything", setting out a possible scheme for a theory uniting gravity with the other forces, based on root vector systems generating E8 "via a superconnection described by the curvature and action over a four dimensional base manifold". Although this theory remains speculative, it brings together an ingenious utilization of the internal symmetries of E8 with the dynamical topology of the underlying manifold, retaining an intrinsic complementarity between discrete and continuous aspects, despite its manifestly algebraic basis.
However, a later paper (Distler and Garibaldi 2009) claims that any "Theory of Everything" obtained by embedding the gauge groups of gravity and the Standard Model into a real or complex form of E8 lacks certain representation-theoretic properties required by physical reality. Lisi (2011) has commented on critiques of his work. A 2015 discussion of this can be found at (www.physicsforums.com/threads/isthereanynewstogarrettlisitheory.790057/).
While the existence of the Higgs particle has now been confirmed in the first round of the LHC runs, completing the standard model of physics, there is still no experimental support for supersymmetry.
One particularly significant prediction of this model is that the universe may be algebraically symmetry-broken so that the bosons and fermions give a balanced positive and negative contribution to the mass-energy of the universe, but collectively rather than in supersymmetric pairs, while each has different numbers and arrangements of particles, as is the case in the standard model. In the 240-dimensional root system of E8, there are 2^{2} x ^{8}C_{2} = 112 'bosonic' root vectors with integer coordinates and 128 = 2^{8}/2 'fermionic' ones with half-integer coordinates. Both types are 8D vectors with Pythagorean length 2^{1/2} and coordinates adding to an even number, hence the 128 rather than 256. Stephen Adler in a (2014) paper has invoked a process combining SU(8) unification and supergravity into a non-supersymmetric model based on such a complementation.
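The two root counts quoted above can be verified by direct enumeration; a brief sketch (an added illustration, not from the original text):

```python
from itertools import combinations, product

# 'Bosonic' roots: two nonzero integer coordinates of +/-1 (rest zero),
# giving squared length 2 and an even coordinate sum: 4 * C(8,2) = 112.
integer_roots = []
for i, j in combinations(range(8), 2):
    for si, sj in product((1, -1), repeat=2):
        v = [0] * 8
        v[i], v[j] = si, sj
        integer_roots.append(tuple(v))

# 'Fermionic' roots: all eight coordinates +/-1/2 with an even coordinate
# sum, keeping 2^8 / 2 = 128 of the 256 possible sign patterns.
half_roots = [v for v in product((0.5, -0.5), repeat=8) if sum(v) % 2 == 0]

# All 240 are 8D vectors of Pythagorean length sqrt(2).
for v in integer_roots + half_roots:
    assert abs(sum(x * x for x in v) - 2) < 1e-12

print(len(integer_roots), len(half_roots))  # 112 128
```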
Fig 43: E8 root vectors.
Brane cosmology forms an alternative explanation to supersymmetry for the hierarchy problem (fig 36): why gravity is so much weaker than the other forces. The central idea is that the visible, four-dimensional universe is restricted to a brane inside a higher-dimensional space, called the "bulk" or "hyperspace". If the additional dimensions are compact, as in compactified superstring theories, then the observed universe contains the extra dimensions, and no reference to the bulk is needed. Some versions of brane cosmology, based on the large extra dimension idea, can explain the weakness of gravity relative to the other fundamental forces of nature, thus solving the hierarchy problem. In the brane picture, the other three forces (electromagnetism and the weak and strong nuclear forces) are localized on the brane, i.e. 4D spacetime, but gravity has no such constraint and propagates throughout the full, e.g. 5D, spacetime. Much of the gravitational attractive power "leaks" into the bulk. As a consequence, the force of gravity should appear significantly stronger on small (subatomic or at least sub-millimetre) scales, where less gravitational force has "leaked". Various experiments are currently under way to test this. Extensions of the large extra dimension idea with supersymmetry in the bulk appear promising in addressing the so-called cosmological constant problem.
Loop quantum gravity (LQG) is a theory that attempts to quantize general relativity. The quantum states in the theory do not live inside the spacetime. Rather they themselves define spacetime. The solutions describe different possible spacetimes. Space becomes granular as a result of quantization. Space can be viewed as an extremely fine fabric or network "woven" of finite loops. These networks of loops are called spin networks, whose evolution over time is called a spin foam. When the spin network is tied in a braid, it forms something like a particle. This entity is stable, and it can have electric charge and handedness. Some of the different braids match known particles as shown in fig 44, where a complete twist corresponds to +1/3 or -1/3 unit of electric charge depending on the direction of the twist. Heavier particles are conceived as more complex braids in spacetime. The configuration can be stabilized from spacetime quantum fluctuations by considering each quantum of space as a bit of quantum information resulting in a kind of quantum computation. The predicted size of this structure is the Planck length, ~10^{-35} m. There is no meaning to distance at scales smaller than this. LQG predicts that not just matter, but space itself, has an atomic structure.
Fig 44: Left: Loop quantum gravity is an alternative to superstring theory. Right: Braided spacetime gives an underlying basis for unifying the fundamental particles. It is similar to the preonic Rishon model, where TTT = anti-electron; VVV = electron neutrino; TTV, TVT and VTT = three colours of up quarks; TVV, VTV and VVT = three colours of down antiquarks; with the other particles appearing from the anti-rishons (Nuclear Physics B 204 (1982) 141-167).
The most spectacular consequence of loop quantum cosmology is that the evolution of the universe can be continued beyond the Big Bang, which becomes a sort of cosmic Big Bounce, in which a previously existing universe collapsed, not to the point of singularity, but to a point before that where the quantum effects of gravity become so strongly repulsive that the universe rebounds back out, forming a new branch. Successive universes might thus be able to evolve their laws of nature. The big bounce has also been calculated to invoke a form of cosmic inflation (doi: 10.1016/j.physletb.2010.09.058). Hints of an experimental result that might confirm the existence of spacetime foam come from extreme high-energy gamma ray bursts from quasar black holes, where the highest-energy rays appear to arrive later than lower energies, consistent with being slowed by spacetime quantization (arXiv:1305.2626).
Fig 45: The big bounce in loop quantum gravity.
In general relativity, spacetime ceases to be a "container" over which physics takes place, and has no objective physical meaning of its own. Instead the gravitational interaction is represented as just one of the fields forming the world. Einstein's comment was "Beyond my wildest expectations". In quantum gravity, the problem of time remains an unsolved conceptual conflict between general relativity and quantum mechanics. Roughly speaking, the problem of time is that there is none in general relativity, because the Hamiltonian is a constraint that must vanish. In quantum mechanics, however, the Hamiltonian generates the time evolution of quantum states. We therefore arrive at the conclusion that "nothing moves" ("there is no time") in general relativity. Since "there is no time", the usual interpretation of quantum mechanics, with measurements at given moments of time, breaks down.
The ekpyrotic scenario, the term meaning 'conflagrationary', is a cosmological model of the early universe that explains the origin of the large-scale structure of the cosmos and also has a big bounce. The original ekpyrotic models relied on string theory, branes and extra dimensions, but most contemporary ekpyrotic and cyclic models use the same physical ingredients as inflationary models (quantum fields evolving in ordinary spacetime). The model has also been incorporated in the cyclic universe theory (or ekpyrotic cyclic universe theory), which proposes a complete cosmological history, both past and future. The name is well-suited to the theory, which addresses the fundamental question that remains unanswered by the big bang inflationary model: what happened before the big bang?
The explanation is that the big bang was a transition from a previous epoch of contraction to the present epoch of expansion. The key events that shaped our universe occurred before the bounce, and, in a cyclic version, the universe bounces at regular intervals. The model predicts a uniform, flat universe with patterns of hot spots and cold spots now visible in the cosmic microwave background (CMB), as confirmed by the WMAP and Planck satellite experiments. Discovery of the CMB was originally considered a landmark test of the big bang, but proponents of the ekpyrotic and cyclic theories have shown that the CMB is also consistent with a big bounce.
Fig 46: A cyclic ekpyrotic universe based on colliding branes, which are periodically attracted to one another, causing a big bang (inset left) and resulting in a cycle, two periods of which are illustrated. Mutual forces between the branes may provide a feedback driving expansion and contraction.
The search for primordial gravitational waves in the CMB (which produce patterns of polarized light known as B-modes) may eventually help scientists distinguish between the rival theories, since the ekpyrotic and cyclic models predict that no B-mode patterns should be observed.
A key advantage of ekpyrotic and cyclic models is that they do not produce a multiverse. When the effects of quantum fluctuations are properly included in the big bang inflationary model, they prevent the universe from achieving the uniformity and flatness that the cosmologists are trying to explain. Instead, inflated quantum fluctuations cause the universe to break up into patches with every conceivable combination of physical properties. Instead of making clear predictions, inflationary theory allows any outcome. The idea that the properties of our universe are an accident, arising from a theory that allows a multiverse of other possibilities, is hard to reconcile with the fact that the universe is extraordinarily simple (uniform and flat) on large scales and that elementary particles appear to be described by fundamental symmetries.
There are two types of polarization, called E-modes and B-modes, in analogy to electrostatics, in which the electric field (E-field) has a vanishing curl and the magnetic field (B-field) has a vanishing divergence. The E-modes arise naturally from scattering in a heterogeneous plasma. The B-modes are not sourced by standard scalar perturbations. Instead they can be created either by gravitational lensing of E-modes, measured by the South Pole Telescope in 2013, or from gravitational waves arising from cosmic inflation. Detecting the B-modes is extremely difficult, as the degree of foreground contamination is unknown, and the weak gravitational lensing signal mixes the relatively strong E-mode signal with the B-mode signal. In 2014, astrophysicists of the BICEP2 collaboration announced the detection of inflationary gravitational waves in the B-mode power spectrum, which, if confirmed, would have provided clear experimental evidence for the theory of inflation. However, based on the combined data of BICEP2 and Planck, the European Space Agency announced that the signal can be entirely attributed to dust in the Milky Way.
The possibilities remain open between our universe having unique laws derived from fundamental symmetries or being one of many types of universe whose laws happen to support complexity and life - a 'many-universes' perspective. Some theories (Smolin R649) even suggest the laws of nature might be capable of evolution from universe to universe, resulting in one containing observers. The anthropic principle asserts that the existence of (conscious) observers is a constraint delimiting what laws of nature are possible. Anthropic arguments (Barrow and Tipler R45) may enable a form of self-selection, in the sense that simpler universes which could not sustain life or observers would never be observed, guaranteeing that our universe has dimensionalities and symmetry-breakings giving rise to fundamental constants consistent with interactive fractal complexity (p 317). Regardless of these uncertainties in the final TOE, the general features of force unification, symmetry-breaking and inflation are likely to remain part of our understanding of the cosmic origin.
The SexuallyComplex Quantum World
We have seen that all phenomena in the quantum universe present as a succession of fundamental complementarities in a shifting vacuum groundswell of uncertainty, out of which the superabundance of quantum diversity emerges. In this process we have discovered a multiple overlapping series of divisions: (i) wave-particle complementarity fundamental to the quantum, (ii) the roles of emitters and absorbers, (iii) the advanced and retarded solutions of special relativity, (iv) the fermions comprising matter complementing the bosons mediating radiation, (v) virtual and real particles distinguishing force fields from positive energy matter and radiation, and the engendered symmetry-breakings between (vi) space and time (reflecting that between momentum and energy) and (vii) between the four fundamental forces of nature, which in turn cause the quantum architecture of atoms and molecules to be asymmetric and capable of complexity of interaction to form living systems (p 317), and finally (viii) duality, which makes it difficult or impossible to determine what is a fundamental particle and what is composite, in a sexual paradox between dual descriptions. Sexual paradox may also be manifest in the difficulty of separating the forces from the seething quantum 'ground' of vacuum uncertainty, which is generative of all types of quantum. To understand conscious anticipation, or free will, may require the inclusion of advanced waves, forming a paradoxical complement to the positive energy arrow of time.
All these complementarities possess attributes of sexual paradox and are pivotal to generating the complexity and diversity of the universe as we know it. There is no way to validly mount a single description based on only one of these complementary aspects alone. All attempts to define a theory based only on one aspect implicitly involve the other as a fundamental component, just as the propagators of the particles in quantum field theory are based on wave spreading. Classical mechanistic notions of a whole made out of clearly defined parts, as well as temporal determinism, fail. The mathematical idea of a reality made out of sets of points or point particles becomes replaced by the excitations of strings, again with wave-based harmonic energies. Just as we have an irreducible complementarity between subjective experience and the objective world, so all the features of the quantum universe present in sexually paradoxical complementarities. It is thus hardly surprising that these fundamental and irreducible complementarities may come to be expressed as fundamental themes in biological complexity, thus making sexuality a cumulative expression of a sexual paradox which lies at the foundation of the cosmos itself.
Although both the Taoist and Tantric views of cosmology are based on a complementation between female and male generative principles, many people, including a good proportion of scientists, still adhere to a mechanistic view of the universe as a Newtonian machine. In this view biological sexuality seems to be barred from having any fundamental cosmological basis, being an end product of an idiosyncratic process of chance and selection, in a biological evolution which has no apparent relation with, or capacity to influence, the vast energies and forces which shape the cosmological process. The origins of life remain mysterious and potentially accidental rather than cosmological in nature, and evolution an erratic series of accidents preserved by natural selection.
However, if we reverse this logic and begin with a sexually paradoxical cosmology, the phenomenon of biological sexuality becomes a natural cumulative expression of physical sexual paradox, operating in a new evolutionary paradigm in the biological world, rich with new feedback processes, which give it the central role in genetics and organismic reproduction that we regard as the signature and raison d'etre of reproductive sexuality.
Appendix: Complementary Views of Quantum Mechanics and Field Theory
Fig 47: Werner Heisenberg (R760).
Heisenberg was the first person to define the concept of quantum uncertainty, or indeterminacy, as the term also means in German.
Heisenberg's research concentrated on momentum and angular momentum. It is well known that both rotations in 3D space and matrices in general do not commute, because matrix multiplication multiplies the rows of the first matrix by the columns of the second. For example:

$\begin{pmatrix}1&0\\0&0\end{pmatrix}\begin{pmatrix}0&1\\0&0\end{pmatrix}=\begin{pmatrix}0&1\\0&0\end{pmatrix}$, but $\begin{pmatrix}0&1\\0&0\end{pmatrix}\begin{pmatrix}1&0\\0&0\end{pmatrix}=\begin{pmatrix}0&0\\0&0\end{pmatrix}$.

Hence $AB - BA \neq 0$. More generally, if $C = AB$, $c_{ij}=\sum_k a_{ik}b_{kj}$. In quantum mechanical notation, we have $a_{ik}=\langle i|A|k\rangle$, so $\langle i|AB|j\rangle=\sum_k\langle i|A|k\rangle\langle k|B|j\rangle$, showing that $\sum_k|k\rangle\langle k|=I$, all states leading to completeness with unit probability.
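The failure of matrix multiplication to commute is easy to confirm numerically. A brief sketch (the particular matrices are illustrative examples, not from the original text):

```python
import numpy as np

# Two simple 2x2 matrices: multiplying rows by columns in the two
# different orders gives different products, so AB - BA != 0.
A = np.array([[1, 0], [0, 0]])
B = np.array([[0, 1], [0, 0]])

print(A @ B)  # [[0 1] [0 0]]
print(B @ A)  # [[0 0] [0 0]]

commutator = A @ B - B @ A
assert commutator.any()  # the commutator is nonzero
```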
Fig 48: Erwin Schrödinger (R760)
Schrödinger's wave equation and Heisenberg's matrix mechanics highlight a deeper complementarity in mathematics between the discrete operations of algebra and the continuous properties of calculus. When Heisenberg was trying to solve his matrix equations, the mathematician David Hilbert suggested looking at the differential equations instead. But it fell to Schrödinger, who took his mistress up into the Alps and discovered his wave equation on a romantic tryst. It was only when Hilbert and others examined the two theories closely that it was discovered that they were identical, but complementary, descriptions.
Schrödinger derived his time-independent wave equation as follows. The Hamiltonian dynamical operator, representing the total kinetic and potential energy $H = K + V$ of the system, determines how the wave varies with time and space:

$i\hbar\frac{\partial\Psi}{\partial t}=\hat H\Psi=\left(-\frac{\hbar^2}{2m}\nabla^2+V\right)\Psi$, where $\hbar = h/2\pi$.

This is a non-relativistic equation expressed in terms of the first time derivative. If we now assume the wave function consists of separate space and time terms, $\Psi(x,t)=\psi(x)e^{-iEt/\hbar}$, and seek time independence of the wave function at constant energy E, we get

$\hat H\psi=E\psi$, or $-\frac{\hbar^2}{2m}\nabla^2\psi+V\psi=E\psi$.
Interpreted in terms of matrix mechanics, the Schrödinger wave equation becomes a sum of basis vectors representing each of the wave states, $\psi=\sum_n c_n\phi_n$. The algebraic version of the equation $\hat H\psi=E\psi$ becomes $\sum_j H_{ij}c_j=Ec_i$. Solving in terms of a transformation to a new state, we have $(H-EI)c=0$, where $I$ is the identity matrix. Hence nontrivial solutions exist only when $\det(H-EI)=0$, and so the allowed energies are the roots $E_n$ of this equation. Thus each $E_n$ has a corresponding state vector $c^{(n)}$, with $Hc^{(n)}=E_nc^{(n)}$. This is the famous eigenvalue ('own-value') problem, whose stable standing wave solutions are the s, p, d and f orbitals of an atom.
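The eigenvalue problem can be illustrated numerically for the simplest case, a particle in a 1D box. A sketch (added here, assuming units where hbar = m = 1 and a box of unit length, not from the original text):

```python
import numpy as np

# Finite-difference version of the eigenvalue problem H psi = E psi:
# the kinetic-energy operator -1/2 d^2/dx^2 becomes a tridiagonal matrix
# on the interior grid points of the box.
N = 200
dx = 1.0 / (N + 1)
main = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# The matrix eigenvalues approximate the exact levels pi^2 n^2 / 2.
E = np.linalg.eigvalsh(H)
print(E[:3])  # close to 4.93, 19.74, 44.41
```

The standing-wave eigenvectors of H are the discrete analogues of the orbital solutions mentioned above.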
Heisenberg's problem of uncertainty, expressed in non-commuting operators such as position x and momentum p, gives us back the uncertainty relation when we reinterpret momentum in terms of the wave function as a differential operator $\hat p=-i\hbar\frac{\partial}{\partial x}$; we have

$(\hat x\hat p-\hat p\hat x)\psi=-i\hbar x\frac{\partial\psi}{\partial x}+i\hbar\frac{\partial(x\psi)}{\partial x}=i\hbar\psi$.

Hence $[\hat x,\hat p]=i\hbar$, another view of the uncertainty relation $\Delta x\,\Delta p\ge\hbar/2$.
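The canonical commutation relation can be checked symbolically; a minimal sketch using SymPy (an added illustration, not from the original text):

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
psi = sp.Function('psi')(x)

# Momentum as the differential operator p = -i*hbar*d/dx acting on psi(x).
p = lambda f: -sp.I * hbar * sp.diff(f, x)

# (x p - p x) psi = i*hbar*psi: the product rule supplies the extra term.
commutator = x * p(psi) - p(x * psi)
print(sp.simplify(commutator))
```

The x-derivative terms cancel, leaving exactly i*hbar*psi(x), whatever the wave function.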
In Schrödinger's view the wave function varies with time according to a fixed operator, but in the Heisenberg view the wave function is a fixed vector in Hilbert space and the Hermitian operator evolves in time.
Fig 49: Paul Dirac
Dirac extended Schrödinger's equation to make it relativistic, at the same time ushering in the existence of the positron, and antimatter generally, as solutions coming out of the equation. His equation is: $i\hbar\frac{\partial\psi}{\partial t}=\left(c\sum_{k=1}^{3}\alpha_k p_k+\beta mc^2\right)\psi$, where ψ(x, t) is the wave function for the electron of rest mass m with spacetime coordinates x, t. The p_{1}, p_{2}, p_{3} are the components of the momentum, c is the speed of light, and $\hbar$ is Planck's constant divided by 2π. The new elements in this equation are the 4x4 matrices α_{k} and β and the four-component wave function. The four components are interpreted as a superposition of a spin-up electron, a spin-down electron, a spin-up positron, and a spin-down positron.
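The defining property of the α_k and β matrices is that they anticommute, which is what makes the equation square to the relativistic energy-momentum relation E^2 = p^2c^2 + m^2c^4. A sketch in the standard Dirac representation, built from the 2x2 Pauli matrices (an added illustration, not from the original text):

```python
import numpy as np

I2 = np.eye(2)
Z2 = np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]

# alpha_k = [[0, sigma_k], [sigma_k, 0]],  beta = diag(I, -I).
alpha = [np.block([[Z2, s], [s, Z2]]) for s in sigma]
beta = np.block([[I2, Z2], [Z2, -I2]])

# Verify: alpha_i alpha_j + alpha_j alpha_i = 2 delta_ij,
# alpha_i beta + beta alpha_i = 0, and beta^2 = I.
for i in range(3):
    assert np.allclose(alpha[i] @ beta + beta @ alpha[i], 0)
    for j in range(3):
        assert np.allclose(alpha[i] @ alpha[j] + alpha[j] @ alpha[i],
                           2 * (i == j) * np.eye(4))
assert np.allclose(beta @ beta, np.eye(4))
print("anticommutation relations hold")
```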
Fig 50: Feynman diagram for first-order photon exchange in electron-electron repulsion. Richard Feynman with his own diagram (R760).
The underlying wave-particle complementarity in Feynman's approach to quantum field theory, despite its apparent explanation of the electromagnetic field in terms of particle interaction, is succinctly demonstrated in the first-order diagram for electron-electron scattering (electromagnetic charge repulsion) through exchange of virtual photons provided by uncertainty. The propagator for the diagram is:
$K^{(1)}(3,4;1,2)=-ie^2\iint K_{+a}(3,5)\,K_{+b}(4,6)\,\gamma_{a\mu}\gamma_{b\mu}\,\delta_+(s_{56}^2)\,K_{+a}(5,1)\,K_{+b}(6,2)\,d\tau_5\,d\tau_6$,

where the $\gamma_\mu$ are the variants of the Pauli spin matrices, the Dirac delta function $\delta_+(s_{56}^2)$ represents the discrete interaction of the virtual photon over the spacetime interval, and the $K_+$ are the propagators for electrons a and b, carried by Huygens' wavefront principle according to the wave summations $K_+(2,1)=\sum_n\phi_n(x_2)\bar\phi_n(x_1)e^{-iE_n(t_2-t_1)}$ for $t_2>t_1$, representing positive energy 'retarded' solutions travelling in the usual direction in time, and $K_+(2,1)=-\sum_n\phi_n(x_2)\bar\phi_n(x_1)e^{-iE_n(t_2-t_1)}$ over the corresponding negative energy solutions in the reversed 'advanced' time direction $t_2<t_1$, where $E_n$ and $\phi_n$ are the energy eigenvalues and eigenfunctions for the wave equation.
This both explains how the relativistic solution gives rise to time-backward negative energy solutions as well as time-forward positive energy ones, making possible the particle-antiparticle creation and annihilation events critical to the sequence of Feynman diagrams, and also shows clearly, in the complex exponentials, the sinusoidal wave transmission hidden in the particle diagrams of the quantum field approach.