Genesis of Eden Diversity Encyclopedia


New Scientist, 9 February 2002

Worlds apart: The planet has never been more divided over transgenic crops

ANYONE who thought the inexorable rise of genetically modified crops had been body-checked by consumer pressure and green opposition is wrong. According to figures out last month, 5.5 million farmers worldwide, mainly in the US, Argentina, Canada and China, now grow GM crops covering more than 50 million hectares. That's an area the size of Spain. And with vast countries like Indonesia about to join the GM club, next year's leap could be bigger still. Yet in Britain, where there is still no commercial growing, the GM industry's prospects have taken another dive. A report on the potential health impacts of GM foods slams the current system of safety screening, developed in the US, as flawed and subjective, and calls for better tests (see p 7). The fact that existing GM crops haven't harmed anyone is no reason for complacency, the report warns. The next generation will be more complex, and even subtle changes in foods could have an impact on people dependent on single food sources, such as babies fed formula milk.

Just another gloomy warning from green consumer activists? Far from it. The report comes from a panel of scientists set up by the Royal Society in London, and is an astonishing sign of how far Britain's scientific establishment has moved on this issue. A few years ago, senior scientists were wont to dismiss public concerns about GM crops as hysteria. Now they are telling regulators to get tougher.

The report rightly has no truck with the more lurid fears about GM technology, such as the idea that the DNA that is added to food crops could create dangerous viruses. But as it points out, inserting genes into plants is not yet an exact science, so unforeseen side effects on a plant's biochemistry are a real possibility. Toxins normally present in a plant at harmless levels might increase. Nutrients important to a balanced diet might decline.

Yet all companies have been required to do so far is show that their GM food crops are "substantially equivalent" to non-GM breeds. And that phrase has never been properly defined. Must the plants look and smell about the same? Must they contain the same levels of starch, protein and fibre? Must they be equally well liked and tolerated by rats? Or pigs? There's no consensus among companies and regulators, and the panel is right to say that must be fixed.

It would be naive, though, to see this as the key to hearts and minds. In Britain, neither big business nor its regulators is trusted on GM foods, and consumers cannot yet see this technology giving them anything they want. Reforming the idea of substantial equivalence will not change this.

Nor will it make the environmental concerns go away. A couple of months ago, we reported on a worrying phenomenon in Canada: GM crops cross-pollinating with each other to produce "bastardised" strains. Resistant to more than one herbicide, such crops could in time behave like super-weeds.

The GM industry may be making a killing in certain parts of the world. But in sceptical countries it has a mountain to climb, and in Britain the mountain just got bigger.

Take a punt on fusion

ENDLESS energy with next to no radioactive waste? Fusion power will always sound too good to be true. And yet after two years of gloom, the sun is shining on the idea. In 1999, the US pulled out of the ambitious international project known as ITER, designed to generate usable energy by squeezing atomic nuclei together, claiming it had too many technical problems and cost too much. But last month the President's science adviser said the US should think again. Why the change of heart? One reason is a spate of promising findings from other fusion projects (see p 36). More significantly, ITER's costs have been more than halved. The present participants (Russia, Europe, Japan and, possibly, Canada) would have to find $4.8 billion over eight years to build the reactor. That's about $150 million a year each: loose change for most of the partners. The chance that ITER will actually work remains remote, but at this price it has got to be worth taking a punt. Besides, unlike the $30 billion space station, ITER has a clear purpose and end point. If it doesn't generate enough electricity, we can simply pull the plug. There is, of course, another reason why the US is again making eyes at ITER. Did anyone really believe George Bush would allow his allies a stab at unlocking such a fantastic power source without the US?

All tied up: Entangling particles is easy when you know how

THE dream of teleporting atoms and molecules-and maybe even larger objects-has become a real possibility for the first time. The advance is thanks to physicists who have suggested a method that in theory could be used to "entangle" absolutely any kind of particle.

Quantum entanglement is the bizarre property that allows two particles to behave as one, no matter how far apart they are. If you measure the state of one particle, you instantly determine the state of the other. This could one day allow us to teleport objects by transferring their properties instantly from one place to another. Until now, physicists have only been able to entangle photons, electrons and atoms, using different methods in each case. For instance, atoms are entangled by forcing them to interact inside an optical trap, while photons are made to interact with a crystal.

"These schemes are very specific," says Sougato Bose of the University of Oxford. But Bose and Dipankar Home, of the Bose Institute in Calcutta, have now demonstrated a single mechanism that could be used to entangle any particles, even atoms or large molecules.

To see how it works, consider the angular momentum or "spin" of an electron. To entangle the spins of two electrons, you first need to make sure they're identical in all respects but their spin. Then you shoot the electrons simultaneously into a beam splitter. This device "splits" each electron into a quantum state called a superposition, which gives it an equal probability of travelling down either of two paths. Only when you try to detect the electron do you know which path it took. If you split two electrons simultaneously, both paths could have one electron each (which will happen half of the time) or either path could have both. Bose and Home show mathematically that whenever one electron is detected in each path, they will be entangled. While a similar effect has been demonstrated before for photons, the photons used were already entangled in another way, even before they reached the beam splitter. "One of the advances we have made is that these two particles could be from completely independent sources," says Bose.
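The post-selection argument can be sketched numerically. Below is a minimal numpy sketch, under the simplifying assumption that the two electrons can be labelled and full fermionic antisymmetrisation ignored; the 50:50 beam-splitter convention and all variable names are illustrative choices, not taken from Bose and Home's paper:

```python
import numpy as np

# 50:50 beam splitter: input port a -> (c + d)/sqrt(2), input port b -> (c - d)/sqrt(2).
# Electron 1 (spin up) enters port a; electron 2 (spin down) enters port b.
s = 1 / np.sqrt(2)
amp1 = {"c": s, "d": s}    # path amplitudes for electron 1
amp2 = {"c": s, "d": -s}   # path amplitudes for electron 2

# Post-select the runs with one electron in each output path.
# Spin basis for (path c, path d): |uu>, |ud>, |du>, |dd>.
a_ud = amp1["c"] * amp2["d"]  # electron 1 (up) in c, electron 2 (down) in d
a_du = amp2["c"] * amp1["d"]  # electron 2 (down) in c, electron 1 (up) in d

psi = np.array([0.0, a_ud, a_du, 0.0])
psi /= np.linalg.norm(psi)    # renormalise after post-selection

# Entanglement check: trace out the path-d spin and look at the purity of
# what remains. Purity 1 means unentangled; 0.5 is maximal for a qubit.
rho = np.outer(psi, psi).reshape(2, 2, 2, 2)
rho_c = np.einsum("ijkj->ik", rho)
purity = float(np.trace(rho_c @ rho_c))
print(psi)     # the singlet-like state (|ud> - |du>)/sqrt(2), up to sign
print(purity)  # 0.5 -> maximally entangled
```

Keeping only the one-electron-per-path runs leaves a reduced-state purity of 0.5, the signature of maximal entanglement, which is the content of Bose and Home's claim for those outcomes.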

The technique should work for any objects (atoms, molecules and who knows what else) as long as you can split the beam into a quantum superposition. Anton Zeilinger, a quantum physicist at the University of Vienna in Austria, has already shown that this quantum state is possible with molecules of C60. Although entangling such large objects is beyond our technical abilities at the moment, this is the first technique that might one day make it possible.

Any scheme that expands the range of particles that can be entangled is important, says Zeilinger. Entangling massive particles would mean they could then be used for quantum cryptography, computing and even teleportation. "It would be fascinating," he says. "The possibility that you can teleport not just quantum states of photons, but also of more massive particles, that in itself is an interesting goal." Anil Ananthaswamy More at: Physical Review Letters (vol 88, article 050401)

Anthrax Fallout

THE anthrax attack of 2001 is over. No more powder-laced letters have turned up in the mail since October and no new infections since November. Unless spores are still lurking in someone's lungs, office or mail, there will be no more victims.

Americans must now decide how to move on from an attack that claimed relatively few lives, but was a huge kick in the teeth for a country still reeling after 11 September. To make matters worse, as New Scientist goes to press, the FBI still has no culprit, or even a firm suspect, to judge by the doubling of the reward to $2.5 million last month.

Investigators are virtually certain of one thing, though: it was an inside job. The anthrax attacker is an American scientist, and worse, one from within the US's own biodefence establishment. And only now, four months on from the posting of the first letters, are the frightening implications of that beginning to sink in. America's experience of bioterrorism was, above all, one of institutional failure and a breakdown in the trust on which those institutions are based. The US had its own bioweapons research turned against it, by one of its own. To add to the embarrassment, advances in the massive investigation so far owe more to the serendipity of a few researchers than to any organised response to bioterrorism.

The first anthrax victim was Robert Stevens, a photo editor on a tabloid newspaper in Florida who died of the inhaled form of the disease on 5 October. Even before his death, the bacteria in his blood had been whisked off to Northern Arizona University in Flagstaff, where Paul Keim, a specialist in bacterial evolution, has a collection of genetic variants of anthrax. His lab quickly worked out what type it was. Nine days later, after anthrax had struck one of Stevens's colleagues and a television station in New York, the FBI made an announcement. The infections clearly weren't natural, and the Ames strain of anthrax was responsible.

Confusion reigned as to exactly what this meant (see "Mix-ups and muddles", p 10). But the crucial discovery, as revealed by New Scientist (27 October 2001, p 4), was that Stevens's bacteria were dead ringers for an unusually virulent strain from the US Army Medical Research Institute for Infectious Diseases in Fort Detrick, Maryland. And USAMRIID circulated these particular bacilli to only a few collaborators.

Even getting this far owed much to luck. Anthrax DNA hardly varies at all, and strains have been genetically distinguishable only since the late 1990s. Currently only Keim's technique, which counts variations in the number of repeat sequences of DNA at 50 different places in the genome, can pinpoint the USAMRIID lineage. Investigators were only able to do so because a few scientists happened to be keen on this kind of research, not because there was an organised system for tracking bioweapons. The attacker may not even have realised how precisely the source of his bacteria could be traced. But the genetic trail may now have gone cold. Investigators hoped that samples from the dozen labs holding USAMRIID's strain would be sufficiently different to reveal where the attacker got his bacteria.

'America's experience of bioterrorism was, above all, one of institutional failure and a breakdown in trust'

They include three US military facilities: the navy's medical research lab, the army's Dugway Proving Ground in Utah, and Battelle, a defence contractor based in Ohio. So far, all the samples tested in Keim's analysis have been identical. His lab is now working with the Institute for Genomic Research in Maryland to try and find more revealing differences in non-repeat DNA. But it's not looking good: the labs have clearly been sharing the same bacteria.

Hope now rests on analysing the way the anthrax was turned into the fine, floating powder that makes it a weapon. This was the attacker's masterstroke. The five envelopes held just 2 grams of powder each, yet they managed to contaminate a huge area. No one realised just how huge until postal workers started getting anthrax. Two died after Washington hospitals diagnosed flu and sent them home, despite the publicity about anthrax in the men's workplace. Spores were then found all over the targeted offices, in postal equipment, and even on unrelated mail. Fears that contaminated mail might start to claim susceptible people far and wide seemed justified when two women in the Bronx and Connecticut, with no link to any known contamination, died of anthrax.

There was no excuse for not knowing how insidious weaponised anthrax can be. Ken Alibek, former head of the Soviet anthrax programme who now works in the US, knows how it infested his production plants, but investigators didn't ask him. Canadian biodefence scientists had even measured how easily a harmless relative of anthrax could spread through postal machines when it was weaponised. They warned the US the day Stevens's case was announced, but their e-mail was ignored, and US officials only found out about their work after postal workers had died.

'Canadian scientists warned the US on the day the first case was announced, but their e-mail was ignored'

Yet to the expert eye, the envelope opened in Senator Tom Daschle's office on 15 October obviously contained weaponised anthrax. It even resembled the powder the US military concocted in the 1960s (New Scientist, 3 November 2001, p 5). Its particles were a uniform 1.5 to 3 micrometres across, the optimal size for inhalation. It was highly concentrated, with no debris, coated to prevent clumping, and even contained an unusual form of silica, a drying agent used in the US process. The US government insists it destroyed all its old weaponised anthrax. But in December, an American journalist broke the news that Dugway had been making more for nearly four years under the tutelage of Bill Patrick, who ran the anthrax programme before 1969. It is not clear whether Dugway had told the FBI. The lab had a reasonable motive: to test anthrax detectors, and study the powder's behaviour. Officials could have used that information to respond to the attack. Yet apparently they didn't, even though the Canadian research, which could have saved lives, used bacteria weaponised at Dugway.

But a clearer picture is now emerging. The attacker used the US military strain, and something like the US weaponisation process. Dugway undoubtedly weaponised Ames. The attacker either acquired 10 grams of the Dugway product, or the recipe for making it. Chemical analysis of the last anthrax letter discovered, addressed to Senator Patrick Leahy and opened at USAMRIID, should tell which. If the powder isn't identical to Dugway's, someone else weaponised it. Tracing the chemicals used might lead to the perpetrator. One more clue points to someone who worked at USAMRIID itself. A US marine base got a letter in late September, after the anthrax letters were posted but before Stevens was diagnosed, calling an Egyptian-born scientist, Ayaad Assaad, a bioterrorist.

Assaad was laid off by USAMRIID in 1997, and was harassed while he worked there. He was cleared of the bioterrorist charge. Barbara Rosenberg, a bioweapons expert for the Federation of American Scientists, suspects the letter was the real attacker's attempt to frame Assaad by capitalising on anti-Muslim feeling after 11 September. It revealed an insider's familiarity with USAMRIID.

The attacker also masqueraded, unconvincingly, as a Muslim in the anthrax letters themselves. This could be a clue to his motivations. If he wished to scale up US military action against Iraq, he almost succeeded: many in Washington tried hard to see Saddam Hussein's hand in the attacks. If he wished merely to make the US pour billions into biodefence, he did succeed. And as a US bioweapons expert, he might already be reaping the increased funding and prestige that now goes with the job. That chilling possibility underscores the US's dilemma. The attack showed how badly the country needs to improve biodefence. Yet to do so, the US must boost the very institutions that permitted this attack. That may help it prepare for the next one. But it may not prevent it. Debora MacKenzie

It isn't yours: Campaigners call for a ban on all genetic patents

A CALL to declare the planet's genetic heritage a common resource that no one can patent has divided environmentalists and raised serious questions about the Biodiversity Convention, one of the major successes of the Rio Earth Summit a decade ago.

A coalition of more than a hundred environment and citizens' groups headed by anti-biotechnology crusader Jeremy Rifkin has launched a campaign for a "Treaty to Share the Genetic Commons". They want the treaty adopted at Rio's successor, the World Summit in Johannesburg in August. "Our aim is to prohibit all patents on plant, micro-organism, animal and human life, including patents on genes and the products they code for, as well as chromosomes, cells, tissues, organs and organisms," the joint declaration says.

The campaign directly contradicts one of the central tenets of the Biodiversity Convention, which green groups have persuaded most governments to sign. To encourage developing countries to preserve habitats, the convention allows them to claim intellectual property rights over their own genetic resources. That means many tropical countries now sell "bioprospecting rights" to biotech companies and deny access to independent scientists. But Rifkin, of the Washington DC-based Foundation on Economic Trends, told New Scientist: "No government can claim the right to own the products of millions of years of evolution or to charge bioprospectors. What we are saying is totally against the Biodiversity Convention."

That has not stopped some prominent green groups from joining his call for a "genetic commons". Friends of the Earth International, for example, appears to back both the Biodiversity Convention and the new treaty. Many biotechnologists claim that a lot of research wouldn't be done if companies couldn't protect their investment by patenting genes. "They should make money from patenting engineering processes, not the genes themselves," Rifkin responds. "They have no more right to lock up genes for their own use than corporations a century ago had the right to patent chemical elements they discovered." Rifkin argues that gene patents damage academic research because results aren't published, and also make genetic tests prohibitively expensive. This week, for instance, it was reported that US labs have stopped doing genetic tests for the iron overload condition haemochromatosis, because of the cost of royalties.

But others stand by the value of patenting genes. "In an ideal world perhaps Rifkin is right, but we think patents are a more practical way of promoting research," says Gordon Conway, president of the Rockefeller Foundation, which funds agricultural research for the developing world. Conway denies that patenting genes would leave research wholly in the hands of private corporations. "We are working to create partnerships between biotechnology companies and African research institutes," he says. Fred Pearce

Hothouse chips: Why carve computers out of silicon when you can grow them on a crop virus instead?

MOST of us dread computer viruses, but what about a computer that's made of viruses? Strange as it sounds, plant viruses can now be turned into the building blocks of microprocessors. Their inventors say the tiny biocircuits could even be used inside the body to make a new breed of sensors that detect blood glucose levels, for instance.

But why use a virus? The big attraction is that at just 30 nanometres across, they are far smaller than the 130-nanometre wide components in today's microchips. It's getting tougher and more expensive to shrink conventional microchip technology, so the viral approach could bring the breakthrough in miniaturisation that chip manufacturers have been searching for.

The viruses provide the perfect scaffold for tiny electronics systems because they can be made to arrange themselves into crystal-like arrays. This raises the tantalising possibility of self-organising circuits, which need little or no intervention to help them build useful three-dimensional structures that can be populated with circuit components. Until now, nanotechnologists have only constructed flat nanocircuits, using components like carbon nanotubes as transistors (New Scientist, 17 November 2001, p 26), but what they really want is to find a way for these molecular circuits to build themselves.

To make their living 3D microcircuits, chemist M. G. Finn and virologist Jack Johnson, both of the Scripps Research Institute in La Jolla, California, chose to work with the cowpea mosaic virus, a common pathogen which stunts the growth of the black-eyed pea plant. This virus is encased in a protective protein coat that has 20 faces and 12 corners, or vertices. The researchers inserted DNA segments into the virus's genome that caused the pathogen to produce cysteine amino acids on the vertices of its viral shell. The resulting cysteine complex at each vertex sports sulphur-containing thiol groups, which bind readily to gold.

So when the team added ultrafine gold particles to the cysteine-loaded viruses, they ended up with viruses studded with a pattern of gold electrodes (see Diagram). The researchers are now working with scientists at the US Naval Research Laboratory in Washington DC to bridge the electrodes with wires and organic molecules that can act as electrical switches, effectively doing the job of a transistor. In this way the researchers hope to build logic gates and combine them to perform complex operations. "Building the molecular electronics becomes an exercise in connecting the dots," says Finn.

If they succeed, the microchips of the future could come from farms instead of high-tech factories. You can cultivate the virus cheaply by growing a few hectares of its host plant, the black-eyed pea, and then isolating the virus from the leaves. "It will be very exciting to see such circuits patterned on something nature has provided us with, like a virus," says Uzi Landman, who researches nanoscale computer circuit materials at the Georgia Institute of Technology in Atlanta.

But it's not only viruses that are being harnessed for computer research. Last year, a team led by Michael Simpson at the Oak Ridge National Lab in Tennessee succeeded in getting modified Pseudomonas bacteria to behave like logic gates (New Scientist, 26 May 2001, p 24). The Scripps team wants to go beyond viral electronics, however, and use the viruses as tiny chemical reaction vessels. To do this, they will denature the virus's protein surface, allowing them to attach molecules to the inside instead, effectively hollowing it out, perhaps to allow drugs to be ferried into the body. Catherine Zandonella, New York

Fill her up with photons
"Quantum afterburner" will reclaim energy from waste heat

THE internal combustion engine took a giant leap forward last week when an American scientist showed how to extract more energy from a car's exhaust gases than was ever thought possible. The trick? Turn the exhaust system into a laser.

Since the four-stroke petrol engine was invented in the mid-19th century, engineers have been striving to extract the last ounce of power from it, guided by the laws of thermodynamics. These laws predict that an engine's efficiency is ultimately limited by the temperature of the gases as they burn.

As the hot gases expand, they drive a piston in a cylinder, generating power that's transmitted to the wheels. But once this expansion is over and the gases begin to cool, you can never get more useful energy out, or so physicists thought.
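The thermodynamic ceiling the preceding paragraphs allude to is the classical Carnot bound, eta = 1 - T_cold/T_hot: the hotter the gas relative to its surroundings, the more of its heat can become work. A quick sketch (the temperatures are illustrative examples, not figures from the article):

```python
# Carnot bound on any heat engine: eta = 1 - T_cold / T_hot (kelvin).
# The figures below are illustrative, not numbers from the article.
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of heat that can be turned into work."""
    if not t_hot_k > t_cold_k > 0:
        raise ValueError("need t_hot_k > t_cold_k > 0")
    return 1.0 - t_cold_k / t_hot_k

# Peak combustion (~2300 K) against ~300 K surroundings leaves headroom:
print(carnot_efficiency(2300.0, 300.0))  # about 0.87
# Once the exhaust has cooled to ~600 K, far less work is classically left:
print(carnot_efficiency(600.0, 300.0))   # 0.5
```

It is this classical bound on the cooling exhaust that Scully's quantum scheme claims to sidestep.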

Now, by turning to the strange laws of quantum mechanics, physicist Marlan Scully of Texas A&M University in College Station claims it is possible after all to extract more useful energy from hot exhaust gases. The secret lies in the workings of a new type of laser being developed in a handful of labs around the world.

The revolutionary idea could lead to a new generation of engines that extract useful energy from pure heat. It could make cars of the future far more fuel efficient, though Scully believes tiny nanoscale motors are likely to benefit from the idea first.

The laser works by passing single atoms through a cavity containing a pair of mirrors separated by a distance matching the wavelength of light that the atom can emit. If the hot atom has enough energy, it gives up a photon as it passes through the cavity. This photon then bounces between the mirrors, which effectively stores it in the cavity. The process continues as other atoms pass through, also giving up photons. If one mirror allows a small proportion of photons to escape, the cavity will emit light of one wavelength (that set by the mirror gap), just like a laser.
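The mirror-gap condition above is just the standing-wave resonance of a two-mirror cavity. A small sketch of that relation (the cavity length is a hypothetical example, not a dimension from Scully's device):

```python
# Standing waves fit a two-mirror cavity of length L when L = n * (lambda / 2),
# so the stored wavelengths are lambda_n = 2L / n. The cavity length below is
# illustrative; it is not a figure from Scully's proposal.
def resonant_wavelengths(cavity_length_m: float, max_mode: int) -> list:
    """Wavelengths (in metres) that resonate, for mode numbers 1..max_mode."""
    return [2.0 * cavity_length_m / n for n in range(1, max_mode + 1)]

# A 2-micrometre cavity has a 4-micrometre fundamental, in the mid-infrared
# region where hot molecular exhaust gases such as CO2 radiate:
for lam in resonant_wavelengths(2e-6, 3):
    print(f"{lam * 1e6:.2f} micrometres")
```

Only light at these discrete wavelengths builds up between the mirrors, which is why the gap selects the single wavelength the cavity emits.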

Scully's idea is to apply this principle to car exhaust. Since the exhaust is hot, the atoms it contains can be persuaded to give up this energy in the form of coherent light by passing them through an appropriately sized mirrored cavity. He calls his idea a "quantum afterburner". "It's controversial stuff," he admits. "But we should have completed a proof of principle experiment with CO2 molecules soon."

It's an enormous surprise for physicists that the laser energy can be extracted at all. But Scully's work, which is partly funded by the US Navy and US Air Force, does not run counter to the laws of thermodynamics. Rather, his new limit for the energy available from the heat in waste gases comes from exploiting quantum effects which operate outside the laws of thermodynamics.

Muhammad Zubairy, a visiting professor at Texas A&M who has been studying Scully's work, plans to publish a paper with him explaining just what could be done with this laser energy once it has been acquired from the exhaust. "In principle, you could use it for anything," says Zubairy. Justin Mullins More at: Physical Review Letters (vol 88, article 050602)


Reality could be made of anything from blancmange to billiard balls and we'd never know, says Eugenie Samuel

WHAT are the fundamental building blocks of the Universe? Once we were told they were atoms. Then it turned out that these were not fundamental at all, but made of protons, neutrons and electrons. Protons and neutrons are in turn made of quarks. Deeper still, we now learn, come tiny vibrating strings and membranes living in a space of 10 or 11 dimensions. But we all expect that one day physicists will finally discover the deepest structures of nature. Won't they? Not necessarily. Maybe it's impossible to discover these deepest structures. What's more, maybe it doesn't matter what they are. That's the startling claim of Robert Laughlin, a Nobel laureate at Stanford University. According to Laughlin, it may be that what we call reality is a spontaneous phenomenon, emerging like a wave out of some forever unknowable cosmic medium.

In some ways, Laughlin's ultimate aim is not so different from that of other theoretical physicists. Their common goal is to find a single theory that unites quantum mechanics, the theory that describes the behaviour of matter on tiny scales, with Einstein's general theory of relativity, which describes space, time and gravity. Such a "theory of everything" would unite all the forces of nature and explain why time and space exist, as well as answering such trifles as how the Universe began and what happens at the centre of a black hole. Ambitious stuff.

The well-trodden route towards this ultimate theory is to try to find the right building blocks of reality and then see whether they can account for the natural phenomena we observe. Laughlin is treading a very different path, however, because he believes you can't build a theory of everything from the bottom up. The laws that govern large-scale phenomena will not be deduced from the laws that govern tiny particles, he says. "It's in the same way that flocking behaviour can be characterised without understanding everything about birds, or superconductivity without understanding atomic theory."

This idea is called emergence. It's a familiar phenomenon in the theory of condensed matter, which is Laughlin's background. Solids and liquids sometimes play host to strange entities that bear little resemblance to the atoms making up the substance.

For example, in some materials there are things called spin waves. Every atom acts a bit like a small magnet, with a north and a south pole aligned along its spin axis, and spin waves are oscillations in the alignment of these spins. "Somewhat like what would occur if one took a supple picket fence and rapidly twisted one end back and forth," says Laughlin. Because this is the quantum world, waves can be considered as particles, and vice versa, so spin waves behave like a kind of emergent particle.
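The picket-fence picture corresponds to a standard textbook result: the spin-wave dispersion of a one-dimensional ferromagnetic chain. The sketch below uses that generic formula with made-up parameter values, not anything quoted in the article:

```python
import math

# Textbook dispersion for spin waves on a 1D ferromagnetic chain (a generic
# result, not one from the article): E(k) = 4*J*S*(1 - cos(k*a)), with
# exchange coupling J, spin S and lattice spacing a. Near k = 0 the energy
# grows only as k**2: long, gentle twists of the "picket fence" cost almost
# nothing, which is what lets the wave behave like a particle of its own.
def spin_wave_energy(k: float, j_exchange: float = 1.0,
                     spin: float = 0.5, a: float = 1.0) -> float:
    return 4.0 * j_exchange * spin * (1.0 - math.cos(k * a))

for k in (0.0, 0.1, math.pi / 2, math.pi):
    print(f"k = {k:.2f}  E = {spin_wave_energy(k):.4f}")
```

The smooth energy-versus-wavenumber curve is exactly the kind of relation a "real" particle would have, even though nothing here is a particle in the microscopic description.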

Many other kinds of emergent creature live inside matter, including vibrational waves called phonons, electrical excitations called excitons in semiconductors, and waves of charge called plasmons. These are known variously as "collective excitations" and "quasiparticles". From inside the material, these bizarre objects would seem as real as any other particle.

But if quasiparticles are indistinguishable from real particles, could it be that things we think of as real (electrons and so on) are themselves quasiparticles, emerging out of some ubiquitous but undetectable cosmic stuff?

It's a controversial idea. Sure, even in string theory and other bottom-up theories, matter particles arise from the behaviour of smaller building blocks. But for Laughlin there's a crucial difference-we can never determine what that basic "stuff" is.

In a solid, for example, quasiparticles can't be derived or predicted from the behaviour of the individual particles they are made of. In general it has proved impossible to solve the quantum equations of motion for each interacting atom to predict the existence of spin waves or phonons: there are just too many equations to handle.
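The "too many equations" point can be made concrete with a generic counting argument (not a calculation from the article): the joint quantum state of n spin-1/2 atoms needs 2^n amplitudes, so the coupled equations multiply exponentially.

```python
# Why brute force fails: describing n quantum spins takes one complex
# amplitude per configuration, and there are 2**n configurations.
def hilbert_dim(n_spins: int) -> int:
    """Number of amplitudes needed for n spin-1/2 particles."""
    return 2 ** n_spins

for n in (10, 30, 50):
    print(f"{n} atoms -> {hilbert_dim(n):.2e} amplitudes")
```

Fifty atoms already demand about 10^15 amplitudes, which is why collective behaviour like a spin wave is characterised directly rather than derived atom by atom.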


This means knowing about the quasiparticles may tell you nothing about what they are made of. This isn't a problem for ordinary materials, because by studying these at higher temperatures, we already know what they are made of. But if the Universe works like this, then maybe the underlying nature of reality is hidden from us. Everything is emergent, but we'll never know what from. It would explain why physicists have so far had such trouble finding the right fundamental particles to unify the whole of physics in a theory of everything. "If what you see is model-independent then you can't learn anything about the underlying equations by observing it," says Laughlin. "You could call this the dark side of emergence."

At first sight, this is a depressing conclusion. Does it mean physicists should just give up their quest? Thankfully not. Laughlin thinks we just have to look elsewhere for the fundamental nature of reality: in the process of emergence itself.

The range of emergent particles found in most condensed-matter systems is far too limited to be a blueprint for a theory of everything. Such a theory would need to account for a plethora of particles, including their charge, spin and mass, and also spawn the whole of space and time. 'Known cases of emergence are too primitive to serve as a model for real space-time," Laughlin says. 'We need to find better ones." Whatever that model will eventually be, Laughlin's betting it will exploit a phenomenon called quantum criticality. This is a kind of behaviour seen in some materials near absolute zero, when they are poised between one state and another. For example, in some magnetic solids individual spins become so I.highly correlated that the behaviour of one affects them all, and the collective wavefunction of the material lacks any sense of scale. And to Laughlin, this is a highly desirable property, because scale invariance is also a fundamental property of space-time. In the standard model of particle physics, particles are thought of as collective oscillations of the vacuum of space. In this model, a small chunk of space oscillates exactly as much as a larger chunk of space. This is just like when you zoom in on a stretch of coastline while looking at a map. You see as much variation in the coast no matter what scale you're looking at, because as the scale of the map gets smaller, you lose sight of larger variations but become sensitive to smaller ones. So modelling the Universe with quantum criticality gives you scale invariance for free. But it also means that any sense of the material being made up of building blocks is lost. In a superconducting material, for example, nothing about what the material is made of follows from the behaviour of spin waves. 'If all we could observe was the quasiparticles, we wouldn't be able to tell," says Matthew Fisher, who works on the theory of quantum criticality at the University of California, Santa Barbara. 
Likewise, if the very fabric of the Universe is in a quantum-critical state, then the "stuff" that underlies reality is totally irrelevant - it could be anything, says Laughlin. Even if the string theorists show that strings can give rise to the matter and natural laws we know, they won't have proved that strings are the answer - merely one of an infinite number of possible answers. It could just as well be pool balls or Lego bricks or drunk sergeant majors. Just a minute, though. If you can warm up a quantum-critical solid so that the bits become visible again, why not heat up a piece of the Universe - a little matter, say - to do the same? This is effectively what experimental particle physicists have been doing for decades with particle accelerators. The trouble is, to see the underlying medium of reality you have to reach beyond the maximum energy that a quasiparticle can carry. The quasiparticle might then start to show signs of its nature.

Postcard from the edge: maybe we can never see much deeper into reality than the level of these subatomic particles

Fusion Power Hots Up

FOR as long as most of us can remember, the dream has been there: unlimited, clean energy from nuclear fusion. No greenhouse gases to worry about and relatively little radioactive waste. Better still, the fuel for fusion comes ultimately from seawater - so no more reliance on Middle East oil or drilling in the pristine wilderness. The prize is certainly beguiling.

All it takes is some way to make a simple mix of hydrogen isotopes hot enough for their nuclei to fuse and yield colossal amounts of energy. Yet for almost half a century, nature has played a game of peekaboo with fusion physicists, fooling them into thinking they were nearing their goal and making them the butt of a cruel joke: "Limitless fusion power is just 40 years away - and it always will be."

Now a team of researchers at the Joint European Torus (JET) project at Culham, Oxfordshire - the international test bed for fusion machines - may have broken the deadlock. At last October's meeting of the American Physical Society in Long Beach, California, they unveiled data showing that it really is possible to create and hang on to the 100 million °C temperatures needed to bring the power source of the stars down to Earth. This is a promising development for fusion researchers, whose morale hit a low point in 1998, following the demise of ITER, the International Thermonuclear Experimental Reactor, a $10 billion project to build a fusion machine the size of a 10-storey office block. Calculations by fusion theorists in the US had raised grave doubts over claims that ITER would achieve ignition. Those doubts, plus the hefty price tag, led Congress to pull the plug on American involvement - and thus on any hope of building the giant machine.

The death of ITER sparked much hand-wringing among physicists, some of whom were seriously starting to doubt whether nuclear fusion could ever be tamed. Perhaps it was to remain the preserve of stars alone. A back-of-the-envelope calculation hardly gives cause for optimism. From the outside, stars look like blazing powerhouses, but get close in to a star like the Sun and you find that it packs barely a watt of fusion power per cubic metre of volume. A viable fusion reactor must cram in around a million times as much power - no mean feat.

The more theorists explored the dauntingly complex physics of the plasma in machines like JET, the less they liked what they saw. One troublesome question is just what size a fusion reactor should be. Big machines are better for retaining the heat, simply because being bigger means it takes longer for the heat to escape. They also give better performance in terms of the amount of fusion power you get out compared to the heating power you have to put in. So bring on the humungous fusion machines, then? Well, not so fast - there's more to a practical device than just triggering fusion. Any workable power plant must produce steady amounts of fusion power for years on end, which means the reactor must be able to withstand years of being blasted by neutrons from the fusion reactions, and temperatures of 100 million °C. And all those problems get worse with bigger machines, basically because there's more fusion power blasting each square metre of the machine's walls. Build too small a machine, and you can't hang on to the high temperatures long enough. But build one too big, and you run the risk of the plasma trashing your reactor in no time. Finding the optimal size meant putting a lot of faith in "scaling laws" - rules relating size to performance extracted from past fusion experiments the world over. And that began to look like a very dodgy strategy.
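That "barely a watt per cubic metre" figure is easy to check for yourself. A minimal sketch, using nothing but the Sun's well-known luminosity and radius:

```python
import math

# Standard astronomical values
L_SUN = 3.8e26      # total power output of the Sun, watts
R_SUN = 6.96e8      # solar radius, metres

# Average fusion power per cubic metre of solar volume
volume = (4.0 / 3.0) * math.pi * R_SUN**3   # ~1.4e27 cubic metres
power_density = L_SUN / volume              # watts per cubic metre

print(f"{power_density:.2f} W/m^3")         # prints "0.27 W/m^3"
```

A commercial reactor would need to draw hundreds of megawatts from a vessel of at most a few thousand cubic metres - around a million times this density, as the article says. The Sun gets away with such a feeble output per cubic metre only because it is unimaginably large and has billions of years to burn.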
For years fusion scientists had used these scaling laws as comfort blankets, clinging to their predictions of how the ultimate fusion machines would behave. "The trouble is, there wasn't much real physics in them," says Steve Cowley of Imperial College, London. "They were basically just graphs of best-fit lines to data. It gave the impression that fusion research wasn't real science." Putting the scaling laws on a more solid foundation was impossible without tackling the daunting challenge of turbulence - once described by Einstein himself as the hardest problem in the whole of classical physics.

The first detailed model of turbulence in machines like JET and ITER was dreamed up by William Dorland and Michael Kotschenreuther of the Institute for Fusion Studies at the University of Texas. The outcome could hardly have been worse. The model suggested that turbulence-induced heat loss from the plasma in ITER made ignition unlikely. With some of ITER's supporters claiming they were "99.5 per cent" certain the machine would reach ignition, Dorland and Kotschenreuther's model was as welcome as a ham sandwich at a bar mitzvah. Some fusion researchers still dismiss the model as fundamentally flawed. But many others began to have a crisis of confidence about the whole future of fusion research. Outside the fusion community, some physicists waded in with finger-wagging criticism of a project on which billions had been spent to no obvious benefit.

Yet even as Dorland and Kotschenreuther were doing their controversial calculations, there were glimmerings of a get-out - a way in which even a turbulent plasma might hang on to its heat long enough to achieve ignition. It centred on effects left out of their sums that are now the hottest topic in fusion research, making even some hardened sceptics sit up and take notice. Their promise is belied by their humdrum name: transport barriers. Creating 100 million °C plasma is one thing; keeping it hot is another.
Turbulence within the super-hot plasma has a nasty habit of transporting the heat out as fast as colossal electric currents and particle beams can shovel it in. But if you can create a region of low turbulence within the plasma, it acts like a scarf around a hot-water pipe - and the heat stops pouring out of the machine.

The existence of such heat transport barriers came as a complete surprise to fusion physicists. "No one predicted them," says Cowley. "But we're certainly all glad that they exist." The first transport barrier to reveal itself is known as the "high-confinement mode", or H-mode, which traps heat at the edge of the plasma. Experiments at various fusion laboratories have shown that when the amount of power trapped in the plasma exceeds a threshold, a region of low turbulence suddenly appears around the edge of the plasma. Exactly why H-mode occurs is still not fully understood, but its effect is dramatic, doubling the amount of time the machine can sustain fusion temperatures.

Recent experiments at JET have shown that H-mode isn't stable, but repeatedly collapses, zapping the walls of the machine with huge heat loads. It's not all bad news, though: these sudden releases of energy also allow impurities, including fusion-killing helium "ash", to escape. The trick to keeping fusion alive lies in a balancing act, using H-modes to keep the heat in while allowing their collapse to clean up the plasma. Researchers at JET have now perfected the trick by "tickling" the plasma with judicious amounts of magnetic and radio-frequency energy. They have also come up with a way to combat the heat-load problem, using squirts of inert gas that spread the energy trapped in the edge of the plasma over a wider area, reducing wear and tear on the machine walls.

But this is not the only good news to come the way of the long-suffering fusion community. At October's meeting of the American Physical Society, Joelle Mailloux, one of the researchers at JET, presented the best evidence yet for even more powerful heat-trapping effects - and ones that occur in the bulk of the plasma, not just at the edge: internal transport barriers.

These ITBs are now expected to play a crucial role in the success of any future power-producing reactors. By carefully controlling conditions inside the machine, it's possible to put a twist in the magnetic field that dramatically cuts the rate of heat loss. "That gives higher temperatures and plasma densities even than H-mode," says Mailloux. "And that should lead to smaller and cheaper reactors." At the APS meeting, Mailloux and her team unveiled the spectacular improvement ITBs can bring. For years, theorists had left turbulence out of their models so they could have equations they could solve. That doesn't mean turbulence can be ignored - as the failure of theorists to predict heat-trapping effects shows. Nowadays, effects such as H-mode and ITBs are being probed using powerful computers to see if they can be exploited on future commercial fusion power machines.

Theorists like Cowley are also using computer models to study the potential of different types of fusion machines. Until now, most attention has focused on tokamaks, the doughnut-shaped "magnetic bottle" design conceived by Soviet scientist Andrei Sakharov and colleagues in the early 1950s. But these tokamaks aren't the only game in town. Back in the 1970s, some theorists sketched out a modified design they claimed could make fusion easier to achieve. This is the spherical tokamak - shaped more like a cored apple than a doughnut - and it can cram more plasma into a smaller space, and make better use of the magnetic fields used to control it. Potentially, this makes it far more energy-efficient than conventional tokamaks. That efficiency is described by a number called "beta", which is the ratio of the heat energy in the plasma to the magnetic field energy needed to hang on to the plasma. The higher the beta, the more efficient the machine - and the better the prospects of producing economically viable levels of fusion power. In 1991, another team at Culham lashed together a prototype from spare parts lying around the lab.
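A rough sketch of what beta measures: it compares the plasma's thermal pressure with the pressure exerted by the confining magnetic field. The plasma parameters below are illustrative round numbers for a fusion-grade plasma, not measured data from START or any other machine:

```python
import math

MU_0 = 4 * math.pi * 1e-7   # permeability of free space, in T*m/A
EV = 1.602e-19              # one electronvolt, in joules

def plasma_beta(density_m3, temp_ev, b_field_t):
    """Ratio of plasma thermal pressure to magnetic field pressure.

    The factor of 2 counts electrons and ions separately, assuming
    both species share the same temperature.
    """
    thermal_pressure = 2 * density_m3 * temp_ev * EV   # p = 2nkT, in pascals
    magnetic_pressure = b_field_t**2 / (2 * MU_0)      # B^2/2mu0, in pascals
    return thermal_pressure / magnetic_pressure

# Illustrative plasma: 1e20 particles per cubic metre at 10 keV,
# held by a 1.5 tesla field
beta = plasma_beta(1e20, 10_000, 1.5)
print(f"beta = {beta:.2f}")   # prints "beta = 0.36"
```

A conventional tokamak typically manages a beta of only a few per cent, so a plasma whose pressure is a third or more of the magnetic pressure holding it - START territory - makes dramatically better use of its expensive magnetic field.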
Known as START, it looked like a standard tokamak that had been squeezed together in the hands of a giant. But by the time it was shut down in 1998, START had stunned everyone by achieving a beta value of 0.4 - more than 10 times as efficient as JET, and still a world record. Now a successor has been built at Culham. Called MAST (Mega-Amp Spherical Tokamak), it is already the talk of the fusion community. Using sophisticated computer techniques, theorists have discovered that the conditions that make heat-trapping possible in parts of the plasma in ordinary tokamaks can exist right through a spherical tokamak - making it a good bet for achieving ignition. While stressing that it is still early days, even the sceptics are enthusiastic about spherical tokamaks. "MAST is a tremendously exciting experiment," says Dorland, now at Imperial College, London, the theorist whose calculations cast doubt on the original ITER project. "Spherical tokamaks have a very exciting role to play in fusion."

And there is good news for ITER. The project is now being revived, although with a smaller, more sophisticated machine that will be half the cost of the original leviathan. Theorists are confident that the heat-trapping effects seen in smaller machines like JET will appear in ITER, which is predicted to generate around 500 megawatts of power from just a few tens of megawatts of external heating. Last November, the first talks were held between the international partners still involved in the project to decide where to build the scaled-down ITER machine. There is even talk of the US rejoining the project, which is benefiting from the renewed sense of optimism about fusion - even among former doubters. They include Richard Hazeltine of the University of Texas, Austin. "I was a sceptic about the claims being made a few years ago, but I now share the optimism," he says. "As well as these heat-trapping effects, we're beginning to understand what's happening in fusion.
The picture has really changed." Advocates of the leaner, fitter ITER admit that getting the backing of the American fusion community is only part of the task they face. The question now is whether politicians in the US and elsewhere can be persuaded to forget all the years of hype and disappointment, and put their faith - and yet more billions of dollars - into the quest for this ultimate power source.

Robert Matthews is science correspondent of The Sunday Telegraph