
New Scientist, 14 July, 11 August and 8 September 2001

Spectral Senses

Could we have an ever-changing variety of senses instead of just the basic five? Alison Motluk asks if it's time to rethink how our brains work

EVERYONE knows there are five basic senses. But try separating them one from the other in your daily life and suddenly they don't feel so distinct.

Eat a banana, for instance, and try to taste it without smelling it and experiencing that banana-y texture on your tongue. Can you really just taste, or must you sometimes taste-smell-feel? Try talking to your lover. Listen to what is said without watching the mouth move or feeling the caress of a hand. Can you simply hear, or is there always an element of hear-see-touch? Even on the phone, can you hear a voice without imagining a face? Hard, isn't it?

The prevailing view of the brain still holds that there are five separate senses that feed into five distinct brain regions preordained to handle one and only one sense. The yellowness of the banana skin, the texture of its flesh, its smell and taste: each of these elements is parcelled up and analysed in isolation. Some theories of consciousness suggest that these dedicated brain areas somehow stamp each sense with a unique "feeling". Then, the theory goes, the brain pastes the fragments back together, calls on memory to give it a name and recall what it's for and, voila, a banana.

But perhaps it's time for a radical rethink of how the brain works. Tasks we've long assumed were handled by only one sense turn out to be the domain of two or three. And when we are deprived of a sense, the brain responds, in a matter of days or even hours, by reallocating unused capacity and turning the remaining senses to more imaginative use. All this raises the questions: are the senses really so segregated? Are they separate at all? Indeed, is it possible that our senses are continuously developing and merging so that each one of us has our own private view of the world?

It might be a big shift in thinking, but it began with a simple finding: the discovery of "multisensory" neurons. These are brain cells that react to many senses at once instead of just to one. No one knows how many of these neurons there are; maybe they are just a rare, elite corps. But perhaps there are no true vision, hearing or touch areas dedicated to a single sense after all. Perhaps all the neurons in our brains are multisensory, and we mistakenly label them "visual" or "auditory" simply because they prefer one sense over the others.

That's the view of Alvaro Pascual-Leone at Harvard University. He made a splash five years ago when he showed that people who were born blind use the visual cortex when they read Braille. He wondered if, rather than lie idle, parts of the brain meant for seeing just started helping out with touching. His more recent work has convinced him that not only blind people but everyone has the capacity to swap senses if they have to. He thinks that the brain is much more versatile than most researchers would have us believe.

To test the idea, Pascual-Leone blindfolded healthy, sighted volunteers for five days running, taught them Braille and watched how their brains responded. He even fitted their blindfolds with photographic paper, just to be sure volunteers weren't tampering with them. Before, during and after the blindfolding, they had a series of brain scans while they were set different tactile and auditory tasks: feeling either Braille characters or brush strokes on their fingertips and listening to tones or word fragments. Before the blindfolding began, the "visual" areas were not switched on by the touching and hearing tasks. But as the week wore on, the visual regions became more and more involved in routine touching and hearing.



If a person isn't seeing, Pascual-Leone found, parts of the "visual" cortex are roped in to help out in tasks involving other senses. In fact, the newly recruited regions soon become indispensable. When he tried temporarily disrupting the workings of the visual areas, using a technique called transcranial magnetic stimulation, or TMS, the blindfolded volunteers found it hard to read their Braille. Taking the blindfolds off for just a day, though, was enough to undo the changes; suddenly touching and hearing tasks no longer triggered visual areas, even though volunteers were blindfolded again briefly for the scan. "Removing the blindfold and being exposed to the seeing world for 12 to 24 hours is sufficient to revert all changes induced by the five days of blindfolding," says Pascual-Leone. What was astonishing was how quickly the brain seemed able to recruit new areas, and equally effortlessly reverse that process. It was far too quick to be the result of new connections forming from scratch, reasoned Pascual-Leone. "It must be assumed," he says, "that tactile and auditory input into the 'visual cortex' is present in all of us and can be unmasked if behaviourally desirable."

Pascual-Leone now feels the brain is not organised into "visual" and "auditory" and "tactile" regions at all. Instead he thinks it is split into units that have specific jobs to do or particular problems to solve: judging distance, for example, or timing intervals. These problem-solving units simply use the best information available. Sometimes they may prefer certain senses to others, based on how suitable they are for the assigned computation, and sometimes they may use more than one, if that helps. Vision, for instance, might be the preferred way to judge distances. But if you can't see, hearing or touch can certainly fill in. The preference of a particular problem-solving unit for a specific sense may explain the notion of sense-specific regions, he says. Just because an area tends to call on vision doesn't mean it can't process other senses, only that it may not bother if its first-choice sense is on hand. This may have tricked neuroscientists into thinking that the brain is structured in parallel, segregated systems processing different types of sensory signals, says Pascual-Leone.

There is some good evidence that the brain can mix up the senses to solve particular problems. One of the main benefits of sensory integration may be better clarity and detection, says Barry Stein, at Wake Forest University in Winston-Salem, North Carolina, one of the first researchers to identify the brain's multisensory capabilities. Even weak signals should be taken seriously if they're picked up by more than one sense.

We are, for example, much more sensitive to a chemical when we combine smell and taste. Pamela Dalton, at the Monell Chemical Senses Center in Philadelphia, asked 10 people to smell benzaldehyde, a cherry-almond odour that has no taste, and to taste saccharin, a sweetener that has no smell. Before each testing session, she worked out the point where each volunteer could no longer detect each substance and prepared even weaker samples. Then she asked them to slosh the solution around in their mouths and sniff the odour at the same time. Combining taste and smell made both substances much more apparent, she found. "Ten minutes before, they hadn't been able to detect it," says Dalton.

A brain combining senses can also make better sense of ambiguous information. David Lewkowicz at the New York State Institute for Basic Research in Developmental Disabilities on Staten Island shows this nicely with a visual image of two balls moving from opposite sides of a screen, merging briefly in the centre, then continuing along their merry ways (see "Brain Games"). But when a beep sounds at the moment the two balls merge, what you see changes completely. Now, instead of passing through each other and continuing along the same trajectory, the two balls bounce off each other and return to the side they came from.

Combining hearing with vision can lead us to draw different conclusions about what we've seen, too. A single flash of light can appear to be two flashes when it coincides with two beeps, say Ladan Shams and her colleagues at Caltech in Pasadena. Even when we know there is just one flash, we can't help perceiving it as two. Apparently the brain won't let us draw contradictory conclusions from two different senses.

Increasingly, scientists are discovering that even everyday activities may actually make use of more than one sense. Consider the task of running your fingers over a pattern of raised ridges and deciding in what direction they are running. What sense do you call upon? Most of us would guess the obvious: touch. But a group at Emory University in Atlanta has demonstrated that in perfectly normal people parts of the "visual" brain are also essential for perceiving touch.

They started by scanning people's brains to see what regions were activated when they were trying to decide the orientation of some grating patterns on a touch pad. They found that a part of the brain that's involved in recognising objects by sight was active while people felt the gratings, even though they couldn't see them. "What excited us was what our subjects told us," says Krish Sathian, a lead member of the team. "When they were doing the tactile task, they were actually visualising in their mind's eye the orientation of the grating."

Did visual imagery just provide a convenient aid, or was it essential to the task? To find out, they used the TMS technique to disrupt the activity in the "visual" region the volunteers had been using. Suddenly, their volunteers could no longer tell the direction of the pattern. The researchers concluded last year in the journal Nature (vol 401, p 587) that the "visual" cortex is closely involved in certain tactile tasks. They claimed it was the first time that visual processing was shown to be instrumental in ordinary tactile perception. But Sathian admits that the activated region may not really be visual at all. It could be a part of the brain that helps us visualise what's being touched. "We certainly can't rule out that what we're seeing is multimodal processing in an area previously thought to be just visual," he says.

Pascual-Leone's bold interpretation, that the brain is organised by task rather than by individual sense, is by no means the accepted one. Even most scientists who study multisensory processing consider it extreme. "At least some areas are exclusively unisensory," says Sathian. There's very clearly a primary visual cortex with strong inputs from the eye, he says, and a primary somatosensory cortex getting information from the body. But that's not to say that the map of the brain is static, far from it. New multisensory areas are being found all the time. "The boundaries are being pushed back," says Sathian, "just not pushed back all the way."

Those boundaries were seriously tested by an experiment that involved "rewiring" the brains of ferrets. The findings called into question the well-guarded notion that certain brain areas can only dedicate themselves to certain tasks. They suggest that, although the brain may tend to develop in a particular way, with vision processed at the back of the head and hearing on the sides, it doesn't have to be that way.

A group at MIT wanted to know how much they could override innate developmental pathways. "If we put the retina into the auditory cortex, will it see?" asks Sarah Pallas, a member of the team, now at Georgia State University in Atlanta. The researchers surgically rearranged one brain hemisphere in a handful of newborn ferrets, so that the nerves from the retina, which normally go to the visual thalamus and then on to the visual cortex, now connected to the auditory thalamus and eventually to the auditory cortex.

To their surprise, they found that the auditory cortex on the rewired side arranged itself like a visual cortex: the cells showed selectivity for orientation and motion, and they encoded a two-dimensional map of visual space. The rewired animals also seemed to behave perfectly normally. Using only the untouched hemisphere, the researchers trained the animals to go to a food spout on one side of a test room if they heard a sound and one on the other if they saw a light. Amazingly, even after the visual cortex on the healthy side was completely destroyed, the animals found their way to the food.

"We were able to turn the auditory cortex into a visual cortex," says Pallas. "Maybe they couldn't recognise their grandmother with that, but they certainly could detect light." In fact, the young ferrets seemed so normal that the researchers had to mark them to tell them apart from their siblings. The experiment revealed just how multimodal the brain may be. The amazing rewired auditory cortex was not only seeing: it was hearing at the same time, Pallas told a meeting of multisensory scientists in New York last autumn. Though the finding has not yet been published, she said that preliminary testing showed that the rewired auditory cortex was responding well to sound.

What's more, the study shows that what goes into the brain can have a lot of influence on how it's ultimately organised. Although some parts of the brain may be predisposed to become one thing or another, the rewiring shows they aren't predetermined. "Sensory inputs can influence the regional identity of the cortex," says Pallas.

But how far does this go? We can fairly assume that people deprived of sight early on will have their brains wired up differently from people who see. But what about someone who has been nearsighted since birth: could that person have quite a different brain from someone who's experienced the world through sharper eyes? Is someone born into the high rises of Hong Kong wired up differently from a person growing up in the Gobi desert? Pascual-Leone thinks that, both at the functional and the anatomical level, our brains are quite unique. "Blind people are not experiencing the world like a sighted person with eyes closed," he says, "but rather, they have a dramatically different world representation and hence consciousness." Indeed, maybe each of us has our own very personal take on the world, sensed by our own unique brain. Alas, we only know how it feels to be ourselves, so it's impossible to know. And we can't ask those ferrets whether they were really seeing, or somehow hearing the light. It makes you wonder all over again about bananas: is the divine yellow fruit the very same to you as it is to me? Probably not.

Six months and counting: just how should we prepare for El Niño's arrival?

El Niño is back. Well, probably. The see-saw in weather conditions across the Pacific Ocean is tipping once again, and its baleful effects could be upon us within six months. That's the verdict of most of the climate modellers who have been number crunching temperature data from the middle of the Pacific.

Four years ago, the "El Niño of the century" wreaked havoc worth $32 billion around the world, to homes, farming, forestry and tourism. Many governments are still bruised. This time, forewarned by timely predictions, they are likely to be thinking up plans to counter the threat. Fatalism in the face of the weather may be a thing of the past, but there's a danger that we'll replace it with something worse.

The stakes are high. Indonesia won't want to repeat the burning of Borneo's rainforests, which, having been dried by El Niño, choked its neighbours in smoke. And as President Robert Mugabe campaigns for re-election in Zimbabwe, he won't want the drought that often coincides with El Niño to turn existing food shortages into famine. The UN, in a report last year, urged governments to "prepare now for the next El Niño". But prepare how, exactly?

The certainties are few. For a start, El Niño forecasting is still a very tentative affair. Not all climate modellers working on the case are confident that El Niño will arrive as predicted in six months' time. Those that are say that it will be weaker than before. But that could be wrong, too. Six months before the last El Niño, few of the models predicted that it would be anything out of the ordinary (New Scientist, 31 May 1997, p 6).

El Niño's impact isn't always predictable either. It's supposed to bring drought to East Africa, but try telling that to the Kenyans. At the height of its last appearance, the country was hit by massive rains in the middle of what should have been the dry season. Widespread floods brought Rift Valley fever to cattle herders and cholera to the capital.

And on the other side of the world, in Costa Rica, the environmentally savvy government advised moving large herds of cattle from the area normally hit by drought. Sadly, the rains failed right where the hapless animals had been moved to. Cattle died by the thousand.

The danger, now that governments have got wind of the importance of El Niño, is that they look at the forecasters' predictions and then take the wrong action. For scientists, it may be a triumph to be able to announce a probability of, say, 70 per cent that El Niño will arrive in six months. But for governments it creates serious problems. You can look pretty stupid, and waste a lot of money, acting to forestall an act of God that never shows.

Sure, climatologists are making excellent progress. They are getting to grips not only with the dynamics of El Niño, but also with the vagaries of the Asian monsoon, the North Atlantic Oscillation and other features of the global climate which are slowly emerging from the chaos of our weather. But there is much to learn before the dream of worldwide long-range weather forecasting becomes a reality.

Forecasters have long associated El Niño years with drought in India. But that link also seems to have broken down. Why? Is it, as some have suggested, caused by a recent flip in the North Atlantic Oscillation, which has strengthened the jet stream over Asia? We need to know, too, what causes East Africa's hit-and-miss relationship with El Niño. Has it got something to do with precisely when the phenomenon kicks in? Or perhaps how strong it is? It's a confusing picture that global warming may already be complicating.

There are some things that prudent governments certainly should be doing. Unblocking the rubbish-clogged drains of Nairobi and putting the city's cholera teams on alert makes good sense, as do refilling the grain stores of Zimbabwe and banning people from setting fires in the rainforests of Borneo.

But the overriding lesson may be that while vulnerable countries should prepare for the worst, they shouldn't bank on it happening. Last year the UN bemoaned "a general lack of belief among potential forecast users around the world in the reliability of [El Niño] forecasts". Maybe that scepticism isn't so bad. Like stereotypical Englishmen, countries should learn to carry umbrellas when the forecast even hints at rain. Just don't open them until the rain starts to fall.

Glimmer from the Milky Way's heart of darkness

IT'S official: our Galaxy really does have a supermassive black hole at its centre. How do we know? By the sight of a flare of gas near it, the closest anyone's ever come to imaging the edge of a supermassive black hole.

Last year, researchers used the Hubble Space Telescope to look at stars close to the centre of the Milky Way. Their high speeds suggested they were orbiting a huge mass, 2.6 million times the mass of the Sun. Even so, astronomers couldn't be certain it was a black hole. "There's always been that nagging doubt," says Fulvio Melia, a black hole expert at the University of Arizona.

They couldn't be sure because the stars were still a long way from the Galaxy's centre. The stars were 30,000 times the distance from the centre to the supposed black hole's "event horizon", the limit beyond which nothing, not even light, can escape. If the object the stars were circling was as big as the volume they enclosed, it wouldn't be dense enough to be a black hole.

But an international team has used Chandra, NASA's orbiting X-ray observatory, to capture the image of a faint X-ray flare only 20 times the distance from the centre to the event horizon. "This is the closest to a supermassive black hole we've ever seen," says team member Fred Baganoff of the Massachusetts Institute of Technology.

The team knew how far away the flare was from the centre of the black hole because it flickered over a period of about 10 minutes. A celestial body cannot change any faster than the time light takes to travel across it. Light travels about 180 million kilometres in 10 minutes, so the object producing the flare can be no bigger than this, a little more than the distance between Earth and the Sun. This flare is at a point in space and time that must be more warped by the gravity of a black hole than anything previously seen. Eugenie Samuel, Boston. More at: Nature (vol 413, p 45)
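The size limit is simple arithmetic: multiply the speed of light by the flicker timescale. Run through exactly, ten light-minutes comes out nearer 180 million kilometres, a little more than the Earth-Sun distance. A rough sanity check of the figures:

```python
# Light-travel-time size limit: a source flickering on a ~10-minute
# timescale can be no larger than the distance light covers in 10 minutes.
C_KM_PER_S = 299_792          # speed of light in km/s
FLICKER_S = 10 * 60           # flicker timescale in seconds
AU_KM = 149_600_000           # Earth-Sun distance in km

max_size_km = C_KM_PER_S * FLICKER_S
print(f"Maximum source size: {max_size_km / 1e6:.0f} million km")   # 180 million km
print(f"In Earth-Sun distances: {max_size_km / AU_KM:.1f}")         # about 1.2
```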

Line 'em up: if you want to build a quantum computer, this could be where to start

A SIMPLE row of phosphorus atoms embedded in a sliver of ordinary silicon could help create a hugely powerful quantum computer. "Many people thought that the placement of single phosphorus atoms could be a show-stopper," says Jeremy O'Brien from the Centre for Quantum Computer Technology at the University of New South Wales in Sydney. "But we have demonstrated that it is possible."

Conventional digital computers shuffle around bits of information that are either in an "on" or an "off" state. Quantum computers will exploit the fact that a quantum bit, or "qubit", can exist in an infinity of states between on and off to achieve undreamed-of computing speeds.
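The continuum of states is easiest to see in the standard textbook picture of a qubit (an abstract sketch, not specific to any hardware described here): a pair of complex amplitudes whose squared magnitudes give the odds of reading out "on" or "off".

```python
import math

# A qubit state is a pair of complex amplitudes (a, b) with
# |a|^2 + |b|^2 = 1. Measurement yields "off" (0) with probability
# |a|^2 and "on" (1) with probability |b|^2, so the state can sit
# anywhere on a continuum between the two classical values.

def probabilities(a, b):
    """Readout probabilities for the state a|0> + b|1>, normalising first."""
    norm = abs(a) ** 2 + abs(b) ** 2
    return abs(a) ** 2 / norm, abs(b) ** 2 / norm

# An equal superposition: halfway between "on" and "off".
p0, p1 = probabilities(1 / math.sqrt(2), 1 / math.sqrt(2))
print(p0, p1)  # each is 0.5 (up to floating-point rounding)
```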

Individual atoms are the obvious candidates for qubits. The problem is to get them into a state where they can be both controlled and protected from outside interference. Physicists have coaxed a few atoms into exotic states that fit the bill, but they didn't make much of a computer.

In 1998, Bruce Kane of the University of Maryland proposed a new scheme: embedding phosphorus atoms inside a silicon crystal (New Scientist, 24 June 2000, p 36). His architecture could incorporate many qubits, with the added advantage that it would be easy to build such a device into conventional microchip circuits.

Now O'Brien and his team have got partway to creating it, with a row of phosphorus atoms, spaced just nanometres apart, in a pure silicon crystal surface, as they explain in a forthcoming issue of Physical Review B. The team started with a clean, atomically flat silicon surface in an ultra-high vacuum to stop the silicon atoms combining with oxygen. Then they covered the surface with a layer of hydrogen atoms.

Using the superfine tungsten tip of a scanning tunnelling microscope, they plucked out single hydrogen atoms where they wanted the qubits to be. To get phosphorus into the holes, the researchers exposed the surface to phosphine gas (PH3). One phosphine molecule bonded with each exposed silicon atom. "The idea is that the phosphorus atoms replace silicon atoms in a silicon crystal," says O'Brien.

The team must now work out a way of growing more silicon over the phosphorus atoms, to enclose them within a crystal. "The prospects for achieving this next step look very promising," O'Brien says.

But "the hurdle is not completely passed", warns David DiVincenzo, a quantum computing expert from IBM's T. J. Watson Research Center in Yorktown Heights, New York. "And it is only one of many hurdles on the way to the quantum computer." Melanie Cooper, Melbourne

Dose of clover

WHILE consumers and supermarkets remain wary of genetically modified food, a team of researchers is suggesting feeding cattle GM fodder designed to keep them healthy.

Every year, cattle farmers in North America lose billions of dollars when their animals get "shipping fever", a pneumonia-like illness often triggered by the stress of being transported. But injecting conventional vaccines is expensive, and also causes stress.

So Raymond Lee and his team at the University of Guelph in Canada are developing an edible vaccine. They took the gene for leukotoxin, a major protein in the bacterium Mannheimia haemolytica, which causes the disease, stripped it of its toxic elements, and inserted the rest of the gene into the genome of white clover, a favourite for cattle. As the researchers hoped, the modified clover made leukotoxin, and when they injected the protein into rabbits the animals produced antibodies to it. "It can still produce an immune response," says Lee. The antibodies neutralised the original toxin.

But a commercially available edible vaccine is still years away. Lee's team have yet to confirm whether their clover can prevent the disease in cattle and, if so, how much is needed. "We don't know yet how much to feed them," admits Lee. This winter the team will start giving the modified clover to cattle to find out.

The researchers say the genetically modified fodder will not affect cows differently from the inoculations they get now. "You're still injecting bits of bacterial protein," says Lee.

But Isabelle Meister of Greenpeace says the GM approach is still a worry. "What if non-target animals start eating it?" she says. She also points out how easily genetically modified StarLink corn slipped into the human diet. Alison Motluk More at: Infection and Immunity (vol 69, p 5786)

Plants fight off the ravages of the ozone hole

ANTARCTICA'S native flora is doing better than expected in the face of the growing ozone hole over the continent. Rather than being killed off by scorching ultraviolet light, many plants seem able to repair any damage almost overnight. Contrary to expectations, the latest evidence suggests that the high levels of UV light passing through the ozone hole are having little effect on photosynthesis, researchers told the conference last week. Daniela Lud and her team at the Netherlands Institute of Ecology in Yerseke found small amounts of DNA damage in samples of UV-irradiated plants. However, they appeared to be able to mend any damage within a day.

The plants protect themselves by quickly producing a natural sunscreen, researchers believe. Kevin Newsham of the British Antarctic Survey found that within 24 hours of a strong UV blast, Antarctic mosses and liverworts increased production of sunscreen pigments and carotenoids. These substances block out UV and mop up harmful oxygen radicals produced by the light.

Deneb Karentz, a biologist at the University of San Francisco, says that the plants are adapted to their new situation. The ozone hole has been developing for nearly 25 years, she explains. "Any organisms that couldn't survive the increase in UV during the first few years are not around now." Although the ozone hole presents an extra challenge to Antarctic plants, Karentz adds, it's important to remember that life originally evolved without the protection of an ozone blanket. "UV protection and repair processes are very old mechanisms that came about in times of high UV."

But life in the ocean seems to be faring less well. Microbes in the Antarctic seas are more sensitive to UV than their land-based neighbours, says Patrick Neale from the Smithsonian Environmental Research Center near Washington DC. He has found evidence that increased UV light is causing photosynthesis rates in marine phytoplankton to fall.

Neale believes the difference can be explained by the love-hate relationship that photosynthetic organisms have with the Sun. Antarctic plants live under high-intensity light in the summer, so they have developed aggressive UV defences. Microbes in the sea have the opposite problem. There is little light in the ocean depths, so making chlorophyll to harvest the Sun's energy is more important than producing sunscreens. When water movement forces phytoplankton to the sea surface, they can be damaged by high UV.

Fusion Boost

THE decades-long effort to build a nuclear fusion reactor has received a major boost. In experiments at the US National Fusion Facility in San Diego, researchers have quadrupled the rate of fusion in superhot deuterium gas. Fusion reactors aim to reproduce the Sun's power source, but the problem is containing the hot plasma. The San Diego team achieved more stable containment and higher pressure by carefully manipulating the magnetic fields that control the spinning plasma. This brings us a step closer to a commercial reactor that could provide enormous amounts of energy with hardly any pollution or waste.

The team, which includes researchers from Columbia and Princeton universities, as well as General Atomics of San Diego and others, is using DIII-D, a tokamak reactor whose heart is a doughnut-shaped cavity 4.5 metres in diameter. Inside the cavity a plasma of deuterium is heated to 100 million kelvin and held in place with powerful magnetic fields. Deuterium is a heavy isotope of hydrogen, and when its nuclei collide under this intense pressure, some of them fuse to form helium, releasing large amounts of energy.

The goal of fusion research is a reactor that produces much more energy than the large amounts needed to run it. The experimental tokamaks that exist around the world, such as the Joint European Torus (JET) reactor at Culham near Oxford, have to date not progressed far beyond the break-even point. Early theoretical and experimental results suggested that there is a limit to how pressurised the plasma can be before it begins to bulge unpredictably. But then other work in the early 1990s suggested that you could solve the problem by spinning the plasma around the cavity as if it were a racetrack.

Experiments at Columbia and DIII-D had shown it was easy to set the plasma spinning, but it tended to slow down and become unstable again. Now the DIII-D team has found out why: the plasma was magnifying tiny imperfections in the magnetic field that contained it.

So they fitted sensors to detect the imperfections, some as weak as the Earth's magnetic field, and then corrected them with arrays of magnets in the cavity controlled via feedback loops. "It takes very little power because the errors are about one part in a thousand," says Ronald Stambaugh of General Atomics. The team found that the spinning plasma did not slow down, and they could ramp up the pressure to twice the previous limit, quadrupling the rate of fusion. Rob Goldston, who worked on Princeton's tokamak until it was closed in 1997, says he is very excited by the result. "This is a very deep insight into the behaviour of stable plasmas." DIII-D is only about one-eighth the size you'd need for a commercial reactor, and such a reactor would have to run on a mixture of deuterium and its heavier sibling tritium.
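The "twice the pressure, four times the fusion" arithmetic follows from a back-of-envelope scaling argument, assuming the plasma temperature stays fixed: fusion needs pairs of nuclei to collide, so the reaction rate goes as the square of the particle density, and at fixed temperature pressure is proportional to density.

```python
# Back-of-envelope scaling (fixed temperature assumed): pressure ~ density,
# and the fusion rate ~ density^2, since it counts collisions between
# pairs of nuclei. So scaling the pressure by k scales the rate by k^2.

def relative_fusion_rate(pressure_ratio):
    """Fusion rate relative to baseline when pressure is scaled, at fixed T."""
    return pressure_ratio ** 2

print(relative_fusion_rate(2.0))  # doubling the pressure -> 4.0x the rate
```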

Despite these differences, Stambaugh believes the principle will work in a commercial model. "We're doing this research with the belief that the physics will transfer," he says. Researchers in Europe, Japan, Russia and Canada are now lobbying governments to fund a prototype called ITER that would produce power (New Scientist, 14 October 2000, p 4). Michael Watkins of JET says that the work done at Culham, combined with the DIII-D team's method, should have real benefits for the ITER project. "Tokamak research is in a very strong position now," he says. Eugenie Samuel, Boston

What's the matter? The mirror world of antimatter isn't such a perfect reflection after all

MATTER and antimatter are not exact mirror images of each other, say scientists at the Stanford Linear Accelerator Center in California. The result will help physicists solve the mystery of why our Universe seems to contain more matter than antimatter.

Physicists believe that after the big bang, matter and antimatter were created in equal proportions. But today the Universe seems to contain mainly matter, so most of the antimatter must have disappeared some time before "normal" matter-the neutrons, protons and electrons we see now-was formed. For nature to have favoured matter over antimatter in this way, they must have slightly different properties.

In 1964, physicists first spotted a difference in the decay rates of a subatomic particle called a K meson and those of its antimatter partner. To be certain, researchers wanted to see if other particles exhibited the same phenomenon, known as charge-parity (CP) violation. The most promising candidate was a particle called a B meson, and special particle accelerators-called B factories-were built to produce pairs of Bs and anti-Bs.

Since it started work in June 1999, the B factory at Stanford has produced 32 million such pairs. The huge Babar detector that measures their decay rates produces a number known as sin2β. If matter and antimatter are exact mirror images, sin2β should be zero. Charge-parity violation, as predicted by the standard model of particle physics, should produce a value of 0.7.

In February this year, preliminary results from the Babar team produced a figure slap in the middle, with large enough uncertainty to agree with either result (New Scientist, 17 February, p 9). But the Babar team plugged on, trying to gather enough data to reduce the uncertainty. To prevent any subconscious bias from creeping into the calculation, the latest value of sin2β produced by Babar is kept concealed from the scientists by the computer.

Two weeks ago, the computer disclosed the new result: 0.59. "When it came up, we said OK, that's it, we've gotta go with it," says Stewart Smith of Princeton University, spokesman for the Babar team. This figure has an uncertainty of 0.14, making it consistent with charge-parity violation. "It's fair to say there's no inconsistency," says Smith.
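As a back-of-envelope check (illustrative arithmetic only, not the Babar collaboration's own statistical analysis), the quoted central value and uncertainty can be compared against both hypotheses:

```python
# Rough significance check on the quoted Babar sin2beta figure.
# Illustrative arithmetic only, not the collaboration's analysis.

measured = 0.59      # reported central value of sin2beta
uncertainty = 0.14   # reported uncertainty

def sigmas_from(hypothesis):
    """Distance from a hypothesised value, in units of the uncertainty."""
    return abs(measured - hypothesis) / uncertainty

no_violation = sigmas_from(0.0)    # mirror-image hypothesis: sin2beta = 0
standard_model = sigmas_from(0.7)  # standard-model prediction

print(f"Distance from no CP violation: {no_violation:.1f} sigma")
print(f"Distance from standard model:  {standard_model:.1f} sigma")
```

On these numbers the result sits roughly four standard deviations from zero but well under one from the standard-model prediction, which is why the team can call it consistent with charge-parity violation.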

A rival B factory at the High Energy Accelerator Research Organization (KEK) in Tsukuba, Japan, also produced an inconclusive result earlier this year. It is due to report its latest results in the next few weeks.

Ken Peach of the Rutherford Appleton Laboratory in Oxford, who has worked for many years on CP violation in K mesons, says that the Babar results are an exciting confirmation of the standard model. "This is certainly a huge step in our confidence in the model." He also says it is possible that whatever created the large imbalance between matter and antimatter in the Universe produced this smaller effect in the standard model. "We just don't know how they're related yet," he says. Eugenie Samuel, Boston

Two's a crowd IVF brings joy to thousands, but it's creating multiple headaches

THE developed world is facing a disastrous "epidemic" of twin and triplet births, scientists warned at a conference last week. They are calling for radical changes to fertility treatments to prevent a huge increase in problem pregnancies and birth defects.

Increased use of IVF is one reason for the rise in multiple births. "The incidence of multiple pregnancy after IVF in Britain is about 25 per cent. That is a real concern," says fertility expert Robert Winston of Imperial College, London. "The pressures both upon the clinics and on the patients to go as close to the limits as possible are still undeniably there."

Multiple births often occur after IVF because doctors transplant more than one embryo to make a successful pregnancy more likely. Another problem is that the drugs used to induce ovulation often make ovaries release several eggs at once. Winston says these drugs are handed out too freely. "Sadly, they are not regulated in many countries," he says.

Multiple pregnancies are a problem for health services because they're plagued by complications. Babies are often premature, underweight and need expensive intensive treatment, and their mothers need more prenatal and postnatal care. Multiple births can also lead to neurological disorders, with triplets being 20 times more likely than singletons to have cerebral palsy.

The epidemic of multiple births arising from fertility treatments has escalated swiftly, experts warn. Between 1980 and 1997, the twin birth rate in the US increased by 42 per cent, according to Laura Schieve of the Centers for Disease Control and Prevention in Atlanta, Georgia. Triplet and higher multiple births increased by 370 per cent. "In the US, as elsewhere, there's been an alarming rise in multiple births," Schieve told the meeting.

If current trends for triplet births continue, says Jaroslaw Oleszczuk of the Polish Mother's Memorial Hospital in Łódź, almost a third of all people born will be a triplet in some countries within a decade or so. "If we don't change anything, the rates will increase exponentially as they have done over the past 20 years," he says.
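The quoted 1980-1997 increases imply steady compound annual growth. A quick sketch of that arithmetic (illustrative only, not Schieve's analysis):

```python
# Convert the total US increases quoted above (1980-1997) into
# implied compound annual growth rates. Illustrative arithmetic only.

years = 1997 - 1980  # 17 years

def annual_rate(total_increase_percent):
    """Compound annual rate implied by a total percentage increase."""
    factor = 1 + total_increase_percent / 100
    return factor ** (1 / years) - 1

twins = annual_rate(42)      # twin birth rate rose 42 per cent
triplets = annual_rate(370)  # triplet and higher births rose 370 per cent

print(f"Twins:    {twins:.1%} per year")
print(f"Triplets: {triplets:.1%} per year")
```

Triplet births compounding at nearly 10 per cent a year is what makes the "exponential" warning above more than rhetoric.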

"A third of all people born will be a triplet in some countries within a decade"

In the US, for instance, triplet births could rise to around 350,000 each year. "The figure for the healthcare costs is into billions and billions of dollars a year," Oleszczuk says. "We're not even counting the psychosocial costs for the families and for the triplets themselves, who have to deal with these problems."

Some countries have regulations that limit the number of embryos that can be transferred in IVF, usually to just three.

British authorities are considering reducing that to two. But Brian Lieberman of Manchester Fertility Services, a private clinic, thinks that doesn't go far enough: "None of these strategies will suffice except for the replacement of a single embryo." Advances in IVF technology could make this a viable option. Embryos are usually transferred to the womb after 3 days, but new techniques allow them to develop for 5 days into a ball of cells called a blastocyst. Transferring a viable blastocyst gives much higher success rates (New Scientist, 17 October 1998, p 22).

Isaac Blickstein of the Kaplan Medical Center in Rehovot, Israel, says we should ditch ovulation-inducing drugs altogether. "Maybe in the near future they will be unethical," he says. He recommends that doctors switch over to IVF instead.

If these policies were put in place, Oleszczuk says, rates of triplet births could stabilise at natural levels within a few years.

Time stands still Simple solutions can transform lives, so what are we waiting for?

IN A world where 2 billion people live in homes that don't have light bulbs, technology holds the key to banishing poverty, says the United Nations in a major report published this week. But rich nations and multinational corporations need to do a lot more to put technology into the hands of the world's poorest people.

Even the simplest technologies can transform lives and save money. Vaccines, crops, computers and sources of solar energy (see Table) can all reduce poverty in developing countries. For example, cheap oral-rehydration therapy developed in Bangladesh has dramatically cut the death toll from childhood diarrhoea. But there has been a "market failure to meet the needs of the poor", says lead author Sakiko Fukuda-Parr. "There's no global framework for supporting research and development that addresses the common needs of poor people," she says.

Multinationals must become part of the solution, because they own around 60 per cent of the world's technology. But they seldom make products for poor customers. Of 1223 new drugs marketed worldwide from 1975 to 1996, for example, just 13 were for tropical diseases. "It's the big corporations that own the technology that really should read this report," says Fukuda-Parr. "We're asking them to be more socially responsible." They could do more to provide vital products such as medicines at different prices around the world to suit what people can afford (New Scientist, 7 July, p 6). Or pledge a percentage of their profit towards research and development for the poor.

Governments from rich countries should pay more too. They and other sources such as the World Bank and international institutes could provide as much as $10 billion. Developing countries should also make better use of intellectual property laws that entitle them to vital medicines, just as South Africa did recently with AIDS drugs.

Critics of the report say it doesn't take poor people's views into account. "You have to ask: is it affordable to people who earn less than a dollar a day? Is it accessible to them? Can it be managed by local people?" says Lucja Wisniewska of the British-based charity Intermediate Technology Development Group.

Controversially, the report backs genetically modified crops despite the widespread opposition to them among Western environmentalists and non-governmental organisations. "To reject it entirely is forgoing a huge opportunity," says Fukuda-Parr. "If it's so good for multinationals, why shouldn't it be used by poor farmers?" she says.

Computers could also revolutionise the lives of poor people, allowing them to tap into a global wealth of free information that could help solve local problems. But they'd need to be cheap and wireless. Fukuda-Parr says that Brazil and India have already developed cheap computers, proving that countries can do it for themselves.
But the objectives will be difficult to achieve. Time has stood still in sub-Saharan Africa, where there has been no increase in tractor use for a decade. Andy Coghlan

The heat is on Pressure mounts on global climate deal as hopes for forests fade

TALKS to salvage the Kyoto Protocol could be undermined before they even start by research suggesting that planting forests to curb global warming could backfire.

The world's nations are due to meet in Bonn next week to thrash out ways to combat climate change. The protocol gives governments the option to plant trees to soak up carbon dioxide, rather than cutting emissions of the greenhouse gas.

But this provision is deeply flawed, warns Richard Betts of Britain's Meteorological Office. He says it doesn't take into account other ways that new forests can affect climate. "Carbon accounting alone will overestimate the contribution of afforestation to reducing climate warming," he told New Scientist. This week, Betts presented the first detailed calculations showing that planting trees across the snow-covered swathes of Siberia and North America will heat the planet rather than cool it. And even away from the tundra, the cooling potential of forests is much less than previously supposed, he told a climate conference in Amsterdam.

His findings may further undermine support for the Kyoto Protocol. Several industrialised countries are wavering following the withdrawal of the US from the proposed treaty earlier this year.

Green forest canopies reflect much less solar radiation than most other land surfaces. They also absorb more, heating the Earth's surface. This effect is greatest where forests replace snowy tundra, which normally reflects large amounts of solar radiation.

Betts calculates that at northern latitudes, warming as a result of planting forests will overwhelm any cooling effect due to the trees soaking up CO2.

Both Canada and Russia want to plant forests in their empty tundras to help meet their Kyoto commitments, because a hectare of immature forest can absorb more than 100 tonnes of carbon each year, despite growing slowly. But Betts calculates that the net warming effect of heat-absorbent forests in both regions is equivalent to an annual emission of 75 tonnes of carbon per hectare.
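Putting Betts's two per-hectare figures side by side shows how much of the carbon benefit the albedo effect eats up (simple bookkeeping on the quoted numbers, not his climate model):

```python
# Net climate bookkeeping for tundra afforestation, using the
# per-hectare figures quoted above. Illustrative only, not Betts's model.

carbon_absorbed = 100  # tonnes of carbon absorbed per hectare per year (cooling)
albedo_warming = 75    # warming equivalent, tonnes of carbon per hectare per year

net_benefit = carbon_absorbed - albedo_warming
fraction_offset = albedo_warming / carbon_absorbed

print(f"Net cooling equivalent: {net_benefit} tC per hectare per year")
print(f"Share of the carbon benefit cancelled: {fraction_offset:.0%}")
```

On these figures, three-quarters of the apparent carbon gain from a tundra plantation is cancelled by the darker canopy before any cooling is banked.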

His new calculations also halve estimates for the carbon sink potential of western European forests. "Even in places where the cooling effect is still dominant," says Betts, "the cooling influence is generally much smaller than expected when considering carbon sequestration alone."

So should some countries be destroying forests instead? "I am not suggesting that we deforest," says Betts. "But afforestation is not always an effective alternative to cutting fossil fuel emissions." Fred Pearce

Deutsch's Multiverses

Parallel universes are no longer a figment of our imagination. They're so real that we can reach out and touch them, and even use them to change our world, says Marcus Chown

FLICKING through New Scientist, you stop at this page, think "that's interesting" and read these words. Another you thinks "what nonsense", and moves on. Yet another lets out a cry, keels over and dies.

Is this an insane vision? Not according to David Deutsch of the University of Oxford. Deutsch believes that our Universe is part of the multiverse, a domain of parallel universes that comprises ultimate reality.

Until now, the multiverse was a hazy, ill-defined concept-little more than a philosophical trick. But in a paper yet to be published, Deutsch has worked out the structure of the multiverse. With it, he claims, he has answered the last criticism of the sceptics. "For 70 years physicists have been hiding from it, but they can hide no longer." If he's right, the multiverse is no trick. It is real. So real that we can mould the fate of the universes and exploit them.

Why believe in something so extraordinary? Because it can explain one of the greatest mysteries of modern science: why the world of atoms behaves so very differently from the everyday world of trees and tables.

The theory that describes atoms and their constituents is quantum mechanics. It is hugely successful. It has led to computers, lasers and nuclear reactors, and it tells us why the Sun shines and why the ground beneath our feet is solid. But quantum theory also tells us something very disturbing about atoms and their like: they can be in many places at once. This isn't just a crazy theory-it has observable consequences (see "Interfering with the multiverse", p 29).

But how is it that atoms can be in many places at once whereas big things made out of atoms-tables, trees and pencils-apparently cannot? Reconciling the difference between the microscopic and the macroscopic is the central problem in quantum theory.

The many worlds interpretation is one way to do it. This idea was proposed by Princeton graduate student Hugh Everett III in 1957. According to many worlds, quantum theory doesn't just apply to atoms, says Deutsch. "The world of tables is exactly the same as the world of atoms."

But surely this means tables can be in many places at once. Right. But nobody has ever seen such a schizophrenic table. So what gives?

The idea is that if you observe a table that is in two places at once, there are also two versions of you-one that sees the table in one place and one that sees it in another place.

The consequences are remarkable. A universe must exist for every physical possibility. There are Earths where the Nazis prevailed in the Second World War, where Marilyn Monroe married Einstein, and where the dinosaurs survived and evolved into intelligent beings who read New Scientist.

However, many worlds is not the only interpretation of quantum theory. Physicists can choose between half a dozen interpretations, all of which predict identical outcomes for all conceivable experiments. Deutsch dismisses them all. "Some are gibberish, like the Copenhagen interpretation," he says-and the rest are just variations on the many worlds theme.

For example, according to the Copenhagen interpretation, the act of observing is crucial. Observation forces an atom to make up its mind, and plump for being in only one place out of all the possible places it could be. But the Copenhagen interpretation is itself open to interpretation. What constitutes an observation? For some people, this only requires a large-scale object such as a particle detector. For others it means an interaction with some kind of conscious being. Worse still, says Deutsch, is that in this type of interpretation you have to abandon the idea of reality. Before observation, the atom doesn't have a real position. To Deutsch, the whole thing is mysticism: throwing up our hands and saying there are some things we are not allowed to ask.

Some interpretations do try to give the microscopic world reality, but they are all disguised versions of the many worlds idea, says Deutsch. "Their proponents have fallen over backwards to talk about the many worlds in a way that makes it appear as if they are not." In this category, Deutsch includes David Bohm's "pilot-wave" interpretation. Bohm's idea is that a quantum wave guides particles along their trajectories. Then the strange shape of the pilot wave can be used to explain all the odd quantum behaviours, such as interference patterns. In effect, says Deutsch, Bohm's single universe occupies one groove in an immensely complicated multi-dimensional wave function.

"The question that pilot-wave theorists must address is: what are the unoccupied grooves?" says Deutsch. "It is no good saying they are merely theoretical and do not exist physically, for they continually jostle each other and the occupied groove, affecting its trajectory. What's really being talked about here is parallel universes. Pilot-wave theories are parallel-universe theories in a state of chronic denial."

Back and forth

Another disguised many worlds theory, says Deutsch, is John Cramer's "transactional" interpretation in which information passes backwards and forwards through time. When you measure the position of an atom, it sends a message back to its earlier self to change its trajectory accordingly.

But as the system gets more complicated, the number of messages explodes. Soon, says Deutsch, it becomes vastly greater than the number of particles in the Universe. The full quantum evolution of a system as big as the Universe consists of an exponentially large number of classical processes, each of which contains the information to describe a whole universe. So Cramer's idea forces the multiverse on you, says Deutsch.

So do other interpretations, according to Deutsch. "Quantum theory leaves no doubt that other universes exist in exactly the same sense that the single Universe that we see exists," he says. "This is not a matter of interpretation. It is a logical consequence of quantum theory." Yet many physicists still refuse to accept the multiverse. "People say the many worlds is simply too crazy, too wasteful, too mindblowing," says Deutsch. "But this is an emotional not a scientific reaction. We have to take what nature gives us."

A much more legitimate objection is that many worlds is vague and has no firm mathematical basis. Proponents talk of a multiverse that is like a stack of parallel universes. The critics point out that it cannot be that simple-quantum phenomena occur precisely because the universes interact. "What is needed is a precise mathematical model of the multiverse," says Deutsch. And now he's made one.

The key to Deutsch's model sounds peculiar. He treats the multiverse as if it were a quantum computer. Quantum computers exploit the strangeness of quantum systems-their ability to be in many states at once-to do certain kinds of calculation at ludicrously high speed. For example, they could quickly search huge databases that would take an ordinary computer the lifetime of the Universe. Although the hardware is still at a very basic stage, the theory of how quantum computers process information is well advanced. In 1985, Deutsch proved that such a machine can simulate any conceivable quantum system, and that includes the Universe itself.

So to work out the basic structure of the multiverse, all you need to do is analyse a general quantum calculation. "The set of all programs that can be run on a quantum computer includes programs that would simulate the multiverse," says Deutsch. "So we don't have to include any details of stars and galaxies in the real Universe, we can just analyse quantum computers and look at how information flows inside them."

If information could flow freely from one part of the multiverse to another, we'd live in a chaotic world where all possibilities would overlap. We really would see two tables at once, and worse, everything imaginable would be happening everywhere at the same time.

Deutsch found that, almost all the time, information flows only within small pieces of the quantum calculation, and not in between those pieces. These pieces, he says, are separate universes. They feel separate and autonomous because all the information we receive through our senses has come from within one universe. As Oxford philosopher Michael Lockwood put it, "We cannot look sideways, through the multiverse, any more than we can look into the future."

Sometimes universes in Deutsch's model peel apart only locally and fleetingly, and then slap back together again. This is the cause of quantum interference, which is at the root of everything from the two-slit experiment to the basic structure of atoms.

Other physicists are still digesting what Deutsch has to say. Anton Zeilinger of the University of Vienna remains unconvinced. "The multiverse interpretation is not the only possible one, and it is not even the simplest," he says. Zeilinger instead uses information theory to come to very different conclusions. He thinks that quantum theory comes from limits on the information we get out of measurements (New Scientist, 17 February, p 26). As in the Copenhagen interpretation, there is no reality to what goes on before the measurement.

But Deutsch insists that his picture is more profound than Zeilinger's. "I hope he'll come round, and realise that the many worlds theory explains where the information in his measurements comes from."

Why are physicists reluctant to accept many worlds? Deutsch blames logical positivism, the idea that science should concern itself only with objects that can be observed. In the early 20th century, some logical positivists even denied the existence of atoms, until the evidence became overwhelming. The evidence for the multiverse, according to Deutsch, is equally overwhelming. "Admittedly, it's indirect," he says. "But then, we can detect pterodactyls and quarks only indirectly too. The evidence that other universes exist is at least as strong as the evidence for pterodactyls or quarks."


Perhaps the sceptics will be convinced by a practical demonstration of the multiverse.

And Deutsch thinks he knows how. By building a quantum computer, he says, we can reach out and mould the multiverse. "One day, a quantum computer will be built which does more simultaneous calculations than there are particles in the Universe," says Deutsch. "Since the Universe as we see it lacks the computational resources to do the calculations, where are they being done?" It can only be in other universes, he says. "Quantum computers share information with huge numbers of versions of themselves throughout the multiverse."

Imagine that you have a quantum PC and you set it a problem. What happens is that a huge number of versions of your PC split off from this Universe into their own separate, local universes, and work on parallel strands of the problem. A split second later, the pocket universes recombine into one, and those strands are pulled together to provide the answer that pops up on your screen. "Quantum computers are the first machines humans have ever built to exploit the multiverse directly," says Deutsch.

At the moment, even the biggest quantum computers can only work their magic on about 6 bits of information, which in Deutsch's view means they exploit copies of themselves in 2^6 universes-that's just 64 of them. Because the computational feats of such computers are puny, people can choose to ignore the multiverse. "But something will happen when the number of parallel calculations becomes very large," says Deutsch. "If the number is 64, people can shut their eyes but if it's 10^64, they will no longer be able to pretend."
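Deutsch's counting is simple powers of two. As a sketch of the arithmetic (the 10^64 machine is, of course, hypothetical):

```python
# Counting Deutsch's "parallel strands": an n-qubit register spans
# 2**n classical states at once. Simple arithmetic, nothing more.
import math

def parallel_strands(qubits):
    """Number of classical states an n-qubit register spans simultaneously."""
    return 2 ** qubits

# Today's machines: about 6 qubits, so 2**6 = 64 strands.
small = parallel_strands(6)

# How many qubits before the strand count passes Deutsch's 10**64 figure?
# Smallest n with 2**n >= 10**64, i.e. n >= 64 / log10(2).
qubits_needed = math.ceil(64 / math.log10(2))

print(f"6 qubits span {small} strands")
print(f"{qubits_needed} qubits would span more than 10**64 strands")
```

The jump from 64 to 10^64 takes only a couple of hundred qubits, which is why Deutsch expects the multiverse question to become unavoidable rather than remain a philosophical taste.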

What would it mean for you and me to know there are inconceivably many yous and mes living out all possible histories? Surely, there is no point in making any choices for the better if all possible outcomes happen? We might as well stay in bed or commit suicide.

Deutsch does not agree. In fact, he thinks it could make real choice possible. In classical physics, he says, there is no such thing as "if'; the future is determined absolutely by the past. So there can be no free will. In the multiverse, however, there are alternatives; the quantum possibilities really happen. Free will might have a sensible definition, Deutsch thinks, because the alternatives don't have to occur within equally large slices of the multiverse. 'By making good choices, doing the right thing, we thicken the stack of universes in which versions of us live reasonable lives," he says. "When you succeed, all the copies of you who made the same decision succeed too. What you do for the better increases the portion of the multiverse where good things happen."

Let's hope that deciding to read this article was the right choice.

Further reading: "The structure of the multiverse" by David Deutsch, quant-ph/0104033; The Fabric of Reality by David Deutsch, Penguin (1997)