Taco trouble NS 7 Oct 2000 Activists demand ban on modified crops that can't be eaten
THE discovery in the US of food contaminated by a genetically modified crop approved only for animal feed could have serious repercussions. Activists are demanding that GM crops shouldn't be approved if they can't safely be eaten by people-a move that could stall efforts to use GM plants for industrial uses such as making drugs, plastics and biofuels. Aventis CropScience of North Carolina last week voluntarily withdrew its StarLink maize after traces were found in taco shells and also agreed to buy back all of this year's crop. Because StarLink is engineered to produce an insecticidal protein called Cry9C that shares some properties with known allergens, the US Environmental Protection Agency (EPA) had approved it only for animal consumption.
'The StarLink case could stall efforts to use modified plants for making drugs, plastics and biofuels'
It is not clear how StarLink got into the food chain. There might have been a mix-up at a mill, a farmer could have passed off the animal feed as corn fit for humans to get a higher price, or StarLink maize may have pollinated other maize growing nearby. The Food and Drug Administration is investigating how controls failed. Friends of the Earth, which triggered the recall by sending maize products to an independent lab for testing, remains doubtful that GM plants can be effectively segregated. "We are absolutely convinced that the FDA has not been diligent enough in protecting our food supply," says Mark Helm of FoE. "The FDA has to do a heck of a lot more pre-market safety testing." "This experience absolutely raises the bar and raises the safety standard," admits an EPA official who did not want his name revealed. "Since the Cry9C protein could have occurred in human food, that raises the bar substantially on approving something just for animal use." However, Dale Adolphe of the Canola Council of Canada in Winnipeg, a farmers' group, is worried about the effect that will have on the development of "plant factories" that could churn out everything from vaccines to plastic polymers. "If StarLink was for the production of biodiesel products, it would never get food approval," he says. "We need to maintain integrity in the food safety system. But I don't know if a prohibition on industrial-use products is required." He points out that industrial varieties of food crops have long been grown. Some rapeseed, or canola, for example, is used to make industrial lubricants and contains high levels of compounds toxic to humans. Newer varieties destined for human consumption have been bred to have lower levels of these toxins. In countries such as Britain and Canada, where both types are grown, there are strict controls to prevent contamination. Nell Boyce, Washington DC
Things aren't looking so bright for dark matter NS 7 Oct 2000
AN ASTROPHYSICS experiment designed to tell us about the cosmic microwave background may scupper one of the central tenets of modern cosmology-that the Universe is crammed full of dark matter. The Boomerang experiment surveyed the cosmos from a balloon above Antarctica, and its results reported earlier this year provided the most detailed map yet of the microwave background-photons that echo the big bang. The conclusion was that the Universe is "flat": finely balanced between expanding forever or collapsing back into a "big crunch". But the Boomerang data may be telling us something else too. Maps of the background show "blobs" where the temperature of the radiation varies slightly from the average. The size of the blobs reflects how long it took photons to cross the early Universe, and how much matter was around then. Analysing the Boomerang data produced a chart of blob sizes that peaked at just the right size for the Universe to be flat. But astronomers were surprised by the absence of a second peak, predicted by the theory of dark matter. Many astronomers believe that, for galaxies to behave as they do, more than 90 per cent of the mass of the Universe must be dark matter. It should cause clumping in normal matter, producing more peaks in the Boomerang data. But Stacy McGaugh of the University of Maryland has an explanation. His studies of dim galaxies thought to contain a lot of dark matter convinced him that they were better explained by a modification to the law of gravity called Modified Newtonian Dynamics (MOND). Last year he predicted what would happen to the microwave background if there wasn't any dark matter. "It turns out my model does fit the [Boomerang] data," he says. Other astronomers admit the standard model of the Universe needs changing.
But if there is no dark matter, "we'll need more normal matter in the early Universe to provide drag on the photons to smooth out the spectrum," says Max Tegmark at the University of Pennsylvania, "and that might contradict our theory of . . . how matter forms after the big bang." Eugenie Samuel
Saved from the big crunch NS 1 May 2001
THE Universe is destined to expand forever, says an astronomer who has surveyed clusters of galaxies in deep space. His study found that there were many more clusters when the Universe was half its current age than astronomers had thought. This implies that the Universe is too light for gravity to stop it expanding. Harald Ebeling of the University of Hawaii and his team have so far discovered 101 massive galaxy clusters that are more than 5 billion light years away. "Their number is much higher than people had expected," Ebeling told the meeting in Honolulu last week. And there are probably more to come, as Ebeling's Massive Cluster Survey (MACS) is currently only 75 per cent complete.
The distant clusters show the Universe as it was billions of years ago. Some of them contain thousands of galaxies, making them some of the most massive systems in the Universe to be held together by gravity.
Ebeling found the new clusters by sifting through old data from the now retired German X-ray satellite ROSAT. The tenuous hot gas that fills the space between the galaxies in a cluster emits energetic X-rays. But because of the immense distances of these clusters, ROSAT could only pick up the faintest signal: on average 30 X-ray photons from each of them, says Ebeling. "It's marvellous what you can do with just 30 little photons," says Bill Forman of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts. Ebeling's team is doing "a very hard job", he says. If the Universe is as massive as most people think, it should have had far fewer massive clusters in the past than it does now, explains team member Patrick Henry. With so much matter, clusters would have formed slowly over the life of the Universe, and would still be forming now, he says. In a low-density Universe, by contrast, the sparse matter would have formed into clusters quickly and then settled down. This means that the number of clusters in the past would be comparable to the present number-which is what MACS is finding. Earlier surveys have either found massive clusters at smaller distances, or distant clusters with a lower mass. Both are less important for constraining cosmological theories, says Ebeling. The mass density of the Universe is key to cosmological theories about its fate. If the density is low, the Universe will expand forever; if it is high the expansion will eventually stop and the Universe may contract towards a "big crunch". Although the MACS survey is incomplete, Ebeling says he is confident that it will show that the mass density must be less than half that needed to stop the cosmic expansion. "Thirty per cent [of the critical density] appears to be in the right ballpark," he says.
'It's marvellous what you can do with just thirty little photons' Bill Forman
That would put the cluster results in line with other recent cosmological evidence. The MACS results are very promising, says Aaron Lewis of the University of Colorado. "Cluster observations will be one of a few solid legs on which the determination of the mass density of the Universe rests."
Neuroscience is coming of age. NS 18 Nov 2000
For the first time there is a realistic hope of designing treatments for paralysis, head injuries and stroke, and progressive neurological diseases such as multiple sclerosis and brain cancer. But just as scientists pick up speed in their quest for new therapies, politicians are applying the brakes. The British Parliament recently voted down proposals to allow researchers to study stem cells harvested from embryos that may ultimately help paralysed people walk again and treat devastating neurological diseases. And if the Republicans prevail in the contested US presidential elections, they will likely reverse an earlier decision allowing such research to be publicly funded. While these moves may be motivated by the best intentions, they could delay long-awaited advances by years. Researchers are trying new approaches to avoid this ethical conundrum-they are making huge strides in finding new sources of stem cells, for instance. But without the knowledge that might be gained from studying embryonic stem cells, many of these efforts will be wasted.
WHEN the full potential of embryonic stem cells is realised, surgeons will use them in the same way sculptors use clay. Taken from a very early embryo, and given the right chemical or protein signals, they can be shaped into any tissue or replace any damaged cells. However, pro-life campaigners are telling governments to stop this line of research. They argue that an embryo, even if it has only eight cells, is a human life. Destroying it is murder. Film director and former "Superman" Christopher Reeve, who became paralysed from the neck down in 1995, disagrees. "This has nothing to do with abortion," he told a packed auditorium at this year's Society for Neuroscience meeting in New Orleans. Reeve urged researchers to "have the courage to proceed" to human trials with various promising therapies for paralysis, including those using stem cells to regrow nerve tissue. Time may be short, however. In August, President Bill Clinton allowed publicly funded research into embryonic stem cells to go ahead in the US with certain restrictions, a move hailed by scientists. But pressure from Republicans in the US Congress has severely limited funding for such research. And George W. Bush's aides say he wants to reverse Clinton's decision. Republicans retain a slim majority in Congress, so any more loosening of restrictions in the near future seems unlikely. Such delays in opening up embryonic stem cell research could have a profound effect. For the first time, there is a feeling of optimism among neuroscientists that real progress can be made in developing therapies against a host of neurological afflictions. Stem cells are helping researchers develop exciting potential therapies for brain injury, stroke and diseases such as multiple sclerosis and motor neuron disease. Each year, 2 million people suffer head injuries in the US, says Tracy McIntosh of the University of Pennsylvania. And there is no available treatment, bar drilling a hole in the head, he says.
"The Incas did that 10,000 years ago." His group induced the equivalent of concussion in 48 mice. Three days later they injected either mouse neural stem cells or human kidney cells-as a control-into the animals' brains. After five weeks, mice given the stem cells showed a significant improvement in cognitive function, as measured by their ability to navigate a water maze, he told the New Orleans conference. Such work depends on embryonic stem cells. In Britain, a proposed relaxation of the 1990 Human Fertilisation and Embryology Act would give researchers even more freedom than in the US. They could take stem cells from embryos up to two weeks old, just before the nervous system begins to form. And it would allow "therapeutic cloning"-fusing an adult cell with an egg stripped of its nucleus to make an embryo, from which stem cells could be harvested. But as in the US, such proposals face staunch opposition from pro-life MPs who say making embryos for spare parts is morally repugnant. They also argue that therapeutic cloning is one short step from reproductive cloning, which would create an entirely new person. Two weeks ago in Westminster, these concerns scuttled a private member's bill that would have allowed stem cells to be isolated from embryos. Although private member's bills rarely result in a change to the law, Conservative MP Edward Leigh forced the bill to a vote after only ten minutes of debate. That was too little time, supporters say, to make MPs aware of all the facts. Parliament will hold a free vote later this year on whether to extend the 1990 act. Yet embryonic stem cells have already led to advances in treating the animal equivalent of motor neuron disease, which eventually leaves humans unable to move. Jeffrey Rothstein at Johns Hopkins University in Baltimore, Maryland, found that paralysed rats given an injection of embryonic stem cells into the spinal fluid dramatically regain partial leg movement.
Evan Snyder of Harvard Medical School in Boston has also made progress in using stem cells to kill off cancerous brain cells. Clinical trials for treating Huntington's disease are also starting to bear fruit. Marc Peschanski from the Créteil laboratory of INSERM, the French National Health and Medical Research Institute, has given five patients eight-day-old fetal stem cells that had already begun differentiating into nervous tissue. Implants of the cells into the striatum, a brain region that helps control movement, improved movement and cognitive function in three of the five patients, he told the conference. "Long-term clinical improvements are possible," he says. These advances are founded on embryonic stem cell research. Many who object to the use of this tissue say the whole ethical morass could be avoided by sticking to non-embryonic stem cells. These reside in every tissue in the body and replace old cells when they die. So far about 20 kinds have been discovered, and they can be harvested from cadavers, blood and even skin. Snyder has used stem cells harvested from newborn mice to treat stroke, where a localised region of the brain is damaged. His treatment exploits the special ability of stem cells to home in on sites of injury. Snyder's team induced a stroke in adult male rats by briefly cutting off the main cerebral artery. One day later, they injected them in the back of the brain with either stem cells, a growth factor, or both. These migrated to the injury site, and all the rats showed some recovery, according to three standard tests of motor function. The rats on the combined therapy did best. Jeffery Kocsis of Yale University in New Haven, Connecticut, is using adult stem cells to treat multiple sclerosis. In MS, the insulating sheath of myelin that surrounds nerve fibres dies, disrupting nerve transmission. He used X-rays and injections of a toxic chemical to damage discrete areas of myelin in adult marmosets.
He then took stem cells from the subventricular zone of the monkeys' brain, added some growth factors, and injected them back into the affected region. Three weeks later there was "extensive remyelination". Kocsis has high hopes for auto-transplantation therapy to treat demyelinating diseases. "Even a 5 or 10 per cent improvement would significantly affect human quality of life," he says. Turning paralysis from a permanent to a treatable condition is one of neuroscience's greatest prizes. Researchers are making advances on a number of fronts, and one of the most promising involves stem cells. Fred Gage of the Salk Institute in La Jolla, California, says cell transplants can help regenerate nerves. The adult central nervous system produces its own stem cells, but they only become neurons in a couple of places in the body. We need to understand how to make them work for us, says Gage, so we can regenerate neurons wherever they are damaged. If stem cells are taken from the spinal cord and placed in the part of the brain where neurons are born, they become capable of differentiating into neurons. And scientists are close to working out which molecules in surrounding tissues are responsible. Gage says teasing out this mechanism will be critical if we are to develop treatments for paralysis. But even though rapid progress is being made in developing sources of adult stem cells and treatments that use them, these cells are not as versatile as embryonic cells.
Skin stem cells can only make skin. Neural stem cells can only make neurons. Most are difficult to harvest and don't live long in cell culture dishes. So far, most of the successful experiments in rats with stem cells have used the embryonic variety. In August, Ira Black at the Robert Wood Johnson Medical School in New Jersey reported that he had converted bone marrow stem cells into the precursors of neurons. This makes adult stem cells more attractive, since marrow is easier to harvest. But Black says we will never know what biochemical cues give cells their identity unless researchers can study embryonic stem cells-the most undifferentiated cells of all. Limiting research to adult stem cells would leave scientists groping in the dark. "We don't yet know enough even to ask the proper questions," he says. Neuroscientists still face formidable challenges. Many cells die, for example, and some go to the wrong place or become the wrong kind of cell. Rothstein says we still don't know how long implanted stem cells survive, or whether some will become tumours due to their extraordinary ability to keep dividing. It isn't even clear when to implant such cells. These questions can only be answered by testing stem cells over and over in the lab-and embryonic cells are the only type suitable, say researchers. Ironing out these kinks is a key issue in developing therapies for neurological diseases. Last week, Britain's Royal Society reaffirmed the need for more work on embryonic cells, calling for scientists to be allowed to grow stem cells from embryos younger than 14 days. Such research could be critical. Neuroscientists are a cautious bunch and fear that offering even a hint that they could cure brain disease, spinal injury and stroke simply raises patients' hopes prematurely. But this year the 20,000-plus neuroscientists who gathered in New Orleans let slip their optimism that repairing damage to the brain and spine is finally within reach.
Christopher Reeve summed up the mood. "There is no reason why this problem and other disorders of the brain and central nervous system can't be overcome," he told the meeting. Researchers agree. "We can do it soon. We must do it soon," said Dennis Choi, outgoing president of the society. It remains to be seen just how much progress politicians will allow scientists to make. "Scientists know a lot, but the obstacle of politics will affect implementation," says Reeve. "What happens in a Bush presidency, God forbid." Jonathan Knight, Alison Motluk and Helen Phillips
We see far less than we think we do.
Rather than logging every detail of the visual scene, says
Simons, we are actually highly selective about what we take in.
Our impression of seeing everything is just that-an impression.
In fact we extract a few details and rely on memory, or perhaps
even our imagination, for the rest. Others have a more radical
interpretation: they say that we see nothing at all, and our belief
that we have only to open our eyes to take in the entire visible
world is mistaken-an illusion. Until the last decade, vision researchers
thought that seeing really meant making pictures in the brain.
By building detailed internal representations of the world, and
comparing them over time, we would be able to pick out anything
that changed. Then in 1991, in his book Consciousness Explained,
the philosopher Daniel Dennett made the then controversial claim
that our brains hold only a few salient details about the world-and
that this is the reason we are able to function at all. We don't
store elaborate pictures in short-term memory, Dennett said, because
it isn't necessary and would take up valuable computing power.
Rather, we log what has changed and assume the rest has stayed
the same. Of course, this is bound to mean that we miss a few
details. Experimenters had already shown that we may ignore items
in the visual field if they appear not to be significant-a repeated
word or line on a page of text, for instance. But nobody, not
even Dennett, realised quite how little we really do "see".
Just a year later, at a conference on perception in Vancouver,
British Columbia, John Grimes of the University of Illinois caused
a stir when he described how people shown computer-generated pictures
of natural scenes were blind to changes that were made during
an eye movement. Dennett was delighted. "I wish in retrospect
that I'd been more daring, since the effects are stronger than
I claimed," he says. Since then, more and more examples have
been found that show just how illusory our visual world is. It
turns out that your eyes don't need to be moving to be fooled.
In a typical lab demonstration, you might be shown a picture on
a computer screen of, say, a couple dining on a terrace. The picture
disappears, to be replaced for a fraction of a second by a blank screen, before reappearing significantly altered-by the raising
of a railing behind the couple, perhaps. The picture flickers
back and forth, and many people search the screen for up to a
minute before they see the change. A few never spot it. It's an
unnerving experience. But to some extent "change blindness"
is artificial because the change is masked in some way. In real
life, there tends to be a visible movement that signals the change.
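The flicker paradigm described above is essentially a timing schedule: the original scene and an altered copy alternate, with a brief blank mask between them that swamps the motion signal the change would otherwise produce. A minimal sketch of that schedule, with illustrative durations that are our own assumptions rather than the published protocol:

```python
# Sketch of the "flicker paradigm" used in change-blindness demonstrations.
# Frame durations are illustrative assumptions, not the original lab values.

def flicker_sequence(cycles, image_ms=240, blank_ms=80):
    """Return the (frame, duration_ms) schedule for one trial."""
    schedule = []
    for _ in range(cycles):
        schedule.append(("A", image_ms))      # original scene
        schedule.append(("blank", blank_ms))  # mask hides the visual transient
        schedule.append(("A'", image_ms))     # altered scene (railing raised)
        schedule.append(("blank", blank_ms))
    return schedule

trial = flicker_sequence(cycles=2)
print(trial)
print(sum(ms for _, ms in trial))  # 2 * (240 + 80 + 240 + 80) = 1280 ms
```

The blank frame is the whole trick: without it, the change produces a local motion transient that draws attention straight to the altered railing.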
But not always. As Simons points out, "We have all had the
experience of not noticing a traffic signal change because we
had briefly looked away." And there's a related phenomenon called
inattentional blindness, that doesn't need any visual trick at
all: if you are not paying attention to some feature of a scene,
you won't see it. In our own simple demonstration, few people
spot that the first "t" in "New Scientist"
is not the same on the cover as on page 27. Last year, with Christopher
Chabris, also at Harvard, Simons showed people a videotape of
a basketball game and asked them to count the passes made by one
or other team. After about 45 seconds, a man dressed in a gorilla
suit walked slowly across the scene, passing between the players.
Although he was visible for five seconds, 40 per cent of the viewers
failed to notice him. When the tape was played again, and they
were asked simply to watch it, they saw him easily. Not surprisingly,
some found it hard to believe it was the same tape. Now imagine
that the task absorbing their attention had been driving a car,
and the gorilla-man had been a pedestrian crossing their path.
According to some estimates, nearly half of all fatal motor-vehicle
accidents in the US can be attributed to driver error, including
lapses in attention. It is more than just academic interest that
has made both forms of cognitive error hot research topics. Such
errors raise important questions about vision. For instance, how
can we reconcile these gross lapses with our subjective experience
of having continuous access to a rich visual scene? Last year,
Stephen Kosslyn of Harvard University showed that imagining a
scene activates parts of the visual cortex in the same way as
seeing it. He says that this supports the idea that we take in
just what information we consider important at the time, and fill
in the gaps where the details are less important. "The illusion
that we see 'everything' is partly a result of filling in the
gaps using memory," he says. "Such memories can be created
based on beliefs and expectations." Ronald Rensink of the
University of British Columbia in Vancouver believes that our
impression of a rich visual world comes from our building internal
representations, though he accepts that they are far less detailed
than was once thought. According to his "coherence theory",
the brain first constructs a temporary layout of the visual scene-not
much more than the basic geometry and light distribution. Then attention comes along and pulls out a few of these "proto-objects" to a higher resolution. More importantly, he explains, "what attention
does is to stabilise these representations so that they form an
individual object, something with continuity in space and in time".
The moment attention is released, they dissolve back into the
volatile, unresolved landscape. In Rensink's view, focused attention
is needed to perceive change. But while Rensink or Kosslyn would
argue that there is some role for internal images or memory, other
researchers argue that we can get the impression of visual richness
without holding any of that richness in our heads. Back in 1992,
Kevin O'Regan, an experimental psychologist at the French National
Centre for Scientific Research (CNRS) in Paris, put forward what would later be known as his "grand illusion" theory. He argued that we hold no picture of the visual world in our brains. Instead,
we refer back to the external visual world as different aspects
become important. The illusion arises from the fact that as soon
as you ask yourself "am I seeing this or that?" you
turn your attention to it and see it. According to O'Regan, it's
not just our impression of richness that is illusory, but also
the sense of having control over what we see. "We have the
illusion that when something flickers outside the window, we notice
it flickering and decide to move our eyes and look," says Susan Blackmore of the University of the West of England, who supports
O'Regan's views. "That's balderdash." In fact, she says,
we are at the mercy of our change detection mechanisms, which
automatically drag our attention here, there and everywhere. At
a meeting in Brussels in July this year, O'Regan and Alva Noë
of the University of California, Santa Cruz, updated the controversial
theory. Sensation, whether it be visual, auditory or tactile,
is not something that takes place in the brain, they argue. Rather
it exists in the knowledge that if you were to perform a certain
action, it would produce a certain change in sensory input. "Sensation
is not something that we feel, but sensation is something that
we do," says O'Regan. According to this idea, the sensation
of "redness" arises from knowing that moving your eyes
onto a red patch will produce a certain change in the pattern
of stimulation in line with laws of redness. In other words, the
role of the brain is to initiate the exploratory action and to
hold the knowledge of those laws: together these give rise to the
sensation of redness. Once you dismiss the need for visual memory,
O'Regan says, many of the problems that vision researchers have
grappled with for decades vanish. Namely, how does the ropy physics
of the eye give rise to the largely flawless experience of visual
perception? Leaving aside the blind spot in each retina and the
fact that we view the world through a jerky sequence of eye movements
or "saccades", we have two upside-down retinal images. If
you assume that our brains build detailed reconstructions from
such inadequate, distorted input, you have to postulate some kind
of compensation mechanism in the visual system. In O'Regan's model
no such mechanism is required because there is no reconstruction.
His theory also explains change blindness. Take the example of
the dining couple. The reason you don't notice the raising of
the railing is because you didn't notice the railing in the first
place: it wasn't relevant, so it remained invisible. O'Regan's ideas
have not been generally accepted. "He's pushed the idea that we
lack visual representations farther than most people in the field
have been willing to," says Simons. But despite their differences,
Simons, Rensink and O'Regan all say that of all the myriad visual
details of any scene that you could record, you take only what
is relevant to you at the time. In the Simons-Levin experiment,
for example, even the object to which the person is attending-the
stranger asking for directions-could be swapped without them noticing.
Despite the fact that they were looking at him for around a minute,
half the subjects encoded none of the details of his physical
appearance that were later to change. It was not relevant that
the stranger had a certain haircut or that his trousers were a
certain colour. What was relevant was that he was a person in
a certain location addressing them with a certain query. "Paying
attention to an object does not give you all of that object's
properties for free," says Simons. He points out that those
who did notice the switch were students of about the same age
as the "strangers". Being in the same social group, he and
Levin speculated, they would be more inclined to take in individual
details, whereas older subjects might categorise the stranger
as "student" and leave it at that. The relationships between
attention, awareness and vision have yet to be clarified. But
there is one thing on which most researchers agree: because we
have a less than complete picture of the world at any one time,
there is the potential for distortion and error. And that has
all sorts of implications, not least for eyewitnesses. If it is
possible to stand less than a metre from a person and talk to
them for a minute without taking in more than a few basic facts,
how reliable is the testimony of a person who witnesses a scene
from a distance, when they were oblivious to its significance and only later came to recall it? "In my view, imagery plays a key
role in many sorts of false memories," says Kosslyn. "One
is 'filling in' the gaps and later remembering not only what was
attended to, but also what was filled in." In retrospect,
he says, we don't make any distinction between the two types of
information. For all our experience of a rich visual world, it
seems that we take in no more than a handful of facts about the
world, throw in a few stored images and beliefs, and produce a
convincing whole in which it is impossible to tell what was real
and what imagined. As Blackmore puts it: "There is a world
and a brain in it, which together are building a construction,
a story, a great confabulation."
Laura Spinney is a writer based in London
Further reading: "Failure to detect changes to people during real-world interaction" by Daniel J. Simons and Daniel T. Levin, Psychonomic Bulletin and Review, vol 4, p 644 (1998); "Solving the 'real' mysteries of visual perception: the world as an outside memory" by J. K. O'Regan, Canadian Journal of Psychology, vol 46, p 461 (1992)
"Beyond the grand illusion: what change blindness really teaches us about vision" by A. Noë et al., Visual Cognition, vol 7, p 93 (2000). To try the "dining couple" experiment and others, see http://nivea.psycho.univ-paris5.fr
Hidden from Consciousness
'YOU MIGHT PREFER THE NOTION THAT YOU ARE IN CONTROL OF YOUR OWN MIND. BUT WHERE DID THAT IDEA COME FROM?'
What about our mental activities, such as our thoughts and feelings? Most people-and many researchers-consider that these originate within the realms of consciousness. We don't agree. We suggest that all the thoughts, ideas, feelings, attitudes and beliefs traditionally considered to be the contents of consciousness are produced by unconscious processes-just like actions and perceptions. It's only later that we become aware of them as outputs, when they enter our consciousness. As pointed out by Jeffrey Gray of the Institute of Psychiatry in London, consciousness occurs too late to affect the outcomes of the mental processes that it is apparently linked to. You may prefer the notion that you are in charge of your own mind. But where did that idea come from? If you stop to think about it, you'll probably find that it just popped into your head-like all your thoughts. Perhaps you have decided to read the rest of this article. But did "you" really make that choice? Keep reading, if you can. You may never think of "yourself" in quite the same way again. The next time you're casually talking to someone, see if you can guess what your next words will be. If speech is a product of our conscious minds, it should be easy. But almost certainly you'll have to wait until you hear your words spoken before you can know what they are. The same applies to writing, particularly creative writing. Many authors say they often don't know what the next sentence will reveal, or where the next turn in the plot will take them. Enid Blyton, the children's story writer, described to the psychologist Peter McKellar how she would close her eyes and wait, typewriter at the ready, for her characters to emerge into her mind's eye and begin their adventures. She felt she could not have thought up the storylines by herself. So who did?
Her brain created them outside her conscious experience, but she only became aware of them later as fully formed ideas, conversations, even jokes. In our view, speaking, writing and all of the brain's information-processing activities occur at an unconscious level, only later giving rise to a continuous conscious experience of the world and of yourself. In our model, we refer to these "unconscious" parts of the brain as Level 2. Within this level, there must be some kind of decision-making device, a central executive structure. The CES identifies the most important task the brain is carrying out at any moment, and selects the information that best describes the current state of the brain in relation to the chosen task. Only this information would be allowed to enter Level 1, to produce "our" conscious experience (see Diagram).

Imagine that you are sitting in an uncomfortable chair, listening to a lecture. If the talk is interesting, you'll be aware of the speaker's voice, the meaning of what's being said, and perhaps also the speaker's surroundings. These are all products of Level 2 processing that the CES currently considers important. It has therefore allowed them into Level 1, so "you" experience them. At the same time, Level 2 is processing information about the hard chair, the smell of the room, sounds from outside and the whispered conversation going on behind you. Because they are not important to the task of listening to the lecture, the CES does not select them for entry to Level 1, and "you" remain unaware of them. However, if the talk becomes boring, the CES might judge that doing something about your discomfort is now the priority, and "you" become aware of the hard chair. More dramatically, even during the most engaging talk, if your name is whispered, you suddenly become aware of the conversation behind you and lose the thread of the lecture completely. But "you" didn't consciously decide to attend to the pressure of the chair or to the conversation.
The information was "outed" by the CES from Level 2 to Level 1, and you became aware of the product. Outing can be public, in the form of speech or writing, or can remain private, in the form of feelings and thoughts. But whether public or not, these outed products have certain distinctive features. They are always identified as belonging to the "here and now", rather like a process of automatic date stamping, and they are labelled as belonging to the "self". Actions, especially when they are labelled as originating from self, are also tagged as being voluntary. In the process of outing, any thoughts, ideas, beliefs, perceptions and acts become "yours" and are automatically linked to the idea of free will.

Inevitably, it will be difficult to prove this account of consciousness and free will, especially as most people recoil from the idea that their thoughts originate anywhere other than in their own consciousness. Even if, as we propose, all the contents of Level 1 are second-hand, they can only be viewed by "you" as first-hand experiences. But if we move away from everyday experiences and think about some of the effects of hypnosis, it becomes clear just how illusory is the feeling that we control the contents of our own consciousness. In some people who are easily hypnotised, it is possible to create the experiences of blindness, deafness, paralysis or insensitivity to pain. Richard Bryant and Kevin McConkey, psychologists at the University of New South Wales in Sydney, have shown that a hypnotically blind person is still able to respond to visual information, in a way that resembles cases of "blindsight" (New Scientist, 5 September 1998, p 38). For instance, people who had been hypnotised could respond accurately to indicator lights above a set of switches, even when they were experiencing hypnotic blindness.
Perhaps hypnosis works by allowing an external influence, such as the suggestion of the hypnotist, to affect the decision-making process of the CES (Contemporary Hypnosis, vol 16, p 215). In the case of hypnotic blindness, it is as though the hypnotist is able to persuade the CES to stop selecting visual information as current and so prevent its entry into Level 1. The hypnotised person claims they cannot see, but with further appropriate suggestions their sight is restored. Because they continue to respond to visual signals when hypnotically blind, Level 2 must still be processing the relevant information. The CES simply ceases to select it for entry into consciousness. As a result, the person doesn't experience the visual signals and so will report, quite honestly, that they cannot "see" anything.
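The selection process at the heart of this model can be sketched as a toy program. This is only an illustration of the idea, not the authors' formal model: the stream names, priority numbers and the function `ces_select` are all hypothetical, standing in for whatever weighting the CES actually performs.

```python
# Toy sketch of the two-level model. Level 2 processes many information
# streams in parallel, each with an (assumed) priority; the central
# executive structure (CES) "outs" only the highest-priority stream into
# Level 1, which is all that "you" consciously experience.

def ces_select(streams):
    """Return the Level 2 stream the CES promotes to Level 1."""
    return max(streams, key=lambda s: s["priority"])

# During an interesting lecture, Level 2 is processing all of these,
# but only the talk is judged important enough to enter consciousness.
streams = [
    {"content": "speaker's voice", "priority": 0.9},
    {"content": "hard chair", "priority": 0.2},
    {"content": "whispering behind you", "priority": 0.3},
]

print(ces_select(streams)["content"])  # "speaker's voice"

# Hearing your own name re-weights the whispering; the CES now outs it
# instead, and "you" suddenly notice the conversation behind you.
for s in streams:
    if s["content"] == "whispering behind you":
        s["priority"] = 1.0

print(ces_select(streams)["content"])  # "whispering behind you"
```

On this sketch, hypnotic blindness would correspond to the visual stream's priority being held at zero by suggestion: the stream is still processed at Level 2 but never selected for Level 1.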
The idea that many aspects of consciousness represent the products of prior levels of "unconscious" processing is not new. Pioneers such as Hermann von Helmholtz and Wilhelm Wundt, who founded the first psychology laboratory in Leipzig in 1879, recognised that most mental processes were in many ways no different to the physiological processes of the respiratory, cardiac or digestive systems. All are efficient automatic processes that happen outside our awareness. Nevertheless, many people consider that mental events accompanied by conscious experience somehow involve additional or superior processes that are not present in the vast range of unconscious operations. We disagree. According to our model, everything experienced in consciousness has already been formed in the unconscious, and consequently there is no need to propose any additional or further processing. The selected products of Level 2 and the contents of Level 1 are one and the same thing: the only difference is that once these products are selected as current and "outed" they become part of the conscious experience of the individual. The contents of Level 1 as conscious experience do not go on to do anything more, nor do they directly influence any other processing. They are simply replaced by the next set of current contents from Level 2.

Even when it comes to thinking, which just doesn't seem possible without consciousness, all is not as it seems. In his book Psychology: The science of mental life, George Miller provides a thought-provoking illustration. He invites the reader to try to think of their mother's maiden name and report what happens in their mind as they do so. Most people describe feelings of tension, maybe an irrelevant image or two, and then suddenly the answer is there in full consciousness. Consciousness, he says, gives no clue as to where the answer comes from. It is the result of thinking, not the process of thinking, that appears in consciousness.
As Susan Blackmore from the University of the West of England in Bristol has pointed out, consciousness does not "do" anything; consciousness is simply "what it is like to be me now". But consciousness has its uses. Along with our actions, the publicly outed elements of our consciousness enable others to form a picture of us. In order to survive in complex social groups, this picture should be as consistent and apparently rational as possible. Society also needs us to take responsibility for our actions. Consequently, one of the most important creations of Level 2 is a belief in a self to which mental processes can be attributed. Level 2 thinking and feeling becomes "I think" and "I feel" when selected for entry to Level 1. Level 2 is responsible for creating and maintaining this consistent self-representation. To do so it has to keep track of what has already been outed in the form of biographical memory. As Nicholas Humphrey, a psychologist and philosopher at the London School of Economics, has suggested, having a strong representation of ourselves may provide the basis for understanding others and for them understanding us.
More than you know: all of our mental processes, from thoughts to actions, originate in our unconscious. The central executive structure then decides what it will allow to enter consciousness
The self we present to the outside world is thus a useful fiction created by Level 2, and the experience we have in Level 1 of control, free will and continuity of experience is simply a congenial myth. According to Michael Gazzaniga, a neuroscientist at Dartmouth College in Hanover, New Hampshire, part of the role of consciousness is to serve as a reliable "spokesperson" for the individual. To achieve this, output at Level 1 needs to be made consistent with previously outed material. This can lead to Level 2 inventing plausible explanations if it does not have all the relevant information in biographical memory. When that invention is selected to enter our consciousness, our experience will be that it is correct and accurate.

More than 30 years ago, Richard Nisbett and Timothy Wilson of the University of Michigan in Ann Arbor demonstrated this process. They suspended two cords from the ceiling and asked people simply to tie the two ends together. The only snag was that the cords were too far apart to reach both simultaneously. When the subjects had exhausted all their ideas, the experimenter walked past one of the cords and "accidentally" brushed against it, setting it in motion. Very soon after that most of the subjects solved the problem, coming up with the idea of tying a weight on one of the cords and setting it swinging, making it easy to reach the ends of both cords at the same time. However, when they were asked how they arrived at the solution most failed to mention the experimenter's hint: they simply said that the solution just dawned on them. It seems that information about the significance of the experimenter's behaviour was not selected by the CES for elevation to Level 1 and so was not available to be outed in speech and thought.

In conjunction with the notion of a self, Level 2 also creates an illusion of control over our actions: the appearance of free will. We usually feel that voluntary actions follow a clear intention to act.
Benjamin Libet of the University of California, however, used electrical recordings of the brain's activity to show that preparations for carrying out an act can be detected in our brains shortly before the intention to act appears in our consciousness (New Scientist, 5 September 1998, p 32). The idea that you form an intention and then act on it is compelling, but wrong. Even if you look carefully at your own experience of decision making, it is evident that you don't make up your own mind: if you are honest, and you take the time, you discover that your mind makes itself up.

Take the following familiar example of deciding to get out of bed in the morning. Guy Claxton, a psychologist at the University of Bristol, describes his own experience of this. First, on waking, he becomes aware of thinking "I must get up" or "I'm going to get up", and then his mind drifts off onto other things and he continues lying there. Then, when he is in the middle of a completely unrelated train of thought, he suddenly "comes to" to find that he has already begun to get out of bed. In other words, when you decide to get up, you frequently don't. When you have stopped thinking about getting up, at some point you do.

Some of the views expressed here may be unsettling. They seem to rob us of the most cherished characteristics of the human mind. But while we are saying that our conscious experiences of self and control are an elaborate delusion, we are not dispensing with the notions themselves. We are merely shifting those mental processes traditionally associated with them away from the domain of consciousness into the unconscious mechanisms of Level 2. We accept that somewhere in our minds is a representation of a self, and there are clearly systems of control, maybe even free will. But none of these reside in our consciousness.
Our message is that we should all learn to accept our Level 2 and extend the concept of "myself" to include it when claiming to make decisions, organise or plan strategies. Perhaps we all should recognise that "me" is, at best, a partial and often biased version of the "larger me" in our unconscious. We should not deceive ourselves by believing that the "me" each one of us is conscious of has any significant influence over our actions and experiences. In many respects this "me" only operates as a monitor or recorder of events which occur elsewhere in the unconscious parts of our minds. Perhaps by now you have begun to think of yourself differently, to realise that "you" are not really in control. Nevertheless, it will be virtually impossible to let go of the myth that self and free will are integral functions of consciousness. The myth is something we are strongly adapted to maintain, and almost impossible to escape from. Maybe that's not so surprising, because, after all, "we" are part of the illusion.
Peter W. Halligan is at the School of Psychology, Cardiff University. David A. Oakley is at the Hypnosis Unit, Department of Psychology, University College London
Further reading: Understanding Consciousness by Max Velmans (Routledge, 2000); The Volitional Brain: Towards a neuroscience of free will, edited by Anthony Freeman, Benjamin Libet and Keith Sutherland (Imprint Academic, 1999); The Mind's Past by Michael S. Gazzaniga (University of California Press, 2000)