While doing some research on human evolution, I stumbled across the web site for a wonderful meeting that was held in March in San Diego to celebrate the sequencing of the chimpanzee genome. You can watch the lectures here. By comparing the chimp genome to the human genome, scientists are discovering exactly how we evolved into the peculiar species that we are. If you find yourself in an argument with someone who claims that evolution has nothing to do with cutting-edge science, plunk them down in front of these talks. Without evolution, genomics is gibberish.
(Note--Oliver Baker informs me that this page won't work in Firefox. IE and Safari are fine.)
Is Intelligent Design the same thing as creationism? The people who back Intelligent Design have spilled an awful lot of ink saying they're different. Even self-proclaimed creationists have tried to claim a difference. Somehow, both of these camps think that any confusion between the two is evidence of the lazy arrogance of evolutionists. In fact, the evidence points towards Intelligent Design being just a bit of clever repackaging to get creationist nonsense into the classroom. (See this useful article.)
A little clarity has emerged over at the new Sarkar Lab Weblog. They've created a "Creationist Faculty," described as a "list of faculty who have spoken in favor of creationism in its traditional form or as intelligent design." They add, "Please fell free to nominate members to this Hall of Shame."
Today they announced an addition--William Dembski, the loudest Intelligent Design advocate out there. Nominated by? Dembski.
Question asked and answered.
If you took a census of life on Earth, you'd probably find that the majority of life forms looked like this. It's a virus known as a bacteriophage, which lives exclusively in bacteria. There are about 10 million phages in every milliliter of coastal sea water. All told, scientists put the total number of bacteriophages at a million trillion trillion (10 to the 30th power). Bacteriophages not only make up the majority of life forms, but they are believed to have existed just about since life itself began. Since then, they have been evolving along with their hosts, and even making much of their hosts' evolution possible by shuttling genes from one host to another. Thanks in large part to bacteriophages, more and more bacteria are acquiring the genes they need to defeat antibiotics. Bacteriophages also kill off a huge portion of ocean bacteria that consume greenhouse gases. If you suddenly rid the world of all bacteriophages, the global climate would lurch out of whack.
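To see where a number like 10 to the 30th power comes from, here's a back-of-the-envelope sketch. The ocean volume and the assumption that the coastal density holds everywhere are my own rough inputs, not figures from the studies, so treat the output as an order-of-magnitude check and nothing more.

# Rough order-of-magnitude check on the global phage count (illustrative only).
phages_per_ml = 1e7          # ~10 million phages per milliliter of sea water
ocean_volume_km3 = 1.3e9     # assumed total ocean volume, in cubic kilometers
ml_per_km3 = 1e15            # one cubic kilometer holds 10^15 milliliters

total_phages = phages_per_ml * ocean_volume_km3 * ml_per_km3
print(f"{total_phages:.1e}")  # about 1e31 with these inputs, the same ballpark as 10^30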
It may seem strange that the world's most successful life form looks a bit like the ship-drilling robots that swarmed through The Matrix. But the fact is that the bacteriophage is nanotechnology of the most elegant, most deadly sort. To get a real appreciation of its mechanical cool, check out the movie from which this picture comes. (Big and small Quicktime.) The movie is based on the awesome work of Michael Rossmann of Purdue University and his colleagues. (Their most recent paper appears in the latest issue of Cell, along with even more cool movies.) Rossmann and company have teased apart pieces of a bacteriophage and have gotten a better understanding of how they work together. The phage extends six delicate legs in order to make contact with its host, E. coli. Each leg docks on one of the bacteria's receptors, giving the phage the signal that it is time to inject its DNA. The legs bend, pulling the phage's body toward the bacterium. The pulling motion makes the base of the phage begin to spin like the barrel of a lock. A set of shorter legs, previously held flush against the base of the virus, unfolds so that it can clamp onto the microbe's membrane. The phage's sheath, shown here in green, shrinks as its spiraling proteins slide over one another. A hidden tube emerges, which in turn pushes out a needle, which rams into the side of the bacterium. The needle injects molecules that can eat away at the tough inner wall of the microbe, and the tube then pushes all the way into the microbe's interior, where it unloads the virus's DNA.
It has taken a while, historically speaking, for scientists to come to appreciate just how sophisticated parasites such as bacteriophages can be, a subject I explored at length in my book Parasite Rex. The best human-designed nanotech pales in comparison to bacteriophages, a fact that hasn't been lost on scientists. Some have been using bacteriophages to build nanowires and other circuitry. Others see them as the best hope for gene therapy, if they can be engineered to infect humans rather than bacteria. In both cases, evolution must play a central role. By allowing the phages to mutate and then selecting the viruses that do the best job at whatever task the scientists choose, the scientists will be able to let evolution design nanotechnology for them. One of the next great advances in technology may come from the depths of deep time. And perhaps, I hope, some more work in Hollywood.
Spiteful bacteria. Two words you probably haven't heard together. Then again, you probably haven't heard of altruistic bacteria either, but both sorts of microbes are out there--and in many cases in you.
Bacteria lead marvelously complicated social lives. As a group of University of Edinburgh biologists reported today in Nature, a nasty bug called Pseudomonas aeruginosa, which causes lung infections, dedicates a lot of energy to helping its fellow P. aeruginosa. The microbes need iron, which is hard for them to find in a usable form in our bodies. To overcome the shortage, P. aeruginosa can release special molecules called siderophores that snatch up iron compounds and make them palatable to the microbe. It takes a lot of energy for the bacteria to make siderophores, and they aren't guaranteed a return for the investment. Once a siderophore harvests some iron, any P. aeruginosa that happens to be near it can gulp it down.
At first glance, this generosity shouldn't exist. Microbes that put a lot of energy into helping other microbes should become extinct--or, more exactly, the genes that produce generosity in them should become extinct. Biologists have discovered mutant P. aeruginosa that cheat--they don't produce siderophores but still suck up siderophores made by the do-gooders. It might seem as if the cheaters should wipe the do-gooders off the face of the Earth. The solution to this sort of puzzle--or at least one solution--is helping out family. Closely related microbes share the same genes. If a relative scoops up the iron and reproduces, your genes benefit all the same.
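The textbook way to formalize "helping out family" is Hamilton's rule, which underlies this kind of kin-selection argument even though it isn't spelled out above: a costly helping trait can spread when relatedness times the benefit to the recipient exceeds the cost to the helper. A minimal sketch, with made-up numbers:

def helping_favored(relatedness, benefit, cost):
    # Hamilton's rule: an allele for costly helping can spread when r * b > c.
    return relatedness * benefit > cost

# Illustrative values only: near-clonal neighbors versus mostly unrelated ones.
print(helping_favored(relatedness=0.9, benefit=2.0, cost=1.0))  # True
print(helping_favored(relatedness=0.1, benefit=2.0, cost=1.0))  # False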
To test this hypothesis, the Edinburgh team ran an experiment. They filled twelve beakers with bacteria they produced from a single clone. While the bacteria were all closely related, half were cheaters and half were do-gooders. They let the bacteria feed, multiply, and compete with one another. Then they mixed the beakers together, and randomly chose some bacteria to start a new colony in twelve new beakers. More successful bacteria gradually became more common as they started new rounds. In the end, the researchers found--as they predicted--that these close relatives evolved into cooperators. The do-gooders wound up making up nearly 100% of the population.
That didn't happen when the researchers put together two different clones in the same beakers. When the bacteria had less chance of helping relatives, the do-gooders wound up making up less than half of the population.
But the biologists suspected that even families could turn on themselves. Mathematical models suggest that the benefit of helping relatives drops if relatives are crammed together too closely. Surrounded only by kin, the bacteria never get a free lunch--siderophores produced by other, unrelated bacteria. Instead, all the benefits of consuming iron are offset by the cost of producing the siderophores, and by the fact that the neighbors soaking up that iron are the very relatives they must compete against. In the end, the benefit doesn't justify the cost.
The Edinburgh team came up with a clever way to test this prediction out. They ran the same colony experiment as before, but now they didn't take a random sample from the mixed beakers to start a new colony. Instead, they took a fixed number of bacteria from each beaker. This new procedure meant that there was no longer a benefit to being in a beaker where the bacteria were reproducing faster than the bacteria in other beakers. The only way to survive to the next round of the experiment was to outcompete the other bacteria in your own beaker--even if they were your own relatives. The researchers discovered that when closely related bacteria were forced to compete this way, utopia disappeared. Instead, the ratio of cheaters to do-gooders remained about where it started, around 50:50.
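Here's a deliberately stripped-down sketch of why the two sampling schemes come out so differently. It's my own toy model, not the paper's design: I pretend each beaker ends up effectively all do-gooders or all cheaters, that do-gooder beakers yield more bacteria because they share siderophores, and I track only the expected do-gooder share from round to round. The yields and round count are invented.

# Toy model of the two sampling regimes (illustrative assumptions only).
COOP_YIELD, CHEAT_YIELD, ROUNDS = 1.4, 1.0, 30   # assumed relative beaker outputs

def pooled_sampling(freq):
    # Mix all beakers and sample at random: productive (do-gooder) beakers
    # contribute founders to the next round in proportion to their output.
    coop = freq * COOP_YIELD
    cheat = (1 - freq) * CHEAT_YIELD
    return coop / (coop + cheat)

def fixed_sampling(freq):
    # Take the same number of founders from every beaker: productivity no
    # longer matters, so the expected do-gooder share stays put.
    return freq

for rule in (pooled_sampling, fixed_sampling):
    freq = 0.5
    for _ in range(ROUNDS):
        freq = rule(freq)
    print(rule.__name__, round(freq, 2))  # pooled climbs toward 1.0; fixed stays at 0.5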
The evolutionary logic of altruism also has a dark side, known as spite, which the Edinburgh team has explored in a paper in press at the Journal of Evolutionary Biology. (They've posted a pdf on their web site.) It's theoretically possible that you can help out your relatives (and even yourself) by doing harm to unrelated members of your own species, even if you have to pay a cost to do it. You might even die in the process, but if you could wreak enough havoc with your competitors, this sort of behavior could be favored by evolution. Biologists call this sort of behavior spite.
It turns out that many bacteria are spiteful in precisely this way. They produce antibiotics known as bacteriocins that are poisonous to their own species. These poisons take a lot of energy to make, and the bacteria often die as they release them. But these spiteful bacteria don't kill their own kin. Each strain of bacteria that makes a bacteriocin also makes an antidote to that particular kind of bacteriocin. Obviously, evolution won't favor a lineage of microbes that all blow themselves up. But it may encourage a certain balance of spite--a balance that will depend on the particular conditions in which the bacteria evolve.
Understanding the evolution of spiteful and altruistic bacteria will help scientists come up with new ways to fight diseases. (The altruism of P. aeruginosa can make life hell for people with cystic fibrosis, because the bacteria cooperate to rob a person of the iron in his or her lungs.) But bacteria can serve as a model for other organisms that can be altruistic or spiteful--like us. While some glib sociobiologists may see a link between a spiteful self-destructive microbe and a suicide bomber, the analogy is both disgusting and stupid. Yet the same evolutionary calculus keeps playing out in the behavior of bacteria and people alike.
(Update 6.27.04: Did I say siderophiles? I meant siderophores...)
Marriage, we're told by the president and a lot of other people, can only be between one man and one woman. Anything else would go against thousands of years of tradition and nature itself. If the president's DNA could talk, I think it might disagree.
In the 1980s, geneticists began to study variations in human DNA to learn about the origin of our species. They paid particular attention to the genes carried by mitochondria, the fuel-producing factories of the cell. Each mitochondrion carries its own small set of genes, a peculiarity that has its origins over two billion years ago, when our single-celled ancestors engulfed oxygen-breathing bacteria. When a sperm fertilizes an egg, it injects its nuclear DNA, but almost never manages to deliver its mitochondria. So the hundreds of mitochondria in the egg become the mitochondria in every cell of the person that egg grows up to be. Your mitochondrial DNA is a perfect copy of your mother's DNA, her mother's DNA, and so on back through history. The only differences emerge when the mitochondrial DNA mutates, which it does at a fairly regular rate. A mother with a mutation in her mitochondria will pass it down to her children, and her daughters will pass it down to their children in turn. Scientists realized that they might be able to use these distinctive mutations to organize living humans into a single grand genealogy, which could shed light on the woman whose mitochondria we all share--a woman who was nicknamed Mitochondrial Eve.
Alan Wilson of the University of California and his colleagues gathered DNA from 147 individuals representing Africa, Asia, Australia, Europe, and New Guinea. They calculated the simplest evolutionary tree that could account for the patterns they saw. If four people shared an unusual mutation, for example, it was likely that they inherited it from a common female ancestor, rather than the mutation cropping up independently in four separate branches. Wilson's team drew a tree in which almost all of the branches from all five continents joined to a common ancestor. But seven other individuals formed a second major branch. All seven of these people were of African descent. Just as significantly, the African branches of the tree had acquired twice as many mutations as the branches from Asia and Europe. The simplest interpretation of the data was that humans originated in Africa, and that after some period of time one branch of Africans spread out to the other continents.
Despite the diversity of their subjects, Wilson's team found relatively little variation in their mitochondrial DNA. Although their subjects represented the corners of the globe, they had less variation in their genes than a few thousand chimpanzees that live in a single forest in the Ivory Coast. This low variation suggests that living humans all descend from a common ancestor that lived relatively recently. Wilson's team went so far as to estimate when that common ancestor lived. Since some parts of mitochondrial DNA mutate at a relatively regular pace, they can act like a molecular clock. Wilson and his colleagues concluded that all living humans inherited their mitochondrial DNA from a woman who lived approximately 200,000 years ago.
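The clock arithmetic behind an estimate like this is simple in outline, even though the real analysis involves much more care about rates and calibration. In the sketch below, the divergence and mutation rate are placeholder values I picked for illustration, not the figures Wilson's team used.

# Toy molecular-clock calculation (placeholder numbers, for illustration only).
# Two living mitochondrial lineages both accumulate mutations after splitting
# from their shared ancestor, so the time back to that ancestor is roughly
# the observed divergence divided by twice the per-lineage rate.
divergence = 0.0033              # assumed fraction of sites differing between two lineages
rate_per_site_per_year = 1e-8    # assumed mutation rate per site per year

years_to_ancestor = divergence / (2 * rate_per_site_per_year)
print(f"{years_to_ancestor:,.0f} years")  # about 165,000 years with these inputs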
The first studies by Wilson and others on mitochondrial DNA turned out to be less than bulletproof. They had not gathered enough data to eliminate the possibility that humans might have originated in Asia rather than Africa. Wilson's students continued to collect more DNA samples from a wider range of ethnic groups. Other researchers tried studying other segments of mitochondrial DNA. Today they have sequenced the entire mitochondrial genome, and the data still points to a recent ancestor in Africa. All mitochondrial DNA, it now appears, came from a single individual who lived 160,000 years ago.
More recently, men offered their own genetic clues. Men pass down a Y chromosome to their sons, which remains almost completely unchanged in the process. Y chromosomes are harder to study than mitochondrial DNA (in part because each cell has only one Y chromosome but thousands of mitochondria). But thanks to some smart lab work, scientists began drawing the Y-chromosome tree. They also found that all Y chromosomes on Earth can be traced back to a recent ancestor in Africa. But instead of 160,000 years, the age of "mitochondrial Eve," they found that their "Y-chromosome Adam" lived about 60,000 years ago.
This discrepancy may seem bizarre. How can our male and female ancestors have lived thousands of years apart? Different genes have different histories. One gene may sweep very quickly through an entire species, while another one takes much longer to spread.
In 2001 I wrote an essay on this odd state of affairs for Natural History. At the time, scientists weren't sure just how real the discrepancy was. After all, both estimates still had healthy margins of error. If mitochondrial Eve was younger and Y-chromosome Adam was older, they might have missed each other by only a few thousand years. On the other hand, if the gap was real, there were a few possible explanations. In one scenario, a boy 60,000 years ago was born with a new mutation on his Y chromosome. When he grew up, that chromosome helped him reproduce much more successfully than men carrying other Y chromosomes, and his sons inherited his advantage. Thanks to natural selection, his chromosome became more common at a rapid rate, until it was the only Y chromosome left in our species. (This selective sweep might have been just the last in a long line of sweeps.)
Now comes a fascinating new paper in press at Molecular Biology and Evolution. Scientists at the University of Arizona suspected that some of the confusion over Adam and Eve might be the result of comparing the results of separate studies on the Y chromosome and mitochondrial DNA. One study might look at one set of men from one set of ethnic backgrounds. Another study might look at a different set of women from a different set of backgrounds. Comparing the studies might be like comparing apples and oranges. It would be better, the Arizona team decided, to study Y chromosomes and mitochondrial DNA all taken from the same people. Obviously, those people had to be men. The researchers collected DNA from men belonging to three populations--25 Khoisan from Southern Africa, 24 Khalks from Mongolia, and 24 highland Papuan New Guineans. Their ancestors branched off from one another tens of thousands of years ago.
The results they found were surprisingly consistent: the woman who bequeathed each set of men their mitochondrial DNA was twice as old as the man whose Y chromosome they shared. But the ages of Adam and Eve were different depending on which group of men the scientists studied. The Khoisan Adam lived 74,000 years ago, and the Khoisan Eve lived 176,500 years ago. But the Mongolian and New Guinean ancestors were both much younger--Adam averaged 48,000 years old and Eve 93,000 years.
You wouldn't expect these different ages if a single Y chromosome had been favored by natural selection, the Arizona team argues. Instead, they are struck by the fact that the Khoisan represent one of the oldest lineages of living humans, while Mongolians and New Guineans descend from younger populations of immigrants who left Africa around 50,000 years ago. The older people have an older Adam and Eve, and the younger people have a younger one. The researchers argue that some process has been steadily skewing the age of Adam relative to Eve in every human population.
Now here's where things may get a little sticky for the "one-man-one-woman-is-traditional-and-natural" camp. The explanation the Arizona scientists favor for their results is polygyny--two or more women having children with a single man. To understand why, imagine an island with 1,000 women and 1,000 men, all married in monogamous pairs, just as their parents were, and their grandparents, and so on back to the days of the first settlers on the island. Let's say that if you traced back the Y chromosomes in the men, you'd find a common ancestor 2,000 years ago. Now imagine that the 1,000 women are all bearing children again, but this time only 100 men are the fathers. You'd expect that the common ancestor of this smaller group of men lived much more recently than the common ancestor of all 1,000 men.
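A small simulation can make the island thought experiment concrete. This is my own toy sketch, not the Arizona team's method: working backward in time, every lineage picks an ancestor at random from the pool of breeding parents, lineages that pick the same parent merge, and we count the generations until everyone traces back to a single person. Shrinking the pool of fathers shrinks that number roughly in proportion.

import random

# Toy backward-in-time sketch of the island thought experiment (mine, not the
# study's method). Lineages that pick the same parent merge; we count the
# generations until only one ancestral lineage is left.
random.seed(0)

def generations_to_common_ancestor(sample_size, breeding_pool):
    lineages = sample_size
    generations = 0
    while lineages > 1:
        # each surviving lineage picks a parent at random from the breeding pool
        lineages = len({random.randrange(breeding_pool) for _ in range(lineages)})
        generations += 1
    return generations

# 1,000 women pass on mitochondrial DNA, but suppose only 100 men father children.
print("mitochondrial lines:", generations_to_common_ancestor(1000, breeding_pool=1000))
print("Y-chromosome lines: ", generations_to_common_ancestor(1000, breeding_pool=100))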
Scientists have proposed before that humans have a history of polygyny (our sperm, for example, looks like the sperm of polygynous apes and monkeys). But with these new DNA results, the Arizona researchers have made a powerful case that polygyny has been common for tens of thousands of years across the Old World. It's possible that polygyny was an open institution for much of that time, or that secret trysts made it a reality that few would acknowledge. What's much less possible is that monogamy has been the status quo for 50,000 years.
People are perfectly entitled to disagree over what sort of marriage is best for children or society. But if you want to bring nature or tradition into the argument, you'd better be sure you know what nature and tradition have to say on the subject.
After a couple months of merciless story deadlines, hard disk crashes, and strange viruses that you only find out about once you have kids, the Loom is creaking back to life. Expect several postings this week. For now, let me direct you to a review I wrote a couple weeks ago for The New York Times Book Review about Devil in the Mountain, a book about the Andes. The author, an Oxford geologist, dissects these mountains like a surgeon cutting open a living person. It reminded me of the times I've driven around with geologists; all of the landscape that blurs past most of us is a vast palimpsest to them, and waving their hand out the window, they tell you about hundreds of millions of years of mountain building and destruction. Short of grabbing a geologist for a ride, I'd suggest getting the book.
Chris Mooney has just blogged on a depressing new report that came out today that documents how the Bush administration puts politics before science.
For several decades, evolutionary biologists have been trying to figure out the forces that set this balance. It appears that they come down to a tug of war between competing interests. Imagine a species in which a freakish mutation makes females give birth to lots and lots of daughters. If you're a male, suddenly your chances of reproducing look very good--certainly better than those of all those females. Now imagine that some of these lucky males acquire mutations that make them father more sons than daughters. This son-favoring mutation would spread because of the advantage of being male. In time, these mutations would tip the balance of the sexes over to the males. The now-common males would have less chance of producing offspring than the now-rare females. The advantage shifts to the females. Over time, these opposite forces pull the ratio back and forth until they settle down into an equilibrium.
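The arithmetic driving that tug of war is easy to spell out. In the little calculation below (my own illustration, with made-up census numbers), the payoff of producing a son, measured in "daughter equivalents," is simply the number of females per male, because every offspring has exactly one father and one mother. Sons are a bargain when males are rare, a bad bet when they're common, and the two payoffs meet at 1:1.

# Payoff of a son relative to a daughter at different sex ratios (illustrative).
# Each offspring has one father and one mother, so expected matings per male
# scale as females / males.
for males_per_100 in (10, 30, 50, 70, 90):
    females = 100 - males_per_100
    print(f"{males_per_100} males per 100: a son is worth "
          f"{females / males_per_100:.2f} daughters")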
Sometimes, though, the ratio of males to females lurches out of balance. Some insects, for example, only give birth to daughters. That's because a third player has entered the tug of war--a bacterium called Wolbachia. Wolbachia lives in animal cells, and so the only way it can survive beyond the life of a host is to get into its eggs, which then grow into adults. Since Wolbachia cannot fit into sperm, males are useless to it. And so it has evolved a number of tools that it uses to force its female hosts to give birth only to daughters.
But biologists have also noticed that in some situations other animals--including humans--can give birth to an overabundance of sons or daughters. In the early 1970s, Robert Trivers and Dan Willard, both then at Harvard, wondered if mothers might be able to control the sex ratio of their offspring to boost their own reproductive success. They imagined a species, such as deer, in which relatively few males mated with most females. If a doe gives birth to a healthy male, he is likely to grow up into a healthy buck that has a good chance of producing lots of grandchildren for the doe. In fact, he would be able to produce far more grandchildren than even the most successful daughter. He could impregnate lots of mates, while a daughter could produce only a couple offspring a year, which she would then have to nurse. So if the doe is in good health and can give birth to strong offspring, she would do best to produce sons.
On the other hand, if this doe gives birth to a male in poor condition, he may be unable to compete with other males, and his chances of reproducing fall to zero. Since most females that live to adulthood give birth to at least some offspring, it would make more sense to give birth to a daughter rather than a son in poor condition.
Trivers and Willard speculated that if a mother could somehow gauge the prospects of her offspring, she might manipulate their sex ratio to her own evolutionary advantage. In bad times, she'd produce females, and in good times she'd produce males.
Since Trivers and Willard first published their idea in 1973, scientists have tested it in hundreds of studies. Some of the results have been quite powerful. Scientists moved birds known as Seychelles warblers from one habitat to another and measured the sex ratio of their chicks. In places with lots of food, the warblers produced lots of daughters that stayed at the nest to help raise their parents' later chicks. In places with little food, the ratio swung in favor of sons, which flew off in search of new territory. But the results have been far from clear-cut, particularly for mammals, which has led some researchers to wonder whether this particular force is very strong in mammal evolution.
In an article in press at The Proceedings of the Royal Society of London, South African zoologist Elissa Cameron of the University of Pretoria argues that the case for adjusting sex ratios is actually very good if you look at the evidence properly. She analyzed over 400 studies of sex ratios and noticed that, depending on the study, the scientists measured the condition of mothers at different points in their pregnancy. Some took their measurements at conception, some in the middle of gestation, and some at birth. While studies during gestation and at birth provided ambiguous results, almost all the studies done around conception gave strong support to the Trivers-Willard hypothesis.
These results, Cameron argues, indicate that mothers can shift the balance of the sexes, but only right around conception. It's possible, for example, that the amount of glucose in the uterus when an egg begins to divide may trigger different responses, depending on whether the egg is male or female. High glucose may signal that the mother is in good condition and can afford to raise sons, while low glucose would favor daughters. In a paper in press at Biology of Reproduction, Cheryl Rosenfeld and Michael Roberts of the University of Missouri review some evidence that may support Cameron. (You can download it for free here.) They raised female mice on two different diets--one high in saturated fats, and one high in carbohydrates. Mothers eating a high-fat diet (which probably led to high levels of glucose) gave birth to litters with two sons for every daughter. Mothers eating high-carb diets produced about one son for every two daughters.
And here's where test tube babies come in. Humans may not have the sort of mating imbalance that you find in deer, but a lot of evidence suggests that we descend from a long lineage of primates in which a few males mated with a lot of females. It wouldn't be surprising, therefore, to find adaptations in women to favor sons or daughters. Rosenfeld and Roberts survey some interesting studies on census data that suggest that this does indeed happen. And test tube babies offer some clues about the biochemistry that may be at work. When doctors fertilize a woman's eggs and let them begin to divide into a ball of cells, they keep the embryos in a solution of glucose. The farther along the doctors let the embryos develop, the more likely it is that their patients will wind up with sons rather than daughters. In this glucose-rich environment, male embryos may thrive, while females may risk failure to develop. Even in an age of reproductive technology, it seems, we are grappling with our evolutionary legacy.
Recently I've been trying to imagine a world without leaves. It's not easy to do at this time of year, when the trees around my house turn my windows into green walls. But a paper published on-line today at the Proceedings of the National Academy of Sciences inspires some effort. A team of English scientists offers a look back at Earth some 400 million years ago, at a time before leaves had evolved. Plants had been growing on dry land for at least 75 million years, but they were little more than mosses and liverworts growing on damp ground, along with some primitive vascular plants with stems a few inches high. True leaves--flat blades of tissue that acted like natural solar panels--were pretty much nowhere to be found.
It's strange enough to picture this boggy, bare-stemmed world. But it's stranger still to consider that plants at the time already had the genetic potential to grow leaves. Some species of green algae--the organisms from which plants evolved--were growing half-inch leaf-like sheets 450 million years ago. Tiny bud-like leaves have been found on 400-million-year-old plant fossils. Despite having the cellular equipment necessary to grow leaves, plants did not produce full-sized leaves in great numbers until about 350 million years ago. When they finally did become leafy, the first trees emerged and gave rise to the earliest forests. Leaves have dominated the planet ever since. They capture enough carbon dioxide to make millions of tons of biomass every year, and as roots suck up water, trillions of gallons evaporate through them.
Why did leaves take 50 million years to live up to their genetic potential? Apparently they had to wait.
Plants, the researchers point out, take in carbon dioxide through elaborate channels on their surface called stomata. Living plants can adjust the number of stomata that grow on their leaves. If you raise them in a greenhouse flooded with carbon dioxide, they will develop significantly fewer stomata. That's because the plants can gather the same amount of carbon dioxide they need to grow while allowing less water to evaporate out of their stomata.
Geological evidence shows that 400 million years ago, the atmosphere was loaded with carbon dioxide--about ten times the level before humans began to drive it up in the 1800s. (It was 280 parts per million in the early 1800s, is 370 ppm today, and is predicted to rise to 450 to 600 parts per million by 2050. In the early Devonian Period, it was around 3000 ppm.) Consistent with the behavior of living plants, the fossil plants from the early Devonian had very few stomata on their leafless stems.
Why didn't these early plants grow lots of leaves with few stomata? If they had, they could have grown faster and taller, and ultimately produced more offspring. But the scientists point out that a big leaf sitting in the sun risks overheating. The only things that can cool a leaf down are--once again--stomata. As water evaporates out of these channels, it cools the leaf, just as sweat cools our own skin. Unable to sweat, early Devonian leaves would have been a burden to plants, not a boon.
About 380 million years ago, however, carbon dioxide levels began to drop. Over the next 40 million years they crashed 90%, almost down to today's levels. The decline in carbon dioxide brought with it a drop in temperature: the planet cooled enough to allow glaciers to emerge at the poles. In the paper published today, the scientists describe what happened to plants during that time. Two different groups of plants--ferns and seed plants--began to sprout leaves. As years passed, the leaves became longer and wider. And at the same time, the leaves became increasingly packed with stomata. From 380 to 340 million years ago they became eight times denser. It seems that the drop in carbon dioxide and temperature turned leaves from burden to boon, and the world turned green.
It's possible that plants themselves may have ultimately been responsible for the emergence of leaves. Before leaves evolved, roots appeared on plants. Unlike mosses and liverworts, which can only soak up the water on the ground, plants with roots can seek out water, along with other nutrients. Their probing eroded rocks and built up soil. The fresh rock that the plants exposed each year could react with carbon dioxide dissolved in rainwater. Some of this carbon was carried down rivers to the ocean floor and could no longer rise back up into the atmosphere. In other words, roots pulled carbon dioxide out of the atmosphere and made it possible for leaves to evolve. The evolution of leaves in turn led to the rise of big trees, which could trap even more carbon, cooling the climate even more. Clearly, we are not the first organisms to tinker with the planet's thermostat.
Zallinger painted "The March of Progress" at a time when paleoanthropologists still had found only a few hominid species. When most experts looked at the evidence, it seemed reasonable to line it up in a straight ancestor-descendant line, running from chimp-like apes to Neanderthals to us. But over the past 30 years, scientists have dug up many new sorts of hominids--perhaps as many as 20 species--and many of them don't seem to fit in Zallinger's parade. In some cases a number of different species seemed to have lived side by side--some that might have been our ancestors and others that veered off into their own strange gorilla-like existence. Neanderthals were not our ancestors, but rather our cousins, having branched apart from our own lineage over 500,000 years ago. And finally, a number of paleoanthropologists have taken a fresh look at some of the hominid species identified by their predecessors, and they've concluded that two or more separate species may have been unfairly lumped together under the same name.
As a result, many paleoanthropologists have turned against the march of progress. Certainly there's a single line of genealogy that links us to an ancestor we share with chimpanzees. (Just consult your own DNA if you doubt it.) But the forces of evolution did not steadily drive hominids towards our own condition. Evolution simultaneously wandered down many different avenues, most of which ended up as dead ends. Instead of a march, many experts began thinking of human evolution in terms of a bush. Some scientists have even claimed that the march of progress was a case of people imposing their cultural biases--the Western perception of human history as a steady improvement--on the fossil record. (See, for example, Stephen Jay Gould's display of Zallingeresque cartoons on pp. 30-35 of Wonderful Life.)
But it's also possible to turn the question the other way. Scientists working in the 1970s were born just after the horrors of the Holocaust and came of age during the civil rights movements of the 1960s. Could they have been eager to find examples of diversity in the hominid fossil record, to complement today's focus on ethnic diversity? Tim White of Berkeley has raised this possibility. He points out that some of the prime evidence for the bushiness of the hominid tree comes from extraordinarily busted-up skulls, which have been reduced to hundreds or thousands of chips. In the reconstruction of these skulls, it's possible for researchers to create a shape that seems so distinct that it must belong to a separate species. White also notes that with so few hominid fossils to study, it's possible to mistake variability within a single hominid species for evidence of two or more species. And, as I wrote in March, White has found evidence in the teeth of the earliest hominids, between 5 and 6 million years old, that they may have been as similar to one another as chimpanzees and bonobos are.
Even the strongest advocates of the bushy tree generally held onto some pieces of the old march of progress. Before 1.8 million years ago, the evidence suggested, hominids were tiny, small-brained African apes that walked on two legs and could use simple stone tools. But shortly thereafter hominids got tall--in some cases over six feet tall. Their brains became larger as well. Their fossils (starting with the species Homo ergaster) began to turn up in harsh, dry African grasslands, suggesting that their new long legs helped them stride efficiently across vast distances, and their bigger brains allowed them to find new sources of food. And within a couple hundred thousand years, descendants of these tall hominids (known as Homo erectus) had bolted out of Africa and spread from the Caucasus Mountains to Indonesia. Some diversity would remain--Neanderthals, for example, appear to have evolved in Europe and didn't interbreed much if at all with our own species before they became extinct. But all of these species were now big-brained and long-legged.
This model began to strain a bit two years ago when scientists reported that one of the earliest fossils of a hominid out of Africa, in the former Soviet country of Georgia, was small. Although it was a full-grown adult, its brain was about 600 cubic centimeters (ours is 1400 cc, plus or minus about 200 cc), and fit into a miniature skull. One stray foot bone found along with the skull suggests that it stood less than five feet high. The discovery led some scientists to suggest that the first exodus of African hominids began before the tall Homo erectus evolved. But this proposal is undercut by the many features that link the Georgian skull to Homo erectus.
One fossil can't support all that much speculation. So that was why it was so fascinating to read today about the discovery of another tiny Homo erectus skull. This one comes from Africa, not Asia. Its skull was about the same size as the Georgian fossil. But it lived 800,000 years later. What are we to make of these tiny people? A couple hypotheses might explain these remarkable fossils, one from the March-of-Progress school, and one from the Bush school.
A Bushist could reasonably suggest that the evidence shows that a lot of the fossils that are called Homo erectus belong to separate species. Small hominid species were able to migrate out of Africa, and within Africa they thrived alongside taller species for hundreds of thousands of years. If this is true, it casts doubt on the importance of long legs for the spread of hominids. It also raises questions about how big a brain you need to make sophisticated tools. The newly discovered Kenyan individuals were found in the same rocks where scientists have found lots of well-crafted hand axes and butchered animal carcasses. Yet their brains were significantly smaller than those of other individuals that appear to be Homo erectus.
A Marcher could offer a different hypothesis: under certain conditions isolated groups of Homo erectus evolved from tall to short. After all, the same reduction has happened in our own species, among pygmies in Africa and several other populations around the world (and has evolved only over a matter of hundreds or thousands of years). But just because they became very small doesn't mean they became a separate species of their own.
The truth may be that scientists need dozens or hundreds of times more hominid fossils to make a confident choice between these alternatives. But today's news raises a very interesting possibility. In Africa, the fossil record of Homo erectus peters out about 800,000 years ago, replaced by species that seem more closely related to our own. But in Asia, heavy-browed, weak-chinned Homo erectus seems to have lingered a long time--perhaps as recently as 40,000 to 80,000 years ago in Indonesia. That's just around the time when our own species left Africa and arrived in southeast Asia. My hunch is that miniature hominids may have survived in isolated refuges for a long time. Are they still lurking on some deserted island or jungle enclave? I doubt it. But it wouldn't surprise me if Homo sapiens did come face to face with them before they disappeared.
Our brains are huge, particularly if you take into consideration the relative size of our bodies. Generally, the proportion of brain to body is pretty tight among mammals. But the human brain is seven times bigger than what you'd predict from the size of our body. Six million years ago, hominid brains were about a third the size they are today, comparable to a chimp's. So what accounts for the big boom? We would be flattering ourselves to say that the cause was something we are proud of--our ability to talk, or our gifts with tools. Certainly, our brains show signs of being adapted for these sorts of things (consider the language gene FOXP2). But those adaptations probably were little more than tinkerings with a brain that was already expanding thanks to other factors. And one of those factors may have been tricking our fellow hominids.
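That "seven times bigger" figure comes from allometry: across mammals, brain mass scales roughly as a power of body mass, and the encephalization quotient is the observed brain divided by the brain that scaling predicts. The sketch below uses Jerison's often-cited mammalian curve and round human and chimp masses of my own choosing, so the exact outputs are only illustrative.

# Encephalization quotient: observed brain mass over the mass predicted from
# body mass, using Jerison's classic mammalian scaling (masses in grams).
def encephalization_quotient(brain_g, body_g):
    expected_brain_g = 0.12 * body_g ** (2 / 3)
    return brain_g / expected_brain_g

print(round(encephalization_quotient(brain_g=1350, body_g=65000), 1))  # human, about 7
print(round(encephalization_quotient(brain_g=400, body_g=45000), 1))   # chimp, about 2.6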
In the 1980s, some primatologists noticed that monkeys and apes--unlike other mammals--sometimes deceived members of their own species, in order to trick them out of food or sneak off for some furtive courtships. The primatologists got to thinking that deception involved some pretty sophisticated brain power. A primate needed to understand something about the mental state of other primates and have the ability to predict how a change in that mental state might change the way other primates behaved.
The primatologists then considered the fact that humans aren't the only primates with oversized brains. In fact, monkeys and apes, on average, have brains twice the size you'd predict for mammals of their body size. Chimpanzees and other great apes have particularly big brains, and they seem to be particularly adept at tricking each other. What's more, primates don't simply have magnified brains. Instead, certain regions of the brain have expanded, such as the neocortex, the outer husk of the brain, which handles abstract associations. The kind of abstract thinking the neocortex handles is exactly what's necessary for tricking your fellow ape.
Taking all this into consideration, the primatologists made a pretty gutsy hypothesis: that the challenges of social life--including deception--actually drive the expansion of the primate brain. Sometimes called the Machiavellian Intelligence hypothesis, it has now been put to its most rigorous test so far, and passed quite well. Richard Byrne and Nadia Corp of the University of St. Andrews in Scotland published a study today in the Proceedings of the Royal Society of London. (The link's not up yet, but here's a New Scientist piece.) They found that in 18 species from all the major branches of primates, the size of the neocortex predicts how much deception the species practices. Bigger brains mean more trickery. They were able to statistically rule out a number of other factors that might have created a link where none existed. And they were able to show that deception is not just a side-effect of having a big brain or something that opportunistically emerges more often in big groups. Deception is probably just a good indicator of something bigger going on here--something psychologists sometimes call "social intelligence." Primates don't just deceive one another; they also cooperate and form alliances and bonds, which they can keep track of for years.
While deception isn't just an opportunistic result of being in big groups, big groups may well be the ultimate source of deception (and by extension big brains). That's the hypothesis of Robin Dunbar of Liverpool, as he detailed last fall in the Annual Review of Anthropology. Deception and other sorts of social intelligence can give a primate a reproductive edge in many different ways. It can trick its way to getting more food, for example; a female chimp can ward off an infanticidal male from her kids with the help of alliances. Certain factors make this social intelligence more demanding. If primates live under threat of a lot of predators, for example, they may get huddled up into big groups. Bigger groups mean more individuals to keep track of, which means more demands on the brain. Which, in turn, may lead to a bigger brain.
If that's true, then the human brain may have begun to emerge as our ancestors huddled in bigger groups. It's possible, for example, that early hominids living as bipeds in patchy forests became easier targets for leopards and other predators. Brain size increased modestly until about two million years ago. It may not have been able to grow any faster because of the diet of early hominids. They probably dined on nuts, fruits, and the occasional bit of meat, like chimpanzees do today. That may not have been enough fuel to support a really big brain; brain tissue is incredibly hungry, demanding 16 times more energy than muscle, pound for pound. It was only after hominids began making butchering tools out of stones and got a steady supply of meat from carcasses that the brain began to expand. And it was probably around this time (between 2 and 1.5 million years ago) that hominids began evolving the extraordinary powers of deception (and other sorts of social intelligence) that humans have. We don't just learn how other people act--we develop a powerful instinct about what's going on in their minds. (I wrote about the neuroscience behind this "mentalizing" last year in an article for Science.)
So next time you get played, temper your anger with a little evolutionary perspective. You've just come face to face with a force at work in our evolution for over 50 million years.
UPDATE 7/3/04: A skeptical reader doubted some of my statements about the brain and the energy it requires. Those who crave more information should check out Northwestern University anthropologist William Leonard's article "Food for Thought" in Scientific American.
These treks have something profound to say about biological change--how life can start out exquisitely adapted to one world and then eventually become adapted just as exquisitely to an utterly different one. Before creationists began marketing bacterial flagella and other examples of intelligent-design snake oil, they loved to harp on the transition from land to sea. Who could possibly believe the story those evolutionary biologists tell us, of a cow plunging into the sea and becoming a whale? And it was true, at least until the 1980s, that no one had found a fossil of a whale with legs. Then paleontologists working in Pakistan found the fossil of a 45-million-year-old whale named Ambulocetus that looked in life like a furry crocodile. Then they found a seal-like whale just a bit younger. Then they found tiny legs on a 50-foot-long, 40-million-year-old whale named Basilosaurus. I wrote about these discoveries and others like them in my first book, At the Water's Edge, in 1998. I'm amazed at how the fossils have continued turning up since then. Paleontologists have found goat-like legs on a dog-sized whale that lived 50 million years ago, known as Pakicetus. They've found other whales that may have been even more terrestrial than Pakicetus, and many others that branch off somewhere between Pakicetus and Basilosaurus. In the latest review of fossil whales, the evolutionary tree of these transitional species sports thirty branches.
All these discoveries have apparently made whales unsuitable for creationist rhetoric. Yes, you can still find some pseudo-attacks on the fossils, but you have to look hard. The more visible creationists, the ones who testify at school board meetings and write op-eds for the Wall Street Journal, don't bring up whales these days. The animals apparently no longer serve the cause. It's hard to distract people from evidence when it can kick them in the face.
Whales, moreover, were not the only mammals that moved into the water. Seals, sea lions, manatees, and other lineages evolved into swimmers as well, and paleontologists are also filling in their fossil record. It's fascinating to compare their invasions, to see how they converged on some of the same strategies for living in the water, and how they wound up with unique adaptations. The June issue of The Journal of Vertebrate Paleontology has two papers that shed light on one of the weirdest of these transitions--a transition, moreover, we know only from fossils. The animals in question were sloths.
That's right--I'm talking about the sort of animals that hang from trees by their three toes. Sloths may seem an unlikely choice for a sea-going creature; if you threw one in the water, I'd imagine it would sleepily sink away without a trace. I've never hurled a three-toed sloth myself, so I can't say for sure. But the sloths alive today are actually just a vestige of a once-grand menagerie that lived in North and South America. Many species prowled on the ground, growing as tall as ten feet. And one lineage of these giant sloths that lived on the coast of Peru moved into the ocean.
In 1995 Christian de Muizon of the National Museum of Natural History in Paris and his colleagues announced the discovery of sloth fossils in Peru dating back somewhere between three and seven million years. The rocks in which they found the bones had formed in the sea; the same rocks have yielded other ocean-going creatures including fish, sea lions, and weird dolphins with walrus-like tusks. The sloths, de Muizon concluded, were aquatic as well. Terrestrial sloths have much longer lower leg bones than upper ones, but the Peruvian sloths had reversed proportions. Manatees and otters also have reversed legs, which suggests that the sloths' limbs were adapted for powerful swimming strokes. The front of the skull was manatee-like as well: the jaws extended well beyond the front teeth and had a rich supply of blood vessels. Like manatees, de Muizon argued, the sloths had powerful muscular snouts they used to root out sea grass.
In their initial report, the paleontologists dubbed the fossils Thalassocnus natans. But it was already clear that they might have more than one species on their hands. In the years since, they've dug into the Peruvian rocks and found hundreds of sloth fossils, which they have been carefully studying and comparing. The new papers are not the last word on Thalassocnus, but the sloths are already shaping up as a great illustration of a transition to the water.
Instead of a single species, de Muizon's team has now identified at least five. They lived, respectively, seven to eight million years ago, six million years ago, five million years ago, three to four million years ago, and, finally, 1.5 to three million years ago. The earliest species look more like terrestrial ground sloths, while later species show more adaptations to the water. For example, the radius, one of the lower bones of the foreleg, became much broader. The change--which can also be seen in sea lions--allowed the forelegs to deliver a better swimming stroke. The teeth became less like those of ground sloths, which are adapted for browsing on leaves and assorted vegetation, and instead became adapted for full-time grazing. The coast of Peru is a bone-dry desert, so the only thing to graze on would have been sea grass.
The sloth skull changed as well. Both the upper and lower jaws stretched out further and further. From the oldest species to the youngest, the distance from the front teeth to the tip of the jaw nearly doubled. At the same time, the entire skull became stronger, to withstand the forces involved in tearing sea grasses from the sea floor. And finally, bones in the palate evolved to support muscles that could keep the digestive tract separate from the sloth's airway--something important when you're feeding underwater.
The changes documented in these fossils suggest that the earliest Thalassocnus sloths eked out an existence on land along the Peruvian shore. In a bleak desert, the sea grass that washed up on the beach would have been like manna. De Muizon and his colleagues have found another clue in the early sloths that supports this beach-comber hypothesis: their teeth bear scrape marks that suggest they were getting a lot of sand in their mouths; later sloths show no such marks. Over five million years or so, the sloths evolved adaptations that allowed them to move further and further out into the water, to feed on sea grass beds. Natural selection would have put a strong premium on these adaptations, since they would let sloths graze in lush underwater forests rather than pick through sandy flotsam and jetsam on the beach.
De Muizon's group has yet to sort out all the differences throughout the entire skeletons of all five species. We'll have to wait for those papers. But there's enough in print now to raise some interesting questions. In whales, seals, and manatees alike, the arms and hands became flippers--stubby, webbed, fin-like limbs. Thalassocnus still had big, long-clawed fingers on its hands. De Muizon proposes that they would have enabled the sloths to hold onto rocks to stay submerged as they fed on sea grass. Manatees don't need to do this because their bones are especially dense; the sloths had not yet acquired this adaptation. It seems that Thalassocnus only traveled part of the way down the road to a marine life before they became extinct.
Why they became extinct (as opposed to manatees, for example) is also intriguing. Did something happen 1.5 million to three million years ago that ruined their home? Perhaps the coastal waters off Peru became too cold. If the sloths had spread further along the coast, they might not have been so vulnerable. Other mammals moved into the water at very restricted sites as well. For their first few million years or so, whales could only be found off the coast of Pakistan. If some Indian volcano had blanketed the neighborhood in ash, we might never have known what a whale looks like.
UPDATE Monday June 21, 7 pm: PZ Myers has put photos of one of the skulls on the Panda's Thumb.
Love demands an explanation. Less than 5% of mammal species live monogamously, with males and females staying together beyond mating, and fathers helping mothers care for babies. We humans aren't the most monogamous species of the bunch, but we're closer to that end of the spectrum than the other end, where mating is little more than ships bumping into each other in the night.
A biological explanation for love--as with any biological explanation--has two levels. On one level are the molecular circuits that produce love, and on another level are the evolutionary forces that favor the construction of those circuits in the first place. It turns out that in this case one of the best guides to both levels of explanation is the vole.
The prairie vole is a five-percenter. When a male prairie vole mates, something happens to his brain. He tends to stay near his mate, even when other females are around, and then helps out with the kids when they arrive--grooming them, huddling around them to keep them warm, and so on. By contrast, the meadow vole, a close relative, is a ninety-five percenter. Male meadow voles typically couldn't care less. They're attracted to the scent of other females and don't offer parental care.
Scientists have searched for years now for the molecular basis for this difference. One promising candidate was a molecule called the vasopressin V1a receptor (V1aR). In certain parts of the brain, male prairie voles produce more V1aR than meadow voles. To test whether this difference had anything to do with the dedication of male prairie voles, Larry Young of Emory University and his colleagues injected a virus carrying the V1aR gene into the brains of meadow voles. As they report today in Nature, the virus caused the meadow voles to begin huddling with their mates almost as loyally as prairie voles.
So what happened? It seems that for prairie voles, love is a drug. When male prairie voles mate, their brains release a chemical called vasopressin. Vasopressin does a lot of things all over the body, such as regulating blood pressure. In the brains of prairie voles, it latches onto vasopressin V1a receptors that stud the neurons in a region called the ventral pallidum. This region is part of the brain network in vertebrates that produces a sense of reward. Young and company propose that the memory a male forms of mating with a female gets associated with her fragrance. Later, every time he gets a whiff of her, he feels that same sense of reward. This brain circuit is also responsible for the high from cocaine and other drugs--as well as the addiction. Even looking at drug paraphernalia can make an addict feel the old cravings, because his memories are tinged with the high.
By contrast, male meadow voles have relatively few vasopressin receptors in their brains, so that vasopressin released during sex doesn't switch on the same circuit and they never develop the same memories. And so, to them, the smell of their mate produces no special feeling. In an accompanying commentary, Evan Balaban of McGill University in Montreal says that V1a receptors may be "the adjustable nozzle atop a social-glue dispenser in the mammalian brain."
You can almost see the spam already on its way to your mailbox:
LADIES! Trying to land Mr. Right? Just inject our new Vaso-Love virus into his brain before your next date, and HE WILL BE YOURS FOREVER!!!
Don't buy it.
It's true that much of the circuitry in our brains is similar to that in voles. And we also produce vasopressin and other neurotransmitters that are associated with love and other feelings. That's because we share a common ancestor with voles that had the basic system found in the heads of both species. But since our two lineages diverged, perhaps 100 million years ago, the systems have diverged as well. The bonding in voles depends on the male smelling the female. That's not surprising, given that voles and other rodents have an exquisite sense of smell. We don't; we're more of a visual species. Differences like these mean that what works for the vole probably won't work for the human. (Even among rodents, Vaso-Love doesn't work: gene-therapy experiments--in which the prairie vole gene has been injected into mice and rats--haven't altered their behavior.)
Even if Vaso-Love won't be hitting the patent office anytime soon, Young's research can shed some light on our own love lives. That's because he and his colleagues have discovered a fascinating difference between the V1aR gene in the two species of voles. All genes have a front and a back end. At the front, you typically find a short sequence that acts as an on-off switch, which can only be operated by certain proteins. In prairie voles, this front end also contains a short sequence that is repeated over and over again--known as a microsatellite. In meadow voles, by contrast, the microsatellite is much shorter.
Somehow the microsatellite affects how the gene is switched on in each species. The long microsatellite in the prairie vole produces more receptors in the brain--and a loyal male--than the short one in the meadow vole. While it isn't clear how microsatellites alter the amount of V1aR produced, what is clear is that it is, evolutionarily speaking, easy to go from one behavior to the other. The same gene produces different behavior depending simply on whether the receptor it encodes is common or scarce. Moreover, microsatellites are famous for their high rate of mutation. That's because the DNA-copying machinery of our cells has a particularly hard time copying these sequences with complete accuracy. (Just imagine typing a copy of that manuscript in The Shining, filled with "All Work and No Play Makes Jack A Dull Boy" over and over again. It wouldn't be surprising if you discovered, once you were done, that you had missed a couple of those sentences, or added a couple.) Since microsatellites help control behavior, it's relatively easy for new behaviors to evolve as these microsatellites expand and contract.
This sort of flexibility can help explain why males and females of closely related mammals (such as the prairie and meadow voles) often evolve different behaviors towards each other. Here is where an explanation of love shifts levels, from molecules to evolutionary forces. Monogamy and fatherly care are favored by natural selection in certain situations, and not in others. Scientists have identified a lot of factors that can produce a shift from one to the other. One particularly unromantic force for monogamy is known as mate guarding. In some species, females can mate with many partners and then choose which sperm to use. If a male guards a female after mating, she'll have no choice but to use his sperm. Young's research suggests that mammals can switch from one sexual behavior to another pretty quickly, thanks to minor, common mutations.
Primates--our own branch of the mammal tree--seem to fit the general pattern. Marmosets, which pair up for years, have lots of V1a receptors, and promiscuous macaques don't. It will be interesting to see what scientists find when they look more carefully at vasopressin in the brains of humans and our closest living relatives, chimpanzees and bonobos. As a rule, monogamous species tend to have males and females of the same size. In other species, males tend to fight with one another to mate with females, which gives big males an edge over smaller ones. Male chimps, for example, are bigger than females. Since our ancestors split from those of chimps, we have become more monogamous, and males are now much closer to females in size.
One leading hypothesis for this shift has to do with our big brains. The human brain grows at a tremendous rate after birth, using up lots of energy along the way. Human children are also more helpless than other apes; a baby chimp can quickly clamp onto its mother and hang on. The care and feeding of hominid babies may have gradually required the work of two parents, which would have favored monogamy.
Not that monogamy became a hard and fast rule, of course. Even within the loyal prairie vole species, Young has found variation. Some individuals produce more vasopressin receptors, and some fewer. Likewise, some are more monogamous than others. On both counts, the same goes for humans. It's not surprising that humans should vary in their receptors, since microsatellites mutate so easily. Mutations that knock out most of the microsatellites have even been linked to autism, which is, among other things, a social disorder that makes it difficult for people to form deep attachments. Vaso-Love gene therapy may not get you the perfect man, but measuring how many V1a receptors a man has in his ventral pallidum might give you a hint of whether he's going to stick around. Will a PET scan some day become part of the modern courtship ritual?
Update 6/17 5:20 pm: Be sure to check out the comments from Bornea Chela (Jason South). He brings up some important points that are also discussed in the original papers.
In the New York Times this morning, the poet Diane Ackerman has written an essay about the brain, in which she waxes eloquent about its ability to discern patterns in the world. The essay is distilled from her new book, An Alchemy of Mind, which I've just reviewed for the Washington Post. I didn't much like the book, although it took me a while to figure out what was bothering me about it. If you read the essay, you can get the flavor of the book, not to mention Ackerman's general style in her previous books (which have taken on subjects such as endangered species and the senses). Ackerman has a fondness for sipping tea, tie-dye dresses, and hummingbird feeders, and an even greater fondness for writing about them. I know people who have been put off by her aesthetics, and I find them cloying as well. But that wasn't really at the heart of my dislike of the book. (And besides, my own aesthetics lean towards shark tapeworms and dissected sheep brains, so I'm hardly one to complain about other people.) It took me a few days to realize that the problem with the book was embedded in a deeper problem: how we talk about nature (which includes our own minds).
By we, I don't mean cognitive neuropsychologists or planetologists or molecular ecologists. I mean the rest of us, or the collective us--the ones who consciously or unconsciously create the language, metaphors, and stories that serve as our shared understanding of the world. The words we use, even in passing, to describe genes or brains or evolution can lock us into a view of nature that may be meaningful or misleading. When people say, "Being dull is just written into his DNA," they may only intend a light joke, but the metaphor conjures a false image of how personality emerges from genetics and environment and experience. The phrase may seem like nothing more than a figure of speech--until people step into the office of a genetic counselor to find out about their unborn child.
The brain suffers from plenty of bad language. In some cases, the language is bad because it's unimaginative. In An Alchemy of Mind, Ackerman points out that calling neurotransmitters and receptors keys and locks does a disservice to their soft, floppy nature. In other cases, though, the language is bad because it's based on gross simplifications or outmoded ideas. Yet it survives, taking on a life of its own, separate from the science. My favorite example, which I wrote about last year, is the bogus story you always hear about how we only use ten percent of our brains.
Ackerman indulges in this sort of bad language a lot. One example: she loves referring to our "reptile brain," as if there were a nub of unaltered neurons sitting at the core of our heads, driving our basic instincts. The reality of the brain--and of evolution--is far more complex. The brains of the reptilian forerunners of mammals were the scaffolding for a new mammalian brain; the old components have been integrated so intimately with our "higher" brain regions that there's no way to distinguish between the two in any fundamental way. Dopamine is an ancient neurotransmitter that provides a sense of anticipation and reward to other animals, including reptiles. But our most sophisticated abilities for learning abstract rules, carried out in our elaborate prefrontal cortex, depend on rewards of dopamine to lay down the proper connections between neurons. There isn't a new brain and an old brain at work here--just one system. Yet, despite all this, it remains seductive to use a phrase like "reptile brain." It conjures up lots of meanings. Ackerman floods her book with such language, and I grouse about it--along with other bad language--in my review.
Which makes me wonder, as a science writer myself: is all poetry ultimately dangerous? Does scientific understanding inevitably get abandoned as we turn to the juicy figure of speech?
Update 6/14/04 11 AM: NY Times link fixed