Sunday, May 20, 2007

Bloggin' it

After reading about the relative success of the new book by Paul Shirley, an NBA journeyman, I thought to myself:
1. He's not a bad writer - quite witty and funny. Wait, I'm witty and funny...sometimes. What if I could capture it for posterity's sake, to prove to my friends that yes, at times I was witty and funny, and not serious and pseudo-philosophical all of the time? That was a long sentence. Am I supposed to end it with a question mark, given that it started as a question and just kept going...
2. He had a unique perspective on an insiders-only industry about which most people have an opinion. And I work in the music industry, which, as we all know, is extremely stable. (I should probably add a pseudo-lawyer's note that this blog represents only my views and not those of the music label which employs me.)
3. Imagine if he kept his own blog: provided the viewership was high enough, he could just have checks rolling in from syndicating ads via AdSense or the like. Yes - this is my dream. Having enough readers enjoy this so I can get a fat check.

Okay, I'll keep it short. My friend, who shall remain nameless (EE), recently remarked that my entries were way too long. So instead I'll look to write more often, in Price Club-like sample sizes (seriously, aren't those samples the best thing? Go in the morning and just snack your way through the shopping trip. Free lunch, baby).

Sunday, March 4, 2007

The evolutionary purpose of religion

Like any good writer, the author serves more to explain than to take sides. But she does provide compelling evidence that belief and religion are evolutionary byproducts - and if so, they may be outmoded modules less likely to be needed in today's scientific world. One of the more compelling examples mentioned involves our inability to conceptualize non-consciousness (i.e. death) and hence our default conclusion of an afterlife.

Link
God has always been a puzzle for Scott Atran. When he was 10 years old, he scrawled a plaintive message on the wall of his bedroom in Baltimore. “God exists,” he wrote in black and orange paint, “or if he doesn’t, we’re in trouble.” Atran has been struggling with questions about religion ever since — why he himself no longer believes in God and why so many other people, everywhere in the world, apparently do.

Call it God; call it superstition; call it, as Atran does, “belief in hope beyond reason” — whatever you call it, there seems an inherent human drive to believe in something transcendent, unfathomable and otherworldly, something beyond the reach or understanding of science. “Why do we cross our fingers during turbulence, even the most atheistic among us?” asked Atran when we spoke at his Upper West Side pied-à-terre in January. Atran, who is 55, is an anthropologist at the National Center for Scientific Research in Paris, with joint appointments at the University of Michigan and the John Jay College of Criminal Justice in New York. His research interests include cognitive science and evolutionary biology, and sometimes he presents students with a wooden box that he pretends is an African relic. “If you have negative sentiments toward religion,” he tells them, “the box will destroy whatever you put inside it.” Many of his students say they doubt the existence of God, but in this demonstration they act as if they believe in something. Put your pencil into the magic box, he tells them, and the nonbelievers do so blithely. Put in your driver’s license, he says, and most do, but only after significant hesitation. And when he tells them to put in their hands, few will.

If they don’t believe in God, what exactly are they afraid of?

Atran first conducted the magic-box demonstration in the 1980s, when he was at Cambridge University studying the nature of religious belief. He had received a doctorate in anthropology from Columbia University and, in the course of his fieldwork, saw evidence of religion everywhere he looked — at archaeological digs in Israel, among the Mayans in Guatemala, in artifact drawers at the American Museum of Natural History in New York. Atran is Darwinian in his approach, which means he tries to explain behavior by how it might once have solved problems of survival and reproduction for our early ancestors. But it was not clear to him what evolutionary problems might have been solved by religious belief. Religion seemed to use up physical and mental resources without an obvious benefit for survival. Why, he wondered, was religion so pervasive, when it was something that seemed so costly from an evolutionary point of view?

The magic-box demonstration helped set Atran on a career studying why humans might have evolved to be religious, something few people were doing back in the ’80s. Today, the effort has gained momentum, as scientists search for an evolutionary explanation for why belief in God exists — not whether God exists, which is a matter for philosophers and theologians, but why the belief does.

This is different from the scientific assault on religion that has been garnering attention recently, in the form of best-selling books from scientific atheists who see religion as a scourge. In “The God Delusion,” published last year and still on best-seller lists, the Oxford evolutionary biologist Richard Dawkins concludes that religion is nothing more than a useless, and sometimes dangerous, evolutionary accident. “Religious behavior may be a misfiring, an unfortunate byproduct of an underlying psychological propensity which in other circumstances is, or once was, useful,” Dawkins wrote. He is joined by two other best-selling authors — Sam Harris, who wrote “The End of Faith,” and Daniel Dennett, a philosopher at Tufts University who wrote “Breaking the Spell.” The three men differ in their personal styles and whether they are engaged in a battle against religiosity, but their names are often mentioned together. They have been portrayed as an unholy trinity of neo-atheists, promoting their secular world view with a fervor that seems almost evangelical.

Lost in the hullabaloo over the neo-atheists is a quieter and potentially more illuminating debate. It is taking place not between science and religion but within science itself, specifically among the scientists studying the evolution of religion. These scholars tend to agree on one point: that religious belief is an outgrowth of brain architecture that evolved during early human history. What they disagree about is why a tendency to believe evolved, whether it was because belief itself was adaptive or because it was just an evolutionary byproduct, a mere consequence of some other adaptation in the evolution of the human brain.

Which is the better biological explanation for a belief in God — evolutionary adaptation or neurological accident? Is there something about the cognitive functioning of humans that makes us receptive to belief in a supernatural deity? And if scientists are able to explain God, what then? Is explaining religion the same thing as explaining it away? Are the nonbelievers right, and is religion at its core an empty undertaking, a misdirection, a vestigial artifact of a primitive mind? Or are the believers right, and does the fact that we have the mental capacities for discerning God suggest that it was God who put them there?

In short, are we hard-wired to believe in God? And if we are, how and why did that happen?

“All of our raptures and our drynesses, our longings and pantings, our questions and beliefs . . . are equally organically founded,” William James wrote in “The Varieties of Religious Experience.” James, who taught philosophy and experimental psychology at Harvard for more than 30 years, based his book on a 1901 lecture series in which he took some early tentative steps at breaching the science-religion divide.

In the century that followed, a polite convention generally separated science and religion, at least in much of the Western world. Science, as the old trope had it, was assigned the territory that describes how the heavens go; religion, how to go to heaven.

Anthropologists like Atran and psychologists as far back as James had been looking at the roots of religion, but the mutual hands-off policy really began to shift in the 1990s. Religion made incursions into the traditional domain of science with attempts to bring intelligent design into the biology classroom and to choke off human embryonic stem-cell research on religious grounds. Scientists responded with counterincursions. Experts from the hard sciences, like evolutionary biology and cognitive neuroscience, joined anthropologists and psychologists in the study of religion, making God an object of scientific inquiry.

The debate over why belief evolved is between byproduct theorists and adaptationists. You might think that the byproduct theorists would tend to be nonbelievers, looking for a way to explain religion as a fluke, while the adaptationists would be more likely to be believers who can intuit the emotional, spiritual and community advantages that accompany faith. Or you might think they would all be atheists, because what believer would want to subject his own devotion to rationalism’s cold, hard scrutiny? But a scientist’s personal religious view does not always predict which side he will take. And this is just one sign of how complex and surprising this debate has become.

Angels, demons, spirits, wizards, gods and witches have peppered folk religions since mankind first started telling stories. Charles Darwin noted this in “The Descent of Man.” “A belief in all-pervading spiritual agencies,” he wrote, “seems to be universal.” According to anthropologists, religions that share certain supernatural features — belief in a noncorporeal God or gods, belief in the afterlife, belief in the ability of prayer or ritual to change the course of human events — are found in virtually every culture on earth.

This is certainly true in the United States. About 6 in 10 Americans, according to a 2005 Harris Poll, believe in the devil and hell, and about 7 in 10 believe in angels, heaven and the existence of miracles and of life after death. A 2006 survey at Baylor University found that 92 percent of respondents believe in a personal God — that is, a God with a distinct set of character traits ranging from “distant” to “benevolent.”

When a trait is universal, evolutionary biologists look for a genetic explanation and wonder how that gene or genes might enhance survival or reproductive success. In many ways, it’s an exercise in post-hoc hypothesizing: what would have been the advantage, when the human species first evolved, for an individual who happened to have a mutation that led to, say, a smaller jaw, a bigger forehead, a better thumb? How about certain behavioral traits, like a tendency for risk-taking or for kindness?

Atran saw such questions as a puzzle when applied to religion. So many aspects of religious belief involve misattribution and misunderstanding of the real world. Wouldn’t this be a liability in the survival-of-the-fittest competition? To Atran, religious belief requires taking “what is materially false to be true” and “what is materially true to be false.” One example of this is the belief that even after someone dies and the body demonstrably disintegrates, that person will still exist, will still be able to laugh and cry, to feel pain and joy. This confusion “does not appear to be a reasonable evolutionary strategy,” Atran wrote in “In Gods We Trust: The Evolutionary Landscape of Religion” in 2002. “Imagine another animal that took injury for health or big for small or fast for slow or dead for alive. It’s unlikely that such a species could survive.” He began to look for a sideways explanation: if religious belief was not adaptive, perhaps it was associated with something else that was.

Atran intended to study mathematics when he entered Columbia as a precocious 17-year-old. But he was distracted by the radical politics of the late ’60s. One day in his freshman year, he found himself at an antiwar rally listening to Margaret Mead, then perhaps the most famous anthropologist in America. Atran, dressed in a flamboyant Uncle Sam suit, stood up and called her a sellout for saying the protesters should be writing to their congressmen instead of staging demonstrations. “Young man,” the unflappable Mead said, “why don’t you come see me in my office?”

Atran, equally unflappable, did go to see her — and ended up working for Mead, spending much of his time exploring the cabinets of curiosities in her tower office at the American Museum of Natural History. Soon he switched his major to anthropology.

Many of the museum specimens were religious, Atran says. So were the artifacts he dug up on archaeological excursions in Israel in the early ’70s. Wherever he turned, he encountered the passion of religious belief. Why, he wondered, did people work so hard against their preference for logical explanations to maintain two views of the world, the real and the unreal, the intuitive and the counterintuitive?

Maybe cognitive effort was precisely the point. Maybe it took less mental work than Atran realized to hold belief in God in one’s mind. Maybe, in fact, belief was the default position for the human mind, something that took no cognitive effort at all.

While still an undergraduate, Atran decided to explore these questions by organizing a conference on universal aspects of culture and inviting all his intellectual heroes: the linguist Noam Chomsky, the psychologist Jean Piaget, the anthropologists Claude Levi-Strauss and Gregory Bateson (who was also Margaret Mead’s ex-husband), the Nobel Prize-winning biologists Jacques Monod and Francois Jacob. It was 1974, and the only site he could find for the conference was at a location just outside Paris. Atran was a scraggly 22-year-old with a guitar who had learned his French from comic books. To his astonishment, everyone he invited agreed to come.

Atran is a sociable man with sharp hazel eyes, who sparks provocative conversations the way other men pick bar fights. As he traveled in the ’70s and ’80s, he accumulated friends who were thinking about the issues he was: how culture is transmitted among human groups and what evolutionary function it might serve. “I started looking at history, and I wondered why no society ever survived more than three generations without a religious foundation as its raison d’être,” he says. Soon he turned to an emerging subset of evolutionary theory — the evolution of human cognition.

Some cognitive scientists think of brain functioning in terms of modules, a series of interconnected machines, each one responsible for a particular mental trick. They do not tend to talk about a God module per se; they usually consider belief in God a consequence of other mental modules.

Religion, in this view, is “a family of cognitive phenomena that involves the extraordinary use of everyday cognitive processes,” Atran wrote in “In Gods We Trust.” “Religions do not exist apart from the individual minds that constitute them and the environments that constrain them, any more than biological species and varieties exist independently of the individual organisms that compose them and the environments that conform them.”

At around the time “In Gods We Trust” appeared five years ago, a handful of other scientists — Pascal Boyer, now at Washington University; Justin Barrett, now at Oxford; Paul Bloom at Yale — were addressing these same questions. In synchrony they were moving toward the byproduct theory.

Darwinians who study physical evolution distinguish between traits that are themselves adaptive, like having blood cells that can transport oxygen, and traits that are byproducts of adaptations, like the redness of blood. There is no survival advantage to blood’s being red instead of turquoise; it is just a byproduct of the trait that is adaptive, having blood that contains hemoglobin.

Something similar explains aspects of brain evolution, too, say the byproduct theorists. Which brings us to the idea of the spandrel.

Stephen Jay Gould, the famed evolutionary biologist at Harvard who died in 2002, and his colleague Richard Lewontin proposed “spandrel” to describe a trait that has no adaptive value of its own. They borrowed the term from architecture, where it originally referred to the V-shaped structure formed between two rounded arches. The structure is not there for any purpose; it is there because that is what happens when arches align.

In architecture, a spandrel can be neutral or it can be made functional. Building a staircase, for instance, creates a space underneath that is innocuous, just a blank sort of triangle. But if you put a closet there, the under-stairs space takes on a function, unrelated to the staircase’s but useful nonetheless. Either way, functional or nonfunctional, the space under the stairs is a spandrel, an unintended byproduct.

“Natural selection made the human brain big,” Gould wrote, “but most of our mental properties and potentials may be spandrels — that is, nonadaptive side consequences of building a device with such structural complexity.”

The possibility that God could be a spandrel offered Atran a new way of understanding the evolution of religion. But a spandrel of what, exactly?

Hardships of early human life favored the evolution of certain cognitive tools, among them the ability to infer the presence of organisms that might do harm, to come up with causal narratives for natural events and to recognize that other people have minds of their own with their own beliefs, desires and intentions. Psychologists call these tools, respectively, agent detection, causal reasoning and theory of mind.

Agent detection evolved because assuming the presence of an agent — which is jargon for any creature with volitional, independent behavior — is more adaptive than assuming its absence. If you are a caveman on the savannah, you are better off presuming that the motion you detect out of the corner of your eye is an agent and something to run from, even if you are wrong. If it turns out to have been just the rustling of leaves, you are still alive; if what you took to be leaves rustling was really a hyena about to pounce, you are dead.

A classic experiment from the 1940s by the psychologists Fritz Heider and Marianne Simmel suggested that imputing agency is so automatic that people may do it even for geometric shapes. For the experiment, subjects watched a film of triangles and circles moving around. When asked what they had been watching, the subjects used words like “chase” and “capture.” They did not just see the random movement of shapes on a screen; they saw pursuit, planning, escape.

So if there is motion just out of our line of sight, we presume it is caused by an agent, an animal or person with the ability to move independently. This usually operates in one direction only; lots of people mistake a rock for a bear, but almost no one mistakes a bear for a rock.

What does this mean for belief in the supernatural? It means our brains are primed for it, ready to presume the presence of agents even when such presence confounds logic. “The most central concepts in religions are related to agents,” Justin Barrett, a psychologist, wrote in his 2004 summary of the byproduct theory, “Why Would Anyone Believe in God?” Religious agents are often supernatural, he wrote, “people with superpowers, statues that can answer requests or disembodied minds that can act on us and the world.”

A second mental module that primes us for religion is causal reasoning. The human brain has evolved the capacity to impose a narrative, complete with chronology and cause-and-effect logic, on whatever it encounters, no matter how apparently random. “We automatically, and often unconsciously, look for an explanation of why things happen to us,” Barrett wrote, “and ‘stuff just happens’ is no explanation. Gods, by virtue of their strange physical properties and their mysterious superpowers, make fine candidates for causes of many of these unusual events.” The ancient Greeks believed thunder was the sound of Zeus’s thunderbolt. Similarly, a contemporary woman whose cancer treatment works despite 10-to-1 odds might look for a story to explain her survival. It fits better with her causal-reasoning tool for her recovery to be a miracle, or a reward for prayer, than for it to be just a lucky roll of the dice.

A third cognitive trick is a kind of social intuition known as theory of mind. It’s an odd phrase for something so automatic, since the word “theory” suggests formality and self-consciousness. Other terms have been used for the same concept, like intentional stance and social cognition. One good alternative is the term Atran uses: folkpsychology.

Folkpsychology, as Atran and his colleagues see it, is essential to getting along in the contemporary world, just as it has been since prehistoric times. It allows us to anticipate the actions of others and to lead others to believe what we want them to believe; it is at the heart of everything from marriage to office politics to poker. People without this trait, like those with severe autism, are impaired, unable to imagine themselves in other people’s heads.

The process begins with positing the existence of minds, our own and others’, that we cannot see or feel. This leaves us open, almost instinctively, to belief in the separation of the body (the visible) and the mind (the invisible). If you can posit minds in other people that you cannot verify empirically, suggests Paul Bloom, a psychologist and the author of “Descartes’ Baby,” published in 2004, it is a short step to positing minds that do not have to be anchored to a body. And from there, he said, it is another short step to positing an immaterial soul and a transcendent God.

The traditional psychological view has been that until about age 4, children think that minds are permeable and that everyone knows whatever the child himself knows. To a young child, everyone is infallible. All other people, especially Mother and Father, are thought to have the same sort of insight as an all-knowing God.

But at a certain point in development, this changes. (Some new research suggests this might occur as early as 15 months.) The “false-belief test” is a classic experiment that highlights the boundary. Children watch a puppet show with a simple plot: John comes onstage holding a marble, puts it in Box A and walks off. Mary comes onstage, opens Box A, takes out the marble, puts it in Box B and walks off. John comes back onstage. The children are asked, Where will John look for the marble?

Very young children, or autistic children of any age, say John will look in Box B, since they know that’s where the marble is. But older children give a more sophisticated answer. They know that John never saw Mary move the marble and that as far as he is concerned it is still where he put it, in Box A. Older children have developed a theory of mind; they understand that other people sometimes have false beliefs. Even though they know that the marble is in Box B, they respond that John will look for it in Box A.

The adaptive advantage of folkpsychology is obvious. According to Atran, our ancestors needed it to survive their harsh environment, since folkpsychology allowed them to “rapidly and economically” distinguish good guys from bad guys. But how did folkpsychology — an understanding of ordinary people’s ordinary minds — allow for a belief in supernatural, omniscient minds? And if the byproduct theorists are right and these beliefs were of little use in finding food or leaving more offspring, why did they persist?

Atran ascribes the persistence to evolutionary misdirection, which, he says, happens all the time: “Evolution always produces something that works for what it works for, and then there’s no control for however else it’s used.” On a sunny weekday morning, over breakfast at a French cafe on upper Broadway, he tried to think of an analogy and grinned when he came up with an old standby: women’s breasts. Because they are associated with female hormones, he explained, full breasts indicate a woman is fertile, and the evolution of the male brain’s preference for them was a clever mating strategy. But breasts are now used for purposes unrelated to reproduction, to sell anything from deodorant to beer. “A Martian anthropologist might look at this and say, ‘Oh, yes, so these breasts must have somehow evolved to sell hygienic stuff or food to human beings,’ ” Atran said. But the Martian would, of course, be wrong. Equally wrong would be to make the same mistake about religion, thinking it must have evolved to make people behave a certain way or feel a certain allegiance.

That is what most fascinated Atran. “Why is God in there?” he wondered.

The idea of an infallible God is comfortable and familiar, something children readily accept. You can see this in the experiment Justin Barrett conducted recently — a version of the traditional false-belief test but with a religious twist. Barrett showed young children a box with a picture of crackers on the outside. What do you think is inside this box? he asked, and the children said, “Crackers.” Next he opened it and showed them that the box was filled with rocks. Then he asked two follow-up questions: What would your mother say is inside this box? And what would God say?

As earlier theory-of-mind experiments already showed, 3- and 4-year-olds tended to think Mother was infallible, and since the children knew the right answer, they assumed she would know it, too. They usually responded that Mother would say the box contained rocks. But 5- and 6-year-olds had learned that Mother, like any other person, could hold a false belief in her mind, and they tended to respond that she would be fooled by the packaging and would say, “Crackers.”

And what would God say? No matter what their age, the children, who were all Protestants, told Barrett that God would answer, “Rocks.” This was true even for the older children, who, as Barrett understood it, had developed folkpsychology and had used it when predicting a wrong response for Mother. They had learned that, in certain situations, people could be fooled — but they had also learned that there is no fooling God.

The bottom line, according to byproduct theorists, is that children are born with a tendency to believe in omniscience, invisible minds, immaterial souls — and then they grow up in cultures that fill their minds, hard-wired for belief, with specifics. It is a little like language acquisition, Paul Bloom says, with the essential difference that language is a biological adaptation and religion, in his view, is not. We are born with an innate facility for language but the specific language we learn depends on the environment in which we are raised. In much the same way, he says, we are born with an innate tendency for belief, but the specifics of what we grow up believing — whether there is one God or many, whether the soul goes to heaven or occupies another animal after death — are culturally shaped.

Whatever the specifics, certain beliefs can be found in all religions. Those that prevail, according to the byproduct theorists, are those that fit most comfortably with our mental architecture. Psychologists have shown, for instance, that people attend to, and remember, things that are unfamiliar and strange, but not so strange as to be impossible to assimilate. Ideas about God or other supernatural agents tend to fit these criteria. They are what Pascal Boyer, an anthropologist and psychologist, called “minimally counterintuitive”: weird enough to get your attention and lodge in your memory but not so weird that you reject them altogether. A tree that talks is minimally counterintuitive, and you might believe it as a supernatural agent. A tree that talks and flies and time-travels is maximally counterintuitive, and you are more likely to reject it.

Atran, along with Ara Norenzayan of the University of British Columbia, studied the idea of minimally counterintuitive agents earlier this decade. They presented college students with lists of fantastical creatures and asked them to choose the ones that seemed most “religious.” The convincingly religious agents, the students said, were not the most outlandish — not the turtle that chatters and climbs or the squealing, flowering marble — but those that were just outlandish enough: giggling seaweed, a sobbing oak, a talking horse. Giggling seaweed meets the requirement of being minimally counterintuitive, Atran wrote. So does a God who has a human personality except that he knows everything or a God who has a mind but has no body.

It is not enough for an agent to be minimally counterintuitive for it to earn a spot in people’s belief systems. An emotional component is often needed, too, if belief is to take hold. “If your emotions are involved, then that’s the time when you’re most likely to believe whatever the religion tells you to believe,” Atran says. Religions stir up emotions through their rituals — swaying, singing, bowing in unison during group prayer, sometimes working people up to a state of physical arousal that can border on frenzy. And religions gain strength during the natural heightening of emotions that occurs in times of personal crisis, when the faithful often turn to shamans or priests. The most intense personal crisis, for which religion can offer powerfully comforting answers, is when someone comes face to face with mortality.

In John Updike’s celebrated early short story “Pigeon Feathers,” 14-year-old David spends a lot of time thinking about death. He suspects that adults are lying when they say his spirit will live on after he dies. He keeps catching them in inconsistencies when he asks where exactly his soul will spend eternity. “Don’t you see,” he cries to his mother, “if when we die there’s nothing, all your sun and fields and what not are all, ah, horror? It’s just an ocean of horror.”

The story ends with David’s tiny revelation and his boundless relief. The boy gets a gun for his 15th birthday, which he uses to shoot down some pigeons that have been nesting in his grandmother’s barn. Before he buries them, he studies the dead birds’ feathers. He is amazed by their swirls of color, “designs executed, it seemed, in a controlled rapture.” And suddenly the fears that have plagued him are lifted, and with a “slipping sensation along his nerves that seemed to give the air hands, he was robed in this certainty: that the God who had lavished such craft upon these worthless birds would not destroy His whole Creation by refusing to let David live forever.”

Fear of death is an undercurrent of belief. The spirits of dead ancestors, ghosts, immortal deities, heaven and hell, the everlasting soul: the notion of spiritual existence after death is at the heart of almost every religion. According to some adaptationists, this is part of religion’s role, to help humans deal with the grim certainty of death. Believing in God and the afterlife, they say, is how we make sense of the brevity of our time on earth, how we give meaning to this brutish and short existence. Religion can offer solace to the bereaved and comfort to the frightened.

But the spandrelists counter that saying these beliefs are consolation does not mean they offered an adaptive advantage to our ancestors. “The human mind does not produce adequate comforting delusions against all situations of stress or fear,” wrote Pascal Boyer, a leading byproduct theorist, in “Religion Explained,” which came out a year before Atran’s book. “Indeed, any organism that was prone to such delusions would not survive long.”

Whether or not it is adaptive, belief in the afterlife gains power in two ways: from the intensity with which people wish it to be true and from the confirmation it seems to get from the real world. This brings us back to folkpsychology. We try to make sense of other people partly by imagining what it is like to be them, an adaptive trait that allowed our ancestors to outwit potential enemies. But when we think about being dead, we run into a cognitive wall. How can we possibly think about not thinking? “Try to fill your consciousness with the representation of no-consciousness, and you will see the impossibility of it,” the Spanish philosopher Miguel de Unamuno wrote in “Tragic Sense of Life.” “The effort to comprehend it causes the most tormenting dizziness. We cannot conceive of ourselves as not existing.”

Much easier, then, to imagine that the thinking somehow continues. This is what young children seem to do, as a study at Florida Atlantic University demonstrated a few years ago. Jesse Bering and David Bjorklund, the psychologists who conducted the study, used finger puppets to act out the story of a mouse, hungry and lost, who is spotted by an alligator. “Well, it looks like Brown Mouse got eaten by Mr. Alligator,” the narrator says at the end. “Brown Mouse is not alive anymore.”

Afterward, Bering and Bjorklund asked their subjects, ages 4 to 12, what it meant for Brown Mouse to be “not alive anymore.” Is he still hungry? Is he still sleepy? Does he still want to go home? Most said the mouse no longer needed to eat or drink. But a large proportion, especially the younger ones, said that he still had thoughts, still loved his mother and still liked cheese. The children understood what it meant for the mouse’s body to cease to function, but many believed that something about the mouse was still alive.

“Our psychological architecture makes us think in particular ways,” says Bering, now at Queen's University in Belfast, Northern Ireland. “In this study, it seems, the reason afterlife beliefs are so prevalent is that underlying them is our inability to simulate our nonexistence.”

It might be just as impossible to simulate the nonexistence of loved ones. A large part of any relationship takes place in our minds, Bering said, so it’s natural for it to continue much as before after the other person’s death. It is easy to forget that your sister is dead when you reach for the phone to call her, since your relationship was based so much on memory and imagined conversations even when she was alive. In addition, our agent-detection device sometimes confirms the sensation that the dead are still with us. The wind brushes our cheek, a spectral shape somehow looks familiar and our agent detection goes into overdrive. Dreams, too, have a way of confirming belief in the afterlife, with dead relatives appearing in dreams as if from beyond the grave, seeming very much alive.

Belief is our fallback position, according to Bering; it is our reflexive style of thought. “We have a basic psychological capacity that allows anyone to reason about unexpected natural events, to see deeper meaning where there is none,” he says. “It’s natural; it’s how our minds work.”

Intriguing as the spandrel logic might be, there is another way to think about the evolution of religion: that religion evolved because it offered survival advantages to our distant ancestors. This is where the action is in the science of God debate, with a coterie of adaptationists arguing on behalf of the primary benefits, in terms of survival advantages, of religious belief.

The trick in thinking about adaptation is that even if a trait offers no survival advantage today, it might have had one long ago. This is how Darwinians explain how certain physical characteristics persist even if they do not currently seem adaptive — by asking whether they might have helped our distant ancestors form social groups, feed themselves, find suitable mates or keep from getting killed. A facility for storing calories as fat, for instance, which is a detriment in today’s food-rich society, probably helped our ancestors survive cyclical famines.

So trying to explain the adaptiveness of religion means looking for how it might have helped early humans survive and reproduce. As some adaptationists see it, this could have worked on two levels, individual and group. Religion made people feel better, less tormented by thoughts about death, more focused on the future, more willing to take care of themselves. As William James put it, religion filled people with “a new zest which adds itself like a gift to life . . . an assurance of safety and a temper of peace and, in relation to others, a preponderance of loving affections.”

Such sentiments, some adaptationists say, made the faithful better at finding and storing food, for instance, and helped them attract better mates because of their reputations for morality, obedience and sober living. The advantage might have worked at the group level too, with religious groups outlasting others because they were more cohesive, more likely to contain individuals willing to make sacrifices for the group and more adept at sharing resources and preparing for warfare.

One of the most vocal adaptationists is David Sloan Wilson, an occasional thorn in the side of both Scott Atran and Richard Dawkins. Wilson, an evolutionary biologist at the State University of New York at Binghamton, focuses much of his argument at the group level. “Organisms are a product of natural selection,” he wrote in “Darwin’s Cathedral: Evolution, Religion, and the Nature of Society,” which came out in 2002, the same year as Atran’s book, and staked out the adaptationist view. “Through countless generations of variation and selection, [organisms] acquire properties that enable them to survive and reproduce in their environments. My purpose is to see if human groups in general, and religious groups in particular, qualify as organismic in this sense.”

Wilson’s father was Sloan Wilson, author of “The Man in the Gray Flannel Suit,” an emblem of mid-’50s suburban anomie that was turned into a film starring Gregory Peck. Sloan Wilson became a celebrity, with young women asking for his autograph, especially after his next novel, “A Summer Place,” became another blockbuster movie. The son grew up wanting to do something to make his famous father proud.

“I knew I couldn’t be a novelist,” said Wilson, who crackled with intensity during a telephone interview, “so I chose something as far as possible from literature — I chose science.” He is disarmingly honest about what motivated him: “I was very ambitious, and I wanted to make a mark.” He chose to study human evolution, he said, in part because he had some of his father’s literary leanings and the field required a novelist’s attention to human motivations, struggles and alliances — as well as a novelist’s flair for narrative.

Wilson eventually chose to study religion not because religion mattered to him personally — he was raised in a secular Protestant household and says he has long been an atheist — but because it was a lens through which to look at and revivify a branch of evolution that had fallen into disrepute. When Wilson was a graduate student at Michigan State University in the 1970s, Darwinians were critical of group selection, the idea that human groups can function as single organisms the way beehives or anthills do. So he decided to become the man who rescued this discredited idea. “I thought, Wow, defending group selection — now, that would be big,” he recalled. It wasn’t until the 1990s, he said, that he realized that “religion offered an opportunity to show that group selection was right after all.”

Dawkins once called Wilson’s defense of group selection “sheer, wanton, head-in-bag perversity.” Atran, too, has been dismissive of this approach, calling it “mind blind” for essentially ignoring the role of the brain’s mental machinery. The adaptationists “cannot in principle distinguish Marxism from monotheism, ideology from religious belief,” Atran wrote. “They cannot explain why people can be more steadfast in their commitment to admittedly counterfactual and counterintuitive beliefs — that Mary is both a mother and a virgin, and God is sentient but bodiless — than to the most politically, economically or scientifically persuasive account of the way things are or should be.”

Still, for all its controversial elements, the narrative Wilson devised about group selection and the evolution of religion is clear, perhaps a legacy of his novelist father. Begin, he says, with an imaginary flock of birds. Some birds serve as sentries, scanning the horizon for predators and calling out warnings. Having a sentry is good for the group but bad for the sentry, which is doubly harmed: by keeping watch, the sentry has less time to gather food, and by issuing a warning call, it is more likely to be spotted by the predator. So in the Darwinian struggle, the birds most likely to pass on their genes are the nonsentries. How, then, could the sentry gene survive for more than a generation or two?

To explain how a self-sacrificing gene can persist, Wilson looks to the level of the group. If there are 10 sentries in one group and none in the other, 3 or 4 of the sentries might be sacrificed. But the flock with sentries will probably outlast the flock that has no early-warning system, so the other 6 or 7 sentries will survive to pass on the genes. In other words, if the whole-group advantage outweighs the cost to any individual bird of being a sentry, then the sentry gene will prevail.

There are costs to any individual of being religious: the time and resources spent on rituals, the psychic energy devoted to following certain injunctions, the pain of some initiation rites. But in terms of intergroup struggle, according to Wilson, the costs can be outweighed by the benefits of being in a cohesive group that out-competes the others.

There is another element here too, unique to humans because it depends on language. A person’s behavior is observed not only by those in his immediate surroundings but also by anyone who can hear about it. There might be clear costs to taking on a role analogous to the sentry bird — a person who stands up to authority, for instance, risks losing his job, going to jail or getting beaten by the police — but in humans, these local costs might be outweighed by long-distance benefits. If a particular selfless trait enhances a person’s reputation, spread through the written and spoken word, it might give him an advantage in many of life’s challenges, like finding a mate. One way that reputation is enhanced is by being ostentatiously religious.

“The study of evolution is largely the study of trade-offs,” Wilson wrote in “Darwin’s Cathedral.” It might seem disadvantageous, in terms of foraging for sustenance and safety, for someone to favor religious over rationalistic explanations that would point to where the food and danger are. But in some circumstances, he wrote, “a symbolic belief system that departs from factual reality fares better.” For the individual, it might be more adaptive to have “highly sophisticated mental modules for acquiring factual knowledge and for building symbolic belief systems” than to have only one or the other, according to Wilson. For the group, it might be that a mixture of hardheaded realists and symbolically minded visionaries is most adaptive and that “what seems to be an adversarial relationship” between theists and atheists within a community is really a division of cognitive labor that “keeps social groups as a whole on an even keel.”

Even if Wilson is right that religion enhances group fitness, the question remains: Where does God come in? Why is a religious group any different from groups for which a fitness argument is never even offered — a group of fraternity brothers, say, or Yankees fans?

Richard Sosis, an anthropologist with positions at the University of Connecticut and Hebrew University of Jerusalem, has suggested a partial answer. Like many adaptationists, Sosis focuses on the way religion might be adaptive at the individual level. But even adaptations that help an individual survive can sometimes play themselves out through the group. Consider religious rituals.

“Religious and secular rituals can both promote cooperation,” Sosis wrote in American Scientist in 2004. But religious rituals “generate greater belief and commitment” because they depend on belief rather than on proof. The rituals are “beyond the possibility of examination,” he wrote, and a commitment to them is therefore emotional rather than logical — a commitment that is, in Sosis’s view, deeper and more long-lasting.

Rituals are a way of signaling a sincere commitment to the religion’s core beliefs, thereby earning loyalty from others in the group. “By donning several layers of clothing and standing out in the midday sun,” Sosis wrote, “ultraorthodox Jewish men are signaling to others: ‘Hey! Look, I’m a haredi’ — or extremely pious — ‘Jew. If you are also a member of this group, you can trust me because why else would I be dressed like this?’ ” These “signaling” rituals can grant the individual a sense of belonging and grant the group some freedom from constant and costly monitoring to ensure that their members are loyal and committed. The rituals are harsh enough to weed out the infidels, and both the group and the individual believers benefit.

In 2003, Sosis and Bradley Ruffle of Ben Gurion University in Israel sought an explanation for why Israel’s religious communes did better on average than secular communes in the wake of the economic crash of most of the country’s kibbutzim. They based their study on a standard economic game that measures cooperation. Individuals from religious communes played the game more cooperatively, while those from secular communes tended to be more selfish. It was the men who attended synagogue daily, not the religious women or the less observant men, who showed the biggest differences. To Sosis, this suggested that what mattered most was the frequent public display of devotion. These rituals, he wrote, led to greater cooperation in the religious communes, which helped them maintain their communal structure during economic hard times.

In 1997, Stephen Jay Gould wrote an essay in Natural History that called for a truce between religion and science. “The net of science covers the empirical universe,” he wrote. “The net of religion extends over questions of moral meaning and value.” Gould was emphatic about keeping the domains separate, urging “respectful discourse” and “mutual humility.” He called the demarcation “nonoverlapping magisteria” from the Latin magister, meaning “canon.”

Richard Dawkins had a history of spirited arguments with Gould, with whom he disagreed about almost everything related to the timing and focus of evolution. But he reserved some of his most venomous words for nonoverlapping magisteria. “Gould carried the art of bending over backward to positively supine lengths,” he wrote in “The God Delusion.” “Why shouldn’t we comment on God, as scientists? . . . A universe with a creative superintendent would be a very different kind of universe from one without. Why is that not a scientific matter?”

The separation, other critics said, left untapped the potential richness of letting one worldview inform the other. “Even if Gould was right that there were two domains, what religion does and what science does,” says Daniel Dennett (who, despite his neo-atheist label, is not as bluntly antireligious as Dawkins and Harris are), “that doesn’t mean science can’t study what religion does. It just means science can’t do what religion does.”

The idea that religion can be studied as a natural phenomenon might seem to require an atheistic philosophy as a starting point. Not necessarily. Even some neo-atheists aren’t entirely opposed to religion. Sam Harris practices Buddhist-inspired meditation. Daniel Dennett holds an annual Christmas sing-along, complete with hymns and carols that are not only harmonically lush but explicitly pious.

And one prominent member of the byproduct camp, Justin Barrett, is an observant Christian who believes in “an all-knowing, all-powerful, perfectly good God who brought the universe into being,” as he wrote in an e-mail message. “I believe that the purpose for people is to love God and love each other.”

At first blush, Barrett’s faith might seem confusing. How does his view of God as a byproduct of our mental architecture coexist with his Christianity? Why doesn’t the byproduct theory turn him into a skeptic?

“Christian theology teaches that people were crafted by God to be in a loving relationship with him and other people,” Barrett wrote in his e-mail message. “Why wouldn’t God, then, design us in such a way as to find belief in divinity quite natural?” Having a scientific explanation for mental phenomena does not mean we should stop believing in them, he wrote. “Suppose science produces a convincing account for why I think my wife loves me — should I then stop believing that she does?”

What can be made of atheists, then? If the evolutionary view of religion is true, they have to work hard at being atheists, to resist slipping into intrinsic habits of mind that make it easier to believe than not to believe. Atran says he faces an emotional and intellectual struggle to live without God in a nonatheist world, and he suspects that is where his little superstitions come from, his passing thought about crossing his fingers during turbulence or knocking on wood just in case. It is like an atavistic theism erupting when his guard is down. The comforts and consolations of belief are alluring even to him, he says, and probably will become more so as he gets closer to the end of his life. He fights it because he is a scientist and holds the values of rationalism higher than the values of spiritualism.

This internal push and pull between the spiritual and the rational reflects what used to be called the “God of the gaps” view of religion. The presumption was that as science was able to answer more questions about the natural world, God would be invoked to answer fewer, and religion would eventually recede. Research about the evolution of religion suggests otherwise. No matter how much science can explain, it seems, the real gap that God fills is an emptiness that our big-brained mental architecture interprets as a yearning for the supernatural. The drive to satisfy that yearning, according to both adaptationists and byproduct theorists, might be an inevitable and eternal part of what Atran calls the tragedy of human cognition.

Robin Marantz Henig, a contributing writer, has written recently for the magazine about the neurobiology of lying and about obesity.

Saturday, February 24, 2007

Sick

So I'm at home with a cold, with the usual symptoms, which need not be described here. But it got me thinking about the analogy between viruses and music. In a way music can easily be compared to viruses - people can get "infected" if they really like a song, while some are "immune" if they just don't like it. The question, though, which everyone would like to answer, is what makes certain viruses spread faster than others? And when I say viruses, we can certainly generalize to music, YouTube videos, the next big thing, stocks, etc.

(One quick aside: there's a lot of existing theory (as mentioned in Fooled by Randomness and Origin of Wealth) which states that we're all heavily biased by hindsight. In the case of business, we tend to look at companies which perform well and attribute success factors to them, in the hope that we might emulate those factors and succeed in similar fashion. Built to Last, In Search of Excellence, and a whole slew of business texts are based on this premise.

Sounds reasonable, but it's essentially wrong: unless you analyze the losers as much as you analyze the winners, there's no way to know whether those "winner traits" are truly unique or effective. In many cases it may simply be luck - after all, you can theoretically flip 5 heads in a row, but flipping 5 heads may have nothing to do with your ability as a coin flipper. And because losers, at least in business, are quietly wiped away by the tides of bankruptcy and acquisition, we have no way of comparing the two groups. We do have some evidence to suggest these companies were just as much lucky as they were capable, given the fall in prominence of several of the companies mentioned in all those texts.)
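To make the coin-flip point concrete, here's a quick back-of-the-envelope sketch in Python (toy numbers of my own choosing, not anything from either book): simulate a big pool of flippers with zero skill and count how many post a perfect streak anyway.

```python
import random

random.seed(42)

FLIPPERS = 1000  # purely luck-driven "companies"
STREAK = 5       # consecutive "successes" needed to make the business books

def perfect_streak(n):
    """Flip a fair coin n times; True only if every flip comes up heads."""
    return all(random.random() < 0.5 for _ in range(n))

winners = sum(perfect_streak(STREAK) for _ in range(FLIPPERS))

# With 1,000 flippers and a 1-in-32 chance each, roughly 30 "winners" appear
# even though nobody has any skill - study only them and you'll invent
# "winner traits" out of thin air.
print(f"{winners} of {FLIPPERS} flippers hit {STREAK} heads in a row "
      f"(expected ~{FLIPPERS / 2 ** STREAK:.0f})")
```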

I mention the parenthetical because we often ascribe the viral growth of certain phenomena (Harry Potter, American Idol, fads in general) to particular attributes, but modern experiments in network theory suggest those attributes may be largely irrelevant. In the models mentioned in Six Degrees, the quality of the virus had little impact on its growth - it was just as likely to fail as not. What matters instead is the virus's ability to reach a subgroup of people (called a percolating cluster) and to completely saturate that group. The next step is for some members of that percolating cluster to have connections outside the immediate group, which can then allow transmission at a far higher rate.
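Here's a minimal toy simulation of that idea (my own simplification in Python, not the actual model from Six Degrees; all the parameters are made up): spread a "virus" across a random network and see how much the outcome depends on connectivity versus the virus's per-contact "quality."

```python
import random

random.seed(0)

def spread(n_people=1000, avg_degree=4.0, p_transmit=0.3, seeds=5):
    """Toy contagion on a random network. Returns the fraction ultimately infected."""
    # Build a sparse random graph: each possible edge exists with prob avg_degree/n.
    p_edge = avg_degree / n_people
    neighbors = [[] for _ in range(n_people)]
    for i in range(n_people):
        for j in range(i + 1, n_people):
            if random.random() < p_edge:
                neighbors[i].append(j)
                neighbors[j].append(i)

    infected = set(random.sample(range(n_people), seeds))
    frontier = list(infected)
    while frontier:
        person = frontier.pop()
        for friend in neighbors[person]:
            if friend not in infected and random.random() < p_transmit:
                infected.add(friend)
                frontier.append(friend)
    return len(infected) / n_people

# Vary connectivity (avg_degree) and "quality" (p_transmit) independently.
for degree in (1.0, 2.0, 6.0):
    for quality in (0.3, 0.6):
        frac = spread(avg_degree=degree, p_transmit=quality)
        print(f"avg_degree={degree}, p_transmit={quality}: {frac:.0%} reached")
```

In runs like this, a sparse network tends to kill even a high-quality virus, while a well-connected one lets a mediocre virus saturate the cluster - which is the percolation point above.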

What's the big conclusion of all this studying? Instead of analyzing the features, product, functionality, etc. of certain viral phenomena, we have to look at the consumers who use them. Are they fairly insular (e.g. I imagine a social networking site for refrigerator repairmen might not grow beyond that immediate audience)? Are they well-connected? What do they use my product/service/idea for beyond what I (as the business owner) intended?

MySpace may be a good example of this - initially starting with a focus on bands, it soon became the de facto social network. In retrospect this makes sense: bands and their fans are a diverse lot, given there may not be deep similarities between you and another fan of the same band. Hence the ability to reach several distinct groups quite readily.

As a counterexample - one which did well but never reached huge levels of usage - PayPal initially focused on auction sellers (which in itself was simply a random event: its marketers noticed early instances of the product being used on auction sites), who in turn were growing thanks to the growth of auction sites (primarily eBay). PayPal has since diversified into other financial products, though its primary user base is still auction sellers. (For more on PayPal, check out this quick read: PayPal Wars.)

Starbucks memo

"In a blunt Feb. 14 memo, Howard Schultz warned executives that the chain may be commoditizing its brand and making itself more vulnerable to competition from other coffee shops and fast-food chains." - WSJ, 2/24/07

When I initially read this memo, I thought Schultz sounded like a schmuck. All this nostalgic talk of "what was" and the breezy picture of yesteryear appeared to be the ramblings of, at worst, a Luddite. See below for a copy of the actual memo.

But a few observations make this memo an interesting piece:
1. His comment on small, gradual changes having a large impact on the overall system. There's a fair amount of research (cited in Origin of Wealth and Six Degrees) suggesting that small preferences or adjustments can lead to massive epidemics or "cascades," due to the interconnectedness of the players involved. I don't know whether Schultz is a big reader of complexity economics or network theory, or is just extremely intuitive, but his worry that these small changes strongly affect the value perception of the Starbucks brand seems closely analogous (see the toy sketch just after this list).
2. As mentioned before, his concerns sound legitimate, though the explanations seem weak. For the most part, the Starbucks brand is not a static concept; it constantly changes with consumer tastes and opportunities for innovation (like automatic machines and flavor-sealing packs). While the core concept of high-value (and consequently high-priced) coffee is under attack, can't innovation be targeted toward defending it? I don't think the issue, as he suggests, is making coffee more slowly or making it smell better: if anything, I want those baristas to make my chai latte as fast as humanly possible. What's missing is a compelling reason to stay. As an avid Starbucks fan (from spending time reading and studying for the GMAT at various locations in Manhattan), once I get my drink, unless I have a book on me or am meeting friends, there isn't much in the nicely laid-out patio area to convince me to stay. Maybe it's video games, the ability to reserve tables for study groups, or even mini-concerts, but once the coffee/tea/latte is purchased, what's really left to do?
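Here's that toy sketch: a threshold-cascade model in Python, loosely in the spirit of the cascade models described in Six Degrees (the structure and every number below are my own guesses, not Schultz's data or anyone's published parameters). Each customer stops seeing the brand as special once a big enough fraction of their friends has, and a small shift in that sensitivity can flip the outcome from a local fizzle to a near-total cascade.

```python
import random

random.seed(1)

def cascade(n=1000, avg_degree=6.0, threshold=0.2, initial_defectors=5):
    """Toy threshold model: a customer 'defects' once the given fraction of
    their friends has defected. Returns the final fraction of defectors."""
    # Random friendship network: each possible edge exists with prob avg_degree/n.
    p_edge = avg_degree / n
    friends = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p_edge:
                friends[i].append(j)
                friends[j].append(i)

    defected = set(random.sample(range(n), initial_defectors))
    changed = True
    while changed:  # keep sweeping until no one else tips
        changed = False
        for person in range(n):
            if person in defected or not friends[person]:
                continue
            peer_share = sum(f in defected for f in friends[person]) / len(friends[person])
            if peer_share >= threshold:
                defected.add(person)
                changed = True
    return len(defected) / n

# Same kind of network, same 5 unhappy customers; only the sensitivity changes.
for threshold in (0.30, 0.20, 0.15):
    print(f"threshold={threshold:.2f}: {cascade(threshold=threshold):.0%} defected")
```

Depending on the run, the higher thresholds usually fizzle while the lower one can sweep most of the network, which is roughly the "small change, large impact" dynamic Schultz seems worried about.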

On a completely irrelevant side note, isn't it interesting how Amazon has become the de facto IMDb of books? Every blog I read which reviews a book always links to the Amazon page for that book. Now I'm generally fine with that, because the page, besides letting you buy the book, also has reviews and even excerpts for viewing. But shouldn't such a "homepage" for books be less, well, commercial? And can't they fix the damn URLs to include more metadata, rather than an endless string of random characters and numbers? Just a thought.


From: Howard Schultz
Sent: Wednesday, February 14, 2007 10:39 AM Pacific Standard Time
To: Jim Donald
Cc: Anne Saunders; Dave Pace; Dorothy Kim; Gerry Lopez; Jim Alling; Ken Lombard; Martin Coles; Michael Casey; Michelle Gass; Paula Boggs; Sandra Taylor

Subject: The Commoditization of the Starbucks Experience

As you prepare for the FY 08 strategic planning process, I want to share some of my thoughts with you.

Over the past ten years, in order to achieve the growth, development, and scale necessary to go from less than 1,000 stores to 13,000 stores and beyond, we have had to make a series of decisions that, in retrospect, have lead to the watering down of the Starbucks experience, and, what some might call the commoditization of our brand.

Many of these decisions were probably right at the time, and on their own merit would not have created the dilution of the experience; but in this case, the sum is much greater and, unfortunately, much more damaging than the individual pieces. For example, when we went to automatic espresso machines, we solved a major problem in terms of speed of service and efficiency. At the same time, we overlooked the fact that we would remove much of the romance and theatre that was in play with the use of the La Marzocca machines. This specific decision became even more damaging when the height of the machines, which are now in thousands of stores, blocked the visual sight line the customer previously had to watch the drink being made, and for the intimate experience with the barista. This, coupled with the need for fresh roasted coffee in every North America city and every international market, moved us toward the decision and the need for flavor locked packaging. Again, the right decision at the right time, and once again I believe we overlooked the cause and the affect of flavor lock in our stores. We achieved fresh roasted bagged coffee, but at what cost? The loss of aroma -- perhaps the most powerful non-verbal signal we had in our stores; the loss of our people scooping fresh coffee from the bins and grinding it fresh in front of the customer, and once again stripping the store of tradition and our heritage? Then we moved to store design. Clearly we have had to streamline store design to gain efficiencies of scale and to make sure we had the ROI on sales to investment ratios that would satisfy the financial side of our business. However, one of the results has been stores that no longer have the soul of the past and reflect a chain of stores vs. the warm feeling of a neighborhood store. Some people even call our stores sterile, cookie cutter, no longer reflecting the passion our partners feel about our coffee. In fact, I am not sure people today even know we are roasting coffee. You certainly can't get the message from being in our stores. The merchandise, more art than science, is far removed from being the merchant that I believe we can be and certainly at a minimum should support the foundation of our coffee heritage. Some stores don't have coffee grinders, French presses from Bodum, or even coffee filters.

Now that I have provided you with a list of some of the underlying issues that I believe we need to solve, let me say at the outset that we have all been part of these decisions. I take full responsibility myself, but we desperately need to look into the mirror and realize it's time to get back to the core and make the changes necessary to evoke the heritage, the tradition, and the passion that we all have for the true Starbucks experience. While the current state of affairs for the most part is self induced, that has lead to competitors of all kinds, small and large coffee companies, fast food operators, and mom and pops, to position themselves in a way that creates awareness, trial and loyalty of people who previously have been Starbucks customers. This must be eradicated.

I have said for 20 years that our success is not an entitlement and now it's proving to be a reality. Let's be smarter about how we are spending our time, money and resources. Let's get back to the core. Push for innovation and do the things necessary to once again differentiate Starbucks from all others. We source and buy the highest quality coffee. We have built the most trusted brand in coffee in the world, and we have an enormous responsibility to both the people who have come before us and the 150,000 partners and their families who are relying on our stewardship.

Finally, I would like to acknowledge all that you do for Starbucks. Without your passion and commitment, we would not be where we are today.

Onward…

Sunday, February 18, 2007

Best five minutes

The WSJ recently ran an article (pasted below) on the best five minutes of a movie, and how those five minutes really define the film. Relatedly, there are several books out there describing the power law or Zipf's law, which imply that much of an average may be driven by a small minority of contributors - e.g. average income is pulled up by a small number of enormously rich people, and when those outliers are removed, the average is much lower (a quick toy calculation of this follows below). One might apply the power law qualitatively to a movie: while most of a film is meant to build tension or provide context, you really only maximize your enjoyment in an extended scene, within which only a few moments are truly memorable. Any story, movie, song...they're really just great moments surrounded by context - a noodle dish with a single piece of duck, or a salad with one giant crouton. You could try to have more, but really, the scarcity and focus of those few items define the experience.
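Here's that toy calculation, with entirely made-up numbers (not from the article or from either book): drawing "incomes" from a deliberately heavy-tailed Pareto-style distribution, the mean drops noticeably once the handful of largest values are removed, while the median barely moves.

```python
# Hypothetical illustration of how a few outliers drive the mean of a
# heavy-tailed distribution, while the median is largely unaffected.
import random
import statistics

random.seed(0)
# 10,000 made-up "incomes" from a Pareto-like distribution (very heavy tail).
incomes = sorted(30_000 * random.paretovariate(1.1) for _ in range(10_000))

trimmed = incomes[:-10]  # drop only the ten largest earners

print(f"mean with outliers:    {statistics.mean(incomes):>12,.0f}")
print(f"mean, top ten removed: {statistics.mean(trimmed):>12,.0f}")
print(f"median (either way):   {statistics.median(incomes):>12,.0f}")
```

The analogy to a film is loose, of course - the "outliers" there are the handful of scenes you actually remember.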

So originally I thought of this post as a way to share my favorite short scenes from my favorite movies, which I might do eventually, but it would probably be better to describe some real moments from the past month.

1. After a few hours of flirting, joking, storytelling, light touching, more flirting, and alcohol consumption - getting the phone number/kiss/makeout session. The equivalent of notching the win: the pride and self-satisfaction, recognizing I'm being rewarded by a rush of various chemicals amplifying the pleasure areas of my mind, a process also enjoyed by thousands of generations of my ancestors...isn't evolution grand?
(Certainly there are additional activities which I could also consider "my favorite" in this venue, but honestly, they usually take a bit longer than 5 minutes. ;-) (was this too much innuendo? Have I offended you?))

2. The first glass of wine/alcohol at my usual weekly dinner with my buddy Ray. This is usually on a Wednesday or Thursday; we're both tired from the week but happy to be trying some exotic restaurant in the gustatory capital of the world. The first glass and gulp give us a moment to exhale, shed the pressures of the office, and dive into the topic of the evening.

3. Making a turnaround jumper during a pickup game - one of the few shots I can make with decent consistency. I may have turned the ball over umpteen times, waved my arms in providing ineffective defense, and missed a few layups, but if I can pivot my feet, get some distance, and sink that shot...there are few better feelings than watching that orange orb softly fall into the basket. Like it was meant to go nowhere else.

4. Recognizing and then building upon the business potential of a syndication project (still in skunk-works mode) at work.




How One Scene Can Say Everything

Deconstructing the best five minutes of 'Little Miss Sunshine'
February 17, 2007; Page P11

We all have favorite scenes from classic films; the quirkiness and diversity of our choices can be astonishing. Lately, though, I've been struck by how many movie lovers share a fondness for the same part of the same recent picture. As soon as I bring up the subject of my favorite moment in "Little Miss Sunshine," someone is sure to finish the sentence I've barely begun with, "The one where the son runs away from the van."

What makes that moment -- actually a five-minute-long sequence -- so memorable, or, in my view, enthralling? The question starts to answer itself when you take the time, as I've been doing, to study the sequence's substance on DVD. All of the ingredients that give the movie its special distinction can be found in the emotional and dramatic concentrate of what the DVD menu refers to as scene 16, "End of a Dream." Watch it on your own as a model of modern filmmaking, but read what I've written about it only if you've seen the film. There's no way to discuss such exceptional work without giving away crucial plot points, and my own point is to celebrate specifics, not to spoil pleasure.


The dream that ends has been dreamed by the touchingly tormented adolescent son, Dwayne, who wants to be a test pilot. He wants it so passionately that he has taken a vow of silence, inspired by his goofy reading of Nietzsche, until he gets into the Air Force Academy. We know he'll blurt out something sooner or later, so his silence is a blithely funny set-up in a film that's full of funny set-ups (the entire road trip is a set-up for Olive's performance at the climactic Little Miss Sunshine pageant) and unexpected payoffs.

Scene 16 begins as a welcome respite from the shock of the grandfather's death, followed by the hilarity of the encounter with a motorcycle cop who never notices the dead body in the back of the VW van. Inside the van, whose broken horn keeps bleating disconsolately, Olive whiles away the miles by giving her brother an eye test with a chart she found at the hospital. Then she gives him a color-blindness test, and suddenly the comedy turns dark. Dwayne can't see the green A inside the circle of red dots; he really is color-blind. That means, as his intermittently suicidal uncle Frank explains, he can never be a test pilot. At first Dwayne processes this slowly, but then the darkness explodes into full-blown horror as the boy goes berserk, beating on the seats and windows and, when the van stops, running from it down an embankment into a suburban field, where he finally breaks his silence with a heartbreaking cry of "F-! F-! F-!"

Big Moment: Abigail Breslin and Paul Dano in 'Little Miss Sunshine.'

Within the space of a couple of minutes we've been whipsawed, though never manipulated, from a state of benign enjoyment through several intermediate zones, including anxiety, to a sense of authentic tragedy. That's remarkable enough, but the scene's central drama is yet to come. On the embankment's edge, Dwayne's father, mother, uncle and little sister stare down helplessly at the solitary boy, who has fallen to his hands and knees in an agonized crouch. His mother ventures toward him, tries to console him, but he'll have none of it -- he lashes out furiously, reminding her of the family's flagrant failures. Following her retreat, Olive's father turns to his little girl and says, fairly hopelessly, "You want to try talking to him?" As she tiptoes down the slope, we wonder what this unworldly child can possibly say.

The answer is nothing, not a word. Olive puts one arm around Dwayne, rests her head against his shoulder and sits with him in healing silence.

It's a gorgeous resolution of a desperate situation. Until the color-blind test, Dwayne has been almost purely a comic character, no more dysfunctional in his monkish silence and punkish truculence than the rest of his screwed-up family -- excluding Olive, of course, who's the film's radiant life force. His parents haven't taken the full measure of his chronic anger; to do so they'd have to hold themselves as well as their problem child to account. But the film takes Dwayne seriously from the start, even though we don't know, until scene 16, what the writer, Michael Arndt, and the directors, Jonathan Dayton and Valerie Faris, have in store for him.

The secret of the film's appeal is that it's neither a comedy with drama nor a drama with comedy, but a story that's open to its characters' behavior -- where their feelings lead them is where the action goes. The secret of scene 16's power is that once the comedy takes a hairpin turn into tragedy, the only character who intuits the depth of that tragedy finally gets to act on it. Olive doesn't draw on some mysterious wisdom. When her dad suggests that she try talking to Dwayne, she knows there's nothing to say. She's just a little kid who sees that her brother is suffering. But when she applies her comforting touch in that eloquent silence, the whole family, along with Dwayne, starts to heal.

Scene 16 is only one of 23, at least as the DVD divides the film -- scenes that mix density with clarity, simplicity with complexity, in a modestly-budgeted enterprise that may well win an Oscar for Best Picture. If that should come to pass, it will be partly because this picture projects a bright ray of hope for the future of original films at a bleak, conformist time in the medium's history. While monster attractions with overhyped stars peddle primitive premises, belaboring one primal feeling at a time, "Little Miss Sunshine" ebbs and flows, dodges and feints, derives generous emotional dividends from fugitive feelings, and captures, without confining, the lovely firefly nature of life.

Friday, February 16, 2007

Pancakes

IHOP has announced a free pancakes day this Tuesday, and it got me thinking how much I would love a fresh serving. A thick, tall, but not too wide stack, perhaps with little chocolate chips staring at me like soft marbles, or a string of banana slices and pecans arranged as if a little cake were assembled for celebration. Add some warm syrup properly swathing the fluffy edges, never too much in one place lest patches of wet sweetness saturate too deeply. A cool glass of thick milk chases after each bite, an unsuspecting grin shapes my mouth; each taste adds to my desire, but each taste also fills my belly till I can eat no more.

And I realize it is not the pancakes I yearn for, but the soft smile of my mother's visage, the hearty chuckle of my dad's odd humor, and the whispered gossip in my sister's voice. For are not pancakes but a small piece of the quiet home left so far away?

Unfortunately, one cannot spend the entire day wishing for breakfast dishes. But one can make the time to plan a brunch.

Thursday, February 1, 2007

Books I would read

Alternatively, these would be books I'd like to write or participate in writing:

Synchronized Disorder
Everything you know about teamwork is wrong; a new guide for building teams beyond the vague conventional wisdom of "hiring talent" and "rewarding winners."

Without Faith
A new non-theistic philosophy meant to celebrate the best of man's morality, providing a framework to examine one's personal ideology without the need for faith.

Sarcasm: A History
From the genetic and evolutionary drivers to a recent record of sarcastic development, Sarcasm provides a nuanced look at one of humanity's more ironic forms of humor.