1.
Why are adults half-blind to the ways of the child’s mind? Equally puzzling, why are they so gullible about fashionable dogmas on that oddly vexed subject? Years ago I was stunned to hear Anna Freud declare in a lecture at Harvard that if a three-year-old wandered unrestrained from Central Square to Harvard Square, he would likely commit every crime in the statute books on the way. How had psychoanalysis managed to displace Rousseau’s Emile or William Blake’s “Songs of Innocence” from mythological center stage so quickly?
The usual explanation for adult incomprehension of the child’s mind is, of course, that we are all victims of infantile amnesia and, having forgotten what our own early childhood was like, we must learn about it again from scratch and from the outside. Yet there is something a bit fishy about this standard account. For, in fact, we are the only species where parents really teach their young—and we are astonishingly adept at it. It is remarkable, for example, how human adults talking with young children simplify their syntax and lexicon to match what the kid can understand.1 And we talk cute “Motherese” without instruction, and without even realizing how crucial it is for modeling the prosody and sound structures of a language. So, though we may be half-blind about the child’s mind, we obviously know far more about it than we realize, know it (as some seem to draw comfort from saying) unconsciously.
But what about our gullibility in accepting fashionable dogmas, particularly gloomy ones? Are we so anxious about our parental duties that we attend only to those things kids do that tell us whether we’re succeeding or failing as parents? Does that blind us to the ordinary day-to-dayness of how young children’s minds work? We seem even less curious about the “mental processes” of young kids than we are about our own.
An example. There’s no other species on the face of the earth whose young, beginning at about eight months, point to things to bring them to the attention of an adult, even looking back from the things pointed out to see whether the adult “got it.” Not even our closest primate cousins do it. Even blind human babies do it in response to strange sounds, at about the same age, though their pointing usually disappears a month or two after it starts, unrequited by feedback from adults. It is an evolutionary miracle. But parents, generally, just take it for granted, pay it no heed—unless it fails to appear, as in autistic babies.
Then gullibility sets in. Despite having witnessed this wizard act of “intersubjective” sharing right there at cribside, parents (educated ones especially) are fully prepared to buy a wildly exaggerated dogma of “egocentrism”: that young children in their first few years are incapable of recognizing or appreciating another human being’s perspective. Even psychologists went overboard on this one. And it has taken the last two decades of research to shake a belief that should have been seen as absurd from the start by any observer. The fact is that virtually from birth, we are involved, we human beings, in refining and perfecting our species-unique gift of sharing attention and achieving workable “intersubjectivity.”2
But once again the anomaly of “not knowing what you know” interferes with our recognizing the reality. In fact, if you do a close frame-by-frame analysis of mothers and infants during their first eight months, say, it is plain as day that the mothers know exquisitely well how to manage direct eye-to-eye contact, how to respond to their infant’s efforts to bring objects into those eye-to-eye bouts, or just what kind of expression to use to tempt their baby to follow a shift in the direction of their gaze. We might say they know unconsciously. Or perhaps we should say something like “Humans are born parents!” Born that way or not, parents seem to know a very great deal more than they know that they know—including here as well older siblings and baby sitters, among many others.
Perhaps it’s the social definition of “growing up” and “bringing up” that produces our half-blindness and our gullibility. For, as Philippe Ariès3 long ago made plain, raising children, however much it expresses human tenderness, is also an ideological, social, and religious enterprise, fraught with duties and moral responsibilities. And like all such enterprises, it is hemmed in by dangers and pitfalls that all too easily create attitudes that suppress what, from a broader perspective, seems like “just” common sense. Can we discern in the dogma of all-encompassing childhood egocentrism—which runs so counter to the day-to-day, observable relations of ordinary parents with their children—the long arm of Christian dogmas of selfishness and original sin? It is not that children come out of Blake’s “Songs of Innocence,” but only that they are complicated and far more familiar with the world than is often acknowledged—“the adult in the crib,” to paraphrase the title of one of the books under review.
In recent times, of course, things have become even more complex. Hillary Rodham Clinton may indeed be right that “it takes a village” to raise a child. But the village would be in tough shape without federal and state funds. And can you let one in ten American children grow up in families below the poverty line, no matter in what village—given that those poor kids will mostly be living in the village’s black ghetto? What kind of theory of the child’s mind and its development is most usefully brought to bear on problems of this kind?
Which brings us to the two books under review. The Scientist in the Crib, by Alison Gopnik, Andrew Meltzoff, and Patricia Kuhl, is a triumph, a clear-headed account of the kinds of things that go on inside the heads of young children. It describes the way young kids look at the world, how they go about coping with the endless problems they encounter, and how parents, as it were, manage “as if born to it” to help them understand what it’s all about. But more important than anything else, it sets out a way of looking at the mind growing up that forswears baby-book norms, current jeremiads about “teach-them-early-or-lose-out-forever,” and dogmas about “the” right way to bring up kids. And most refreshingly, its authors could not care less about the labels of the disciplines they draw on—psychology, neuroscience, linguistics. They are experts in the new “cognitive sciences” but with none of the cocksure “cog sci” baby-as-computer-to-be-programmed attitudes of a decade ago; they wear their learning lightly and gracefully.
The other book, John Bruer’s The Myth of the First Three Years, is of a different genre altogether. It is an expression of protest, levelheaded though it may be, speaking out strongly against the all too prevalent false claim that if young children don’t embark on a serious learning career by the third year, they will fall irreversibly behind and even suffer brain defects. It is a troubled and troubling book.
How could the dogmas attacked in Bruer’s book have gained credence in a world that gave birth to The Scientist in the Crib—the pair published within weeks of each other? Why are even such well-intentioned proponents of adequate child care as Hillary Rodham Clinton hinting that contemporary brain research demonstrates that (to use the current mantra) “early is forever”?
The Scientist in the Crib speaks in the voice of intelligent parents talking to other intelligent parents—witty, rather personal, and very well informed. The three authors report “data”; they review and reframe great philosophical dilemmas; but their good-humored admission of their own, shall I say, epistemic vulnerability keeps their story buoyant throughout. The book will be as well received on Merton Street, where Oxford philosophers keep their debating headquarters, as it will among parents.
The authors, though caught up in the never-ending cognitive revolution of our times, begin by revisiting the “ancient questions” our forebears raised about the nature of the mind—how we gain our knowledge of the world, of others, of ourselves, and how we manage to make our knowledge known to others. From the start, they forswear the arrogance of objectivity, of the “view from nowhere”: “Trying to understand human nature,” they write, “is part of human nature.” The best we can do is construct representations of what the world is like and test those representations against what we experience—whether we are developmental scientists or two-year-olds. And it is in that spirit that the book bears the title The Scientist in the Crib.
Reversing the order characteristic of books about the mind, which usually start with how the mind knows the physical world, they take up the question of how children learn about other minds—what they seem to have by way of a beginning endowment, how they manage, with the aid of cooperating parents, to become workaday “folk psychologists.” This is the new terrain of “intersubjectivity” that I mentioned earlier, and they discuss it with the flair of direct involvement. The chapter provides the opportunity for them to set out their major theme. Babies begin life with “start-up” knowledge inherited from our evolutionary past, knowledge that provides the means for making a first shot at representing what they encounter. Once they have done this, infants are in a position to repair their first “edition” in the light of new experience, which, in turn, alters previous knowledge in such a way as to make new experience possible. In short, new experience leads to new knowledge which then permits new experience, the cycle never ending. But most important, the cycle requires the involvement, even the collusion, of others, whose minds and ways of thought we come to take for granted, much as we take the world for granted. Both are constructions, representations of the physical and the social.
The authors then show how children construct a world of space, time, and causality, and they deal particularly with the human push to explain experience, showing that this involves trade-offs in which some things are represented at the expense of ignoring others and that there are forms of blindness to the world that are part of the process of learning. They turn to an engaging and wonderfully informative chapter on the ways children learn language—how and when to use it in what ways, and how to use it in representing the world of things and people. This analysis is blessedly free both of the kind of Chomskian nativism that has everything there at the start and of the equally tiresome dogmatism that treats language as a kit of social conventions.
We are given a brief but convincing account of how children master the Sound Code—how meaningless sounds are put together to make meaningful words—and an excellent description of how Motherese manages to help the child grasp the phonology, syntax, and lexicon of her mother tongue, with some shrewd added remarks on dyslexia and dysphasia. The young child’s mastery of language could not proceed without the steady dialogic support from parents; nor could it ever get going without some “start-up” knowledge of how language is structured and to what uses it can be put. For the rest, as with the very development of the nervous system, the child’s increasing control and understanding of language is a function of opportunities provided (or created).
Two closing chapters tie the book together, one on what has been learned about the child’s growing mind, the other about the growing brain. The growth of mind is likened to the repair of Ulysses’ boat during his ten years of wandering, with Ulysses always aboard and usually under sail. “By the end of the journey hardly anything remained of the original vessel.” As we go through life, new “experience interacts with what we already know about the world to produce new knowledge,” and this in turn permits us to have still more new experience, which in turn transforms our knowledge. But kids have an advantage Ulysses never had: “adult teachers” usually have some useful knowledge to impart about repairing boats and other matters.
The growing brain is not easily summed up, and The Scientist in the Crib is respectful of its complexities. The brain has 100 billion nerve cells, each seemingly out to make a synaptic connection with any nearby cell that fires when it does (hence the neuroscience slogan “cells that fire together, wire together”). Cells either become part of a network or die off, unused. There are surprisingly few time constraints about when things “have to” happen in the nervous system, and no “critical periods,” as with those birds that have to hear their native song within a certain period of time or go songless forever. The restrictions in the system are mostly imposed at any point by what connections have already been laid down: if you don’t have them, you can’t use them. On the other hand, most human infants, unless they’re raised in a closet, have enough neural connectivity and networking capacity to move along a normal course of development:
The new scientific research doesn’t say that parents should provide special “enriching” experience over and above what they experience in everyday life. It does suggest, though, that a radically deprived environment could cause damage.
A few examples from the book may illustrate some of its major points. Take for a start the seemingly head-in-the-clouds question “What does it take to make a human being human?” First, you need to treat human beings as if they had human minds, but that comes easy—at least to our species. Parents tell you that one of their earliest parental pleasures is “discovering” that their baby has a mind. And with this starts the eye-to-eye contact and joint attention that distinguishes us from all other primates.
So parents treat kids as if they had human intentions, desires, and beliefs, and expect their babies to treat them in just that same way. And in response, kids become increasingly sensitive to mental states in others over the first year. By about eighteen months they even begin imitating intended rather than observed behavior: they imitate, for example, an adult’s intended reach for an object, rather than an observed reach that had been blocked by an obstacle.
The same sensitivity to subjective states even occurs with young chimpanzees raised in the human way (to go beyond the book for a moment). Sue Savage-Rumbaugh and her colleagues, trying to “enculturate” the young bonobo Kanzi by bringing him up “human,” now find him much more “intersubjective” than at the start. The little troop of appreciative graduate students treat him as if he had human desires, beliefs, and intentions, and expect the same in return. After several years, Kanzi has become so like human beings that he finds his chimpanzee-raised sister’s intersubjective insensitivity almost beyond forbearance.4
So in some deep sense “becoming human” depends upon being with others who treat you as subjectively human. It starts early and does not stop. Without it, how could we ever do something as simple as hiding something from somebody? Consider an “experiment” of Alison Gopnik and Andrew Meltzoff’s. Two- and three-year-olds have to hide a toy from you but they themselves must still be able to see it. That’s the game. There’s a screen on the table in front of them that they can use if they want to. The two-year-olds have a terrible time getting their minds around the problem. They can’t grasp the idea of using a screen to hide something behind it. Yet they have some interesting hypotheses, a bit like scientists before they hit on the right idea. One is to hide the toy behind their backs: If they can’t see it, how can you? It turns out that three-year-old children solve the problem easily. Are younger kids more egocentric? Remember the eight-month-olds who bring things to your attention by pointing at them and then looking at you to determine whether you “get it”? So how do you get from eight months to three years? And just on the basis of everyday experience of the world? The transition, the authors suggest, involves a process by which the child tries out more hypotheses. And new hypotheses inevitably reflect what you’ve learned from trying out other ones in the past.
One of Patricia Kuhl’s more revealing studies of children is based on tested knowledge of grownups. We’ve long known that all natural language is constructed of phonemes—that is, speech sounds whose alteration changes the meaning of the word they are part of, as when shifting from b to p turns blot into plot in English. Listening to speech, we ignore sound differences that do not affect meaning, like the difference between an aspirated p as in pin (which can blow out a match held close to the mouth) and an unaspirated p as in spin (which won’t even make the flame flicker). Spin remains the same word whether you blow on the p or not.
What shifts in speech sound do ten-month-olds distinguish, even before they’ve mastered their mother tongue? Are they already tuned to its phonemes, like the grownups who look after them? Patricia Kuhl exposed children to a series of speech sounds a few seconds apart, say l as in lemon, look, light. Kids very soon became habituated, got bored, let their attention wander. But then she slipped a new sound into the series, one that lives on the other side of the English phoneme boundary between l and r, as in going from look to rook. A ten-month-old raised in an English-speaking environment, bored with the succession of l sounds, will snap right back to attention when an r appears. But not a Japanese ten-month-old: to him, as to his parents, l and r are the same old thing, lot and rot interchangeable, indistinguishable. So how do you master the phoneme structure of your native language before you start speaking it? That is still not fully understood. What Kuhl found is that kids babble in the phonemes of their own language before they talk it. But how do kids under ten months accomplish that? By listening to Motherese, perhaps?
2.
John Bruer is a philosopher of science and distinguished foundation executive, president of the McDonnell Foundation, which dispenses millions annually in support of developmental research, brain research included. In The Myth of the First Three Years he describes how he became annoyed at false claims about the way the brain may be irreversibly stunted by “deprivation.” He was particularly provoked in the spring of 1996 when he read reports from a workshop called “Bridging the Gap between Neuroscience and Education,” sponsored by the influential Education Commission of the States and the Charles A. Dana Foundation. “What seemed to be happening,” he writes, “was that selected pieces of rather old brain science were being used, and often misinterpreted, to support preexisting views about child development and early childhood policy.” Then a year later, much the same thing happened at a White House conference on “Early Childhood Development and Learning: What New Research on the Brain Tells Us about Our Youngest Children.”
Bruer first sets out to discredit claims about the once-and-forever effects of early stimulation on later brain development and he does this with authority. His objective, plainly, is to discourage recklessness in high places—whether in the White House or on the campaign trail—for he feels that there is a danger of stampeding parents and educators with the “scare talk.” They would do better if they could simply enjoy their children: play with them, sing to them, read to them, talk with them.
Bruer begins by reviewing the “brain science” usually cited to promote the importance of early stimulation, such as the research that won David Hubel and Torsten Wiesel their Nobel Prize in 1981. If you block off vision in one eye of a kitten for its first months of life and leave the other eye exposed to the world, the occluded eye, in effect, becomes blind, all the neurons that might have served it either having died or been taken over by other brain functions. In fact, that wasn’t even a new story then: ophthalmic surgeons had long warned against delay in removing opaque cataracts. The visual system simply does not develop without light.
But the brute fact of the matter is that very little else in the nervous system is anywhere near that specialized that early. Hubel and Wiesel’s findings simply cannot be generalized to apply to most other brain functions. And even early blindness with translucent cataracts that let through some light but no image has no such drastic effects.5 Another bit of evidence that is often misinterpreted is from Peter Huttenlocher’s study of “synaptic density” cited by Bruer.6 The brain’s synaptic connections increase rapidly in the first three years and then begin to decline. Should you try to stimulate these connections? As Bruer points out, early growth is genetically programmed and not driven by stimulation at all.
Even scare claims that supposedly derive from school experience and are not related to brain research are dubious—like starting reading and writing as early as you can. Hungarian schoolchildren do not start reading and writing until they are seven; yet they end up near the top of the European league by age twelve. Is there such a thing, neuronally or otherwise, as the appropriate time to start learning the three Rs? Not a shred of evidence confirms that one period is critical and not another. Bruer writes: “There is no research on how different kinds of day care or preschool affect a child’s brain.”
After examining a host of early child care programs, an $88 million study carried out by the National Institute of Child Health and Human Development (and made much of at the 1997 White House conference) found that they had only minor effects. Psychologically well-adjusted mothers and sensitive, responsive mothers had the most securely attached infants, the study concluded. But the “amount, stability, type, or quality” of child care beyond what a mother provided had little effect on a child’s attachment to her mother. It seems neither to enhance it nor, for that matter, to interfere with it. The only exception is for children raised in economically disadvantaged families where high-quality child care compensates somewhat for poor mothering.
As for child care’s effect on cognitive development, the chief finding was that kids looked after “by adults who engage with them in frequent affectionate responsive interactions” during the first three years of life develop better cognitive and linguistic skills. This is true whether such care comes from the parent or from child care helpers—but parent care has a much greater effect. Indeed, present-day high-quality day care (holding quality of parental care constant) has a disappointingly small, though statistically significant, beneficial effect, “accounting for between 1 and 4 percent of the difference between children’s scores on tests of cognitive and mental development.” In effect, parents’ responsiveness to young kids matters a lot for cognitive growth (hardly a new finding), but even good day care as we practice it today, while it helps, does not have a big effect. John Bruer, accordingly, hints at the questions of policy these findings suggest. For example: Should we “invest in fewer but higher-quality day care slots” to help improve current practice, “or in more but less expensive slots” to achieve better coverage? That’s certainly a reasonable and research-worthy policy question.
So why all the fuss? Bruer puts it wryly: people have come to think that “it is better to have synapses than even God on your side.” And the evidence from “damaged brains” seems to them more compelling than the evidence from “damaged lives or insecure attachments.” Scary talk about developing brains has deadly effects, even if it does no more than create those back-of-the-mind suspicions that “maybe there’s something to it.” Such claims can provoke the school board of Montgomery County, Maryland, to make their kindergarten program more “academic”—“Academics Instead of Naps,” as The Washington Post headlined the story. But does it make sense to begin reading and writing at three?
Again, Hungary’s preschools are instructive: they emphasize that there should be much more oral work as a prelude to reading (including nursery rhymes, songs, and show-and-tells). And it works. So it does in Flemish Belgium. And in German-speaking Switzerland, kids who start reading later and are given lots of oral training are more literate by age twelve than their French-Swiss cousins, who begin reading at four. The “new” Britain, probably the most hurry-up-and-read country in Europe, drops steadily lower in the literacy league tables.7 Small wonder John Bruer is indignant. And his clearly written book serves his cautionary purposes well.
3.
How did we get into the present early-or-never pedagogical overkill? I suspect there are two quite different things at work—one very mundane, the other almost as mythological as Anna Freud’s three-year-old committing all the crimes in the statute book.
First, the mundane. Many more young mothers are joining the labor force, and it is neither easy nor inexpensive for them to find good child care for their kids. They worry whether they’re doing the right thing, and the current patchwork quilt of child care facilities is not reassuring. By comparison, France looks after 85 percent of its three-year-olds in nationally financed and thriving écoles maternelles. Add to the American malaise the widespread but false belief among parents that all children will have to master highbrow mathematics and science to get on in the technological age ahead.8 Relevant as well is what Barbara Ehrenreich has called the new middle-class “fear of falling”9: move to the suburbs, invest your all in a big mortgage on the house, with nothing to leave your children save the education you’ve given them, and it’s not good enough.
The “mythological” side is no less telling, however bizarre it may be. It’s a product of that human gullibility for dogmas of dread and danger about the young—Original Sin, the little criminal en route from Central Square, the 1920s theories claiming that showing affection to your kids would spoil them rotten. It doesn’t even take an official theory to get parents to accept the bizarre hypothesis that their four-month-old is trying to “manipulate” them. Bad-news theories seem to comfort us by providing a dire scenario that makes our own situation seem tame.
A look at the century just gone by gives some sense of the way we mythologize early childhood. It swings between dread and celebration. The very word “kindergarten” implies that kids need to be sheltered and nurtured like delicate blossoms. There were deviants from this tender view—like Maria Montessori, who envisaged preschool as a special and protected place where Rome’s slum kids could be taught the skills and habits needed for escaping poverty. But in the interwar years in America, kindergarten and preschool were still very much in the tender tradition, very self-consciously “child-centered.” We middle-class Americans seemed pleased with ourselves, with our kids, with our nursery schools, and most important of all, with the future.
Then came World War II. By its end, several things had happened. For one, research in the behavioral sciences changed direction, as it were, from positive to negative, to an emphasis not so much on what makes the mind grow as on what stunts its growth. The concept of “deprivation” became central. First came “sensory deprivation.”10 If you keep an adult in a featureless fog for twenty-four hours, a Ganzfeld as it was called, he emerges dimmer cognitively, unable to concentrate, easily confused. After a few hours back in the “real” world, he recovers. The press likened this process to “brainwashing,” but many behavioral scientists speculated, quite rightly as it turned out, that perhaps a certain minimum level of sensory hubbub must also be needed for normal brain (and mental) development to occur. To test this hunch, several teams of investigators raised laboratory rats from birth to young adulthood in dull, impoverished environments and discovered that they ended up much stupider than their litter-mate controls raised in normal environments. And they did not recover spontaneously once put back into the “real” world. Besides, there were indications (never fully verified) that such sensory deprivation produced defects in neurotransmission.
That was the start. “Deprivation” became the ruling metaphor for what might deter or hinder growth. A second wave of deprivation studies then followed, inspired by the English psychoanalyst John Bowlby, who reported that virtually all the adult psychopaths he had studied had been “deprived” of early “attachment” to their real or foster parents. This rather sober-sided study was soon followed by the famous Wisconsin study of young macaques raised by terry-cloth artificial “mothers.” The monkeys grew up fearful, out of control, completely deranged. “Attachment deprivation” was added to the list of publicly proclaimed childhood horrors.
The forms of “deprivation” the rats and macaques experienced required extreme, highly aberrant conditions of early rearing, conditions difficult and expensive to maintain even for animals. When Head Start was first proposed as a way of countering the poor school performance of kids from poverty backgrounds, however, it, too, was justified on the ground that these kids also were suffering “deprivation”—not sensory deprivation, not attachment deprivation, but “cultural” deprivation. America had just “discovered” poverty.11 Poverty “causes” cultural deprivation. Head Start would replace what was missing by starting children early, before school began.
Then, out of the blue, a new line of research exploded on the scene, this time demonstrating the unexpectedly precocious “mental” capacities of young infants. William James’s bon mot about the infant’s world being a “blooming, buzzing confusion” turned out to be humbug. The scientific journals were flooded with new findings—soon picked up by the popular press. For example, even before six months, a baby will suck on a dummy nipple to bring a motherly face into better focus, but will desist when sucking sends the face into blur. Or, to take another example, older infants, given a choice, will choose a visual display that is richer in information than will younger ones.12 During their first year, the research soon showed, children were cognitively active, well on their way to being “scientists in the crib.”
First the devastating effects of early deprivation, then the unsuspected precocity of the infant mind. How could the two be put together? What emerged in popular thinking was a bizarre and forbidding conclusion: if the infant is that mentally alert very early on, then early deprivation must be all the more damaging. It is a bizarre conclusion for two obvious but easily overlooked reasons. For one thing, the “deprivation” of those early studies was extreme deprivation—grayed-out environments and terry-cloth mothers that don’t occur in “real life.” And for another, there is nothing to suggest that early cognitive abilities make kids more liable to harm or that they require that children be raised in an academically more “stimulating” environment. If anything, early precocity argues in favor of more opportunity for exploratory play for young children (as I and others have long argued13). And besides, does one want to expose very young children to the experiences of failure inherent in “academic tasks”?
Much of the worry about “deprivation” is misplaced; and so is the counterpart worry that we aren’t “stimulating” children enough. Most kids have plenty of stimulation, and there is no credible evidence that higher-pressure, more “enriched” early environments produce “good” effects in the sense that drastically deprived ones produce bad effects. Certainly the European evidence I cited earlier should give us pause about “kid-pushing” in general. Perhaps Hungary has a lesson to teach us.
As for all the talk about the permanent effects of the unstimulated early brain, both our gullible reaction to it and our eagerness to use it in support of decent child care policy speak volumes about the New Reductionism that has America in its grip at the start of a new technological era, perhaps even a millennium. Clearly John Bruer is right to be concerned that unfounded brain-talk might drive us to “toughen up” our kindergartens, nursery schools, even our playfulness with kids. So how might we get such angst under control? Certainly, books like The Scientist in the Crib and The Myth of the First Three Years should have a calming effect.
But the prospects of restoring a humane perspective on the human condition do not seem bright in the short run. The new reductionism is being fed by too many sources—the oversimplified evolutionary turn of much contemporary psychology, the gung-ho “it’s-all-in-the-brain” proclamations of a few neuroscientists, and the Wellsian fantasies of recombinant genetics, to name three of the most prominent. If we can map physical and even mental illness on the human genome, then why not the human mind, human culture, human whatever? But can we map the never-ending contingencies facing Ulysses as he repaired his boat to meet the unforeseen troubles that befell him? Or lay out how a nervous system (with as many neurons as stars in the Milky Way) will shape up in response to the opportunities that an ever-changing environment brings its way? Obviously we are learning more and more all the time, including the fact that the brain is not our unforgiving keeper.
And while we are indeed learning more and more about the natural world, we would do well not to forget the big lesson of the twentieth century: the better we get at controlling the world of nature, the more difficulty we seem to have in maintaining a humane social order.
This Issue
March 9, 2000
1. See the chapter by Catherine Snow in C.E. Snow and C.A. Ferguson, editors, Talking to Children (Cambridge University Press, 1977).
2. Chris Moore and Philip Dunham, editors, Joint Attention: Its Origins and Role in Development (Erlbaum, 1995).
3. Philippe Ariès, Centuries of Childhood: A Social History of Family Life (Knopf, 1962).
4. I must report that when he was taking one of those high-powered space-puzzle video tests that he’s given regularly, I was sitting beside him on the bench, as baffled as he was by one particular problem; he showed palpable relief to find me frowning when he looked over to check.
5. Richard L. Gregory, Eye and Brain: The Psychology of Seeing (McGraw-Hill, 1966).
6. For a summary of Huttenlocher and related research, see Chapter 3 of Bruer’s book. Bibliographical references to this work are also provided in Bruer’s notes.
7. See Clare and David Mills, “Britain’s early years disaster,” a background research memorandum prepared for a documentary on BBC4, “Too Much, Too Young” by the producer, David Mills of Mills Production Ltd., 45 Loftus Road, London W12.
8. Anthony P. Carnevale and Donna M. Desrochers, School Satisfaction: A Statistical Survey of Cities and Suburbs (Educational Testing Service, 1999).
9. Barbara Ehrenreich, Fear of Falling: The Inner Life of the Middle Class (Pantheon, 1989).
10. See Donald Hebb’s classic The Organization of Behavior (Wiley, 1949), which set forth the view, following the great Spanish neurologist Lorente de Nó, that the very cellular organization of the brain depended upon early and continuing patterns of stimulation that created “cell assemblies” (what we now refer to as “neural networks”).
11. See Michael Harrington, The Other America (Penguin, 1971).
12. This finding conforms, of course, to the conclusion in The Scientist in the Crib that the more knowledge the child achieves, the better able she is to process new, richer information encountered.
13. Jerome Bruner, “The Importance of Play,” in Roger Lewin, editor, Child Alive! (Doubleday, 1975).