A few years ago, when my daughter was in middle school, she had to study for a quiz on “the five steps of the scientific method.” She had no problem memorizing five words in a given order, but she also had to be ready to explain them, and there she ran into trouble, until she was seized by a bright idea: here was a chance for her mother, who taught and wrote about the history of science, to make herself useful. “I guess it makes sense for it to be observation, hypothesis, prediction, experiment, confirmation,” she said to me, “but why couldn’t it be hypothesis, observation, prediction, experiment, confirmation? Or prediction, observation, confirmation, hypothesis, experiment? Or…”
“Exactly,” I interrupted, before she could offer me all 120 permutations of the five words. Then, rather than solving her problem, I made it worse. (What are mothers for?) “They could really go in any order. Actually, I think they’re likelier to occur simultaneously. Also, they could include plenty of other parts, like comparison, formalization, analogy, interpretation, visualization…” She gave me her “parents are charming but of scant utility” look and turned back to her notes. If only I could have referred her to Henry M. Cowles’s The Scientific Method: An Evolution of Thinking from Darwin to Dewey. Cowles’s book doesn’t solve her problem either, but makes it into a much bigger and more interesting phenomenon. (What are books for?)
What is the scientific method, and when, where, and how did it become, as the kids say, a thing? Authoritative definitions of “the scientific method” often state that it consists of a set of procedures including observation, experimentation, and the formation and testing of hypotheses by inductive and deductive reasoning. Such accounts, as a rule, ascribe science’s successes to the application of these procedures ever since the seventeenth century and the work of people such as Francis Bacon and Isaac Newton. But neither Bacon nor Newton nor anyone else in the seventeenth century would have recognized the phrase; moreover, neither would have agreed with current standard definitions. Bacon, for instance, rejected deductive reasoning as the bad old Aristotelian approach, and Newton, author of one of the boldest hypotheses in the history of science—universal gravitation—denied any role for hypotheses in his science, famously declaring “hypotheses non fingo” (I frame no hypotheses).
Cowles traces the scientific method to a later period than the Scientific Revolution—the late nineteenth and early twentieth centuries. This makes sense, since it coincides with a tectonic shift in intellectual geography: the splitting of the sciences and the humanities into two diverging continents. To prove its distinctness among human endeavors, science required a defining method. It hadn’t always been so. Until sometime around the end of the nineteenth century, one could seek to understand the world in a way that was neither scientific nor humanistic but both—though even writing “both” implies a distinction between the two. Perhaps “integral” is better. A consequence of intellectual seismic shifts is that, by shifting the language too, they impede one’s efforts to think, write, and speak about a time before they had taken place.
A prominent example of someone who predated this shift—or at least predated its final accomplishment—and developed an integral approach to understanding the natural world is Charles Darwin, who provides Cowles with a starting point. Darwin originated what others retrospectively claimed as the scientific method by, according to Cowles, projecting his own method of experimentation and hypothesis-testing onto nature, and by simultaneously seeing that method as its own offspring: evolution had produced Darwin by the same experimental method as that by which Darwin had produced evolutionary theory. For Cowles, the salient features of Darwin’s method are its naturalism and universalism: Darwin understood his method as common not only to all human thought and creation but to living nature itself. His followers, though, ultimately transformed the method utterly, turning it from a natural process characterizing all of living nature to an artificial one that set science apart from everything else.
Many users of the phrase “the scientific method” pointed back at Darwin. He, however, as far as I know, never used it either in print or in private writings. Moreover, as Cowles emphasizes, the phrase “the scientific method” implies an insistence on the distinctness of science at odds with Darwin’s integral, universalist approach. Darwin was an observer, thinker, naturalist, philosopher. He was a splendid writer, also a meticulous one, no less an artisan of the English language than his contemporary Anthony Trollope, whose writing he followed closely and passed along to friends and whose language he occasionally borrowed.
Darwin was not a scientist. Now—since I can hear the creationist wolves howling at the gate—let me hasten to add that when I say Darwin wasn’t a scientist, I don’t mean he was unscientific or wrong or misguided. Please read “Darwin wasn’t a scientist” as you might read “Aristotle wasn’t a journalist” or “Benjamin Franklin wasn’t an ophthalmologist.” I mean that “scientist” was barely a thing during Darwin’s lifetime and certainly wasn’t the thing that he was. As with “the scientific method,” as far as I know, he never used the term “scientist,” although it did exist, and for an interesting and relevant reason.
The word “scientist” first appeared in March 1834, while Darwin was surveying the Falkland Islands on overland expeditions from the HMS Beagle, being no scientist but an explorer, adventurer, observer, and diarist. The word began as a passing joke in The Quarterly Review. The wit who coined it was the English philosopher and Anglican clergyman William Whewell, and the context was a positive, though excruciatingly patronizing, review of a best seller of popular science by the mathematician and physicist Mary Somerville, entitled On the Connexion of the Physical Sciences. Whewell praised Somerville for applying her womanly art to the project of unifying the rapidly fragmenting sciences. “One of the characteristics of the female intellect,” he observed, “is a clearness of perception, as far as it goes.” Unburdened by excessive powers of discernment or analysis, women could take in a whole intellectual landscape, serenely innocent of its variations in terrain. What they understood, they understood clearly; “what they see at all, they see in sunshine.” These advantages of her sex allowed Somerville to shed her feminine sunshine over the sciences, dispelling the mutual obscurity that was overtaking them.
Whewell remarked that the sciences’ increasing fragmentation was plain in the lack of any general name for those who studied the material world. He canvassed the possibilities: “Philosopher” was too lofty, “savant” too French; the German “Natur-forscher,” rendered into English, became “nature-poker,” which was plainly out of the question. “Scientist,” Whewell reported, had been the suggestion of an “ingenious gentleman” at a meeting of the British Association for the Advancement of Science, who had justified his free use of the suffix by invoking, among others, “sciolist” (pretentious possessor of a smattering of knowledge, from the Latin sciolus). Whewell, who died in 1866, several decades before “scientist” caught on, would surely be astounded to learn what posterity did with his farcical word, including retroactively attaching it to two millennia of nature-pokers and sciolists from Aristotle to Newton to Whewell himself. Imagine Stephen Colbert, transported two hundred years into the future, discovering that “truthiness” was the twenty-third century’s standard of belief, and everyone from Socrates to Einstein was now a “truthineer.”
Cowles places Whewell’s neologism at the beginning of an extended period of anxious preoccupation with scientific methods, and Whewell’s remarks do betray anxiety, political as much as methodological. Science’s “disintegration,” Whewell wrote, was “like a great empire falling to pieces.” He echoed the Reverend William Vernon Harcourt, founder of the British Association, who, at its first meeting in 1831, had promised that the new association would do what the Royal Society was failing to do: protect British science against catastrophic dissolution. Without such an association, Harcourt warned, “colony after colony dissevers itself from the declining empire.” The actual British Empire was not declining or falling to pieces—it was expanding—yet it seemed to many to be in perpetual danger of disintegration, particularly under pressure from French competition, as Cowles explains, and Harcourt was among those who associated this political danger with the fragmentation of an outdated scientific establishment. The British Association was the institutional expression of an anti-elitist, liberal movement seeking to place science and its empire on a new footing.
The old footing had been no less imperial. The Royal Society, for almost two centuries, had served as the institutional locus for science and empire. In 1620 Francis Bacon—natural philosopher, lawyer, statesman, and the society’s patron saint—had announced the equivalence of imperial dominion and applied science: printing, gunpowder, and the compass had changed the world such that any civil or religious authority now came second to “mechanical inventions” in the struggle for power over human affairs. Devices facilitating conquest and the administration of an empire had been all very well in the 1600s and 1700s, but by the 1830s there were those who believed that science and empire, in their conjoined pursuit of power, urgently needed to shift their approach. In the words of Charles Kingsley, zealous believer in Anglo-Saxon racial superiority, they must do so—as Cowles relates—by “inventing, producing, exporting, importing, [till]…the whole human race, and every land from the equator to the pole must henceforth bear the indelible impress and sign manual of English science.” Global industrial capitalism was the new Baconian program.
Rebellion against the reign of classical education was a defining feature of this new program, as it had been of the older one. Its supporters emphasized the specificity of science as distinct from literary and humanistic knowledge. When in 1875 Josiah Mason, Birmingham industrialist and mass-producer of key rings, pens, and pen-nibs, founded Mason Science College (later the University of Birmingham) “to promote the prosperity of the manufactures and industry of the country,” he specifically banned “mere literary instruction.” Thomas Henry Huxley, the bellicose Darwinian anatomist and paleontologist, gave the college’s inaugural address and devoted almost all of it to celebrating this act of exclusion. Adopting his signature pugnacious stance, Huxley argued that for students of physical science whose mission was to foster industrial progress, literary instruction would be a waste of valuable time, acknowledging with satisfaction that these views were “diametrically opposed to those of the great majority of educated Englishmen.”
Were they? Not according to Matthew Arnold, who objected that during the previous decade, the science-not-letters movement had progressed from the “morning sunshine of popular favor” to its “meridian radiance.” Arnold, with whom Huxley had picked a fight by invoking him as the personification of literary culture, rose to the defense of letters by arguing that theirs was the quintessentially human task of integration: relating separate forms of knowledge and interpretation—moral, scientific, aesthetic, social—to one another. Science and literature, he urged, must be integral parts of the same larger task of “knowing ourselves and the world.”
As further evidence of the turn against English letters, Arnold invoked Longfellow’s Song of Hiawatha, which drew on a semi-fictional jumble of Native American languages to present the ideal type of a noble savage: unburdened by Greek or Latin, Hiawatha fared inexorably “westward,” arriving, in the words of an admirer, at the very antipodes of Tennyson. George Eliot appreciated the poem for being “indigenous.” The other reviews of Hiawatha were mostly blistering, but it was enormously popular with readers and reading clubs in England as well as America.
At first glance, Hiawatha might seem to have little in common with Mason Science College, but Arnold was not wrong in associating Huxley’s inaugural address with populist America and its reading public. The American magazine The Popular Science Monthly took up Huxley’s cause against Arnold. To Arnold’s suggestion that literary scholarship was scientific as long as it was rigorous and systematic, while science was literary when it included the writings of Copernicus, Galileo, Newton, and Darwin, the magazine’s “Editor’s Table” column retorted that he obviously didn’t understand “the scientific method.”
This brings me back to the central question of Cowles’s book: the rise of “the scientific method” turns out to have happened crucially within the American continuation of the science-not-letters movement, particularly, as Cowles shows, through the medium of The Popular Science Monthly. “The scientific method,” admonished the magazine’s founder and editor, Edward Youmans, was not just different from “the literary method” but downright antithetical to it. Whereas the literary method had confined itself for centuries to sterile exercises with words, the scientific method had launched “an open and declared revolt” to demand the “actual study of things.”
The revolt of “the scientific method” provided Youmans and his magazine with a cause. Youmans, the son of a wainwright from upstate New York, was an autodidact and member of what his sister described as the “hard-working class” who worked his way to cultural prominence as a popular science writer. As a young man, Youmans had discovered the writings of Herbert Spencer, then at the height of his powers. Spencer was churning out best sellers of social theory and popular science that Darwin described as “detestable,” obscure, unedited, clever but empty, a lot of “dreadful hypothetical rubbish,” and a disappointing tissue of “words and generalities,” and that inspired Darwin’s friend the botanist Joseph Hooker to characterize Spencer as “all oil and no bone…a thinking pump.” (If you’re finding that image obscure, so did Hooker. “I can attach no meaning to the simile,” he confessed, but “it ought to have one.” Darwin was so pleased with it that he read it aloud to his family, by whom “it was unanimously voted first-rate, & not a bit the worse for being unintelligible.”)
Youmans made it his mission to apply Spencer’s oily thinking pump to America via The Popular Science Monthly, for which Spencer wrote the first article in the first issue, proposing that social phenomena were no less susceptible to scientific methods than biological ones. Ultimately, Spencer contributed almost a hundred articles to The Popular Science Monthly, championing the manifest destiny of the scientific method, whose territory, Youmans announced, was inexorably expanding. Cowles recounts that as a result of his exposure through the magazine, Spencer was greeted by crowds of “adoring fans” during his 1882 visit to the United States. He had become a household name in America, and “the scientific method” a household phrase and idea.
Thanks in no small part to Youmans’s Spencerian pump, the scientific method permeated American popular culture and influenced the major American intellectual movements of the late nineteenth and early twentieth centuries, notably pragmatism and behaviorism. These movements’ most important figures—including Charles Sanders Peirce, John Dewey, and, later, B.F. Skinner—developed their ideas about “the scientific method” partly in the pages of The Popular Science Monthly and its 1915 spinoff, The Scientific Monthly. In the series of articles introducing the philosophy of pragmatism, Peirce granted a monopoly on truth to “the scientific method,” which consisted of restricting one’s conception of a thing to its sensible effects. This method alone, Peirce promised, would carry people past their diverse points of view to converge upon a single, certain answer to any question, “like the operation of destiny.”
From pragmatism and behaviorism, Cowles follows “the scientific method” into the professional and business worlds. Abraham Flexner, in his 1910 assessment of the state of American medical education for the Carnegie Foundation, used the new slogan to launch the professionalism movement. Flexner called for a refounding not only of medical education but of social life and politics upon the scientific method. Frederick Winslow Taylor provides Cowles a further example: in his manifesto for scientific management, Taylor promised great increases in output if companies were run according to the scientific method.
Here, then, is the answer to when, where, and how “the scientific method” originated: not in any field or practice of science, but in the popular, professional, industrial, and commercial exploitation of its authority. This exploitation crucially involved the insistence that science held an exclusive monopoly on truth, knowledge, and authority, a monopoly for which “the scientific method” was a guarantee.
Cowles is an engaging narrator of this important story and a sensitive analyst of its outcome. But he ultimately assigns it a different emphasis than I would choose. As we’ve seen, he traces a development from Darwin’s original notion of nature’s own evolutionary experimental method through various stages of “repurposing” in the service of programs and movements including pragmatism, behaviorism, professionalism, and scientific management. Through this successive repurposing, Cowles shows that what began as a universal process embracing human thought and natural evolution became a prescriptive list of rules setting science apart from everything else. “The rise of ‘the scientific method,’” he concludes, was “less a success than a tragedy.” I agree that the enthronement of “the scientific method” was lamentable, and not only for middle schoolers tormented by quizzes. But to call it a tragedy implies immense powers at work, gods or fates or forces of nature, whereas the rise of the scientific method resulted from human activity of the most banal variety. Instead of a tragedy, I would call it a feat of branding equal to “diamonds are forever” or “Coke is it”: “The scientific method” became science’s brand.
This is not to deny, of course, that the sciences include procedures of observation, controlled experimentation, and analysis, and that these procedures are crucial to the progress of scientific understanding. But no list of four or five discrete steps can describe them, and they don’t operate the way Peirce and the others suggested, carrying the scientist inexorably toward transcendent truth. Interpretation remains present at every level. Everything we know is known by us; we can’t eliminate ourselves from the picture. Defining methods, choosing which ones to use, deciding how to use them, understanding what they produce: each of these acts is fundamentally interpretive.
To say so is not to be a radical relativist: Karl Popper, scourge of relativism, whose theory of falsifiability came to dominate discussion of the scientific method in the middle decades of the twentieth century, emphasized this very point. “Out of uninterpreted sense-experiences science cannot be distilled, no matter how industriously we gather and sort them,” Popper wrote in 1935. “Bold ideas, unjustified anticipations, and speculative thought, are our only means for interpreting nature: our only organon, our only instrument, for grasping her.”
During the same period that saw the establishment of the “scientific method” brand with its monopolistic claim on transcendent truth, certain scientists were arriving at revolutionary new theories of the physical world precisely by focusing upon the ineradicability of interpretation. In 1906, Henri Poincaré gazed at the Pantheon in Paris and rejected the notion of absolute space when he reflected that he could know with certainty neither the Pantheon’s dimensions and location, notwithstanding its hulking presence at the center of the city, nor his own, despite his obvious proximity to the building, nor even his distance from it. Perhaps he, his meter stick, and the Pantheon were all constantly changing dimensions; as long as they maintained the same relations to one another, he would be none the wiser. His knowledge could be rigorous and empirical, but never absolute: it could describe only relations between himself and other things.*
Niels Bohr and Werner Heisenberg made similar points when laying out their interpretation of quantum mechanics. Bohr reflected that any observation involves an interference with the thing observed. Our own acts of observation are a part of the world we see: we are “both onlookers and actors in the great drama of existence.” Heisenberg elaborated the idea by emphasizing that “what we observe is not nature in itself but nature exposed to our method of questioning,” and that science was therefore “a part of the interplay between nature and ourselves.” Scientists in this period were recognizing the necessity of interpretation and putting that recognition to work in radical new ways that were neither humanistic nor scientific but integrally both. Meanwhile “the scientific method” continued in pursuit of its manifest destiny.
American universities did much to advance this destiny; the first to take up the call were those founded in the last decades of the nineteenth century to promote the partnership of science and industry. Ira Remsen, chemist, codiscoverer of saccharin, and president of Johns Hopkins University (founded a year later than Mason Science College, in 1876), declared that “the nation that adopts the scientific method will in the end outrank both intellectually and industrially the nation that does not.” Stanford University, where I teach, was created in 1885, an embodiment of the scientific method’s westward expansion. Stanford’s founding statement of purpose begins and ends with “mechanical” programs. But at Stanford, unlike at Mason Science College, a general liberal arts program made the cut when Leland Stanford Sr., who cofounded the university with Jane Stanford, his wife, decided this was important to fostering “business capacity,” observing that “technically educated boys do not make the most successful businessmen.” When the sciences reorient themselves around engineering, apparently, the humanities turn toward consulting.
David Starr Jordan—Stanford’s first president, an ichthyologist, and avid eugenicist—announced that the extended application of the scientific method had transformed education, calling it a “magic wand.” Among Stanford’s twenty-two founding faculty members was (the confusingly named) Fernando Sanford, a physicist specializing in electricity and its applications, and a partisan of the scientific method. Sanford gave the address at Stanford’s eighth commencement in 1899 where, with great simplicity and lucidity, he bestowed the scientific method upon the new graduates. First, collect facts; second, seek out causal relations among these; third, deduce conclusions; fourth, perform experiments to test these conclusions. Sanford also warned his audience to be on their guard against practitioners in fields such as history, philology, and even Latin who, “wish[ing] to appear especially progressive,” had “learned to use the language and to adopt the name of the scientific method.” These were mere pretenders; the scientific method bore no relation to language or literature, nor they to it, and Sanford closed by advising these scholars that if they didn’t want to be left in the dust, they could bloody well go out and find their own methods.
Little wonder that Stanford students have traditionally divided themselves into Techies and Fuzzies; their institution was founded on the divorce between the two. Reading Cowles in my office at Stanford, I understood with new clarity that Silicon Valley is the logical extreme of the Baconian program as Youmans et al. reconstrued it in the late nineteenth century.
Traveling to a later episode in the great divide between science and humanities, we alight on a spring evening at Cambridge University, 1959. Charles Percy Snow, CBE, fellow of Christ’s College, novelist, physicist, and government administrator, is delivering his “Two Cultures” lecture, which will create an immediate sensation and remain continuously in print for the rest of the century and well into the next. For the most part, however, people who cite Snow’s lecture don’t bother with anything beyond its two-word title, taking it to represent a lament over the division of the intellectual world into two mutually uncomprehending cultures, literary scholars and scientists. This was Snow’s window-dressing but not his main merchandise. In fact, he was censuring Britain for undervaluing applied sciences in education and politics, in contrast with Germany, the United States, and the Soviet Union. More generally, he was making a case for industrialization as the path to social as well as economic prosperity.
Snow scolded literary scholars and the old elite they represented for looking down their noses at their colleagues in engineering fields. These industrious young “handymen” might be unacquainted with Shakespeare, he argued, but they would soon be saving geopolitics by elevating the Third World to the living standards of the First. Where the Third World might have heard that one before, Snow did not pause to consider. India, he pointed out, was very poor, with a life expectancy less than half what it was in England. In pressing what was certainly then a progressive argument that “the only hope of the poor” lay in industrialization, Snow nevertheless omitted any mention of its ugly side, its history of exploitation and inequality. His audience may well have included South Asian witnesses to the Raj’s dismantling of their economies as part of England’s industrialization. Their perspective would surely have offered reasons to temper Snow’s faith in the benignity of industrial capitalism, even if we didn’t have the vantage point of 2020, with its ever-polarized living standards and environmental and geopolitical crises, and with its economy and culture dominated by Shakespeare scholars. Wait, sorry, no, I mean by engineer-capitalists elevated to the very heights Snow demanded on their behalf.
Snow’s lecture was—as anyone who reads Cowles’s valuable book will recognize—less a call to arms than the expression of a change already accomplished. It confirmed not only the economic and political power of engineering-capitalism but its cultural supremacy. People loved the lecture, not because Snow announced a revelatory truth but because he said something they already believed. They loved to hear their own views presented as radical expressions of truth to power, and what’s more, by a fellow of Christ’s College, Cambridge. Who doesn’t love having their cake and also licking the last speck of frosting off the plate? Snow’s idea that engineers would solve the world’s problems specifically by not reading Shakespeare, i.e., by devoting themselves single-mindedly to inventing industries to generate wealth, has since become so commonplace that we express it in a single word: “Innovation.” Definitively lost, between Whewell and Snow—or rather, vigorously shouted down—was precisely the idea that the title of Snow’s “Two Cultures” lecture seems to promote: that Shakespeare and the sciences might be jointly relevant to one project of understanding.
Finishing this review no longer in my office at Stanford but sheltering in place at home in an effort to help flatten the global pandemic curve, I am thinking that recovering that integral project of understanding (which we might also call intellectual integrity) is an urgent matter. Covid-19 has presented the world with a powerful ultimatum, one strikingly relevant to our subject here. The virus has said, essentially, Halt your economies, reconnect science to a whole understanding of yourselves and the world, or die. With much economic activity slowed or stopped to save lives, let us hope governments find means to sustain their people through the crisis. Meanwhile, with the din of “innovation” partially silenced, perhaps we can also use the time to think our way past science’s branding, to see science once again as integral to a whole, evolving understanding of ourselves and the world in the manner of the old nature-pokers.
* Various writers have criticized Poincaré’s argument on the ground that the laws of physics wouldn’t allow for scale-shifting of the sort he imagined, while others have defended his conventionalism as compatible with and even essential to the development of general relativity. Whatever one thinks of Poincaré’s thought experiment, it reflects his conviction that a measurement is a relation among things in the world, not a transcendent absolute. Or, to put it differently, one can only measure the world from within it, as a part of it. This is an idea that informed major developments in contemporary physics.