One evening a few years ago I was with some other faculty members at the University of Texas, telling a group of undergraduates about work in our respective disciplines. I outlined the great progress we physicists had made in explaining what was known experimentally about elementary particles and fields—how when I was a student I had to learn a large variety of miscellaneous facts about particles, forces, and symmetries; how in the decade from the mid-1960s to the mid-1970s all these odds and ends were explained in what is now called the Standard Model of elementary particles; how we learned that these miscellaneous facts about particles and forces could be deduced mathematically from a few fairly simple principles; and how a great collective Aha! then went out from the community of physicists.
After my remarks, a faculty colleague (a scientist, but not a particle physicist) commented, “Well, of course, you know science does not really explain things—it just describes them.” I had heard this remark before, but now it took me aback, because I had thought that we had been doing a pretty good job of explaining the observed properties of elementary particles and forces, not just describing them.1
I think that my colleague’s remark may have come from a kind of positivistic angst that was widespread among philosophers of science in the period between the world wars. Ludwig Wittgenstein famously remarked that “at the basis of the whole modern view of the world lies the illusion that the so-called laws of nature are the explanations of natural phenomena.”
It might be supposed that something is explained when we find its cause, but an influential 1913 paper by Bertrand Russell had argued that “the word ‘cause’ is so inextricably bound up with misleading associations as to make its complete extrusion from the philosophical vocabulary desirable.”2 This left philosophers like Wittgenstein with only one candidate for a distinction between explanation and description, one that is teleological, defining an explanation as a statement of the purpose of the thing explained.
E.M. Forster’s novel Where Angels Fear to Tread gives a good example of teleology making the difference between description and explanation. Philip is trying to find out why his friend Caroline helped to bring about a marriage between Philip’s sister and a young Italian man of whom Philip’s family disapproves. After Caroline reports all the conversations she had with Philip’s sister, Philip says, “What you have given me is a description, not an explanation.” Everyone knows what Philip means by this—in asking for an explanation, he wants to learn Caroline’s purposes. There is no purpose revealed in the laws of nature, and not knowing any other way of distinguishing description and explanation, Wittgenstein and my friend had concluded that these laws could not be explanations. Perhaps some of those who say that science describes but does not explain mean also to compare science unfavorably with theology, which they imagine to explain things by reference to some sort of divine purpose, a task declined by science.
This mode of reasoning seems to me wrong not only substantively, but also procedurally. It is not the job of philosophers or anyone else to dictate meanings of words different from the meanings in general use. Rather than argue that scientists are incorrect when they say, as they commonly do, that they are explaining things when they do their work, philosophers who care about the meaning of explanation in science should try to understand what it is that scientists are doing when they say they are explaining something. If I had to give an a priori definition of explanation in physics I would say, “Explanation in physics is what physicists have done when they say Aha!” But a priori definitions (including this one) are not much use.
As far as I can tell, this has become well understood by philosophers of science at least since World War II. There is a large modern literature on the nature of explanation, by philosophers like Peter Achinstein, Carl Hempel, Philip Kitcher, and Wesley Salmon. From what I have read in this literature, I gather that philosophers are now going about this the right way: they are trying to develop an answer to the question “What is it that scientists do when they explain something?” by looking at what scientists are actually doing when they say they are explaining something.
Scientists who do pure rather than applied research commonly tell the public and funding agencies that their mission is the explanation of something or other, so the task of clarifying the nature of explanation can be pretty important to them, as well as to philosophers. This task seems to me to be a bit easier in physics (and chemistry) than in other sciences, because philosophers of science have had trouble with the question of what is meant by an explanation of an event (note Wittgenstein’s reference to “natural phenomena”) while physicists are interested in the explanation of regularities, of physical principles, rather than of individual events.
Biologists, meteorologists, historians, and so on are concerned with the causes of individual events, such as the extinction of the dinosaurs, the blizzard of 1888, the French Revolution, etc., while a physicist only becomes interested in an event, like the fogging of Becquerel’s photographic plates that in 1896 were left in the vicinity of a salt of uranium, when the event reveals a regularity of nature, such as the instability of the uranium atom. Philip Kitcher has tried to revive the idea that the way to explain an event is by reference to its cause, but which of the infinite number of things that could affect an event should be regarded as its cause?3
Within the limited context of physics, I think one can give an answer of sorts to the problem of distinguishing explanation from mere description, which captures what physicists mean when they say that they have explained some regularity. The answer is that we explain a physical principle when we show that it can be deduced from a more fundamental physical principle. Unfortunately, to paraphrase something that Mary McCarthy once said about a book by Lillian Hellman, every word in this definition has a questionable meaning, including “we” and “a.” But here I will focus on the three words that I think present the greatest difficulties: the words “fundamental,” “deduced,” and “principle.”
The troublesome word “fundamental” can’t be left out of this definition, because deduction itself doesn’t carry a sense of direction; it often works both ways. The best example I know is provided by the relation between the laws of Newton and the laws of Kepler. Everyone knows that Newton discovered not only a law that says the force of gravity decreases with the inverse square of the distance, but also a law of motion that tells how bodies move under the influence of any sort of force. Somewhat earlier, Kepler had described three laws of planetary motion: planets move on ellipses with the sun at the focus; the line from the sun to any planet sweeps over equal areas in equal times; and the squares of the periods (the times it takes the various planets to go around their orbits) are proportional to the cubes of the major diameters of the planets’ orbits.
It is usual to say that Newton’s laws explain Kepler’s. But historically Newton’s law of gravitation was deduced from Kepler’s laws of planetary motion. Edmond Halley, Christopher Wren, and Robert Hooke all used Kepler’s relation between the squares of the periods and the cubes of the diameters (taking the orbits as circles) to deduce an inverse square law of gravitation, and then Newton extended the argument to elliptical orbits. Today, of course, when you study mechanics you learn to deduce Kepler’s laws from Newton’s laws, not vice versa. We have a deep sense that Newton’s laws are more fundamental than Kepler’s laws, and it is in that sense that Newton’s laws explain Kepler’s laws rather than the other way around. But it’s not easy to put a precise meaning to the idea that one physical principle is more fundamental than another.
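A modern reconstruction of the circular-orbit version of this argument, in notation that none of these men would have used, runs as follows. A planet moving on a circle of radius $r$ with period $T$ has centripetal acceleration

$$a \;=\; \frac{v^2}{r} \;=\; \frac{4\pi^2 r}{T^2},$$

and Kepler’s third law for circular orbits, $T^2 \propto r^3$, then makes $a$ proportional to $1/r^2$: the force per unit mass falls off as the inverse square of the distance.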
It is tempting to say that more fundamental means more comprehensive. Perhaps the best-known attempt to capture the meaning that scientists give to explanation was that of Carl Hempel. In his well-known 1948 article written with Paul Oppenheim, he remarked that “the explanation of a general regularity consists in subsuming it under another more comprehensive regularity, under a more general law.”4 But this doesn’t remove the difficulty. One might say for instance that Newton’s laws govern not only the motions of planets but also the tides on Earth, the falling of fruits from trees, and so on, while Kepler’s laws deal with the more limited context of planetary motions. But that isn’t strictly true. Kepler’s laws, to the extent that classical mechanics applies at all, also govern the motion of electrons around the nucleus, where gravity is irrelevant. So there is a sense in which Kepler’s laws have a generality that Newton’s laws don’t have. Yet it would feel absurd to say that Kepler’s laws explain Newton’s, while everyone (except perhaps a philosophical purist) is comfortable with the statement that Newton’s laws explain Kepler’s.
This example of Newton’s and Kepler’s laws is a bit artificial, because there is no real doubt about which is the explanation of the other. In other cases the question of what explains what is more difficult, and more important. Here is an example. When quantum mechanics is applied to Einstein’s general theory of relativity one finds that the energy and momentum in a gravitational field come in bundles known as gravitons, particles that have zero mass, like the particle of light, the photon, but have a spin equal to two (that is, twice the spin of the photon). On the other hand, it has been shown that any particle whose mass is zero and whose spin is equal to two will behave just the way that gravitons do in general relativity, and that the exchange of these gravitons will produce just the gravitational effects that are predicted by general relativity. Further, it is a general prediction of string theory that there must exist particles of mass zero and spin two. So is the existence of the graviton explained by the general theory of relativity, or is the general theory of relativity explained by the existence of the graviton? We don’t know. On the answer to this question hinges a choice of our vision of the future of physics—will it be based on space-time geometry, as in general relativity, or on some theory like string theory that predicts the existence of gravitons?
The idea of explanation as deduction also runs into trouble when we consider physical principles that seem to transcend the principles from which they have been deduced. This is especially true of thermodynamics, the science of heat and temperature and entropy. After the laws of thermodynamics had been formulated in the nineteenth century, Ludwig Boltzmann succeeded in deducing these laws from statistical mechanics, the physics of macroscopic samples of matter that are composed of large numbers of individual molecules. Boltzmann’s explanation of thermodynamics in terms of statistical mechanics became widely accepted, even though it was resisted by Max Planck, Ernst Zermelo, and a few other physicists who held on to the older view of the laws of thermodynamics as free-standing physical principles, as fundamental as any others. But then the work of Jacob Bekenstein and Stephen Hawking in the twentieth century showed that thermodynamics also applies to black holes, and not because they are composed of many molecules, but simply because they have a surface from which no particle or light ray can ever emerge. So thermodynamics seems to transcend the statistical mechanics of many-body systems from which it was originally deduced.
Nevertheless, I would argue that there is a sense in which the laws of thermodynamics are not as fundamental as the principles of general relativity or the Standard Model of elementary particles. It is important here to distinguish two different aspects of thermodynamics. On one hand, thermodynamics is a formal system that allows us to deduce interesting consequences from a few simple laws, wherever those laws apply. The laws apply to black holes, they apply to steam boilers, and to many other systems. But they don’t apply everywhere. Thermodynamics would have no meaning if applied to a single atom. To find out whether the laws of thermodynamics apply to a particular physical system, you have to ask whether the laws of thermodynamics can be deduced from what you know about that system. Sometimes they can, sometimes they can’t. Thermodynamics itself is never the explanation of anything—you always have to ask why thermodynamics applies to whatever system you are studying, and you do this by deducing the laws of thermodynamics from whatever more fundamental principles happen to be relevant to that system.
In this respect, I don’t see much difference between thermodynamics and Euclidean geometry. After all, Euclidean geometry applies in an astonishing variety of contexts. If three people agree that each one will measure the angle between the lines of sight to the other two, and then they get together and add up those angles, the sum will be 180 degrees. And you will get the same 180-degree result for the sum of the angles of a triangle made of steel bars or of pencil lines on a piece of paper. So it may seem that geometry is more fundamental than optics or mechanics. But Euclidean geometry is a formal system of inference based on postulates that may or may not apply in a given situation. As we learned from Einstein’s general theory of relativity, the Euclidean system does not apply in gravitational fields, though it is a very good approximation in the relatively weak gravitational field of the earth in which it was developed by Euclid. When we use Euclidean geometry to explain anything in nature we are tacitly relying on general relativity to explain why Euclidean geometry applies in the case at hand.
In talking about deduction, we run into another problem: Who is it that is doing the deducing? We often say that something is explained by something else without our actually being able to deduce it. For example, after the development of quantum mechanics in the mid-1920s, when it became possible to calculate for the first time in a clear and understandable way the spectrum of the hydrogen atom and the binding energy of hydrogen, many physicists immediately concluded that all of chemistry is explained by quantum mechanics and the principle of electrostatic attraction between electrons and atomic nuclei. Physicists like Paul Dirac proclaimed that now all of chemistry had become understood. But they had not yet succeeded in deducing the chemical properties of any molecule except the simplest, the hydrogen molecule. Physicists were sure that all these chemical properties were consequences of the laws of quantum mechanics as applied to nuclei and electrons.
Experience has borne this out; we now can in fact deduce the properties of fairly complicated molecules—not molecules as complicated as proteins or DNA, but still some fairly impressive organic molecules—by doing complicated computer calculations using quantum mechanics and the principle of electrostatic attraction. Almost any physicist would say that chemistry is explained by quantum mechanics and the simple properties of electrons and atomic nuclei. But chemical phenomena will never be entirely explained in this way, and so chemistry persists as a separate discipline. Chemists do not call themselves physicists; they have different journals and different skills from physicists. It’s difficult to deal with complicated molecules by the methods of quantum mechanics, but still we know that physics explains why chemicals are the way they are. The explanation is not in our books, it’s not in our scientific articles, it’s in nature; it is that the laws of physics require chemicals to behave the way they do.
Similar remarks apply to other areas of physical science. As part of the Standard Model, we have a well-verified theory of the strong nuclear force—the force that binds together both the particles in the nucleus and the particles that make up those particles—known as quantum chromodynamics, which we believe explains why the proton mass is what it is. The proton mass is produced by the strong forces that the quarks inside the proton exert on one another. It is not that we can actually calculate the proton mass; I’m not even sure we have a good algorithm for doing the calculation, but there is no sense of mystery about the mass of the proton. We feel we know why it is what it is, not in the sense that we have calculated it or even can calculate it, but in the sense that quantum chromodynamics can calculate it—the value of the proton mass is entailed by quantum chromodynamics, even though we don’t know how to do the calculation.
It can be very important to recognize that something has been explained, even in this limited sense, because it can give us a strategic sense of what problems to work on. If you want to work on calculating the proton mass, go ahead, more power to you. It would be a lovely show of calculational ability, but it would not advance our understanding of the laws of nature, because we already understand the strong nuclear force well enough to know that no new laws of nature will be needed in this calculation.
Another problem with explanation as deduction: in some cases we can deduce something without explaining it. That may sound really peculiar, but consider the following little story. When physicists started to take the big bang cosmology seriously one of the things they did was to calculate the production of light elements in the first few minutes of the expanding universe. The way this was done was to write down all the equations that govern the rates at which various nuclear reactions took place. The rate of change of the quantity (or “abundance,” as physicists say) of any one nuclear species is equal to a sum of terms, each term proportional to the abundances of other nuclear species. In this way you develop a large set of linked differential equations, and then you put them on a computer that produces a numerical solution.
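To illustrate the kind of calculation being described (this is not the actual code used by Peebles or by Wagoner, Fowler, and Hoyle), here is a minimal sketch of a set of linked rate equations integrated numerically. The species, reactions, and rate constants are invented for the example; the real calculations track many nuclear species, with temperature-dependent reaction rates in an expanding universe.

```python
# A toy reaction network, for illustration only: neutrons and protons combine into
# deuterons, and pairs of deuterons combine into "helium."  The rate of change of
# each abundance is a sum of terms proportional to products of other abundances,
# as described above; the rate constants here are made up.
from scipy.integrate import solve_ivp

K_NP_TO_D = 1.0   # invented rate constant for n + p -> d
K_DD_TO_HE = 0.5  # invented rate constant for d + d -> he

def rates(t, y):
    n, p, d, he = y
    r1 = K_NP_TO_D * n * p    # n + p -> d
    r2 = K_DD_TO_HE * d * d   # d + d -> he
    return [-r1, -r1, r1 - 2.0 * r2, r2]

# Start with equal abundances of neutrons and protons and nothing else,
# and integrate the linked differential equations numerically.
sol = solve_ivp(rates, (0.0, 20.0), [1.0, 1.0, 0.0, 0.0])
n, p, d, he = sol.y[:, -1]
print(f"final abundances: n={n:.3f}  p={p:.3f}  d={d:.3f}  he={he:.3f}")
```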
When these equations were solved in the mid-1960s by James Peebles and then by Robert Wagoner, William Fowler, and Fred Hoyle, it was found that after the first few minutes one quarter of the mass of the universe was left in the form of helium, and almost all the rest was hydrogen, with other elements present only in tiny quantities. These calculations also revealed certain regularities. For instance, if you put something in the theory to speed up the expansion, as for instance by adding additional species of neutrinos, you would find that more helium would be produced. This is somewhat counterintuitive—you might think speeding up the expansion of the universe would leave less time for the nuclear reactions that produce helium, but in fact the calculations showed that it increased the amount of helium produced.
The explanation is not difficult, though it can’t easily be seen in the computer printout. While the universe was expanding and cooling in the first few minutes, nuclear reactions were occurring that built up complex nuclei from the primordial protons and neutrons, but because the density of matter was relatively low these reactions could occur only sequentially, first by combining some protons and neutrons to make the nucleus of heavy hydrogen, the deuteron, and then by combining deuterons with protons or neutrons or other deuterons to make heavier nuclei like helium. However, deuterons are very fragile; they’re relatively weakly bound, so essentially no deuterons were produced until the temperature had dropped to about a billion degrees, at the end of the first three minutes. During all this time neutrons were changing into protons, just as free neutrons do in our laboratories today.
When the temperature dropped to a billion degrees, and it became cold enough for deuterons to hold together, then all of the neutrons that were still left were rapidly gobbled up into deuterons, and the deuterons then into helium, a particularly stable nucleus. It takes two neutrons as well as two protons to make a helium nucleus, so the number of helium nuclei produced at that time was just half the number of remaining neutrons. Therefore the crucial thing that determines the amount of helium produced in the early universe is how many of the neutrons decayed before the temperature dropped to a billion degrees. The faster the expansion went, the earlier the temperature dropped to a billion degrees, so the less time the neutrons had to decay, so the more of them were left, and so the more helium was produced. That’s the explanation of what was found in the computer calculations; but the explanation was not to be found in the computer-generated graphs showing the abundance in relation to the speed of expansion.
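To make the arithmetic of that last step explicit (using the standard figure, not quoted above, of roughly one surviving neutron for every seven protons at the time deuterons form): each helium nucleus locks up two neutrons and two protons, and essentially all the surviving neutrons end up in helium, so the helium mass fraction is

$$Y \;\approx\; \frac{2\,n_n}{n_n + n_p} \;=\; \frac{2\,(n_n/n_p)}{1 + n_n/n_p} \;\approx\; \frac{2/7}{8/7} \;=\; \frac{1}{4},$$

which is the quarter of the mass of the universe found in the computer calculations.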
Further, although I have said that physicists are only interested in explaining general principles, it is not so clear what is a principle and what is a mere accident. Sometimes what we think is a fundamental law of nature is just an accident. Kepler again provides an example. He is known today chiefly for his famous three laws of planetary motion, but when he was a young man he tried also to explain the diameters of the orbits of the planets by a complicated geometric construction involving regular polyhedra. Today we smile at this because we know that the distances of the planets from the sun reflect accidents that occurred as the solar system happened to be formed. We wouldn’t try to explain the diameters of the planetary orbits by deducing them from some fundamental law.
In a sense, however, there is a kind of approximate statistical explanation for the distance of the earth from the sun.5 If you ask why the earth is about a hundred million miles from the sun, as opposed, say, to two hundred million or fifty million miles, or even further, or even closer, one answer would be that if the earth were much closer to the sun then it would be too hot for us and if it were any further from the sun then it would be too cold for us. As it stands, that’s a pretty silly explanation, because we know that there was no advance knowledge of human beings in the formation of the solar system. But there is a sense in which that explanation is not so silly, because there are countless planets in the universe, so that even if only a tiny fraction are the right distance from their star and have the right mass and chemical composition and so on to allow life to evolve, it should be no surprise that creatures that inquire into the distance of their planet from its star would find that they live on one of the planets in this tiny fraction.
This kind of explanation is known as anthropic, and as you can see it does not offer a terribly useful insight into the physics of the solar system. But anthropic arguments may become very important when applied to what we usually call the universe. Cosmologists increasingly speculate that just as the earth is just one of many planets, so also our big bang, the great expansion of the universe in which we live, may be just one of many bangs that go off sporadically here and there in a much larger mega-universe. They further speculate that in these many different big bangs some of the supposed constants of nature take different values, and perhaps even some of what we now call the laws of nature take different forms. In this case, the question why the laws of nature that we discover and the constants of nature that we measure are what they are would have a rough teleological explanation—that it is only with this sort of big bang that there would be anyone to ask the question.
I certainly hope that we will not be driven to this sort of reasoning, and that we will discover a unique set of laws of nature that explain why all the constants of nature are what they are. But we have to keep in mind the possibility that what we now call the laws of nature and the constants of nature are accidental features of the big bang in which we happen to find ourselves, though constrained (as is the distance of the earth from the sun) by the requirement that they have to be in a range that allows the appearance of beings that can ask why they are what they are.
Conversely, it is also possible that a class of phenomena may be regarded as mere accidents when in fact they are manifestations of fundamental physical principles. I think this may be the answer to a historical question that has puzzled me for many years. Why was Aristotle (and many other natural philosophers, notably Descartes) satisfied with a theory of motion that did not provide any way of predicting where a projectile or other falling body would be at any moment during its flight, a prediction of the sort that Newton’s laws do provide? According to Aristotle, substances tend to move to their natural positions—the natural position of earth is downward, the natural position of fire is upward, and water and air are naturally somewhere in between, but Aristotle did not try to say how fast a bit of earth drops downward or a spark flies upward. I am not asking why Aristotle had not discovered Newton’s laws—obviously someone had to be the first to discover these laws, and the prize happened to go to Newton. What puzzles me is why Aristotle expressed no dissatisfaction that he had not learned how to calculate the positions of projectiles at each moment along their paths. He did not seem to realize that this was a problem that anyone ought to solve.
I suspect that this was because Aristotle implicitly assumed that the rates at which the elements move to their natural places are mere accidents, that they are not subject to rules, that you couldn’t say anything general about them (except that heavy objects fall faster than light ones), that the only things about which one could generalize were questions of equilibrium—where objects will come to rest. This may have reflected a widespread disdain for change on the part of the Hellenic philosophers, as shown for instance in the work of Parmenides, which was admired by Aristotle’s teacher Plato. Of course Aristotle was wrong about this, but if you imagine yourself in his times, you can see how far from obvious it would have been that motion is governed by precise mathematical rules that might be discovered. As far as I know, this was not understood until Galileo began to measure how long it took balls to roll various distances down an inclined plane. It is one of the great tasks of science to learn what are accidents and what are principles, and about this we cannot always know in advance.
So now that I have deconstructed the words “fundamental,” “deduce,” and “principle,” is anything left of my proposal, that in physics we say that we explain a principle when we deduce it from a more fundamental principle? Yes, I think there is, but only within a historical context, a vision of the future of science. We have been steadily moving toward a satisfying picture of the world. We hope that in the future we will have achieved an understanding of all the regularities that we see in nature, based on a few simple principles, laws of nature, from which all other regularities can be deduced. These laws will be the explanation of whatever principles (such as, for instance, the rules of the Standard Model or of general relativity) can be deduced directly from them, and those directly deduced principles will be the explanations of whatever principles can be deduced from them, and so on. Only when we have this final theory will we know for sure what is a principle and what an accident, what facts about nature are entailed by what principles, and which are the fundamental principles and which are the less fundamental principles that they explain.
I have now done the best I can to say whether science can explain anything, so let me take up the question whether science can explain everything. Clearly not. There certainly always will be accidents that no one will explain, not because they could not be explained if we knew all the precise conditions that led up to them, but because we never will know all these conditions. There are questions like why the genetic code is precisely what it is or why a comet happened to hit the earth 65 million years ago in just the place it did rather than somewhere else that will probably remain forever outside our grasp. We cannot explain, for example, why John Wilkes Booth’s bullet killed Lincoln while the Puerto Rican nationalists who tried to shoot Truman did not succeed. We might have a partial explanation if we had evidence that one of the gunmen’s arms was jostled just as he pulled the trigger, but, as it happens, we don’t. All such information is lost in the mists of time; events depend on accidents that we can never recover. We can perhaps try to explain them statistically: for example, you might consider a theory that Southern actors in the mid-nineteenth century tended to be good shots while Puerto Rican nationalists in the mid-twentieth century tended to be bad shots, but when you only have a few singular pieces of information it’s very difficult to make even statistical inferences. Physicists try to explain just those things that are not dependent on accidents, but in the real world most of what we try to understand does depend on accidents.
Further, science can never explain any moral principle. There seems to be an unbridgeable gulf between “is” questions and “ought” questions. We can perhaps explain why people think they should do things, or why the human race has evolved to feel that certain things should be done and other things should not, but it remains open to us to transcend these biologically based moral rules. It may be, for example, that our species has evolved in such a way that men and women play different roles—men hunt and fight, while women give birth and care for children—but we can try to work toward a society in which every sort of work is as open to women as it is to men. The moral postulates that tell us whether we should or should not do so cannot be deduced from our scientific knowledge.
There are also limitations on the certainty of our explanations. I don’t think we’ll ever be certain about any of them. Just as there are deep mathematical theorems that show the impossibility of proving that arithmetic is consistent, it seems likely that we will never be able to prove that the most fundamental laws of nature are mathematically consistent. Well, that doesn’t worry me, because even if we knew that the laws of nature are mathematically consistent, we still wouldn’t be certain that they are true. You give up worrying about certainty when you make that turn in your career that makes you a physicist rather than a mathematician.
Finally, it seems clear that we will never be able to explain our most fundamental scientific principles. (Maybe this is why some people say that science does not provide explanations, but by this reasoning nothing else does either.) I think that in the end we will come to a set of simple universal laws of nature, laws that we cannot explain. The only kind of explanation I can imagine (if we are not just going to find a deeper set of laws, which would then just push the question farther back) would be to show that mathematical consistency requires these laws. But this is clearly impossible, because we can already imagine sets of laws of nature that, as far as we can tell, are completely consistent mathematically but that do not describe nature as we observe it.
For example, if you take the Standard Model of elementary particles and just throw away everything except the strong nuclear forces and the particles on which they act, the quarks and the gluons, you are left with the theory known as quantum chromodynamics. It seems that quantum chromodynamics is mathematically self-consistent, but it describes an impoverished universe in which there are only nuclear particles—there are no atoms, there are no people. If you give up quantum mechanics and relativity then you can make up a huge variety of other logically consistent laws of nature, like Newton’s laws describing a few particles endlessly orbiting each other in accordance with these laws, with nothing else in the universe, and nothing new ever happening. These are logically consistent theories, but they are all impoverished. Perhaps our best hope for a final explanation is to discover a set of final laws of nature and show that this is the only logically consistent rich theory, rich enough for example to allow for the existence of ourselves. This may happen in a century or two, and if it does then I think that physicists will be at the extreme limits of their power of explanation.
May 31, 2001
1. This article is based on a talk given at a symposium on “Science and the Limits of Explanation” at Amherst last autumn.
2. “On the Notion of Cause,” reprinted in Mysticism and Logic (Doubleday, 1957), p. 174.
3. There is an example of the difficulty of explaining events in terms of causes that is much cited by philosophers. Suppose it is discovered that the mayor has paresis. Is this explained by the fact that the mayor had an untreated case of syphilis some years earlier? The trouble with this explanation is that most people with untreated syphilis do not in fact get paresis. If you could trace the sequence of events that led from the syphilis to the paresis, you would discover a great many other things that played an essential role—perhaps a spirochete wiggled one way rather than another way, perhaps the mayor also had some vitamin deficiency—who knows? And yet we feel that in a sense the mayor’s syphilis is the explanation of his paresis. Perhaps this is because the syphilis is the most dramatic of the many causes that led to the effect, and it certainly is the one that would be most relevant politically.
4. Carl Hempel and Paul Oppenheim, “Studies in the Logic of Explanation,” Philosophy of Science, Vol. 15 (1948), pp. 135–175; reprinted with some changes in Aspects of Scientific Explanation and Other Essays in the Philosophy of Science (Free Press, 1965).
5. Professor R.J. Hankinson of the University of Texas has directed my attention to Galen for an early example of this “explanation.” Of course, writing 1400 years before Copernicus, Galen was concerned to explain the position of the sun rather than that of the earth. In “On the Usefulness of the Parts of the Body” he compared his explanation of the sun’s position to the explanation of the position of the human foot at the end of the leg—both sun and foot are placed by the creator where they would do the most good.