The year 1989 marked a milestone in the history of artificial intelligence. A computer program, Deep Thought, defeated several chess grandmasters. The Russian grandmaster Garri Kasparov, it is true, defeated Deep Thought, but who can feel confident that in ten years’ time the then world champion will be able to defeat the best program? If a computer can succeed at the most beautiful and creative of games, what limits are there to the achievements of artificial intelligence? The question becomes more compelling if one accepts the current “strong AI” philosophy. According to this view, the human brain is only a large, if somewhat inaccurate, digital computer; consciousness is a necessary property of matter organized in such a way as to carry out complex computations; it follows, therefore, that when we construct computers as complex as the human brain, they too will be conscious. Computers will think as we do, and be aware of what they are doing.
Roger Penrose’s book was written to combat this strong AI philosophy. There are, he suggests, two other points of view open to you, even if you accept that the brain is the organ of thought, and hope for a scientific account of how it works. The first is that the brain is indeed a digital computer, but that it is conscious only because it is composed of living material. A digital computer made of transistors might perform identical computations, in identical ways, but it would not be conscious, because it was made of transistors and not of neurons. Penrose ascribes this view to John Searle, a philosopher who is critical of the strong AI view. I am not sure whether this correctly reflects Searle’s views: if not, no doubt he will find an opportunity to reply. In any case, Penrose does not find the argument convincing. For reasons I will give in a moment, I agree with him. Penrose’s own opposition to the strong AI view is differently based. The brain, he argues, is not a digital computer.
Before explaining why he thinks this, I must say a few words about the other line of argument, that computers are made of the wrong stuff to be conscious. As a geneticist, I am prejudiced against the idea that the peculiarities of living organisms arise because of the special nature of living material. This was a popular view when I was a boy, and was commonly used as a defense of religion against the inroads of science. I remember being told, by a schoolmaster who was also a parson, that scientists had shown that one could bring together all the chemical substances, carbon, oxygen, nitrogen, and so on, in the same proportions as are present in a seed, yet the seed would not grow into a plant, because it lacked the breath of life. Today we have a very satisfactory explanation of one of the two most fundamental properties of life—heredity—in chemical terms, so I hope that no schoolmaster is so foolish as to use this argument to stave off incipient atheism in his pupils.
The other property, consciousness, still escapes us, but it seems sensible to try to explain it in terms of physical law. I can see little sense in claiming that consciousness, any more than heredity, resides in single atoms of carbon or nitrogen. Presumably, then, it must reside in the way in which the matter is arranged. Of course, it is logically possible that only computers made of neurons can be conscious, but the idea is unattractive. It seems more plausible that any “computer” that is formally similar to a brain will be conscious. The idea of “formal similarity” is crucial, and is the topic of the rest of this review.
I think that Penrose would agree with the ideas expressed above. His essential point is different: it is that the brain is not a digital computer. He spends some time explaining what this means. Digital computers are “Universal Turing Machines.” That is, they are examples of a class of computing machines first defined by the mathematician Alan Turing. For the purposes of this review, it is sufficient to understand that a Turing machine is algorithmic: it reaches its conclusions by following a precisely defined set of rules, an “algorithm.” An example of an algorithm would be the following rule for deciding on your lead against a no-trump contract at bridge: “lead the fourth highest card of your longest suit” (an additional rule would be required to tell you what to do if you had two equal longest suits—for example, “lead from the higher-ranking suit”). The important point is that the rules must be precise, and of a kind whose consequences can be calculated unambiguously.
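To make “precise and unambiguous” concrete, here is a minimal sketch of my own (the card encoding and the sample hand are invented for illustration; nothing like this appears in the review): the bridge rule, including the tie-breaking clause, written out as a few lines of Python.

```python
# The "fourth highest of your longest suit" opening lead, with the tie-breaking
# rule "lead from the higher-ranking suit". Ranks run 2..14, with 14 = ace.

SUITS = ["spades", "hearts", "diamonds", "clubs"]   # highest-ranking suit first

def opening_lead(hand):
    """hand: dict mapping each suit name to the list of ranks held in that suit."""
    # Longest suit wins; among equal longest suits, the higher-ranking suit wins.
    suit = max(SUITS, key=lambda s: (len(hand[s]), -SUITS.index(s)))
    cards = sorted(hand[suit], reverse=True)         # highest card first
    return suit, cards[3] if len(cards) >= 4 else cards[-1]

hand = {"spades": [14, 10, 7, 3], "hearts": [13, 5],
        "diamonds": [9, 8, 2], "clubs": [12, 6, 4, 2]}
print(opening_lead(hand))   # ('spades', 3): the fourth-highest card of the longest suit
```

Because every step is fully determined, the rule’s consequences can be calculated unambiguously, which is exactly what makes it an algorithm in Turing’s sense.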
At this point, a distinction must be made between “producing the same answer” and “producing the answer in the same way.” For example, imagine you are playing an invisible opponent at chess. You do not know whether your opponent is the computer program Deep Thought or a human grandmaster. Either way, you will probably lose—certainly so unless you are a better player than I am. But it would be hard, and perhaps impossible, to tell whether your opponent was machine or human: both would produce the same answer. The calculations performed, however, would be quite different. The computer would examine many millions of positions, and choose the best line by a “minimax” procedure, which selects the best line of play against an opponent who is also playing as well as possible. Indeed, the depressing thing about chess programs is that they are no cleverer than those that I and others imagined back in the 1940s. The difference is that computers have become much faster. In contrast, the human grandmaster would examine only a minute subset of the possible lines of play: we still find it impossible to write an exact rule, or algorithm, specifying which lines he would, or should, choose to examine.
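For readers who want to see what a “minimax” procedure amounts to, here is a bare-bones sketch of my own. The game interface (legal_moves, apply, evaluate) is hypothetical, and real chess programs add many refinements (such as alpha-beta pruning) on top of this idea.

```python
# A toy minimax search: look ahead a fixed number of moves, score the resulting
# positions, and assume each side always picks the reply that is best for it.
# The `game` object and its methods are hypothetical stand-ins, not a real engine's API.

def minimax(position, depth, maximizing, game):
    moves = game.legal_moves(position)
    if depth == 0 or not moves:
        return game.evaluate(position), None        # static score of the position
    best_move = None
    best_score = float("-inf") if maximizing else float("inf")
    for move in moves:
        score, _ = minimax(game.apply(position, move), depth - 1, not maximizing, game)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```

The strength of a program built this way comes almost entirely from how many positions per second it can push through that loop, which is the point about speed rather than cleverness.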
Let me give a more illuminating example of two ways of reaching the same conclusion. I give you a large-scale map of England and a box of matches, and ask you how far it is from London to Brighton. One method would be to read off the map references (say (10, 20) and (40, 60)) and find the answer by the method of Pythagoras: d² = (40 − 10)² + (60 − 20)² = 2500, so d = 50. An elegant alternative would be to note that the length of a match corresponded to, say, five miles on the map, and that ten matches arranged end to end would reach from London to Brighton. These two methods give the same answer, but they do so in different ways. Both methods could, in principle, be mechanized. The Pythagoras method is the one that would be adopted by anyone programming a computer. There would, in fact, be nothing in the computer analogous to the “map.” The geographical information would merely be a list of places with their map references. There would be no physical object inside the computer that was a two-dimensional representation of the world. It would also be possible, though tricky, to devise a machine in which magnetized needles arranged themselves in a row linking “London” and “Brighton,” which themselves would be represented by north and south magnetic poles on a two-dimensional map.
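As a small illustrative sketch of my own (not something in the review or in Penrose’s book), here is the algorithmic route to that answer written out in a few lines of Python; the coordinates and the five-mile match length are simply the figures used above.

```python
import math

# The "Pythagoras" route to the answer: pure arithmetic on the map references,
# with no two-dimensional map anywhere inside the machine.
london, brighton = (10, 20), (40, 60)   # illustrative map references from the text
d = math.hypot(brighton[0] - london[0], brighton[1] - london[1])
print(d)        # 50.0 map units
print(d / 5)    # 10.0 -- the "ten matches" of five miles each
```

A digital machine arrives at “ten matches” only by doing this arithmetic; the magnetized-needle machine would arrive at the same figure simply by being a row of needles stretching from one pole to the other.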
The latter would be an “analog” computer. Today, the word analog means, to a computer scientist, continuously varying, as opposed to “digital,” which means existing in one of two discrete states. Originally, however, an analog computer was any device that made use of a physical analogue. For example, one can analyze the stresses in a beam by looking at soap bubbles, and the modes of vibration of a complex structure such as an airplane’s wing by measuring the current in an analogous electrical circuit. Today, analog computers are out of fashion, because the astonishing ability of digital computers to do arithmetic has rendered them obsolete. But this does not prove that the brain is not an analog device.
In fact there are reasons to think that the brain may not be only a digital computer. Optically, the eye is a device that throws a two-dimensional image of the world onto the retina. But the problem of vision is to explain how this image is translated into “there is a car approaching me on the wrong side of the road,” or “my friend Joe is smiling at me.” It is known that several 2-D representations of the retina are present in the brain, with particular points on the retina connected to corresponding points in the brain. These representations are presumably used in performing the calculations that interpret the information on the retina. But does the brain make use of the 2-D nature of the representation? It would in principle be possible to store the information from the retina in a form that bore no geometrical similarity to the image. Indeed, a digital computer could store the information, but it would not do so in a set of units arranged in a 2-D array. We have seen, in the example of estimating the distance from London to Brighton, that the geometric information can be made use of by an analog device, but that the same answer can be produced if the information is stored simply as a list of map references.
If the brain does make use of the 2-D pattern when computing, how might it do so? Clearly, the brain does not contain matchsticks or magnetic needles. But the distance between two points could easily be estimated by the time it takes for a message to travel from one to the other. Since most of the time would be taken up in transmitting the message across the synapses that connect one neuron to another, this method of measuring would be closely analogous to arranging matchsticks in a row. It is at least possible that much of visual perception depends on analog computation.
A proponent of digital computing, however, could argue that the 2-D representations in the brain, of tactile and auditory as well as visual information, exist because that is a convenient way to construct the brain during development, and not because the representation is used in analog computing. This is a valid objection, but there are several reasons for thinking that the 2-D representations exist because they need to be that way. Owls can form a picture of the world by using their ears, as well as their eyes. Both pictures, auditory and visual, have a 2-D representation in the cortex, and these two representations are superimposed on one another, so that a point in the external world is represented by a group of neurons in the auditory map, and by a group of neurons in the visual map, and these two groups lie over one another. It is hard to see why this arrangement exists unless it is used in an analog computation.
A second reason concerns the way in which tactile information from the skin is represented. The information is first passed to the thalamus, where there is rather little sign of a 2-D arrangement; it is then passed on to the cortex, where the 2-D pattern is reconstituted. Again, it is hard to see why this should be so unless the arrangement is used in analog computation.
There is a quite different line of argument leading to the conclusion that the brain is more than a digital computer. This is the argument from introspection: at least sometimes, thinking does not seem to be algorithmic. Penrose is particularly interesting on what it is like to be a mathematician. He emphasizes that, when one has a new idea (there is, of course, no need that the idea should be new to mankind: it is sufficient that it be new to oneself), it most often is geometric, and is grasped as a whole, and not as a series of logical steps. He insists that his thinking is not always verbal, particularly when it is most original. The verbal and logical part of thinking comes later, when one has to check that the idea that one has had is actually correct, or when one has to explain it to someone else. I find myself in complete agreement with Penrose about what thinking feels like, although, perhaps more often than he does, I do have ideas that I am sure are correct but that turn out to be false.
This apparently geometric and holistic nature of thinking may be illusory: it may simply be what a digital computer feels like when it has got an answer. There is, however, a related aspect of the process of seeing a new idea that is suggestive. A number of scientists have reported that they have reached a new idea by seeing an analogy between something they did understand, and something they did not. My own experience confirms this. I remember, for example, getting an idea about how repeated structures, like fingers or vertebrae, might develop because I had once read about Bohr’s model of the atom; and on another occasion producing a theory about why some organisms are hermaphrodite, whereas others have separate sexes, by analogy with the law of diminishing returns in economics. Indeed, most of my career has consisted of drawing analogies between engineering, biology, and economics. The point is much more general. Insofar as there is any sense in Structuralism, it is that the human mind works by drawing analogies. I do not know whether anyone has tried to program a digital computer to recognize analogies. An analogy, after all, is only a formal, mathematical similarity, so it should be recognizable.
I have spent some time discussing my own reasons for wondering whether the brain is more than a digital computer. Penrose is a mathematician and physicist, whereas I am an engineer and geneticist, so his reasons are different. Unfortunately, I do not find his arguments easy to follow, not because they are illogical or badly explained, but because they depend on hard mathematics and hard physics. He goes to extraordinary lengths to make his ideas accessible to nonspecialists. Sometimes, I think, he goes to excessive lengths. Is it really necessary, for example, to explain complex numbers and the Argand diagram in order to introduce the Mandelbrot set? His main reason for introducing the Mandelbrot set, as I understand it, was to make the point that mathematical objects have a real existence, and are out there waiting to be discovered.
I have some sympathy with this Platonic view of mathematics, but I wonder whether the point could not have been made more simply. Indeed, I am not clear what is the relevance of the Platonic view to Penrose’s main argument. Now I am reasonably sure that I have missed the point, but I’m not sure it is altogether my fault. I enjoyed understanding the derivation of the Mandelbrot set, and some of its properties, but by the time I had done so I was too exhausted, or too lazy, to work out its relevance or otherwise to the similarity of minds and computers. I suspect that most people who read the book are going to have this experience. They will learn a lot about crystalline tiling, or quantum dynamics, or black holes, without seeing what these topics have to do with the claim that brains are not digital computers.
One of Penrose’s central arguments rests on Gödel’s theorem. This is one of those ideas that many people have heard of but few understand. Penrose’s account of it is, by a long way, the clearest and most helpful that I have met. If I had got nothing else out of his book, the effort of reading it would be worth it for the understanding he gave me of this theorem. What Gödel showed was that any consistent formal system of axioms and rules of procedure rich enough to express ordinary arithmetic must contain some statements that can neither be proved nor disproved by the means allowed within the system. Penrose describes, in nontechnical terms, how Gödel went about reaching this odd conclusion. Equally important, he brings out its significance, in a way that was new to me. I had previously seen Gödel’s theorem as demonstrating the limitations of formal systems, as indeed it does. What I had not appreciated was that the human mind may be able to perceive the truth of some such proposition, although that truth cannot be settled within the formal system. This demonstrates, or so Penrose argues, that the mind cannot be purely algorithmic, since it can decide matters that cannot be decided algorithmically.
Penrose devotes considerable attention to the question of whether the mind can solve problems that cannot be solved by a digital computer. When I suggested above that the 2-D representations in the brain might enable it to perform analogically calculations that a computer performs algorithmically, I was not arguing that this would necessarily, or even probably, enable the brain to perform better than a computer. The computer would reach the same conclusions, but in a different way. But perhaps there are problems that cannot be solved algorithmically but can be solved analogically. It is important to distinguish between “cannot in principle” and “cannot in a reasonable time.” The distinction is best explained by an example, the famous “traveling salesman” problem. There are, say, fifty cities that the salesman must visit before returning to his starting point. The travel time between each pair of cities is known. The problem is to find the route that minimizes the total time. In a sense, it is trivial to find the answer on a computer. One simply lists all the possible routes, calculates their lengths, and chooses the shortest. The snag is that, if there are many cities, the number of possible routes, and hence the time taken to do the calculation, becomes prohibitively large. Such a problem is said to be NP-hard: no algorithm is known for it whose running time grows only as n, or n², or n³, or any other fixed power of n, where n is the number of cities, and the time for the exhaustive search grows far faster than that.
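To make the brute-force method concrete, here is a minimal sketch of my own (not from the review): it literally lists every tour, measures each, and keeps the shortest, which is exactly why the running time explodes as the number of cities grows. The four cities and their travel times are invented for the example.

```python
from itertools import permutations

# The exhaustive method described above: enumerate every tour that starts and ends
# at the first city, total up its travel time, and keep the best. With n cities
# there are (n - 1)! orderings to try, which is hopeless for fifty.

def shortest_tour(cities, travel_time):
    """cities: list of labels; travel_time: dict mapping (a, b) pairs to times (both directions)."""
    start, rest = cities[0], cities[1:]
    best_tour, best_time = None, float("inf")
    for order in permutations(rest):
        tour = (start,) + order + (start,)
        time = sum(travel_time[(tour[i], tour[i + 1])] for i in range(len(tour) - 1))
        if time < best_time:
            best_tour, best_time = tour, time
    return best_tour, best_time

# Four cities with made-up travel times, just to show the interface.
times = {}
for a, b, t in [("A", "B", 2), ("A", "C", 9), ("A", "D", 10),
                ("B", "C", 6), ("B", "D", 4), ("C", "D", 3)]:
    times[(a, b)] = times[(b, a)] = t
print(shortest_tour(["A", "B", "C", "D"], times))   # (('A', 'B', 'D', 'C', 'A'), 18)
```

For four cities the loop tries only six tours; for fifty it would have to try roughly 49!, which is the sense in which the answer cannot be found in a reasonable time.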
The existence of NP-hard problems raises difficulties for you if you think that all problems can be solved by computers: they cannot, at least in a reasonable time. But the brain cannot solve NP-hard problems either, so you can continue to believe that the brain is a digital computer. So far as I know, there is no NP-hard problem that can be solved quickly by an analog computer. There are, however, some amusing near misses. Consider, for example, a simplified version of the traveling salesman problem in which the salesman must find the shortest route from London to Moscow: he need not visit all the cities, or return to his starting point. There is an analog computer that can solve this problem. To construct it, buy fifty curtain rings, and label each with the name of a city. Connect each pair of rings with a piece of string whose length is proportional to the time taken to travel between the cities. Then take hold of the rings labeled London and Moscow, and pull them apart. When one line of string becomes taut, it gives the solution: the rings on the taut string are the intermediate stops. At first sight, this might seem to be an NP-hard problem. If the number of cities is large, the problem cannot be solved in a reasonable time by listing all possible routes and choosing the shortest. I do not know who first thought of the problem, or its analog solution. When I first met it, I thought it demonstrated an analog solution to an NP-hard problem. Unfortunately, it does not. It is possible to think of an algorithm that solves the problem in “polynomial time.”
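One such polynomial-time method is the standard shortest-path algorithm usually credited to Dijkstra. The sketch below is my own illustration, not anything from the review or from Penrose’s book, and the small graph of cities and travel times is invented for the example.

```python
import heapq

# Dijkstra's shortest-path algorithm: a polynomial-time method for the
# London-to-Moscow version of the problem that the string-and-rings device
# solves by analogy.

def shortest_path(graph, source, target):
    """graph: dict mapping city -> list of (neighbour, travel_time) pairs."""
    queue = [(0, source, [source])]          # (time so far, city, route taken)
    seen = set()
    while queue:
        time, city, route = heapq.heappop(queue)
        if city == target:
            return time, route
        if city in seen:
            continue
        seen.add(city)
        for neighbour, t in graph[city]:
            if neighbour not in seen:
                heapq.heappush(queue, (time + t, neighbour, route + [neighbour]))
    return float("inf"), None

graph = {
    "London": [("Paris", 2), ("Berlin", 5)],
    "Paris": [("Berlin", 2), ("Moscow", 9)],
    "Berlin": [("Moscow", 4)],
    "Moscow": [],
}
print(shortest_path(graph, "London", "Moscow"))  # (8, ['London', 'Paris', 'Berlin', 'Moscow'])
```

Even on fifty cities a search like this finishes in a fraction of a second, which is why the curtain-ring device, charming as it is, does not demonstrate an analog solution to anything a digital computer finds hard.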
Although the modified traveling salesman problem is not an example of an NP-hard problem solved by an analog device, it is a nice illustration of two formally different ways, analog and algorithmic, of solving the same problem. There are other types of problem that cannot in practice be solved by a digital computer. In particular, Laplace’s problem—to predict the future behavior of the universe, given the positions and velocities of all the particles in it—cannot be solved, even if one assumes deterministic classical dynamics. The only “computer” able to solve the problem is the universe itself: it will do something, and if you hang around you will see the answer. But no computer smaller than the universe could do the calculations. There are many physical processes whose long-term behavior cannot be predicted by numerical calculation. But I do not think that this undermines the claim that the brain is a computer, because the brain cannot solve such problems either.
The crucial question, then, is whether there are problems that cannot be solved, in a reasonable time, by a digital computer, but that could be solved analogically. I do not know the answer to this question. Penrose argues that a “quantum computer” could perhaps outperform a digital computer, and that the brain may be such a computer. I had better admit right away that I do not understand how such a computer would work. There are, however, no reasons at present to think that the brain is such a device, except in the sense that all physics and chemistry depends on quantum theory.
Penrose makes the important point that it would not be possible to make a digital computer if the world obeyed strictly classical dynamics. What the quantum nature of physics does is to make possible the existence of discrete states without intermediates: it breaks up a smooth and continuous world into a bumpy one. It is the existence of discrete alternative states that makes digital computers possible. I was interested that he made this point, because it has seemed to me for some time that all transmission of information, if it is to take place without degrading the message, depends on the existence of discrete states, and that it is for this reason that hereditary information is carried in discrete form by DNA. In the hereditary context, the connection between quantum theory and the digital nature of information is more obvious, because it depends on the chemistry of base pairing. But ultimately, as Penrose makes clear, all discrete states depend on quantum processes.
When Penrose argues that the brain is a quantum computer, he is making a much stronger claim. He is suggesting that the functioning of the brain may depend on individual neurons being sensitive to individual quanta. As he points out, cells in the retina are influenced by single photons, single quanta of light; but what we know of the anatomy and physiology of neurons suggests that their behavior, although discrete in the sense that a nerve either conducts an impulse or it does not, is determined by summing up a large number of inputs. It seems to me, therefore, that the behavior of neurons should be deterministic, as is the behavior of transistors whose switching is determined by a large number of electrons. However, this is an issue that I should leave to physicists to argue about. Penrose himself is aware that most neurons have multiple inputs, and does not regard it as a fatal objection to his ideas, although he puts forward the suggestion that the brain may be a quantum computer as no more than a speculation. We know too little about the way brains work to dismiss the speculation out of hand, but at present there is little to support it.
At this point, I will try to summarize the argument so far. Digital computers solve problems by means of algorithms: that is, by carrying out a series of logical operations. There is an alternative class of computer—the so-called analog computer—that solves problems by making use of the formal analogies between different physical systems, for example electrical and mechanical, so that one can predict the behavior of one system by looking at another. There are anatomical and physiological reasons for thinking that the brain may be, in part, an analog computer, particularly when analyzing spatial information.
This raises the question of whether there are types of problems that can be solved by an analog device, but not, at least in a reasonable time, by a digital one. There are certainly problems that cannot, in practice, be solved by digital computers, but it is not clear that these can be solved by analog computers or, for that matter, by brains. Penrose argues, on several grounds, that the brain can do things that no digital computer will be able to do. First, introspection suggests that the answer to a hard problem often presents itself holistically, and not in a series of steps. Second, Gödel’s theorem shows that any sufficiently rich formal system contains propositions that cannot be settled within the system, yet, Penrose argues, the truth or falsehood of some of these propositions can be decided by the human mind. Finally, it may be that there are problems that could be settled by a quantum computer but not by a digital computer, and that the brain is such a quantum computer.
These arguments are relevant to whether brains can do things that computers cannot, but not to whether brains are conscious whereas computers are not. I find consciousness deeply puzzling. I can understand, in Darwinian terms, why organisms should have evolved patterns of behavior that cause them to avoid stimuli that, if continued, would damage them, but not why they should experience pain. Of course I withdraw my hand from a flame, but why is it painful?
An odd thing about discussions of consciousness—and Penrose is no exception—is that, even if they start by saying that pain, hunger, thirst, and so on are the simplest manifestations of the phenomenon, they soon turn to a discussion of consciousness in more complex contexts, such as self-consciousness. This confuses two issues: on the one hand awareness, either of simple sensations like pain or of complex ideas like self, and on the other hand the mental capacity needed to solve problems. The difficulty, as I see it, is to account for awareness of any kind, and not just the awareness present when we are solving problems. For example, a social animal like ourselves will benefit if it can imagine how other people feel, and this requires that one be conscious of oneself, and of the fact that others have selves, and may feel as one would feel in their place. Now I agree that it would pay a social animal to think in this way, and that to do so it would have to have concepts such as “self” and “another person.” But it is not obvious to me that a device, brain or computer, that can think in this way need be conscious, any more than I see why one has to feel pain in order to withdraw one’s hand from a flame.
The natural attitude for a biologist is to regard consciousness, of pain or of self, as a property of matter behaving in a particular way. Is this an assumption one makes because it is hard to see what else one could assume, or is it a testable hypothesis? If the latter, can we hope to find out what type of behavior of matter gives rise to consciousness? I fear that, most of the time, it is only a convenient assumption. But, provided that we are willing, as seems reasonable, to accept other people’s word for their state of consciousness, the assumption can become a testable hypothesis. There are situations, often in the course of surgery performed for other reasons, in which it is possible to ask someone, about whose brain activity something is known, whether he or she is conscious, and if so of what. Penrose has a brief but clear discussion of such cases. Already, we have the beginnings of an experimental study of the physical basis of consciousness.
The strong AI position is that, if we make a machine that thinks as we do, that machine will be conscious. There are two ways of disagreeing with this. First, one could argue that, even if the machine performs as we do, it reaches its answers in a different way: Deep Thought performs like a grandmaster, but does not choose its moves in the same way. There is no reason to expect a machine that reaches the same answers as the brain, but in a different way, to be conscious. This is part of Penrose’s argument, but he also argues that the brain, because it is not a digital computer, can do things that a computer cannot. I am not sure whether he thinks that, if we construct a machine that not only performs like a brain but does so in the same way, it will be conscious. The second way of disagreeing with the strong AI position, which I take to be Searle’s, is to argue that no machine will be conscious unless it is made of the same stuff as the brain.
What I find most puzzling about Penrose’s position is that he wants consciousness to “do something.” He writes as if consciousness were an additional cause of thought, or of behavior, over and above the physical events in the brain. I do not understand what he can mean. I can understand those who think that our actions are not determined by physical events in our brains because the will can override physical law. But Penrose cannot think this, because if he did there would have been no point in writing his book. Why introduce physics if human behavior is independent of physical law?
I must emphasize that he does not invoke quantum indeterminacy as a basis for free will. I agree with him. The whole point about free will is that our actions are determined by our character and disposition, not that they are random. The problem is to provide a physical interpretation of “character” and “disposition,” and I cannot see how an appeal to indeterminacy would help. In any case, I do not think that free will has much relevance to a discussion of how we think, as opposed to how we act. If I ask you, “What do three sevens make?” you could reply “twenty-one,” or you could refuse to answer, or you could decide to lie and say “twenty-two”: what you could not do is to think that the answer was twenty-two. When we speak of freedom of thought in a political context, we mean the freedom to express our thoughts, to hear what others think, to have access to information, and so on. But we cannot by an act of will change what we think.
Penrose’s book will evoke different responses from different classes of people. The AI community will not like it. Most of them would admit that computers are still very bad at some tasks that people do effortlessly, particularly tasks that have to do with language, or with visual perception. But they would not accept that computers are necessarily bad at such tasks. Only by trying to overcome the difficulties, they would argue, can we find out what, if any, are the limitations of AI. I have a lot of sympathy with this point of view.
Biologists are also likely to be critical of Penrose, for two reasons. First, if he is right in his speculations about quantum computers, we are going to have to learn some hard physics, and we have enough on our plates without that. Sociologists are hostile to sociobiology for the same reason: if E. O. Wilson is right, they will have to learn some genetics. A second reason has more justification, but is harder to explain. A curious fact about the history of science in this century is that biology and physics have been moving in opposite directions. Of course, modern biology depends heavily on physics: there would be no molecular biology without isotopes, and no neurobiology without electronics. But conceptually we are poles apart. Biology has become more and more mechanistic, in the sense of believing that organisms are like machines, and can be understood by assuming that matter behaves in a commonsense way—that is, in the way that the objects of our everyday experience behave. In contrast, physicists, who are concerned with the very large and the very small, imagine a less and less mechanistic universe, in which a particle can pass through two slits simultaneously, mass can turn into energy, and stars collapse into black holes.
This difference of attitude is not accidental, but arises from the very different scales, in size and energy, of the objects we study. Biologists have had great successes during the past fifty years. These successes have depended on new physical and chemical techniques, not on recent physical theory. Has the time now come when we must pay attention to recent advances in theoretical physics? Obviously, no biologist would be content if there were a contradiction between his own theories and those of physics, but Penrose is not suggesting that this is so. Equally obviously, if there are any testable predictions that follow from the idea that the brain is a quantum computer, they should be tested. But it will need some spectacular successes in the application of physical theory to biology to persuade most biologists to abandon the mechanistic approach which has served them so well. For the present, biologists will be distrustful of Penrose’s approach. For much the same reason, physicists will like the book. I observe among my physicist friends a conviction that there must be something wrong with biology, because it ignores modern physics.
The people who are going to like the book best, however, will probably be those who don’t understand it. As an evolutionary biologist, I have learned over the years that most people do not want to see themselves as lumbering robots programmed to ensure the survival of their genes. I don’t think they will want to see themselves as digital computers either. To be told by someone with impeccable scientific credentials that they are nothing of the kind can only be pleasing.
March 15, 1990