In response to: "I Married a Computer" from the April 8, 1999 issue

To the Editors:

John Searle starts with a distorted caricature of my book and then attacks the caricature [“I Married a Computer,” NYR, April 8]. Had I written the book he describes, I would attack it also. It is impossible here to unravel the thicket of Searle’s misunderstandings, massive misrepresentations, out-of-context quotes, and philosophical sleights of hand, so I direct the reader to www.kurzweiltech.com/Searle. The following offers a few salient observations.

Searle writes that I “frequently cite IBM’s Deep Blue as evidence of superior intelligence in the computer.” The opposite is the case: I cite Deep Blue to examine the “human and [contemporary] machine approaches to chess…not to belabor the issue of chess, but rather because [they] illustrate a clear contrast” (p. 289). Human thinking follows a very different paradigm. Solutions emerge in the human brain from the unpredictable interaction of millions of simultaneous self-organizing chaotic processes. There are profound advantages to the human paradigm: we can recognize and respond to extremely subtle patterns. But we can build machines the same way.

Searle says that my book “is an extended reflection of the implications of Moore’s Law.” But the exponential growth of computing power is only a small part of the story. As I repeatedly state, adequate computational power is a necessary but not sufficient condition to achieve human levels of intelligence. Searle essentially doesn’t mention my primary thesis: we are learning how to organize these increasingly formidable resources by reverse engineering the human brain itself. By examining brains in microscopic detail, we will be able to re-create and then vastly extend these processes.

Searle is best known for his “Chinese Room” analogy and has presented various formulations of it over twenty years (see web posting). His descriptions illustrate a failure to understand the essence of either brain processes or nonbiological processes that could replicate them. Searle starts with the assumption that the man in the room doesn’t understand anything because, after all, “he is just a computer,” thereby illuminating Searle’s own bias. Searle then concludes—no surprise—that the computer doesn’t understand. Searle combines this tautology with a basic contradiction: the computer doesn’t understand Chinese, yet (according to Searle) can convincingly answer questions in Chinese. But if an entity—biological or otherwise—really doesn’t understand human language, it will quickly be unmasked by a competent interlocutor. In addition, for the program to convincingly respond, it would have to be as complex as a human brain. The observers would long be dead while the man in the room spends millions of years following a program billions of pages long.

Most importantly, the man is acting only as the central processing unit, a small part of a system. While the man may not see it, the understanding is distributed across the entire pattern of the program itself and the billions of notes he would have to make to follow the program. I understand English, but none of my neurons do. My understanding is represented in vast patterns of neurotransmitter strengths, synaptic clefts, and interneuronal connections.

Searle writes that I confuse a simulation with a re-creation of the real thing. What my book actually talks about is a third category: functionally equivalent re-creation. He writes that we could not stuff a pizza into a computer simulation of the stomach and expect it to be digested. But we could indeed accomplish this with a properly designed artificial stomach. I am not talking about a mere “simulation” of the human brain as Searle construes it, but rather functionally equivalent re-creations of its causal powers. We already have functionally equivalent replacements of portions of the brain to overcome such disabilities as deafness and Parkinson’s disease.

I have not even touched on the issue of consciousness (see my posting). Searle writes: “It is out of the question…to suppose that…the computer is conscious.” Given this assumption, Searle’s conclusions are no surprise. Amazingly, Searle writes that “human brains cause consciousness by…specific neurobiological processes.” Now who is being the reductionist here? Searle would have us believe that you can’t be conscious if you don’t squirt neurotransmitters (or some other specific biological process). No entities based on functionally equivalent processes need apply. This biology-centric view of consciousness is likely to go the way of other human-centric beliefs. In my view, we cannot penetrate subjective experience with objective measurement, which is why many classical approaches to its understanding quickly hit a wall.

Searle’s slippery and circular arguments aside, nonbiological entities, which today have many narrowly focused skills, are going to vastly expand in the breadth, depth, and subtlety of their intelligence and creativity. My book discusses the impact this will have on our human-machine civilization (including just the sorts of legal issues that Searle claims I ignore), a development no less important than the emergence of human intelligence some thousands of generations ago.

Ray Kurzweil

Kurzweil Technologies, Inc.

Wellesley, Massachusetts

John Searle replies:

Ray Kurzweil claims that I presented a “distorted caricature” of his book, but he provides no evidence of any distortion. In fact I tried very hard to be scrupulously accurate both in reporting his claims and in conveying the general tone of futuristic techno-enthusiasm that pervades the book. Here are the theses in his book that I found most striking:

  1. Kurzweil thinks that within a few decades we will be able to download our minds onto computer hardware. We will continue to exist as computer software. “We will be software, not hardware” (p. 129, his italics). And “the essence of our identity will switch to the permanence of our software” (p. 129).
  2. According to him, we will be able to rebuild our bodies, cell by cell, with different and better materials using “nanotechnology.” Eventually, “there won’t be a clear difference between humans and robots” (p. 148).
  3. We will be immortal, not only because we will be made of better materials, but because even if we were destroyed we will keep copies of our programs and databases in storage and can be reconstructed at will. “Our immortality will be a matter of being sufficiently careful to make frequent backups,” he says, adding the further caution: “If we’re careless about this, we’ll have to load an old backup copy and be doomed to repeat our recent past” (p. 129). (What is this supposed to mean? That we will be doomed to repeat our recent car accident and spring vacation?)
  4. We will have overwhelming evidence that computers are conscious. Indeed there will be “no longer any clear distinction between humans and computers” (p. 280).
  5. There will be many advantages to this new existence, but one he stresses is that virtual sex will soon be a “viable competitor to the real thing,” affording “sensations that are more intense and pleasurable than conventional sex” (p. 147).

Frankly, had I read this as a summary of some author’s claims, I might think it must be a “distorted caricature,” but Kurzweil does in fact make each of these claims, as I show by extensive quotation. In his letter he does not challenge me on any of these central points. He concedes by his silence that my understanding of him on these central issues is correct. So where is the “distorted caricature”?

I then point out that his arguments are inadequate to establish any of these spectacular conclusions. They suffer from a persistent confusion between simulating a cognitive process and duplicating it, and an even worse confusion between the observer-relative, in-the-eye-of-the-beholder sense of concepts like intelligence, thinking, etc., and the observer-independent intrinsic sense.

What has he to say in response? Well, about the main argument he says nothing. About the distinction between simulation and duplication, he says he is describing neither simulations of mental powers nor re-creations of the real thing, but “functionally equivalent re-creation.” But the notion “functionally equivalent” is ambiguous precisely between simulation and duplication. What exactly functions to do exactly what? Does the computer simulation function to enable the system to have external behavior which is as if it were conscious, or does it function to actually cause internal conscious states? For example, my pocket calculator is “functionally equivalent” to (indeed better than) me in producing answers to arithmetic problems, but it is not thereby functionally equivalent to me in producing the conscious thought processes that go with solving arithmetic problems. Kurzweil’s argument about consciousness is based on the assumption that the external behavior is overwhelming evidence for the presence of the internal conscious states. He has no answer to my objection that once you know that the computer works by shuffling symbols, its behavior is no evidence at all for consciousness. The notion of functional equivalence does not overcome the distinction between simulation and duplication; it just disguises it for one step.

In his letter he tells us he is interested in doing “reverse engineering” to figure out how the brain works. But in the book there is virtually nothing about the actual working of the brain and how the specific electro-chemical properties of the thalamo-cortical system could produce consciousness. His attention rather is on the computational advantages of superior hardware.

On the subject of consciousness there actually is a “distorted caricature,” but it is Kurzweil’s distorted caricature of my arguments. He says, “Searle would have us believe that you can’t be conscious if you don’t squirt neurotransmitters (or some other specific biological process).” Here is what I actually wrote: “I believe there is no objection in principle to constructing an artificial hardware system that would duplicate the causal powers of the brain to cause consciousness using some chemistry different from neurons.” Not much about the necessity of squirting neurotransmitters there. The point I made, and repeat here, is that because we know that brains cause consciousness with specific biological mechanisms, any nonbiological mechanism has to share with brains the causal power to do it. An artificial brain might succeed by using something other than carbon-based chemistry, but just shuffling symbols is not enough, by itself, to guarantee those powers. Once again, he offers no answer to this argument.

He challenges my Chinese Room Argument, but he seriously misrepresents it. The argument is not the circular claim that I do not understand Chinese because I am just a computer, but rather that I don’t as a matter of fact understand Chinese and could not acquire an understanding by carrying out a computer program. There is nothing circular about that. His chief counterclaim is that the man is only the central processing unit, not the whole computer. But this misses the point of the argument. The reason the man does not understand Chinese is that he does not have any way to get from the symbols, the syntax, to what the symbols mean, the semantics. But if the man cannot get the semantics from the syntax alone, neither can the whole computer. It is, by the way, a misunderstanding on his part to think that I am claiming that a man could actually carry out the billions of steps necessary to execute a whole program. The point of the example is to illustrate the fact that the symbol manipulations alone, even billions of them, are not constitutive of meaning or thought content, conscious or unconscious. To repeat, the syntax of the implemented program is not semantics.

Concerning other points in his letter: He says that I am wrong to think that he attributes superior thinking to Deep Blue. But here is what he wrote in response to the charge that Deep Blue just does number crunching and not thinking: “One could say that the opposite is the case, that Deep Blue was indeed thinking through the implications of each move and countermove, and that it was Kasparov who did not have time to think very much during the tournament” (p. 290).

He also says that on his view Moore’s Law is only a part of the story. Quite so. In my review I mention other points he makes such as, importantly, nanotechnology.

I cannot recall reading a book in which there is such a huge gulf between the spectacular claims advanced and the weakness of the arguments given in their support. Kurzweil promises us our minds downloaded onto decent hardware, new bodies made of better stuff, evolution without DNA, better sex without the inconvenience of actual partners, computers that convince us that they are conscious, and above all personal immortality. The main theme of my review is that the existing technological advances that are supposed to provide evidence in support of these predictions, wonderful though they are, offer no support whatever for these spectacular conclusions. In every case the arguments are based on conceptual confusions. Increased computational power by itself is no evidence whatever for consciousness in computers. On these central issues, Kurzweil’s letter is strangely silent.

This Issue: May 20, 1999