Can a Machine Think?
Christopher Evans

Christopher Evans (1931-1979) was an
experimental psychologist and computer scientist. He wrote several books
on various aspects of psychology.
In the early years of the Second World War, when the
British began, in ultra-secrecy, to put together their effort to crack German
codes, they set out to recruit a team of the brightest minds available
in mathematics and the then rather novel field of electronic engineering.
Recruiting the electronic whizzes was easy, as many of them were to be
found engrossed in the fascinating problem of radio location of aircraft, or
radar as it later came to be called. Finding mathematicians with the right
kind of obsessive brilliance to make a contribution in the strange field
of cryptography was another matter. In the end they adopted the ingenious
strategy of searching through lists of young mathematicians who were also
top-flight chess players. As a result of a nationwide trawl, an amazing
collection of characters was billeted together in the country-house surroundings
of Bletchley Park, and three of the most remarkable were Irving John Good,
Donald Michie, and Alan Turing....
If contemporary accounts of what the workers at Bletchley
were talking about in their few moments of spare time can be relied on,
many of them were a bit over-optimistic if anything. Both Good and Michie
believed that the use of electronic computers such as Colossus* would result
in major advances in mathematics in the immediate post-war era and Turing
was of the same opinion. All three (and one or two of their colleagues)
were also confident that it would not be long before machines were exhibiting
intelligence, including problem-solving abilities, and that their role
as simple number-crunchers was only one phase in their evolution. Although
the exact substance of their conversations, carried long into the night
when they were waiting for the test results of the first creaky Colossus
prototypes, has softened with the passage of time, it is known that the topic of
machine intelligence loomed very large. They discussed, with a frisson
of excitement and unease, the peculiar ramifications of the subject they
were pioneering and about which the rest of the world knew (and still knows)
so little. Could there ever be a machine which was able
[* An early computer that was first used in 1943.-Ed.]
to solve problems that no human could solve? Could a computer ever beat
a human at chess? Lastly, could a machine think?
Of all the questions that can be asked about computers
none has such an eerie ring. Allow a machine intelligence, perhaps: the
ability to control other machines, repair itself, help us solve problems,
compute numbers a millionfold quicker than any human; allow it to fly
airplanes, drive cars, superintend our medical records and even, possibly,
give advice to politicians. Somehow you can see how a machine might come
to do all these things. But that it could be made to perform that apparently
exclusively human operation known as thinking is something else,
and something which is offensive, alien and threatening. Only in the most
imaginative forms of science fiction, stretching back to Mary Shelley's masterpiece
Frankenstein, is the topic touched on, and always with a sense of
great uncertainty about the enigmatic nature of the problem area.
Good, Michie and their companions were content to work
the ideas through in their spare moments. But Turing, older, a touch more
serious and less cavalier, set out to consider things in depth. In particular
he addressed himself to the critical question: Can, or could, a machine
think? The way he set out to do this, three decades ago and long before
any other scientist had considered it so cogently, is of lasting interest.
The main thesis was published in the philosophical journal Mind in
1950. Logically unassailable, it serves, when read impartially, to break
down any barriers of uncertainty which surround this and parallel questions.
Despite its classic status the work is seldom read outside the fields of
computer science and philosophy, but now that events in computer science
and in the field of artificial intelligence are beginning to move with
the rapidity and momentum which the Bletchley scientists knew they ultimately
would, the time has come for Turing's paper to achieve a wider public.
Soon after the war ended and the Colossus project folded,
Turing joined the National Physical Laboratory in Teddington and began
to work with a gifted team on the design of what was to become the world's
most powerful computer, ACE. Later he moved to Manchester, where, spurred
by the pioneers Kilburn, Hartree, Williams and Newman, a vigorous effort
was being applied to develop another powerful electronic machine. It was
a heady, hard-driving time, comparable to the state of events now prevailing
in microprocessors, when anyone with special knowledge rushes along under
immense pressure, ever conscious of the feeling that whoever is
second in the race may as well not have entered it at all. As a result
Turing found less time than he would have hoped to follow up his private
hobbies, particularly his ideas on computer game playing (checkers, chess
and the ancient game of Go), which he saw as an important sub-set of machine intelligence.
Games like chess are unarguably intellectual pursuits,
and yet, unlike certain other intellectual exercises, such as writing poetry
or discussing the inconsistent football of the hometown team, they have
easily describable rules of operation. The task, therefore, would seem
to be simply a matter of writing a computer program which "knew" these
rules and which could follow them when faced with moves offered by a human
player. Turing made very little headway as it happens, and the first chess-playing
programs which were scratched together in the late '40s and early '50s
were quite awful-so much so that there was a strong feeling that this kind
of project was not worth pursuing, since the game of chess as played by
an "expert" involves some special intellectual skill which could never
be specified in machine terms.
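The task the early programmers faced can be sketched in modern terms. The fragment below is an illustration only, not Turing's own method: it uses single-pile Nim rather than chess (the same principle, a vastly smaller search space), and the function names are invented. A program "knows" the rules of the game through its move generator, and "follows" them by searching every line of play.

```python
# A minimal sketch of a game-playing program that "knows" the rules
# of a game and chooses moves by looking ahead. The game is single-pile
# Nim: players alternately take 1-3 stones; whoever takes the last
# stone wins.

def legal_moves(stones):
    """The rules: you may take 1, 2 or 3 stones, but no more than remain."""
    return [n for n in (1, 2, 3) if n <= stones]

def best_move(stones):
    """Search every line of play; return (move, True if a forced win)."""
    for move in legal_moves(stones):
        remaining = stones - move
        if remaining == 0:                 # taking the last stone wins outright
            return move, True
        # If the opponent has no winning reply, this move wins.
        if not best_move(remaining)[1]:
            return move, True
    return legal_moves(stones)[0], False   # no winning move: play anything

print(best_move(7))   # (3, True): take 3, leaving the opponent 4
```

For chess the same exhaustive search is hopeless, which is exactly why the early chess programs were so weak and why the "special intellectual skill" objection seemed plausible.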
Turing found this ready dismissal of the computer's
potential to be both interesting and suggestive. If people were unwilling
to accept the idea of a machine which could play games, how would they
feel about one which exhibited "intelligence," or one which could "think"?
In the course of discussions with friends Turing found that a good part
of the problem was that people were universally unsure of their definitions.
What exactly did one mean when one used the word "thought"? What processes
were actually in action when "thinking" took place? If a machine was created
which could think, how would one set about testing it? The last question,
Turing surmised, was the key one, and with a wonderful surge of imagination
spotted a way to answer it, proposing what has in computer circles come
to be known as "The Turing Test for Thinking Machines." In the next section,
we will examine that test, see how workable it is, and also try to assess
how close computers have come, and will come, to passing it.
When Turing asked people whether
they believed that a computer could think, he found almost universal rejection
of the idea-just as I did when I carried out a similar survey almost thirty
years later. The objections I received were similar to those that Turing
documented in his paper "Computing Machinery and Intelligence," and I will
summarize them here, adding my own comments and trying to meet the various
objections as they occur.
First there is the Theological
Objection. This was more common in Turing's time than it is now, but it
still crops up occasionally. It can be summed up as follows: "Man is a creation
of God, and has been given a soul and the power of conscious thought. Machines
are not spiritual beings, have no soul and thus must be incapable of thought."
As Turing pointed out, this seems to place an unwarranted restriction on
God. Why shouldn't he give machines souls and allow them to think if he
wanted to? On one level I suppose it is irrefutable: if someone chooses
to define thinking as something that only Man can do and that only
God can bestow, then that is the end of the matter. Even then the force
of the argument does seem to depend upon a confusion between "thought"
and "spirituality," upon the old Cartesian dichotomy of the ghost in the
machine. The ghost presumably does the thinking while the machine is merely
the vehicle which carries the ghost around.
Then there is the Shock/Horror
Objection, which Turing called the "Heads in the Sand Objection." Both
phrases will do, though I prefer my own. When the subject of machine thought
is first broached, a common reaction goes something like this: "What a
horrible idea! How could any scientist work on such a monstrous development?
I hope to goodness that the field of artificial intelligence doesn't
[* Published in Mind, Vol. LIX
advance a step further if its end-product is a thinking
machine!" The attitude is not very logical-and it is not really an argument
why it could not happen, but rather the expression of a heartfelt
wish that it never will!
The Extra-sensory Perception Objection
was the one that impressed Turing most, and impresses me least. If there
were such a thing as extra-sensory perception and if it were in some way
a function of human brains, then it could well also be an important constituent
of thought. By this token, in the absence of any evidence proving that
computers are telepathic, we would have to assume that they could never
be capable of thinking in its fullest sense. The same argument applies
to any other "psychic" or spiritual component of human psychology. I cannot
take this objection seriously because there seems to me to be no evidence
which carries any scientific weight that extra-sensory perception does
exist. The situation was different in Turing's time, when the world-renowned
parapsychology laboratory at Duke University in North Carolina, under Dr.
J. B. Rhine, was generating an enormous amount of material supposedly offering
evidence for telepathy and precognition. This is not the place to go into
the long, and by no means conclusive, arguments about the declining status
of parapsychology, but it is certainly true that as far as most scientists
are concerned, what once looked like a rather good case for the existence
of telepathy, etc., now seems to be an extremely thin one. But even if
ESP is shown to be a genuine phenomenon, it is, in my own view, something
to do with the transmission of information from a source point to a receiver
and ought therefore to be quite easy to reproduce in a machine. After all,
machines can communicate by radio already, which is, effectively, ESP and
is a far better method of long-distance communication than that possessed
by any biological system.
The Personal Consciousness Objection
is, superficially, a rather potent argument which comes up in various guises.
Turing noticed it expressed particularly cogently in a report, in the British
Medical Journal in 1949, on the Lister Oration for that year, which
was entitled "The Mind of Mechanical Man." It was given by a distinguished
medical scientist, Professor G. Jefferson. A short quote from the Oration follows:

Not until a machine can write a sonnet or compose a concerto *because of thoughts and emotions felt*, and not by the chance fall of symbols, could we agree that machine equals brain-that is, not only write it but *know that it had written it*. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.
The italics, which are mine, highlight what I believe
to be the fundamental objection: the output of the machine is more or less
irrelevant, no matter how impressive it is. Even if it wrote a sonnet-and
a very good one-it would not mean much unless it had written it as the
result of "thoughts and emotions felt," and it would also have to "know
that it had written it." This could be a useful "final definition" of one
aspect of human thought-but how would you establish whether or not the
sonnet was written with "emotions"? Asking the computer would not help
for, as Professor Jefferson realized, there would be no guarantee that
it was not simply declaring that it had felt emotions. He is really
propounding the extreme solipsist position and should, therefore, apply
the same rules to humans. Extreme solipsism is logically irrefutable ("I
am the only real thing; all else is illusion") but it is so unhelpful a
view of the universe that most people choose to ignore it and decide that
when people say they are thinking or feeling they may as well believe them.
In other words, Professor Jefferson's objection could be over-ridden if
you became the computer and experienced its thoughts (if any)-only
then could you really know. His objection is worth discussing in
some depth because it is so commonly heard in one form or another, and
because it sets us up in part for Turing's resolution of the machine-thought
problem, which we will come to later.
The Unpredictability Objection
argues that computers are created by humans according to sets of rules
and operate according to carefully scripted programs which themselves are
sets of rules. So if you wanted to, you could work out exactly what a computer
was going to do at any particular time. It is, in principle, totally predictable.
If you have all the facts available you can predict a computer's
behavior because it follows rules, whereas there is no way in which you
could hope to do the same with a human
because he is not behaving according
to a set of immutable rules. Thus there is an essential difference
between computers and humans, so (the argument gets rather weak here) thinking,
because it is unpredictable and does not blindly follow rules, must be
an essentially human ability.
There are two comments: firstly,
computers are becoming so complex that it is doubtful their behavior could
be predicted even if everything was known about them-computer programmers
and engineers have found that one of the striking characteristics of present-day
systems is that they constantly spring surprises. The second point follows
naturally: humans are already
in that super-complex state and the
reason that we cannot predict what they do is not because they have
no ground rules but because (a) we don't know what the rules are, and (b)
even if we did know them they would still be too complicated to handle.
At best, the unpredictability argument is thin, but it is often raised.
People frequently remark that there is always "the element of surprise"
in a human. I have no doubt that that is just because any very complex
system is bound to be surprising. A variant of the argument is that humans
are capable of error whereas the "perfect" computer is not. That may well
be true, which suggests that machines are superior to humans, for there
seems to be little point in having any information-processing system, biological
or electronic, that makes errors in processing. It would be possible to
build a random element into computers to make them unpredictable from time
to time, but it would be a peculiarly pointless exercise.
The "See How Stupid They Are"
Objection will not need much introduction. At one level it is expressed
in jokes about computers that generate ridiculous bank statements or electricity
bills; at another and subtler level, it is a fair appraisal of the computer's
stupendous weaknesses in comparison with Man. "How could you possibly imagine
that such backward, limited things could ever reach the point
where they could be said to think?" The answer, as
we have already pointed out, is that they may be dumb now but they have
advanced at a pretty dramatic rate and show every sign of continuing to
do so. Their present limitations may be valid when arguing whether they
could be said to be capable of thinking now or in the very near
future, but they have no relevance to whether they would be capable of thinking
at some later date.
The "Ah But It Can't Do That"
Objection is an eternally regressing argument which, for a quarter of a
century, computer scientists have been listening to, partially refuting,
and then having to listen to all over again. It runs: "Oh yes, you can
obviously make a computer do so and so-you have just demonstrated that,
but of course you will never be able to make it do such and such." The
such and such may be anything you name: once it was to play a good game of
chess, have a storage capacity greater than the human memory, read human
handwriting or understand human speech. Now that these "Ah buts" have
(quite swiftly) been overcome, one is faced by a new range: beat the world
human chess champion, operate on parallel as opposed to serial processing,
perform medical diagnosis better than a doctor, translate satisfactorily
from one language to another, help solve its own software problems, etc.
When these challenges are met, no doubt it will have to design a complete
city from scratch, invent a game more interesting than chess, admire a
pretty girl/handsome man, work out the unified field theory, enjoy bacon
and eggs, and so on. I cannot think of anything more silly than developing
a computer which could enjoy bacon and eggs, but there is nothing to suggest
that, provided enough time and money was invested, one could not pull off
such a surrealistic venture. On the other hand, it might be most useful
to have computers design safe, optimally cheap buildings. Even more ambitious
(and perhaps comparable to the bacon and egg project but more worthwhile)
would be to set a system to tackle the problem of the relationship between
gravity and light, and my own guess is that before the conclusion of the
long-term future (before the start of the twenty-first century), computers
will be hard at work on these problems and will be having great success.
The "It Is Not Biological" Objection
may seem like another version of the theological objection-only living
things could have the capacity for thought, so nonbiological systems could
not possibly think. But there is a subtle edge that requires a bit more
explanation. It is a characteristic of most modern computers that they
are discrete state machines, which is to say that they are digital and
operate in a series of discrete steps-on/off. Now the biological central
nervous system may not be so obviously digital, though there is evidence
that the neurone, the basic unit of communication, acts in an on/off,
all or nothing way. But if it turned out that it were
not, and operated
on some more elaborate strategy, then it is conceivable that "thought"
might only be manifest in things which had switching systems of this more
elaborate kind. To put it another way: it might be possible to build digital
computers which were immensely intelligent, but no matter how intelligent
they became they would never be able to think. The argument cannot
be refuted at the moment, but even so there is no shred of evidence to
suppose that only non-digital systems can think. There may be other
facets of living things that make them unique from the point of view of
their capacity to generate thought, but none that we can identify, or even
guess at. This objection therefore is not a valid one at present, though
in the event of some new biological discovery, it may become so.
The Mathematical Objection is
one of the most intriguing of the ten objections, and is the one most frequently
encountered in discussions with academics. It is based on a fascinating
exercise in mathematical logic propounded by the Austrian logician Kurt Gödel.
To put it rather superficially, Gödel's theorem shows that within any sufficiently
powerful logical system (which could be a computer operating according
to clearly defined rules), statements can be formulated which can neither
be proved nor disproved
within the system. In his famous 1936 paper,
Alan Turing restructured Gödel's theorem so that it could apply specifically
to machines. This effectively states that no matter how powerful a computer
is, there are bound to be certain tasks that it cannot tackle on its own.
In other words, you could not build a computer which could solve every
problem no matter how well it was programmed; or, if you wanted to
carry the thing to the realms of fancy, no computer (or any other digital
system) could end up being God.
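Turing's 1936 form of this limitation, the halting problem, can be stated compactly. The sketch below is a standard modern reconstruction, not a quotation from the paper; ⟨M⟩ denotes the encoding of a machine M as a string.

```latex
% Suppose a machine H could decide halting for every machine and input:
H(\langle M\rangle, x) =
\begin{cases}
  \textsf{halts} & \text{if } M \text{ halts on input } x,\\
  \textsf{loops} & \text{otherwise.}
\end{cases}
% From H build a "diagonal" machine D:
D(\langle M\rangle):\quad
\text{if } H(\langle M\rangle, \langle M\rangle) = \textsf{halts}
\text{ then loop forever, else halt.}
```

Feeding D its own description, D(⟨D⟩) halts exactly when H says it loops, a contradiction; so no such H can exist, and some well-posed questions lie beyond any single machine. That is the precise content of the limitation discussed above.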
Gödel's theorem, and its later
refinements by Alonzo Church, Bertrand Russell and others, is interesting
to mathematicians, not so much because it assumes an implicit limitation
to machine intelligence, but because it indicates a limitation to mathematics
itself. But the theorem has been used, incorrectly, by critics of machine
intelligence to "prove" that computers could never reach the same intellectual
level as Man. The weakness of the position is that it is based on the assumption
that the human brain is not a formal logical system. But such evidence
as we have suggests very strongly that it is and will, therefore, be bound
by the same Gödel-limitations as are machines. There is also a tweak in
the tail. While the theorem admittedly states that no system on its
own can completely tackle its own problems-"understand itself"-it does
not imply that the areas of mystery could not be tackled by some
other system. No individual human brain could solve its own problems or
fully "know itself," but with the assistance of other brains these deficiencies
might be corrected. Equally, and significantly, problem areas associated
with complex computer systems could be solved totally and absolutely by
other computer systems, provided that they were clever enough.
The last of the ten arguments
against the concept of a thinking machine has become known as Lady Lovelace's
Objection.... In its modern form this comes up as, "A computer cannot do
anything that you have not programmed it to." The objection is so fundamental
and so widely accepted that it needs detailed discussion.
In the most absolute and literal sense, this statement
is perfectly correct and applies to any machine or computer that has been
made or that could be made. According to the rules of the universe that
we live in, nothing can take place without a prior cause; a computer will
not spring into action without something powering it and guiding it on
its way. In the case of the various tasks that a computer performs, the
"cause"-to stretch the use of the word rather-is the program or sets of
programs that control these tasks. Much the same applies to a brain: it,
too, must come equipped with sets of programs which cause it to run through
its repertoire of tasks. This might seem to support Lady Lovelace, at least
to the extent that machines "need" a human to set them up, but it would
also seem to invalidate the argument that this constitutes an essential
difference between computers and people. But is there not still a crucial
difference between brains and computers? No matter how sophisticated computers
are, must there not always have been a human being to write their programs?
Surely the same does not have to be said for humans?
To tackle this we need to remember
that all brains, human included, are equipped at birth with a comprehensive
collection of programs which are common to all members of a species and
which are known as instincts. These control respiration, gastric absorption,
cardiac activity, and, at a behavioral level, such reflexes as sucking,
eyeblink, grasping and so on. There may also be programs which "cause"
the young animal to explore its environment, exercise its muscles, play
and so on. Where do these come from? Well, they are acquired over an immensely
long-winded trial-and-error process through the course of evolution. We
might call them permanent software ("firmware" is the phrase used sometimes
by computer scientists) and they correspond to the suites of programs which
every computer has when it leaves the factory, and which are to do with
its basic running, maintenance, and so on.
In addition to this, all biological
computers come equipped with a bank of what might best be described as
raw programs. No one has the faintest idea whether they are neurological,
biochemical, electrical or what; all we know is that they must exist. They
start being laid down the moment the creature begins to interact with the
world around it. In the course of time they build up into a colossal suite
of software which ultimately enables us to talk, write, walk, read, enjoy
bacon and eggs, appreciate music, think, feel, write books, or come up
with mathematical ideas. These programs are useful only to the owner of
that particular brain, vanish with his death and are quite separate from the permanent software with which the brain comes equipped.
If this seems too trivial a description
of the magnificent field of human learning and achievement, it is only
because anything appears trivial when you reduce it to its bare components:
a fabulous sculpture to a quintillion highly similar electrons and protons,
a microprocessor to a million impurities buried in a wafer of sand, the
human brain to a collection of neurones, blood cells and chemical elements.
What is not trivial is the endlessly devious, indescribably profound way
in which these elements are structured to make up the whole. The real difference
between the brain and most existing computers is that in the former, data
acquisition and the initial writing and later modification of the program
is done by a mechanism within the brain itself, while in the latter, the
software is prepared outside and passed to the computer in its completed
state. But I did use the word "most." In recent years increasing emphasis
has been placed on the development of "adaptive" programs-software which
can be modified and revised on the basis of the program's interaction with
the environment. In simple terms these could be looked upon as "programs
which learn for themselves," and they will, in due course, become an important
feature of many powerful computer systems.
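A "program which learns for itself" can be illustrated in miniature. Everything in this sketch is invented for the purpose: the three actions, the reward rule and the environment are hypothetical, and the learning rule (strengthen whatever worked) is only the simplest possible instance of the adaptive software described above.

```python
# A minimal adaptive program: the agent's behaviour is revised by its
# own interaction with the "environment", not written in advance.
import random

random.seed(0)
weights = {"a": 1.0, "b": 1.0, "c": 1.0}   # the agent's modifiable "program"

def choose():
    """Pick an action with probability proportional to its weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for action, w in weights.items():
        r -= w
        if r <= 0:
            return action
    return action

def environment(action):
    """The world: only action "b" happens to be rewarded."""
    return 1.0 if action == "b" else 0.0

for _ in range(500):
    action = choose()
    reward = environment(action)
    weights[action] *= (1.0 + 0.1 * reward)   # strengthen whatever worked

print(max(weights, key=weights.get))   # the agent now prefers "b"
```

The agent begins with no preference among its actions and ends up favouring the rewarded one: the later "program" was written by experience, not by the programmer, which is exactly the distinction drawn above between the brain and most existing computers.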
At this point the sceptic still
has a few weapons in his armoury. The first is generally put in the form
of the statement: "Ah, but even when computers
can update their
own software and acquire new programs for themselves, they will still only
be doing this because of Man's ingenuity. Man may no longer actually write
the programs, but had he not invented the idea of the self-adaptive program
in the first place none of this could have happened." This is perfectly
true but has little to do with whether or not computers could think, or
perform any other intellectual exercise. It could place computers eternally
in our debt, and we may be able to enjoy a smug sense of pride at having
created them, but it offers no real restriction on their development.
The sceptic may also argue that
no matter how clever or how intelligent you make computers, they will never
be able to perform a creative task. Everything they do will inevitably
spring from something they have been taught, have experienced, or that is the
subject of some preexisting program. There are two points being made here.
One is that computers could never have an original or creative thought.
The other is that the seeds of everything they do, no matter how intelligent,
lie in their existing software. To take the second point first: again one
is forced to say that the same comment applies to humans. Unless the argument
is that some of Man's thoughts or ideas come from genuine inspiration-a
message from God, angels, or spirits of the departed-no one can dispute
that all aspects of our intelligence evolve from preexisting programs and
the background experiences of life. This evolution may be enormously complex
and its progress might be impossible to track, but any intellectual flowering
arises from the seeds of experience planted in the fertile substrate of the brain.
There still remains the point
about creativity, and it is one that is full of pitfalls. Before making
any assumptions about creativity being an exclusive property of
Man, the concept has to be defined. It is not enough to say "write a poem,"
"paint a picture" or "discuss philosophical ideas," because it is easy
enough to program computers to do all these things. The fact that their
poems, paintings and philosophical ramblings are pretty mediocre is beside
the point: it would be just as unfair to ask them to write, say, a sonnet
of Shakespearian calibre or a painting of da Vinci quality and fail them
for lack of creativity as it would be to give the same task to the man
in the street. Beware too of repeating the old saying, "Ah, but you have
to program them to paint, play chess and so on," for the same is unquestionably
true of people. Try handing a twelve-month-old baby a pot of paint or a
chessboard if you have any doubts about the need for some measure of learning.
Obviously a crisper definition
of creativity is required, and here is one that is almost universally acceptable:
If a person demonstrates a skill which has never been demonstrated before
and which was not specifically taught to him by someone else, or in the
intellectual domain provides an
entirely novel solution to a problem,
a solution which was not known to any other human being-then they can be
said to have done something original or had an original or creative thought.
There may be other forms of creativity of course, but this would undeniably
be an example of it in action. There is plenty of evidence that humans
are creative by this standard and the history of science is littered with
"original" ideas which humans have generated. Clearly, until a computer
also provides such evidence, Lady Lovelace's Objection still holds, at
least in one of its forms.
But alas for the sceptics. This
particular barrier has been overthrown by computers on a number of occasions
in the past few years. A well-publicized one was the solution, by a computer,
of the venerable "four colour problem." This has some mathematical importance,
and can best be expressed by thinking of a two-dimensional map featuring
a large number of territories, say the counties of England or the states
of the USA. Supposing you want to give each territory a colour, what is
the minimum number of colours you need to employ to ensure that no two
territories of the same colour adjoin each other?
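For any particular map the question can be put to a machine directly. The sketch below gives the flavour of that brute-force style of search, though not the 1977 proof itself, which analysed a large catalogue of special configurations; the "map" here is an invented wheel of six territories, not real geography.

```python
# Backtracking search for a four-colouring of one particular map,
# given as an adjacency list of which territories share a border.
BORDERS = {
    "A": ["B", "C", "D", "E", "F"],   # a central territory touching all others
    "B": ["A", "C", "F"],
    "C": ["A", "B", "D"],
    "D": ["A", "C", "E"],
    "E": ["A", "D", "F"],
    "F": ["A", "B", "E"],
}
COLOURS = ["red", "green", "blue", "yellow"]

def colour_map(assigned=None, todo=None):
    """Try each colour for the next territory; backtrack on dead ends."""
    if assigned is None:
        assigned, todo = {}, sorted(BORDERS)
    if not todo:
        return assigned                       # every territory coloured
    territory, rest = todo[0], todo[1:]
    for colour in COLOURS:
        if all(assigned.get(nb) != colour for nb in BORDERS[territory]):
            result = colour_map({**assigned, territory: colour}, rest)
            if result:
                return result
    return None                               # no legal colour: backtrack

solution = colour_map()
print(solution)   # four colours suffice for this map
```

This map genuinely needs all four colours (the outer territories form an odd ring around the central one), so the search also illustrates why no one can "always get away with three".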
After fiddling around with maps
and crayons, you will find that the number seems to come out at four, and
no one has ever been able to find a configuration where five colours are
required, or where you can always get away with three. Empirically, therefore,
four is the answer-hence the name of the problem. But if you attempt to
demonstrate this mathematically and prove that four colours will do for
any conceivable map, you will get nowhere. For decades mathematicians have
wrestled with this elusive problem, and from time to time have come up
with a "proof" which in the end turns out to be incomplete or fallacious.
But the mathematical world was rocked when in 1977 the problem was handed
over to a computer, which attacked it with a stupendous frontal assault,
sifting through huge combinations of possibilities and eventually demonstrating,
to every mathematician's satisfaction, that four colours would do the trick.
Actually, although this is spectacular testimony to the computer's creative
powers, it is not really the most cogent example, for its technique was
block-busting rather than heuristic (problem solving by testing hypotheses).
It was like
solving a chess problem by working out every possible combination
of moves, rather than by concentrating on likely areas and experimenting
with them. A better, and much earlier, demonstration of computer originality
came from a program which was set to generate some totally new proofs in
Euclidean geometry. The computer produced a completely novel proof of the
well-known theorem which shows that the base angles of an isosceles triangle
are equal, by flipping the triangle through 180 degrees and declaring it
congruent to its own mirror image. Quite apart from the fact that it had not before
been known to Man, it showed such originality that one famous mathematician
remarked, "If any of my students had done that, I would have marked him
down as a budding genius."
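The flipped-triangle argument is short enough to write out in full; a sketch follows (the construction is usually credited to Pappus of Alexandria):

```latex
\begin{align*}
&\text{Given } \triangle ABC \text{ with } AB = AC.\\
&\text{Compare } \triangle ABC \text{ with its mirror image } \triangle ACB:\\
&\quad AB = AC,\qquad AC = AB,\qquad \angle BAC = \angle CAB.\\
&\text{By side-angle-side, } \triangle ABC \cong \triangle ACB,\\
&\text{so the corresponding angles agree: } \angle ABC = \angle ACB.
\end{align*}
```

The originality lies in treating one triangle as two: no auxiliary lines are drawn, which is precisely what textbook proofs of the theorem had always required.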
And so Lady Lovelace's long-lasting
objection can be overruled. We have shown that computers can be intelligent,
and that they can even be creative, but we have not yet proved that they
can, or ever could, think.
Now, what do we mean by the word "think"?
TOWARDS THE ULTRA-INTELLIGENT MACHINE
The most common objections raised to the notion of thinking
machines are based on misunderstandings of fairly simple issues, or on
semantic confusions of one kind or another. We are still left with the
problem of defining the verb "to think," and in this section we will attempt
to deal with this, or at least to discuss one particular and very compelling
way of dealing with it. From this position we shall find ourselves drifting
inevitably into a consideration of the problem of creating thinking machines,
and in particular to the eerie concept of the Ultra-Intelligent Machine.
Most people believe that they know what they mean when they talk about
"thinking" and have no difficulty identifying it when it is going on in
their own heads. We are prepared to believe other human beings think because
we have experience of it ourselves and accept that it is a common property
of the human race. But we cannot make the same assumption about machines,
and would be sceptical if one of them told us, no matter how persuasively,
that it too was thinking. But sooner or later a machine will make just
such a declaration and the question then will be, how do we decide whether
to believe it or not?
When Turing tackled the machine-thought issue, he proposed
a characteristically brilliant solution which, while not entirely free
from flaws, is nevertheless the best that has yet been put forward. The
key to it all, he pointed out, is to ask what the signs and signals are
that humans give out, from which we infer that they are thinking.
It is clearly a matter of what kind of conversation we can have with
them, and has nothing to do with what kind of face they have and what
kind of clothes they wear. Unfortunately physical appearances automatically
set up prejudices in our minds, and if we were having a spirited conversation
with a microprocessor we might be very sceptical about its capacity for
thought, simply because it did not look like any thinking thing we had
seen in the past. But we would
be interested in what it had to say;
and thus Turing invented his experiment or test.
Put a human, the judge or tester, in a room where there
are two computer terminals, one connected to a computer, the other to a
person. The judge, of course, does not know which terminal is connected
to which, but can type into either terminal and receive typed messages
back on them. Now the judge's job is to decide, by carrying out conversations
with the entities on the end of the respective terminals, which is which.
If the computer is very stupid, it will immediately be revealed as such and
the human will have no difficulty identifying it. If it is bright, he may
find that he can carry on quite a good conversation with it, though he
may ultimately spot that it must be the computer. If it is exceptionally
bright and has a wide range of knowledge, he may find it impossible to
say whether it is the computer he is talking to or the person. In this
case, Turing argues, the computer will have passed the test and could for
all practical purposes be said to be a thinking machine.
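The protocol Turing describes can be sketched as a loop: shuffle the two terminals so the judge cannot know which is which, collect typed answers from each, and ask the judge to pick out the machine. Everything named below is hypothetical scaffolding; a real test is a live, open-ended conversation, not canned replies.

```python
import random

# A minimal sketch of Turing's imitation game. The reply functions
# and judge here are stand-ins, not a real implementation of the test.
def run_test(judge, human_reply, machine_reply, questions):
    """Return True if the judge correctly picks out the machine."""
    # The judge does not know which terminal is which.
    terminals = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(terminals)
    transcripts = [[reply(q) for q in questions] for _, reply in terminals]
    guess = judge(questions, transcripts)   # 0 or 1: suspected machine
    truth = [name for name, _ in terminals].index("machine")
    return guess == truth

# A deliberately stupid machine is unmasked at once: it cannot
# answer anything off its one topic.
questions = ["Do you watch much television?", "e4 or d4?"]
human = lambda q: "Now and then." if "television" in q else "e4, usually."
machine = lambda q: "PAWN TO KING FOUR."    # chess and nothing else
judge = lambda qs, ts: 0 if "PAWN" in ts[0][0] else 1
print(run_test(judge, human, machine, questions))  # the judge wins: True
```

The machine "passes" only when, over many such rounds, the judge's guesses are no better than chance, which is why breadth of conversation matters so much in what follows.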
The argument has a simple but compelling force: if the
intellectual exchange we achieve with a machine is indistinguishable from
that we have with a being we know to be thinking, then we are, to
all intents and purposes, communicating with another thinking being. This,
by the way, does not imply that the personal experience, state of consciousness,
level of awareness or whatever, of the entity is going to be the same as
that experienced by a human when he or she thinks, so the test is not for
these particular qualities. They are not, in any case, the parameters which
concern the observer.
At first the Turing Test may seem a surprising way of
looking at the problem, but it is an extremely sensible way of approaching
it. The question now arises: is any computer at present in existence capable
of passing the test? And if not, how long is it likely to be before one
comes along? From time to time one laboratory or another claims that a
computer has had at least a pretty good stab at it. Scientists using the
big computer conferencing systems (each scientist has a terminal in his
office and is connected to his colleagues via the computer, which acts
as host and general message-sorter) often find it difficult to be sure,
for a brief period of time at least, whether they are talking to the computer
or to one of their colleagues. On one celebrated occasion at MIT, two scientists
had been chatting via the network when one of them left the scene without
telling the other, who carried on a cheery conversation with the computer
under the assumption that he was talking to his friend. I have had the
same spooky experience when chatting with computers which I have programmed
myself, and often find their answers curiously perceptive and unpredictable.
To give another interesting example: in the remarkable
match played in Toronto in August 1978 between the International Chess
Master David Levy, and the then computer chess champion of the world, Northwestern
University's "Chess 4.7," the computer made a number of moves of an uncannily
"human" nature. The effect was so powerful that Levy subsequently told
me that he found it difficult to believe that he was not facing an outstanding
human opponent. Few chess buffs who looked at the move-by-move transcripts
of the match were, without prior knowledge, able to tell which had been
made by the computer and which by the flesh-and-blood chess master. David
Levy himself suggested that Chess 4.7 had effectively passed the Turing Test.
It would be nice to believe that I had been present on such an historic
occasion, but this did not constitute a proper "pass." In the test as Turing
formulated it, the judge is allowed to converse with either of his two
mystery entities on any topic that he chooses, and he may use any conversational
trick he wants. Furthermore he can continue the inquisition for as long
as he wants, always seeking some clue that will force the computer to reveal
itself. Both the computer and the human can lie if they want to in their
attempts to fool the tester, so the answers to questions like "Are you
the computer?" or "Do you watch much television?" will not give much away.
Obviously any computer with a chance in hell of passing the test will have
to have a pretty substantial bank of software at its disposal, and not
just be extremely bright in one area. Chess 4.7 for example might look
as though it was thinking if it was questioned about chess, or, better
still, invited to play the game, but switch the area of discourse
to human anatomy, politics or good restaurants and it would be shown up
as a dunderhead.
As things stand at present, computers have quite a way
to go before they jump the hurdle so cleverly laid out for them by Turing.
But this should not be taken as providing unmitigated comfort for those
who resist the notion of advanced machine intelligence. It should now be
clear that the difference, in intellectual terms, between a human being
and a computer is one of degree and not of kind.
Turing himself says in his Mind paper that he
feels computers will have passed the test before the turn of the century,
and there is little doubt that he would dearly have liked to live long
enough to be around on the splendiferous occasion when "machine thinking"