Why the Deep Blues?



By Jaron Lanier


The game of chess possesses a rare combination of qualities, from the point of

view of a human mind. It is easy to understand the rules, yet hard to play

well; and most importantly, the quest to master chess seems to be eternal.

Human players surmount ever higher plateaus of skill, and yet no one has been

able to make the claim that chess skill can be pushed no further.


What makes chess fascinating to us is precisely that we're bad at it. From a

contemporary computer scientist's point of view, human brains routinely do

things that would seem much harder to accomplish, such as understanding

sentences, and yet people do not have sentence-comprehension tournaments --

because we find that task too easy, too ordinary.


Computers fascinate and frustrate us in a similar way. Children learn to

program them, and yet it is hard for the most accomplished professional to

program them well. Despite the evident potential of computers, we know for

certain that we have not thought of the best programs to write. Simple systems

that can be played out beyond our horizon are constant reminders of the

limitations of the human intellect. Chess reminds us that though we are

compelled to try, we can never fully master our fates.


Computers and chess share a common ancestry. Both originated as tools of

war. Chess began as a battle simulation, a mental martial art. (The design of

chess reverberates even further back in the past than that, all the way back to

our sad animal ancestry of pecking orders and competing clans.) Modern

computers, likewise, were developed to aim missiles and break secret military

codes. Chess and computers are both direct descendants of the violence that

drives evolution in the natural world, however sanitized and abstracted it may

be in the context of civilization. The drive for competition is palpable in

both computers and chess, and when they are brought together, adrenaline flows.


But all of this is not enough to explain the outpouring of public angst on the

occasion of Deep Blue's recent victory. To be fair, the event was amplified

by the mass media; yet it is clear that the mass response was

genuine and deeply felt. There was much talk about whether human beings were

still special, whether computers were becoming our equal.


This way of framing the event is unfortunate. What happened was primarily

that a team of computer scientists built a very fast machine and figured out a

better way to represent the problem of how to choose the next move in a chess

game. This accomplishment was performed by people, not machines, and its

character was intellectual just as much as it was technical. The Deep Blue

team's central victory was one of clarity and elegance in thought.


In order for a computer to beat the human chess champion, two kinds of

progress had to converge: an increase in raw hardware power, and an improvement

in the sophistication and clarity with which the decisions of chess play are

represented in software. This dual path made it hard to predict the year, but

not the eventuality, that a computer would triumph. If the Deep Blue team had

not been as good at the software problem, a computer would still have been able

to become the world champion at some later date due to sheer brawn. So the

suspense was not whether a computer would ever beat the best human at chess,

but rather the degree to which elegance of thought (among the programmers)

would play a role in the victory. Deep Blue won earlier than it might have,

scoring a point for elegance.
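The software half of that convergence — representing the choice of the next
move — is classically built on game-tree search. As a purely illustrative
sketch (this is not Deep Blue's actual code, which paired specialized search
hardware with an elaborate, hand-tuned evaluation function), a minimal
minimax search with alpha-beta pruning over an abstract game tree might look
like this:

```python
# Minimal minimax with alpha-beta pruning over an abstract game tree.
# Illustrative only: Deep Blue combined a search of this general family
# with custom hardware and a far richer position evaluation.

def minimax(node, depth, alpha, beta, maximizing, children, evaluate):
    """Return the best achievable score from `node`.

    `children(node)` yields successor positions; `evaluate(node)` scores a
    position from the maximizing player's point of view.
    """
    succ = children(node)
    if depth == 0 or not succ:
        return evaluate(node)
    if maximizing:
        best = float("-inf")
        for child in succ:
            best = max(best, minimax(child, depth - 1, alpha, beta, False,
                                     children, evaluate))
            alpha = max(alpha, best)
            if beta <= alpha:   # opponent will avoid this line: prune it
                break
        return best
    else:
        best = float("inf")
        for child in succ:
            best = min(best, minimax(child, depth - 1, alpha, beta, True,
                                     children, evaluate))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

# Toy two-ply game: positions are nested lists, leaves are their own scores.
tree = [[3, 5], [2, 9]]
children = lambda n: n if isinstance(n, list) else []
evaluate = lambda n: n
print(minimax(tree, 2, float("-inf"), float("inf"), True, children, evaluate))  # → 3
```

The pruning step is what "elegance in representation" buys: whole subtrees
(here, the 9 leaf) are never examined, so the same hardware searches deeper.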


The public reaction to the defeat of Kasparov leaves the computer science

community with an important question, however. Is it useful to portray

computers themselves as intelligent, or human-like in any way? Does this

presentation serve to clarify or obscure the role of computers in our lives?


I believe the attribution of intelligence to machines obscures more than it

illuminates. When told that a computer is intelligent, we are prone to

changing ourselves in order to make the computer appear to work better, instead

of demanding that the computer be changed to become more useful. Treating

computers as intelligent, autonomous entities ends up standing the process of

engineering on its head; we can't afford to respect our own designs so much.

The same algorithms that are found in "intelligent" systems can just as well be

presented to users as subservient tools, and the latter choice is much more

likely to elicit the feedback needed to improve designs and performance.


I'm in a minority in holding this opinion: the mainstream in computer science

is comfortable with the notion that computers are becoming "intelligent." The

origin of this idea can be found in the celebrated thought experiment known as

the Turing Test, after Alan Turing, one of the inventors of the modern

computer. According to Turing, if a computer and a human are concealed behind

screens and a human judge who interacts with both is fooled into thinking the

machine is a human, then the computer can be said to possess an intellect

equivalent to a human's. It might be made of chips instead of biological

cells, but if it

can act in an equivalent way to a person, does its body really matter? Many

thoughtful computer scientists who have been influenced by Turing have come to

believe that computers are on their way to becoming an intelligent life form.

If the Turing Test is right, then it would seem unfair, "racist" in a way, to

think of a sufficiently powerful computer as a mere tool for humans.


In practice the Turing Test is dangerously flawed, because it is impossible to

distinguish whether the computer is getting more human-like or the human is

getting more computer-like. People are vulnerable, unfortunately, to making

themselves stupid in order to make computers appear to be smarter. In real

world applications where versions of the Turing Test are mundanely played out

in miniature every day, it is far too likely that the human will become

"stupider" -- or rather "simpler"; that a human will adapt to fit the

expectations of the software model of human behavior embedded in a computer.



The process of human accommodation can be seen in many quarters as we entrust

information systems with more and more decisions of consequence. It is found in

the degree to which personal behavior has been modified to please the

reductionist algorithms that determine credit ratings. (Many people change

their financial behavior in subtle ways in order to appear favorably to these

programs -- for example by borrowing on occasion even when it is not needed.)

More serious are so-called "intelligent agent" programs that help users find

content on the Internet. To use these programs, one's taste must conform to the

editorial distinctions that can be expressed by the agent's internal software

-- and those representations must be kept hidden in order to maintain the

illusion of the agent's autonomy. In order to play along with the fantasy of

computer intelligence, users may never realize the subtleties of their own

judgments that they never had the chance to nurture and express.


Whenever a computer is imagined to be intelligent, what is really happening is

that humans have abandoned aspects of the subject at hand in order to remove

from consideration whatever the computer is blind to. This happened to chess

itself in the case of Deep Blue. There is an aspect of chess that is a little

like poker; the staring down of an opponent, the projection of confidence.

Even though it would be relatively easy to write a simple program to play poker

"as well as" the best human player, poker is really a game about the subtleties

of non-verbal communication between people, such as bluffing and hiding

emotion. In the wake of Deep Blue's victory, the poker side of chess has been

largely overshadowed by the abstract, algorithmic aspect -- while ironically it

was in the poker side of the game that Kasparov failed critically.


Kasparov ultimately seems to have allowed himself to be spooked by the

computer, even after he demonstrated an ability to defeat it on occasion. He

might very well have won if he were playing a human player with exactly the

same skill as Deep Blue (at least as the computer exists this year). Instead,

Kasparov detected a sinister stone face where in fact there was absolutely

nothing. While the contest was not intended as a Turing Test, it ended up as

one, and Kasparov was fooled.
