Mindless Thought Experiments
(A Critique of Machine Intelligence)
by Jaron Lanier
Thought Experiment #1: Your Brain in Silicon
Since there isn't a computer that seems conscious at this time, the idea
of machine consciousness is supported by thought experiments. Here's one
old chestnut: "What if you replaced your neurons one by one with neuron-sized
and shaped substitutes made of silicon chips that perfectly mimicked the
chemical and electric functions of the originals? If you just replaced
one single neuron, surely you'd feel the same. As you proceed, as more
and more neurons are replaced, you'd stay conscious. Why wouldn't you still
be conscious at the end of the process, when you'd reside in a brain-shaped
glob of silicon? And why couldn't the resulting replacement brain have
been manufactured by some other means?"
OK, let's take this thought experiment even further. Instead of physical
neuron replacements, what if you used software? Every time you plucked
a neuron out of your brain you'd put in a radio transceiver that talked
to a nearby computer running neuron simulations. When enough neurons
had been transferred to software they could start talking to each other
directly in the computer and you could start throwing away the radio links.
When you were done, your brain would reside entirely in the computer.
If you think consciousness doesn't travel into software, you've got a problem.
What is so special about physical neuron replacement parts? After all,
the computer is made of matter too, and it's performing the same computation.
If you think software lacks some needed essence, you might as well believe
that authentic, original, brand name human neurons from your very own head
are the only source of that essence. In that case, you've made up your
mind: You don't believe in AI. But let's assume that software is a legitimate
medium for consciousness and move on.
So now your consciousness exists as a series of numbers in a computer; that
is all a computer program is, after all. Let's go a little further with
this. Let's suppose you have a marvelous new sensor that can read the positions
of every raindrop in a storm. Gather all those raindrop positions as a
list of numbers and pretend those numbers are a computer program. Now start
searching through all the possible computers that could exist up to a certain
very large size until you find one that treats the raindrop positions as
a program that is exactly equivalent to your brain. Yes, it can be done:
The list of possible computers of any particular size is large but finite,
and so is your brain, according to the earlier steps in the thought experiment.
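The finiteness claim can be made concrete. For any fixed machine model, the number of distinct machines of a given size is just a count of possible transition tables. Here is a minimal sketch; the Turing-machine model and the specific numbers are my illustration, not part of the original argument:

```python
def count_turing_machines(n_states, n_symbols):
    """Count distinct Turing machines with n_states working states and
    n_symbols tape symbols. Each of the n_states * n_symbols transition
    entries picks a symbol to write, a head move (left or right), and a
    next state (one of the n_states, or halt)."""
    choices_per_entry = n_symbols * 2 * (n_states + 1)
    return choices_per_entry ** (n_states * n_symbols)

# Even tiny machines come in bulk, but the total is always finite.
print(count_turing_machines(2, 2))  # 20736
```

The count explodes combinatorially with size, but it never becomes infinite, which is all the thought experiment needs.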
OK, so is the rainstorm conscious? Is it conscious as being specifically
you, since it implements you? Or are you going to bring up an essence argument
again? You say the rainstorm isn't really doing computation- it's just
sitting there as a passive program- so it doesn't count? Fine, then we'll
measure a larger rainstorm and search for a new computer that treats a larger
collection of raindrops as implementing BOTH the computer we found before
AS WELL AS the brain-as-raindrops that it runs. Now
the raindrops are doing the computation. Maybe you're still not happy with
this because it seems the raindrops are only equivalent to a computer that
is never turned on.
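The move being made here is a version of the "trivial implementation" argument familiar from Putnam and Searle: given any collection of distinct physical states, one can always construct, after the fact, an interpretation map under which those states implement any chosen computation, because nothing constrains the map. A minimal sketch, with invented raindrop and brain-state data:

```python
# A toy "brain history": four successive states of a finite machine.
brain_trace = ["thirsty", "reach", "drink", "satisfied"]

# Hypothetical sensor readings: positions of four raindrops.
# All that matters for the argument is that they are distinct.
raindrop_trace = [(0.1, 9.8), (0.2, 7.1), (0.3, 4.4), (0.4, 1.2)]

def build_interpretation(physical, computational):
    """Pair each physical state with a computational state. Any one-to-one
    pairing works, which is the point: all the structure lives in the
    mapping, none of it in the raindrops."""
    assert len(physical) == len(computational)
    return dict(zip(physical, computational))

interpretation = build_interpretation(raindrop_trace, brain_trace)

# Under this mapping, the rainstorm's history "computes" the brain's history.
decoded = [interpretation[p] for p in raindrop_trace]
print(decoded)  # ['thirsty', 'reach', 'drink', 'satisfied']
```

Because such a map can always be constructed after the fact, the raindrops do no work of their own; that asymmetry is what the later steps of the thought experiment try, and fail, to repair.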
We can go further. The thought experiment supply store can ship us an even
better sensor that can measure the motions, not merely the instantaneous positions,
of all the raindrops in a storm over a period of time. Now we'll look for
a computer that treats the numerical readings of those motions as an implementation
of your brain changing over time. Once we've found it, we can say that
the raindrops are doing the same work of computation as your brain for at
least a specified amount of time. The rainstorm computer has been turned
on. The raindrops won't cohere forever, but no computer lasts forever.
Every computer is gradually straying into entropy, just like our thunderstorm.
During a few minutes, a rainstorm might implement millions of minds; a
whole civilization might rise and fall before the water hits the dirt.
And further still: You might object that the raindrops are not influencing
each other, so they are still passive, as far as computing your brain is
concerned. Let's switch instead, then, to a large swarm of asteroids hurtling
through space. They all exert gravitational pull on each other. Now we'll
use a sensor for asteroid swarm internal motion and use it to get data that
will be matched to an appropriate computer to implement your brain. Now
you have a physical system whose internal interactions perform the computation
of your mind.
But we're not done. You should realize by now that your brain is simultaneously
implemented everywhere. It's in a thunderstorm, in the birth rate statistics,
in the dimples of gummy bears.
Enough! I hope the reader can see that my game can be played ad infinitum.
I can always make up a new kind of sensor from the supply store that will
give me data from some part of the physical universe that is related to
itself in whatever way a given AI proponent claims your neurons are related
to each other.
AI proponents usually seize on some specific stage in my reductio ad absurdum
to locate the point where I've gone too far. But the chosen stage varies
widely from proponent to proponent. Some concoct finicky rules for what
matter has to do to be conscious: it must be the minimum physical system isomorphic
to a conscious algorithm, for instance. The problem with such rules is
that they have to race ahead of my absurdifying thought experiments, so
they become stringent to the point that they no longer allow the brain itself
to be conscious. The brain is almost certainly not the minimum physical
system isomorphic to its thought processes, for instance.
A few DO take the bait and choose to believe there are a myriad of consciousnesses
everywhere. This has got to be the least elegant position ever taken on
any subject in the history of science. It would mean that there is a vast
superset of consciousnesses sort of like you, for instance the one that
includes both your brain plus the plate of pasta you're eating.
Some others object that an asteroid swarm doesn't DO anything, while a mind
acts in the world in a way that we can understand. I would respond that
to the right alien, it might appear that people do nothing, and asteroid
swarms are acting consciously. Even on Earth we can see enough variation
in organisms to doubt the universality of the human perspective. How easy
would it be for an intelligent bacterium to notice people as integral entities?
We might appear more as slow storms moving into the bacterial environment.
If we are relying solely on the human perspective to validate machine consciousness,
we're really only putting human-ness on an even higher pedestal than it
occupied at the start of our thought experiment.
The variation among responses from AI proponents is what should be taken
as the meaningful product of my flight of fancy. I don't claim to know
for certain where consciousness is or isn't, but I hope I've at least shown
that there is a real problem.
Thought Experiment #2: The Turing Test
Sometimes the idea of machine intelligence is framed in moral terms: Would
you deny equal rights to a machine that seemed conscious? This question
will serve to introduce the mother of all AI thought experiments, the Turing
Test. Before I go on, a note on terminology: In the following discussion,
I'll let the terms "smart" and "conscious" blur together,
even though I profoundly disagree that they are interchangeable. This is
the claim of machine intelligence, however: that consciousness "emerges"
from intelligence. To constantly point out my objection would make the
tale too tedious to tell. That is a danger in thought experiments: You
might find yourself adopting some of the preliminary thoughts while you're
distracted by the rest of the experiment.
At any rate, Alan Turing proposed a test in which a computer and a person
are placed in isolation booths and are only allowed to communicate via media
that conceal their identities, such as typed emails. A human subject is
then asked to determine which isolation booth holds a fellow human, and
which holds a machine. Turing's interpretation was that if the test subject
cannot tell the human and machine apart, then it would be improper to impose
a distinction between them when the true identities are revealed. It would
be time to give the computer "equal rights".
I have long proposed that Turing misinterpreted his thought experiment.
If a person cannot tell which is machine and which is human, it does not
necessarily mean that the computer has become more human-like. The other
possibility is that the human has become more computer-like. This is not
just a hypothetical point of argument, but a serious concern in software engineering.
Part 3: Pragmatic opposition to machine intelligence
When a piece of software is deemed autonomous to some degree, the only test
of its status is whether users believe it. AI developers would certainly
agree that humans are more mentally agile than any existing software today,
so today it's more likely than not that a person is changing in order to
make the software seem smart. Ironically, the harder a problem area is,
the easier it can be for humans to believe that a computer is smart at it.
An AI program that attempts to make decisions about something we understand
easily, like basic home finance, is booted out the door immediately, because
it is perceived as ridiculous, or even dangerous. Microsoft's "Bob"
program was an example of the ridiculous. But an AI program that teaches
children is acceptable because we don't know much about how children learn,
or how teachers teach. Furthermore, children will adapt to the program,
making it seem successful. Such programs can already be found in many homes
and schools. The less we understand a problem, the more ready we are to
believe a computer is smart at it.
There is no functional gain in making a program "intelligent".
Exactly the same capabilities as are found in an "intelligent"
or "autonomous" program (such as the ability to recognize a face)
could just as well be inclusively packaged within a "non-autonomous"
user interface. The only real difference between the two approaches is
that if users are told a computer is autonomous, then they are more likely
to change themselves to adapt to the computer.
This means that software packaged as being "non-intelligent" is
more likely to improve, because the designers will receive better critical
feedback from users. The idea of intelligence removes some of the "evolutionary
pressure" from software, by subtly indicating to users it is they,
rather than the software, that should be changing.
As it happens, machine decision making is already running our household
finances to a scary degree, but it's doing so with a Wizard of Oz-like remote
authority that keeps us from questioning it. I'm referring to the machines
that calculate our credit ratings. Most of us have decided to change our
habits in order to appeal to these machines. We have simplified ourselves
in order to be comprehensible to simplistic databases, making them look
smart and authoritative. Our demonstrated willingness to accommodate machines
in this way is ample reason to adopt a standing bias against the idea of
machine intelligence.
Inserting a judgment-making machine into a system allows individual humans
to avoid responsibility. If a trustworthy, gainfully employed person is
denied a loan, it's because of the algorithm, not because of another specific
person. The loss of personal responsibility can be seen most clearly in
the military's continued fascination with intelligent machines. AI has
been one of the most funded, and least bountiful, areas of scientific inquiry
in the second half of the twentieth century. It keeps on failing and bouncing
back with a different name, only to be over-funded once again. The most
recent marketing moniker was "Intelligent Agents". Before that
were "Expert Systems". The lemming-like funding charge is always
led by the defense establishment. AI is perfect research for the military
to fund. It lets strategists imagine less gruesome warfare and avoid personal
responsibility at the same time.
AI proponents object that a Turing Test-passing computer would be spectacularly,
obviously intelligent and conscious, and that my arguments only apply to
present day, crude computers. The argument I'm presenting relates to the
way computers change, however. The AI fantasy causes people to change more
than computers; therefore it impedes the progress of computers. If there
IS a potential for conscious computers, I wouldn't be surprised if the idea
of AI is what turns out to prevent them from appearing.
AI boosters believe that computers are getting better so quickly that we
will inevitably see qualitative changes in them, including consciousness,
before we know it. I'm concerned by the attitude implied in this position;
that machines are essentially improving on their own. This is a "trickle-down"
version of the retreat from responsibility implied by AI. I
think we in the computer science community need to take more responsibility
than that. Even though we're used to seeing spectacular progress in the
hardware capabilities of computers, software improves much more slowly,
and sometimes not at all. I saw a novice user the other day complain that
she missed her old text-only computer because it felt faster than her new
Pentium machine at word processing. Software awkwardness will always be
able to outpace gains in hardware speed and capacity, however spectacular
they may be. Once again, emphasizing human responsibility instead of machine
capability is much more likely to create better machines.
Even strong AI enthusiasts worry that humans might not agree on whether
the Turing Test is passed by a future machine. Some of them bring up the
moral "equal rights" argument for the machine's benefit. After
the thought experiments fail to turn in definitive results, the machine
is favored anyway, and its rights are defended.
This is where AI crosses a boundary and turns into a religion. A new form
of mysterious essence is being proposed for the benefit of machines. When
I say religion, I mean it. The culture of machine consciousness enthusiasts
often includes the expressed hope that human death will be avoidable by
actually enacting the first thought experiment above, of transferring the
human brain into a machine. Hans Moravec (<-check spelling<) is one
researcher who explicitly hopes for this eventuality. If we can become
machines we don't have to die, but only if we believe in machine consciousness.
I don't think it's productive to argue about religion in the same way we
argue about philosophy or science, but it is important to understand when
religion is what we are talking about.
I will not argue religion here, but I will restate the heart of my objection
to the idea of machine intelligence. The attraction, and the danger, of
the idea is that it lets us avoid admitting how little we understand certain
hard problems. By creating an umbrella category for "everything brains
do", it's possible to feel we are making progress on problems we don't
even know how to frame yet.
Even though the question of machine consciousness is both undecidable and
lacking in consequence until some hypothesized future time when an artificial
intelligence appears, attitudes towards the question today nonetheless have
a tangible effect. We are vulnerable to making ourselves stupid in order
to make possibly smart machines seem smart.
Artificial Intelligence enthusiasts like to characterize their opponents
as inventing a problem of consciousness where there needn't be one in order
to preserve a special place for people in the universe. They often invoke
the shameful history of hostile receptions to Galileo and Darwin in order
to dramatize their plight as shunned visionaries. In their view, AI is
resisted only because it threatens humanity's desire to be special in the
same way the ideas of these hallowed scientists once did. This "spin"
on opponents was first invented, with heroic immodesty, by Freud. While
Freud was undeniably an incisive, original thinker, his ideas have not held
up as well as Darwin's or Galileo's. In retrospect he doesn't seem to have
been a particularly objective scientist, if he was a scientist at all.
It's hard not to wonder if his self-inflation contributed to his failings.
Machine consciousness believers should take Freud's case as a cautionary
tale. Believing in Freud profoundly changed generations of doctors, educators,
artists, and parents. Similarly, belief in the possibility of AI is beginning
to change present day practices both in areas I have touched on- software
engineering, education, and military planning- and in many other fields,
including aspects of biology, economics, and social policy. The idea of
AI is already changing the world, and it is important for everyone who is
influenced by it to realize that its foundations are every bit as subjective
and elusive as those of non-believers.