1000 Words on Why I Don't Believe in Machine Consciousness
It is collective self-flattery for members of the computer science community
to argue that computers can be conscious. I will take the contrarian position
and argue that they cannot.
Arguments about machine intelligence hinge on questions of epistemology:
our ways of knowing what we know. The first and most basic argument of
this kind is known as the Turing Test. Alan Turing proposed that if a computer
was programmed in such a way that it could fool you, a human observer, into
believing that it was conscious, then it would be sentimental foolishness
to suggest that it wasn't conscious. It would be like claiming the Earth
was at the center of the universe: a desperate attempt to hold on to a vain
sense of human centrality.
I claim that there are different ways of knowing things. Consciousness
is the thing we share that we don't share objectively. We only experience
it subjectively, but that does not mean it does not exist.
How should we then approach the problem of deciding whether machines might
also experience consciousness? I believe the Turing test fails to help
us decide this question. In Turing's original setup, it is impossible to
tell whether the computer has become more human-like, or whether the human
has become more computer-like. All we are able to measure is their similarity.
This ambiguity makes artificial intelligence an idea that is not only groundless,
but damaging. If you observe humans using computer programs that are designated
to be "smart", you will see them make themselves stupid in order
to make the programs work. This can be observed in connection with Apple's
Newton or the program "Bob" from Microsoft.
What starts as an epistemological argument quickly turns into a practical
design argument. In the Turing test, we cannot tell whether people are
making themselves stupid in order to make computers seem to be smart. Therefore
the idea of machine intelligence makes it harder to design good machines.
When users treat a computer program as a dumb tool, they are more likely
to criticize a program that is not easy to use. When users grant autonomy
to a program, they are more likely to defer to it, and blame themselves.
This interrupts the feedback process that leads to improvements in design.
Any capability that might exist in a program that is designated "intelligent"
or "conscious" can also be presented in the context of a dumb
tool without ruining this vital feedback loop. The only measurable difference
between a smart program and a dumb tool is in the psychology of the human
user, and the dumb tool approach is the only option that leads to practical
improvement.
The above argument suggests that it is better for us to believe that computers
cannot be conscious, but what if they actually are? This is a different
kind of question, a question of ontology.
I argue that computers are not conscious because they cannot recognize each
other. In other words, if we sent a computer in a spaceship to an alien
planet and asked for a definitive analysis of whether there were computers
present on that planet, the computer would not be able to answer. There
are theoretical limits on one program's ability to fully analyze another
that make this so. People can nonetheless recognize and use computers,
and therefore people cannot be in the same ontological category as computers.
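The "theoretical limits" invoked here are the classic undecidability results of computability theory: by Turing's diagonalization argument, no program can correctly decide, for every other program, a nontrivial property of its behavior such as halting. A minimal sketch of that argument, with illustrative names (`make_trouble`, `naive_halts`) that are assumptions for this example, not anything from the essay:

```python
def make_trouble(halts):
    """Given a claimed halting analyzer `halts(program, arg) -> bool`,
    construct a program the analyzer must misjudge."""
    def trouble(program):
        if halts(program, program):
            while True:        # analyzer said "halts" -> loop forever
                pass
        return "halted"        # analyzer said "loops" -> halt at once
    return trouble

# Any concrete analyzer is wrong about its own `trouble` program.
def naive_halts(program, arg):
    return False  # this analyzer claims every program loops forever

trouble = make_trouble(naive_halts)
# naive_halts predicts trouble(trouble) never halts, yet it plainly does:
result = trouble(trouble)
```

Whatever answer a proposed analyzer gives about `trouble` run on itself, `trouble` does the opposite, so no total, always-correct `halts` can exist.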
This is just another way of saying that without consciousness, the world
as we know it through our science need not be made of gross objects at all,
only fundamental particles. Complex processes such as computation can only
be found in nature through the filter of what Murray Gell-Mann has called
a "coarse-grained history" of the universe. For instance, one
has to be able to distinguish cars from air in order to measure "traffic".
Our most accurately confirmed scientific hypotheses, those of fundamental
physics, do not, however, acknowledge cars or other gross objects.
It is easy to claim that it is the state of a person's brain that notices
cars or computers, but that avoids the question of how the brain comes to
matter as a unit in the first place. If consciousness is associated with
a brain, why is it not also associated with a momentary correlation between
a brain and the arrangement of noodles on a plate of pasta being eaten by
the owner of the brain? If we wish to, we can find complex elements equivalent
to brains wherever we look. In a large enough city, the traffic patterns
could be interpreted as being exactly equivalent to a brain if we could
find the right translation table.
Even brains exist only by virtue of conscious acknowledgment. The alternative
idea would be that the right kind of complex process gives rise to consciousness.
In that case there would be a huge swarm of slightly different consciousnesses
around each person, corresponding to every combination of their brain, or
sections of it, with other objects in the universe. This idea violates
Occam's razor, the principle of simplicity.
A world without consciousness would be a world of elementary particles at
play. They would be the same particles in the same positions and relationships
as in our world, but no one would notice them as members of objects like
cars or brains.
It is important to point out that I am not claiming there is something outside
of my brain that contains the content of my experience. I can accept that
the content of my subjective experience might be held in my neurons, and
still claim that it is experience itself that makes those neurons exist
as effective units.
The first argument presented above, about the Turing test, turns out to
have practical relevance because it influences our ability to design better
user interfaces. Does the second, ontological argument have any practical
importance? I think it does in that computers have come to play such a
central role in our culture that our way of thinking about them affects
our ways of thinking about each other. The tendency to think of computation
as the most fundamental metaphor for experience and action leads inevitably
to sloppy computer metaphors in politics, economics, psychology, and many
other areas. I hope that if we acknowledge just how strange and wonderful
it is that we are conscious, that wonder will translate into less bland
and nerdy metaphors to guide us in those areas.