Agents of Alienation

 

by

Jaron Lanier

 

Part One

 

I find myself holding a passionate opinion that almost nobody in the "Wired-style" community agrees with, and I'm wondering: what's gotten into all of you? In the wide, though shrinking, world away from computers, most people find my position obvious, while infophiles find it impenetrable. I am trying to bridge a chasm of misunderstanding.

Here is the opinion: that the idea of "intelligent agents" is both wrong and evil. I also believe that this is an issue of real consequence to the near-term future of culture and society. As the infobahn rears its gargantuan head, the agent question looms as a deciding factor in whether this new beast will be much better than TV, or much worse.

The idea of agents comes up in response to an obvious predicament of the new media revolution we find ourselves hurtling through. How do you make sense of a world of information available to you on demand? How do you find the grains of gold in the heaps of dirt that will be shipped to you on the infobahn every day? The "official" answer is that autonomous "Artificial Intelligence" programs called agents will get to know you by hanging out with you, and they'll figure it all out, presenting you with a custom morning newspaper, or whatever. This is the idea behind Microsoft's "Bob" program, the pitch of the AT&T "You will" commercials, of Nicholas Negroponte's columns in Wired magazine (in which he has called intelligent agents the "unequivocal" future of computing), and of the marketing of many products, like Apple's Newton.

While intelligent agents have been a dominant theme in anticipations of the immediate future for some time, they can rarely be found in the present. The offensively paternal "Bob" and the Newton are rare examples of shipping products that claim an agent capability. I will argue that the rarity of actual agents does not diminish their harm, because they exist primarily in the minds of expectant beholders anyway, and in a way that damages those minds.

I should also make it clear that I am concerned with autonomous agents that people are meant to interact with in consequential ways; the term "agent" is also sometimes used in fields like Artificial Life to refer to experimental software elements in a closed system. These should be classified separately, though true believers in agents might not see the distinction.

 

So, why am I enraged by the notion of agents? What galls me isn't just the practical problems, but I'll summarize those first, to get them out of the way:

 

· If info-consumers see the world through agents' eyes, then advertising will transform into the art of controlling agents, through bribing, hacking, whatever. You can imagine an "arms race" between armor-plated agents and hacker-laden ad agencies. Lovely.

 

· Since agents are little computer programs, they'll have a lot more in common with each other than people do. Agents would become the new information bottleneck, narrowing the otherwise delightfully anarchic infobahn, which was supposed to replace the broadcast model with something more inclusive.

· An agent's model of what you are interested in will be a cartoon model, and you will see a cartoon version of the world through the agent's eyes. It is therefore a self-reinforcing model, as the toy sketch after this list illustrates. This will recreate the lowest-common-denominator approach to content that plagues TV. "You're interested in Balinese ritual, therefore you're interested in travel, therefore you're interested in the Infobahn Travel Game Show!"

 

· Agents will inevitably deliver an overdose of kitsch. Microsoft's "Bob" is the agent of the moment, and it proposes to the user a life of caricatured meaninglessness that slides unintentionally into the grotesque, straight out of Diane Arbus.
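
To make the self-reinforcing loop in the cartoon-model bullet concrete, here is a minimal toy sketch in Python. Nothing in it comes from any real agent product; the topic list, the profile dictionary, and the agent_selects function are all hypothetical, invented for illustration. The structural point is that the profile decides what you are shown, and what you are shown is the only thing the profile ever learns from.

    import random

    # A toy catalog of things you might actually care about.
    TOPICS = ["balinese ritual", "travel", "game shows",
              "gamelan music", "anthropology", "cooking"]

    # The agent's cartoon of you: one crude weight per topic it knows about.
    profile = {"balinese ritual": 1.0}

    def agent_selects(catalog, profile, n=3):
        # Rank by profile weight; topics outside the cartoon score zero
        # and never surface once anything else carries a weight.
        return sorted(catalog, key=lambda t: -profile.get(t, 0.0))[:n]

    for day in range(30):
        shown = agent_selects(TOPICS, profile)
        clicked = random.choice(shown)  # you can only choose from what you're shown
        profile[clicked] = profile.get(clicked, 0.0) + 1.0  # the cartoon sharpens itself

    print(profile)  # "gamelan music" and "anthropology" never had a chance to appear

After thirty rounds the profile looks confident, but it is only a record of its own narrowing: everything outside the initial cartoon has vanished from view.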

 

Now true agent believers can answer any specific criticism by postulating better agents. But the specific problems are not the ones that make my blood boil, anyway.

Agents make people redefine themselves into lesser beings. THAT is the monster problem.

Am I making an inappropriately broad claim here? I don't think so.

You see, the problem is that the only difference between an autonomous "agent" program and a non-autonomous "editor/filter" program is in the psychology of the human user. You change yourself in order to make the agent look smart. Specifically, you make yourself dumb.

Well, actually, agent programs as a rule will also have worse user interfaces than non-agent programs.

Here is how people reduce themselves by acknowledging agents, step by step:

 

Step 1) Person gives computer program extra deference because it is supposed to be "smart" and "autonomous". (People have a tendency to yield authority to computers anyway, and it's a shame. In my experience and observations, computers, unlike other tools, seem to produce the best results when users have an antagonistic attitude towards them.)

Step 2) Projected autonomy is a self-fulfilling prophecy, as anyone who has ever had a teddy bear knows. The person starts to think of the computer as being like a person.

Step 3) As a consequence of unavoidable psychological algebra, the person starts to think of himself as being like the computer.

Step 4) Unlike a teddy bear, the computer is made of ideas. The person starts to limit herself to the categories and procedures represented in the computer, without realizing what has been lost. Music becomes MIDI, art becomes PostScript. I believe that this process is the precise origin of the nerdy quality that the outside world perceives in some computer culture.

Step 5) This process is greatly exacerbated if the software is conceived of as an agent and is therefore attempting to represent the person with a software model. The person's act of projecting autonomy onto the computer becomes an unconscious choice to limit behaviors to those that fit naturally into the grooves of the software model.

Even without agents, a person's creative output is compromised by identification with a computer. With agents, however, the person himself is compromised.

 

For a recent example of the dumbing of human behavior in the presence of a product that is designated "smart", look no further than the Newton. Find someone who uses one, especially the agent features, and watch them closely. See how well they have adapted themselves to the project of making the product look smart? Don't they look silly? Do you realize that if everyone were contorting themselves to take down simple notes, or emphasizing tasks that happen to fit into a database ("Call Mom!"), it would look normal?

(If, on the other hand, you want to see a program that encourages people to be smart and autonomous, check out Eudora.)

 

If Ambrose Bierce were alive today, I think he might add the following entry to "The Devil's Dictionary":

 

Agent (n): A network/database query program whose user interface is so obscure that the user must think of it as a quirky, but powerful, person in order to accept it. ALT. Definition: A program that conceals a haphazard personality profile of a user from that user.

Now an agent supporter might say: Why does the personality profile have to be concealed? Wouldn't your objection be answered if users could edit what the agent does? Of course that would satisfy me! But where is the room for autonomy once you have such an editor? You have changed your psychology, empowering yourself at the expense of the agent. You have murdered the agent by exposing its murky guts to sunlight.
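
To restate the psychological point in code: what follows is a minimal sketch, entirely hypothetical (the class names, the select function, and the keyword profile are inventions for this essay's argument, not anyone's shipping product). The filtering machinery is identical in both classes; the only difference is whether the profile is a document the user owns or a concealed attribute the user must defer to.

    def select(items, profile):
        # The shared machinery: keep any item matching a profiled keyword.
        return [item for item in items if any(k in item for k in profile)]

    class EditorFilter:
        # A tool: the profile is the user's own document.
        def __init__(self, profile):
            self.profile = profile  # visible; read it, argue with it, rewrite it

        def pick(self, items):
            return select(items, self.profile)

    class Agent:
        # The same machinery wearing a persona: the profile is concealed.
        def __init__(self):
            self.__profile = []  # tucked out of sight; "just trust me"

        def observe(self, clicked_item):
            # A haphazard inference the user never gets to inspect or correct.
            self.__profile.extend(clicked_item.split())

        def pick(self, items):
            return select(items, self.__profile)

    news = ["Bali ritual calendar", "Infobahn Travel Game Show", "gamelan review"]
    me = EditorFilter(["Bali", "gamelan"])
    print(me.pick(news))  # ['Bali ritual calendar', 'gamelan review'] -- and I know why

Expose the Agent's hidden profile as something the user reads and edits, and it collapses into the EditorFilter: the same code, minus the autonomy and minus the deference.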

 

Agents are the work of lazy programmers. Writing a good user-interface for a complicated task, like finding and filtering a ton of information, is much harder to do than making an intelligent agent. From a user's point of view, an agent is something you give slack to by making your mind mushy, while a user-interface is a tool that you use, and you can tell whether you are using a good tool or not.

The extremely hard work of making user-interfaces to genuinely empower people is the true work of the information age, much harder, say, than making faster computer chips. The Macintosh was a prime example of this work. It didn't do anything other computers didn't, it just made those things clear to users, and that was a revolution. It should be remembered as an early step in a journey that goes much farther.

 

So agents are double trouble. Evil, because they make people diminish themselves, and wrong, because they confuse the feedback that leads to good design.

But remember, although agent programs tend to share a set of deficiencies, it is your psychology that really makes a program into an agent; a very similar program with identical capabilities would not be an agent if you took responsibility for understanding and editing what it does. An agent is a way of using a program, in which you have ceded your autonomy. Agents only exist in your imagination. I am talking about YOU, not the computer.

 

 

 

Part Two

Now, my objections to agents would surely be moot if agents were truthfully, in fact, autonomous and even perhaps conscious. If agents were real, it would be a lie to deny them. To address this possibility is to consider Artificial Intelligence. The problem with the AI school of technology development is that it doesn't say a thing about technology at all; rather, it redefines people. As Turing pointed out, the only objective measurement of humanity is the responses and judgments of humans.

If you don't know about the Turing Test, you should. It is the creation myth of artificial intelligence. It was invented by a brilliant mathematician, Alan Turing, considered a war hero for having broken a Nazi secret code... who happened to be gay and was prosecuted by the British government and subjected to quack treatments for homosexuality. He was forced to receive large quantities of female hormones and developed breasts. It was under these conditions, shortly before his apparent suicide, that he created the mythical basis for smart machines. Einstein's thought experiments were in vogue, and Turing offered a thought experiment in their spirit.

The Turing Test has now practically become an urban legend of the computing world. Here is the usual formulation: Imagine two soundproof booths, one occupied by a man, the other by a woman, each typing at you, each trying to impersonate the other (a precursor to the current fun troubles with cyber-dating!). Then the test proceeds with a man in one booth and a machine in the other. The claim of the Turing Test is that if you cannot tell the difference, you have no scientific basis to claim a different status for people than for machines. When I first learned about the test in school, I thought it was odd and superfluous to begin with the man/woman setup, and I only learned of its significance much later.[1] Turing was trying to escape the pain of his circumstances by fantasizing an abstract intelligence, free of the dreadful mysteries of the flesh.

The problem with the Turing Test is that it presents a conundrum of scientific method. We presume that improvement to machines takes place, so there is a starting state in our experiments where the human is considered "smarter", whatever that means, than the computer. We are measuring a change in human ability to discriminate between human and machine behavior. But the human is defined as the most flexible element in the measurement loop at the start, so how do we know we aren't measuring a state change in the human, rather than the computer? Is there an experimental difference, in this setup, between computers getting "smarter" and humans getting "stupider"? I don't think so.

 

If I seem overly mean in attacking these ideas, it is because I have been in a beleaguered minority for so long in the infophile community. Believers in the artificial intelligence world-view have a few standard insults for disbelievers. Here are two insults and my responses:

 

Insult One: Non-AIers must be "dualists" claiming some sort of alternate track of reality for souls or something.

 

My response: Experience is the only thing we share that we do not share objectively. I'm happy to assume that the study of the brain can potentially proceed to the point where every tiny thought and feeling can be measured and even controlled, and still experience itself will remain unmeasured. Even though experience cannot be experimentally verified, and therefore can never be a part of science, we cannot ignore it in forming our philosophy, however tempted we might be by the simplicity of doing so, any more than a physicist could ignore gravity in order to have a unified field theory.[2]

 

Insult Two: If you do not accept AI you are placing humans in a "special" category, as onerous and silly as the placement of the earth at the center of the solar system by the church in the face of Galileo.

 

My response: What I am doing is avoiding judgment about what I cannot measure of myself, and keeping a general attitude of optimism about what I might be, that is all. In this case, unlike Galileo's, science will remain perpetually uncertain. We can, if we are unusually disciplined, accept this uncertainty in our Platonic thoughts, but when it comes time to act, we must choose a fantasy to act on. I am ultimately arguing the merits of the humanist fantasy versus the materialist fantasy.

 

Part Three

 

There is a much higher stake here than a question of bad science and software design. It is a subtle question that might be described as spiritual, and it means all the world to me.

The agent question is important because it is part of a bigger question: Do people keep an open mind about what they are, or might be capable of becoming? Or do people limit themselves according to some supposedly objective measure, perhaps provided by science or technology? A related question is also important: Is information or experience primary?

The worst failing of communism, in my opinion, is that it did not acknowledge the existence of human experience beyond the scope of its own ideas. The most stifling threat to freedom is to bind people within the limits of ideas, since we, just like the rest of nature, are always a step ahead of our best interpretations. Thus under communism we saw an attempt to destroy spirituality, sentimentality, identity, and tradition.

In another context, discussing Virtual Reality, I came up with the slogan "Information is Alienated Experience". This phrase came to me partially in response to the imperialist tendency of theorists of politics, art, and computer design to pretend that ideas or words can represent people.

The discipline of science is to only respect falsifiable theories. When you create a boundary for yourself or others by believing in a theory of what you or they are, you create a conundrum of scientific method in which you can never know what might have been, and therefore have no opportunity to test the theory.

Part of the beauty of the American idea of government is in its self-limiting charter. For instance, the phrase "the pursuit of happiness" in an instant identifies an indefinable territory beyond the reach of law, or even language, that constitutes a critical part of "freedom". Let's limit the charter of computers and the infobahn in the same way.

 

This essay has been an attempt to find such limitations. It can be summarized by three limiting principles that I propose, to avoid a recurrence of agent-like confusions:

 

· treat computers as nothing more than fancy conduits to bring people together

 

· never treat information as being real on its own; its only meaning is in its use by people

 

· never believe that software models can represent people

 

Part Four

 

Versions of this essay, excluding this final section, have been posted on the web and delivered in lecture form. There has been an extraordinary, sharply divided response. The artificial intelligence question is the abortion question of the computer world. What was once a research topic has become a controversy where practical decisions must reflect a fundamental ontological definition about what a person is and is not, and there is no middle ground.

 

Feelings in the computer community run very deep on this subject. I have had literally hundreds of people come up to me after a talk, saying that it completely changed the way they thought about computers and answered a vague unease about some computer trends that they had never heard articulated before.

 

A fiery contingent (including Nicholas Negroponte) thinks the ideas in this essay are dangerously wrong. Nicholas has even expressed the opinion that it is "irresponsible" for me to present my argument in public "because it could mislead people". Part of the reason for this might be that many in the computer world are attracted to the deathless world of abstraction, and nurture hopes of being able to live forever by backing themselves up onto a computer tape.

 

Still other members of the community are, I believe, overcome with a reaction of denial, and convince themselves that I am saying nothing more than that agents aren't good enough yet. It is this final group that surprises and infuriates me. There is such a universal orthodoxy holding that artificial intelligence is a useful and valid idea that many in the computer community can read this essay and believe that I am only criticizing certain agents, or expressing a distaste for premature agents. They are somehow unable to grasp that someone could categorically attack ALL agents on the basis that they do not exist, and that it is potentially harmful to believe that they do. They have staked their immortality on the belief that the emperor is indeed wearing new clothes.

 

Ultimately there is nothing more important to us than our definition of what a person is. Isn't this the core question in a great many controversies? This definition drives our ethics, because it determines what is enough like us to deserve our empathy. Whether we are concerned with animal rights, or whether we feel it is essential to intervene in Bosnia, our perceived circle of commonality determines our actions. Beyond ethics, our sense of what else is like us is the glue of our culture. The multiculturalism debate is another example of a struggle to define the center and the extent of the circle of commonality.

 

I have long believed that the most important question about information technology is "How does it affect our definition of what a person is?" The AI/agent question is the controversy that exposes the answer. It is an answer that is directly consequential to the pattern of life in the future, to the quality of the technology which will be the defining vessel of our culture, and to a spiritual sense of whether we are to be bounded by ideas or not.

 

We cannot expect to have certain, universal agreement on any question of personhood, but we all are forced to hold an answer in our hearts and act upon our best guess. Our best guess runs our world.

 

-END-


[1] Turing's original paper doesn't start this way, but that is the way it is taught and I am most interested in its life as a legend.

[2] I don't use the word "consciousness" anymore because it has been colonized by materialists, and now means a specialized part of a computer that models another part and can exert executive control.


