Faking It: Deception in the Computer Age



ELIZA’s Offspring: The Lure of Intelligent Machines

Oscar Firschein, Stanford University

In the story The Return of Martin Guerre1, a wife greets the man returned from a long absence as her husband (Davis). But does she have any doubts as to his real identity? Is she in a state of believing or is she merely desirous of belief?

A similar need for belief may exist in our treatment of computer-based sentient devices. Thus, we may regard the GPS car navigator as having human characteristics. “Recalculating,” the voice says officiously, when we fail to make a required turn. In a recent one-act play, the boxed female voice convinces a husband to run off with her when his wife gets out of the car for a rest stop. The theater audience gleefully joined in this absurdity.

Wouldn’t it be even more impressive if a device not only spoke, but understood and responded to our speech? This is now the case with the Apple iPhone 4S, in which SIRI (the phone’s intelligent software) can understand not just words, but the intentions behind them. “Set up an appointment with Mary Smith for 3 pm next Wednesday” requires the program to integrate calendar, contact list, and email apps, create and send an invitation, and come back with an appropriate spoken response. Watching SIRI in action, one becomes nearly convinced that a “human” is enclosed within. It is no wonder that the average person believes that sentient devices already exist. Many people assume that computers can already see, speak, carry on a conversation, understand social conventions, and detect human emotions (remember HAL in the film 2001?).

At the start of the computer age there were a few prescient researchers who saw the dangers of a machine imitating a person. Prof. Joseph Weizenbaum, in a 1966 paper, described his program ELIZA, which simulated a psychologist. The program was quite unsophisticated, consisting of a set of simple responses, so that when the user entered something like “I am sad,” ELIZA interpreted this as “I am X” and responded, “Why are you X?” (in this case, “Why are you sad?”). When the respondent mentioned the word “father” or “mother,” ELIZA asked, “Tell me more about your family.” When ELIZA did not find a pre-stored pattern, the program had a repertoire of general responses. Thus, it could reply with a statement such as, “That is interesting. Tell me more.”

Here is a typical conversation taken from Weizenbaum’s paper.

User: Men are all alike.

ELIZA: IN WHAT WAY?

User: They're always bugging us about something or other.

ELIZA: CAN YOU THINK OF A SPECIFIC EXAMPLE?

User: Well, my boyfriend made me come here.

ELIZA: YOUR BOYFRIEND MADE YOU COME HERE?

User: He says I'm depressed much of the time.

ELIZA: I AM SORRY TO HEAR YOU ARE DEPRESSED.

User: It's true. I am unhappy.

ELIZA: DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY?

Remember, in all of these interchanges ELIZA is merely examining strings of letters to find matches in the pre-stored word sequences. There is no understanding involved. Weizenbaum was dismayed to find that users were treating the program as if it were human, and were taking ELIZA’s advice seriously. To counteract such user behavior, he wrote a paper that revealed the secrets of how ELIZA faked intelligence, but people still sought the program’s advice. As a matter of fact, he found that his secretary, who had typed up the ELIZA paper, was using the program to deal with her personal problems. She seemingly had a will to believe that a computer could think.
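To make concrete just how little machinery lies behind such exchanges, here is a minimal sketch, in Python, of the kind of keyword matching described above. It is not Weizenbaum’s original code; the rules, responses, and function name are illustrative assumptions only.

import random
import re

# Illustrative rules in the spirit of ELIZA: each maps a stored pattern
# such as "I am X" to a canned response that echoes X back to the user.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why are you {0}?"),
    (re.compile(r"\b(father|mother)\b", re.IGNORECASE), "Tell me more about your family."),
]

# General responses used when no pre-stored pattern matches.
DEFAULTS = ["That is interesting. Tell me more.", "Please go on."]

def eliza_reply(user_input):
    for pattern, response in RULES:
        match = pattern.search(user_input)
        if match:
            # Echo the matched fragment back; no understanding is involved.
            return response.format(match.group(1).rstrip(".!?"))
    return random.choice(DEFAULTS)

print(eliza_reply("I am sad."))             # Why are you sad?
print(eliza_reply("My father is strict."))  # Tell me more about your family.
print(eliza_reply("It is a nice day."))     # one of the general responses

Even a toy version like this produces the surface illusion of attentiveness that Weizenbaum describes.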

Debate as to whether a machine could think arose even before the invention of the computer. In 1950, the British researcher Alan Turing, desiring a more formal definition of an “artificial intelligence,” posed what he called the Imitation Game (Turing). It was to be played by three people -- a man, a woman, and a human interrogator. The interrogator was in a room apart from the other two and asked them questions via teletype. The object of the game was for the interrogator to determine from the responses which of the two was the man and which was the woman. Initially, the man was instructed to reply as if he were the woman, and later the woman imitated the man. At some point, a computer would replace either the man or the woman. Statistics would be kept of the number of times that the computer succeeded in deceiving the interrogator into believing that it was the man or the woman. If the interrogator was fooled by the computer to the same degree that he had earlier been fooled by the two humans, Turing would say that the machine was intelligent. Turing posited that imitation requires great intelligence, and therefore a computer that could imitate gender would demonstrate far more intelligence than one simulating being human merely by carrying on a conversation.

To show the high level of intelligence required for Turing’s Imitation Game, I have demonstrated the process in a classroom. I informed the class that a male and a female student would be sent into the corridor. The male was to answer questions posed by the class as if he were a woman. A student courier would present the two students with these written questions, and their responses would be entered on the classroom blackboard by the courier. The class was to decide which answer came from the male and which from the female. After the game was played with the man imitating the woman, their roles would be reversed. The class quickly realized that it required much ingenuity to think up questions that would reveal which response came from the man and which from the woman, particularly since nowadays gender differences are more ambiguous. For example, since few women sew creatively and few men fix their own cars, questions in these domains probably could not be answered by either the man or the woman. The participants in the corridor also realized that faking gender requires much skill and ingenuity.

To date, there have been several sensational computer successes that have been hailed as demonstrations of computer intelligence, such as the defeat of the Russian chess champion Kasparov in 1997 by IBM’s Deep Blue, and the defeat of human champions on the TV quiz show Jeopardy! by IBM’s Watson in 2011.

In contrast with such computer intellectuality, this paper is concerned with socially intelligent computers that truly act like a person or animal. For example, SIRI, incorporated in the Apple iPhone 4S, can recognize speech and respond in natural language, can access huge amounts of information, and can carry on a more sophisticated conversation than ELIZA. A rave New York Times review said of ELIZA’s “offspring”:

You can say, “Wake me up at 7:35,” or “Change my 7:35 alarm to 8.” You can say, “What’s Gary’s work number?” Or, “How do I get to the airport?” Or, “Any good Thai restaurants around here?” Or, “Make a note to rent ‘Ishtar’ this weekend.” Or, “How many days until Valentine’s Day?” Or, “Play some Beatles.” Or, “When was Abraham Lincoln born?” In each case, SIRI thinks for a few seconds, displays a beautifully formatted response and speaks in a calm female voice . . . Unbelievable. (Pogue)

Quite an improvement over ELIZA!

In the social robot approach, designers aim for a device that can interact with a person in a human manner. For the past ten years or so there has been significant effort in producing these “social robots,” either in the form of intelligent toys for children or as “comfort robots” for the elderly. The toys attempt to mimic animal behavior (animal sounds, motion, speech response) that provides the illusion of a living creature.

For example, several years ago Furby was a very popular animal robot, an owl-like toy that required regular “feeding” and seemed to learn English. Its price ranged from $40 to $100, depending on the complexity of the device. A newly purchased Furby starts out speaking entirely Furbish, the unique language that all Furbies use, but is programmed to start using English words and phrases in place of Furbish over time, simulating the process of learning English. In 2005, new Furbies were released with voice recognition, more complex facial movements, and many other changes and improvements that made Furbies seem to understand meaning and act quite animal-like. Consider the reaction of a child to Furby:

Jessica, eight, plays with the idea that she and her Furby have body things in common . . . She has a Furby at home; when her sisters pull its hair, Jessica worries about its pain. . . (Turkle 37)

Comfort robots that provide pet-like companionship to the elderly tend to be more sophisticated and are therefore more expensive. The main function of such robots is to enhance the health and psychological well-being of elderly users. A popular Japanese-made comfort robot is PARO, modeled after the little harp seals found in northeastern Canada. It is covered with soft artificial fur to make people feel comfortable, as if they were touching a real animal. PARO is active during the day but gets sleepy at night. It has tactile, light, sound, temperature, and posture sensors to perceive people and its environment. It can express surprise and happiness by blinking its eyes and moving its head and legs, making it seem to have feelings. Every PARO has an individual “personality,” which it develops through a process of interactive behavioral learning with its owners. Tests of PARO in nursing homes and hospitals for handicapped children throughout the world showed that having a robot companion can bring about the same effects as interaction with a real animal. The British Guardian newspaper, reporting on elderly survivors of Japan’s March 2011 earthquake and tsunami, states,

. . .comfort has come in the form of a small, white robotic seal named Paro. . . They are treated as pets by the residents, many still dealing with memories of the quake. "If I hold on to this, it doesn't matter if there's a typhoon outside, I still feel safe," said 85-year-old Satsuko Yatsuzaka, after she had been hugging one of the seals for about half an hour. (Guardian)

Another comfort robot is AIBO, a robotic dog ($1,300-$2,000) developed by Sony that can walk, stand, sit, and lie down.

Depending how it is treated, an individual AIBO develops a distinct personality as it matures from a fall-down puppy to a grown-up dog. Along the way, AIBO learns new tricks and expresses feelings . . . A later version of AIBO recognizes its primary caregiver. (Turkle 53)

AIBO can “see” its environment and recognize spoken commands in Spanish and English. AIBO robotic pets are able to learn and mature based on external stimuli from their owner, their environment, and from other AIBOs. Its software allows it to be raised from pup to fully grown adult while going through various stages of development as its owner interacts with it.

And inevitably, there is the sex doll, built by TrueCompanion. Roxxxy stands 5 feet 7 inches tall and weighs 120 pounds, and can be programmed to learn the owner's likes and dislikes. According to the website of the company, Roxxxy is not limited to sexual uses and “can carry on a discussion and expresses her love to you.” Roxxxy is priced at $7,000 to $9,000.

There are actually several university labs devoted to the design of socially intelligent machines, and even an annual conference on social robotics. For example, Prof. Andrea Thomaz of Georgia Tech describes her research as developing computers that can learn from and collaborate with people, can apply emotional intelligence, and can perceive and respond to another person’s intentions (Kavli).

Methods for determining user emotion by analyzing facial expression and speech patterns are being investigated at MIT’s Media Lab. MIT Professor Rosalind Picard believes that if we want computers to be genuinely intelligent and to interact naturally with people, we must provide computers with the ability to recognize, understand, and even to have and express emotions (Picard). She wants to teach computers to be more sensitive to user emotional states, such as confusion, liking, and disliking.

Finally, the Social Robots Project at Carnegie Mellon University has developed VIKLA, a robot that has a personality and can behave according to social conventions. While a small set of rules of social behavior will initially be supplied to the robot, the project hopes to use learning techniques that will enable the robot to acquire additional rules on its own.

What if there is a human need to treat such devices as if they are intelligent, despite knowing that they are not? This is the view of Sherry Turkle, Professor of the Social Studies of Science and Technology at MIT, who has been studying people’s responses to intelligent-seeming devices for many years. Her startling conclusion, given in her recent book, “Alone Together: Why We Expect More from Technology and Less from Each Other” (Turkle), is that people want to be duped by these machines! Thus, in discussing ELIZA, she describes the “ELIZA effect”:

Over years and with some reluctance, I came to understand that ELIZA’s popularity revealed more than people’s willingness to talk to machines; it revealed their reluctance to talk to other people. The idea of an attentive machine provides the fantasy that we may escape from each other . . . We are willing to put aside a program’s lack of understanding and, indeed, to work to make it seem to understand more than it does – all to create the fantasy that there is an alternative to people. This is the deeper “ELIZA effect.” (Turkle 282)

And finally, “. . . roboticists have learned those few triggers that help us fool ourselves. We don’t need much. We are ready to enter the romance” (20). Thus, to Turkle, it is more than a strange quirk that people treat intelligent-seeming devices as if they were human. Rather, this behavior has ominous implications concerning the relationship of people to people: “… long before we have devices that can pass any version of the Turing test, the test will seem beside the point. We will not care if our machines are clever but whether they love us” (286).

Turkle thus sees a basic danger to society: intimacy with a seemingly intelligent machine is becoming more desirable than intimacy with a person. As she says,

. . . I call attention to our strong response to the relatively little that sociable robots offer – fueled it would seem by our fond hope that they will offer more. With each new robot, there is a ramp-up in our expectations. I find us vulnerable – a vulnerability not without risk. (52)

About fifteen years ago, I went to a celebration at the Kennedy Center in Washington. There was a robot in the courtyard, moving around and talking to the people. It would change its mode of speech, depending on whether it was talking to an adult or a child. It commented on the clothing the kids were wearing, so it seemed to be able to see. I immediately looked for the human operator, and sure enough there he was, half-hidden behind a column, talking into a microphone in his sleeve, while operating the motion control with his other hand in his pocket. If this had happened today, I don’t know whether I would have looked for a human operator.

I, too, get caught up in the human aspects of these sentient devices. I have a ten-year-old car navigator that needs updating. Whenever I cross the San Mateo Bridge and get onto Highway 880, my navigator goes crazy because there is a new interchange there. “Recalculating,” she cries over and over as she tries to correct my supposedly deviant path. I find myself comforting her. “It’s o.k.,” I say. “Everything will be o.k. soon.” I feel better when we are on 880 and she calms down.

As Weizenbaum discovered in the case of ELIZA, the problem of people treating machines as if they are human cannot be solved by showing that these devices are only simulating intelligence. We will soon confront a society dominated by smart, sociable machines that are far more capable than ELIZA, and we can expect that the public will naturally respond enthusiastically to such devices. Will this result, as Turkle believes, in a society that prizes human-to-computer relationships more than human-to-human contact? If so, we will have a society that, like the wife of Martin Guerre, “believes” by submerging disbelief.

References

Davis, N. Z. The Return of Martin Guerre. Cambridge, MA: Harvard UP, 1983.

The Guardian. “Japanese earthquake survivors find comfort in robot seals.” 1 Aug. 2011.

Kavli Foundation. “Recipe for a Robot.”

Picard, R. Affective Computing. Cambridge, MA: MIT Press, 1997.

Pogue, D. “New iPhone Conceals Sheer Magic.” New York Times, 11 Oct. 2011.

Turing, A. M. “Computing Machinery and Intelligence.” Mind 59 (Oct. 1950): 433-460. Reprinted in Feigenbaum, E. A., and J. Feldman, eds., Computers and Thought. New York: McGraw-Hill, 1963.

Turkle, S. Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books, 2011.

NOTE

1 Martin Guerre, a French peasant of the 16th century, was at the center of a famous case of imposture. Several years after Guerre had left his wife, child, and village, a man claiming to be Guerre arrived. He lived with Guerre’s wife and son for three years. The false Martin Guerre was tried and executed; the real Martin Guerre had returned during the trial.
