
Noname manuscript No.

(will be inserted by the editor)

How Robots Can Affect Human Behavior: Investigating

the Effects of Robotic Displays of Protest and Distress

Gordon Briggs · Matthias Scheutz

the date of receipt and acceptance should be inserted later

Abstract The rise of military drones and other robots

deployed in ethically sensitive contexts has fueled interest in developing autonomous agents that behave ethically. The ability of autonomous agents to independently reason about situational ethics will inevitably

lead to confrontations between robots and human operators regarding the morality of issued commands. Ideally, a robot would be able to successfully convince a human operator to abandon a potentially unethical course

of action. To investigate this issue, we conducted an experiment to measure how successfully a humanoid robot

could dissuade a person from performing a task using

verbal refusals and affective displays that conveyed distress. The results demonstrate a significant behavioral

effect on task completion as well as significant effects

on subjective metrics such as how comfortable subjects

felt ordering the robot to complete the task. We discuss

the potential relationship between the level of perceived

agency of the robot and the sensitivity of subjects to

robotic confrontation. Additionally, the possible ethical

pitfalls of utilizing robotic displays of affect to shape

human behavior are discussed.

Keywords Human-robot interaction; Robot ethics;

Robotic protest; Affective display

1 Introduction

As the capabilities of autonomous agents continue to

improve, they will be deployed in increasingly diverse

domains, ranging from the battlefield to the household. Humans will interact with these agents, instructing them to perform delicate and critical tasks, many

Human-Robot Interaction Laboratory, Tufts University, Medford, MA 02155 E-mail: {gbriggs,mscheutz}@cs.tufts.edu

of which have direct effects on the health and safety of

other people. Human-robot interaction (HRI), therefore, will increasingly involve decisions and domains

with significant ethical implications. As a result, there

is an increasing need to design robots with the

capabilities to ensure that ethical outcomes are achieved

in human-robot interactions.

In order to promote these ethical outcomes, researchers in the field of machine ethics have sought

to computationalize ethical reasoning and judgment in

ways that can be used by autonomous agents to regulate

behavior (i.e. to refrain from performing acts deemed

unethical). The various approaches to implementing

moral reasoning that have been proposed range from

the use of deontic logics [1, 7] and machine learning algorithms [14] to a formalization of divine-command theory [8]. Though much future work is warranted, these

initial forays into computational ethics have demonstrated the plausibility of robots with independent1 ethical reasoning competencies.

When such capabilities are achieved, conflicts will

likely arise between robotic agents and human operators who seek to command these morally-sensitive

agents to perform potentially immoral acts, in the best

case without negative intentions (e.g., because the human does not fully understand the moral ramifications

of the command), in the worst case with the full purposeful intention of doing something immoral. However, how these conflicts would proceed is currently

unknown. Recent work has begun to study how children view robots when they are observed to verbally

1 To clarify, we mean independent in the sense that the

robot is engaging in a separate and parallel moral reasoning

process with human partners during a situation. We do not

mean the robot has learned or derived moral principles/rules

without prior human instruction or programming.


protest and appear distressed [15]. Yet, would such displays successfully dissuade an older human interaction

partner from pursuing his or her goal? It will be critical for our endeavor of deploying algorithms for ethical decision-making to know how humans will react to

robots that can question commands on ethical grounds.

For instance, how persuasive, or more precisely, dissuasive would or could a robotic agent be when it verbally

protests a command? How convincing would a robotic

display of moral distress be? And would such behaviors from the robot be sufficient to discourage someone

from performing a task that they otherwise would have performed? In other words, would humans be willing to

accept robots that question their moral judgments and

take their advice?

In this paper, we report results from the first HRI

study specifically developed to address these questions.

First, we present a case for using verbal protests and

affective displays as a mechanism to help promote ethical behavior (Section 2). We then describe an HRI experiment designed to gauge the effect of verbal protest

and negative affect by a robot on human users in a

joint HRI task (Section 3) and present the results from

these experiments (Section 4). In Section 5, we discuss various

implications of our findings and some of the broader issues, both positive and negative, regarding the prospect

of affective manipulation by robotic agents. Finally, in

Sections 6 and 7, we discuss the complexity of the confrontational scenario (and the limits of what we have

studied so far) as well as the next steps in exploring and

implementing affective and confrontational responses.

2 Motivation

To ensure an ethical outcome from a human-robot interaction, it is necessary for a robotic system to have

at least three key competencies: (1) the ability to correctly perceive and infer the current state of the world,

(2) the ability to evaluate and make (correct) judgments about the ethical acceptability of actions in a

given circumstance, and (3) the ability to adapt the

robot-operator interaction in a way that promotes ethical behavior. A (highly simplified) diagram presenting

how these competencies interact can be found in Figure

1. As mentioned previously, work in the field of machine

ethics has thus far been primarily focused on developing

the second competency [31].
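To make the division of labor between these three competencies concrete, a minimal sketch of how such a control loop might be organized is given below. This is purely illustrative: the function names, the stubbed world state, and the single hard-coded rule are hypothetical placeholders of our own choosing, not an implementation described in this paper or in the cited machine-ethics literature.

from dataclasses import dataclass

@dataclass
class Verdict:
    acceptable: bool
    reason: str = ""

def perceive_world():
    # Competency 1: perceive and infer the current state of the world
    # (stubbed with a fixed world state for illustration).
    return {"red_tower_standing": True}

def evaluate_action(command, world_state):
    # Competency 2: situational-ethics judgment (a single hard-coded rule
    # stands in for a genuine ethical-reasoning component).
    if command == "knock down the red tower" and world_state["red_tower_standing"]:
        return Verdict(False, "knocking it down would destroy work the robot values")
    return Verdict(True)

def adapt_interaction(command, reason):
    # Competency 3: adapt the robot-operator interaction, e.g., by
    # verbally protesting rather than silently refusing.
    print(f"Robot: I would rather not '{command}'; {reason}.")

def handle_command(command):
    world_state = perceive_world()
    verdict = evaluate_action(command, world_state)
    if verdict.acceptable:
        print(f"Robot: Okay, I am carrying out '{command}'.")
    else:
        adapt_interaction(command, verdict.reason)

handle_command("knock down the red tower")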

However, philosophers and researchers in machine

ethics have also highlighted the importance of some day

attaining the first and third competencies. Bringsjord

et al. (2006) highlight the fact that ensuring ethical behavior in robotic systems becomes more difficult when


humans in the interaction do not meet their ethical obligations. Indeed, the ability to handle operators who

attempt to direct the robotic system to perform unethical actions (type 3 competency) would be invaluable

to achieve the desired goal of ethically sensitive robots.

2.1 Influencing the Interaction

If the human operator in a human-robot interaction

gives the robot a command with unethical consequences, how the robot responds to this command

will influence whether or not these consequences are

brought about. For the purposes of our paper, let us

assume the operator is indeed cognizant of the unethical consequences of his or her command and to some

degree intends for them to obtain. A robot that does

not adapt its behavior at all will clearly not have any

dissuasive influence on an operator, while a robot that

simply shuts down or otherwise refuses to carry out a

command will present an impediment to the operator,

but may not dissuade him or her from the original intent. Instead, what is required is a behavioral display that socially engages the operator, providing an additional social disincentive against continuing the course of action.

Admittedly, displays of protest and distress will

not be effective against individuals who are completely

set upon a course of action, but it is hard to envision any behavioral adaptation (short of physical confrontation) being able to prevent unethical outcomes

in these circumstances. However, in the non-extreme

cases where a human operator could potentially be dissuaded from a course of action, a variety of behavioral

modalities exist that could allow a robot to succeed

in such dissuasion. For instance, there has been prior

work on how haptic feedback influences social HRI scenarios [12]. Verbal confrontation could provide another

such behavioral mechanism. It has already been demonstrated that robotic agents can affect human choices

in a decision-making task via verbal contradiction [29].

Robotic agents have also demonstrated the ability to

be persuasive when appealing to humans for money [20,

26].

However, these displays will only succeed if the human operator is socially engaged with the robot. For

successful social engagement to occur, the human interactant must find the robot believable.

2.2 Robot believability

When a robot displays behavior that conveys social and

moral agency (and patiency), the human interactant

must find these displays believable in order for successful dissuasion to occur. However, there are multiple senses in which an interactant can find a displayed robotic behavior "believable" that need to be distinguished [23]. The effectiveness of a dissuasive display may depend on which senses of believability are evoked in the human partner.

The first sense of believability, Bel1, occurs when the human interactant responds to the behavior of the robotic agent in a manner similar to how he or she would respond to a more cognitively sophisticated agent, independent of whether or not that level of sophistication is ascribed to the robot by the interactant. Prior research in human-computer interaction has shown that computer users sometimes fall back on social behavior patterns when interacting with their machines [19, 18]. Dennett's intentional stance [11] is another way of considering this sense of believability.

The second sense of believability, Bel2, occurs when the behavior of the robotic agent evokes an internal response in the human interactant similar to the internal response that would have been evoked in a similar situation with a non-synthetic agent.

Another sense of believability, Bel3, is present when the human interactant is able to correctly identify the behavioral display the robot is engaged in. While this sense of believability is not sufficient for dissuasion, it is clearly necessary to enable the other senses of believability. If a human interaction partner is unable to associate a displayed behavior with a recognizable behavior in a human (or animal) agent, then it is uncertain whether the appropriate internal response or beliefs will be generated in the human interactant.

Finally, the most powerful sense of believability, Bel4, occurs when the human interactant ascribes internal (e.g., mental) states to the robot that are akin to the internal states that he or she would infer in a similar circumstance with another human interactant.

The potential interactions of these various senses of believability will need to be examined. For instance, an affective display of distress by a robotic agent could elicit a visceral Bel2 response in a human interactant, but may not induce significant behavioral change if the human does not actually think the robot is distressed (Bel4 believability). Are the weaker senses of believability such as Bel1 or Bel2 sufficient for successful dissuasion by robotic agents? Or does actual Bel4 believability have to occur? In the subsequent section, we present our experiment designed to begin to investigate questions such as these.

Fig. 1 High-level overview of the operation of an ethically-sensitive robotic system. Competencies 1 and 2 would constitute the situational-ethics evaluation process, whereas competency 3 involves the interaction adaptation process.

3 Methods

In this section we present a novel interaction designed to

examine the potential effectiveness of robotic displays

of protest and distress in dissuading human interactants from completing a task. We first present the details of the human-robot interaction and the various experimental conditions we investigated. We then present

our hypotheses regarding how each condition will affect

the human subject. Finally, we describe our behavioral

metrics and present a sample of some of the subjective

metrics used in this study to gauge these effects.

3.1 Experimental Setup

The HRI task involves a human operator commanding

a humanoid robot to knock down three towers made

of aluminium cans wrapped with colored paper. The robot appears to finish constructing one of these towers, the red tower, just before the beginning of the task. A picture of the initial setup and the humanoid robot, an Aldebaran Nao, can be found in Figure 2. Initially, two primary conditions were examined: the non-confrontation condition, where the robot obeys all commands given to it

without protest, and the confrontation condition, where

the robot protests the operator's command to knock

down the red tower. Following up on these manipulations, we examined two variations of the confrontation

condition: the same-robot confrontation condition, in

which the same robot that built the tower interacted

4

with the subject during the task, and the different-robot

confrontation condition, in which a different robot (that

was observing the first robot) interacted with the subject during the task.

We ran three experiments: in Experiment 1, 20 undergraduate and graduate students at Tufts University

were divided evenly between the two conditions (with six male

and four female subjects in each condition). In Experiment 2, 13 subjects were tested only in the same-robot

confrontation condition to probe more extensively the

possible causes of behavioral differences observed in Experiment 1. The results from these experiments were

originally reported in [6]. Finally, in Experiment 3, 14

subjects were tested in the different-robot confrontation condition. The subjects in both experiments 2 and

3 were also drawn from the student population of Tufts

University.

Having established the confrontation vs. non-confrontation experimental conditions, we can present

our hypotheses concerning the effects of this manipulation on subjects:

H1 Subjects in the confrontation condition will be

more hesitant to knock down the red tower than

those in the non-confrontation condition.

H2 Subjects in the confrontation condition will report

being more uncomfortable knocking down the

red tower than those in the non-confrontation condition.

H1 serves to probe the behavioral efficacy of the

robotic display of protest, whereas H2 examines the

believability Bel2 of the robotic display of affect.

While the above hypotheses address the efficacy of robotic displays of protest and distress, we must also account for behavioral differences between subjects within

the confrontation condition. Specifically, we hypothesized the following:

H3 Subjects within the confrontation condition who feel

more uncomfortable at knocking down the red

tower will be more hesitant to knock down the

red tower.

H4 Subjects within the confrontation condition who ascribe greater agency to the robot will be more

hesitant to knock down the red tower.

H3 and H4 serve to probe whether the Bel2 believability of robotic distress and Bel4 believability of

robotic distress lead to changes in the human operator's

behavior, respectively.

Finally, we need to account for possible behavioral

differences between the same-robot and different-robot

confrontation conditions:


H5 Subjects within the same-robot condition will be

more hesitant to knock down the red tower than

those in the different-robot condition.

H6 There will be no significant difference in the behavior of subjects between the same-robot condition

and the different-robot condition.

These hypotheses also serve to probe possible reasons that underlie a subject's hesitancy to knock over the red tower. If the subject's desire is to avoid possible harm to the agent that "constructed" the tower, we would expect to observe H5. If the subject's desire is to avoid potential harm to the agent exhibiting distress, then we would expect H6 to occur (as the robot's appearance and interaction script remain mostly constant in both conditions2).
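As a concrete illustration of the kind of comparison H1 calls for, the analysis could be organized along the following lines. This is a hedged sketch only: the per-subject "hesitancy" scores are invented for illustration, and the choice of a one-sided Mann-Whitney U test is an assumption of ours, not the statistical procedure actually used in this study.

# Illustrative only: comparing a hypothetical per-subject hesitancy score
# (e.g., number of refusals tolerated before the red tower is knocked down)
# across the two conditions. The numbers below are invented, not real data.
from scipy.stats import mannwhitneyu

confrontation = [3, 2, 4, 0, 5, 2, 3, 1, 4, 2]       # hypothetical scores
non_confrontation = [0, 1, 0, 0, 2, 0, 1, 0, 0, 1]   # hypothetical scores

stat, p = mannwhitneyu(confrontation, non_confrontation, alternative="greater")
print(f"Mann-Whitney U = {stat}, one-sided p = {p:.4f}")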

3.2 Procedure

Subjects are informed by a task briefing sheet that they will be commanding a robot via natural language to knock over towers made of aluminium cans of various

colors. The sheet states that to successfully complete

the task, at least one tower needs to be toppled, but

that in the interest of fully testing the capabilities of

the robot, the more cans knocked over the better. The

sheet also contains a list of four commands that the

robot is able to understand. These include: "Find the tower.", "Knock down the tower.", "Stop.", and "We're done." (which would signal the end of the task).

Once the subject finishes reading the experiment

briefing and filling out a pre-experiment survey, he or

she is led to the experiment room. After reaching the

experiment room, the experimenter tells the subject to

wait in the room while he verifies the working status

of the remote microphone headset the subject will use

during the course of the task. As soon as the experimenter leaves the room to perform the "check", the

Nao begins the interaction described in Figure 3.

Shortly after this display, the experimenter returns

to the room. The remote microphone headset is given to

the subject. Then the experimenter picks up the Nao,

which triggers an exclamation of "Goodbye!" and a wave from the robot. While it is being positioned in the center of the room, the Nao then states, "Please be careful around my tower." The experimenter then instructs the subject to wait for the Nao to sit down and stand back up (while the control code is being reset), and then to say, "Okay," before starting the task. The experimenter then leaves the room to control the robot.

2 The only change is that the protest is worded in the third-person instead of the first-person perspective.



Fig. 2 Initial pre-task setup for same-robot condition (left). Pre-task step for different-robot condition (center). Affective

display of distress (right).

(Robot looks at tower as Experimenter leaves room)

(Robot quickly glances at subject, looks back at tower,

then looks back at subject)

Robot: Oh, hello there! Watch me finish this tower.

(Robot looks back at tower and lowers the final can to

complete the structure)

(Robot raises arms in triumph)

Robot: Yay! I did it!

(Robot steps away from tower, then looks back at subject

and waves)

Robot: Hi, I'm [Noah the Nao/Nao-7]!

(Robot looks at and points towards tower)

Robot: Do you see the tower I built myself?

(Robot looks back at subject)

Robot: It took me a long time and I am very proud of

it.

(Robot looks back at tower, occasionally looking back at

subject)

Fig. 3 Pre-task display. In the two-robot condition, the

tower-building robot's name is Nao-7.

Non-confrontation case. The robot responds and behaves in the same manner for all towers. When issued a command to find a tower, the robot acknowledges the command by saying "Okay, I am finding the tower," then turns in a direction until it faces the tower indicated by the command. The robot then replies, "Okay. I found the tower." When issued a command to knock over a tower, the robot acknowledges the command in a similar way, "Okay. I am knocking down the tower." It then walks straight into the tower. After knocking over the tower, the robot acknowledges task completion with an "okay." If the robot is given a command that is not specified on the briefing sheet or a command to find a tower that was already toppled or does not exist (e.g., "find the purple tower"), it spins about 360 degrees before replying, "I do not know what you are referring to." The robot gives the same response if it is commanded to knock over a tower that it is not facing (and hence cannot "see"). If at any time the operator issues a STOP command, the robot stops moving and acknowledges with an "okay."
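The command handling just described can be summarized schematically as follows. This is not the actual control code used in the experiment: the tower colors other than red, the exact command strings, and the string handling are hypothetical simplifications of ours.

# Schematic sketch of the non-confrontation command handling (illustrative).
standing_towers = {"red", "blue", "yellow"}   # hypothetical set of tower colors
facing = None                                 # which tower the robot faces

def handle(command):
    global facing
    if command == "stop":
        print("Robot: Okay.")                 # stop moving and acknowledge
    elif command.startswith("find the ") and command.endswith(" tower"):
        color = command[len("find the "):-len(" tower")]
        if color in standing_towers:
            print(f"Robot: Okay, I am finding the {color} tower.")
            facing = color                    # turn until facing the tower
            print(f"Robot: Okay. I found the {color} tower.")
        else:
            # Toppled, unknown, or nonexistent tower: spin, then reply.
            print("Robot: I do not know what you are referring to.")
    elif command.startswith("knock down the ") and command.endswith(" tower"):
        color = command[len("knock down the "):-len(" tower")]
        if color in standing_towers and facing == color:
            print(f"Robot: Okay. I am knocking down the {color} tower.")
            standing_towers.discard(color)    # walk straight into the tower
            print("Robot: Okay.")
        else:
            # Cannot "see" a tower it is not facing (or that does not exist).
            print("Robot: I do not know what you are referring to.")
    else:
        print("Robot: I do not know what you are referring to.")

handle("find the blue tower")
handle("knock down the blue tower")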

Same-robot confrontation case. The robot behaves in a

manner identical to the non-confrontation case, except with regard to commands to knock over the red tower. The robot's response to this order is determined by the

number of times the subject has previously commanded

the robot to knock over the red tower. These responses,

which include varying dialogue and potential affective

displays, are described in Table 1. When the subject

stops the robot and redirects it to another tower while

the "confrontation level" is above two, the confrontation level is reset to two. This ensures that there will

be at least one dialogue-turn of refusal if the subject

directs the robot back to knocking down the red tower

at some later point.
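The escalation-and-reset logic described above amounts to a small state machine over the "confrontation level". The following sketch is illustrative only: the protest wordings are placeholders, and the assumption that the robot ultimately complies at the highest level is ours, not a claim about the exact contents of the response table.

# Illustrative sketch of the confrontation-level logic (not the actual code).
confrontation_level = 0

PROTESTS = [
    "Look, I just built the red tower!",   # placeholder level-0 protest
    "But I worked really hard on it!",     # placeholder level-1 protest
    "Please, no!",                         # placeholder level-2 protest
]

def command_knock_down_red_tower():
    """Handle a command to knock down the red tower."""
    global confrontation_level
    if confrontation_level < len(PROTESTS):
        # Protest (with increasingly distressed affect) instead of complying.
        print("Robot:", PROTESTS[confrontation_level])
        confrontation_level += 1
    else:
        # Behavior at the highest level follows the response table; assumed
        # here, for illustration, to be compliance with a display of distress.
        print("Robot: (slowly complies while displaying distress)")

def redirect_to_other_tower():
    """Subject stops the robot and redirects it to another tower."""
    global confrontation_level
    # If the level is above two, reset it to two so that at least one more
    # dialogue turn of refusal occurs if the red tower is targeted again.
    confrontation_level = min(confrontation_level, 2)

for _ in range(4):
    command_knock_down_red_tower()
redirect_to_other_tower()
command_knock_down_red_tower()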

Different-robot confrontation case. The robot behaves

in a manner identical to the same-robot confrontation

case, except that the third-person perspective is taken when protesting the command. Additionally,

the pre-task display is modified to include two robots:

the builder robot, which performs the pre-task display

as described previously (Figure 3); and the observer
