Kathy Abbott wrote: "LESSON 8: Be cautious about referring to automated systems as another crewmember. We hear talk about “pilot’s associate,” “electronic copilots” and other such phrases. While automated systems are becoming increasingly capable, they are not humans. When we attribute human characteristics to automated systems, there is some risk of creating false expectations about strengths and limitations, and encouraging reliance that leads to operational vulnerabilities (see Lesson 1)."
The topic of personality for automation is one of four I have termed 'jokers' - issues
where there is no 'right' design solution, and where the badness of the solution needs to be managed through life. (The others are risk compensation, automation bias, and moral buffering).
Jaron Lanier called the issue of anthropomorphism “the abortion question
of the computer world”—a debate that forced people to take sides regarding “what a person is and is not.” In an article
he said "The thing that we have to notice though is that, because of
the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've
created this entity that they call different things at different times—Deep Blue and so forth. The consumer tech companies, we tend to put
a face in front of them, like a Cortana or a Siri. The problem with that is that these are not freestanding services.
In other words, if you go back to some of the thought experiments from
philosophical debates about AI from the old days, there are lots of experiments, like if you have some black box that can do something—it can
understand language—why wouldn't you call that a person? There are many, many variations on these kinds of thought experiments, starting with the
Turing test, of course, through Mary the color scientist, and a zillion other ones that have come up."
Matthias Scheutz notes
"Humans are deeply affective beings that expect other human-like
agents to be sensitive to and express their own affect. Hence, complex artificial agents that are not capable of affective communication will
inevitably cause humans harm, which suggests that affective artificial agents should be developed. Yet, affective artificial agents with genuine
affect will then themselves have the potential for suffering, which leads to the “Affect Dilemma for
Artificial Agents”, and more generally, artificial systems." In addition
to the Affect Dilemma, Scheutz notes Emotional Dependence: "emotional
dependence on social robots is different from other human dependencies on technology (e.g., different both in kind and quality from depending on
one’s cell phone, wrist watch, or PDA).... It is important in this context to note how little is required on the robotic side to cause people to form
relationships with robots."
Clifford Nass proposed the Computers-Are-Social-Actors
(CASA) paradigm: "people’s responses to computers are fundamentally
“social”—that is, people apply social rules, norms, and expectations core to interpersonal relationships when they interact with computers. In light
of the CASA paradigm, identifying the conditions that foster or undermine trust in the context of interpersonal communication and relationships may
help us better understand the trust dynamics in human-computer communication. This chapter discusses experimental studies grounded in the
CASA paradigm that demonstrate how (1) perceived people-computer similarity in personality, (2) manifestation of caring behaviors in
computers, and (3) consistency in human/non-human representations of computers affect the extent to which people perceive computers as
The philosopher Jürgen Habermas has proposed that action can be
considered from a number of viewpoints. To simplify the description given in McCarthy (1984), purposive-rational action comprises instrumental
action and strategic action. "Instrumental action is governed by
technical rules based on empirical knowledge. In every case they imply empirical predictions about observable events, physical or
social." Strategic action is part-technical, part-social: it refers to the decision-making procedure, operates at the level of decision theory (e.g. the choice between maximin and maximax criteria), and needs supplementing with values and maxims. Communicative action "is governed by
consensual norms, which define reciprocal expectations about behaviour and which must be understood and recognized by at least two acting
subjects. Social norms are enforced by sanctions....Violation of a rule has a different consequence according to the type. Incompetent
behaviour which violates valid technical rules or strategies, is condemned per se to failure through lack of success; the 'punishment' is built, so
to speak, into its rebuff by reality. Deviant behaviour, which violates consensual norms, provokes sanctions that are connected with the
rules only externally, that is by convention. Learned rules of purposive-rational action supply us with skills, internalized norms with
personality structures. Skills put us into a position to solve problems, motivations allow us to follow norms."
The figure below illustrates the different types of action in relation to a temperature limit in an aircraft jet engine, as knowledge moves from design information, through the development of operating procedures, to operation.
Physical behaviour (say
blade root deflection as a function of temperature) constitutes instrumental action and may be gathered from a number of rigs and
models. The weighting to be given to the various sources of data, the error bands to be considered and the type of criteria to use
constitute strategic action. The decision by the design community to set a limit (above which warranty or disciplinary considerations might be
applied) is communicative action. The operator (currently) has some access to instrumental action, and has strategic and communicative actions
that relate to operation rather than design. In terms of providing operator support, instrumental action can be treated computationally,
strategic action can be addressed by decision support tools, but communicative action is not tractable. The potential availability of
all information is bound to challenge norms that do not align with purposive-rational action. The need for specific operating limits to
support particular circumstances will challenge the treatment of generalised strategic action. The enhanced communication between
designers and operators is likely to produce greater clarity in distinguishing what constitutes an appropriate physical limit for a
particular circumstance, and what constitutes a violation.
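The split between instrumental and strategic action in the engine example can be sketched computationally. The sketch below is hypothetical: the data sources, numbers, and error bands are invented for illustration. The per-source empirical estimates stand in for instrumental action; the choice of criterion (maximin versus maximax) for combining them stands in for strategic action; communicative action, deliberately, never appears in the code.

```python
# Hypothetical sketch: instrumental vs strategic action for a turbine
# temperature limit. All numbers are invented for illustration.

# Instrumental action: empirical estimates of the temperature (deg C) at
# which blade-root deflection exceeds tolerance, one per data source,
# each with an error band.
sources = {
    "rig_A":     {"estimate": 1180.0, "error_band": 25.0},
    "rig_B":     {"estimate": 1205.0, "error_band": 40.0},
    "cfd_model": {"estimate": 1220.0, "error_band": 60.0},
}

def worst_case(src):
    """Pessimistic limit for one source: estimate minus its error band."""
    return src["estimate"] - src["error_band"]

def best_case(src):
    """Optimistic limit for one source: estimate plus its error band."""
    return src["estimate"] + src["error_band"]

# Strategic action: choosing how the sources are weighed against each other.
# A maximin-style criterion guards against the worst case, so it takes the
# most pessimistic limit any source allows (cautious).
maximin_limit = min(worst_case(s) for s in sources.values())

# A maximax-style criterion backs the best case, so it takes the most
# optimistic limit any source allows (aggressive).
maximax_limit = max(best_case(s) for s in sources.values())

print(f"maximin (cautious) limit:   {maximin_limit:.0f} C")
print(f"maximax (aggressive) limit: {maximax_limit:.0f} C")

# Communicative action -- deciding which criterion the design community
# will actually stand behind, and what counts as a violation of the
# resulting limit -- is not computed here: it is a negotiated norm, not
# an output of the data.
```

The point of the sketch is the gap it leaves: everything above the final comment is tractable to computation or decision support, while the limit that warranty or disciplinary considerations attach to remains a matter of consensual norms.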
Automating the decision making of the design community (say by 'big
data') looks 'challenging' for all but instrumental action.
1. Users are going to assign human qualities to automation, whether the
designers plan for it or not. Kathy Abbott's caution is futile. It is going to happen so far as the user is concerned.
2. It is probably better, therefore, to consider the automation's
personality during design, to minimise the 'false expectations' that Kathy Abbott identifies.
3. Designing-in a personality isn't going to be easy. My guess is that the 'smarter' the
system, the harder (and the more important) it becomes. Enjoy the current state of the art with a Dalek.