Sunday 3 January 2016

Human aspects of automation - The 'Jokers'

I propose four 'Jokers' to be considered in the design and operation of automated/autonomous systems. These are not 'risks' in the sense normally managed, though there may be ways of managing them for people who have taken the red pill. The Jokers are:
  • Affect dilemma: Users WILL attribute a personality to your system and act on it, which may or may not match the behaviour of the system.
  • Risk compensation: Users WILL use systems installed for safety purposes to achieve commercial gain.
  • Automation bias: Users WILL trust the system when they shouldn't.
  • Moral buffering: Remoteness brings moral and ethical distance. Users WILL become morally disengaged.
The Jokers need to be addressed during design and operation. There are no simple means of 'mitigating' or 'treating' them. To a large extent, engineers have got away with minor, informal treatment of the (unrecognised) Jokers. This won't be possible with Robotics and Autonomous Systems.

Affect dilemma

Whether you intend it or not, your computer will be assigned a personality by its users (the Tamagotchi effect is one example). This doesn't just apply to social robots; nuisance alarms and other such 'technical' features feed into the personality users assign to the computer, and that personality will drive their interaction with it. This area seems well short of having a 'best practice' and may just need lots of monitoring, with corrective action where possible. Giving the interface persona human values sounds like a good start.

Risk compensation

Wikipedia has a good entry on risk compensation. Although it is a well-accepted phenomenon, I have yet to encounter its explicit treatment in design, operation, or regulation. I should be delighted to hear of its appearance in even a single safety case. 'Shared Space' street design stands out as a cultural oddity.
Risk compensation triggered by regulation is termed the Peltzman Effect.
[Note: Wilde's risk homeostasis is not being discussed here.]

Automation bias

"The automation's fine when it works" Margareta Lützhöft. Problems can arise when it doesn't. The reliability of modern automation means that it makes sense for the user to rely on it without checking. A summary from a paper by Missy Cumming:
"Known as automation bias, humans have a tendency to disregard or not search for contradictory information in light of a computer-generated solution that is accepted as correct (Mosier & Skitka, 1996; Parasuraman & Riley, 1997).  Automation bias is particularly problematic when intelligent decision support is needed in large problem spaces with time pressure like what is needed in command and control domains such as emergency path planning and resource allocation (Cummings, 2004). Moreover, automated decision aids designed to reduce human error can actually cause new errors in the operation of a system.  In an experiment in which subjects were required to both monitor low fidelity gauges and participate in a tracking task, 39 out of 40 subjects committed errors of commission, i.e. these subjects almost always followed incorrect automated directives or recommendations, despite the fact that contraindications existed and verification was possible (Skitka et al., 1999). "
Kathleen Mosier has shown that automation bias is surprisingly resistant to additional operators or training, and that automation can lead to new, different types of error. AFAIK, automation bias is not addressed in Human Reliability Analysis, nor explicitly addressed in design or operation. It is, however, recognised as a concern in reports by the CAA and Eurocontrol.
The blame-the-human language of 'over-reliance' is unwelcome but unsurprising. It raises the question of what optimal reliance would be: "The reason that current research does not unequivocally support the presence of complacency is that none of the research known has rigorously defined optimal behaviour in supervisory monitoring" (Moray & Inagaki, 2000).
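As an illustration only (a toy calculation with invented numbers, not anything taken from Moray & Inagaki), a back-of-envelope expected-cost comparison in Python shows why 'optimal reliance' is slippery: once automation reliability gets high enough, never checking starts to look like the rational policy, which is exactly the behaviour that later gets labelled over-reliance.

    # Toy model of 'optimal reliance' on an automated aid.
    # All numbers are invented for illustration; none come from a cited study.

    def expected_cost(reliability, check_cost, error_cost, catch_rate=1.0):
        """Expected cost per task of 'always check' vs 'never check'.
        reliability: probability the automation is correct on a given task
        check_cost:  cost (say, seconds of operator time) of one manual check
        error_cost:  cost of acting on an undetected automation error
        catch_rate:  probability that a manual check catches the error
        """
        p_error = 1.0 - reliability
        always_check = check_cost + p_error * (1.0 - catch_rate) * error_cost
        never_check = p_error * error_cost
        return always_check, never_check

    for reliability in (0.90, 0.99, 0.999):
        check, no_check = expected_cost(reliability, check_cost=30, error_cost=3000)
        policy = "always check" if check < no_check else "never check"
        print(f"reliability={reliability}: check={check:.0f}, no check={no_check:.0f} -> {policy}")

On these made-up figures the crossover sits at 99% reliability; beyond it, the 'complacent' operator is, on this crude measure, behaving optimally.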
Measures of trust, including trustworthiness, trustedness, and trust miscalibration, may need to be part of the answer. The Yagoda trust scale is of potential use in this context.
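As a sketch of what a miscalibration measure could look like (my own illustration, not part of the Yagoda scale or any cited work), one crude approach is to compare a user's stated trust with the automation's demonstrated reliability and flag the gap:

    # Minimal sketch of a trust-miscalibration measure (illustrative only).
    # It assumes trust is elicited as a 0-1 rating and that reliability can be
    # estimated from logged outcomes; neither assumption comes from the sources above.

    def miscalibration(stated_trust, outcomes):
        """Positive result = over-trust (automation bias territory), negative = under-trust.
        stated_trust: user's trust rating, rescaled to 0..1
        outcomes:     booleans, True where the automation turned out to be correct
        """
        reliability = sum(outcomes) / len(outcomes)
        return stated_trust - reliability

    # Example: the user rates trust at 0.95, but the logged hit rate is 0.80.
    log = [True] * 8 + [False] * 2
    print(round(miscalibration(0.95, log), 2))   # 0.15 -> over-trust

The hard part, of course, is everything the sketch leaves out: eliciting the rating, deciding what counts as an outcome, and deciding how big a gap matters.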
It could reasonably be argued that automation bias is a consequence of the affect dilemma. My grounds for keeping them as two separate Jokers are that, even when not independent, they are separate concerns from a design or operational point of view.

Moral buffering

Dumping your boyfriend by text message. Letting people go by email. "Distant punishment" in modern warfare. Moral buffering. The moral buffer is described by Missy Cummings:
"The concept of moral buffering is related to but not the same as Bandura's (2002) idea of moral disengagement in which people disengage in moral self-censure in order to engage in reprehensible conduct. A moral buffer adds an additional layer of ambiguity and possible diminishment of accountability and responsibility through an artifact or process, such as a computer interface or automated recommendations. Moral buffers can be the conduits for moral disengagement, which is precisely the reason for the need to examine ethical issues in interface design."
People can exploit moral buffering to generate the 'Agency Problem' as set out by Nassim Nicholas Taleb:
"Solution to the AGENCY PROBLEM: Never get on a plane unless the person piloting it is also on board.
Generalization: no-one should be allowed to declare war, make a prediction, express an opinion, publish an academic paper, manage a firm, treat a patient, etc. without having something to lose (or win) from the outcome
."
Taleb links the agency problem to 'skin in the game'.
A classic demonstration of moral buffering is the 'Button Defense' in 'How To Murder Your Wife' - "Edna will never know".


The Jokers are due to appear in a paper in the Safety Critical Systems Club Newsletter, which will give them a proper citation. The citation will be added here when the paper is published this month.
There is some overlap between the Jokers and BS 8611; that will be the subject of a future post.