Sunday 2 October 2016

Ergonomics of the Internet of Things - 1


BLUF: The Internet of Things (IoT) will have lots of small, low-powered devices with complex functionality. This is a challenging context for good ergonomics. By and large, the IoT community won't even try; they'll take the same approach to usability as they have to security. A low-powered device that cannot handle file ordering usably is a perhaps obscure example, but a good one. Message to the engineers: Just don't do it.

My Sony Walkman mp3 player has given good service over the years, but is not quite functioning correctly any more, and my music collection has grown well beyond its capacity. So the prospect of a small cheap mp3 player that takes a large capacity MicroSD card was too tempting. Bought.

Unusable: the tracks would not play in the right order when copied over from my PC.

MP3 file tagging is quite hard to get right, and it matters, especially for classical music (the bulk of my files).
What follows is a bit of background and a summary of what I had to do to fix it. It is the result of a good bit of digging around and trial and error. Even if decent instructions came with the device, it is too big a demand to make of a user who just wants to listen to music. The engineers who thought that they had made an acceptable compromise in the interests of a low-power device were wrong.

In FAT32, filenames are stored in directory entries, along with a creation date/time, in the order they were written to the card. The mp3 player simply reads the directory and plays the files in that order, which makes it difficult when you want to view or play files in alphabetical or numerical order. Windows applications can sort the files on name and play them in sequence, but small devices such as mp3 players are more limited, apparently because of their low-power constraints.
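To see the problem, here is a minimal sketch (my own illustration, not anything supplied with the player; the drive letter and folder name are made up) that compares the order Windows enumerates a folder on the card - which on FAT volumes usually reflects the on-disk directory order, i.e. the order the player will use - with the alphabetical order you actually want:

    import os

    # Hypothetical album folder on the MicroSD card - adjust to suit.
    ALBUM = r"E:\Music\Beethoven - Symphonies"

    # On a FAT32 volume, Windows usually enumerates directory entries in
    # on-disk order, which is the order a simple mp3 player plays them in.
    with os.scandir(ALBUM) as entries:
        player_order = [e.name for e in entries if e.is_file()]

    wanted_order = sorted(player_order, key=str.lower)

    print("Player order:", player_order)
    print("Wanted order:", wanted_order)
    if player_order != wanted_order:
        print("These tracks will play out of sequence on the device.")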

I used DriveSort, after trying some other applications, to sort the files on the MicroSD card into alphabetical and numerical order. I see other people have had problems with DriveSort, but it is free and it worked for me. I used a mixture of Long Name Sort and Short Name Sort. I had to do it folder by folder, which was pretty tedious. There is a subdirectory function but I couldn't get it to work.
My MicroSD card came in exFAT format, so I had to format it to FAT32 before I could use DriveSort. Windows wouldn't do it, so I used guiformat (free but Paypal donations welcome).
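For anyone comfortable with a few lines of Python, a scripted alternative to sorting folder by folder is simply to re-copy the music onto a freshly formatted card in sorted order, so that the directory entries get written in the sequence you want. This is only a sketch under my own assumptions (the paths are made up, and DriveSort is still the surer route if the card already has files on it):

    import shutil
    from pathlib import Path

    # Hypothetical paths: master copy on the PC, empty destination on the card.
    SOURCE = Path(r"C:\Users\me\Music")
    DEST = Path(r"E:\Music")

    def copy_sorted(src: Path, dst: Path) -> None:
        # Copy files in case-insensitive name order so the FAT directory
        # entries are created - and hence played back - in that order.
        dst.mkdir(parents=True, exist_ok=True)
        for entry in sorted(src.iterdir(), key=lambda p: p.name.lower()):
            if entry.is_dir():
                copy_sorted(entry, dst / entry.name)
            else:
                shutil.copy2(entry, dst / entry.name)

    copy_sorted(SOURCE, DEST)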


After the event for me, I hope, but this looks like a useful resource on file sorting for mp3 players.

Thursday 22 September 2016

What autonomous ships are really about

It is easy for technical people (and others) to take technical ideas at face value. These days, this is often a serious mistake. In his brilliant monograph 'The Epic Struggle for the Internet of Things', reviewed here, Bruce Sterling puts us right. Google spent billions buying Nest, not for its thermostat technology, but to stake a claim in home automation. A technical device can be used to support a narrative for the future. So it is with autonomous ships.
First, a small paleofuture exercise. Go back five or six years. What was 'the future of shipping' then? Perhaps how big container ships might get, or what alternative fuels we might be using. Look at the ships in this piece by Wärtsilä from 2010 about shipping in 2030. Those ships have bridges and people. No mention of autonomous ships by anybody, probably.
Now, to the present. Go to any shipping event and ask about 'the future of shipping'. Autonomous ships will be mentioned within the first three sentences, and Rolls-Royce will be named. Rolls-Royce has put itself at the centre of the dominant narrative for the whole industry. This positioning is worth a fortune, and RR has done it at almost zero cost. A contribution to an EU research project or two, some press releases, some snazzy graphics from Ålesund - pennies. The autonomous ship has been the device used to stake that claim. Please don't mistake it for a technical exercise.

Friday 17 June 2016

Pre-empting Hindsight Bias in Automated Warfare

"His Majesty made you a major because he believed you would know when not to obey his orders." Prince Frederick Karl (cited by Von Moltke)

The killer robot community is debating concepts such as Meaningful Human Control (MHC) and 'appropriate human judgment' with a view to their operationalisation in practical use. For the purpose of this post, the various terms are bundled into the abbreviation MHC.

After things have gone wrong, the challenge for incident analysis is to avoid 'hindsight bias'. To learn from an incident, it is necessary to find out why it made sense at the time: "to reconstruct the evolving mindset", to quote Sidney Dekker. There is a long history of the wrong people getting the blame for an incident - usually some poor soul at the 'sharp end' (Woods).

In a world of highly automated systems, the distinction between 'human' and 'machine' becomes blurred. In most systems, there are a number of human stakeholders to consider, and a through-life perspective is frequently useful.

In a combat situation, 'control' is an aspiration rather than a continuing reality, and losers will have lost 'control' before the battle - e.g. after the opponent has got inside their OODA loop. What is a realistic baseline for MHC in combat? We have to be able to determine this without hindsight bias.
How would an investigator determine the presence or absence of MHC in the reconstruction of an incident? It would be virtue signalling of the lowest order to wait until after an incident and then decide how to determine the presence or absence of MHC.

One aspect of such determination is to de-couple the decision making from outcomes. The classic paper on this topic is '“Either a medal or a corporal”: The effects of success and failure on the evaluation of decision making and decision makers' by Raanan Lipshitz.
There is, of course, a sizeable literature on decision quality e.g. Keren and de Bruin.


The game of 'consequences' developed here is intended to provide food for thought, and an aid to discussion on what an investigator would need to know to make a determination of MHC. It comprises short sections of dialogue. The allocation of function to human or machine, and the outcomes, are open to chance variation.
The information required to determine MHC might help in system specification, including the specifics of a 'human window'. It is not always the case that automation provides such a window - especially in the case of Machine Learning. So, how do we determine MHC in a combat situation? Try some of the exercises and see how much you would need to know. If the exercises here don't help make a determination - what would?

Please let me know in comments below, or on Twitter @BrianSJ3

As an aside, there are proven approaches to take in system development that can provide assurance of decision quality. This is not entirely a new challenge to the world of Human-System Integration. "What assurances are there that weapon systems developed can be operated and maintained by the people who must use them?"
[Guidelines for Assessing Whether Human Factors Were Considered in the Weapon Systems Acquisition Process FPCD-82-5, US GAO, 1981]

Sunday 3 January 2016

Human aspects of automation - The 'Jokers'

I propose four 'Jokers' to be considered in the design and operation of automated / autonomous systems. These are not 'risks' as normally managed, though there may be ways of managing them for people who have taken the red pill. The Jokers are:
  • Affect dilemma: Users WILL attribute a personality to your system and act on it, which may or may not match the behaviour of the system.
  • Risk compensation: Users WILL use systems installed for safety purposes to achieve commercial gain.
  • Automation bias: Users WILL trust the system when they shouldn't.
  • Moral buffering: Remoteness brings moral and ethical distance. Users WILL become morally disengaged.
The Jokers need to be addressed during design and operation. There are no simple means of 'mitigating' or 'treating' them. To a large extent, engineers have got away with minor informal treatment of the (unrecognised) Jokers. This won't be possible with Robotics and Autonomous Systems.

Affect dilemma

Whether you intend it or not, your computer will be assigned a personality by its users, e.g. the Tamagotchi effect. This doesn't just apply to social robots; nuisance alarms and other such 'technical' features will be used by the users in assigning a personality to the computer, and this will drive their interaction with it. This seems to be an area well short of having 'best practice', and may just need lots of monitoring, with corrective action where possible. Giving the interface personality human values sounds like a good start.

Risk compensation

Wikipedia has a good entry on risk compensation. Despite being a well-accepted phenomenon, I have yet to encounter its explicit treatment in design, operation, or regulation. I should be delighted to hear of its appearance in a single safety case. 'Shared Space' stands out as a cultural oddity.
Risk compensation triggered by regulation is termed the Peltzman Effect.
[Note: Wilde's risk homeostasis is not being discussed here.]

Automation bias

"The automation's fine when it works" Margareta Lützhöft. Problems can arise when it doesn't. The reliability of modern automation means that it makes sense for the user to rely on it without checking. A summary from a paper by Missy Cumming:
"Known as automation bias, humans have a tendency to disregard or not search for contradictory information in light of a computer-generated solution that is accepted as correct (Mosier & Skitka, 1996; Parasuraman & Riley, 1997).  Automation bias is particularly problematic when intelligent decision support is needed in large problem spaces with time pressure like what is needed in command and control domains such as emergency path planning and resource allocation (Cummings, 2004). Moreover, automated decision aids designed to reduce human error can actually cause new errors in the operation of a system.  In an experiment in which subjects were required to both monitor low fidelity gauges and participate in a tracking task, 39 out of 40 subjects committed errors of commission, i.e. these subjects almost always followed incorrect automated directives or recommendations, despite the fact that contraindications existed and verification was possible (Skitka et al., 1999). "
Kathleen Mosier has shown that automation bias is surprisingly resistant to extra users or training, and that automation can lead to new, different types of error. AFAIK, automation bias is not addressed in Human Reliability Analysis, or explicitly addressed in design or operation. It is recognised as a concern in reports by the CAA and Eurocontrol.
The blame-the-human language of over-reliance is unwelcome but unsurprising. It raises the question of what would be optimal reliance. “The reason that current research does not unequivocally support the presence of complacency is that none of the research known has rigorously defined optimal behaviour in supervisory monitoring” (Moray & Inagaki, 2000).
Measures of trust, including trustworthiness, trustedness, and trust miscalibration, may need to be part of the answer. The Yagoda trust scale is of potential use in this context.
It could reasonably be argued that automation bias is a consequence of the affect dilemma. My grounds for having two separate Jokers are that, even when not independent, they are separate concerns from a design or operational point of view.

Moral buffering

Dumping your boyfriend by text message. Letting people go by email. "Distant punishment" in modern warfare. Moral buffering. The moral buffer is described by Missy Cummings.
"The concept of moral buffering is related to but not the same as Bandura's (2002) idea of moral disengagement in which people disengage in moral self-censure in order to engage in reprehensible conduct. A moral buffer adds an additional layer of ambiguity and possible diminishment of accountability and responsibility through an artifact or process, such as a computer interface or automated recommendations. Moral buffers can be the conduits for moral disengagement, which is precisely the reason for the need to examine ethical issues in interface design."
People can exploit moral buffering to generate the 'Agency Problem' as set out by Nassim Nicholas Taleb:
"Solution to the AGENCY PROBLEM: Never get on a plane unless the person piloting it is also on board.
Generalization: no-one should be allowed to declare war, make a prediction, express an opinion, publish an academic paper, manage a firm, treat a patient, etc. without having something to lose (or win) from the outcome
."
Taleb links the agency problem to 'skin in the game'.
A classic demonstration of moral buffering is the 'Button Defense' in 'How To Murder Your Wife' - "Edna will never know".


The Jokers are due to appear in a paper in the Safety Critical Systems Club Newsletter, which will give them a proper citation. To be added when published this month.
There is some overlap between the Jokers and BS8611. To be the subject of a future post.