Thursday, 19 July 2012

Cognitive anti-patterns - more inputs

Don Norman's book “Things That Make Us Smart” includes Grudin’s Law: When those who benefit are not those who do the work, then the technology is likely to fail, or at least be subverted.
Amalberti's human error self-fulfilling prophecy: by regarding the human as a risk factor and delegating all safety-critical functions to technology as the presumed safety factor, the human is actually turned into a risk factor.
Gary Klein, Dave Snowden and Chew Lock Pin have listed 'useless advice' regarding anticipatory thinking. 'Useless advice' is pretty spot-on for anti-patterns. The useless advice is:
  • Gather more data.
  • Use information technology to help analyze the data.
  • Reduce judgment biases.
  • Encourage people to keep an open mind.
  • Appoint “devil’s advocates” to challenge thinking.
  • Encourage vigilance.
The 'devil's advocate' here refers to a specific challenging role, rather than an independent overview role. 'Encouraging vigilance' is useless in the sense that vigilance is no substitute for expertise; it is not an argument against mindfulness training.

Robert Hoffman provides some laws about Complex and Cognitive Systems (CACS). The laws are not quite patterns/anti-patterns, but look capable of being worked into that framework. Woods and Hollnagel have developed them into patterns for Joint Cognitive Systems. A number of the laws relate to 'integration work'. The following seem relevant:
The Penny Foolish Law: Any focus on short-term cost considerations always comes with a hefty price down the road, one that weighs much more heavily on the shoulders of the users than on the shoulders of project managers.
The Cognitive Vacuum Law: When working as a part of a CACS, people will perceive patterns and derive understandings and explanations, and these are not necessarily either veridical or faithful to the intentions of the designers. [bsj i.e. design intent needs to be explicit.]
Mr. Weasley’s Law: Humans should be supported in rapidly achieving a veridical and useful understanding of the “intent” and “stance” of the machines. Mr. Weasley states in the Harry Potter series, “Never trust anything that can think for itself if you can’t see where it keeps its brain.”
The Law of Stretched Systems: CACSs are always stretched to their limits of performance and adaptability. Interventions will always increase the tempo and intensity of activity.
Rasmussen’s Law: In cognitive work within a CACS, people do not conduct tasks; they engage in context-sensitive, knowledge-driven choice among action sequence alternatives. [bsj This links to Amalberti's 'ecological risk management'.]
Dilbert's Law: A human will not cooperate, or will not cooperate well, with another agent if it is assumed that the other agent is not competent.
Law of Coordinative Entropy: Coordination costs, continuously. The success of new technology depends on how the design affects the ability to manage the costs of coordinating activity and maintaining or repairing common ground.
Law of Systems as Surrogates: Technology reflects the stances, agendas, and goals of those who design and deploy the technology. Designs, in turn, reflect the models and assumptions of distant parties about the actual difficulties in real operations. For this reason, design intent is usually far removed from the actual conditions in which technology is used, leading to costly gaps between these models of work and the “real work.”
The Law of the Kludge: Work systems always require workarounds, with resultant kludges that attempt to bridge the gap between the original design objectives and current realities or to reconcile conflicting goals among workers.
The Law of Fluency: Well-adapted cognitive work occurs with a facility that belies the difficulty of resolving demands and balancing dilemmas. The adaptation process hides the factors and constraints that are being adapted to or around. Uncovering the constraints that fluent performance solves, and therefore seeing the limits of or threats to fluency, requires a contrast across perspectives.

Ned Hickling has challenged the universality of 'strong, silent automation is bad', i.e. he argues that Mr Weasley's Law does not apply all the time. Disagreeing with Ned is fine. Just one problem: it means you are wrong. A proper response will appear, but after some thoughts on 'autonomy'.
The answer is likely to make use of Grote's thinking on zones of no control, whereby it is recognized that there are areas of automation where the operator has no effective control (cf. Ironies of Automation). For these zones, the operator is not held accountable, and accountability is assigned to the design authority, the operating organization or other agencies as appropriate.

"It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so." -- Mark Twain
