Thursday 19 July 2012

Is 'autonomy' a helpful aim for 'unmanned' platforms?

Building 'autonomous' platforms sounds exciting as an engineering challenge. This post suggests that the concept may be sufficiently flawed that it takes well-intentioned technical effort down a blind alley, and that a somewhat less-exciting conceptual framework may end up supporting greater technical advance.

Chapter 3 of Vincenti's classic book "What Engineers Know and How They Know It" is on Flying-Quality Specifications. The concepts of stability and control had to be re-thought. Essentially, stability had been seen as a property of the aircraft on its own. This concept had to change in order to provide the pilot with adequate control. Flying qualities emerged as a concept that related to the aircraft-pilot system. I would give a better description, but someone hasn't returned my copy of the book. Changing the underlying concepts took a good decade of experimentation and pilot-designer interaction. My concern is that 'autonomy' as currently defined will hold back progress in the way that 'stability' did in the 1920s. The 2010 version of CAP722 (Unmanned Aircraft System Operations in UK Airspace – Guidance) is used here as the reference for current thinking on 'autonomy'.

The advantage we have for 'autonomy' over stability in the 1920s is that there is a good body of work on human-automation interaction, supervisory control etc., going back sixty years, that can be used. There is well-established work that can elaborate human-automation interaction beyond a simple 'autonomous' label, or a 'semi-autonomous' label (reminiscent of 'slightly pregnant' as a concept). For example,
  • Tom Sheridan defined five generic supervisory functions: planning, teaching (or programming the computer), monitoring, intervening and learning. These functions operate within three nested control loops (see the sketch after this list).
  • The Bonner-Taylor PACT framework for pilot authorisation and control of tasks can be used to describe operation in various modes.
  • Work by John Reising, Terry Emerson and others developed design principles and approaches to Human-Electronic teamwork, using, inter alia, Asimov's Laws of Robotics.
  • Recent work by Anderson et al. has developed a constraint-based approach to UGV semi-autonomous control.
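
As a rough illustration of what 'elaborating beyond a single label' can look like, here is a minimal sketch in Python. It is my own illustration, not code from Sheridan or the PACT papers; the SupervisoryFunction, Agent and Task names, and the example allocation, are assumptions. The point is simply that automation is better described per task and per supervisory function than with one 'autonomous'/'semi-autonomous' tag.

# Minimal sketch (my own, not from the cited work): describe automation per task
# and per supervisory function rather than with a single 'autonomous' label.
from dataclasses import dataclass
from enum import Enum


class SupervisoryFunction(Enum):
    """Sheridan's five generic supervisory functions."""
    PLANNING = "planning"
    TEACHING = "teaching"          # programming the computer
    MONITORING = "monitoring"
    INTERVENING = "intervening"
    LEARNING = "learning"


class Agent(Enum):
    HUMAN = "human operator"
    AUTOMATION = "automation"
    SHARED = "shared"


@dataclass
class Task:
    name: str
    allocation: dict  # SupervisoryFunction -> Agent

    def label(self) -> str:
        """A per-function description is richer than 'autonomous'/'semi-autonomous'."""
        return ", ".join(f"{f.value}: {a.value}" for f, a in self.allocation.items())


# Example: route following is largely automated, but the human still plans and intervenes.
route_following = Task(
    name="route following",
    allocation={
        SupervisoryFunction.PLANNING: Agent.HUMAN,
        SupervisoryFunction.TEACHING: Agent.HUMAN,
        SupervisoryFunction.MONITORING: Agent.AUTOMATION,
        SupervisoryFunction.INTERVENING: Agent.HUMAN,
        SupervisoryFunction.LEARNING: Agent.SHARED,
    },
)
print(route_following.label())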
CAP722 (3.6.1) requires an overseeing autonomous management system. This has echoes of the 'executive' function at the heart of the Pilot's Associate programme. It is my recollection that the name and function of the executive kept changing, and perhaps proved too difficult to implement. A more feasible solution would be a number of agents assisting the human operator. It is not obvious why CAA guidance precludes such an option.
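
A minimal sketch of that agent-based alternative follows, assuming nothing beyond the idea that several assistant agents offer advice while the authority to decide and act stays with the human operator. It is not an implementation of CAP722 or of the Pilot's Associate executive; the AssistantAgent, Advice and operator_console names, and the example agents, are my own.

# Minimal sketch: assistant agents propose, the human operator disposes.
# No overseeing executive acts on the operator's behalf.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Advice:
    source: str
    recommendation: str


class AssistantAgent:
    def __init__(self, name: str, advise: Callable[[dict], str]):
        self.name = name
        self._advise = advise

    def propose(self, situation: dict) -> Advice:
        return Advice(source=self.name, recommendation=self._advise(situation))


def operator_console(agents: List[AssistantAgent], situation: dict) -> None:
    """Collect advice from every agent; the decision and the action remain human."""
    for advice in (a.propose(situation) for a in agents):
        print(f"[{advice.source}] suggests: {advice.recommendation}")


agents = [
    AssistantAgent("route assistant", lambda s: f"re-plan around weather cell at {s['weather']}"),
    AssistantAgent("fuel assistant", lambda s: f"divert if endurance drops below {s['reserve']} min"),
]
operator_console(agents, {"weather": "N54 W002", "reserve": 30})

The design point is that removing the single overseeing executive keeps the locus of control, and of responsibility, with the human operator.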

CAP722 (3.5.1) states: 'The autonomy concept encompasses systems ranging in capability from those that can operate without human control or direct oversight (“fully autonomous”), through “semi-autonomous” systems that are subordinate to a certain level of human authority, to systems that simply provide timely advice and leave the human to make all the decisions and execute the appropriate actions'. 'Full Autonomy' is thus a self-defining zone of no control (Grote). As a pre-requisite for such a zone, the transfer of responsibility from the operator at the sharp end to the relevant authority (e.g. the Design Authority, the Type Certificate Holder, the IPT Leader) needs to be clearly signalled to all concerned. The 2010 version of CAP 722 seems to leave responsibility at the sharp end, inviting the inevitable accusations of 'operator error' when things go wrong.
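
To make the point concrete, here is a minimal sketch assuming only the three CAP722 capability bands quoted above. The Mode, RESPONSIBLE_PARTY and announce_mode names are my own, and the responsibility mapping is illustrative, drawn from the parties named in this post, not from CAP722 itself.

# Minimal sketch: make the transfer of responsibility explicit for each mode,
# so a 'fully autonomous' zone of no control does not silently leave
# accountability at the sharp end.
from enum import Enum


class Mode(Enum):
    ADVISORY = "advisory only"              # human makes all decisions and acts
    SEMI_AUTONOMOUS = "semi-autonomous"     # subordinate to a level of human authority
    FULLY_AUTONOMOUS = "fully autonomous"   # no human control or direct oversight


# Illustrative mapping, not a ruling; the parties are examples from this post.
RESPONSIBLE_PARTY = {
    Mode.ADVISORY: "operator at the sharp end",
    Mode.SEMI_AUTONOMOUS: "operator, within the authority delegated to the system",
    Mode.FULLY_AUTONOMOUS: "design authority / type certificate holder / IPT leader",
}


def announce_mode(mode: Mode) -> str:
    """Signal clearly, to all concerned, who carries responsibility in this mode."""
    return f"Mode: {mode.value}; responsibility rests with the {RESPONSIBLE_PARTY[mode]}."


for mode in Mode:
    print(announce_mode(mode))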

I leave the last words to Adm. Rickover:
"Responsibility is a unique concept: It can only reside and inhere in a single individual.  You may share it with others but your portion is not diminished.  You may delegate it but it is still with you.  Even if you do not recognise it or admit its presence, you cannot escape it.  If responsibility is rightfully yours, no evasion, or ignorance, or passing the blame can shift the burden to someone else.  Unless you can point your finger at the man responsible when something goes wrong then you never had anyone really responsible."

Update: Project ORCHID seems to have the right approach, talking about the degree of autonomy required for tasks and providing digital assistants. Also, please note the agent-based approach.

Update: It's nice to be ahead of Dangerroom. "The Pentagon doesn't trust its own robots". Problems of autonomy!

Cognitive anti-patterns - more inputs

Don Norman's book “Things that make us smart” has Grudin’s Law: When those who benefit are not those who do the work, then the technology is likely to fail, or at least be subverted.
Amalberti's human error self-fulfilling prophecy: by regarding the human as a risk factor and delegating all safety-critical functions to technology as the presumed safety factor, the human is actually turned into a risk factor.
Gary Klein, Dave Snowden and Chew Lock Pin have listed 'useless advice' regarding anticipatory thinking. 'Useless advice' is pretty spot-on for anti-patterns. The useless advice is:
  • Gather more data.
  • Use information technology to help analyze the data.
  • Reduce judgment biases.
  • Encourage people to keep an open mind.
  • Appoint “devil’s advocates” to challenge thinking.
  • Encourage vigilance.
The 'devil's advocate' refers to a specific challenging role, rather than an independent overview role. 'Encouraging vigilance' is about vigilance not being a substitute for expertise, as opposed to mindfulness training.

Robert Hoffman provides some laws about Complex and Cognitive Systems (CACS). The laws are not quite patterns/anti-patterns, but look capable of being worked into that framework. Woods and Hollnagel have developed them into patterns for Joint Cognitive Systems. A number of the laws relate to 'integration work'. The following seem relevant:
The Penny Foolish Law: Any focus on short-term cost considerations always comes with a hefty price down the road that weighs much more heavily on the shoulders of the users than on the shoulders of project managers.
The Cognitive Vacuum Law: When working as a part of a CACS, people will perceive patterns and derive understandings and explanations, and these are not necessarily either veridical or faithful to the intentions of the designers. [bsj i.e. design intent needs to be explicit.]
Mr. Weasley’s Law: Humans should be supported in rapidly achieving a veridical and useful understanding of the “intent” and “stance” of the machines. Mr. Weasley states in the Harry Potter series, “Never trust anything that can think for itself if you can’t see where it keeps its brain.”
The Law of Stretched Systems: CACSs are always stretched to their limits of performance and adaptability. Interventions will always increase the tempo and intensity of activity.
Rasmussen’s Law: In cognitive work within a CACS, people do not conduct tasks, they engage in context-sensitive, knowledge-driven choice among action sequence alternatives. [bsj This links to Amalberti's 'ecological risk management'.]
Dilbert's Law: A human will not cooperate, or will not cooperate well, with another agent if it is assumed that the other agent is not competent.
Law of Coordinative Entropy: Coordination costs, continuously. The success of new technology depends on how the design affects the ability to manage the costs of coordinating activity and maintaining or repairing common ground.
Law of Systems as Surrogates: Technology reflects the stances, agendas, and goals of those who design and deploy the technology. Designs, in turn, reflect the models and assumptions of distant parties about the actual difficulties in real operations. For this reason, design intent is usually far removed from the actual conditions in which technology is used, leading to costly gaps between these models of work and the “real work.”
The Law of the Kludge: Work systems always require workarounds, with resultant kludges that attempt to bridge the gap between the original design objectives and current realities or to reconcile conflicting goals among workers.
The Law of Fluency: Well-adapted cognitive work occurs with a facility that belies the difficulty of resolving demands and balancing dilemmas. The adaptation process hides the factors and constraints that are being adapted to or around. Uncovering the constraints that fluent performance solves, and therefore seeing the limits of or threats to fluency, requires a contrast across perspectives.

Ned Hickling has challenged the universality of 'strong, silent automation is bad', i.e. Mr Weasley's Law does not apply all the time. Disagreeing with Ned is fine. Just one problem. It means you are wrong. A proper response will appear, but after some thoughts on 'autonomy'.
The answer is likely to make use of Grote's thinking on zones of no control, whereby it is recognized that there are areas of automation where the operator has no effective control (cf. Ironies of Automation). For these zones, the operator is not held accountable, and accountability is assigned to the design authority, the operating organization or other agencies as appropriate.

"It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so." -- Mark Twain

Friday 13 July 2012

Internet of Things - Use Cases (not)

Some use cases for consideration by the Internet of Things (IoT) community

Monday 9 July 2012

Managerialism

Evolutionary managerialism - our current situation as a development of past bad habits

The Wikipedia entry for managerialism is pretty good. It cites a definition by Robert R. Locke.
"What occurs when a special group, called management, ensconces itself systemically in an organization and deprives owners and employees of their decision-making power (including the distribution of emolument), and justifies that takeover on the grounds of the managing group's education and exclusive possession of the codified bodies of knowledge and know-how necessary to the efficient running of the organization."
Locke's principal writing on the topic seems to be here, as 'Managerialism and the Demise of the Big Three' (pdf), and the book 'Confronting Managerialism'. The Big Three are the US automobile makers, and their demise is seen as having been brought about by the Japanese management approach. Locke lays the blame at the door of neoclassical economics and business school teaching.

This view of managerialism has strong precursors. Pfeffer and Sutton have pointed out the problems with US MBA programmes: what they teach and how they teach it. Mintzberg's 'Strategy Safari' has an account of the demise of the British motorcycle industry and Honda's success, including this quote from Hopwood:
"In the early 1960s the Chief Executive of a world famous group of management consultants tried hard to convince me that it is ideal that top level management executives should have as little knowledge as possible relative to the product. This great man really believed that this qualification enabled them to deal efficiently with all business matters in a detached and uninhibited way."
This ideal sounds like a job description for a UK generalist civil servant - a breed still not dead 44 years after Fulton. Rory Stewart has written about managerialism in a number of public organizations, e.g. here and here.

A proper 21st Century dystopian view of managerialism as an entity in itself

Bruce Sterling has given a good description of a dystopian future as 'favela chic', a talk beautifully visualised here. The connections between science fiction dystopia and collapsonomics are all too realistic for comfort.

The full horrors of managerialism as embodied in current global corporate capitalism have been captured in contemporary language by Rao, as 'The Gervais Principle', which starts with one of my favourite Hugh MacLeod cartoons. The life cycle diagram seems to map well onto the more traditional life cycle at Adizes.

Rao's Guerilla Guide to Social Business is also available for download and is bang on the money. It includes a wonderful take on KM.

Managerialism and safety management

Managerialism appears to have penetrated safety management. One consequence is a concentration on hazards that are easily managed, at the expense of systemic hazards that require a resilient, learning, sensemaking approach. The sensemaking approach is set out in 'The Learning School' chapter of Mintzberg, or by Weick and Sutcliffe (this link takes you to a great resource on High Reliability Organizations) and in their book.

Managerialism in a safety context has been parodied all too accurately by The Daily Mash here.

In a safety management context, managerialism looks deceptively innocent. The diagrams below look fine at first sight. Everything is organized. That is the problem. Being organized is vital, but it is not enough. Where, on these diagrams, can we find crew input, sensemaking, informal learning, trying things out? Not in 'the system'. This is the most difficult challenge facing the move to resilience.





Wednesday 4 July 2012

Cognitive Anti-Patterns 1

Something went wrong with the previous post in Blogger, so a slightly revised version is published here.
Anti-patterns are discussed here and here. Jim Coplien states: "an anti-pattern is something that looks like a good idea, but which backfires badly when applied".  SEI has published a document (pdf) with system archetypes - using archetypes to beat the odds. These archetypes (which have origins in ITIL) are very similar to anti-patterns. They are also nicely set out here.

The early recognition and countering of anti-patterns is an extremely valuable skill that is rarely taught, and is probably not very hard to acquire. I suspect the reason it is not often taught is that it treats what might pretentiously be called knowledge work as a craft or skill; yet this sort of diagnostic skill is at the heart of expertise. It all appears very negative, unfortunately, but that is the case with all risk management. Are there corresponding opportunities to complement these risks? Possibly, but they are not the subject of this post. The list looks like it has a real down on automation. This in no way puts it near Marcuse's 'One Dimensional Man' or the Unabomber Manifesto; it is just a reflection of the prevalence of technology-push in our current society.

This post does not (yet) have well formulated anti-patterns, just some starting points and first drafts.

Starting points

Gary Klein offered three great 'unintelligent system anti-patterns' in this document (pdf).
  • The Man behind the Curtain (from the Wizard of Oz). Information technology usually doesn’t let people see how it reasons; it’s not understandable. The alternative is to design a 'human window' (Donald Michie).
  • Hide-and-Seek. On the belief that decision-aids must transform data into information and information into knowledge, data are actually hidden from the decision maker. The negative consequence of this antipattern is that decision makers can’t use their expertise.
  • The Mind Is a Muscle. In the attempt to acknowledge human factors in the procurement process, some guidelines end up actually working against human-centering considerations: “Design efforts shall minimize or eliminate system characteristics that require excessive cognitive, physical, or sensory skills.”
Sue E. Berryman's work on the Cognitive Apprenticeship Model identified Five Assumptions About Learning - All Wrong:
1. That people predictably transfer learning from one situation to another.
2. That learners are passive receivers of wisdom - vessels into which knowledge is poured.
3. That learning is the strengthening of bonds between stimuli and correct responses.
4. That learners are blank slates on which knowledge is inscribed.
5. That skills and knowledge, to be transferable to new situations, should be acquired independent of their contexts of use.

First drafts

People are just a source of error that needs to be minimised. The alternative is to recognize that people (also) 'make safety'.
Accidents are usually the result of human error. The alternative is to see human error as an outcome (rather than a 'cause'), a sign that something is wrong with the system (Sidney Dekker).
Safe systems are usually safe. The alternative is that safe systems usually run broken.
Cycle of error (Cook and Woods). After an incident, 'things need tightening up, lessons must be learned'. Organizational reactions to failure focus on human error. The reactions to failure are: blame & train, sanctions, new regulations, rules, and technology. These interventions increase complexity and introduce new forms of failure.
Providing feedback on operational performance can be bad for morale and is best not done.

Rationality/logic/MEU is the benchmark for human decision making. The alternative is "reasoning is not about truth but about convincing others when trust alone is not enough. Doing so may seem irrational, but it is in fact social intelligence at its best." Gerd Gigerenzer, or "Man is not a rational animal, he is a rationalizing animal".  Robert A Heinlein.
People are information processors, like computers. The alternative is to recognise the role of narrative, metaphor, etc.
Cognitive biases are useful aids to people making decisions.
Human cognition is a higher mental function, and the lizard brain and emotions should not be involved.
People without emotional influence make better decisions.
Bull (Norman Dixon). Being clean and tidy is vital, whether it is polished brass, dress codes or tidy desks.

The important aspect of human decision making is the 'moment of choice'. Design and operational aspects should be focused on this. The alternatives include a narrative approach (Rao).

Automate what you can and leave the operator to do the rest (job design by left-overs). Supervisory control is a good model for job design. The human-centred automation alternative is to design a human-machine team to avoid cogminutia fragmentosa.
Automation reduces workload.
Automation improves performance.
Automation reduces staffing requirements.
Strong, silent automation is good (Dave Woods).
There are no UNK-UNK (unknown-unknown) failure modes (Tom Sheridan), so we do not need to design or plan for them.

Regulations, rules and procedures will work as intended without reactive or cumulative effects. Technology improves safety. The alternative is to consider the reactive effects of their introduction, including risk compensation, and to remember that the people at the sharp end make continuing judgments balancing risk, profitability and workload (ETTO, the efficiency-thoroughness trade-off).

People will obey the rules in potentially high hazard systems just because they are there.

Procedures can be expected to cover all circumstances. Risk management can be comprehensive. Things will go according to the plan, so it is worth having a really detailed plan, and not investing in preparedness. The alternative is "In preparing for battle I have always found that plans are useless, but planning is indispensable" - Dwight D. Eisenhower.

Providing unnecessary data 'just in case', whether it is a fourteen page checklist, a handful of alarm channels, or an overfilled tactical display. Planned information overload has adverse consequences (see operator error).
Chartjunk is a good basis for display design. The flows through a system (the 'big picture') can be presented as disjointed bullet points (Tufte).

Work can be divided by procurement or organizational boundaries, leading to stovepipe sub-systems, and the crew doing the 'integration work'.

Training can fix design problems.

Tuesday 3 July 2012

Standards and hard resilience

Vinay Gupta has made the distinction between hard resilience (e.g. food, water, power, communications) and social resilience, e.g. here.
The emphasis in writing about resilience is on flexibility and adaptability, whether it is about communities or resilience engineering, and standardisation does not seem to have much prominence. Yet standardisation has always supported flexibility. Jim Ware has identified the importance of standardised touchdown zones for nomadic workers within an enterprise as an element in corporate agility. This type of work standardisation goes back to medieval monasteries, as documented by Jean Gimpel.

However, resilience goes beyond agility/flexibility. Having the right connectors for portable diesel generators may be a requirement that goes beyond day-to-day flexibility but which proves invaluable in an emergency. Dealing with insurance claims after a fire on a big container ship might benefit from standards aimed at resilience. An inspiring story on these lines comes from post-earthquake Japan, entitled 'Beat the bureaucracy and overcome the disaster'. A large pdf on their experience can be downloaded from the site (or here).

The company went to great lengths over four years to understand its real business, simplify processes, and remove silos. The benefits following the Great East Japan Earthquake included the ability to send 1,600 employees to the Tohoku area as temporary staff, to deploy 800 new terminals in a week, and eventually 1,800 terminals by May 13. They completed 87% of 160,000 claims by the end of May and 97.3% of 173,000 claims by the middle of September. They had an earthquake damage contact centre with 110 terminals set up by the day after the quake. These accomplishments could not have been achieved without a standard plug-and-play virtual desktop, paperless processes and cashless payment systems.

The contrast between resilience and control is made in one of their moonshots, 'Create a democracy of information': "People at the front lines should be at least as well informed as those in the executive suite."

"Most organizations control information in order to control people. Yet, increasingly, value is created where first-level employees meet customers — and the most value is created when those people have the information and the permission to do the right thing for customers at the right moment. Information transparency doesn’t just produce happy employees and happy customers, it’s a key ingredient in building resilience. Adaptability suffers when employees lack the freedom to act quickly and the data to act intelligently. The costs of information hoarding are quickly becoming untenable. Companies must build holographic information systems that give every employee a 3-D view of critical performance metrics and key priorities."

Providing assurance of safe and effective operation of unmanned platforms

The Association for Unmanned Vehicle Systems International (AUVSI)  has published an industry code of conduct for unmanned aircraft system operations. It is based on safety, professionalism, and respect. This is a significant document, given the wave of safety-related so-called unmanned systems coming our way. It is short (a good thing) with a reasonable set of principles.

A good many domains start with a set of principles as the basis for assurance or regulation. The hard bit comes with working out the detail. I am sure that the AUVSI does not propose to re-invent the safety management wheel, but a number of schemes that have been expensive to implement do not seem to be getting much of a good press these days, so it would pay the AUVSI to consider the detailed workings before committing too heavily to a particular form of implementation.

Links with other groups, such as the robotics community and its events, might make sense.

From my own point of view, it would be a delight to see the unmanned platform community adopt process standards as a form of leadership (process ownership drives process improvement) and assurance. Relevant standards include ISO/IEC 15288, ISO TS 18152 and ISO/IEC 15504 Part 10.

Monday 2 July 2012

Internet of Things - Glass Half-Full

A number of roughly concurrent Internet of Things developments sparked some thoughts.

The European Data Protection Supervisor (EDPS) said (pdf) that, while smart meters were potentially useful for controlling energy use, they will also "enable massive collection of personal data which can track what members of a household do within the privacy of their own homes". Good to see, but is it too little, too late to prevent (a) abuse or (b) a backlash? Will the utilities become as popular as bankers? There isn't much of a gap now, I suspect.

A Pew report (pdf) on the future of smart homes includes this gem of realism:
"Most of the comments shared by survey participants were assertions that the Home of the Future will continue to be mostly a marketing mirage. The written responses were mostly negative and did not mirror the evenly split verdict when respondents made their scenario selection. Because the written elaborations are the meat of this research report and the vast majority of them poked holes in the ideal of smart systems being well-implemented by individuals in most connected homes by 2020, this report reflects the naysayers’ sense that there are difficult obstacles that are not likely to be overcome over the next few years."

You may have missed this website devoted to internet fridges. (Shame virtual fridge never took off - Alan Dix would have been much better than Mark Zuckerberg as the social media czar).

Samsung has launched a smartphone health app. A huge market for this sort of thing is developing. Next steps presumably include connecting to things (perhaps using the work at Glasgow University) and possibly some data-mining of healthcare providers (whathaveyougotonme.com or somesuch). Such a path would provide a market-based 'empowered patient' model, with a user-centred approach becoming a business survival requirement. A user-led mashup tool such as sen.se is likely to figure large.

The People Centred Design Group has distilled its work into a set of recommendations for the Internet of Things SIG. Still quite thing-centred, e.g. "As the thing passes through its lifecycle, define the end users’ experience...", and still no mention of HCD standards.

The IoT showcase presentations illustrate the glass half-full situation. I guess that is where we are just now.