Wednesday 30 December 2015

Providing assurance of machine decision making

"All models are wrong, but some are useful." - George Box

The aim of Human-Machine Teams (HMT) is to make rapid decisions in changing situations characterised by uncertainty. The aim of much modern automation is to enable machines to make such decisions for use by people or other machines. The process of converting incomplete, uncertain, conflicting, context-sensitive data into an outcome or decision needs to be effective, efficient, and to provide some freedom from risk. It may also need to reflect human values, legislation, social justice and so on. How can the designer or operator of such an automated system provide assurance of the quality of its decision making (potentially to customers, users, regulators, or society at large)? 'Transparency' is part of the answer, but the practical meaning of transparency has still to be worked out.

The philosopher Jürgen Habermas has proposed that action can be considered from a number of viewpoints. To simplify the description given in McCarthy (1984), purposive-rational action comprises instrumental action and strategic action. Strategic action is part-technical, part-social: it refers to the decision-making procedure and sits at the decision-theory level, e.g. the choice between maximin, maximax and similar criteria, and it needs supplementing by values and maxims. It may be that Value Sensitive Design forms a useful supplement to Human-Centred Design to address values.
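
As a toy illustration of what a choice at the decision-theory level looks like (the actions and payoffs below are invented, and the sketch is mine rather than anything from Habermas or McCarthy), here is maximin versus maximax over the same payoff table:

```python
# Toy illustration: one payoff table, two decision criteria.
# Rows are candidate actions, columns are possible states of the world;
# no probabilities are assumed, and the numbers are invented.

payoffs = {
    "hold_position": [4, 3, 2],   # payoff under state A, B, C
    "advance":       [9, 1, 0],
    "withdraw":      [5, 5, 1],
}

def maximin(options):
    """Pessimistic: pick the action whose worst-case payoff is largest."""
    return max(options, key=lambda a: min(options[a]))

def maximax(options):
    """Optimistic: pick the action whose best-case payoff is largest."""
    return max(options, key=lambda a: max(options[a]))

print(maximin(payoffs))  # 'hold_position' - its worst case (2) beats 1 and 0
print(maximax(payoffs))  # 'advance'       - its best case (9) beats 4 and 5
```

The two criteria pick different actions from identical data; deciding which criterion is appropriate is exactly the kind of judgement that needs supplementing by values and maxims.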

The Myth of Rationality

"Like the magician who consults a chicken’s entrails, many organizational decision makers insist that the facts and figures be examined before a policy decision is made, even though the statistics provide unreliable guides as to what is likely to happen in the future. And, as with the magician, they or their magic are not discredited when events prove them wrong. (…) It is for this reason that anthropologists often refer to rationality as the myth of modern society, for, like primitive myth, it provides us with a comprehensive frame of reference, or structure of belief, through which we can negotiate day-to-day experience and make it intelligible."
Gareth Morgan

The Myth of Rationality is discussed, for example, here. The limits of rationality (or perhaps its irrelevance) in military situations should be obvious. If you need a refresher, then try the Star Trek episode 'The Galileo Seven'. The myth of the rational manager is discussed here. This is not to say that vigilant decision making is a bad thing - quite the opposite. As Lee Frank points out, rationality is not the same as being able to rationalise.

The need for explanation / transparency

The need for transparency and/or observability is discussed in a previous post here. There is an interaction between meeting this need and the approach to decision making. AFAIK the types of Machine Learning (ML) currently popular with the majors cannot produce a rationalisation/explanation for decisions/outcomes, which would seem a serious shortcoming for applications such as healthcare. If I am a customer, how can I gain assurance that a system will give the various users the explanations they need?
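
By way of contrast, here is a minimal sketch of a decision that carries its own rationalisation (a hand-rolled linear score with invented feature names, weights and threshold - not a real clinical model or the API of any particular ML library):

```python
# Minimal sketch: a decision that returns an explanation, not a bare score.
# Feature names, weights and the threshold are invented for illustration.

WEIGHTS = {"age": 0.02, "blood_pressure_delta": 0.01, "prior_events": 0.50}
THRESHOLD = 1.5

def decide(case):
    """Return (decision, explanation) rather than a bare label."""
    contributions = {f: WEIGHTS[f] * case[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "refer" if score >= THRESHOLD else "monitor"
    # The explanation is simply the per-feature contributions, largest first.
    explanation = sorted(contributions.items(), key=lambda kv: -kv[1])
    return decision, explanation

decision, why = decide({"age": 62, "blood_pressure_delta": 10, "prior_events": 1})
print(decision, why)
# refer [('age', 1.24), ('prior_events', 0.5), ('blood_pressure_delta', 0.1)]
```

A linear score is trivially explainable; the worry above is that the currently fashionable ML techniques give up exactly this property.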

Approach to decision making

"It is the mark of an educated man to look for precision in each class of things just so far as the nature of the subject admits; it is evidently equally foolish to accept probable reasoning from a mathematician and to demand from a rhetorician scientific proofs." - Aristotle
At some point, someone has to decide how the machine is to convert data to outcomes (what might once have been called an Inference Engine). There is a wide range of choices: the numeric/symbolic split, algorithms, heuristics, statistics, ML, neural nets, rule induction. In some cases, the form of decision making is inherent in the tool used, e.g. a constraint-based planning tool, a forward-chaining production system, a truth maintenance system. There are choices to be made in search (depth-first vs. breadth-first) and in the types of logic or reasoning to be used. There were attempts before the AI winter to match problem type to implementation, but IMHO they didn't finish the job, and worked-up methodologies such as CommonKADS would be a hard sell now. So, what guidance is available to system designers, and what forms of assurance can be offered to customers at design time? Genuine question.
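
For readers who haven't met one, here is a minimal forward-chaining production system (the rules and facts are invented for illustration); note that the trace of fired rules is the kind of artefact that could later support an explanation:

```python
# Minimal forward-chaining production system (toy rules, invented domain).
# Each rule: (name, set of required facts, fact to assert when they all hold).
RULES = [
    ("r1", {"contact_detected", "contact_fast"}, "contact_is_aircraft"),
    ("r2", {"contact_is_aircraft", "no_iff_response"}, "contact_suspect"),
    ("r3", {"contact_suspect"}, "alert_operator"),
]

def forward_chain(initial_facts):
    """Apply rules until no new facts are produced; return final facts and trace."""
    facts = set(initial_facts)
    trace = []
    changed = True
    while changed:
        changed = False
        for name, conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append((name, conclusion))
                changed = True
    return facts, trace

facts, trace = forward_chain({"contact_detected", "contact_fast", "no_iff_response"})
print(trace)
# [('r1', 'contact_is_aircraft'), ('r2', 'contact_suspect'), ('r3', 'alert_operator')]
```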

Sunday 20 December 2015

Human Machine Teaming - Data Quality Management

"A mathematician is a man who is willing to assume anything except responsibility." (Theodore von Karman)

"Rapid, effective decision making under conditions of uncertainty whilst retaining Meaningful Human Control (MHC)" is the sort of mantra associated with Human Machine Teaming (HMT). A purely mathematical approach to risk and uncertainty is unlikely to match the needs of real world operation, as Wall St. has discovered.

So, during the design of a system where the data are potentially incomplete, uncertain, contradictory and so on, how does the designer offer assurance that data quality is being addressed in an appropriate manner? Or are we doomed to systems crafted on the basis of "trust me"?

Not all forms of uncertainty should be treated in the same way; this applies to data fusion, say, and most other tasks. It is my impression that the literature on data quality and information quality is not being used widely in the AI, ML, HMT community just now - I'd be delighted to be corrected on that.

ISO/IEC 25012:2008, "Software Engineering – Software Product Quality Requirements and Evaluation (SQuaRE) – Data Quality Model", categorises data quality attributes into fifteen characteristics seen from two perspectives: inherent and system-dependent. This framework may or may not be appropriate to all applications of HMT, but it makes the point that there is more than just "uncertainty". Richard Y. Wang has proposed that "incorporating quality information explicitly in the development of information systems can be surprisingly useful" in the context of military image recognition.
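
As a sketch of what 'more than just uncertainty' might look like in practice, the snippet below carries a handful of ISO/IEC 25012-style quality characteristics alongside a data item as explicit metadata; the subset of characteristics, the scales and the example values are my own invention rather than a rendering of the standard:

```python
# Sketch: carry data quality metadata alongside the data itself.
# The characteristics are a subset inspired by ISO/IEC 25012; the scales
# and example values are invented for illustration.
from dataclasses import dataclass

@dataclass
class QualityTag:
    accuracy: float       # 0..1, inherent: closeness to the true value
    completeness: float   # 0..1, inherent: proportion of expected fields present
    currentness_s: float  # inherent: age of the observation, in seconds
    credibility: str      # inherent: e.g. a source reliability rating
    availability: float   # system-dependent: fraction of time retrievable

@dataclass
class Observation:
    value: dict
    quality: QualityTag

obs = Observation(
    value={"track_id": 42, "speed_kts": 17.5},
    quality=QualityTag(accuracy=0.8, completeness=1.0, currentness_s=12.0,
                       credibility="B2", availability=0.99),
)

# A downstream consumer can now gate on quality, not just on value.
if obs.quality.accuracy < 0.5 or obs.quality.currentness_s > 60:
    print("degrade to advisory-only use")
```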

HMT takes place in the context of Organisational Information Processing. The good news is that this is quite well-developed for flows within an organisation (less so for dealing with an opposing organisation). The bad news is that Weick is hard work. The key term is equivocality, and I suggest that the HMT community use it as an umbrella term, embracing 'uncertainty' and other such parameters. Media richness theory helps.

"A man's gotta know his limitations" (Clint Eastwood). "So does a robot" (BSJ)

A key driver for data quality management is whether a system (or agent etc.) assumes an open world or a closed one. Closed-world processing has to know the fine details, e.g. how a Google self-driving car interacts with a cyclist on a fixed-wheel bicycle. By contrast, GeckoSystems takes an open-world approach to 'sense and avoid' and doesn't have to know these fine details. It would seem that closed-world processing needs explicit treatment of data quality to avoid brittleness.
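
A minimal sketch of the difference, with invented facts and a crude three-valued answer for the open-world case:

```python
# Sketch: the same query under closed-world and open-world assumptions.
# The facts are invented; 'unknown' stands in for an explicit third truth value.

KNOWN_FACTS = {("cyclist", "will_yield")}   # everything the system has modelled

def closed_world_query(subject, predicate):
    """Closed world: anything not known to be true is treated as false."""
    return (subject, predicate) in KNOWN_FACTS

UNKNOWN = "unknown"

def open_world_query(subject, predicate, known_false=frozenset()):
    """Open world: absence of knowledge is reported as unknown, not false."""
    if (subject, predicate) in KNOWN_FACTS:
        return True
    if (subject, predicate) in known_false:
        return False
    return UNKNOWN

print(closed_world_query("fixed_wheel_cyclist", "will_track_stand"))  # False - silently wrong if he does
print(open_world_query("fixed_wheel_cyclist", "will_track_stand"))    # 'unknown' - prompts cautious behaviour
```

The open-world answer doesn't remove the uncertainty, but it makes the gap in the data explicit, which is where data quality management can get a grip.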

Time flies like an arrow, fruit flies like a banana.

At some point, the parameters acquire meaning, or semantic values. "We won’t be surfing with search engines any more. We’ll be trawling with engines of meaning." (Bruce Sterling). The parameters may be classified on the basis of a folksonomy, or the results of knowledge elicitation. So far as I can see, the Semantic Revolution has a way to run before achieving dependable performance. Roger Schank has been fairly blunt about the present state of the art. Semantic parameters are likely to have contextual sensitivity, which may be hard to characterise.

If a system is to support human decision making, then it may need to provide information well beyond that required analytically for the derivation of a mathematical solution. Accordingly, the system may need to manage data about the quality of its processing. For robotic state estimation, the user may need more than a single best point estimate. Confidence estimates may need to be expressed in operational terms, rather than mathematical ones. Indeed, the HMT may need to reason about uncertainty as much as under uncertainty.
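
As a sketch of 'confidence in operational terms' (the bands, wording and numbers below are invented; a real system would derive them from the task at hand), here is a point estimate plus standard deviation turned into something an operator could act on:

```python
# Sketch: report a state estimate in operational rather than mathematical terms.
# The bands and wording are invented; a real system would derive them from the
# task, e.g. safe separation distances.

def operational_confidence(estimate_m, std_dev_m):
    """Translate a range estimate (metres) into an operator-facing statement."""
    if std_dev_m < 1.0:
        grade = "track is firm - suitable for close manoeuvre"
    elif std_dev_m < 10.0:
        grade = "track is usable - keep a standoff margin"
    else:
        grade = "track is indicative only - do not rely on it"
    # +/- 2 sigma quoted as a rough 95% band, assuming near-Gaussian error.
    return f"estimated range {estimate_m:.0f} m (+/- {2 * std_dev_m:.0f} m, ~95%): {grade}"

print(operational_confidence(842.0, 6.5))
# estimated range 842 m (+/- 13 m, ~95%): track is usable - keep a standoff margin
```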

This post is scrappy and home-brewed. Suggestions for improvement are welcome. If I am anywhere near right, then the state of the art needs advancing quite swiftly. As a customer, I wouldn't know how to gain assurance that the management of data quality would support safe and effective operation, and as a project manager, I wouldn't know how to offer such assurance.

Update: This is nice on unknown unknowns and approaches to uncertainty in data.

Friday 11 December 2015

Human-Machine Teaming - meeting the Centaur challenge

At the centre of the US DoD Third Offset is Human-Machine Teaming (HMT), with five building blocks:
  1. Machine Learning
  2. Autonomy / AI
  3. Human-Machine Collaboration
  4. Assisted human operations
  5. Autonomous weapons.
The analogy with Centaur Chess is a powerful one, and potentially offers the best use of both people (H) and machines (M). However, this approach is not easy to implement. This post is a quick look at some issues of design and development for HMT. Other aspects of HMT will be addressed in subsequent posts (hopefully).

1. Human-Centred Automation

The problems of SAGE, one of the first automated systems, were well documented in 1963. Most automated systems built now still have the same problems. "H" managed to get the UK MoD to use the phrase "So-called unmanned systems" to reflect their reality. There are people working on autonomous systems who really believe there will be no human involvement or oversight. These people will, of course, build systems that don't work. In summary, the state of the art is not good - an engineering-led technical focus leads to "human error".
The principles of human-centred automation were set out by Billings in 1991:
  • To command effectively, the human operator must be involved.
  • To be involved, the human operator must be informed.
  • The human operator must be able to monitor automated systems.
  • Automation systems must be predictable.
  • The automated system must also be able to monitor the human operator.
  • Each of the elements of the system must have knowledge of the other’s intent. 
We know a great deal about the human aspects of automation. The problem is getting this knowledge applied.
There is a considerable literature on the technical aspects of HMT, including work on the Pilot's Associate / Electronic Crewmember. The challenge is getting this expertise used.

2. Human-System Integration process

Human-System Integration (HSI) is more talked-about than done. For HMT, HSI has to be pretty central to design, development, and operation. This will require enlightened engineers, programmers, risk managers etc. There are standards etc. for HSI (e.g. the Incremental Commitment Model), though these do not address HMT-specific matters.

The state of Cognitive Systems Engineering (CSE) is lamentable. I can take some share of the blame here, having dropped my topics of interest in the AI winter (the day job got in the way). Nearly all of it is academic as opposed to practical. Some of the more visible approaches have very little cognition, minimal systems thinking and no connection with engineering. Gary Klein's work is probably the best place to find practical resources (starting with Decision Centred Design).
MANPRINT: the integration of people and machines may go very deep, and may require closer coupling of Human Factors Engineering and Human Resources (selection, training, career structures etc.) than has been the case to date. Not easy at scale.
Simulation-based design is probably the way to achieve iteration through to wargaming to support operation. Obviously there are issues of fidelity (realism) here, but they should be manageable.

3. Capability, ownership, responsibilities

The industrial capability to deliver HMT is limited, and the small pool of expertise is divided by the AI winter. Caveat emptor will be vital, and specialist capability evaluation tools for HMT don't exist (though HSI capability evaluation tools could be expanded to do the job). 'Saw one, did one, taught one' won't work here unless you want to fail.
The data (big or otherwise), algorithms, heuristics, rules, concepts, folksonomies etc. are core to military operations (and may be sensitive). It would be best if they were owned and managed by a responsible military organisation, rather than a contractor. In a sense, they could be considered an expansion of doctrine.

4. Test, acceptance

If the H and M are to work as a team, then it may well be that the M becomes personally tailored to the individual H. This goes against military procurement in general, and raises questions about how to conduct T&E and acceptance. If the M evolves in harmony with the H, then this raises further difficulties. Not insuperable, but certainly challenging. Probably simpler in the context of the extended military responsibility proposed above.

5. State of the art

We are seeing the return of hype in AI. Sadly, it seems little was learned from the problems of the previous phase, exacerbated by somewhat impractical hype on ethics.
It is still as much craft as engineering to build responsible systems; there is a real shortage of good design guidance. HMT has been the province of the lab, and has not been translated into anything resembling mainstream system acquisition. Much to do.

Thursday 3 December 2015

Smart shipping and the human element

Martin Stopford (MS) has written about 'smart' shipping here and here. There is a related article here and videos here and here. He makes a number of important points about seafaring. This article picks up some of those points and responds to them, a) because Martin Stopford's proposal is likely to be influential and b) because it has the potential to be positive for seafarers. This is an opportunity to be grasped with both hands by those concerned with seafarers or the human element.
MS summary: 
Smart shipping would bring about a much greater integration of ship operations with the internet and big data. The Smart Shipping model focuses on the transport performance of the company/fleet as a whole, rather than a collection of individual ships, resulting in wide-reaching improvements in transport productivity, safety, personnel development, and logistics. The need is to spend an appropriate amount of money on how assets are going to be used.



Four problems with the existing business are identified: first, the technology used is old and economies of scale have been taken to an extreme; second, there is a real problem in attracting crew; third, the market has changed so that two-thirds of the cargo is controlled by non-OECD countries; and fourth, the industry has very weak customer relationships.


1. From Gambling to Management

MS summary:
Shipping needs to move from gambling to management. "The problem with the bulk shipping business over the last 20-30 years is it's been a gambling business, not a management business. It's a management solution: you semi-automate ship operations, you semi-automate navigation and you implement door-to-door logistics." The focus here is on optimising the overall management of the business by treating the transport performance of ships as a single production unit, like a BMW car plant. The result is QA systems that really work, not a set of manuals nobody consults.
Good management is quite straightforward, and well summarised in Henry Stewart's Happy Manifesto. His Four Steps to Happiness are also good sense. Henry Mintzberg's brief 'Musings on Management' are as relevant as ever, particularly the change from top-down to inside-out.
The challenge for shipping is whether it is smart enough to embark on becoming a Wirearchy: "a dynamic two-way flow of power and authority, based on knowledge, trust, credibility and a focus on results, enabled by interconnected people and technology".
The 'smart' in smart shipping may signal a move to a knowledge economy and intellectual capital.

2. Customer relations and innovation

MS Summary:
“We need to put these things together and squeeze some more value out the transport chain and put a smile on the customer’s face. The customer should not be the person you are beating to death over the negotiating table.”
"The way you treat your employees is the way they will treat your customers" Richard Branson

Eric von Hippel and his team at MIT have made the case for user-centred innovation, where ideas from users and customers are sought and used. 'Lead' users may not just be a source of innovation; they may be the innovators themselves.
As regards the necessity for such disruption, do remember that if you don't disrupt yourself, someone else will do it for you.

3. Jobs, Job To Be Done (JTBD), careers, incentives

MS summary:
The need is for a business model which allows employees in a shipping company to be effective as a single team, with better and more rewarding career opportunities for young people and greater integration between ship and shore. Tomorrow's shipping is most of all about people. Better use of people as a resource: manage ship and shore personnel into a more productive team with better career opportunities. Break down the ship-shore barrier, create a team spirit and opportunities for a career. Build a whole new culture. Run the fleet as a team, with experienced engineers ashore and junior ones at sea getting responsibility early, with support. We've got 1.7 people onshore and 20 people on the ship; the 20 people on the ship hate the people onshore, and the 1.7 onshore think the guys on the ship are a load of idiots. Is that the way to run a business? Integrate the systems and get it to run better.
Automate & de-skill ship operations & navigation: "It's not about the crew, it's about automation of navigation"
The proposal here is very exciting. It will require technology and jobs to be developed together, probably using Human Centred Design (HCD). Demonstrations and trial facilities will probably be needed if STCW is to advance at any sort of pace.
There will need to be a careful watch on incentives that block progress: "It is difficult to get a man to understand something, when his salary depends on his not understanding it." - Upton Sinclair

4. Getting there - managing change

MS Summary:
We should not assume it will be easy.
The stakeholder gridlock in shipping will make change difficult. The full horrors of the alternative - of not changing - may not have been thought through yet.
The need for change is clear: "The Internet is nothing less than an extinction-level event for the traditional firm as we have known it for the past 100 years. The Internet makes it possible to create totally new forms of economic entities." (Esko Kilpi). Bandwidth limitations have sheltered shipping from the full force of the internet. That protection is ending.
As regards the process of organisational change, it is important to remember Virginia Satir's remark that "no one likes to be should upon". For a few resources on change, see here.

5. The 'Human Error' Reduction Fallacy

Unsurprisingly, Martin Stopford has fallen for the 'human error' reduction fallacy being pushed by the 'toys for boys' autonomous ship/car/toaster crowd. However, he recognises that driverless ships are a very difficult topic which should only be considered when the industry has much more experience and depth.
"Human error is the symptom of system failure, not the cause" Dan Maurinho "In their efforts to compensate for the unreliability of human performance, the designers of automated control systems have unwittingly created opportunities for new error types that can be even more serious than those they were seeking to avoid." James Reason
The 'human error' reduction fallacy is an unhealthy way to approach the design of automatic or autonomous systems. It is also likely to miss any big business opportunities that emerge. Human-Centred Automation based on Billings would be the basis of safe and effective operation.

6. Big data and safety

MS Summary:
Centralise analysis. Moving to shipping and maritime economics, this could turn out to be a tremendously exciting era for Maritime analysts.
It is early days, but it seems that, left to itself, big data tends to become big brother, using the panopticon for micro-management. Platform capitalism seems to like moral buffering upstream of the algorithms, with a moral crumple zone for the folk at the sharp end. This is not what shipping needs. It also looks like it isn't enough on its own. Here is an example from healthcare:
"In the real world, a big factor in patient health are social factors like mental health, social isolation, and transportation issues. Since this data is not typically collected, it is largely ignored by Big Data analytics. By collecting this data in a structured way, it can combine with the clinical data to create a truly complete care plan for the patient."
Centralised analysis may be right for economics, but for management, big data needs to be used to support smart people who can appreciate the context. "Appreciate the situation, don't situate the appreciation".

7. Automation to support a new business model

MS summary:
Smartphone-style apps and standard interfaces. De-skilling. Don't just automate navigation, but also onboard operations and systems.
Using systems that are familiar to people from the 21st century sounds like a good idea. Folk are going to use whatever is easy to use, whether Type Approved or not. Commonality with what comes up the gangway is sensible.
De-skilling by automation has usually been counter-productive. Changing the organisation to give more responsibility to junior people by supporting them with a network of other people plus automation - that is do-able.
Automation and IT need to be seen as oxygen, not lubricating oil. Chris Boorman has a nice piece on the difference between human-centred automation and human-replacement automation. "Automation enables enterprises to automate those core processes not to make cuts, but to free up resource to work on new disruptive projects. Faced with an increasingly complex world of technology - cloud, mobile, big data, internet of things - as well as growing consumer expectations, every business needs to turn to automation or perish... Every industry is going through a period of change as new technologies and new entrants look to disrupt the status quo. Automation is a key enabler for helping enterprises to disrupt their own industries and drive that change. Acquiring new customers, retaining customers, driving business analytics, consolidating enterprises following mergers or driving agility and speed are all critical business imperatives. Automation delivers the efficiency and enables the new way of thinking from your brightest talent to succeed."

8. Setting expectations

As Martin Stopford has recognised, moving to smart shipping is not going to be easy. A detailed passage plan is obviously inappropriate, but some sort of route with easy stages might be welcome.

9. Early actions

Some early actions are obvious:
  • Set the means of achieving scalable learning in place, including trying out some creation spaces.
  • Chris Boorman again: "Automation needs to be ingrained in an organization’s DNA early on and not deployed later as a replacement measure for existing job functions. It should instead be used to allow people and resources to be more focused on driving the business forwards, rather than on just keeping the lights on." The ingraining needs to start now.
  • Rob Miles has proposed levels of enlightenment (later slides in the presentation) as regards integrating safety into business. Smart shipping will need to have some enlightenment, and this will need to include the regulators.
Footnote: I understand the objections to the term 'human element' and sympathise. However, it is the IMO term. If we can use it to convey a Socio-Technical Systems (STS) approach, with a human element and a technical element, then it will do some good. See here for resources on STS.

The gloomy bits: from the CyClaDes EU project on crew-centred design, it has become apparent to me that shipping is far from ready to do crew-centred design - there is a long way to travel for all stakeholders. The 'human error' issue also goes to the heart of matters, from accident investigation through to daily operations. The smart people needed for smart shipping include all sorts of people, e.g. engine experts who fit the wrong rings in the wrong grooves.