Friday, 11 December 2015

Human-Machine Teaming - meeting the Centaur challenge

At the centre of the US DoD Third Offset is Human-Machine Teaming (HMT), with five building blocks:
  1. Machine Learning
  2. Autonomy / AI
  3. Human-Machine Collaboration
  4. Assisted human operations
  5. Autonomous weapons.
The analogy with Centaur Chess is a powerful one, potentially making the best use of both people (H) and machines (M). However, this approach is not easy to implement. This post is a quick look at some issues of design and development for HMT. Other aspects of HMT will be addressed in subsequent posts (hopefully).

1. Human-Centred Automation

The problems of SAGE, one of the first automated systems, were well documented in 1963. Most automated systems built now still have the same problems. "H" managed to get the UK MoD to use the phrase "so-called unmanned systems" to reflect their reality. There are people working on autonomous systems who really believe there will be no human involvement or oversight. These people will, of course, build systems that don't work. In summary, the state of the art is not good: an engineering-led technical focus leads to "human error".
The principles of human-centred automation were set out by Billings in 1991:
  • To command effectively, the human operator must be involved.
  • To be involved, the human operator must be informed.
  • The human operator must be able to monitor automated systems.
  • Automation systems must be predictable.
  • The automated system must also be able to monitor the human operator.
  • Each of the elements of the system must have knowledge of the other’s intent. 
We know a great deal about the human aspects of automation; the problem is getting this knowledge applied. Likewise, there is a considerable literature on the technical aspects of HMT, including work on the Pilot's Associate / Electronic Crewmember, and the challenge there too is getting that expertise used.

2. Human-System Integration process

Human-System Integration (HSI) is more talked about than done. For HMT, HSI has to be pretty central to design, development, and operation. This will require enlightened engineers, programmers, risk managers, etc. There are standards and processes for HSI (e.g. the Incremental Commitment Model), though these do not address HMT-specific matters.

The state of Cognitive Systems Engineering (CSE) is lamentable. I can take some share of the blame here, having dropped my topics of interest in the AI winter (the day job got in the way). Nearly all of it is academic as opposed to practical. Some of the more visible approaches have very little cognition, minimal systems thinking, and no connection with engineering. Gary Klein's work is probably the best place to find practical resources (starting with Decision Centred Design).
MANPRINT: the integration of people and machines may go very deep, requiring closer coupling of Human Factors Engineering and Human Resources (selection, training, career structures, etc.) than has been the case to date. Not easy at scale.
Simulation-based design is probably the way to achieve iteration, extending through to wargaming to support operation. Obviously there are issues of fidelity (realism) here, but they should be manageable.

3. Capability, ownership, responsibilities

The industrial capability to deliver HMT is limited, and the small pool of expertise is divided by the AI winter. Caveat emptor will be vital, and specialist capability evaluation tools for HMT don't exist (though HSI capability evaluation tools could be expanded to do the job). 'Saw one, did one, taught one' won't work here unless you want to fail.
The data (big or otherwise), algorithms, heuristics, rules, concepts, folksonomies etc. are core to military operations (and may be sensitive). It would be best if they were owned and managed by a responsible military organisation, rather than a contractor. In a sense, they could be considered an expansion of doctrine.

4. Test, acceptance

If the H and M are to work as a team, then it may well be that the M becomes personally tailored to the individual H. This goes against military procurement in general, and raises questions about how to conduct T&E and acceptance. If the M evolves in harmony with the H, then this raises further difficulties. Not insuperable, but certainly challenging. Probably simpler in the context of the extended military responsibility proposed above.

5. State of the art

We are seeing the return of hype in AI. Sadly, it seems little was learned from the problems of the previous phase, a situation exacerbated by somewhat impractical hype on ethics.
Building responsible systems is still as much craft as engineering; there is a real shortage of good design guidance. HMT has been the province of the lab, and has not been translated into anything resembling mainstream system acquisition. Much to do.
