Thursday 19 July 2012

Is 'autonomy' a helpful aim for 'unmanned' platforms?

Building 'autonomous' platforms sounds exciting as an engineering challenge. This post suggests that the concept may be sufficiently flawed that it takes well-intentioned technical effort down a blind alley, and that a somewhat less-exciting conceptual framework may end up supporting greater technical advance.

Chapter 3 of Vincenti's classic book "What Engineers Know and How They Know It" covers flying-quality specifications. The concepts of stability and control had to be re-thought: stability had been seen as a property of the aircraft on its own, and this concept had to change before the pilot could be given adequate control. Flying qualities emerged as a property of the aircraft-pilot system. I would give a better description, but someone hasn't returned my copy of the book. Changing the underlying concepts took a good decade of experimentation and pilot-designer interaction. My concern is that 'autonomy' as currently defined will hold back progress in the way that 'stability' did in the 1920s. The 2010 version of CAP722 (Unmanned Aircraft System Operations in UK Airspace – Guidance) is used here as the reference for current thinking on 'autonomy'.

The advantage 'autonomy' has over stability in the 1920s is that there is a good body of work on human-automation interaction, supervisory control and related topics, going back sixty years, that can be drawn upon. This well-established work can elaborate human-automation interaction well beyond a simple 'autonomous' label, or a 'semi-autonomous' one (reminiscent of 'slightly pregnant' as a concept). For example:
  • Tom Sheridan defined five generic supervisory functions: planning, teaching (or programming the computer), monitoring, intervening and learning. These functions operate within three nested control loops.
  • The Bonner-Taylor PACT framework (Pilot Authorisation and Control of Tasks) can be used to describe operation in various modes (a sketch in this spirit follows the list).
  • Work by John Reising, Terry Emerson and others developed design principles and approaches to human-electronic teamwork, drawing, inter alia, on Asimov's Laws of Robotics.
  • Recent work by Anderson et al. develops a constraint-based approach to semi-autonomous UGV control.
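
To show how far beyond a single label such frameworks can go, here is a minimal sketch of operation described by authority level. The level names and the decision rule are illustrative assumptions, loosely in the spirit of PACT, not the published framework:

```python
from enum import IntEnum

class AuthorityLevel(IntEnum):
    """Illustrative authority levels, loosely in the spirit of PACT.

    These names are placeholders, not the published taxonomy.
    """
    ADVISORY = 0    # system advises; the human decides and acts
    CONSENT = 1     # system proposes; it acts only with human consent
    VETO = 2        # system acts unless the human vetoes in time
    DELEGATED = 3   # system acts within pre-agreed task bounds
    AUTONOMOUS = 4  # system acts without human oversight

def may_execute(level: AuthorityLevel, consented: bool = False, vetoed: bool = False) -> bool:
    """Return True if the automation may execute a task at this authority level."""
    if level == AuthorityLevel.ADVISORY:
        return False          # the human always executes
    if level == AuthorityLevel.CONSENT:
        return consented      # explicit approval required
    if level == AuthorityLevel.VETO:
        return not vetoed     # silence is consent, within a time limit
    return True               # DELEGATED and AUTONOMOUS act on their own

# The same task is permitted or blocked purely by the agreed authority level.
assert may_execute(AuthorityLevel.CONSENT, consented=True)
assert not may_execute(AuthorityLevel.ADVISORY)
```

The point of even so small a model is that 'how autonomous?' becomes a question with a discrete, inspectable answer per task, rather than a label applied to the whole platform.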
CAP722 (3.6.1) requires an overseeing autonomous management system. This has echoes of the 'executive' function at the heart of the Pilot's Associate programme. My recollection is that the name and function of that executive kept changing, and that it perhaps proved too difficult to implement. A more feasible solution would be a number of agents assisting the human operator, as sketched below. It is not obvious why CAA guidance precludes such an option.
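
By way of contrast, here is a minimal sketch of that agent-based alternative, assuming several narrow assistant agents advising a single human operator, with no overseeing executive deciding among them. The agent names, state fields and thresholds are invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class Advisory:
    source: str   # which agent raised it
    message: str  # what the operator should know

class AssistantAgent:
    """A narrow agent that watches one aspect of the mission and advises."""

    def __init__(self, name: str, check: Callable[[Dict], Optional[str]]):
        self.name = name
        self.check = check

    def advise(self, state: Dict) -> Optional[Advisory]:
        message = self.check(state)
        return Advisory(self.name, message) if message else None

def operator_briefing(agents: List[AssistantAgent], state: Dict) -> List[Advisory]:
    """Pool the agents' advice; the human operator, not an executive, decides."""
    return [adv for adv in (agent.advise(state) for agent in agents) if adv]

# Hypothetical agents; the thresholds are placeholders, not real limits.
fuel_agent = AssistantAgent("fuel", lambda s: "fuel low" if s["fuel"] < 0.2 else None)
link_agent = AssistantAgent("datalink", lambda s: "link degraded" if s["link"] < 0.5 else None)

print(operator_briefing([fuel_agent, link_agent], {"fuel": 0.15, "link": 0.9}))
```

Each agent stays simple and auditable, and adding or removing one does not require redesigning a central executive.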

CAP722 (3.5.1) states: 'The autonomy concept encompasses systems ranging in capability from those that can operate without human control or direct oversight (“fully autonomous”), through “semi-autonomous” systems that are subordinate to a certain level of human authority, to systems that simply provide timely advice and leave the human to make all the decisions and execute the appropriate actions'. 'Full autonomy' is thus a self-defining no-control zone (Grote). As a prerequisite for such a zone, the transfer of responsibility from the operator at the sharp end to the relevant authority (e.g. the Design Authority, the Type Certificate Holder, the IPT Leader) needs to be clearly signalled to all concerned. The 2010 version of CAP722 seems to leave responsibility at the sharp end, with the inevitable accusations of 'operator error' when things go wrong.
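
As a sketch of what 'clearly signalled' might mean in practice, the hand-over into a fully autonomous mode could be an explicit, logged transfer of responsibility rather than a silent mode change. The parties and the mode name are taken from the paragraph above; the logging scheme itself is my assumption:

```python
from datetime import datetime, timezone
from typing import List

class ResponsibilityLog:
    """Records who holds responsibility as the control mode changes.

    The point: entering 'fully autonomous' operation should be an
    explicit, auditable hand-over, never a silent default.
    """

    def __init__(self, initial_holder: str):
        self.holder = initial_holder
        self.entries: List[dict] = []

    def transfer(self, new_holder: str, mode: str, signalled_to: List[str]) -> None:
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "from": self.holder,
            "to": new_holder,
            "mode": mode,
            "signalled_to": signalled_to,  # everyone who must be told
        })
        self.holder = new_holder

log = ResponsibilityLog("operator")
# Entering full autonomy: responsibility moves off the sharp end, and the
# transfer is signalled to all concerned (the parties here are illustrative).
log.transfer("Design Authority", "fully autonomous",
             ["operator", "Type Certificate Holder", "IPT Leader"])
```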

I leave the last words to Adm. Rickover:
"Responsibility is a unique concept: It can only reside and inhere in a single individual.  You may share it with others but your portion is not diminished.  You may delegate it but it is still with you.  Even if you do not recognise it or admit its presence, you cannot escape it.  If responsibility is rightfully yours, no evasion, or ignorance, or passing the blame can shift the burden to someone else.  Unless you can point your finger at the man responsible when something goes wrong then you never had anyone really responsible."

Update: Project ORCHID seems to have the right approach, talking about the degree of autonomy required for tasks and providing digital assistants. Note also its agent-based approach.

Update: It's nice to be ahead of Dangerroom. "The Pentagon doesn't trust its own robots". Problems of autonomy!
