"His Majesty made you a major because he believed you would know when
        not to obey his orders." Prince Frederick Karl (cited by Von Moltke)
The killer robot community is debating concepts such as Meaningful Human Control (MHC) and 'appropriate human judgment' with a view to operationalising them in practical use. For the purposes of this post, the various terms are bundled under the abbreviation MHC.
After things have gone wrong, the challenge for incident analysis is to avoid 'hindsight bias'. To learn from an incident, it is necessary to find out why it made sense at the time: "to reconstruct the evolving mindset", to quote Sidney Dekker. There is a long history of the wrong people getting the blame for an incident, usually some poor soul at the 'sharp end' (Woods).
In a world of highly automated systems, the distinction between 'human' and 'machine' becomes blurred. In most systems there are a number of human stakeholders to consider, and a through-life perspective is frequently useful.
In a combat situation, 'control' is an aspiration rather than a continuing reality, and the losers will have lost 'control' before the battle is lost, e.g. once the opponent has got inside their OODA loop. What is a realistic baseline for MHC in combat? We have to be able to determine this without hindsight bias.
How would an investigator determine the presence or absence of MHC in the reconstruction of an incident? It would be virtue signalling of the lowest order to wait until after an incident and then decide how to determine the presence or absence of MHC.
One aspect of such a determination is to de-couple the decision making from its outcomes. The classic paper on this topic is '"Either a medal or a corporal": The effects of success and failure on the evaluation of decision making and decision makers' by Raanan Lipshitz.
There is, of course, a sizeable literature on decision quality, e.g. Keren and de Bruin.
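
As a rough illustration of that de-coupling, here is a minimal Python sketch; the threshold, probabilities, and engagement scenario are invented for illustration and are not drawn from Lipshitz. The same defensible decision is repeated under chance variation: an outcome-based ('medal or corporal') evaluator ends up blaming it a fixed fraction of the time, while a process-based reading of the information available at the time never changes.

    # Minimal sketch: judging a decision by its process vs by its outcome.
    # All numbers here are hypothetical, chosen only to make the point visible.
    import random

    random.seed(1)

    THRESHOLD = 0.7        # doctrine: engage only above this assessed threat level
    assessed_threat = 0.8  # what the operator knew at the time (clears the threshold)

    def process_verdict(assessed, engaged):
        """Judge the decision against the rule and the information available
        at the time - the 'evolving mindset' - rather than the result."""
        return "defensible" if engaged == (assessed >= THRESHOLD) else "questionable"

    blamed = 0
    trials = 10_000
    for _ in range(trials):
        engaged = assessed_threat >= THRESHOLD         # the decision itself never varies
        succeeded = random.random() < assessed_threat  # the outcome does
        if engaged and not succeeded:
            blamed += 1  # the outcome-based ('corporal') verdict on an unchanged decision

    print(f"process-based verdict: {process_verdict(assessed_threat, True)}")
    print(f"outcome-based blame rate for the identical decision: {blamed / trials:.0%}")

Roughly one run in five ends badly here, and an outcome-based evaluator condemns exactly the same decision in every one of those runs; the process-based verdict is the same in all ten thousand.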
    
The game of 'consequences' developed here is intended to provide food for thought, and an aid to discussion of what an investigator would need to know to make a determination of MHC. It comprises short sections of dialogue. The allocation of function to human or machine, and the outcomes, are open to chance variation (a sketch of the mechanic follows below).
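
For readers who want to improvise rounds of their own, here is a minimal Python sketch of the chance mechanic; the function list, agents, and outcome cards are my own illustrative stand-ins, not the actual dialogue from the exercises. Each deal randomly allocates the key functions to human or machine and draws an outcome, and the question for the table is whether MHC can be determined from what is known.

    # Minimal sketch of the 'consequences' mechanic: chance allocates each
    # function to human or machine, and chance picks the outcome.
    # Functions, agents, and outcomes below are illustrative stand-ins.
    import random

    FUNCTIONS = ["detect target", "classify target", "authorise engagement", "abort"]
    AGENTS = ["human", "machine"]
    OUTCOMES = ["mission success", "near miss", "fratricide", "civilian harm"]

    def deal_round(rng):
        """Deal one round: a random allocation of functions and a random outcome."""
        allocation = {f: rng.choice(AGENTS) for f in FUNCTIONS}
        outcome = rng.choice(OUTCOMES)
        return allocation, outcome

    rng = random.Random(42)
    allocation, outcome = deal_round(rng)
    for function, agent in allocation.items():
        print(f"{function:>22}: {agent}")
    print(f"{'outcome':>22}: {outcome}")
    print("Discussion: knowing only this, can you determine whether MHC was present?")

Because the allocation and the outcome are drawn independently, the same allocation can be dealt with a good or a bad outcome, which is precisely what forces the discussion away from outcome-based judgment.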
The information required to determine MHC might help in system specification, including the specifics of a 'human window'. It is not always the case that automation provides such a window, especially in the case of Machine Learning.
So, how do we determine MHC in a combat situation? Try some of the exercises and see how much you would need to know. If the exercises here don't help make a determination, what would?
Please let me know in the comments below, or on Twitter @BrianSJ3.
As an aside, there are proven approaches to take in system development that can provide assurance of decision quality. This is not entirely a new challenge to the world of Human-System Integration. "What assurances are there that weapon systems developed can be operated and maintained by the people who must use them?" [Guidelines for Assessing Whether Human Factors Were Considered in the Weapon Systems Acquisition Process, FPCD-82-5, US GAO, 1981]